PRISMs, Gom Jabbars, and Consciousness
It’s Saturday night. I could be drinking now. I should be drinking now; a friend of mine has been liberated from his wife and larva for the weekend— a greater cause for celebration than he’ll admit publicly— and I should be out there helping him kill brain cells. And yet I have chosen to stay home, so that I may plough through a couple of technical papers by some Yale egghead with the unlikely name of Ezequiel Morsella, and bring his words down from the mountaintop1.
It is not often I choose to be such a wet blanket. (Usually it’s an autonomic reflex.) But this Morsella guy has come up with an intriguing theory about consciousness, and I think he might really be on to something. I noticed a couple of New Scientist-type headlines on an in-press paper that takes a stab at empirically testing this theory, but I gotta give a nod to Nick Nimchuk for pointing me at the original 2005 paper.
Morsella has gone back to basics. Forget art, symphonies, science. Forget the step-by-step learning of complex tasks. Those may be some of the things we use consciousness for now but that doesn’t mean that’s what it evolved for, any more than the cones in our eyes evolved to give kaleidoscope makers something to do. What’s the primitive, bare-bones, nuts-and-bolts thing that consciousness does once we’ve stripped away all the self-aggrandizing bombast?
Morsella’s answer is delightfully mundane: it mediates conflicting motor commands to the skeletal muscles.
It’s a revelation, watching this dude whittle away at the options. Consciousness isn’t just about motor commands: we snatch our hand back from a hot stove, play arpeggios far faster than the conscious mind can keep up, and those are all “voluntary” motor actions.
It’s not just about puzzle-solving, or reconciling conflicting inputs, either: look at all the illusions and mind tricks that depend on the brain performing exactly those kinds of operations unconsciously. Binocular rivalry, inattentional blindness, ventriloquism: incompatible views shown to each eye, buildings appearing and disappearing from our field of view, the mouth moving here but the sound coming from over there: somehow the brain puts all those conflicting inputs together and serves up the final product without any sense of conscious conflict. The speaker mouths “ga”; the sound hitting the observer’s ear is “ba”; but the observer hears “da”, an intermediate sound, without ever being aware of a conflict in need of sophisticated and complex resolution.
Not problem solving, then. Not motor commands. Or rather, not just those things in isolation. But when you snatch your hand from a flame faster than the conscious mind can act, there’s no other agenda to keep your hand where it is, is there? What if there were? Morsella asks. What about the person carrying the scorching plate from kitchen to dining room? Part of him says drop that fucker, it’s burning my fingers, but some other part is saying No, you have guests to feed, you don’t want to clean up the mess, suffer just a little longer and everything will be okay. Or let’s move all the way up to life and death: what about the person trapped beneath the ice, one part burning with the need to inhale, another terrified of drowning should that actually happen?
Not just conflict, or problems to be solved. Not just motor commands to carry out. Rather, conflicting motor commands: competing agendas, both involving voluntary motion. That’s when you wake up.
Morsella sees us as a series of systems, each with its own agenda: feeding, predator avoidance, injury prevention, and so on. Mostly these systems operate on their own, independently. We can’t voluntarily dilate our eyes, for example. We can’t consciously control our digestive processes, nor are we even generally aware of them — peristalsis, like the pupil reflex, is the purview of the smooth muscles (and no, gas production by gut bacteria is not the same thing). But when digestion is finished — when the rectum is full, and you’re ready to take the mother of all dumps, but you’re on the in-laws’ good living-room carpet and your incontinent uncle is hogging the toilet — then, sure as shit, you become conscious of the process. There’s a sphincter under voluntary control that’s just urging you to let go. There are other agendas suggesting that that would be a really bad idea. And I would challenge anyone who has ever been in that position to tell me that that situation is not one in which conscious awareness of one’s predicament is, to put it mildly, heightened.
So most of our activities — somatic and cognitive — operate under the purview of these various systems, and as long as they don’t come into conflict we’re not aware of them. But when there is conflict — when SubSystem A tells the body to do this and SubSystem B says no, do that — then we’ve got a problem. Then, the competing agendas enter the arena to do battle. Consciousness, according to Morsella, is a forum for crosstalk between different systems, the only forum in which these systems can communicate when in conflict. He describes it as “a senate, in which representatives from different provinces are always in attendance, regardless of whether they should sit quietly or debate.”2 I myself prefer the Thunderdome For Subroutines metaphor, in which competing agendas duke it out for dominance. The urge to inhale, the fear of drowning. The need to defecate, the price of carpet cleaner. Two plans enter: one plan leaves, and runs down the motor nerves, and is put into action.
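If you like your Thunderdomes in code, the arbitration rule can be caricatured in a few lines. This is strictly a toy sketch of the idea — the subsystem names, the urgency numbers, and the winner-take-all rule are my invention, not anything from Morsella’s paper:

```python
# Toy sketch of conflict-mediated "awareness" (all names and numbers invented).

def act(proposals):
    """Each subsystem proposes an (action, urgency) pair.

    If every proposal agrees, the action runs with no mediation — the
    lights stay off. If the proposals conflict, an arbitration step
    (the 'conscious arena') engages and the most urgent agenda wins.
    Returns (chosen_action, mediation_engaged).
    """
    actions = {action for action, _ in proposals}
    if len(actions) == 1:
        return actions.pop(), False  # no conflict: no awareness needed
    # Thunderdome For Subroutines: highest urgency leaves the arena.
    winner = max(proposals, key=lambda p: p[1])[0]
    return winner, True

# The scorching plate: pain says drop it, etiquette says keep walking.
action, aware = act([("drop_plate", 0.7), ("keep_carrying", 0.9)])
```

The point of the caricature is only that the “consciousness” flag flips on nowhere except the conflict branch; everything unanimous runs in the dark.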
Morsella calls it PRISM: the Principle of Parallel Responses Into Skeletal Muscle. He claims the acronym works conceptually, “for just as a prism can combine different colors to yield a single hue, phenomenal states cull simultaneously activated response tendencies to yield a single, adaptive skeletomotor action.” Yeah, right. I bet the dude spent as long playing with Scrabble tiles to come up with a cool-sounding name as he did writing the actual paper, but we’ll let that slide.
I find PRISM appealing on a number of fronts: it resonates both with the latest MRI findings and with ancient insights from the 19th century (“Thinking is for doing”, Morsella quotes from 1890). It explains Blindsight, Alien Hand Syndrome, any number of Sacksian neurological disorders as a loss of integration, the result of inadequate crosstalk between systems. It makes me look at mirror neurons in a whole new way. It’s a testable hypothesis, falsifiable, predictive.
And most importantly, for me — speaking as someone who built a book predicated on the subject — it recognizes its limits. It presents the conscious arena as a necessary place for deliberation, but it doesn’t even try to explain why Thunderdome should have an audience (or rather, why Thunderdome should be its own audience). These are the things that happen in the conscious arena, and only here, as far as we know— but Morsella admits explicitly that it’s easy to imagine another system that does the same thing, more efficiently, without conscious involvement. He leaves the door open for scramblers.
“…this does not mean that current models of nervous activity or other contraptions are incapable of achieving what phenomenal states achieve; it means only that, in the course of human evolution, these physical events happened to be what were selected to solve certain computational challenges … while intersystem integration could conceivably occur without something like phenomenal states (as in an automaton or in an elegant “blackboard” neural network with all of its modules nicely interconnected), such a solution was not selected in our evolutionary history.”
Even cooler, he goes on to postulate a whole new system, something that is facultatively conscious, and which I can pretty much guarantee is gonna show up in Dumbspeech if that book ever gets a publisher:
“Although one could easily imagine more efficient arrangements that invoke phenomenal states only under conditions of conflict, chronic engagement happens to be a rather parsimonious and, in some sense, efficient evolutionary solution to the problem of intersystem interaction. Just as traffic lights, pool filters, and ball-return machines at bowling alleys operate and expend energy continuously (regardless of whether their function is presently needed), chronic engagement is “efficiently inefficient” in the sense that it does not require additional mechanisms to determine whether channels of cross-talk should be open or closed.”
And further:
“…One could imagine a conscious nervous system that operates as humans do but does not suffer any internal strife. In such a system, knowledge guiding skeletomotor action would be isomorphic to, and never at odds with, the nature of the phenomenal state — running across the hot desert sand in order to reach water would actually feel good, because performing the action is deemed adaptive. Why our nervous system does not operate with such harmony is perhaps a question that only evolutionary biology can answer. Certainly one can imagine such integration occurring without anything like phenomenal states, but from the present standpoint, this reflects more one’s powers of imagination than what has occurred in the course of evolutionary history.”
This guy is pointing out the way to whole new forms of consciousness, utterly alien and yet completely plausible. This would be a terrific and uplifting point upon which to end; but because “Peter Watts” and “uplifting” are two terms that do not belong in the same sentence, I think I’ll wind this up on a more somber note. Once again, Ezequiel Morsella:
“It is reasonable to assume that, early in development, skeletomotor behavior openly reflects the (unchecked and unsuppressed) tendencies of the response systems. There is no question that an infant or toddler would immediately drop a plate that was a bit too hot. But as development unfolds, behavior begins to reflect the collective development of the quasi-independent learning histories of the response systems.”
What he is saying here is, the more cognitively sophisticated you become, the more able you are to suppress hardwired aversion responses in favor of long-term agendas. The more sublime your awareness, the more pain you can withstand. And is anyone here not thinking of the Bene Gesserit and their gom jabbar, from Dune? Herbert, once again, had it right. His simple device reduced Voight-Kampff to its essence: testing for humanity through torture.
If Morsella is right, consciousness scales with conflict: the greater the discord between systems, the higher the level of awareness. You are never more alive, more awake, more conscious, than when in excruciating conflict with yourself. If self-awareness is the hallmark of humanity, then Sophie’s Choice may be its most mind-expanding exemplar.
Abu Ghraib was not just a torture chamber. It was a transcendence machine.
Postscript, 11/10/09: Pointing out an obvious religious angle that I missed completely, Caitlin Sweet asks rhetorically, “Why do you think monks whipped themselves until they bled? Transcendence, baby.”
1 Actually, I could have done this this afternoon, but I was drinking at lunch with a couple of other folks, which led to the teensiest bit of drowsiness afterwards and an unexpected coma that only ended when Banana the Cat— outraged by the sight of a food bowl that remained empty a full three minutes past his usual dinnertime— hooked me through the nose with his claw.
2This neatly explains why we are consciously aware of things like hunger even when they are not in obvious conflict with other agendas— although one could also argue that as a prey species, hunger always involves a conflict insofar as going out to forage is to put yourself at risk from predators.
Man, some of your entries here are downright brilliant. You should collect them in a book.
You are never more alive, more awake, more conscious, than when in excruciating conflict with yourself. Ha, the Shadows from Babylon 5 were right.
even more, it suggests the origin of consciousness may require conditions of limited resources, suboptimal fitness, etc, where those difficult strategic decisions need to be carefully balanced. A dystopic pre-requisite for noogenesis that is likely to include somewhat antisocial tendencies…
“The more sublime your awareness, the more pain you can withstand. And is anyone here not thinking of the Bene Gesserit and their gom jabbar, from Dune?”
It also brings to mind the Saw movies.
“This guy is pointing out the way to whole new forms of consciousness, utterly alien and yet completely plausible. This would be a terrific and uplifting point upon which to end…”
As someone with a vested interest in humanity’s form of consciousness, I wouldn’t mind some evidence that there’s a good reason to be driven by conflict. If I dislocate my shoulder, I’d like to believe that the pain in re-setting it is more than just a failure in evolution. Also, similar to the efficiency of the non-conscious scramblers, the idea of a civilization of aliens that never reverts to their baser instincts is simultaneously inspiring and terrifying.
“Abu Ghraib was not just a torture chamber. It was a transcendence machine.”
Keeping in mind, that would only apply in the cases where the tortured person was actively holding anything back. Using the Dune example, locking someone’s hand in the nerve induction box just for the hell of it wouldn’t actually prove anything about their humanity.
@Nick: That’s true. I was thinking more of the hooded guy standing on the box with the electrodes on his hands, believing that if he fell off the box he’d be electrocuted. Definite conflict there, after a few hours of trying to maintain one’s balance…
@Peter: Of course, that was the image that came up in my mind when I read “Abu Ghraib”. I’m not sure if it means my consciousness is over-evolved or under-evolved that I failed to make the direct connection from there.
For additional fun with this model, consider the implications of schizophrenia.
Using another Frank Herbert reference, I’m thinking of the Bureau of Sabotage, the purpose of which is to slow down excessively “efficient” governance – that make a good parallel with the cognitive purpose of stalling conflict. Herbert did not say whether the agents of “BuSab” had pointy hair.
Looks like someone figured out how to apply Subsumption Architecture in robotics (Rodney Brooks et al., 1986) to … consciousness http://en.wikipedia.org/wiki/Subsumption_architecture
This sounded plausible for a second to me but… After a second I realized that a person is only aware of conflicting impulses … if one of them is conscious intent to begin with!
If your conscious intent was focused on something like talking to guests or escaping terrorist home invaders, your consciousness wouldn’t be on conflicting motor impulses. Moreover, there are “smaller” motor impulse conflicts that don’t trigger conscious awareness any more than other small details.
I think the concept of consciousness being a highly refined framework of reciprocating response systems to be… ancient. It is simple but limited in current form, it does not account for the reciprocal aspect of the local, the material.
You might find Dewey B. Larson’s work stimulating, to expand upon some of these thoughts. Check this out: Reciprocal System of Theory. Googling that will also yield more results.
Ah, illumination from a half-forgotten tidbit from psychology: Soldiers with better education are more resistant to battle fatigue (or whatever they’re calling it these days) than less educated soldiers.
I always assumed it was some sort of perspective thing, like they could see the long term goals or better imagine the day they could ship out and go home or something like that. Hard to imagine my dog foregoing ANYTHING for greater benefit later.
Maybe that’s your evolutionary value of the consciousness right there. And maybe that’s an under-hyped value of education: years of forcing one’s self to shelve plans to fight and fuck one’s way through adolescence in favor of peering at arcane scribbles conditions us to contribute more to our clade. Much to think about for the nanowrimo endeavor.
Not having read the article (and lacking any formal education in biology to do it), I have to say this “conflict” thing sounds like simple decision-making, and “skeletal muscles” is just another word for “doing things that influence the external world”. But maybe I’m not getting it right.
It does raise an interesting SF concept, of an alien which is only conscious when he is in conflict. Imagine a being that would slip into non-consciousness if he’s doing anything that doesn’t require thinking, such as walking, working in an assembly line, etc.
I think it is an interesting point in many ways, but how does it relate exactly to self awareness or reflection – the daily experience of consciousness? The “voice” in your head that you identify as conscious thought?
What are the conflicting impulses involved in reading and understanding this article, for example?
is the point that consciousness emerged to mediate these conflicting agendas, but it hangs around even when mediation is not necessary and therefore we have literature, art, videogames, sports and the vast majority of civilization as a result?
I don’t buy this from a comms / cybernetics perspective. For networks in general most bandwidth is allocated to 1) data in transit and 2) signalling. Conflict resolution is a scant 3rd, and it generally doesn’t introduce any new infrastructure (In other words conflict resolution is usually just done via more of (1) and (2) and never looks like anything special).
There’s something tautological about Morsella’s result. Would have liked it if he took a range of networks and showed how they APPEARED to us to be trending towards consciousness / sentience without anything being fundamentally different in the underlying network (except maybe its topology – eg more hubs and greater overall density?).
What my (layman’s) ears are hearing is something like, “Rogers cares about your human relationships”… when we all know that this telco doesn’t give a toss; all that really happens is that each time I want to reach a friend my cellphone cries ‘mommy’ and a neighbourhood access point responds. There’s some low-level conflict resolution (albeit of a few different kinds) but it’s a pretty big jump to say that this breeds consciousness.
I’ll admit it’s a tough topic to keep free of big jumps…
Another interesting point to consider is the unconscious override.
To take the crapping your pants example to the next step. Chances are no matter how bad you have to go, even if you took mineral oil laced exlax just before, you are simply going to be incapable of crapping your pants. Decades of potty-training and social intimidation have been that effective: for most adults, even if they tried they would not be able to actually crap their pants – especially in public.
So, no matter how painfully conscious you are of the conflicting agendas, there is a conditioned unconscious override that is going to prevent you from committing certain acts. Even to the point where you’ve made a conscious decision to do it, the body will not respond.
As someone who works at a drug rehab / homeless shelter and is studying psych (undergrad) this triggers a few thoughts. I can’t think of a better example of someone mediating a conflict than an addict weighing long term survival against (what I understand to be the high-jacked survival instinct that is) drug dependency. Are addicts in recovery, then, at the deep end of the consciousness pool?
The obvious counterpoint to this is the lab rat wired in to its own pleasure center and tapping the button till it starves. So if what we’re talking about here is sacrificing short term, survival-oriented decisions in favor of those with benefits appearing only in relation to a longer timeline then could consciousness crop up only once an entity has wiring for perceiving longer timelines?
Also, in response to a comment made by Michael Grosberg, I imagine working an assembly line might result in a very heightened conscious state: the conflict between desire for money and the desire to not be working an assembly line. But there would be a definite benefit in that situation to turning off consciousness.
“Chances are no matter how bad you have to go, even if you took mineral oil laced exlax just before, you are simply going to be incapable of crapping your pants.”
I can assure you that eventually you can’t hold it in.
“…for most adults, even if they tried they would not be able to actually crap their pants – especially in public.”
That cries out for a simple experiment.
I’ve been in that situation where I wanted to crap my pants. Actually tried to do it, not even in public really, just stuck somewhere where I could not get to a restroom. And even then I couldn’t do it. Maybe if I was drunk.
Still, surely any conditioned behavior can be reconditioned just as Army training is designed to overcome the natural conditioning against harming other people.
Nevertheless, most behavior is still unconscious and people continually find themselves incapable of some action despite consciously willing themselves to do it – from killing someone even in self-defense to crapping one’s pants.
Even though consciousness may seem to be a method of mediating contrary impulses, is it possible that consciousness is simply an aftereffect of unconscious mediation? Could it be that the actual mediation that determines behavior is still primarily done in the unconscious and consciousness is simply something like static or feedback?
To take the senate example, the actual senators do all their work behind closed doors while the television camera of consciousness simply captures a fraction of what is really going on.
The fact that all these elements behave as separate entities allows for consciousness, awareness or reflection to take place, but in reality, it is more like a passenger in the body than actually a player in the organism’s functions.
Tuckerize this man.
I like this theory. Not least because it completely explains my thought “processes” in the crucial seconds before giving my four-year-old nephew a knee-check to the chest as he ran straight toward a hot fireplace this Thanksgiving. Sometime during the conflict between “he’s going to hurt himself” and “he might stop just in time,” another system took over and said: “Lift your knee slightly.”
Closest I’ll ever come to goaltending, conscious or not.
@Nick – The pain in re-setting your shoulder isn’t a failure in evolution, it’s an optimization. You will never forget that pain, and the remembrance of it will likely prevent you from repeating whatever action led to your shoulder’s dislocation in the first place.
@Derek – it’s not a failure in the sense that it’s appropriate for a conflict-driven consciousness. However, in the alternative adaptive consciousness posited by Morsella, I would imagine that re-setting the shoulder would feel good, as it’s a better option than leaving it dislocated. Instead, putting yourself in a situation where you were likely to dislocate your shoulder again would feel painful, even before you were actually being harmed.
The more I think about it, the more I see some of the interesting possibilities for this in a SF story. For Peter’s sake, I’ll avoid talking about them here…
Still, I’m not sure that this mediation and its outcome is a result of consciousness or if consciousness is simply the aftereffect of the unconscious mediation.
To take the trapped under the ice example. Whether you hold your breath or try to breathe and drown in the end could be a completely unconscious action based on the relative strength of conflicting impulses. Conscious awareness of this may play no active role in the final outcome.
Perhaps it is simply there to modulate anxiety in the “body politic” – to give one a sense of unity and control over what are in the end involuntary actions.
Would this “transcendence” of yours work the same way if it were a conflict between a pleasurable/beneficial act and a sense of guilt? Does the conflict of not dropping a hot plate result in the same level of awareness as the conflict of being faithful to your spouse and banging a hot coworker? Or a religious fundamentalist masturbating?
Okay, me read paper now.
Not having read the paper, I assume the idea is that the primordial consciousness arose to handle these conflicts requiring higher-order decision-making, and that the consciousness state eventually was maintained at all times (traffic signal analogy) because of cost-efficiencies. I can see how evolutionary pressures might lead to such a situation.
What if the cost efficiency was better to have it remain off most of the time, or there was a competing factor that led to that state, and so was selected for instead? You’d get an organism that experienced moments of consciousness only when the soma encountered a high-order conflict. Such an organism might evolve a very rapid thought process, with emphasis on immediate integration of acquired data (would the senses store information for large-scale download?) coupled with rapid long-term planning, for those brief moments of consciousness before it returned to ‘cruise-control’ mode. It would ‘wake up’, integrate all of the inputs, render a decision, act on it while simultaneously setting up long-term priorities/actions for the soma, and then ‘sleep’ again.
Sharks come to mind, for some reason….
DavidK
That is an interesting take on the bicameral mind of Julian Jaynes. In his version, the idea was that the two halves of the brain would only communicate when a crisis was reached and decisions, choices needed to be made. Thus the voices of gods telling people what to do.
Then, after facing serious catastrophes, people developed modern consciousness because they had to deal with stuff all the time. This trait was then passed on in the societies that formed.
Still, personally, I think this mediation probably could be unconscious. Each impulse is like a horse and the faster, stronger horse will win the race. Consciousness might simply acknowledge it, more like a spectator than a rider.
@all of you: Keep in mind, this paper doesn’t even try to deal with the hard problem of how the subjective state of consciousness actually arises from the physical operation of the brain; that remains as mind-boggling and intractable as ever. What he’s pointing out is that certain types of conflict resolution seem to happen mainly within the arena of consciousness and not elsewhere, and this is an interesting correlation. More of a what-it’s-for than a how-it-works kinda thing…
@half of you: I find it disturbing that so many of my readers are so conversant on the subject of pants-crapping.
Paul “The Pageman” Pajo said:
Looks like someone figured out how to apply Subsumption Architecture in robotics (Rodney Brooks et al., 1986) to … consciousness http://en.wikipedia.org/wiki/Subsumption_architecture
Actually, I’d almost put it the other way around— at least, most biological processes seem to follow a subsumption-like hierarchy, and life’s been iterating since a ways before 1986. (I’m kind of surprised to learn that that perspective didn’t show up in robotics before then, in fact…)
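For anyone who hasn’t met the Brooks idea: layered behaviors, with higher layers suppressing lower ones when they have something to say. A minimal sketch — the behavior names and sensor fields here are invented for illustration, nothing from Brooks’s actual robots:

```python
# Minimal subsumption-style controller in the spirit of Brooks (1986).
# Layer names and sensor keys are invented for this example.

def wander(sensors):
    # Lowest layer: the default behavior, always has an opinion.
    return "move_forward"

def avoid(sensors):
    # Higher layer: speaks up only when an obstacle is sensed,
    # subsuming (suppressing) everything beneath it.
    return "turn_away" if sensors.get("obstacle") else None

def control(sensors, layers=(avoid, wander)):
    """The highest-priority layer that produces a command wins; lower
    layers run only while every layer above them stays silent."""
    for layer in layers:
        command = layer(sensors)
        if command is not None:
            return command

control({"obstacle": True})  # the avoidance layer subsumes the default
control({})                  # nothing fires above, so wander takes over
```

Note that nothing here deliberates; the “decision” is just the priority ordering of the layers, which is rather the point of the biological comparison.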
The ironically-named Joe The User said:
This sounded plausible for a second to me but… After a second I realized that a person is only aware of conflicting impulses … if one of them is conscious intent to begin with!
Not sure I buy this; could be a little like the flashlight assigned to find the light, and finding it everywhere it looks; by definition, when you’re not conscious you’re not aware of that state, so you tend to overestimate the amount of consciousness in the mix at any given time.
Also, don’t forget Morsella’s senate metaphor; the daemons are present in the chamber even if they’re snoozing.
If your conscious intent was focused on something like talking to guests or escaping terrorist home invaders, your consciousness wouldn’t be on conflicting motor impulses. Moreover, there are “smaller” motor impulse conflicts that don’t trigger conscious awareness any more than other small details.
The paper actually goes into this— all the muscular conflicts that don’t entail conscious awareness. Mainly smooth-muscle and cardiac stuff, as the author points out. And while Morsella is arguing that consciousness is the go-to arena for conflicting skeletal-muscle agendas, the subjective perception of those conflicts is of course at a much higher level. You are not aware of “conflicting motor impulses”; you’re aware of conflicting high-level desires.
Parnell Springmeyer said:
…You might find Dewey B. Larson’s work stimulating, to expand upon some of these thoughts. Check this out: Reciprocal System of Theory.
I had not heard of this RST thing before. (Neither, apparently, have many others.) Thanks.
Keippernicus said:
Soldiers with better education are more resistant to battle fatigue (or whatever they’re calling it these days) than less educated soldiers.
I did not know that. Source?
Michael Grosberg said:
Not having read the article (and lacking any formal education in biology to do it), I have to say this “conflict” thing sounds like simple decision-making, and “skeletal muscles” is just another word for “doing things that influence the external world”. But maybe I’m not getting it right.
No, I think that’s pretty much it, except the “decision-making” is anything but simple; you could talk about “nothing more than simple consciousness” too, but the problem remains big as a honking planetoid.
It does raise an interesting SF concept, of an alien which is only conscious when he is in conflict.
The paper specifically mentions that, too; and I’m thinking I might steal it for my own take on vampires in the Blindsight universe.
John Henning said:
I think it is an interesting point in many ways, but how does it relate exactly to self awareness or reflection — the daily experience of consciousness? The “voice” in your head that you identify as conscious thought?
See “@all of you”, top of this comment.
is the point that consciousness emerged to mediate these conflicting agendas, but it hangs around even when mediation is not necessary and therefore we have literature, art, videogames, sports and the vast majority of civilization as a result?
Morsella explicitly explores this question too, in the context of movies. The paper’s right there behind the link: check it out.
To take the crapping your pants example to the next step. Chances are no matter how bad you have to go, even if you took mineral oil laced exlax just before, you are simply going to be incapable of crapping your pants.
See “@half of you”. That’s all I’m going to say. Except to agree with the conveniently-aliased crappedmypants that it is, in fact, quite possible to crap your pants— not just once, but many, many times. In fact, it happened to a friend of mine.
Ian j. said:
Are addicts in recovery, then, at the deep end of the consciousness pool?
Wow. Good question— and my tentative answer, if we buy Morsella’s argument, is yes.
The obvious counterpoint to this is the lab rat wired in to its own pleasure center and tapping the button till it starves.
This might not be a legitimate counterexample insofar as it deals with a neurological system which has been, by definition, hacked and subverted. Those implanted wires and the buttons they’re connected to are not part of the evolved system, and actions resulting from their presence shouldn’t be regarded as “natural” in this context.
Also, in response to a comment made by Michael Grosberg, I imagine working an assembly line might result in a very heightened conscious state: the conflict between desire for money and the desire to not be working an assembly line. But there would be a definite benefit in that situation to turning off consciousness.
Or at least redirecting it inward.
John Henning said:
Even though consciousness may seem to be a method of mediating contrary impulses, is it possible that consciousness is simply an aftereffect of unconscious mediation? Could it be that the actual mediation that determines behavior is still primarily done in the unconscious and consciousness is simply something like static or feedback?
I think that’s pretty much inevitable. Assuming that consciousness itself emerges from neural activity, and that cause precedes effect, then by definition the nerves have to act prior to the onset of consciousness. The heavy lifting is all nonconscious: the subjective state is a post-hoc postcard in the rearview mirror.
How’s that for a mangled metaphor?
Derek Martin said:
The more I think about it, the more I see some of the interesting possibilities for this in a SF story. For Peter’s sake, I’ll avoid talking about them here…
Too late. You’re still not getting any royalties, though.
DavidK said:
What if the cost efficiency was better to have it remain off most of the time, or there was a competing factor that led to that state, and so was selected for instead? You’d get an organism that experienced moments of consciousness only when the soma encountered a high-order conflict. Such an organism might evolve a very rapid thought process, with emphasis on immediate integration of acquired data (would the senses store information for large-scale download?) coupled with rapid long-term planning, for those brief moments of consciousness before it returned to ‘cruise-control’ mode. It would ‘wake up’, integrate all of the inputs, render a decision, act on it while simultaneously setting up long-term priorities/actions for the soma, and then ’sleep’ again.
See M Grosberg’s comment, and my response — and again, the original paper, which suggests pretty much this exact scenario.
This idea might be true. It is also probably true that sleep originated as a means to conserve energy during times of the day you’re not active. However, trying to understand sleep as it exists in modern human brains on those terms alone will not lead to particularly productive lines of thought.
With the way evolution works, there is a lot of extra function piled on top of pretty much everything. The origin of consciousness is only one part, and not necessarily the most important.
I’d like to know what form of consciousness, according to this theory, I’m in when I’m writing. My best writing mode is a lot more pleasant (for me) than any of my other modes, because for the most part I’m in soma and no longer aware of myself. At the same time, I’m making conscious decisions about which word to choose next. So am I the shark in that moment, or the addict?
Interestingly, many writers (most recently I heard William Gibson say it) claim that they are not really the authors of their novels. Instead they result from a conversation with their unconscious.
From my point of view, people are least effective when they are most subjectively conscious. Just look at sports or performance art (including acting). Universally, people who are good at these, who are “talented,” will say that the more they think about what they are doing, the less successful they are.
Again, I don’t think consciousness necessarily serves any particular physiological purpose. It results from the complex unconscious functions perhaps, but seems much more a tail-end afterthought – a conceptual parasite – than actually involved in the important decisions and processes that sustain the organism.
I finally gave the 2005 paper more than a cursory read. There are two bits that apply directly to many of the disagreements people are posting. First, the line:
“Contrary to what our subjective experience leads us to believe, many of our complex behaviors and mental processes can occur without the guidance of phenomenal processing.”
So, keep in mind that just because you think that you’re making a conscious decision to do something, it’s quite possible that instead your consciousness is simply justifying what your unconscious mind has already decided to do.
Also, since it applies pretty directly to the current discussion, I’ll quote one paragraph regarding the type of conflict discussed:
“It should be clarified that conscious conflicts are fundamentally different from mere doubts or dilemmas, as when one ruminates whether one should do x or y (e.g., vacation in Granada or Hawaii). In contrast to such kinds of thinking, conscious conflicts are active and, in terms of phenomenology, “hot” (Metcalfe & Mischel, 1999). The tugging and pulling from their competing inclinations obtrusively creep up on awareness and seem to be beyond one’s mental control. They seem, rather, to be visceral and automatic (Metcalfe & Mischel, 1999). For example, one can easily choose to forget the allure of Hawaii after deciding to vacation in Granada, but one has no such control over the inclinations arising after one has decided to endure breathlessness or tissue damage for some end, conflicts that, in a sense, cannot be postponed or ignored. In addition, one may face a dilemma regarding which foods to eat, but this is altogether different, both in degree and in kind, from the powerful states one experiences during pain, breathlessness, starvation, or the suppression of elimination behaviors. In short, unlike doubts or dilemmas, one has no direct cognitive control over how and when these conflicts occur.”
Together, these bits emphasize that part of the reason that Morsella comes to the conclusions he does, and part of the reason that he is able to come to any conclusion at all, is that he limits the discussion quite a bit. He has plausible reasons for doing so, but it makes it difficult to have a good discussion unless you’re also willing to accept his limits. On the other hand, a major reason that he placed those limits was to make his model falsifiable, something that a more abstract model would have difficulty with (and something that is always appreciated in the so-called “soft” sciences).
One issue I'm curious about that Morsella avoids (by sticking to humans) is how consciousness applies to "lesser" species. At what level of intelligence does an animal require consciousness to mediate these conflicts? When a dog chews off his leg to escape a trap, is he showing consciousness, or is the tissue-damage system so overwhelmed that the additional pain caused by the self-inflicted harm is not even perceived? Similarly, how does consciousness link with self-awareness (another subject Morsella avoids)? Is self-awareness a necessary result of consciousness, or does it require an additional layer of evolution, in which case, what's the evolutionary drive behind it?
—–
@Peter: “Too late. You’re still not getting any royalties, though.”
That was my comment, so I’ll take the misquote and your words as free license to speculate…
1) One plot hook that jumped out at me is an adaptive consciousness needing to be convinced to take what it views as a non-optimal course of action. If a species evolved on another planet with a trend towards adaptive consciousness, it’s possible that the idea of a conflicted consciousness would be so foreign to that species that they would be incapable of understanding our motives (similar to the aliens in “Footfall” completely failing to understand the concept of conditional surrender). It’s possible that said aliens would have to somehow be convinced to follow an agenda that causes actual conflict. There’s obviously a possibility of it becoming a pretty bad plot hook, as any intelligent species would need some way to deal with external influences, but I feel like there’s something in there.
2) Similar to the McGurk effect (ba+ga=da), it would seem that the best way to overcome a foe that is non-conscious would be to “hack” its senses in such a way that it integrates conflicting inputs (including one or more threatening inputs) into a non-threatening composite. Typing that, I start to see echoes of the crucifix glitch…
@John Henning: I don’t think consciousness necessarily serves any particular physiological purpose. It results from the complex unconscious functions perhaps, but seems much more a tail-end afterthought – a conceptual parasite – than actually involved in the important decisions and processes that sustain the organism.
What if consciousness is part of the reality modeling that some animals do in order to deal with complex social interactions. Since, in the wild, most primates are intensely social, having a mental organizing principle for in-depth information about your fellows is really helpful to survival – why not organize it being-by-being?
Further, I'm a fan of the concept that the initial "I" is undifferentiated, and then gradually coalesces into models of others, one of whom is the "I" that watches the self dispassionately, particularly after language is acquired. Sounds weird, but consider that all children and many adults are functional animists, as if the human brain's software defaults to creating a mental model of other objects as having personality and intent. Good default for animals living in close groups, don't you think?
I think the “I” in opposition to “you” is an outgrowth of the modeling schema we evolved – if I don’t need to model you, I don’t need a me in here to interact with you.
Also, reading the above, I don't think we are all talking about the same phenomenon when we say "consciousness." I feel we are seriously lacking a definition.
Interesting idea. Subjective consciousness as primarily a tool for daydreaming, postulating – a kind of r&d of the mind.
Hell, if we look at what consciousness “does” it primarily spends its time focusing on unreal events, fantasies, anxieties.
One could imagine a conscious nervous system that operates as humans do but does not suffer any internal strife. In such a system, knowledge guiding skeletomotor action would be isomorphic to, and never at odds with, the nature of the phenomenal state — running across the hot desert sand in order to reach water would actually feel good, because performing the action is deemed adaptive. Why our nervous system does not operate with such harmony is perhaps a question that only evolutionary biology can answer.
That would require a lifeform of more focused intellect than ours. It would have one goal and one goal only in its life (most likely survival); every action would have to be taken in the service of that prime directive. Otherwise, how would it choose? We have these conflicts because our priorities shift in relation to changing external and internal circumstances: "Am I already dying? Okay, I can take risks I wouldn't take if I didn't have cancer. Am I sacrificing myself for a stranger? Yes? Then fuck that guy. Am I sacrificing myself to save my kid? My mate? My mother? Okay, let's do that." Or: "I'm dehydrated, malnourished, and on the verge of heatstroke, but I really need to kill that dude."
The alien consciousness wouldn’t have these conflicts. It would take whatever action served its prime directive. Reality is complicated. Flexibility is the only option.
One thing that is fascinating when dealing with consciousness (and might make a good exploration for Mr. Watts, if he reads these and a sequel to Blindsight is ever made) is the little-understood art of Zen-No-Mind. It's basically the practice of removing conscious thought from your actions so that it doesn't get in the way of meditation and/or martial arts. Imagination, improvisation, and reasoning at your standard intelligence level are still possible, but without interference from or concern with personal attachments. (Not necessarily the removal and/or control of emotion.)
If nothing else, reading about it definitely makes you go, “Hmmmmmm…”.
😉