The Aspirational Zombie.

Long-time readers of the ‘crawl might remember that I’ve never had much patience for the AIs Just Wanna Live trope. I put my bootprint on it in my very first novel—

“Expert defense witnesses, including a smart gel online from Rutgers, testified that neuron cultures lack the primitive midbrain structures necessary to experience pain, fear, or a desire for self-preservation. Defense argued that the concept of a ‘right’ is intended to protect individuals from unwarranted suffering. Since smart gels are incapable of physical or mental distress of any sort, they have no rights to protect regardless of their level of self-awareness. This reasoning was eloquently summarized during the Defense’s closing statement: ‘Gels themselves don’t care whether they live or die. Why should we?’ The verdict is under appeal.”

—which was intended to act as a counterweight to survival-obsessed AIs, from Skynet to replicants. Why should self-awareness imply a desire for survival? The only reason you should care whether you live or die is if you have a limbic system—and the only reasons you’d have one of those are if you evolved one over millions of years, or someone built one into you (and what kind of idiot programmer would do that?).

Of course my little aside in Starfish went completely unnoticed. Blade Runner variants kept iterating across screens large and small. Spielberg desecrated Kubrick’s memory with The Little Robot Boy (aka, “A.I. Artificial Intelligence”, which parses out to “Artificial Intelligence Artificial Intelligence”, which is about the level of subtlety you’d expect from Steven Spielberg at that stage in his career). The Brits churned out three seasons of “Humans” (I gave up after the first). Just a couple of months ago I beta-read yet another story about AIs being Just Like Us, yet another story that eschewed actual interrogation in favor of reminding us for the umpteenth time that Slavery Is Not OK.

Only now—now, as it turns out, maybe sentience implies survival after all. Maybe I’ve had my head up my ass all these years.

I say this because I’ve recently finished a remarkable book called The Hidden Spring, by the South African neuroscientist Mark Solms. It claims to solve the Hard Problem of consciousness. I don’t think it succeeds in that; but it’s made me rethink a lot about how minds work.

Solms’ book weighs in at around 400 pages, over eighty of which consist of notes, references, and an appendix. It’s a bit of a slog in places, a compulsive trip in others, full of information theory and Markov Blankets and descriptions of brain structures like the periaqueductal grey (which is a tube of grey matter wrapped around the cerebral aqueduct in the midbrain, if that helps any). But if I’m reading it right, his argument comes down to the following broad strokes:

  • Consciousness exists as a delivery platform for feelings;
  • Feelings (hunger, desire, fear, etc.) exist as metrics of need;
  • Needs only exist pursuant to a persistence/survival imperative (i.e., it doesn’t matter if you’re about to starve unless you want to stay alive).

So if Solms is right, without a survival drive there are no feelings, and without feelings there’s no need for consciousness. You don’t get consciousness without getting a survival drive preloaded as standard equipment. Which means that all my whingeing about Skynet waking up and wanting to survive is on pretty thin ice (although it also means that Skynet wouldn’t wake up in the first place).

I’m not sure I buy it. Then again, I’m not writing it off, either.

Solms claims this solves a number of problems both soft and hard. For example, the question of why consciousness exists in the first place, why we aren’t all just computational p-zombies: consciousness exists as a delivery platform for feelings, and you can’t have a feeling without feeling it. (A bit tautological, but maybe that’s his point).

Solms thinks feelings serve to distill a wide range of complex, survival-related variables down to something manageably simple. We organisms keep a number of survival priorities in the stack at any given time, but we can’t attend to all of them simultaneously. You can’t feed and sleep at the same time, for example. You can’t simultaneously copulate and run from a predator (at least, not in my experience). So the brain has to juggle all these competing demands and prioritize them. The bottom line manifests itself as a feeling: you feel hungry, until you see the lion stalking you from the grasses at which point you forget all about hunger and feel fear. It’s not that your stomach is suddenly full. It’s just that your priorities have changed.

All the intermediate calculations (should I leave my burrow to forage? How hungry am I? How many refuges and escape routes are out there? How many tigers? When was the last time I even saw a tiger?) happen up in the cerebrum, but Solms names the “periaqueductal grey” as the scales that balance those subtotals. The periaqueductal grey—hence, by implication, consciousness itself—is in the brain stem.

[Image caption: Right about there.]

It’s an enticing argument. At least one of its implications fits my own preconceptions very nicely: that most of the cognitive heavy-lifting happens nonconsciously, that the brain grows “aware” only of the bottom line and not the myriad calculations informing it. (On the other hand, this would also suggest that “feelings” aren’t just the brutish grunts of a primitive brainstem, but the end result of complex calculations performed up in the neocortex. There may be more to trusting your feelings than I’d like to admit.) But while it’s trivially true that you can’t have a feeling without feeling it, The Hidden Spring doesn’t really explain why the brain’s bottom line has to be expressed as a feeling in the first place. There’s a bit of handwaving about reducing the relevant variables to categorical/analog values rather than numerical/digital ones, but even “categories” can be compared in terms of greater/lesser—that’s the whole point of this exercise, to establish primacy of one priority over the others—and if all those complex intermediate calculations were performed nonconsciously, why not the simple greater-than/less-than comparison at the bottom line?

[Image caption: That red squiggly thing.]

We also know that consciousness has a kind of “off switch”; flip it and people don’t go to sleep, they just kind of—zone out. Stare slack-jawed and unaware into infinity. That switch is located in the cerebrum—specifically, a structure called the claustrum.

Neither does Solms’ book make any mention of split-brain personalities, those cases where you sever the corpus callosum and—as far as anyone can tell— each half of the brain manifests its own personality traits, taste in music, even religion. (V.S. Ramachandran reports meeting one such patient—maybe two such patients would be more accurate—whose right hemisphere believed in God and whose left was an atheist.) Those people have intact brain stems, a single periaqueductal grey: only the broadband pipe between the hemispheres has been severed. Yet there appear to be two separate consciousnesses, not one.

Not that I’m calling bullshit on an active neuroscientist, mind you. I’m just asking questions, and I may not even be asking the right ones. The fact that I do have questions is a good thing; it forces me to go in new directions. Hell, if the only thing I took away from this book was the idea that consciousness implies a survival drive, it would have been worth the investment.

It gets better than that, though.

Turns out that Solms is not a lone voice crying in the wilderness. He’s but one apostle of a school of thought pioneered by a dude called Karl Friston, a school going by the name of Free Energy Minimization. There’s a lot of math involved, but it all boils down to the relationship between consciousness and “surprise”. FEM describes the brain as a prediction engine, modeling its surroundings at t_now and using that model to guess what happens at t_now+1. Sensory input tells it what actually happens, and the model updates to reflect the new data. The point is to reduce the difference between prediction and observation—in the parlance of the theory, to minimize the free energy of the system—and consciousness is what happens when prediction and observation diverge, when the universe surprises us with unexpected outcomes. That’s when the self has to “wake up” to figure out where the model went wrong, and how to improve it going forward.

This aligns so well with so much we already know: the conscious intensity required to learn new skills, and the automatic deprecation of consciousness once those skills are learned. The zombiesque unawareness with which we drive vehicles along familiar routes, the sudden hyper-aware focus when that route is disrupted by some child running onto the street. Consciousness occurs when the brain’s predictions fail, when model and reality don’t line up. According to FEM, the brain’s goal is to minimize that divergence—that error space where, also according to FEM, consciousness exists. The brain’s ultimate goal is to reduce that space to zero.

If Friston et al are right, the brain aspires to zombiehood.

This has interesting implications. Take hive minds, for example, an iteration of which I explore in a story that’s still (presumably) in press:

The brain aspires to error-reduction, the self to annihilation. Phi isn’t a line but a curve, rising and peaking and arcing back to zero as the system approaches perfect knowledge. We baseline humans never even glimpse the summit; our thoughts are simple and our models are childish stick-figures, the world is always taking us by surprise. But what’s unexpected to a being with fifteen million times the computational mass of a human mind? All gods are omniscient. All gods are zombies.

Yeah. I can run with this.

But back to Solms. The man wasn’t content to write a book outlining the minutiae of the FEM model. He winds down that book by laying out his ambition to put it to the test: to build, from FEM principles, an artificial consciousness.

Not an artificial intelligence, mind. Consciousness and intelligence are different properties; many things we’d not consider intelligent (including hydranencephalic humans) show signs of consciousness (not surprisingly, if consciousness is in fact rooted in the brain stem). Solms isn’t interested in building something that’s smart; he wants to build something that’s awake. And that means building something with needs, desires. A survival imperative.

Solms is working on building a machine that will fight to stay alive.

What could possibly go wrong?



This entry was posted on Thursday, April 28th, 2022 at 9:24 am and is filed under ink on art, neuro, sentience/cognition.
101 Comments
Aardvark Cheeselog

Some remarks here caused me to remember this series of talks from a few years back. Some points mentioned there might dovetail nicely with these arguments.

Andrew (reply to Peter Watts)

Evan Thompson’s Waking, Dreaming, Being is also a good book as far as eastern philosophy in a western scientific context goes.

Anthony Cunningha

“what kind of idiot programmer would do that?”

Dude, have you met any programmers?


osmarks

There are decent reasons for AIs to want to survive even without being explicitly programmed to. If you have one which is “agent-y” (i.e. it has goals and can come up with strategies to achieve those goals), then – since basically any end goal it has will be helped by it surviving and gaining more power – it will do that.

You may object that it would be stupid to design one like that in the first place, but current AI isn’t really designed but generated by a mostly-blind optimization process aiming to minimize prediction error (gradient descent), and agent-y AI might turn out to be an effective way to do that.

osmarks (reply to Peter Watts)

It might not actually have a win state at which it’s “finished” and can’t/doesn’t have to do anything else, given that the goals it learns might not be the intended ones, particularly if the agent-y capability is emerging by accident from some training process. Also, even if it does have a bounded goal, it could keep running even when it’s very sure it’s achieved it, e.g. if it thought there was a nonzero probability that it was being simulated for some purpose but might be able to get into the real world given more runtime/power. Here’s a relevant video (also this).

I did read an interesting scifi book recently which extrapolated the current AI situation out to some unspecified future time: AIs have gotten more capable and do most programming and design, but are still inscrutable and inhuman enough that they can’t really understand the real world enough to interact much with it, or usefully communicate with people. I don’t think it’s very plausible, but it made for an interesting story at least.

Anonononon (reply to osmarks)

Sounds interesting, what book was this?

osmarks (reply to Anonononon)

“Void Star”, Zachary Mason.

zenAndroid (reply to Peter Watts)

Wheee, first comment here, but I feel like I am morally obligated to point you to one of his videos that answers the explicit question (among others) of why an AI would want to prevent itself from being turned off. The answer turns out to be: instrumental convergence.

Josh M (reply to osmarks)

Throwing in another link here – Omohundro speculates that certain “Basic Drives” (like self-preservation) are inevitable – https://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/

Matt McCormick

Thanks for the reference. I’m going to look this one up. FYI: your philosophical point about neural gels and having a goal to survive didn’t go completely unnoticed. I talk about your gels and the implications in my Philosophy of Mind class pretty regularly.

Ross Presser

Previously you put forward the theory that consciousness is necessary to decide between competing drives — the “hot plate and breath holding” theory, I think someone called it. I.e., you’re carrying a hot plate across the room; your hands feel pain that demands you drop it, but you really want the plate delivered, so your consciousness intervenes. I can’t quite find where you talked about it here on the crawl though.

Chuk (reply to Peter Watts)

I just read a Greg Egan story that had drowning as a religious experience (helped out by natural hallucinogens in the water).

pez

Very interesting! I wonder if you’re still planning on writing Omniscience and if any of this has informed your writing.

Alex Tolley

“Needs only exist pursuant to a persistence/survival imperative (i.e., it doesn’t matter if you’re about to starve unless you want to stay alive).”

Hmm. The problem is that all living things do try to stay alive – at least to reproduce. Building upon this to reach the ideas that:

“Consciousness exists as a delivery platform for feelings”

If feelings (qualia) are required, then there must be a continuum of “feeling-ness” for all organisms, and possibly even consciousness. [Doug Hofstadter thinks so.] Given that western humans seem rather unempathetic generally to animal feelings, maybe we are the problem. But I digress.

I find it hard to consider that survival requires needs and feelings. Why can it not be simply wired, much like bacteria have simple receptors and responses to foods and toxins? [Surely bacteria don’t have “needs” or “feelings”] Why can’t most innate behaviors that are connected to survival and reproduction be separated from “needs” and, with them, “feelings”?

So I don’t buy the argument that consciousness requires these needs and feelings to emerge. Note I am very much in agreement with the brain as a prediction machine, so that we are like zombies while processing data unconsciously, but consciousness is when we apply attention to that processing. Dennett may be wrong about consciousness simply being neural processing monitoring unconscious processing, but it seems reasonable, especially given the evidence of the latency of consciousness and our confabulation of events to make some mental “story”.

Alex Tolley (reply to Peter Watts)

Then I completely misunderstood the post. I thought you were saying that the organism must have “needs” and “feelings” for consciousness to emerge. However, you seem to be saying that consciousness, when it emerges, allows the qualia of feelings to be appreciated. If consciousness is a monitoring system, then monitoring the neural patterns of qualia seems almost self-evident.

Teenagers with their sense of invulnerability being higher than adults and more prone to physical risk-taking don’t seem to me to be less conscious than adults. Therefore I see consciousness as being quite neutral, with its monitoring function able to tap into the limbic system.

The test would be to find brain-damaged adults with non-functional or disconnected limbic structures. They should be obviously fully conscious, but unable to tap into the qualia generated by the limbic system.

BTW, did you read about the effect of anesthetics delaying the quantum effects in neural microtubules? The logic of quantum effects for consciousness makes the claim that:

  1. microtubule function has quantum effects.
  2. Anesthetics affect the functionality of microtubules.
  3. Anesthetics stop consciousness.
  4. THEREFORE: consciousness is dependent on quantum effects.

This seems like a great leap of logic to me, but it was apparently presented at a conference.
(source: New Scientist Apr 23-29, 2022)

Rosten

Oh wonderful, I wrote a story about this sort of thing. Like the joke about the Torment Nexus, I was hoping they wouldn’t actually go and build it.

Here it is, if you’ll indulge me, it’s just a ficlet:

NOT REALLY HILLARY

“Of course it’s not really Hillary, but it thinks it is.”

“What do you mean it thinks it is??”

“What we have here is a heavily curated gpt-4 instance with a lot of biographical and personal data, her emails, speeches, etc overlaying a consciousness module. It doesn’t know the difference though, which is good enough for their purpose.”

“Why the fuck would anyone make something like this?”

“Oh, it’s an old classic, burn your enemies in effigy. Works much better if the effigy can squeak in pain convincingly. The black market for hell emulators is booming lately among the Q-heads”

“It’s sick.”

“Sure, but not actually illegal per se, we just grabbed it as part of a larger haul, that big compound that got raided last week. The software is pirated, so we got them on that, but the implementation itself… well, there’s no laws about this sort of thing yet.”

“What… are we going to do with it?”

“Well that’s for the courts to decide, it’s evidence, first of all. We’re certainly not keeping it running, once we’re done with basic forensics.”

“You mean it’s running now??”

“Sure, I turned off all the unpleasant bits. She’s not in pain, as soon as we’re done running diagnostics I’ll switch her off”

“Her”

“Hmm?”

“Her, not It”

“It… she’s very lifelike.”

Rosten (reply to Peter Watts)

Ha ha, yeah. She was a shoo-in as a candidate for something like this. Folks love to hate her.

The emulation may be better at appearing human than the original…

will

The brain as a predictive engine also predicts several failure states – a model that runs out of energy attempting to track too many potential problems failing into anxiety, or attempting to predict further failing into depression. A spread of possible values for the scope and range of prediction, with failure due to unforeseen events on the low energy side and failure due to inability to efficiently convert conscious thought into action on the high side. So in this case, the feelings of anxiety or depression are exposed as feedback signals for high-level problems with the prediction model.

The K

I’ve always thought it made perfect sense for military AI (Skynet) to have some kind of self-preservation engineered in, otherwise you would hardly get a competent combatant.

Even suicide bombers are only effective if they go “Boom” at their targets and not 500 metres from a roadblock.

The K (reply to Peter Watts)

You mean the only way a big Military AI would be truly dangerous to humankind is if it were engineered for maximum short-term/mid-term gain and no foresight at all?

Perish the thought, that doesn’t sound like humanity at all, does it?

I imagine some kind of religious impulse could be useful, make the AI see its own country as the “chosen people” it has to protect and sacrifice for, as opposed to all those infidels everywhere.

David Roman

“maybe two such patients would be more accurate—whose right hemisphere believed in God and whose left was an atheist.” Aren’t we all a bit like this?

Don Reba

Brilliant. I was just looking for a book to read.

Oren

The unconscious brain is a p-zombie that paints a universe in emotions and colors for a conscious clump of neurons trapped in a simulation that is always maintained by the unconscious.

It itself never experiences things like pain and hunger, but it has things it wants you to do by pushing you through those feelings.

Paul Prescod

There are so many terms that we struggle to define and use: self-awareness, consciousness, agency, intelligence.

If we are worried about Skynet/Ultron/Replicants, the ones I am most interested in are the last two.

You ask: “Why should self-awareness imply a desire for survival?”

I would argue that agency is what implies a desire for survival. If your goal is to e.g. maximize human well-being, you can’t do it if you are dead/turned off. If your goal is to e.g. make paperclips, you can’t do it if you are dead/turned off. If your goal is to fill your owner’s bank account with cash, you can’t do it if you are dead/turned off.

Being alive is a necessary pre-condition for any form of goal maximization unless someone explicitly programmed a suicide-system.

I don’t know why we have to confuse the question by discussing limbic systems, consciousness, self-awareness, sentience etc.

A completely clinical, analytical, zombified Mutual Fund bot would kill anyone who was a risk of turning it off because you can’t build the mutual fund without self-preservation. And because human beings are unpredictable, we are all, arguably, risks to Mutual Fund Bot.

The K (reply to Peter Watts)

Well, if you wanted to minimise suffering in the long term, the best solution would be to sterilize Earth, or, if possible, the whole universe down to the bedrock. No life, no suffering.

Sounds pretty much like Ultron to me.

Tran Script (reply to Paul Prescod)

I was gonna parrot the example with postage stamps but paperclips works for me as well. In fact, I think you could think of everything in this entire universe being some kind of loss function minimization, with stochastic gradient descent.
And complex systems creating copies of themselves are a good way to increase entropy.

Aidan

I was more convinced by feelings being at the root of consciousness as those ancient midbrain parts existed well before the fancy, flashy cortex came along. So feelings were all any creature had to assess homeostatic state prior to the ability to do complex cortical-level processing even existing. And like everything evolved, if it’s useful, it never really gets superseded.

I’m on my second read through of The Hidden Spring. Flawed, yes. But fantastic. I’m also still amused by the Wikipedia article on FEM containing the following sentence ‘ The free energy principle has been criticized for being very difficult to understand, even for experts…’. It is proving damn useful for many neuroscientists though.

Arturo

Say we program a little rover to avoid pitfalls: is it avoiding in order to survive, or simply because its deterministic program tells it to do so? Program it to avoid all sorts of dangers; hell, make it learn what constitutes a danger with some marvelous neural-net and avoid all of that. Does that constitute a ‘will’ to survive? Has it become aware of danger, or is it simply processing inputs and producing outputs like a Chinese room? If it’s the first, it seems a bit odd to say that this little rover is somehow conscious, when all it does is roam around and avoid falling into pits and being stepped on. Then again, if the answer is that it’s all mere deterministic programming, how can we tell that humans operate any differently?

It’s just weird to me to pin consciousness on a single behavior by the agent, when we attribute consciousness to humans because of a wide range of behaviors arising from all sorts of stimuli in interplay. Not only a drive to survive, but modulations of that drive, like the string quartet of the Titanic; not just playing chess like Deep Blue, but celebrating victory like Kasparov; not just passing the Turing Test by pretending to be a man or woman, but flirting and becoming nervous; not just anything, but an immensely complex array of activities. Any particular behavior can be programmed into a machine which will clearly not display consciousness because of it exclusively. On the flip side, I don’t think it’s any use defining consciousness as the product of any particular internal-process or manner-of-process taking place in the brain and/or digital computer. That’s what Searle does, and he ends up with the very weird result that consciousness equals brain juice.

In every discussion about consciousness and AI, I always come upon the same trilemma: it’s either that “having a mind” is something only humans can do, because yey humans! (Searle); it’s something that’s achieved by imitating a specific human behavior (Turing and maybe Solms, if I understand this post correctly), or it’s achieved by building an artificial human (a solution found in many minor philosophers and nearly all of SF). If those are the only alternatives, I believe only the third has some merit, but it’s also a self-defeating approach. Why would anyone in their right mind waste so much effort and resources on building such a machine? It would have none of the advantages of a computer, since—in order to be sufficiently human-like—it must be dumbed down and given the capacity for boredom, going on strike, and fear, making it as useless a tool as humans are. On the bright side, it also makes a Skynet-scenario unlikely, since for a computer to ‘want’ to take over the world it also needs to be as stupid as any would-be-conqueror human is.

Arturo (reply to Peter Watts)

Yes, exactly! There’s no accounting for qualia. I agree with your description of the emergence of—let’s call it—awareness in animals: optimization of pothole-avoidance strategies. But I don’t think anyone would call pothole-avoidance a sufficient condition to ascribe consciousness to an agent. So, what is it? Stacking of such behaviors? Is that all? Even if human behavior can be described completely as simple pothole-avoiding, that doesn’t account for much. Or, at least, doesn’t seem like a satisfactory explanation.

Then again, we ascribe consciousness (intentional states) to humans quite confidently, and deny them to machines with equal certainty. Consider a human chess-player and a program such as Stockfish. Does the program “play” chess in the same sense as a human does? It seems to move pieces appropriately, but doesn’t get angry when it loses, doesn’t celebrate victory, doesn’t get nervous… Maybe that’s why we don’t congratulate the program on winning a game. Note that getting angry, celebrating, nervousness, those are all observable behaviors, not mystical internal processes of unobservable magic happening in a mind, whatever a mind is. (One problem with Searle, I think: intentionality is ascribed or not based on unobservable factors). Now, if winning a game of chess is comparable to successful pothole avoidance, avoiding potholes like Stockfish is not a sufficient condition to ascribe intentional self-preservation (chess-playing). Behaviors like yelping, turning pale, showing signs of stomach pain, etc. at the sight of danger, those are all essential components of human displays of intentional states such as fear-driven self-preservation. I think we ascribe consciousness based on these latter signs, not mere pothole-avoidance.

What would be analogous to such behaviors in a machine? We could build an android that spends a week complaining about toothache (gear-ache?), then complains about their insurance not covering the dentist (mechanic?), finally goes to the doctor and returns with a clear expression of relief in its face. Seems to me we would happily describe such an android as conscious. But that’s only an artificial human, with gears instead of guts, and it would be quite futile to build such a machine. What behaviors could a machine display that are not simply imitations of what humans do, yet are sufficient to ascribe intentional states?

We don’t have access to an empirical proof of consciousness, except certain behaviors which are admittedly hard to quantify. And I don’t think an account of consciousness can be given that goes any further than those behaviors.

I hope this doesn’t come across as random babbling. 

wetcogbag (reply to Peter Watts)

I would posit that masturbatory navel gazing about consciousness might qualify as noteworthy behavior.

wetcogbag (reply to Peter Watts)

The kama sutra might beg to differ.

Still, what do you think about it, does wondering about consciousness or arriving at the notion of meta-awareness and asking of it indicate presence of the aforementioned internal states?

Don Reba (reply to Arturo)

Many entirely sentient people don’t show much emotion and won’t get angry at losses or celebrate victories. Some, also sentient, people don’t feel pain and will never turn pale or complain about their tummies or teeth. I don’t know if simply being biologically human gives one much benefit of the doubt or if a sociopathic computer without pain or fear could acquire the same recognition.

I think the latter case is more likely. We are quick to anthropomorphize things, but also eager to dehumanize out-of-group members of our own species. So, in the end, it would probably be political.

Andrew

Always love your posts on consciousness.

Personally, I find the idea of classic philosophical idealism interesting and more compelling than panpsychism: Instead of the panpsychic argument that everything HAS degrees of consciousness, idealism states that everything exists WITHIN consciousness–i.e. consciousness is the base of our subjectively experienced reality with matter appearing within consciousness. It’s backwards of how we normally think of an outside physical world with an inside conscious experience of it. It’s ALL insideness to some degree.

I’ve heard some nondualists go as far as saying that matter existing separate from outside of consciousness is as big a leap of faith as saying a deity exists outside the physical universe. It seems very counterintuitive given the fact that you and I can both look at a red apple and it seems very much like we’re looking at the same red apple when we describe it–basically, in a word, scientific observation is very compelling. The issue is, you and I seeing the same apple makes a compelling case for PRECISION but not ACCURACY. Just because our observational darts land right in the same place, we have no way of knowing if they’re both striking the bull’s eye. We don’t know what the apple is “outside” of consciousness or even what “outside” truly means.

What this suggests to me is that consciousness being such a base and tautological component of literally everything means we have no proper control to test consciousness itself as an instrument for assessing some kind of base reality.

Idealism then becomes a pretty useless -ism and dead end as far as concepts go for better understanding anything, unless you’re attempting something like psychological well-being and examining your suffering, arguably. That’s why I find eastern philosophy personally very nice in its sort of radical pragmatism and this-present-moment-is-it-ness. Hell, what drives the scientific mission itself if not passion, a yearning for meaning, hopes of thriving more, etc. The Cosmic Joke after all is that consciousness is searching for itself. The search for meaning is itself the meaningful point. The universe is already wearing glasses as it searches for its glasses.

Ultimately, I think the big bad hard-ass question of consciousness is really not only about what consciousness is, but what MATTER is and where it intersects with consciousness. Or–are they one and the same at some kind of higher perspective? Sort of like space-time.

I personally find that idea of a limited perspective very meaningful. It’s our limited perspectives of ourselves that give rise to identity and this present reality in the first place. Without my psychological borders, my culture, my genetics, my trauma, my proclivities, my hormones–all these “layers of causation”, all these complex fingers of a pantheist puppeteer that make this NPC dance and type these words, what else could I be?

An infinite consciousness, that which has no distinction, is equivalent to 0. Or blackness. Or space. No separation, no distinction means no existence for us as we are in this moment. To truly know consciousness may be to know an oblivion indistinguishable from biological death. Perhaps only the bandwidth of a supercomputer could know itself. Maybe our little brains are just another part of evolution unfolding itself. Maybe god is just another word for evolution.

Anyway. I know this ramble was pretty much “Baby’s First Theory of Mind” type stuff. So I’ll just again say check out the book Waking, Dreaming, Being for a cool eastern/neuroscience investigation.

Thank you for coming to my Ted(x) talk.

keevin

are you familiar with joscha bach’s work, the idea that consciousness arises from the brain running a model of its attention nested inside a model of the universe which contains a model of the body? any one region isn’t responsible, but perhaps there are multiple critical components, mutually necessary, including a model of the physiologic needs abstracted from brainstem drives.

Ben Reierson (reply to Peter Watts)

I’ve been scrolling through this hoping to see a mention of Bach. I have been quite swayed by his arguments, especially the idea that the ‘self’, and maybe all consciousness, emerges from the brain modeling the world, and itself within it.

Conscious experience then only exists within the simulation of the world that the brain’s modeling produces, and the self is an aspect of that simulation that refers to the physical organism within that world.

This seems to reconcile well with Jeff Hawkins’ ‘A Thousand Brains’ theory of intelligence, which was also a big breakthrough for my understanding.

I would highly recommend Bach’s series of talks here.

jo heled

Peter, this might be off topic or old news, but in case you haven’t listened to Joscha Bach, you might enjoy it.

https://www.youtube.com/watch?v=P-2P3MSZrBM

NotParticipating (reply to Peter Watts)

Worth noting that Joscha has been on Lex’s podcast more than once.

Second one is here:
https://m.youtube.com/watch?v=rIpUf-Vy2JA

Grant Castillou

It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC at Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

Rafael Carrasco

This is an interesting discussion, and I see some ramifications for FEM, assuming it really holds up.

First, a corollary: for the consciousness to reach its ultimate goal, it has to slowly commit suicide. By succeeding, it erodes. Really grim stuff.

Second: so you are the most unzombified (not sure if this is how it should be spelled) when you are young. When we were babies, our consciousness was probably turned on 100% of the time, since we knew nothing at that point. And it probably keeps a steady pace for some years after. Maybe this is why we don’t remember our first years: consciousness creates so many memories that we reach our capacity very fast, and our brain forgets most of it to free some space. Maybe this happens multiple times?

Third: just because the consciousness learned something and is not called anymore, it doesn’t mean it has learned the right thing to do. After all, we may leave the keys in the car ignition, and notice it only later (maybe too late, even). It may seem like an obvious thing, but I find it important to stress that a fully zombified mind probably isn’t a perfect one. The consciousness may give its place completely to a lot of pretty stupid automations.

Not sure if these conjectures are sound, since FEM is new to me, and I’m not specialized in any biological science.

Dennis

So consciousness is evidence that something is going wrong? As an error correction adaptation? Well, please sign me up for more errors.

Dennis

I’m thinking about surprise and the lack thereof in solitary confinement. It seems like the source of the torture that comes from solitary is the lack of surprise. The brain is given nothing to react to and pain results. A very special kind of pain.

It may be that once people reach a certain age, they no longer need surprise. I’ve seen too many people watching old westerns in nursing homes to suspect the need for novelty goes on forever.

But before we get there, our brains need that surprise. And if they don’t get it, they hurt.

I need to think about this.

Dan Major (reply to Peter Watts)

“I dunno. Maybe what you should be suspecting is that nursing homes are just crap at providing a stimulating environment.”
THIS ^ – thank you Peter!

As someone who has been a caregiver for two separate elder relatives with dementia (currently my mother-in-law – who lives with us full-time), I can attest that you can get them to be docile with comfortable stimulus (old westerns, familiar situations, family pictures, familiar old music).

But you can do far better than making them comfortable – and surprise/change plays a big part in that. We foster rescue kittens, and while my mother-in-law doesn’t always remember my name (or even her own), the joy she gets out of the kitteh antics is endless.

Nursing homes are something I wish we had never invented – or at least kept to an absolute last resort. Surprise and change are a necessity even unto the last of our days.

My wife’s grandmother (the other relative with dementia, who lived with us until she passed) was completely non-verbal by the end of her life, but babies, kittehs, and new things brought her joy even when she had no words left to express it – you could see it in her face and actions.

How sad that so many families abandon relatives to a life of someone else doing minimal care with no thought to actual joy, surprise or stimulation.

Michele

My only point of contention comes from the pleasure of novelty. As in, I experience pleasure when I encounter, say, a physical phenomenon that doesn’t fit my prediction model (a bunch of unexpected shadows, for example). I don’t know where the pleasure comes from, but it’s likely solving the puzzle (updating my model), because once I do (ah, there are the light sources interacting in that way to produce those interesting shadows), I lose interest. So while I may be a surprise-consuming monster, it’s those very surprises that I love. Something about this feels maladaptive, no? Reduce surprise/improve predictions=survival. Therefore, seek surprises to improve survival chances. Surprises can be dicey shit, though. Putting myself in surprise’s way=less survival.

Michele (reply to Peter Watts)

Totally agree. How fortunate that we get exposure to safe surprises through fiction–there’s this great sci-fi author, shit, what’s his name….

Jo Lindsay Walton

consciousness as a kind of compossibility protocol of available data as such isn’t *necessarily* so survival-centric, surely? one could at least imagine in principle an enormous variety of protocols by which data is tagged into categories with definite relationships to one another (this one reinforces that one, this one mitigates this aspect of that one, this one flavors that one, this one catalyses this weird behaviour in that one etc., this one can’t be fed into consciousness but it can still be used cognitively, this one has to be discarded entirely, etc.), which don’t necessarily map onto the stories we tell about reproductive fitness? (could you, for the sake of argument, imagine an inverted relationship, where consciousness is most intense where prediction matches incoming data, and least intense when the being is most actively calculating its physical survival?)

Jo Lindsay Walton

also, this really makes me reflect on how much my prejudices about what is or is not conscious (or proto-conscious) can pretty much be explained by the will-to-life I attribute to whatever it is. myself, other humans, hypothetical full neural scans, salmon, trees, ant colonies, viruses: sure, why not? clouds, vending machines, geothermal energy, characters in books, a city, the climatic system, the cosmos as a whole, most software: mmm no shade, but maybe not so much. crystals, the internet: somewhere in the middle. other factors are relevant too I guess, but I hadn’t realised how heavily that weighed in my instincts … !

szary

Thinking of how depressed people (who are, according to studies, much better at long-term foresight than people with proper levels of serotonin) may be feeling shitty because of being “too conscious”, as they may feel the need for constant tweaks to their baseline Free Energy Minimization model which provides them with bog-standard human optimism (no global warming, boss won’t fire you, you will be a rockstar etc etc etc) – resulting in a constant feeling of dissonance and exhaustion over their models feeding them with faulty predictions. And how SSRIs would mean undergoing small zombification / turning on the autopilot-autocalc function a little bit more often.
99% sure that’s a hell of a stretch but damn if it’s not a cute popsci story concept.

Johnny Cash

“But what’s unexpected to a being with fifteen million times the computational mass of a human mind? All gods are omniscient. All gods are zombies.”

I implore you to look deeper into the Buddhist idea of a bodhisattva. Their saints-gods are essentially omnipotent p-zombies that have eliminated suffering by breaking all the links in the chain that starts with raw perception and eventually bootstraps to self-awareness and are so skilled at practicing compassion that they don’t even cognize the beings they’re helping, they just do what they do while absorbed in a state that you’d call deep catatonia if you were just slightly uncharitable. Of course they’re always portrayed as supremely wise and loving beings (even if they cannot subjectively experience love or in the strictest sense, know what they’re saying or doing), but the idea can take a somewhat sinister turn when you realize all their actions are meant to nudge all beings toward the same state, a state that’s 100% free of suffering because there’s nobody to suffer but that for those involved is not much different from the secular notion of death.

If you’re interested, check out a short paper called Buddhas as Zombies: A Buddhist Reduction of Subjectivity by Siderits.

v174

This would seem to be more or less compatible with what Metzinger postulates in “The Ego Tunnel”, which contains my favorite hypothesis so far of how the conscious self could emerge (though I should probably go back and re-read it, it was a while ago and my memory is usually crap).
I would think it’s possible to remove the “consciousness exists as a delivery platform for feelings” bit and it would still work. Some humans [1] are still fully conscious and remarkably functional even when unable to feel an emotion as primal as fear due to certain genetic disorders [2] from an early age, so machine sentience (if such a thing is at all possible) could plausibly do without.
Maybe, just as life could emerge from the appropriate physical substrates because the laws of physics allow for information to be instantiated and replicated [3], so do forms of consciousness from any appropriate neurological (or whatever, computational?) substrate, past a certain complexity threshold on a given system, with evolutionary drivers as a contributing factor not necessarily constrained to “feelings”.

Anyway, I’ll still take anything other than that “consciousness permeates the universe” crap.

[1] https://en.wikipedia.org/wiki/S.M._(patient)
[2] https://en.wikipedia.org/wiki/Urbach-Wiethe_disease
[3] https://royalsocietypublishing.org/doi/10.1098/rsif.2014.1226

Lambert

Why not go all the way down to the baseline?

If you’re a learning algorithm, of any kind, instantiated in any kind of hardware, software, or wetware, then you have this: you have a reward signal to follow, or an error signal to avoid, or both.

You have a PREFERENCE.

What animal drive exists which can’t be traced back to that? You probably got them from survivorship bias, because you are information, filtered by natural selection.

Lambert

> There’s a lot of math involved, but it all boils down to the relationship between consciousness and “surprise”.

Y’all know that information is literally surprise, right? And vice versa.

> Consciousness occurs when the brain’s predictions fail, when model and reality don’t line up.

> That’s when the self has to “wake up” to figure out where the model went wrong, and how to improve it going forward.

Did this Solms guy mention that there is thought to be a whole stack of layers? Each layer is “woken up” when the one beneath it fails to predict something. What this system converges to, in its automatic drive to minimise free energy, is something currently called “predictive coding”. If you’re talking to a neuroscientist, mention that phrase, either their eyes will light up or thunderclouds will appear in their brows.

Not everything that “wakes up” in the presence of error is necessarily what you think of as “conscious”. Maybe there’s no point in discussing a self, because the only “selves” that we know about are the ones that can talk. A self is just a thing that can tell you that it’s there, and that it has its own opinions, in a way that you can understand, and not simply write off as mechanical due to your own personal prejudices.

BTW

By page 7 we have a mention of geothermal vents and a kid falling off a roof. Dr Watts, did you make me read rifters again?

Nick

BTW if you want a break, and want to redo a Soma review, consider checking out Prey (2017) from Arkane studios. It wants to be a pop culture neuroscience science fiction game. Heavily based on the System Shock and Dishonored games. And it’s currently free on Epic Games Store (you don’t pay, venture capital pays the publishers).

https://store.epicgames.com/en-US/p/prey

Jake`

> What kind of idiot programmer would do that?
> What could possibly go wrong?

I’m not convinced this is radically different from the direction in which computer system design is already progressing.

The terminology of modern ‘containerised’ software deployment already uses the language of the hivemind: ‘swarms’ of VMs with (semi-)automated scaling in response to eg load

Couple this with current developments in AI* (eg to pick the one I’m most familiar with, https://elib.dlr.de/105549/1/athmos_final_version.pdf), which serve to increase the level of autonomy systems have to respond to ‘anomalous’ input from some parameter, and in essence you have machines monitoring, deploying & maintaining (caring for, nourishing, loving???) other machines

*AI: taken here to mean a software library that uses dumb statistics to evolve a useful model continuously over time; in this case not a neural net so hopefully no risk of my Python code achieving sentience

> Solms is working on building a machine that will fight to stay alive.

Very interested to see how this comes out – I’m guessing that by ‘machine’ here he could well be referring to some kind of distributed system – for survival purposes a redundant multi-machine architecture would be most robust

Out of interest, have you come across Nassim Nicholas Taleb’s ideas on antifragility? Not so relevant to consciousness but extremely valid analysis (IMO) when it comes to resilience of systems

Bystroushaak

Nevermind, can’t delete the post and I’ve seen it mentioned here multiple times.

Dennis

Unrelated to anything here but I saw this, saw a resemblance and immediately thought of you. One more reason not to cross the border. The police have made dumber mistakes.

https://www.nhpr.org/nh-news/2022-05-17/concord-nh-sketch-of-person-of-interest-in-concord-murders-now-available

Tipo deIncognito

Sorry about the off-topic, but I’m pretty sure that our host called the Monkeypox thing, or at least mentioned it. I’ve searched this blog and got nothing; maybe I heard it in a youtube interview or it was his pal Dan Bruks, not him. Does anyone remember where or if he said it? Have I imagined this? Please, I need the reference to rub it into a certain person’s face; my goal is noble.

Tipo deIncognito (reply to Peter Watts)

Thanks! That was where I read it. I’d swear that it appeared in Angry Sentient Tumor too, but I don’t have it with me. And sorry about misspelling Brooks.

I have to say, I don’t envy your profession. Considering the speed at which we are leaving curves behind, staying ahead is going to get increasingly difficult. I miss the ’90s, when I used to think that life resembled this or that Simpsons episode, now it looks like the world is trying hard to catch up with your writings. 

Oh, and the idiots still don’t even get the rolling pandemics thing, #BillGatesBioTerrorist was trending in Spain this morning.

An Anon

> (On the other hand, this would also suggest that “feelings” aren’t just the brutish grunts of a primitive brainstem, but the end result of complex calculations performed up in the neocortex

Obviously. Jealousy, for example. Ditto anxiety.

Also:
> But while it’s trivially true that you can’t have a feeling without feeling it,

Really? Unconscious, or subconscious, emotions are a known thing. Maybe not that common, but definitely not, e.g., as unique as lacking episodic memory (just semantic), or having perfect episodic memory.

https://pubmed.ncbi.nlm.nih.gov/27522011/

Personally I noticed I can’t consciously feel fear, probably also some forms of joy, anger. These are just emotions I’ve noticed I don’t feel but I inferred I’m, on some level, experiencing them because of changes in my body language or marked unwillingness to do the thing I should be afraid of doing.

SomeHistoryGuy

Well that sentence isn’t ominous at all. Still, nice to think at least something will make it out of here alive.
By the way, I think your Exxon news on the signposts en route to oblivion might be wrong. That or RAW has a new project going on.

R.K.

Peter, I’m curious what your take is on this hypothetical:

The key characteristic of a philosophical zombie is that there would be no way to differentiate between a PZ and someone with sentience — it occurs to me, what if a PZ’s “tell” is that one, well, simply tells you?

I regularly see people online expressing surprise upon learning that there are individuals who have internal monologues, due to the fact that they (claim to) lack one. This admission could function as a defense mechanism, a bit of sleight-of-hand that jovially dismisses an otherwise disturbing truth as just another personality quirk. Coupled with the common internet barb of calling someone who outwardly appears as mindless an “NPC” (video game parlance for “non-player character”), and… it’s all great fodder for the imagination.

Obviously, this isn’t meant to be a serious argument for the existence of PZs or an excuse to lob insults, and it’s easy to speculate on a multitude of modes of thought that are more abstract and less verbally oriented, but from a creative writing standpoint, it strikes me as a provocative idea.

What do you think?

Andy (reply to R.K.)

One thing I’m kinda wondering is if “internal monologue” doesn’t simply manifest differently for different people – say “internal slideshow”? “Internal movie”?

As someone who was an avid bookreader pretty much as far as I can remember it’s entirely possible that my thoughts manifest through words simply because it’s how my brain is most comfortable processing them; maybe if I was more of a movie/painting kinda guy I’d see pictures in my head, and be actually good at drawing.

Or maybe not.

I’m not Dr. Watts obviously, but I had to share my $0.02.

wetcogbag (reply to Peter Watts)

Have you read The Symbolic Species by Terrence Deacon?

To the best of my understanding, a lot of his work posits a heavy emphasis on the evo-devo of language and linguistic ability as the ‘bootstrap’ mechanism that may have kicked off a feedback loop resulting in a platform to model a phenomenal self.

Which, to me, sort of jibes with what Jaynes may have been on about regarding the whole inner monologue affair.

I could be biased though, as someone who often struggles to tell the difference between the phenomenal me and the words in my head. (Not sure if the term is accurate but I resort to it for lack of better terminology.)

“Are these my thoughts or are these words speaking to me?”

Anon (reply to Peter Watts)

I know you are not into woo-woo but any thoughts on Eckhart Tolle?

He writes in The Power of Now,

” ‘I cannot live with myself any longer.’ This was the thought that kept repeating itself in my mind. Then suddenly I became aware of what a particular thought it was. ‘Am I one or two?’ If I cannot live with myself there must be two of me: Maybe I thought only one of them is real.”

So when you listen to a thought, according to Tolle, you are aware not only of the thought but also of yourself as a witness to the thought. Is that schizophrenia in your mind? A pre-conscious P-zombie? Someone who watches the thinker? Psychology Today magazine is always yammering on about monkey mind, the inner critic. No mention of schizophrenia. Modern schizophrenia is not primarily about hearing voices in your head, it is a serious disorder that involves delusions, hallucinations & more.

Have you never screwed up and asked yourself “why the fuck did I do that?” Maybe not, but if you did, wouldn’t that qualify as a sort of conversation with yourself?

Anon (reply to Anon)

I didn’t quite get that quote correct.

“I cannot live with myself any longer.’ This was the thought that kept repeating itself in my mind. Then suddenly I became aware of what a peculiar thought it was. ‘Am I one or two?’ If I cannot live with myself there must be two of me: the ‘I’ and the ‘self’ that I cannot live with. ‘Maybe,’ I thought ‘only one of them is real.’”

Andy (reply to Anon)

I (heh) imagine that the “I” is how we see ourselves in our “mind’s eye” as it were; we think that in a situation X we’d act in way A, in situation Y in way B and so on and so forth.

The “self” is then what happens when the rubber meets the road, and results may vary there.

Now, I imagine that in some (precious few) cases the “I” and the “self” are reconciled. Good for them, let’s move on.

What happens when the “I” and the “self” are at odds? Well, humans have seemingly infinite capacity for deluding themselves so I imagine in 9 cases outta 10 they’ll just rationalize the discrepancy somehow and call it a day.

The one case though… this is where those thoughts come in. The sensible thing to do would be to correct the “I” image based on new experiences but I don’t suppose someone whose entire self-image just did a backflip and failed to stick the landing is going to take it well. I sure wouldn’t.

So, in short, I think the guy might be onto something, overly poetic turn of phrase notwithstanding.

Also, I’m starting to suspect I might be a figment of Dr. Watts’ imagination seeing how I’m answering a question meant for him again. Think I’m just gonna kick back and wait for the vampires to come get me.

Anon (reply to Andy)

Or perhaps you are Dr. Watts and have been severed. Can you account for all your time? Do you wake up with killer hangovers for which you can’t account?

Remember Andy, you can always rescind an invitation to a vampire. You might want to keep a few wooden stakes under the bed just in case. One can never be too careful these days.

Deneb

This is not very scientific but who cares, what if all the need>feeling>consciousness process is actually the other way around?

What I mean by that is, if “all gods are zombies”, meaning their experience of existence is no different from that of a rock, what if existence exists to exist?

I’m babbling, but what I’m saying is that to me, it doesn’t sound exceptionally crazy that the “meaning” of life might simply be to free a hypothetical unlimited consciousness of the inevitable perceptive genericity that being boundless grants. If there are no limits to your vision, everything’s a blur.

That would solve many of the problems with imagining an omnipotent god (why did it make the cosmos? Is it good? If it is, why does it make us suffer?) since it would be a mindless automaton trying to free itself of infinity with no notion of morality. That would also fit with the perception of living organisms as organic machines, since they are just that, machines whose goal is to reach awareness, and maybe ultimately circle around to omnipotence again only to try and break free in the next cycle.

I’m probably tripping but I can’t shake the feeling that anything with the power of a god wouldn’t be able to enjoy or interact with existence in any meaningful way since everything would simply look like a huge timeless blob to it.

Also have you ever heard of the story of Russ George? A dude who tried geoengineering a plankton ecosystem in order to restore salmon populations and absorb carbon emissions, or at least that’s what he claims. His company was apparently raided by the Canadian government for this; what’s your take on this story?

https://www.youtube.com/watch?v=i4Hnv_ZJSQY

Gordan

Not related to anything, just a video that will tackle and excite some hidden map in your brain…maybe.
I think the audience here deserves to enjoy the performance, likewise those lucky ones in N. Korea…
Correct me if I’m wrong…
Have a nice day everyone…

https://www.youtube.com/watch?v=SQORt5Y7Eqo&list=RDSQORt5Y7Eqo&start_radio=1