{"id":10307,"date":"2022-11-01T11:50:34","date_gmt":"2022-11-01T19:50:34","guid":{"rendered":"https:\/\/www.rifters.com\/crawl\/?p=10307"},"modified":"2022-11-01T11:50:35","modified_gmt":"2022-11-01T19:50:35","slug":"the-pong-imperative-driving-dishbrain-to-suicide","status":"publish","type":"post","link":"https:\/\/www.rifters.com\/crawl\/?p=10307","title":{"rendered":"The Pong Imperative: Driving Dishbrain to Suicide."},"content":{"rendered":"\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p style=\"text-align: right;\"><a id=\"post-10307-Dishbrain\"><\/a> Achilles Desjardins had always found smart gels a bit creepy. People thought of them as brains in boxes, but they weren&#8217;t. They didn&#8217;t have the parts. Forget about the neocortex or the cerebellum\u2014these things had <em>nothing<\/em>. No hypothalamus, no pineal gland, no sheathing of mammal over reptile over fish. No instincts. No <em>desires<\/em>. Just a porridge of cultured neurons, really: four-digit IQs that didn&#8217;t give a rat&#8217;s ass whether they even lived or died. Somehow they learned through operant conditioning, although they lacked the capacity either to enjoy reward or suffer punishment. Their pathways formed and dissolved with all the colorless indifference of water shaping a river delta.<\/p><\/blockquote>\n\n\n\n<p class=\"has-text-align-right\">\u2014<em>Maelstrom,<\/em> 2001<\/p>\n\n\n\n<p>There&#8217;s an obvious contradiction in those last two sentences. Reward\/Punishment is the very foundation of operant conditioning; how can it work on something that experiences neither? I wasn&#8217;t sure, back at the turn of the century. I knew what feedback was. I knew that the pseudoneurons of neural nets had weights attached to them, that the odds of one of them firing would increase some fractional amount every time it made the &#8220;right&#8221; move. 
I figured that by the time my story took place, people would have figured out how to make the meat act like the software. It was far from the biggest liberty I took in that novel. No one rang me up to call bullshit on it, anyway.<\/p>\n\n\n\n<p>Now, a quarter of a century later, I have my answer. Yes, folks, it&#8217;s time for another ripped-from-the-headlines story about Head Cheeses IRL.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignright size-large is-resized\"><a href=\"https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/fx1_lrg.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/fx1_lrg.jpg\" alt=\"\" class=\"wp-image-10310\" width=\"298\" height=\"298\" srcset=\"https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/fx1_lrg.jpg 996w, https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/fx1_lrg-300x300.jpg 300w, https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/fx1_lrg-150x150.jpg 150w, https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/fx1_lrg-768x768.jpg 768w\" sizes=\"auto, (max-width: 298px) 100vw, 298px\" \/><\/a><\/figure><\/div>\n\n\n\n<p><a href=\"https:\/\/www.cell.com\/neuron\/fulltext\/S0896-6273(22)00806-6\">This one comes via the latest issue of <em>Neuron<\/em><\/a> courtesy of an Australian outfit called Cortical Labs, where Brett Kagan and his buddies grew a neuron culture in a dish and taught it\u2014more accurately, spurred it to teach itself\u2014how to play Pong. Some of you started sending me the link a couple of weeks back, and I&#8217;ll admit at first blush I thought the whole thing was a step backward. Neuron cultures playing Pong in 2022? Weren&#8217;t we talking about hooking them up to <a href=\"https:\/\/phys.org\/news\/2008-10-grid-brains.html\">power grids and stock markets<\/a> over ten years ago? 
Weren&#8217;t researchers from the University of Reading using them to <a href=\"https:\/\/phys.org\/news\/2008-08-robot-biological-brain-insights.html\">control<\/a> little <a href=\"https:\/\/www.taylorfrancis.com\/chapters\/edit\/10.1201\/b12690-18\/autonomous-mobile-robot-biological-brain-kevin-warwick-dimitris-xydas-slawomir-nasuto-victor-becerra-mark-hammond-julia-downes-simon-marshall-benjamin-whalley?context=ubx&amp;refId=0340a5a3-b9da-4566-85a8-5f36c827ba80\">robots<\/a> back in 2008? But I followed the links, and the title\u2014&#8220;<strong><em>In vitro<\/em> neurons learn and exhibit sentience when embodied in a simulated game-world<\/strong>&#8221;\u2014caught my eye.<\/p>\n\n\n\n<p>Signs of sentience, you say.<\/p>\n\n\n\n<p>Okay. Show me what you got.<\/p>\n\n\n\n<p>What they&#8217;ve got, as it turns out, is a nifty little proof-of-principle in support of the Free-Energy-Minimization model I was chewing over <a href=\"https:\/\/www.rifters.com\/crawl\/?p=10225\">last April<\/a>. Back then it was Mark Solms, forcing me to rethink my assertion that consciousness could be decoupled from the survival instinct. The essence of Solms&#8217; argument is that feelings are a metric of need, that you don&#8217;t have needs unless you have an agenda (i.e., survival), and that you can&#8217;t <em>feel<\/em> feelings without being subjectively aware of them (i.e., conscious). I wasn&#8217;t fully convinced, but I was shaken free of certain suppositions I&#8217;d encrusted around myself over a couple of decades. If Solms was right, I realized, consciousness wasn&#8217;t independent of survival drives; it was a <em>manifestation<\/em> of them.<\/p>\n\n\n\n<p>But the fundamental point that ties this school of thought together is right there in the name: Free Energy <em>Minimization<\/em>. Self-organizing complex systems are just plain lazy, according to FEM. They aspire to low-energy states. 
If they&#8217;re the kind of system that acts in response to input from an external environment, the way to keep things chill is to keep them predictable: know exactly what&#8217;s coming, know exactly how to react, live on autopilot. Surprise is anathema, surprise means the environment isn&#8217;t doing what you expect. You have two choices when that happens: either rejig your predictive model to conform with the new observed reality, or <em>act<\/em> on that observed reality in a way that brings it more into line with your predictions. If you&#8217;re a weather simulation you might update your correlations relating barometric pressure and precipitation. If you&#8217;re an earthworm you might move away from an unpleasant stimulus that&#8217;s pushing you out of homeostasis.<\/p>\n\n\n\n<p>In each case you want to minimize energy costs, and that means minimizing the difference between what you expect to happen and what actually does. Consciousness exists in the space between those two things. The wider the difference\u2014the greater the surprise\u2014the more &#8220;conscious&#8221; the response. Conversely: the more accurate your model, the lower the informational entropy and the less awake you are<sup><sup><a id=\"post-10307-footnote-ref-1\" href=\"#post-10307-footnote-1\">[1]<\/a><\/sup><\/sup>.<\/p>\n\n\n\n<p>And right there: that&#8217;s your motivation. You don&#8217;t need pain or pleasure centers. You don&#8217;t need to program specific imperatives or win states. The tendency to &#8220;low informational entropy&#8221; is an intrinsic part of the system. 
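If you want that two-option loop in concrete form, here is a toy sketch in Python. It is strictly my own caricature, nothing from the Kagan paper and nothing like Friston's actual math; every name and number is invented for illustration. The agent either rejigs its model toward what it sensed, or acts to drag the world toward what it predicted. Surprise shrinks either way.

```python
# Toy caricature of free-energy minimization (invented for illustration;
# not Friston's formalism). On each step the agent either updates its
# prediction toward the observed sensation, or pushes the sensation
# toward its prediction. Both moves reduce surprise.

class ToyFEMAgent:
    def __init__(self, prediction=0.0, lr=0.5):
        self.prediction = prediction  # internal model: expected sensation
        self.lr = lr                  # how fast the model rejigs itself

    def step(self, sensation, can_act):
        surprise = abs(sensation - self.prediction)
        if can_act:
            # Act on the world: push the sensation toward the prediction.
            action = self.prediction - sensation
        else:
            # Perceive: rejig the model toward observed reality.
            action = 0.0
            self.prediction += self.lr * (sensation - self.prediction)
        return surprise, action

agent = ToyFEMAgent()
world = 4.0        # the environment's current "sensation" value
history = []
for t in range(20):
    surprise, action = agent.step(world, can_act=(t % 2 == 1))
    world += 0.5 * action   # the world yields partway to the agent's push
    history.append(surprise)

assert history[-1] < history[0]  # surprise shrinks, whichever move it makes
```

With these made-up numbers, either branch halves the prediction error each step, which is the whole pitch: predictability as its own reward, no pain or pleasure circuitry required.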
That&#8217;s what the FEM lobby claims.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignleft size-large is-resized\"><a href=\"https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/dishbrain.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/dishbrain-753x1024.jpg\" alt=\"\" class=\"wp-image-10312\" width=\"272\" height=\"369\" srcset=\"https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/dishbrain-753x1024.jpg 753w, https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/dishbrain-221x300.jpg 221w, https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/dishbrain-768x1044.jpg 768w, https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/dishbrain-1129x1536.jpg 1129w, https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/dishbrain.jpg 1200w\" sizes=\"auto, (max-width: 272px) 100vw, 272px\" \/><\/a><\/figure><\/div>\n\n\n\n<p>Which means that if you&#8217;re a neuron culture spread across an electrode array like peanut butter on toast, and some of those electrodes feed you information about whether a ball is hitting a paddle, while others take instructions <em>from<\/em> you about how to move that paddle\u2014and if you receive a predictable signal when the ball hits the paddle, but a burst of random static when it misses\u2014you will learn, on your own, to minimize the frequency of those bursts of surprise. You will learn to play Pong.<\/p>\n\n\n\n<p>That&#8217;s what Kagan <em>et al<\/em> did. They let &#8220;Dishbrain&#8221; grow over an array of electrodes: some arbitrarily defined as motor nerves, others as sensory nerves. The sensories sent signals corresponding to paddle\/ball dynamics; the motors received commands on paddle motion. Kagan <em>et al<\/em> set Pong in motion, shocked or stroked Dishbrain as appropriate, and waited.<\/p>\n\n\n\n<p>Dishbrain figured it out in five minutes.<\/p>\n\n\n\n<p>Well, sort of. 
The paddle intercepted the ball significantly more often than random chance would predict. That&#8217;s not to say it ever became a black belt at Pong: there&#8217;s a lot of daylight between <em>better than random<\/em> and <em>world champion<\/em>, and in some cases Dishbrain performed barely better (occasionally even worse) than one or another of the control cultures. I&#8217;m guessing the impressive videos presented in the paper, while described as &#8220;representational&#8221;, were cherry-picked.<\/p>\n\n\n\n<p>Still. Dishbrain learned to hit the ball, in some cases apparently anticipating where it would arrive before it even hit the backboard. A puddle of neurons, finding itself in a virtual game environment with unknown rules\u2014without even so much as the &#8220;maximize score&#8221; instruction granted to that Starcraft-beating, headline-grabbing <a href=\"https:\/\/www.nature.com\/articles\/d41586-019-03343-4\">DeepMind<\/a> you&#8217;ve read about\u2014organized itself on the fly to act in a way that minimized unpredictable input. Score one for FEM.<\/p>\n\n\n\n<p>As for the claim that Dishbrain is &#8220;sentient&#8221;: predictably, it&#8217;s proven controversial. Kagan <em>et al<\/em> use a particularly narrow definition of the term\u2014&#8220;responsive to sensory impressions through adaptive internal processes&#8221;\u2014attributed to Karl Friston (a high priest of FEM, who also happens to be one of the paper&#8217;s authors). Others have bristled at the term, since it generally connotes subjective experience. Dean Burnett out of Cardiff Psychology School isn&#8217;t willing to go beyond &#8220;<a href=\"https:\/\/www.bbc.com\/news\/science-environment-63195653\">thinking system<\/a>&#8221;. Even Kagan admits that Dishbrain shows <a href=\"https:\/\/www.nature.com\/articles\/d41586-022-03229-y\">no signs of consciousness<\/a>.<\/p>\n\n\n\n<p>Personally, I think they&#8217;re playing it a bit too safe. 
Sure there&#8217;s no hard evidence that Dishbrain is &#8220;awake&#8221; in the commonly understood sense\u2014but then again there&#8217;s no hard evidence that <em>you&#8217;re<\/em> awake, and not just a sophisticated iteration of Google&#8217;s LaMDA in a meat chassis. A few years back PNAS published <a href=\"https:\/\/rifters.com\/real\/articles\/What-insects-can-tell-us-about-the-origins-of-consciousness-PNAS-2016-Barron-4900-8.pdf\">a paper<\/a> that made a pretty good case for insect consciousness: insects might not have the specific brain structures associated with consciousness in us mammals, it said, but they have structures that perform analogous functions. They acquire information from their environment; they monitor their own internal states; they integrate those two data sets into a unified model that generates behavioral responses. Many argue that it&#8217;s that integration that results in subjective experience. Vertebrates, cephalopods, and insects are all built to do that in their own way, so it stands to reason they&#8217;re all phenomenally conscious (unlike, say, nematodes). Dishbrain also embodies those three components\u2014in a rudimentary form, certainly, but perhaps not that much simpler than you&#8217;d find in a bristletail. Who&#8217;s to say that it <em>isn&#8217;t<\/em> conscious, even in the wider qualia-based sense?<\/p>\n\n\n\n<p>There&#8217;s excitement in these findings. The idea that all self-organizing networks have at least one &#8220;motive&#8221; baked in is a revelation (to me, anyway). But that&#8217;s the beginning of inspiration, not the end. What do you <em>do<\/em> with that insight, as a science fiction writer? What are the consequences that can be explored narratively?<\/p>\n\n\n\n<p>How can all of this go <em>wrong<\/em>?<\/p>\n\n\n\n<p>Well, here&#8217;s something: all motives are not created equal. The universal motive accruing to self-organizing systems is Predictability, not Survival or Reproduction. 
So take Dishbrain. Take a head cheese. Put it in an environment that generates predictable feedback not when it paddles a ball but when it throttles its nutrient supply, or otherwise degrades its own integrity. Reward it for self-harm; watch it commit suicide.<\/p>\n\n\n\n<p>What might that look like if you scaled it up to a human brain?<\/p>\n\n\n\n<p>We may not even have to speculate. There are people out there who serve, if not as real-world examples, at least as real-world analogs. It&#8217;s not a perfect point-for-point mapping: we have <em>actual<\/em> brains, and that means brain stems and amygdalas and evolved agendas that <em>do<\/em> tie directly to survival and reproduction.<\/p>\n\n\n\n<p>And yet people exist who offer a glimpse of how such a hack might manifest, people who <em>do<\/em> seek to <a href=\"https:\/\/www.theguardian.com\/science\/2022\/oct\/02\/is-the-body-key-to-understanding-consciousness\">physically compromise themselves<\/a>. They experience the unshakable conviction that one of their body parts doesn&#8217;t belong to them, that it is alien, that it needs to be <em>removed<\/em>. They&#8217;ve been known to try sawing off the offending limb, or blowing it off with a shotgun. One soul tried for decades to damage his leg enough to force an amputation, finally succeeding after immersing it in dry ice.<\/p>\n\n\n\n<p>And when these people succeed in losing the offending arm or leg\u2014they feel happy. They feel that they have finally become who they truly are. They feel <a href=\"https:\/\/www.youtube.com\/watch?v=RnWL9jUgOJk\"><em>whole<\/em><\/a>.<\/p>\n\n\n\n<p>The clinical term is <a href=\"https:\/\/en.wikipedia.org\/wiki\/Body_integrity_dysphoria\">Body Integrity Dysphoria<\/a><sup><sup><a id=\"post-10307-footnote-ref-2\" href=\"#post-10307-footnote-2\">[2]<\/a><\/sup><\/sup>, and while it&#8217;s extremely rare, researchers have been able to tag certain tentative neurological correlates. 
Diminished skin conductance response distal to the desired amputation point, for one thing. Reduced gray matter in the superior parietal lobule, for another.<\/p>\n\n\n\n<p>Interesting thing about the superior parietal lobule: it&#8217;s <a href=\"https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC7425349\/\">part of the somatosensory cortex<\/a>.<\/p>\n\n\n\n<p>You remember the somatosensory cortex, aka the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Cortical_homunculus\"><em>Penfield homunculus<\/em><\/a>: that strip of brain that maps the body in terms of sensory and motor processing. And you know about <a href=\"https:\/\/en.wikipedia.org\/wiki\/Phantom_limb\">Phantom Limbs<\/a>, which result from the fact that even after a limb has been amputated, the corresponding part of the homunculus\u2014the map of that particular somatic territory\u2014persists in the brain. As long as that part of the switchboard is still firing, a person will feel sensation from the corresponding limb regardless of whether it&#8217;s still attached. (It has also been commonly observed that the parts of the strip that map feet and genitals butt up against each other; leakage between the two might explain why foot fetishes are the most common sexual kink. 
But I digress.)<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><a href=\"https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/Sensory_Homunculus.png\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"560\" src=\"https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/Sensory_Homunculus-1024x560.png\" alt=\"\" class=\"wp-image-10313\" srcset=\"https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/Sensory_Homunculus-1024x560.png 1024w, https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/Sensory_Homunculus-300x164.png 300w, https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/Sensory_Homunculus-768x420.png 768w, https:\/\/www.rifters.com\/crawl\/wp-content\/uploads\/2022\/11\/Sensory_Homunculus.png 1178w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><figcaption>This is your body on Brain. Any questions?<\/figcaption><\/figure>\n\n\n\n<p>To me at least, BID seems like nothing so much as a bizarro Phantom Limb Syndrome, where the brain\u2014instead of registering that something&#8217;s there when it isn&#8217;t\u2014somehow registers that something <em>isn&#8217;t<\/em> there (or at least, shouldn&#8217;t be) when it is. And if the one syndrome results from a piece of the map being there when it shouldn&#8217;t be, maybe the other results from a part of the map being missing (reduced gray matter, remember?) when it should be present. In both cases, the internal model of the self-organizing system is at odds with incoming data. In BID the system takes action to reduce that dissonance. To eliminate the surprise. To minimize informational entropy.<\/p>\n\n\n\n<p>Move the paddle; remove the limb. Maybe the same thing, when you get right down to it.<\/p>\n\n\n\n<p>All speculation, of course. That&#8217;s what we do here. Then again, with fewer than 500 cases on record, hard data are not exactly thick upon the ground. 
And the limited MRI results we do have seem consistent with Penfield&#8217;s involvement.<\/p>\n\n\n\n<p>What might a suicidal Dishbrain look like if you scaled it up onto Human architecture? Well, we&#8217;ve been playing around with ways to program neurons for decades now. Transcranial magnetic stimulation. Compressed ultrasound. Sony even <a href=\"https:\/\/arstechnica.com\/uncategorized\/2005\/04\/4785-2\/\">filed a patent<\/a> a couple of decades back; who knows how far they&#8217;ve come since then?<\/p>\n\n\n\n<p>Say they pull it off, learn how to tweak the somatosensory cortex. (That&#8217;s what they were aiming for: a way to plant input directly into the brain, without having to go through those messy eyes and ears and noses. They were pitching it as the ultimate game interface.) Imagine, twenty minutes in the future, we all have Sony headsets talking to our homunculi. Imagine someone hacks them to erase certain parts of that map, induce Body Integrity Dysphoria on command.<\/p>\n\n\n\n<p>The thing is, arms and legs are not the only limbs we have. The neck is a limb too.<\/p>\n\n\n\n<p>I wonder how comprehensive Sony&#8217;s liability insurance might be.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p><\/p>\n\n\n\n<ol class=\"wp-block-list\"><li id=\"post-10307-footnote-1\">\n<p>This relationship, by the way, inspired me to write a story about hive minds and all the catastrophic things that are bound to go wrong if Neuralink works the way Elon Musk wants it to. I&#8217;m told that story has caught the eye of one of Neuralink&#8217;s cofounders, which is especially remarkable given that it hasn&#8217;t been published yet.<a href=\"#post-10307-footnote-ref-1\">\u2191<\/a><\/p>\n<\/li><li id=\"post-10307-footnote-2\">\n<p>If you&#8217;re feeling a sense of deja vu here: yeah, the parallels to gender dysphoria and transitioning are pretty striking. The experts opine that BID is a related syndrome, but nonetheless a distinct one. 
<a href=\"#post-10307-footnote-ref-2\">\u2191<\/a><\/p>\n<\/li><\/ol>\n","protected":false},"excerpt":{"rendered":"<p>Achilles Desjardins had always found smart gels a bit creepy. People thought of them as brains in boxes, but they weren&#8217;t. They didn&#8217;t have the parts. Forget about the neocortex or the cerebellum\u2014these things had nothing. No hypothalamus, no pineal gland, no sheathing of mammal over reptile over fish. No instincts. No desires. Just a [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[10,33],"tags":[],"class_list":["post-10307","post","type-post","status-publish","format-standard","hentry","category-neuro","category-sentiencecognition"],"_links":{"self":[{"href":"https:\/\/www.rifters.com\/crawl\/index.php?rest_route=\/wp\/v2\/posts\/10307","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.rifters.com\/crawl\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.rifters.com\/crawl\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.rifters.com\/crawl\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.rifters.com\/crawl\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=10307"}],"version-history":[{"count":41,"href":"https:\/\/www.rifters.com\/crawl\/index.php?rest_route=\/wp\/v2\/posts\/10307\/revisions"}],"predecessor-version":[{"id":10358,"href":"https:\/\/www.rifters.com\/crawl\/index.php?rest_route=\/wp\/v2\/posts\/10307\/revisions\/10358"}],"wp:attachment":[{"href":"https:\/\/www.rifters.com\/crawl\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=10307"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.rifters.com\/crawl\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=10307"},{"taxonomy":"post_tag","embeddable":true,"href":"https:
\/\/www.rifters.com\/crawl\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=10307"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}