No Brainer.

For decades now, I have been haunted by the grainy, black-and-white x-ray of a human skull.

It is alive but empty, with a cavernous fluid-filled space where the brain should be. A thin layer of brain tissue lines that cavity like an amniotic sac. The image hails from a 1980 review article in Science: Roger Lewin, the author, reports that the patient in question had “virtually no brain”. But that’s not what scared me; hydrocephalus is nothing new, and it takes more to creep out this ex-biologist than a picture of Ventricles Gone Wild.

The stuff of nightmares. (From de Oliveira et al 2012)

What scared me was the fact that this virtually brain-free patient had an IQ of 126.

He had a first-class honors degree in mathematics. He presented normally along all social and cognitive axes. He didn’t even realize there was anything wrong with him until he went to the doctor for some unrelated malady, only to be referred to a specialist because his head seemed a bit too large.

It happens occasionally. Someone grows up to become a construction worker or a schoolteacher, before learning that they should have been a rutabaga instead. Lewin’s paper reports that one out of ten hydrocephalus cases are so extreme that cerebrospinal fluid fills 95% of the cranium. Anyone whose brain fits into the remaining 5% should be nothing short of vegetative; yet apparently, fully half have IQs over 100. (Why, here’s another example from 2007; and yet another.) Let’s call them VNBs, or “Virtual No-Brainers”.

The paper is titled “Is Your Brain Really Necessary?”, and it seems to contradict pretty much everything we think we know about neurobiology. This Forsdyke guy over in Biological Theory argues that such cases open the possibility that the brain might utilize some kind of extracorporeal storage, which sounds awfully woo both to me and to the anonymous blogger Neuroskeptic; but even Neuroskeptic, while dismissing Forsdyke’s wilder speculations, doesn’t really argue with the neurological facts on the ground. (I myself haven’t yet had a chance to do more than glance at the Forsdyke paper, which might warrant its own post if it turns out to be sufficiently substantive. If not, I’ll probably just pretend it is and incorporate it into Omniscience.)

On a somewhat less peer-reviewed note, VNBs also get routinely trotted out by religious nut jobs who cite them as evidence that a God-given soul must be doing all those things the uppity scientists keep attributing to the brain. Every now and then I see them linking to an off-hand reference I made way back in 2007 (apparently it’s the only place to find Lewin’s paper online without hitting a paywall) and I roll my eyes.

And yet, 126 IQ. Virtually no brain. In my darkest moments of doubt, I wondered if they might be right.

So on and off for the past twenty years, I’ve lain awake at night wondering how a brain the size of a poodle’s could kick my ass at advanced mathematics. I’ve wondered if these miracle freaks might actually have the same brain mass as the rest of us, but squeezed into a smaller, high-density volume by the pressure of all that cerebrospinal fluid (apparently the answer is: no). While I was writing Blindsight— having learned that cortical modules in the brains of autistic savants are relatively underconnected, forcing each to become more efficient— I wondered if some kind of network-isolation effect might be in play.

Now, it turns out the answer to that is: Maybe.

Three decades after Lewin’s paper, we have “Revisiting hydrocephalus as a model to study brain resilience” by de Oliveira et al. (actually published in 2012, although I didn’t read it until last spring). It’s a “Mini Review Article”: only four pages, no new methodologies or original findings— just a bit of background, a hypothesis, a brief “Discussion” and a conclusion calling for further research. In fact, it’s not so much a review as a challenge to the neuro community to get off its ass and study this fascinating phenomenon— so that soon, hopefully, there’ll be enough new research out there to warrant a real review.

The authors advocate research into “Computational models such as the small-world and scale-free network”— networks whose nodes are clustered into highly-interconnected “cliques”, while the cliques themselves are more sparsely connected one to another. De Oliveira et al suggest that they hold the secret to the resilience of the hydrocephalic brain. Such networks result in “higher dynamical complexity, lower wiring costs, and resilience to tissue insults.” This also seems reminiscent of those isolated hyper-efficient modules of autistic savants, which is unlikely to be a coincidence: networks from social to genetic to neural have all been described as “small-world”. (You might wonder— as I did— why de Oliveira et al. would credit such networks for the normal intelligence of some hydrocephalics when the same configuration is presumably ubiquitous in vegetative and normal brains as well. I can only assume they meant to suggest that small-world networking is especially well-developed among high-functioning hydrocephalics.) (In all honesty, it’s not the best-written paper I’ve ever read. Which seems to be kind of a trend on the ‘crawl lately.)
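If you want to see the effect for yourself, the small-world trick is easy to demo with nothing but Python’s standard library. Below is a toy sketch (mine, no relation to the paper’s methods; the node count, neighbourhood size, and rewiring probability are arbitrary) of the classic Watts–Strogatz construction: start with a ring lattice, randomly rewire a few edges, and watch the average path length collapse while local clustering stays high.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Regular ring: each node linked to its k nearest neighbours."""
    return {v: {(v + d) % n for d in range(-k // 2, k // 2 + 1) if d != 0}
            for v in range(n)}

def rewire(adj, p, rng):
    """Watts-Strogatz-style rewiring: each edge moved, with probability p,
    to a random non-neighbour. A few shortcuts shrink the whole graph."""
    n = len(adj)
    for v in range(n):
        for w in sorted(adj[v]):
            if w > v and rng.random() < p:
                u = rng.randrange(n)
                while u == v or u in adj[v]:
                    u = rng.randrange(n)
                adj[v].discard(w); adj[w].discard(v)
                adj[v].add(u); adj[u].add(v)
    return adj

def clustering(adj):
    """Mean fraction of each node's neighbours that also link to each other."""
    total = 0.0
    for v, nb in adj.items():
        nb = sorted(nb)
        if len(nb) < 2:
            continue
        links = sum(1 for i in range(len(nb)) for j in range(i + 1, len(nb))
                    if nb[j] in adj[nb[i]])
        total += 2.0 * links / (len(nb) * (len(nb) - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs (BFS from each node)."""
    n = len(adj)
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values())
    return total / (n * (n - 1))

rng = random.Random(1)
regular = ring_lattice(200, 6)
small_world = rewire(ring_lattice(200, 6), 0.05, rng)
print(clustering(regular), avg_path_length(regular))          # tight cliques, long paths
print(clustering(small_world), avg_path_length(small_world))  # cliques survive, paths shrink
```

That combination, short global paths plus tight local cliques, is the “small-world” signature de Oliveira et al are invoking.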

The point, though, is that under the right conditions, brain damage may paradoxically result in brain enhancement. Small-world, scale-free networking— focused, intensified, overclocked— might turbocharge a fragment of a brain into acting like the whole thing.

Can you imagine what would happen if we applied that trick to a normal brain?

If you’ve read Echopraxia, you’ll remember the Bicameral Order: the way they used tailored cancer genes to build extra connections in their brains, the way they linked whole brains together into a hive mind that could rewrite the laws of physics in an afternoon. It was mostly bullshit, of course: neurological speculation, stretched eight unpredictable decades into the future for the sake of a story.

But maybe the reality is simpler than the fiction. Maybe you don’t have to tweak genes or interface brains with computers to make the next great leap in cognitive evolution. Right now, right here in the real world, the cognitive function of brain tissue can be boosted— without engineering, without augmentation— by literal orders of magnitude. All it takes, apparently, is the right kind of stress. And if the neuroscience community heeds de Oliveira et al‘s clarion call, we may soon know how to apply that stress to order. The singularity might be a lot closer than we think.

Also a lot squishier.

Wouldn’t it be awesome if things turned out to be that easy?

Dr. Fox and the Borg Collective

Take someone’s EEG as they squint really hard and think Hello. Email that brainwave off to a machine that’s been programmed to respond to it by tickling someone else’s brain with a flicker of blue light. Call the papers. Tell them you’ve invented telepathy.

I mean, seriously: aren’t you getting tired of these guys?

Or: teach one rat to press a lever when she feels a certain itch. Outfit another with a sensor that pings when the visual cortex sparks a certain way. Wire them together so the sensor in one provokes the itch in the other: one rat sees the stimulus and the other presses the lever. Let Science Daily tell everyone that you’ve built the Borg Collective.

There’s been a lot of loose talk lately about hive minds. Most of it doesn’t live up to the hype. I got so irked by all that hyperbole— usually accompanied by a still from “The Matrix”, or a picture of Spock in the throes of a mind meld— that I spent a good chunk of my recent Aeon piece bitching about it. Most of these “breakthroughs”, I grumbled, couldn’t be properly described as hive consciousness or even garden-variety telepathy. I described it as the difference between experiencing an orgasm and watching a signal light on a distant hill spell out oh-god-oh-god-yes in Morse Code.

I had to allow, though, that it might be only a matter of time before you could scrape the hype off one of those stories and find some actual substance beneath. In fact, the bulk of my Aeon essay dealt with the implications of the day when all those headlines came true for real.

I think we might have just hit a milestone.


Here’s something else to try. Teach a bunch of thirsty rats to distinguish between two different sounds; motivate them with sips of water, which they don’t get unless they push the round lever when they hear “Sound 0” and the square one when they hear “Sound 1”.

Once they’ve learned to tell those sounds apart, turn them into living logic gates. Put ‘em in a daisy-chain, for example, and make them play “Broken Telephone”: each rat has to figure out whether the input is 0 or 1 and pass that answer on to the next in line. Or stick ‘em in parallel, give them each a sound to parse, let the next layer of rats figure out a mean response. Simple operant conditioning, right? The kind of stuff that was old before most of us were born.

Now move the stimulus inside. Plant it directly into the somatosensory cortex via a microelectrode array (ICMS, for “IntraCortical MicroStimulation”). And instead of making the rats press levers, internalize that too: another array on the opposite side of the cortex, to transmit whatever neural activity it reads there.

Call it “brainet”. Pais-Vieira et al do.

The paper is “Building an organic computing device with multiple interconnected brains”, from the same folks who brought you Overhyped Rat Mind Meld and Monkey Videogame Hive. In addition to glowing reviews from the usual suspects, it has won over skeptics who’ve decried the hype associated with this sort of research in the past. It’s a tale of four rat brains wired together, doing stuff, and doing it better than singleton brains faced with the same tasks. (“Split-brain patients outperform normal folks on visual-search and pattern-recognition tasks,” I reminded you all back at Aeon; “two minds are better than one, even when they’re in the same head”). And the payoff is spelled out right there in the text: “A new type of computing device: an organic computer… could potentially exceed the performance of individual brains, due to a distributed and parallel computing architecture”.

Bicameral Order, anyone? Moksha Mind? How could I not love such a paper?

And yet I don’t. I like it well enough. It’s a solid contribution, a real advance, not nearly so guilty of perjury as some.

And yet I’m not sure I entirely trust it.

I can’t shake the sense it’s running some kind of con.


The real thing. Sort of. (From Pais-Vieira et al 2015.)

There’s much to praise. We’re talking about an actual network, multiple brains in real two-way communication, however rudimentary. That alone makes it a bigger deal than those candy-ass one-direction set-ups that usually get the kids in such a lather.

In fact, I’m still kind of surprised that the damn thing even works. You wouldn’t think that pin-cushioning a live brain with a grid of needles would accomplish much. How precisely could such a crude interface ever interact with all those billions of synapses, configured just so to work the way they do? We haven’t even figured out how brains balance their books in one skull; how much greater the insight, how many more years of research before we learn how to meld multiple minds, a state for which there’s no precedent in the history of life itself?

But it turns out to be way easier than it looks. Hook a blind rat up to a geomagnetic sensor with a simple pair of electrodes, and he’ll be able to navigate a maze— using ambient magnetic fields— as well as any sighted sibling. Splice the code for the right kind of opsin into a mouse genome and the little rodent will be able to perceive colors she never knew before. These are abilities unprecedented in the history of the clade— and yet somehow, brains figure out the user manuals on the fly. Borg Collectives may be simpler than we ever imagined: just plug one end of the wire into Brain A, the other into Brain B, and trust a hundred billion neurons to figure out the protocols on their own.

Which makes it a bit of a letdown, perhaps, when every experiment Pais-Vieira et al describe comes down, in the end, to the same simple choice between 0 and 1. Take the very climax of their paper, a combination of “discrete tactile stimulus classification, BtB interface, and tactile memory storage” bent to the real-world goal of weather prediction. Don’t get too excited— it was, they admit up front, a very simple exercise. No cloud cover, no POP, just an educated guess at whether the chance of rain is going up or down at any given time.

Hey, can’t be any worse than the weather person on CBC’s morning show…

The front-end work was done by two pairs of rats wired into “dyads”; one dyad was told whether temperature was increasing (0) or decreasing (1), while the other was told the same about barometric pressure. If all went well, each simply spat out the same value that had been fed into it; they were then reintegrated into the full-scale 4-node brainet, which combined those previous outputs to decide whether the chance of precip was rising or falling. It was exactly the same kind of calculation, using exactly the same input, that showed up in other tasks from the same paper; the main difference was that this time around, the signals were labeled “temperature rising” or “temperature falling” instead of 0 and 1. No matter. It all still came down to another encore performance of Brainet’s big hit single, “Torn Between Two Signals”, although admittedly they played both acoustic and electric versions in the same set.

I’m aware of the obvious paradox in my attitude, by the way. On the one hand I can’t believe that such simple technology could work at all when interfaced with living brains; on the other hand I’m disappointed that it doesn’t do more.

I wonder how brainet would resolve those signals.


Of course, Pais-Vieira et al did more than paint weather icons on old variables. They ran brainet through other paces— that “broken telephone” variant I mentioned, for example, in which each node in turn had to pass on the signal it had received until that signal ended up back at the first rat in the chain— who (if the run was successful) identified the serially-massaged signal as the same one it had started out with. In practice, this worked 35% of the time, a significantly higher success rate than the 6.25%— four iterations, 50:50 odds at each step— you’d expect from random chance. (Of course, the odds of simply getting the correct final answer were 50:50 regardless of how long the chain was; there were only two states to choose from. Pais-Vieira et al must have tallied up correct answers at each intermediate step when deriving their stats, because it would be really dumb not to; but I had to take a couple of passes at those paragraphs, because at least one sentence—

“the memory of a tactile stimulus could only be recovered if the individual BtB communication links worked correctly in all four consecutive trials.”

— was simply wrong. Whatever the merits of this paper, let’s just say that “clarity” doesn’t make the top ten.)
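For the record, the arithmetic checks out, and it’s easy to play with. A throwaway Python sketch (the ~77% per-link figure below is just my back-calculation from the reported 35%, not a number the authors quote):

```python
import random

def chain_success(p_step, n_links, trials=200_000, seed=1):
    """Monte Carlo: how often a bit survives an n-link daisy-chain, assuming
    each link independently relays it correctly with probability p_step."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        if all(rng.random() < p_step for _ in range(n_links)):
            ok += 1
    return ok / trials

# Four links of pure guessing: the paper's chance baseline.
print(0.5 ** 4)                            # 0.0625
# A 35% end-to-end rate implies each link decoded its bit correctly
# about 77% of the time, since 0.35 ** (1/4) ≈ 0.769.
print(round(0.35 ** 0.25, 3))              # 0.769
print(round(chain_success(0.769, 4), 3))   # close to 0.35
```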

What the rats saw. Ibid.

More nodes, better results. Ibid.

The researchers also used brainet to transmit simple images— again, with significant-albeit-non-mind-blowing results— and convincingly showed that general performance improved with a greater number of brains in the net. On the one hand I wonder if this differs in any important way from simply polling a group of people with a true-false question and going with the majority response; wouldn’t that also tend towards greater accuracy with larger groups, simply because you’re drawing on a greater pool of experience? Is every Gallup focus group a hive mind?

On the other hand, maybe the answer is: yes, in a way. Conventional neurological wisdom describes even a single brain as a parliament of interacting modules. Maybe group surveys are exactly the way hive minds work.
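The polling hunch has a name, as it happens: Condorcet’s jury theorem, which says that if each independent voter does even slightly better than a coin flip, majority-vote accuracy climbs toward certainty as the group grows. A quick sketch (the 60% per-voter accuracy is an arbitrary illustration, not a number measured from any rat):

```python
from math import comb

def majority_accuracy(p, n):
    """Probability that a majority of n independent voters (n odd),
    each correct with probability p, gets a binary question right."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# Each voter is right 60% of the time; watch the majority pull ahead:
for n in (1, 3, 15, 101):
    print(n, round(majority_accuracy(0.6, n), 3))
```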


So you cut them some slack. You look past the problematic statements because you can figure out what they were trying to say even if they didn’t say it very well. But the deeper you go, the harder it gets. We’re told, for example, that Rat 1 has successfully identified the signal she got from Rat 4— but how do we know that? Rat 4, after all, was only repeating a signal that originated with Rat 1 in the first place (albeit one relayed through two other rats). When R1’s brain says “0”, is it parsing the new input or remembering the old?

Sometimes the input array is used as a simple starting gun, a kick in the sulcus to tell the rats Ready, set, Go: sync up! Apparently the rat brains all light up the same way when that happens, which Pais-Vieira et al interpret as synchronization of neural states via Brain-to-Brain interface. Maybe they’re right. Then again, maybe rat brains just happen to light up that way when spiked with an electric charge. Maybe they were no more “interfaced” than four flowers, kilometers apart, who simultaneously turn their faces toward the same sun.

Ah, but synchronization improved over time, we’re told. Yes, and the rats could see each other through the plexiglass, could watch their fellows indulge in the “whisking and licking” behaviors that resulted from the stimulus. (I’m assuming here that “whisking” behavior has to do with whiskers and not the making of omelets, which would be a truly impressive demonstration of hive-mind capabilities.) Perhaps the interface, such as it was, was not through the brainet at all— but through the eyes.

I’m willing to forgive a lot of this stuff, partly because further experimentation resolves some of the ambiguity. (In one case, for example, the rats were rewarded only if their neural activity desynchronised, which is not something they’d be able to do without some sense of the thing they were supposed to be diverging from.) Still, the writing— and by extension, the logic behind it— seems a lot fuzzier than it should be. The authors apparently recognize this when they frankly admit

“One could argue that the Brainet operations demonstrated here could result from local responses of S1 neurons to ICMS.”

They then list six reasons to believe otherwise, only one of which cuts much ice with me (untrained rats didn’t outperform random chance when decoding input). The others— that performance improved during training, that anesthetized or inattentive animals didn’t outperform chance, that performance degraded with reduced trial time or a lack of reward— suggest, to me, only that performance was conscious and deliberate, not that it was “nonlocal”.

Perhaps I’m just not properly grasping the nuances of the work— but at least some of that blame has to be laid on the way the paper itself is written. It’s not that the writing is bad, necessarily; it’s actually worse than that. The writing is confusing— and sometimes it seems deliberately so. Take, for example, the following figure:

Alone against the crowd. Ibid.

Four rats, their brains wired together. The red line shows the neural activity of one of those rats; the blue shows mean neural activity of the other three in the network, pooled. Straightforward, right? A figure designed to illustrate how closely the mind of one node syncs up with the rest of the hive.

Of course, a couple of lines weaving around a graph aren’t what you’d call a rigorous metric: at the very least you want a statistical measure of correlation between Hive and Individual, a hard number to hang your analysis on. That’s what R is, that little sub-graph inset upper right: a quantitative measure of how precisely synced those two lines are at any point on the time series.
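In case anyone wants the gist of that inset: R here is presumably something like a Pearson correlation computed over a sliding window of the two firing-rate traces. A bare-bones sketch of the idea, standard library only (my reconstruction, not the authors’ actual analysis code; the toy traces are made up):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def sliding_r(x, y, window):
    """One R value per time point: how tightly the two traces track
    each other inside each window-sized slice."""
    return [pearson_r(x[i:i + window], y[i:i + window])
            for i in range(len(x) - window + 1)]

# Toy traces: one rat's firing rate vs. the pooled rest-of-hive rate.
rat = [0, 1, 3, 2, 5, 4, 6, 8, 7, 9]
hive = [v + 0.5 for v in rat]   # perfectly in sync, just offset
print([round(r, 6) for r in sliding_r(rat, hive, 5)])   # six windows, all r = 1.0
```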

I mean, Jesus, Miguel. What are you afraid of? See how easy it is?

So why is the upper graph barely more than half the width of the lower one?

The whole point of the figure is to illustrate the strength of the correlation at any given time. Why wouldn’t you present everything at a consistent scale, plot R along the same ruler as FR so that anyone who wants to know how tight the correlation is at time T can just see it? Why build a figure that obscures its own content, so that the reader either surrenders or is forced to grab a ruler and back-convert by hand?

What are you guys trying to cover?


Some of you have probably heard of the Dr. Fox Hypothesis. It postulates that “An unintelligible communication from a legitimate source in the recipient’s area of expertise will increase the recipient’s rating of the author’s competence.” More colloquially: Bullshit Baffles Brains.

But note the qualification: “in the recipient’s area of expertise”. We’re not talking about some Ph.D. bullshitting an antivaxxer; we’re talking about an audience of experts being snowed by a guy speaking gibberish in their own field of expertise.

In light of this hypothesis, it shouldn’t surprise you that controlled experiments have shown that wordy, opaque sentences rank more highly in people’s minds than simple, clear ones which convey the same information. Correlational studies report that the more prestigious a scientific journal tends to be, the worse the quality of the writing you’ll find therein. (I read one first-hand account of someone who submitted his first-draft manuscript— which even he described as “turgid and opaque”— to the same journal that had rejected the much-clearer 6th draft of the same paper. It was accepted with minor revisions.)

Pais-Vieira et al appears in Nature’s “Scientific Reports”. You don’t get much more prestigious than that.

So I come away from this paper with mixed feelings. I like what they’ve done— at least, I like what I think they’ve done. From what I can tell the data seem sound, even behind all the handwaving and obfuscation. And yet, this is a paper that acts as though it’s got something to hide, that draws your attention over here so you won’t notice what’s happening over there. It has issues, but none are fatal so far as I can tell. So why the smoke and mirrors? It’s like being told a wonderful secret by a used-car salesman.

These guys really had something to say.

Why didn’t they just fucking say it?




(You better appreciate this post, by the way. Even if it is dry as hell. It took me 19 hours to research and write the damn thing.)

(I ought to put up a paywall.)

Posted in: neuro, relevant tech by Peter Watts

Spock the Impaler: A Belated Retrospective on Vulcan Ethics.

When I first wrote these words, the Internet was alive with the death of Leonard Nimoy. I couldn’t post them here, because Nowa Fantastyka got them first (or at least, an abridged version thereof), and there were exclusivity windows to consider. As I revisit these words, though, Nimoy remains dead, and the implications of his legacy haven’t gone anywhere. So this is still as good a time as any to argue— in English, this time— that any truly ethical society will inevitably endorse the killing of innocent people.

Bear with me.

As you know, Bob, Nimoy’s defining role was that of Star Trek‘s Mr. Spock, the logical Vulcan who would never let emotion interfere with the making of hard choices. This tended to get him into trouble with Leonard McCoy, Trek‘s resident humanist. “If killing five saves ten it’s a bargain,” the doctor sneered once, in the face of Spock’s dispassionate suggestion that hundreds of colonists might have to be sacrificed to prevent the spread of a galaxy-threatening neuroparasite. “Is that your simple logic?”

The logic was simple, and unassailable, but we were obviously supposed to reject it anyway. (Sure enough, that brutal tradeoff had been avoided by the end of the episode[1], in deference to a TV audience with no stomach for downbeat endings.) Apparently, though, it was easier to swallow 16 years later, when The Wrath of Khan rephrased it as “The needs of the many outweigh the needs of the few”. That time it really caught on, went from catch-phrase to cliché in under a week. It’s the second-most-famous Spock quote ever. It’s so comforting, this paean to the Greater Good. Of course, it hardly ever happens— here in the real world, the needs of the few almost universally prevail over those of the many— but who doesn’t at least pay lip-service to the principle?

Most of us, apparently:

“…progress isn’t directly worth the life of a single person. Indirectly, fine. You can be Joseph Stalin as long as you don’t mean to kill anyone. Bomb a dam in a third world shit-hole on which a hundred thousand people depend for water and a thousand kids die of thirst but it wasn’t intentional, right? Phillip Morris killed more people than Mao but they’re still in the Chamber of Commerce. Nobody meant for all those people to die drowning in their own blood and even after the Surgeon General told them the inside scoop, they weren’t sure it caused lung cancer.

“Compare that to the risk calculus in medical research. If I kill one person in ten thousand I’m shut down, even if I’m working on something that will save millions of lives. I can’t kill a hundred people to cure cancer, but a million will die from the disease I could have learned to defeat.”

I’ve stolen this bit of dialog, with permission, from an aspiring novelist who wishes to remain anonymous for the time being. (I occasionally mentor such folks, to supplement my fantastically lucrative gig as a midlist science fiction author.) The character speaking those words is a classic asshole: arrogant, contemptuous of his colleagues, lacking any shred of empathy.

And yet, he has a point.

He’s far from the first person to make it. The idea of the chess sacrifice, the relative value of lives weighed one against another for some greater good, is as old as Humanity itself (even older, given some of the more altruistic examples of kin selection that manifest across the species spectrum). It’s a recurrent theme even in my own fiction: Starfish sacrificed several to save a continent, Maelstrom sacrificed millions to save a world (not very successfully, as it turns out). Critics have referred to the person who made those calls as your typical cold-blooded bureaucrat, but I always regarded her as heroic: willing to make the tough calls, to do what was necessary to save the world (or at least, increase the odds that it could be saved). Willing to put Spock’s aphorism into action when there is no third alternative.

And yet I don’t know if I’ve ever seen The Needs of the Many phrased quite so starkly as in that yet-to-be-published snippet of fiction a few paragraphs back.

Perhaps that’s because it’s not really fiction. Tobacco killed an estimated 100 million throughout the 20th Century, and— while society has been able to rouse itself for the occasional class-action lawsuit— nobody’s ever been charged with Murder by Cigarette, much less convicted. But if your struggle to cure lung cancer involves experiments that you know will prove fatal to some of your subjects, you’re a serial killer. What kind of society demonizes those who’d kill the Few to save the Many, while exempting those who kill the Many for no better reason than a profit margin? Doesn’t Spock’s aphorism demand that people get away with murder, so long as it’s for the greater good?

You’re not buying it, are you? It just seems wrong.

I recently hashed this out with Dave Nickle over beers and bourbons. (Dave is good for hashing things out with; that’s one of the things that make him such an outstanding writer.) He didn’t buy it either, although he struggled to explain why. For one thing, he argued, Big Tobacco isn’t forcing people to put those cancer sticks in their mouths; people choose for themselves to take that risk. But that claim gets a bit iffy when you remember that the industry deliberately tweaked nicotine levels in their product for maximum addictive effect; they did their level best to subvert voluntary choice with irresistible craving.

Okay, Dave argued, how about this: Big Tobacco isn’t trying to kill anyone— they just want to sell cigarettes, and collateral damage is just an unfortunate side effect. “Your researcher, on the other hand, would be gathering a group of people— either forcibly or through deception— and directly administering deadly procedures with the sure knowledge that one or more of those people would die, and their deaths were a necessary part of the research. That’s kind of premeditated, and very direct. It is a more consciously murderous thing to do than is selling tobacco to the ignorant. Hence, we regard it as more monstrous.”

And yet, our researchers aren’t trying to kill people any more than the tobacco industry is; their goal is to cure cancer, even though they recognize the inevitability of collateral damage as— yup, just an unfortunate side effect. To give Dave credit, he recognized this, and characterized his own argument as sophistry— “but it’s the kind of sophistry in which we all engage to get ourselves through the night”. In contrast, the “Josef Mengele stuff— that shit’s alien.”

I think he’s onto something there, with his observation that the medical side of the equation is more “direct”, more “alien”. The subjective strangeness of a thing, the number of steps it takes to get from A to B, are not logically relevant (you end up at B in both cases, after all). But they matter, somehow. Down in the gut, they make all the difference.

I think it all comes down to trolley paradoxes.

You remember those, of course. The classic example involves two scenarios, each involving a runaway trolley headed for a washed-out bridge. In one scenario, its passengers can only be saved by rerouting it to another track—where it will kill an unfortunate lineman. In the other scenario, the passengers can only be saved by pushing a fat person onto the track in front of the oncoming runaway, crushing the person but stopping the train.

Ethically, the scenarios are identical: kill one, save many. But faced with these hypothetical choices, people’s responses are tellingly different. Most say it would be right to reroute the train, but not to push the fat person to their death— which suggests that such “moral” choices reflect little more than squeamishness about getting one’s hands dirty. Reroute the train, yes— so long as I don’t have to be there when it hits someone. Let my product kill millions— but don’t put me in the same room with them when they check out. Let me act, but only if I don’t have to see the consequences of my action.

Morality isn’t ethics, isn’t logic. Morality is cowardice— and while Star Trek can indulge The Needs of the Many with an unending supply of sacrificial red shirts, here in the real world that cowardice reduces Spock’s “axiomatic” wisdom to a meaningless platitude.

The courage of his convictions.

Trolley paradoxes can take many forms (though all tend to return similar results). I’m going to leave you with one of my favorites. A surgeon has five patients, all in dire and immediate need of transplants— and a sixth, an unconnected out-of-towner who’s dropped in unexpectedly with a broken arm and enough healthy compatible organs to save everyone else on the roster.

The needs of the many outweigh the needs of the few. Everyone knows that much. Why, look: Spock’s already started cutting.

What about you?



[1] “Operation: Annihilate!”, by Steven W. Carabatsos. In case you were wondering.

Sweet Justice. (And puppets.)

According to Rule 34, someone is getting off on this.

Today’s opening act is a left-over I forgot to include in that last post: a bit of flesh sculpture I was not allowed to show off in “Pones & Bones” because it would have risked spoiling a yet-to-be-aired episode of “Hannibal”. That episode recently aired, though, so the embargo is lifted. Behold: the hoofed, flayed, and headless wonder that I have christened Hoofnibal, both under construction at Mindwarp workshop (right) and during its formal debut in the episode “Primavera” (below).

I would like to emphasize that there is no CGI in the sequence: Will’s hallucination is a puppet, moving in real time on the set. Let’s hear it for Practical FX.


More to the point, though: Let’s also hear it for The BUG!

A wee bit of background. Early in our courtship, Caitlin Sweet referred to me as “A DOOFUS” (the caps are hers). Stung, I could only reply “That’s Dr. Doofus to you, Unicorn Girl”— which was a not-too-subtle reminder that I write hard-as-nails SF while she writes fluffy rainbow fantasy.

The thing is, though, Caitlin does not write fluffy rainbow fantasy. The only rainbows you’re likely to see in her novels are those that swirl across the oily film on an open sewer. The Pattern Scars begins with its protagonist, a young girl called Nola, going into a trance at the sight of a bloodstain; the next day her mother sells her to the local brothel as a seer. It gets worse from there. (Oh, it seems to get better for a little while. It seems to get suspiciously, unbelievably better, even. But no. Way worse.) I like to think of myself as Captain Stoneface when it comes to my emotional vulnerability to most fiction; I literally teared up at the end of The Pattern Scars.

Caitlin turns tropes inside out. The Pattern Scars, at its heart, is an inversion of the Cassandra myth: instead of a seer whose truthful prophecies are never believed, Caitlin gives us one doomed to prophesy lies which are always accepted as gospel. The Door in the Mountain— part one of a two-parter which concludes with the imminent The Flame in the Maze— retells the Theseus myth through the eyes of an Ariadne who (in a bizarro twist on the sweet hapless innocence of her archetype) is a manipulative sadist driven by rage and jealousy. The supporting cast might best be described as the twisted love-children of Davids Lynch and Cronenberg (Icarus and Daedalus are two personal favorites). Caitlin is way closer to Martin than to Tolkien; the last thing you can call her is “Unicorn Girl”.

Is this not exactly the face that comes to mind when you imagine a female George RR Martin? (Photo: Martin Springett)

Which is, of course, exactly why she enthusiastically embraced the term the moment she saw it (although the official acronym is BUG— Beloved Unicorn Girl— because “UG” lacks the appropriate resonance. Also: Bed BUG).

My point is: Caitlin’s stuff is gritty, gorgeous, and unsentimental. If it contains anything even approaching cliché, you can be assured that that element exists only to be subverted or blown from the water at a later date. She does not do happy endings; the most you’ll get is an ambiguous one.

Did I mention that Erik Mohr’s cover art is also up for an Aurora?

All of which means she’s not the kind of fantasy author the YA market is likely to swoon over. I think we’ve both lost count of the agents and publishers who’ve turned her down with some variant of You’re a brilliant, brilliant writer but your protagonist is so unlikeable: can’t you make her more like Hermione from Harry Potter?

No. No she can’t, you fucking idiots. She does not write to market. She has never once said I’m going to add a perky sidekick so the popcorn set doesn’t get away. All that matters to the BUG, when she’s writing, is whether the story works the way it’s supposed to. Whether it meets her standards.

And so her stuff gets ignored. Teenyboppers who stumble across it in search of the latest medieval fantasy with a plucky female protagonist scratch their heads and leave, their stomachs vaguely unsettled. When critics find it, they rave; but that doesn’t happen nearly as often as it should.

So I am very glad to point out that Caitlin Sweet’s The Door in the Mountain is a finalist for the Sunburst Award, YA category. That category, I think, is misplaced; but the recognition is not. It is, not to put too fine a point on it, About Fucking Time. And I can say this without fear of vote-skewing, because the award is juried.

Yeah, of course I’m biased. Of course she’s my wife. But she wasn’t always.

Why do you think I fell in love with her in the first place?


Posted in: ink on art, writing news by Peter Watts 12 Comments

Space Invaders.

So, a few assorted and domestic pictures with which to see out the week. To your right, as promised a few weeks back, some Rifters-based fan art from “Toa-lagara” over at Deviant Art (and also, now, in the Rifters Gallery, with her permission). Russians do dark art so beautifully. Immediately below, a special edition enhanced appearance of Philippe Jozelon’s evocative Echopraxie cover for Fleuve (my French publishers). Interesting side note: the French edition is dedicated to “MICROBE. Qui m’a sauvé la vie”. I know at least some of you will get the joke.

I remember writing this very scene. (Click to embiggen.)


Now with 100% fewer distracting alphanumerics! (Click to embiggen.)

And finally…

This is pretty much a typical summer evening on the porch of the Magic Bungalow.

This is “Silverpaw”, aka “TP” because he first came to us with what appeared to be toilet paper stuck on his butt. (You can still see a bit of it stuck to his left flank.)

The sock-clad foot is mine.

Silverpaw is without a doubt the most fearless of the bunch. You do not fuck with Silverpaw.

At approximately 21:58 on the evening of June 18, 2015, while we were watching back episodes of “Bob’s Burgers”, Silverpaw figured out how to open the front door.

He made it as far as the Ponearium before we managed to lure him out the back. We locked the doors.

At approximately 22:02, Silverpaw was back inside. (Photo credit: Micropone Rossiter)

This may be our last transmission.


Posted in: art on ink, misc by Peter Watts 19 Comments

Gallo’s Humor.

Ah Jeez, here we go again.

The gun, it smokes.

The weird thing is, I completely see where Irene Gallo was coming from. I sympathize. I know what it’s like to see the assholes piling up outside the gate, to roll your eyes and shake your head at the inanities and the outright lies— even though it’s obvious that rolling your eyes and shaking your head accomplishes nothing, that reasoned argument accomplishes nothing because those guys didn’t arrive at their positions through reason. Hell, I myself— on this very ‘Crawl— have gleefully fantasized about Stephen Harper getting gunned down in the street, about Liz Cheney’s entrails being strung along a barbed-wire fence.

I get it. Sometimes you just blow up. It’s human. It’s natural.

Still. If we always did whatever came naturally, the only reason I wouldn’t have bashed in a few hundred skulls by now would be because someone else would have bashed in mine before I even hit puberty. Humanity comes with all sorts of primal impulses as standard equipment; I imagine many of Gallo’s defenders would not be especially happy if we let all those drives off the leash just because they were “natural”. One of the first things we point to when lauding Human exceptionalism is our ability to restrain our impulses. And if we fail sometimes— as we’re inevitably bound to— at the very least we can try to walk it back afterward.

So I can see myself in Irene Gallo’s shoes. And if I actually found myself there, I like to think I’d say certain things when those whom I’d intemperately described as Nazis or racists raised their hands to claim that they’d fought against Apartheid during their youth in South Africa, or that they were rabbis, or that they’d exchanged actual gunfire with the brownshirts:

“Holy shit,” (I like to think I’d say,) “You’re right. It’s just— I really hate these guys, you know? And the bile’s been building up for a while now, and when I got that question everything just kind of exploded over the keyboard. I think my anger’s justified, but it called for a sniper rifle and I used a sawed-off shotgun. I really stepped over the line. This is me, stepping back, with apologies to those I impugned.”

What I would not have done, when challenged, is post a series of inane cat photos with the caption KITTEH! emblazoned across the top (although granted, Gallo did dial it back to “kitteh?” after a few iterations, when her strategy did not appear to be having the desired effect).

Things kind of went downhill from there. The internet— or at least, this little genre bubble thereof— blew up again, loud enough for the Daily Dot to notice way out in the real world. Tom Doherty stuck a boilerplate disclaimer over at Tor.com and was immediately vilified for being A) a misogynist asshole because he publicly reprimanded Irene Gallo when he should have given her a medal for speaking Truth to Power, and also for being B) a left-wing libtard pussy who gave Irene Gallo a slap on the wrist when she should have been fired outright. Gallo herself issued one of those boilerplate fauxpologies whose lineage hearkens all the way back to the ancestral phrase “mistakes were made”. None of it seemed to help much.

Blowing up is not the only thing that comes naturally to humans. Tribalism is in there too.

Before we go any further, let me just cover my ass with a disclaimer of my own: I am no great supporter of puppies, regardless of temperament. (Any regular on this blog already knows the kind of furry quadrupeds who own my heart.) I understand that of the two breeds under consideration, the Rabids are far more extreme and downright toxic; Theodore Beale, judging by some of his pithier quotes, seems to be Benjanun Sriduangkaew’s bizarro twin, separated at birth. The Sads, in contrast, have enough legitimacy to warrant at least respectful disagreement and engagement from the likes of George Martin and Eric Flint; they have also distanced themselves from their more diseased cousins (although the point that the final Hugo ballot is more representative of the Rabid slate than the Sad one is well-taken). Even so, I don’t find even the Sad Puppies’ arguments especially meritorious.

So let there be no mistake here: I come not to praise Puppies.

I come to bury the rest of you.


As a former marine mammalogist, I feel especially qualified to pass judgment on this meme. Am I the only one who finds it questionable that the heroes of the piece seem to be the Victorian couple who just want to express their bigotry in peace, while the villain is the disenfranchised Otariid who politely challenges their prejudice with a request for evidence?

Eric Flint put forth the most reasonable take I’ve yet seen on why Gallo misstepped. Over on Tor.com and io9, a lot of people don’t buy it. They’ve made a number of arguments and hurled a number of insults, perhaps the dumbest of which was accusing someone of “sea-lioning” after they’d asked a single, on-point question. (The alleged sea-lion also claimed to be a part-time rabbi, so— assuming, as always, that we can take such claims at face value— you can understand how the whole Nazi-sympathizer thing might not go over especially well.) A lot of other claims were made repeatedly, though. Some, in fact, were repeated often enough to warrant their own subtitles:


You Can’t Handle the Truth

Doherty threw Gallo under the bus [get used to that phrase— it shows up 21 times under Doherty’s post alone, which is a bit ironic given the number of people complaining there about the suspicious similarity of the puppy-sympathisers’ talking points]. He handed a victory to the Puppies when he should have backed her up for having the courage to tell the truth— and everyone knows it’s the truth because noun, verb, Vox Day.

Let’s ignore for the moment the hordes of sad-puppy sympathizers who’ve come out of the woodwork claiming to be anti-apartheid activists, Jews, people of color, married to people of color, queer, veterans— and who do not like being stuck on the same planet as Vox Day, much less the same political clade. I suppose you could call bullshit on most of them— this wouldn’t even be a proper internet argument if accusations of misrepresentation and sock-puppetry weren’t part of the background noise. So let’s set those personal testimonials aside for the moment, and consider a different fact:

Back when the Puppies first seized control of the bridge, Entertainment Weekly (and, I’m pretty sure, The Guardian, although I can’t find the pre-edited version online— maybe I’m thinking Salon) published remarks about the Puppies that were actually milder than Gallo’s. Within hours, it had deleted those remarks and published a meek, surprisingly unconditional retraction which described their own coverage as “unfair and inaccurate”. It was, in tone and content, quite similar to Tom Doherty’s more recent remarks on Tor.com.

I don’t know any Puppies. I don’t know if the people speaking out on their behalf are grass-roots or astroturf (although they can’t all be sock puppets— the gender, ethnicity, and partnerships of some of these folks are a matter of public record, and they’re not all straight white dudes). But I can only assume that these retractions occurred as a response to considered legal opinion. And the fact that different corporations caved so completely, printing such similar apologies, suggests to me that Irene Gallo’s “truth” was, at the very least, legally actionable. This is not a characteristic that usually accrues to Truth, outside Spanish Inquisitions.


The “Personal Space” Perspective.

Well, even if Gallo misspoke, she was just expressing a personal opinion on her personal facebook page. Tor had no right to censor what their employees say and do on their own personal time.

There’s gotta be a word for that— you know, for selecting the negative attributes of a few people you hate on a personal level, and projecting those traits onto an entire demographic. I only wish I could remember what it was…

Go check out Irene Gallo’s personal facebook page. Most of the posts there consist of pimpage for Tor artists, cover reveals for upcoming Tor releases, various bits of Tor-related news, and genre award links. Hell, the very post that got her wrist slapped was a promo for Kameron Hurley’s The Geek Feminist Revolution, soon to be available from (you guessed it) Tor: and the heading she chose to capture eyeballs was “Making the Sad Puppies Sadder— proud to have a tiny part of this”.

The time stamp on that post reads Monday, 11 May 2015, 14:14.

I don’t think there’s anything wrong with using your personal facebook page as a delivery platform for employer pimpage. I think people should feel free to blur the line between their personal and professional lives until the two are nigh-on indistinguishable, if they like. But having erased those boundaries, you don’t get to reassert them at your convenience. And if anyone tries to claim, after the fact, that on this one occasion you weren’t really presenting yourself as a corporate spokesperson— especially when said occasion involves an advertisement for a company product, posted during work hours, presumably while sitting at your work desk— the demographic who takes this claim at face value will be either very small, or very stupid.

Evidently it was that second thing.


The Sexism Scenario

Isn’t it curious how Tor never feels the need to do anything when their male authors say more extreme things than Gallo ever did [Scalzi and Wright and Card get cited a lot in this regard, although I saw at least one lost soul wanting to know why Tor wasn’t calling out Vox Day]. Isn’t it telling how that Frenkel guy got away with harassing women for years before Tor cut him loose— but a woman makes one intemperate comment and they throw her under the bus? Misogyny much?

First, can we at least agree that Jim Frenkel’s tenure at Tor would have been over pretty much the moment he went onto facebook to proudly post selfies of his ongoing harassment of women? He lasted as long as he did because he committed his offenses in the shadows, where they could be more safely ignored by Corporate.

Tor is a colony organism; its fitness is defined in terms of profit margin. Like all corporate entities, it’s at least partially sociopathic. Its immune system responds most emphatically to threats that endanger its bottom line— which, almost by definition, means public threats. I think that anyone who regards Doherty’s response as an act of sexism is looking at the world through polarized lenses; to me, this reads above all else like an act of damage control. If Gallo had been male, I believe Tor’s reaction would have been the same.

As for those who somehow seem to think that authors are employees— that Tor’s legal liability extends not just to what Irene Gallo posts from her office computer during work hours, but to everything posted by anyone Tor has ever published— all I can say is, you’ve been seriously misinformed about the nature of the sacred bond between author and publisher. (Or maybe I have— maybe I should be complaining about Tor’s failure to provide me with health insurance and a regular paycheck.)

At the very least, you should have boycotted those guys the moment they started publishing Orson Scott Card.


Of course, Tom Doherty is not the only one to have come in for a world o’Twitter Rage. Much ire, as always, is directed at the Puppies themselves— much of it justified, in my opinion. But I’m not writing this to jump on that particular bandwagon, nor do I need to; you can’t swing a cat these days without hitting someone’s list of puppycrimes.

The hypocrisy of certain Gallonites, however, doesn’t seem to be getting nearly as much attention (at least, not here in the Civilized World; the Puppies may be all over it, but I tend to avoid those territories).  I’ve seen Sad Puppies go out of their way to distance themselves from the rabid end of the spectrum:

“Vox Day is an A-hole. As a Sad Puppy, I had to look him up on Google.”

— only to get shot down:

“The fact that you joined a movement without adequately understanding what its leaders stood for, compounded by the fact that you continue to identify with that movement AFTER you’ve seen ample evidence of what they stand for, inclines me to give you zero credibility on this issue.”


“you are supporting [Beale’s] agenda.  That makes those who support culpable.  If they didn’t want to be associated with that reprehensible excuse for a human being, they should not have stood to be counted with him.”

Turn this argument around and see how you like it.

Imagine being told that you had no business advocating for social justice issues because you didn’t know about— oh, say, Requires Hate— prior to signing up. Imagine being told with a straight face— nay, with a righteously angry face— that you have “zero credibility” because you continue to advocate for social justice issues, even after learning of that vile creature’s existence.

Yeah, I know RH didn’t start the movement. She merely exploited it. But the analogy holds where it needs to: RH was, in her day, a significant player in the SJ scene, with allies who extended (and, as far as I can tell, continue to extend) into the halls of Tor itself. She was relatively central for such a decentralized movement— but she did not speak for everyone. If anyone told you that you couldn’t advocate for social justice without also supporting RH, how would you respond?

(As a side note, it’s nice to see RH’s influence so greatly diminished in recent months. She still spews the same BS— although her favored target seems to have shifted to “racist white women” in the wake of Laura Mixon’s report— but to far less effect. Think Saruman, reduced to whining in the Shire after being kicked out of Isengard. RH might even provide a valuable social service these days, functioning as a sort of rhetorical flypaper for idiots. As long as they stick to her, the rest of us can get on with our lives.)


Another common talking point is the obvious timing of this whole blow-out, of the fact that Beale sat on his screen-grab for weeks before releasing the hounds just prior to the Nebula Awards. This was manufactured outrage over phantom pain. Nobody was really hurt by Gallo’s comments; they were nothing but a convenient foothold from which to launch an attack.

Well, duh.

Beale is the enemy. That’s what enemies do, if they’re smart; they keep their powder dry. That’s one of the things that makes them enemies, for chrissake. That obvious fact should make it less advisable to play into their hands. Gallo said what she said— and to all those who’d say Jeez, let it go— that was four whole weeks ago, I’d answer Fine: why hasn’t the statute of limitations passed on all those Beale quotes I keep seeing, all of which are much older?

Not that I’m excusing Beale, mind you. I personally have a hard time believing that anyone could make some of his claims with a straight face. (White men don’t rape, so mistrust the victim unless she’s accusing a Black or Hispanic?) Maybe he’s just being ironic, although I’m more inclined to regard such statements as batshit insane. Either way, I’d laugh in the face of anyone who tried to impose a statute of limitations on Theodore Beale quotes; I suspect most of you would as well. By that same token, neither do we get to declare Gallo’s remarks off-limits after a measly month.

I imagine a number of you are already objecting to this equivalence on the grounds that Gallo’s single comment, ill-advised though it may have been, doesn’t come anywhere close to the levels of offensiveness that Theodore Beale manages even on a mild day. I tend to agree. I thought Gallo’s comment fell pretty wide of the mark, but I personally didn’t find anything especially offensive about it.

Then again, I’m not a Jewish person who’s been told he’s in bed with Nazis. It may be wise to defer to such people in matters of offense given and received.


Over the past few days I’ve sampled a fair number of blog posts and editorials dealing with Gallogate. I’ve recognized a number of the folks who’ve posted comments there, who’ve “liked” the relevant links and rejoinders sliding down my Facebook wall. Some I know only from their handles, when they’ve posted here on the ‘Crawl; others are personal friends.

They all support Irene Gallo.

I would too, if she’d only stood up and offered an apology that didn’t read as though it had been crafted by corporate mealworms. She fucked up; we all do, sometimes. She played into enemy hands. It was a minor and a momentary slip. But the real fuck-up was in how she and her supporters dealt with the aftermath.

There are good reasons to repudiate Puppies. There are legitimate arguments to be made against both Sad and (especially) Rabid breeds— which makes it all the more frustrating that so much of what I’ve seen lately boils down to dumb, naked tribalism. Fallacies that would be instantly derided if made by the other side become gospel; any who question are presumed to be With The Tewwowists (or more precisely, the sea lions). I’m reminded of my own observation back when the Mixon report came out: we’re not a community at all. We’re a bunch of squabbling tribes fighting over the same watering hole.

I didn’t want to write this. There’s so much other nifty stuff to talk about. Preserved soft tissue in dinosaur fossils, reported the same week “Jurassic World” premieres. Island nations, finally suing the Fossil Fuel industry for compensation over habitat loss due to climate change. And I still haven’t got around to writing my epic comparison of “Fury Road” and “Kingsman”.

It would have been a lot more fun to write about any of that. But this is just fucked. So many people bend the data to support foregone conclusions; so few put their conclusions on hold until they’ve followed those data to see where they might lead. So much gut reaction. So little neocortical involvement.

Judging by past experience, I could lose some fans over this. There’s even a chance I could lose actual friends (although I think most of the opportunists masquerading as friends got exposed the last time I took an unpopular stand on something). Which, if you look at it a certain way, is a good thing; it would add evidence to my argument about the evils of mindless groupthink. But here it is, for better or worse. I’ve never been much for bandwagons.

Unless I build them myself, I guess.



Posted in: rant by Peter Watts 65 Comments

The 21-Second God.



We lost fifteen million souls that day.

Fifteen million brains sheathed in wraparound full-sensory experience more real than reality: skydiving, bug-hunting, fucking long-lost or imaginary lovers whose fraudulence was belied only by their perfection. Gang-bangs and first-person space battles shared by thousands— each feeding from that trickle of bandwidth keeping them safely partitioned one from another, even while immersed in the same sensations. All lost in an instant.

We still don’t know what happened.

The basics are simple enough. Any caveman could tell you what happens when you replace a dirt path with a twenty-lane expressway: bandwidth rises, latency falls, and suddenly the road is big enough to carry selves as well as sensation. We coalesces into a vast and singular I. We knew those risks. That’s why we installed the valves to begin with: because we knew what might happen in their absence.

But we still don’t know how all those safeguards failed at the same time. We don’t know who did it (or what— rumors of rogue distributed AIs, thinking microwave thoughts across the stratosphere, have been neither confirmed nor denied). We’ll never know what insights arced through that godlike mind-hive in the moments it took to throw the breakers, unplug the victims, wrest back some measure of control. We’ve spent countless hours debriefing the survivors (those who recovered from their catatonia, at least); they told us as much as a single neuron might, if you ripped it out of someone’s head and demanded to know what the brain was thinking.

Those lawsuits launched by merely human victims have more or less been settled. The others— conceived, plotted, and put into irrevocable motion by the 21-Second God in those fleeting moments between emergence and annihilation— continue to iterate across a thousand jurisdictions. The first motions were launched, the first AIgents retained, less than ten seconds into Coalescence. The rights of mayfly deities. The creation and the murder of a hive mind. Restitution strategies that would compel some random assortment of people to plug their brains into a resurrected Whole for an hour a week, so 21G might be born again. A legal campaign lasting years, waged simultaneously on myriad fronts, all planned out in advance and launched on autopilot. The hive lived for a mere 21 seconds, but it learned enough in that time to arrange for its own second coming. It wants its life back.

A surprising number of us want to join it.

Some say we should just throw in the towel and concede. No army of lawyers, no swarm of AIgents could possibly win against a coherent self with the neurocomputational mass of fifteen million human brains, no matter how ephemeral its lifespan. Some suggest that even its rare legal defeats are deliberate, part of some farsighted strategy to delay ultimate victory until vital technological milestones have been reached.

The 21-Second God is beyond mortal ken, they say. Even our victories promote Its Holy Agenda.

Posted in: fiblet by Peter Watts 47 Comments

False Prophecy

(…being another reprint of a months-old Nowa Fantastyka column, because I’m still in Vancouver and haven’t yet had time to do my epic comparison of Fury Road and Kingsman)

I’ve been called a prophet now and again. Articles about neuron cultures running robots or power grids generally provoke a comment or two about the “smart gels” from my rifters trilogy. βehemoth is likely to get a shout-out with each new report of mysterious sulfur-munching microbes, deep in the bowels of hydrothermal rift vents. Recently The Atlantic posted a piece about Louis Michaud’s work on energy-generating tornadoes; readers of Echopraxia pricked up their ears.

I didn’t foresee any of it, of course. I just read about it back before it made headlines, when it was still obscured by the jargon of tech reports and patent applications. In fact, my successful “predictions”— submarine ecotourism, Internet weather systems, smart gels— are happening way sooner than I ever expected.

Predict the future? I can barely predict the present.

I’ve only made one “prediction” (although “insight” would probably be a better term) whose rudiments I haven’t stolen. I’m really proud of it, though. Screw those recycled factoids about head cheeses and vortex engines: I’m the guy who wondered if Consciousness— that exalted mystery everyone holds so dear and no one understands— might not just be some kind of neurological side-effect. I’m the guy who wondered if we’d be better off without it.

I may not be the first to pose that question— I’m probably not— but if I reinvented that wheel at least I did it on my own, without reading over the shoulders of giants. And the evidence in support of that view— the review papers, the controlled experiments— as far as I know, those started piling up after Blindsight was written. So maybe I did get there first. Maybe, driven solely by narrative desperation and the desire for a cool punchline, I threw a dart over my shoulder and just happened to hit a bullseye that only later would get a name in the peer-reviewed literature:

UTA, they call it now. “Unconscious Thought Advantage”. The phenomenon whereby you arrive at the best answer to a problem by not thinking about it. I like to think I got there on my own.

So you can imagine how it feels to stand before you now, wondering if it was bullshit after all.

The paper is “On making the right choice: A meta-analysis and large-scale replication attempt of the unconscious thought advantage” by Nieuwenstein et al. The journal is Judgment and Decision Making, which I’d never heard of but this particular paper got taken seriously by Nature so I’m guessing it’s not a fanzine. And the finding? The finding is—

Actually, a bit of background first.

Say someone gives Dick and Jane a problem to solve— something with a lot of variables, like a choice between two different kinds of car. They’re both given the same data to work with, but while Dick gets to concentrate on the problem before making his decision, Jane has to spend that time doing unrelated word puzzles. The weird thing is, Jane makes a better decision than Dick, despite the fact that she didn’t consciously think about the problem. Conscious thought actually seems to impair complex decision-making.

I first encountered such findings almost a decade ago, while correcting the galleys for Blindsight; you can imagine the joyful dance my hooves tapped out upon the floor. In the years since, dozens of studies have sought to confirm the existence of the Unconscious Thought Advantage. Most have done so. Some haven’t.

Now along come Nieuwenstein et al. They wonder if those positive results might just be artefacts of sloppy methodology and small sample size. They point out a host of uncontrolled variables that might have contaminated previous studies— “mindset, gender, motivation, expertise about the choice at hand, attention and memory” for starters— and while I’d agree that such elements add noise to the data, it seems to me they’d be more likely to obscure a real pattern than create a false one. And though it’s certainly true that small samples are more likely to produce spurious results, that’s what statistics are for: a significant p-value has already taken sample size into account.

Still. Sideline those quibbles and look at what Nieuwenstein et al actually did. They used a much larger sample and applied stricter protocols. They avoided what they regarded as the methodological flaws of previous studies, reran the tests— and found no evidence of a UTA. No difference in effectiveness between conscious and nonconscious problem-solving.


It’s not a fatal blow. In fact, Nieuwenstein’s study found the same raw pattern as previous research: the responses of distracted problem-solvers were 5% more accurate than those of the conscious-analysis group. The difference just wasn’t statistically significant this time around. So even if we accept these results as definitive, the most they tell us is that nonconscious decision-making is as effective as the conscious kind. Consciousness confers no advantage. So the question remains: what is it good for?
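For what it’s worth, here is a toy sketch of how that plays out: a pooled two-proportion z-test with made-up numbers (not Nieuwenstein et al’s actual data), showing how the same 5-point accuracy gap can fail to reach significance in small groups and clear it easily in large ones.

```python
# Toy two-proportion z-test (pooled normal approximation).
# Illustrative numbers only -- NOT the actual data from Nieuwenstein et al.
import math

def two_sided_p(p1: float, p2: float, n: int) -> float:
    """Two-sided p-value for a difference between two proportions,
    each estimated from a group of size n."""
    pooled = (p1 + p2) / 2.0
    se = math.sqrt(pooled * (1.0 - pooled) * (2.0 / n))
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2.0))

# A 60% vs 55% split -- roughly that "5% more accurate" raw pattern:
print(two_sided_p(0.60, 0.55, n=40))    # small groups: well above 0.05
print(two_sided_p(0.60, 0.55, n=1000))  # big groups: drops below 0.05
```

Which is the flip side of the sample-size coin: significance tracks the size of the sample as much as it tracks the size of the effect.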

The authors tried to talk their way around this in their discussion, arguing that “people form their judgments subconsciously and quickly, then use conscious processes to rationalize them”. They speculated that perhaps these experiments don’t really compare two modes of cognition at all, that both groups came to their conclusions as soon as they got the data. Whatever happened afterward— focused contemplation, or distracting word-puzzle— was irrelevant. It’s a self-defeating rationale, though. It’s not a defense of conscious analysis, only an acknowledgment that consciousness may be irrelevant in either case.

The jury remains out. A day after “On Making the Right Choice…” came out, the authors of the original, pro-UTA papers were already attacking its methodology. Even Nieuwenstein et al admit that they haven’t shown that the UTA model is false— only that it hasn’t yet been proven. And these new findings, even if they stand, leave unanswered the question of what consciousness is good for. The dust has yet to settle.

I have to admit, though, that Nonconscious Isn’t Any Worse doesn’t have quite the same ring as Nonconscious Is Better. Which, personally, kind of sucks.

Why couldn’t they have gone after my smart gels instead?




Posted in: blindsight, sentience/cognition by Peter Watts 36 Comments

By & About

Me, that is. In reference to a couple of essays that have gone live over the past 24 hours.


I haven’t had a lot of contact with the good folks over at The Canadian Science Fiction Review— I don’t even know why they call themselves “Æ”, now that I think of it— but over the years I’ve got the sense that they like my stuff (well, a lot of it, at least— not even the strength of Æ’s fannishness was enough to get them to like βehemoth). Now they’ve posted “God and the Machines” by Aurora nominee Jonathan Crowe: a short essay on my short fiction, which among other things deals with the question of why everybody thinks I’m so damn grimdark when I’m actually quite cuddly. (Thank you, Jonathan. I was getting tired of being the only one to point that out.) (Also, great title.)

Crowe posits something I hadn’t considered: that I don’t write the darkest stuff out there by any means, but it seems darker because I use Hard-SF as the delivery platform. I serve up crunchy science soufflé, but I serve it with a messy “visceral” prose that “bleeds all over the page”. It’s a contrast effect, he seems to be saying; the darkness looks deeper in comparison to the chrome and circuitry that frames it. (Also, while those at the softer end of the spectrum tend to lay their nihilistic gothiness at the feet of Old Ones and Tentacle Breathers, I tend to lay it on the neurocircuitry of the human brain. My darkness is harder to escape, because— as the protagonist of “Cloudy with a Chance of Meatballs” once reminisced— “You can’t run away from your own feet”.) Something to think about, anyway.

It’s a good read. You should check it out.


The other essay is not about me but by me, and it just went up today over at Aeon. It’s basically a distillation of ideas and thought experiments from various talks and short stories and blog posts I’ve made over the years, mixed in with some late-breaking developments in Brain-Machine Interface technology. It explores some of the ramifications of shared consciousness and multibrain networks. (Those who’ve read my recent exercise in tentacle porn won’t be surprised that those ramifications are a bit dark around the edges.)


Illustration by Richard Wilkinson.

In contrast with my experience of “God and the Machines”, I wasn’t expecting to learn anything new from “The Bandwidth of a Soul”, because (obviously) I wrote the damn thing. Surprisingly, though, I did learn things. I learned that it’s not called “The Bandwidth of a Soul” any more. I’m not quite sure what it is called: the visible heading reads “Hive Consciousness” but the page itself (and all the twitter links feeding back to it) is titled “Do We Really Want To Fuse Our Minds Together?” (I guess this is just something that magazines do. A couple of years back I wrote an autobiographical bit about flesh-eating disease for The Daily; its title morphed from “The Least Unlucky Bastard” into “I Survived Flesh-Eating Bacteria: One Man’s Near-Death Experience With The Disease Of Your Nightmares”.)

I also learned that the staff of Aeon might feel the need to tiptoe around references to public figures— at the expense of what was, IMHO, one of the better lines in the piece. You will find it at the end of the following paragraph:

I’m not sure how seriously to take [the Cambridge Declaration]. Not that I find the claim implausible – I’ve always believed that we humans tend to underestimate the cognitive complexity of other creatures – but it’s not as though the declaration announced the results of some ground-breaking new experiment to settle the issue once and for all. Rather, its signatories basically sat down over beers and took a show of hands on whether to publicly admit bonobos to the Sapients Club. (Something else that seems a bit iffy is all the fuss raised over the signing of the declaration ‘in the presence of Stephen Hawking’, even though he is neither a neuroscientist nor a signatory. You almost get the sense of a card table hastily erected next to Hawking’s wheelchair, in the hopes that some of his credibility might rub off before he has a chance to roll away.)

You will not find it over at Aeon, though; that last sentence disappeared from the final draft. Obviously the Card Table Lobby has no sense of humor.

I’d also like to give a shout-out here to neuroscientist Erik Hoel, out of Giulio Tononi’s lab at the University of Wisconsin-Madison. It was his back-of-the-envelope calculations that generated the bandwidth comparison between smart phones and corpus callosums. I credited the man in-text but that line also seems to have been cut.

Other than that, though— and allowing for Aeon’s editorial preferences (they like commas; they don’t like hypertext links)— it’s pretty much all there. They even left my Morse-code-orgasm joke intact.

So check that out, too. You’ll get all the neuroscientific speculation I ever put in any of my stories, without having to wade through all that noodly fiction stuff.

Aurora Campbell Panoptopus.

Some of you may have noticed that Echopraxia made it onto the longest short list in SF a few weeks back: the ballot for the John W. Campbell Memorial Award for Best Science Fiction Novel. On the plus side (for me), it’s one of those jury-selected deals, so it’s not a popularity contest like the Hugos. (These days, it’s an especially big deal to not be like the Hugos.) On the minus side, well, there are 15 other finalists, almost all of whom are more famous/accomplished than me. So there’s that.

I didn’t mention it at the time, because on its own it would have made for a pretty insubstantial blog post. Plus there was another impending nom that was embargoed until— actually, until just last night, and I figured the post might be a bit more substantive if I stacked the two of them together. So: Echopraxia also made it onto the best-novel final ballot for the Auroras, which consists of a much-more-manageable 5 nominees but which is kind of a popularity contest. Plus the competition is generally more famous/accomplished than me. (Like I’m gonna beat William fucking Gibson. Right.) As chance would have it, this year’s Auroras are being presented at SFContario, where I’m supposed to be serving as both Guest of Honour and Toastmaster. I’ve never been a toastmaster before. I’m still not entirely sure what one even is. Assuming it’s not some kind of fetish thing revolving around baked goods, I gather it has something to do with presenting the Auroras. I should probably check with the concom about stepping down, to avoid a conflict of interest.

I am gratified to see certain finalists in other categories, though: you could certainly do worse than vote for Sandra Kasturi’s Chiaroscuro Reading Series in the Best Fan Organizational category, for example. And if Erik Mohr doesn’t win for Best Artist there’s little justice in the world.

Anyway. I figure my chances of winning either prize are somewhere between low and negligible— but that’s okay, because I just hit a bullseye in something else without even trying. To wit:

“People talk about the eyes,” he continued after a bit. “You know, how amazing it is that something without a backbone could have eyes like ours, eyes that put ours to shame even. And the way they change color, right? The way they blend into the background. Eyes gotta figure front and center in that too, you’d think.”

“You’d think.”

Guo shook his head. “It’s all just— reflex. I mean, maybe that little neuron doughnut has its own light on somewhere, you’d think it would pretty much have to, but I guess the interface didn’t access that part. Either that or it just got— drowned out…”

—Me, on this very blog, April 30, 2015.

Octopus chromatophores. The Panoptopus. Skin that looks back at you.

Octopuses can mimic the color and texture of a rock or a piece of coral… But before a cephalopod can take on a new disguise, it needs to perceive the background that it is going to blend into. Cephalopods have large, powerful eyes to take in their surroundings. But two new studies in The Journal of Experimental Biology suggest that they have another way to perceive light: their skin. It’s possible that these animals have, in effect, evolved a body-wide eye.

—Carl Zimmer, New York Times, May 20, 2015

Here, we present molecular evidence suggesting that cephalopod chromatophores – small dermal pigmentary organs that reflect various colors of light – are photosensitive. … This is the first evidence that cephalopod dermal tissues, and specifically chromatophores, may possess the requisite combination of molecules required to respond to light.

—ACN Kingston et al, Journal of Experimental Biology, May 15, 2015


…our data suggest that a common molecular mechanism for light detection in eyes may have been co-opted for light sensing in octopus skin.

—Ramirez and Oakley, Journal of Experimental Biology, May 15, 2015

Beat them by two weeks.

Okay, so maybe not an absolute bullseye. That little fiblet I wrote went on to describe octopus sensation as involving “this vague distant sense of light I guess, if you really focus you can sort of squint down the optic nerve, but mostly it’s— chemical. Taste and touch.” My focus was on the arms, those individually self-aware arms, and I explicitly claimed that “they don’t see”. Pretty much everything was chemical and tactile. But it was still pretty close to a bullseye—in my attempts to downplay vision and outsource everything to the arms, I described the whole pattern-matching thing as a reflex which didn’t really involve the eyes at all. There was no real insight in that— it’s not as though I’ve been following the octopus literature with any kind of eagle eye— but to me, that’s what makes it cool. I threw a dart, blindfolded; just hitting the board is an accomplishment. And now that actual data are in, I can tart up the final draft with some actual verisimilitude before sending it off to Russia.

I love it when the complete lack of a plan comes together.

Oh, also: there’s some cool rifters fan art from “Toa-Lagara” I stumbled across on Deviant Art. I’ll post it in the appropriate gallery once I get permission from the artist.

Posted in: art on ink, biology, marine, neuro, writing news by Peter Watts 20 Comments