Predatory Practices.

Oh, we are so fucking bad-ass. Even Science says so.

The paper’s called “The Unique Ecology of Human Predators” (commentary here), and it’s been getting a lot of press since it came out last week. “People Are Deadliest Predators”, trumpets Discovery News; “Humans Are Super Predators”, IFL Science breathlessly repeats. Even Canada’s staid old CBC, which has grown nothing but more buttoned-down and conservative since its Board of Directors were executed and replaced by all those cronies Harper couldn’t fit into the Senate, gets into the act: “Humans are ‘superpredators’ like no other species”, it tells us.

There are other examples— loads of them— but you get the idea. The coverage generally goes on to remark on how much more lethal we are than sharks and lions, how our unsustainable “predatory” strategies are driving species to extinction.

Really. We’re better than sharks at wiping out species. This is news. This is worthy of publication in one of the premier cutting-edge science journals on the planet.

Our place among the bad-asses. From Darimont et al 2015

The paper itself— basically a meta-analysis of data from a variety of sources— justifies its existence by pointing out that previous models may have underestimated our ecological impact by treating us as just another predator species. Their results clearly show, however, that we are not mere predators: in many ways we are Extreme Predators. For example, while other predators tend to weed out the young, the sick, and the injured, we Humans indiscriminately take all classes— frequently targeting the largest individuals of a population, which act as “reproductive reservoirs” and whose loss is thus more keenly felt than the loss of cubs or larvae. This also creates selection pressure against large-bodied adults, meaning that we are causing reproductive individuals to shrink over time. (This came as news to me— albeit intuitively-obvious, not-very-surprising news— back when I took my first fisheries biology class in 1979. I was a bit taken aback to see it being marketed as a shiny new insight up here in 2015.)

The bad news keeps rolling in, hitting us in the gut with the impact of its utter unexpectedness. Most fish-eating predators just take one fish at a time. We Hu-Mans, with our Nets and Technology, scoop up Entire Schools At Once! Unlike other predators, we hunt for trophies! We are one of the few predators that hunts other predators!

Perhaps the highlight of the paper occurs when the authors, straight-faced, point out that other marine predators are limited in the size of their prey by how wide their jaws can gape— whereas we take prey that would be far too large to fit into our mouths. This, the authors suggest, “might explain why marine predator rates are comparatively low” compared to our own.

In Science. Swear to God. You can look it up yourself if you don’t believe me.

Larson nailed it. As usual.

I don’t pretend to understand what this is doing in the pages of a front-line peer-reviewed journal, unless it’s some kind of social experiment along the lines of Alan Sokal’s Social Text hoax. As to why it’s received such widespread attention in the mainstream, I wonder if it’s because the subtext paints lipstick on seven billion pigs. After all, predators are cool. We paint shark mouths on our fighter planes, we airbrush cheetahs onto the sides of our fuck trucks. (Or at least we used to. Back in the day.) Outsharking the shark? Getting to be a Super Predator? Why, that’s almost something to be proud of! Nothing like a bit of sexy rebranding to distract us from the fact that we’ll have wiped out a third of the planet’s extant species by the end of the century.

Because it’s all bullshit, of course. We’re not predators, Super or Garden-variety, in any biological sense. Most predators wreak their havoc in one way; they kill and eat their victims one at a time. They don’t poison entire ecosystems before killing off the inhabitants. You know when you’ve been predated: your killer takes you out face-to-face, one on one. You don’t sicken and die, sprouting tumors or weeping sores or forced into some minuscule fragmenting refuge by invisible forces that don’t know or care if you even exist. You can escape from a real predator. Sometimes.

“Superpredation” is the least of our sins. As a label, it doesn’t begin to encompass the extent of our impact.

So did the Wachowskis. The first time around, anyway.

“Pestilence” might do, though. “Plague.” Just barely. At least, it would come a bit closer to the truth.

I wonder how long it’ll take for Darimont et al to put out a paper describing Humanity as a “Super Disease”.

I wonder what kind of coverage the CBC will give ′em when they do.

Posted in: biology, eco, marine, science by Peter Watts 16 Comments

“Humans”? They Weren’t Kidding.

Spoilers. Duh.

Honestly, I can’t see much difference from the staff they’ve already got at Home Depot…

So that was Humans. Eight hours of carefully-arced, understated British narrative about robots: an AMC/Channel 4 coproduction that’s netted Channel 4 its biggest audiences in over two decades. What great casting. What fine acting. What nice production values. What a great little bit of subtext as William Hurt and his android, both well past their expiry dates, find meaning in their shared obsolescence.

What a pleasant 101-level introduction to AI for anyone who’s never thought about AI before, who’s unlikely to think about AI again, and who doesn’t like thinking very hard about much of anything.


Humans extrapolates not so much forwards as sideways. Its world is recognizably ours in every way but one. Cars, cell phones, forensic methodology: everything is utterly contemporary but for the presence of so-called “synths” in our midst. These synths, we’re told, have been around for at least fourteen years. So this is no future; this is an alternate present, a parallel timeline in which someone invented general-purpose, sapient AI way back in 2001. (I wonder if that was a deliberate nod to you-know-who.)

In this way Humans superficially feels much like that other British breakout, Black Mirror. It appears to follow the same formula, seducing the casual, non-geek viewer in the same way: by not making the world too different. By easing them into it. Let them think they’re on familiar ground, then subvert their expectations.

Except Humans doesn’t actually do that.

Start by positing a new social norm: neurolinked subcutaneous life-loggers the size of a rice grain, embedded behind everyone’s right ear. But don’t stop there. Explore the ramifications, ranging from domestic (characters replay good sex in their heads while participating in bad sex on their beds) to state (your recent memories are routinely seized and searched whenever you pass through a security checkpoint). That’s an episode of Black Mirror.

South Park did it better.

So how does this approach play out in Humans? What are the ramifications when you have AGIs in every home, available for a few grand at the local WalMart? This is what Humans is ostensibly all about, and it’s a question well worth exploring— but all the series ever does with it is trot out the old exploited-underclass trope. Nothing changes, except now we’ve got synths doing our gardening instead of Mexicans. We rail against robots taking our jobs instead of immigrants. That’s pretty much it.

I mean, at the very least, shouldn’t all the cars in this timeline be self-driving by now?

Once or twice Humans hesitantly turns the Othering Dial past what you might expect for a purely human underclass. Angry yahoos with tire irons gather in underground parkades to bash in the skulls of unresisting synths, and at one point William Hurt sends his faithful malfunctioning droid out into the woods for an indefinite game of hide-and-seek. But both those episodes were lifted directly from Spielbrick’s 2001 movie “A.I.” (as was William Hurt, now that I think of it). And given the recent cascade of compromising video footage filtering up from the US, I’m not at all convinced that bands of disgruntled white people wouldn’t have a mass immigrant bash-in, given half the chance. Or that law enforcement would do anything to stop them.

There is nothing artificial about these intelligences. The sapient ones (around whom the story revolves) are Just Like Us. They want to live, Just Like We Do. They want to be Free, Just Like Us. They rage against their sexual enslavement, Just Like We Would. And the nonsapient models? Never fear; by the end of the season, we’ve learned that with a bit of viral reprogramming, they too can be Just Like Us!

They are so much like us, in fact, that they effectively shut down any truly interesting questions you might want to ask about AI.


I have to put a caption here, because stupid WordPress erases the text padding otherwise and I can’t be bothered to tweak the code.

Let’s take sex, for example.

I’m pretty sure that even amongst those who subscribe to the concept of monogamous marriage, few would regard masturbation as an act of infidelity. Likewise, you might be embarrassed getting caught with your penis in a disembodied rubber vagina, but your partner would be pretty loony-tunes to accuse you of cheating on that account. Travel further along that spectrum— inflatable sex dolls, dolls that radiate body heat, dolls with little servos that pucker their lips and move their limbs— until you finally end up fucking a flesh-and-blood, womb-born, sapient fellow being. At which point pretty much everyone would agree that you were cheating (assuming you were in a supposedly monogamous relationship with someone else, of course).

A question I’d find interesting is, where does an android lie on that spectrum? Does the spectrum even apply to an android? By necessity, infidelity involves a betrayal of trust between beings (as opposed to a betrayal over something inanimate; if you keep shooting heroin after you’ve promised your partner you’ll stop, you’ve betrayed their trust but you’re not an infidel). Infidelity with a robot, then, implies that the robot is a being in its own right. Otherwise you’re just jerking off into a mannequin.

Let’s say your synth is a being. The very concept of exploitation hinges on the premise that the exploitee has needs and desires that are being oppressed in some way. I, the privileged invader, steal resources that should be yours. Through brute bullying force I impose my will upon you, and dismiss your own as inconsequential.

But what if your will, subordinate though it may be, is entirely in accord with mine?


Nice bit of Alternate-reality documentation, though.

I’m not just talking about giving rights to toasters— or at least, if I am, I’m willing to grant that said toasters might be sapient. But so what if they are? Suppose we build a self-aware machine that does have needs and desires— but those needs and desires conform exactly to the role we designed them for? Our sapient slavebot wants to work in the mines; our self-aware sexbot wants to be used. There are issues within issues here: whether a mechanical humanoid is complex enough to have interests of its own; if so, whether it’s even possible to “oppress” something whose greatest aspiration is to be oppressed. Is there some moral imperative that makes it an a priori offense to build sapient artefacts that lack the capacity to suffer and rage and rebel— and if so, how fucking stupid can moral imperatives be?

I’m nowhere near the first to raise such questions. (Who can forget Douglas Adams’s sapient cow from The Restaurant at the End of the Universe, neurologically designed to want nothing more than to be eaten by hungry customers?) Which makes it all the more disappointing that Humans, ostensibly designed as an exploration platform for exactly these issues, is too damn gutless to engage with them. A hapless husband, in a fit of pique, activates the household synth’s “Adult Mode” and has a few minutes of self-loathing sex with it. The synth itself— which you’d think would have been programmed to at least act as though it’s getting off— sadly endures the experience, with all the long-suffering dignity of a Victorian wife performing her wifely duties under a caddish and insensitive husband.

When the real wife finds out what happens, predictably, she hits the roof— and while the husband makes a brief and half-hearted attempt to play the It’s just a machine! card, he obviously doesn’t believe it any more than we viewers are supposed to. In fact, he spends the rest of the season wringing his hands over the unforgivable awfulness of his sin.

Robocop also did it better.

Throughout the whole season, the only character who plays with the idea of combining sapience with servility is the mustache-twirling villain of the piece— and even he doesn’t go anywhere near the idea of sidestepping oppression by editing desire. Nah, he just imposes the same ham-fisted behavioral lock we saw back in Paul Verhoeven’s (far superior) Robocop, when Directive 4 kicked in.


Humans pretends to be genre subversive, thinks that by setting itself in a completely conventional setting it can lure in people who might be put off by T-800 endoskeletons and Lycra jumpsuits. It promises to play with Big Ideas, but without all those ostentatious FX— so by the time the casual viewer realizes they’ve been watching that ridiculous science fiction rubbish it won’t matter, because they’re already hooked.

You have no idea where this show is going.

It’s a great strategy, if you do it right. Look at Fortitude, for example: another British coproduction that begins for all the world like a police procedural, then seems to segue into some kind of ghost story before finally revealing itself as one of the niftiest little bits of cli-fi ever to grace a flatscreen. (The only reason I’m not devoting this whole post to Fortitude is because I wrote my latest Nowa Fantastyka column on the subject, and I must honor both my ethical and contractual noncompete constraints).

Humans does not do it right. For all the lack of special effects there’s little subtlety here; it pays lip service to Is it live or is it Memorex, but it doesn’t explore those issues so much as preach about them in a way that never dares challenge baseline preconceptions. With Fortitude you started off thinking you were in the mainstream, only to end up in SF. Humans does the reverse, launching with the promise of a thought-provoking journey into the ramifications of artificial intelligence; but it doesn’t take long for the disguise to wear thin and its true nature to emerge. In the end, Humans is just another shallow piece of social commentary, making the point— over eight glossy, well-acted episodes— that Slavery Is Wrong.

What a courageous stand to take, here in 2015. What truth, spoken to power.

What a wasted fucking opportunity.

Posted in: ink on art by Peter Watts 24 Comments

A Young Squid’s Illustrated Primer

Part the First: Liminal

Apparently, this is how Jasun Horsley sees me. I presume I’m the one on the right. (non-Old-One elements by Maria Nygård).

I recently did a kind of free-form interview with fellow US-border-guard-detainee Jasun Horsley, for his Liminalist podcast. It went okay, if you discount the fact that the Skype connection seemed to go dead without warning every couple of minutes. I certainly hope that we repeated our respective Qs and As often enough to redundify those gaps— I note that, while we spoke for over two hours, the podcast itself weighs in at only one (including some nifty little musical interludes). Given the number of dropouts, that seems about right.

I’m listening to the final result even as I type, and so far my giddy enthusiasm isn’t quite loud enough to distract from the random boluses of dead air that shut me up every now and then. I do not envy Jasun the editing job it took to beat the raw recording into shape.

He also wrote a companion essay, “Neuro-Deviance and the Evolutionary Function of Depression”, from the perspective of someone halfway through Starfish. I think the Neuro-Deviant is supposed to be me.

Anyway, the on-site blurb describes our interaction as

…a roving and rifting conversation with Jasun about killing Jake (the One) and integrative therapy courtship, Lonesome Bob’s death ballad, Peter’s marine biology years, the initial impetus, Peter’s childhood “Everyone can have their own aquarium!” epiphany, astronaut dreams, getting off the planet, Jasun’s views on space travel (again), a bleak ET future for mankind, the ultimate displacement activity, Interstellar’s message, space travel benefits, the military agenda, 2001: A Space Odyssey opposing views, the hope for higher intelligence, determinism vs. transcendence, rejecting the duality of spiritual-material, how neurons are purely reactive, fizzy meat, the psychology of determinism, response vs. reaction, selective perception, truth and survival, depression’s correlation (or equivalence) with reality-perception, God and the anti-predator response, three men in a jungle, how natural selection shapes us to be paranoid, how anxiety allows us to see patterns, the many doings of paranoia, shaping the outside to match the inside (the devil made me do it), seeking the perks of depression, how depression fuels creativity, a thought experiment, is removing the lows desirable, depression as a new stage in human development, the difference between biology and psychology, the psyche and Behemoth, the pointlessness of survival, he who dies with most kid wins, what science is missing, the hard problem of consciousness, the difference between intelligence and consciousness, nipples on men, the best kind of mystery, the language variable, what if consciousness is mal-adaptive?

I think I remember most of that stuff.

(I would like to apologize, by the way, for repeating to Jasun the oversimplification that neurons only fire when externally provoked; I’ve been recently informed that neurons sometimes do fire spontaneously, as a result of changes to their internal state. Ultimately, of course, those internal states have to reflect some kind of historical cell-environment interaction, but I should probably start using a more nuanced bumper-sticker anyway.)


Part the Second: Scramblers

Nicely done, Alienietzsche.

Last week’s ego-surf turned up this great little illustration from Deviant Artist “Alienietzsche”— whose vision of Blindsight’s scramblers is perhaps the closest I’ve seen to the images that were floating around in my own head while I was writing about those crawly little guys. This is going straight into the Gallery, with thanks and with ol’ Nietzsche’s blessing.


Part the Third: Lemmings

If you look closely (you may have to click to embiggen), you’ll see that the plankton sliding into the astronaut’s bootprints look kind of like neurons. Yeah, well, I was only thirteen.

I recently told the Polish website Kawerna about a few of the novels that had had the greatest influence on me (they asked, in case you’re wondering; it’s not like I called them up in the middle of the night and forced my unsolicited opinions down their throat or anything). You won’t be surprised to learn that one of those titles was Stanislaw Lem’s Solaris. You may, however, be unaware of the profound resentment that book instilled within me when I first discovered it:

I spent most of my thirteenth summer trapped in a basement apartment in some Oregonian hick town, with little to do but read while my dad attended summer classes at the local university. I beach-combed on weekends, though— and while wandering Oregon’s coast that summer, my adolescent brain cooked up the idea of an intelligent ocean— a kind of diffuse neural network in which the plankton acted as neurons. I was going to write a story about it, even penciled a couple of sketches based on the idea.

Two weeks later I discovered Solaris in the local library. I’ve kind of resented Lem ever since…

The Kawerna assignment inspired me to dig back through the archives to see if I could find any of those sketches— and I did find a few, yellowed, moldy, nibbled by silverfish in their cheap plastic frames. I present one here, as evidence that while I may not have come up with the idea for Solaris before Lem did, I at least came up with it before I knew that Lem had. Which wasn’t bad, for a thirteen-year-old stuck in a basement while his Dad took post-graduate Bible-Study classes.


Part the Last: Reprint Roll

Specialty micropress “Spacecraft Press” has released an extremely-limited-edition reprint of “The Things” as a chapbook, printed on a kind of translucent plasticky paper and inventively formatted in a manner more reminiscent of free verse than of prose. And I’m not kidding when I say “extremely-limited”: the total print run was only 21, which— when it comes to my work at least— is significantly fewer copies than even Tor usually loads into a print run. And only ten of those are available for sale (or would be, if they hadn’t already sold out). I guess this explains the eleven copies of “The Things” that appeared in my mailbox the other day.

You’ll note that the Introduction is written by someone who does not write Science Fiction at all.

Last, and probably least— not because of lesser importance, but because the news is a week old by now, and has already been trumpeted on every social medium from HoloBook to carrier pigeon— legendary Canadian publisher ChiZine has announced the contents of this year’s Imaginarium 4: The Best Canadian Speculative Fiction. “Giants” is in there, but it almost wasn’t. It was supposed to be “Collateral” until a few weeks ago— and before that, it was supposed to be “The Colonel”. I’m actually kind of pleased things finally fell out the way they did; I’ve always had a soft spot for “Giants”, even if it hasn’t got the love that “Collateral” and “The Colonel” have in terms of year-end collections. Still, I can’t help but notice that “Giants” is also the shortest of the three, word-count-wise— which makes me wonder if a more appropriate subtitle might be The Best Canadian Speculative Fiction that fits into 300 pages or less.

It’s all good, though.



I have come to the end of Jasun’s podcast at almost the same time I’ve come to the end of this post; turns out it’s only part one of a two-parter, to be continued this Wednesday. Which is odd, because— while I recognize all the bits I’ve just heard coming through my laptop speakers— I don’t remember anything missing from that dialog.

Now I’m going to lie awake all night, wondering what else we talked about.




No Brainer.

For decades now, I have been haunted by the grainy, black-and-white x-ray of a human skull.

It is alive but empty, with a cavernous fluid-filled space where the brain should be. A thin layer of brain tissue lines that cavity like an amniotic sac. The image hails from a 1980 review article in Science: Roger Lewin, the author, reports that the patient in question had “virtually no brain”. But that’s not what scared me; hydrocephalus is nothing new, and it takes more to creep out this ex-biologist than a picture of Ventricles Gone Wild.

The stuff of nightmares. (From Oliveira et al 2012)

What scared me was the fact that this virtually brain-free patient had an IQ of 126.

He had a first-class honors degree in mathematics. He presented normally along all social and cognitive axes. He didn’t even realize there was anything wrong with him until he went to the doctor for some unrelated malady, only to be referred to a specialist because his head seemed a bit too large.

It happens occasionally. Someone grows up to become a construction worker or a schoolteacher, before learning that they should have been a rutabaga instead. Lewin’s paper reports that one out of ten hydrocephalus cases is so extreme that cerebrospinal fluid fills 95% of the cranium. Anyone whose brain fits into the remaining 5% should be nothing short of vegetative; yet apparently, fully half have IQs over 100. (Why, here’s another example from 2007; and yet another.) Let’s call them VNBs, or “Virtual No-Brainers”.

The paper is titled “Is Your Brain Really Necessary?”, and it seems to contradict pretty much everything we think we know about neurobiology. This Forsdyke guy over in Biological Theory argues that such cases open the possibility that the brain might utilize some kind of extracorporeal storage, which sounds awfully woo both to me and to the anonymous Neuroskeptic; but even Neuroskeptic, while dismissing Forsdyke’s wilder speculations, doesn’t really argue with the neurological facts on the ground. (I myself haven’t yet had a chance to do more than glance at the Forsdyke paper, which might warrant its own post if it turns out to be sufficiently substantive. If not, I’ll probably just pretend it is and incorporate it into Omniscience.)

On a somewhat less peer-reviewed note, VNBs also get routinely trotted out by religious nut jobs who cite them as evidence that a God-given soul must be doing all those things the uppity scientists keep attributing to the brain. Every now and then I see them linking to an off-hand reference I made way back in 2007 (apparently that’s the only place to find Lewin’s paper online without having to pay a wall) and I roll my eyes.

And yet, 126 IQ. Virtually no brain. In my darkest moments of doubt, I wondered if they might be right.

So on and off for the past twenty years, I’ve lain awake at night wondering how a brain the size of a poodle’s could kick my ass at advanced mathematics. I’ve wondered if these miracle freaks might actually have the same brain mass as the rest of us, but squeezed into a smaller, high-density volume by the pressure of all that cerebrospinal fluid (apparently the answer is: no). While I was writing Blindsight— having learned that cortical modules in the brains of autistic savants are relatively underconnected, forcing each to become more efficient— I wondered if some kind of network-isolation effect might be in play.

Now, it turns out the answer to that is: Maybe.

Three decades after Lewin’s paper, we have “Revisiting hydrocephalus as a model to study brain resilience” by de Oliveira et al. (actually published in 2012, although I didn’t read it until last spring). It’s a “Mini Review Article”: only four pages, no new methodologies or original findings— just a bit of background, a hypothesis, a brief “Discussion” and a conclusion calling for further research. In fact, it’s not so much a review as a challenge to the neuro community to get off its ass and study this fascinating phenomenon— so that soon, hopefully, there’ll be enough new research out there to warrant a real review.

The authors advocate research into “Computational models such as the small-world and scale-free network”— networks whose nodes are clustered into highly-interconnected “cliques”, while the cliques themselves are more sparsely connected one to another. De Oliveira et al suggest that they hold the secret to the resilience of the hydrocephalic brain. Such networks result in “higher dynamical complexity, lower wiring costs, and resilience to tissue insults.” This also seems reminiscent of those isolated hyper-efficient modules of autistic savants, which is unlikely to be a coincidence: networks from social to genetic to neural have all been described as “small-world”. (You might wonder— as I did— why de Oliveira et al. would credit such networks for the normal intelligence of some hydrocephalics when the same configuration is presumably ubiquitous in vegetative and normal brains as well. I can only assume they meant to suggest that small-world networking is especially well-developed among high-functioning hydrocephalics.) (In all honesty, it’s not the best-written paper I’ve ever read. Which seems to be kind of a trend on the ‘crawl lately.)

The point, though, is that under the right conditions, brain damage may paradoxically result in brain enhancement. Small-world, scale-free networking— focused, intensified, overclocked— might turbocharge a fragment of a brain into acting like the whole thing.
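None of this code is in de Oliveira et al, but the small-world idea is easy to play with yourself. Here’s a toy Watts-Strogatz-style sketch in pure Python (all parameters arbitrary): start with a cliquish ring lattice, randomly rewire a fraction of the edges, and watch the average path length collapse while clustering stays high— the “small-world” signature the authors invoke.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbours per side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, seed=1):
    """Watts-Strogatz step: move each edge to a random new target with prob p."""
    rng = random.Random(seed)
    n = len(adj)
    for i in range(n):
        for j in sorted(adj[i]):          # snapshot; each edge seen once
            if j > i and rng.random() < p:
                choices = [t for t in range(n) if t != i and t not in adj[i]]
                if choices:
                    t = rng.choice(choices)
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(t);      adj[t].add(i)
    return adj

def clustering(adj):
    """Mean fraction of each node's neighbour pairs that are themselves linked."""
    total = 0.0
    for i, nbrs in adj.items():
        nbrs = list(nbrs)
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in range(k) for b in range(a + 1, k)
                    if nbrs[b] in adj[nbrs[a]])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def mean_path(adj):
    """Average shortest-path length, BFS from every node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

lattice = ring_lattice(200, 4)
small_world = rewire(ring_lattice(200, 4), p=0.1)
# The rewired net keeps most of its cliquish clustering, but its average
# path length collapses: the "small-world" signature.
print(clustering(lattice), mean_path(lattice))
print(clustering(small_world), mean_path(small_world))
```

A few shortcut edges buy near-global reach without sacrificing the local cliques— which is at least consistent with the notion that a squeezed-down brain could keep functioning if its wiring went small-world.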

Can you imagine what would happen if we applied that trick to a normal brain?

If you’ve read Echopraxia, you’ll remember the Bicameral Order: the way they used tailored cancer genes to build extra connections in their brains, the way they linked whole brains together into a hive mind that could rewrite the laws of physics in an afternoon. It was mostly bullshit, of course: neurological speculation, stretched eight unpredictable decades into the future for the sake of a story.

But maybe the reality is simpler than the fiction. Maybe you don’t have to tweak genes or interface brains with computers to make the next great leap in cognitive evolution. Right now, right here in the real world, the cognitive function of brain tissue can be boosted— without engineering, without augmentation— by literal orders of magnitude. All it takes, apparently, is the right kind of stress. And if the neuroscience community heeds de Oliveira et al‘s clarion call, we may soon know how to apply that stress to order. The singularity might be a lot closer than we think.

Also a lot squishier.

Wouldn’t it be awesome if things turned out to be that easy?

Dr. Fox and the Borg Collective

Take someone’s EEG as they squint really hard and think Hello. Email that brainwave off to a machine that’s been programmed to respond to it by tickling someone else’s brain with a flicker of blue light. Call the papers. Tell them you’ve invented telepathy.

I mean, seriously: aren’t you getting tired of these guys?

Or: teach one rat to press a lever when she feels a certain itch. Outfit another with a sensor that pings when the visual cortex sparks a certain way. Wire them together so the sensor in one provokes the itch in the other: one rat sees the stimulus and the other presses the lever. Let Science Daily tell everyone that you’ve built the Borg Collective.

There’s been a lot of loose talk lately about hive minds. Most of it doesn’t live up to the hype. I got so irked by all that hyperbole— usually accompanied by a still from “The Matrix”, or a picture of Spock in the throes of a mind meld— that I spent a good chunk of my recent Aeon piece bitching about it. Most of these “breakthroughs”, I grumbled, couldn’t be properly described as hive consciousness or even garden-variety telepathy. I described it as the difference between experiencing an orgasm and watching a signal light on a distant hill spell out oh-god-oh-god-yes in Morse Code.

I had to allow, though, that it might be only a matter of time before you could scrape the hype off one of those stories and find some actual substance beneath. In fact, the bulk of my Aeon essay dealt with the implications of the day when all those headlines came true for real.

I think we might have just hit a milestone.


Here’s something else to try. Teach a bunch of thirsty rats to distinguish between two different sounds; motivate them with sips of water, which they don’t get unless they push the round lever when they hear “Sound 0” and the square one when they hear “Sound 1”.

Once they’ve learned to tell those sounds apart, turn them into living logic gates. Put ‘em in a daisy-chain, for example, and make them play “Broken Telephone”: each rat has to figure out whether the input is 0 or 1 and pass that answer on to the next in line. Or stick ‘em in parallel, give them each a sound to parse, let the next layer of rats figure out a mean response. Simple operant conditioning, right? The kind of stuff that was old before most of us were born.
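If you want to see why the wiring diagram matters, here’s a toy simulation — my own sketch, not anything from the paper, with a made-up 80% per-rat accuracy — of rats as noisy binary relays, daisy-chained versus polled in parallel:

```python
import random

random.seed(0)  # reproducibility

P_CORRECT = 0.8  # assumed per-rat accuracy; purely illustrative

def rat(bit, p_correct=P_CORRECT):
    """One rat as a noisy relay: pass the bit along, sometimes flipping it."""
    return bit if random.random() < p_correct else 1 - bit

def daisy_chain(bit, n_rats):
    """Broken Telephone: each rat relays the previous rat's answer."""
    for _ in range(n_rats):
        bit = rat(bit)
    return bit

def parallel_vote(bit, n_rats):
    """Parallel layer: every rat hears the same input; take the majority."""
    votes = sum(rat(bit) for _ in range(n_rats))
    return 1 if votes > n_rats / 2 else 0

trials = 10_000
serial_ok = sum(daisy_chain(1, 4) == 1 for _ in range(trials)) / trials
parallel_ok = sum(parallel_vote(1, 5) == 1 for _ in range(trials)) / trials
print(f"4-rat chain: {serial_ok:.2f}, 5-rat majority: {parallel_ok:.2f}")
```

Serial relays compound their errors (four 80% links land you well under 60%), while parallel majorities cancel them out (five voters push past 90%). Simple operant conditioning, but the topology does real work.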

Now move the stimulus inside. Plant it directly into the somatosensory cortex via a microelectrode array (ICMS, for “IntraCortical MicroStimulation”). And instead of making the rats press levers, internalize that too: another array on the opposite side of the cortex, to transmit whatever neural activity it reads there.

Call it “brainet”. Pais-Vieira et al do.

The paper is “Building an organic computing device with multiple interconnected brains”, from the same folks who brought you Overhyped Rat Mind Meld and Monkey Videogame Hive. In addition to glowing reviews from the usual suspects, it has won over skeptics who’ve decried the hype associated with this sort of research in the past. It’s a tale of four rat brains wired together, doing stuff, and doing it better than singleton brains faced with the same tasks. (“Split-brain patients outperform normal folks on visual-search and pattern-recognition tasks,” I reminded you all back at Aeon; “two minds are better than one, even when they’re in the same head”). And the payoff is spelled out right there in the text: “A new type of computing device: an organic computer… could potentially exceed the performance of individual brains, due to a distributed and parallel computing architecture”.

Bicameral Order, anyone? Moksha Mind? How could I not love such a paper?

And yet I don’t. I like it well enough. It’s a solid contribution, a real advance, not nearly so guilty of perjury as some.

And yet I’m not sure I entirely trust it.

I can’t shake the sense it’s running some kind of con.


The real thing. Sort of. (From Pais-Vieira et al 2015.)

There’s much to praise. We’re talking about an actual network, multiple brains in real two-way communication, however rudimentary. That alone makes it a bigger deal than those candy-ass one-direction set-ups that usually get the kids in such a lather.

In fact, I’m still kind of surprised that the damn thing even works. You wouldn’t think that pin-cushioning a live brain with a grid of needles would accomplish much. How precisely could such a crude interface ever interact with all those billions of synapses, configured just so to work the way they do? We haven’t even figured out how brains balance their books in one skull; how much greater the insight, how many more years of research before we learn how to meld multiple minds, a state for which there’s no precedent in the history of life itself?

But it turns out to be way easier than it looks. Hook a blind rat up to a geomagnetic sensor with a simple pair of electrodes, and he’ll be able to navigate a maze— using ambient magnetic fields— as well as any sighted sibling. Splice the code for the right kind of opsin into a mouse genome and the little rodent will be able to perceive colors she never knew before. These are abilities unprecedented in the history of the clade— and yet somehow, brains figure out the user manuals on the fly. Borg Collectives may be simpler than we ever imagined: just plug one end of the wire into Brain A, the other into Brain B, and trust a hundred billion neurons to figure out the protocols on their own.

Which makes it a bit of a letdown, perhaps, when every experiment Pais-Vieira et al describe comes down, in the end, to the same simple choice between 0 and 1. Take the very climax of their paper, a combination of “discrete tactile stimulus classification, BtB interface, and tactile memory storage” bent to the real-world goal of weather prediction. Don’t get too excited— it was, they admit up front, a very simple exercise. No cloud cover, no POP, just an educated guess at whether the chance of rain is going up or down at any given time.

Hey, can’t be any worse than the weather person on CBC’s morning show…

The front-end work was done by two pairs of rats wired into “dyads”; one dyad was told whether temperature was increasing (0) or decreasing (1), while the other was told the same about barometric pressure. If all went well, each simply spat out the same value that had been fed into it; they were then reintegrated into the full-scale 4-node brainet, which combined those previous outputs to decide whether the chance of precip was rising or falling. It was exactly the same kind of calculation, using exactly the same input, that showed up in other tasks from the same paper; the main difference was that this time around, the signals were labeled “temperature rising” or “temperature falling” instead of 0 and 1. No matter. It all still came down to another encore performance of Brainet’s big hit single, “Torn Between Two Signals”, although admittedly they played both acoustic and electric versions in the same set.

I’m aware of the obvious paradox in my attitude, by the way. On the one hand I can’t believe that such simple technology could work at all when interfaced with living brains; on the other hand I’m disappointed that it doesn’t do more.

I wonder how brainet would resolve those signals.


Of course, Pais-Vieira et al did more than paint weather icons on old variables. They ran brainet through other paces— that “broken telephone” variant I mentioned, for example, in which each node in turn had to pass on the signal it had received until that signal ended up back at the first rat in the chain— who (if the run was successful) identified the serially-massaged signal as the same one it had started out with. In practice, this worked 35% of the time, a significantly higher success rate than the 6.25%— four iterations, 50:50 odds at each step— you’d expect from random chance. (Of course, the odds of simply getting the correct final answer were 50:50 regardless of how long the chain was; there were only two states to choose from. Pais-Vieira et al must have tallied up correct answers at each intermediate step when deriving their stats, because it would be really dumb not to; but I had to take a couple of passes at those paragraphs, because at least one sentence—

“the memory of a tactile stimulus could only be recovered if the individual BtB communication links worked correctly in all four consecutive trials.”

— was simply wrong. Whatever the merits of this paper, let’s just say that “clarity” doesn’t make the top ten.)
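The arithmetic behind that baseline, at least, is easy to check — assuming (as the authors presumably did) that the four links are independent and equally accurate:

```python
# Chance baseline for the four-link chain: each 50:50 link must get it right.
p_chance_link = 0.5
n_links = 4
p_chance_chain = p_chance_link ** n_links
print(p_chance_chain)  # 0.0625 — the 6.25% quoted above

# The observed 35% end-to-end success rate implies a per-link fidelity of
# roughly 0.35 ** (1/4): each BtB hop got it right about 77% of the time.
p_link_implied = 0.35 ** (1 / n_links)
print(round(p_link_implied, 3))
```

That ~77% per-hop figure assumes all four links performed identically, which — given the clarity issues above — is itself a guess.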

What the rats saw. Ibid.

More nodes, better results. Ibid.

The researchers also used brainet to transmit simple images— again, with significant-albeit-non-mind-blowing results— and convincingly showed that general performance improved with a greater number of brains in the net. On the one hand I wonder if this differs in any important way from simply polling a group of people with a true-false question and going with the majority response; wouldn’t that also tend towards greater accuracy with larger groups, simply because you’re drawing on a greater pool of experience? Is every Gallup focus group a hive mind?

On the other hand, maybe the answer is: yes, in a way. Conventional neurological wisdom describes even a single brain as a parliament of interacting modules. Maybe group surveys are exactly how hive minds work.
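That intuition has a name, by the way— Condorcet’s jury theorem: as long as each voter is independent and even slightly better than chance, majority accuracy climbs toward certainty as the group grows. A quick sketch (mine, not the paper’s):

```python
from math import comb

def majority_accuracy(p, n):
    """Chance that a majority of n independent voters (n odd), each correct
    with probability p, gets a binary question right."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# A modest 60% individual accuracy compounds quickly with group size.
for n in (1, 3, 9, 27):
    print(n, round(majority_accuracy(0.6, n), 3))
```

Whether that makes every focus group a hive mind is, as noted, another question.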


So you cut them some slack. You look past the problematic statements because you can figure out what they were trying to say even if they didn’t say it very well. But the deeper you go, the harder it gets. We’re told, for example, that Rat 1 has successfully identified the signal she got from Rat 4— but how do we know that? Rat 4, after all, was only repeating a signal that originated with Rat 1 in the first place (albeit one relayed through two other rats). When R1’s brain says “0”, is it parsing the new input or remembering the old?

Sometimes the input array is used as a simple starting gun, a kick in the sulcus to tell the rats Ready, set, Go: sync up! Apparently the rat brains all light up the same way when that happens, which Pais-Vieira et al interpret as synchronization of neural states via Brain-to-Brain interface. Maybe they’re right. Then again, maybe rat brains just happen to light up that way when spiked with an electric charge. Maybe they were no more “interfaced” than four flowers, kilometers apart, who simultaneously turn their faces toward the same sun.

Ah, but synchronization improved over time, we’re told. Yes, and the rats could see each other through the plexiglass, could watch their fellows indulge in the “whisking and licking” behaviors that resulted from the stimulus. (I’m assuming here that “whisking” behavior has to do with whiskers and not the making of omelets, which would be a truly impressive demonstration of hive-mind capabilities.) Perhaps the interface, such as it was, was not through the brainet at all— but through the eyes.

I’m willing to forgive a lot of this stuff, partly because further experimentation resolves some of the ambiguity. (In one case, for example, the rats were rewarded only if their neural activity desynchronised, which is not something they’d be able to do without some sense of the thing they were supposed to be diverging from.) Still, the writing— and by extension, the logic behind it— seems a lot fuzzier than it should be. The authors apparently recognize this when they frankly admit

“One could argue that the Brainet operations demonstrated here could result from local responses of S1 neurons to ICMS.”

They then list six reasons to believe otherwise, only one of which cuts much ice with me (untrained rats didn’t outperform random chance when decoding input). The others— that performance improved during training, that anesthetized or inattentive animals didn’t outperform chance, that performance degraded with reduced trial time or a lack of reward— suggest, to me, only that performance was conscious and deliberate, not that it was “nonlocal”.

Perhaps I’m just not properly grasping the nuances of the work— but at least some of that blame has to be laid on the way the paper itself is written. It’s not that the writing is bad, necessarily; it’s actually worse than that. The writing is confusing— and sometimes it seems deliberately so. Take, for example, the following figure:

Alone against the crowd. Ibid.

Four rats, their brains wired together. The red line shows the neural activity of one of those rats; the blue shows mean neural activity of the other three in the network, pooled. Straightforward, right? A figure designed to illustrate how closely the mind of one node syncs up with the rest of the hive.

Of course, a couple of lines weaving around a graph aren’t what you’d call a rigorous metric: at the very least you want a statistical measure of correlation between Hive and Individual, a hard number to hang your analysis on. That’s what R is, that little sub-graph inset upper right: a quantitative measure of how precisely synced those two lines are at any point on the time series.

I mean, Jesus, Miguel. What are you afraid of? See how easy it is?

So why is the upper graph barely more than half the width of the lower one?

The whole point of the figure is to illustrate the strength of the correlation at any given time. Why wouldn’t you present everything at a consistent scale, plot R along the same ruler as FR so that anyone who wants to know how tight the correlation is at time T can just see it? Why build a figure that obscures its own content until the reader surrenders, grabs a ruler, and back-converts by hand?

What are you guys trying to cover?


Some of you have probably heard of the Dr. Fox Hypothesis. It postulates that “An unintelligible communication from a legitimate source in the recipient’s area of expertise will increase the recipient’s rating of the author’s competence.” More clearly, Bullshit Baffles Brains.

But note the qualification: “in the recipient’s area of expertise”. We’re not talking about some Ph.D. bullshitting an antivaxxer; we’re talking about an audience of experts being snowed by a guy speaking gibberish in their own field of expertise.

In light of this hypothesis, it shouldn’t surprise you that controlled experiments have shown that wordy, opaque sentences rank more highly in people’s minds than simple, clear ones which convey the same information. Correlational studies report that the more prestigious a scientific journal tends to be, the worse the quality of the writing you’ll find therein. (I read one first-hand account of someone who submitted his first-draft manuscript— which even he described as “turgid and opaque”— to the same journal that had rejected the much-clearer 6th draft of the same paper. It was accepted with minor revisions.)

Pais-Vieira et al appears in Nature’s “Scientific Reports”. You don’t get much more prestigious than that.

So I come away from this paper with mixed feelings. I like what they’ve done— at least, I like what I think they’ve done. From what I can tell the data seem sound, even behind all the handwaving and obfuscation. And yet, this is a paper that acts as though it’s got something to hide, that draws your attention over here so you won’t notice what’s happening over there. It has issues, but none are fatal so far as I can tell. So why the smoke and mirrors? It’s like being told a wonderful secret by a used-car salesman.

These guys really had something to say.

Why didn’t they just fucking say it?




(You better appreciate this post, by the way. Even if it is dry as hell. It took me 19 hours to research and write the damn thing.)

(I ought to put up a paywall.)

Posted in: neuro, relevant tech by Peter Watts 15 Comments

Spock the Impaler: A Belated Retrospective on Vulcan Ethics.

When I first wrote these words, the Internet was alive with the death of Leonard Nimoy. I couldn’t post them here, because Nowa Fantastyka got them first (or at least, an abridged version thereof), and there were exclusivity windows to consider. As I revisit these words, though, Nimoy remains dead, and the implications of his legacy haven’t gone anywhere. So this is still as good a time as any to argue— in English, this time— that any truly ethical society will inevitably endorse the killing of innocent people.

Bear with me.

As you know, Bob, Nimoy’s defining role was that of Star Trek‘s Mr. Spock, the logical Vulcan who would never let emotion interfere with the making of hard choices. This tended to get him into trouble with Leonard McCoy, Trek‘s resident humanist. “If killing five saves ten it’s a bargain,” the doctor sneered once, in the face of Spock’s dispassionate suggestion that hundreds of colonists might have to be sacrificed to prevent the spread of a galaxy-threatening neuroparasite. “Is that your simple logic?”

The logic was simple, and unassailable, but we were obviously supposed to reject it anyway. (Sure enough, that brutal tradeoff had been avoided by the end of the episode[1], in deference to a TV audience with no stomach for downbeat endings.) Apparently, though, it was easier to swallow 16 years later, when The Wrath of Khan rephrased it as “The needs of the many outweigh the needs of the few”. That time it really caught on, went from catch-phrase to cliché in under a week. It’s the second-most-famous Spock quote ever. It’s so comforting, this paean to the Greater Good. Of course, it hardly ever happens— here in the real world, the needs of the few almost universally prevail over those of the many— but who doesn’t at least pay lip-service to the principle?

Most of us, apparently:

“…progress isn’t directly worth the life of a single person. Indirectly, fine. You can be Joseph Stalin as long as you don’t mean to kill anyone. Bomb a dam in a third world shit-hole on which a hundred thousand people depend for water and a thousand kids die of thirst but it wasn’t intentional, right? Phillip Morris killed more people than Mao but they’re still in the Chamber of Commerce. Nobody meant for all those people to die drowning in their own blood and even after the Surgeon General told them the inside scoop, they weren’t sure it caused lung cancer.

“Compare that to the risk calculus in medical research. If I kill one person in ten thousand I’m shut down, even if I’m working on something that will save millions of lives. I can’t kill a hundred people to cure cancer, but a million will die from the disease I could have learned to defeat.”

I’ve stolen this bit of dialog, with permission, from an aspiring novelist who wishes to remain anonymous for the time being. (I occasionally mentor such folks, to supplement my fantastically lucrative gig as a midlist science fiction author.) The character speaking those words is a classic asshole: arrogant, contemptuous of his colleagues, lacking any shred of empathy.

And yet, he has a point.

He’s far from the first person to make it. The idea of the chess sacrifice, the relative value of lives weighed one against another for some greater good, is as old as Humanity itself (even older, given some of the more altruistic examples of kin selection that manifest across the species spectrum). It’s a recurrent theme even in my own fiction: Starfish sacrificed several to save a continent, Maelstrom sacrificed millions to save a world (not very successfully, as it turns out). Critics have referred to the person who made those calls as your typical cold-blooded bureaucrat, but I always regarded her as heroic: willing to make the tough calls, to do what was necessary to save the world (or at least, increase the odds that it could be saved). Willing to put Spock’s aphorism into action when there is no third alternative.

And yet I don’t know if I’ve ever seen The Needs of the Many phrased quite so starkly as in that yet-to-be-published snippet of fiction a few paragraphs back.

Perhaps that’s because it’s not really fiction. Tobacco killed an estimated 100 million throughout the 20th Century, and— while society has been able to rouse itself for the occasional class-action lawsuit— nobody’s ever been charged with Murder by Cigarette, much less convicted. But if your struggle to cure lung cancer involves experiments that you know will prove fatal to some of your subjects, you’re a serial killer. What kind of society demonizes those who’d kill the Few to save the Many, while exempting those who kill the Many for no better reason than a profit margin? Doesn’t Spock’s aphorism demand that people get away with murder, so long as it’s for the greater good?

You’re not buying it, are you? It just seems wrong.

I recently hashed this out with Dave Nickle over beers and bourbons. (Dave is good for hashing things out with; that’s one of the things that make him such an outstanding writer.) He didn’t buy it either, although he struggled to explain why. For one thing, he argued, Big Tobacco isn’t forcing people to put those cancer sticks in their mouths; people choose for themselves to take that risk. But that claim gets a bit iffy when you remember that the industry deliberately tweaked nicotine levels in their product for maximum addictive effect; they did their level best to subvert voluntary choice with irresistible craving.

Okay, Dave argued, how about this: Big Tobacco isn’t trying to kill anyone— they just want to sell cigarettes, and collateral damage is just an unfortunate side effect. “Your researcher, on the other hand, would be gathering a group of people— either forcibly or through deception— and directly administering deadly procedures with the sure knowledge that one or more of those people would die, and their deaths were a necessary part of the research. That’s kind of premeditated, and very direct. It is a more consciously murderous thing to do than is selling tobacco to the ignorant. Hence, we regard it as more monstrous.”

And yet, our researchers aren’t trying to kill people any more than the tobacco industry is; their goal is to cure cancer, even though they recognize the inevitability of collateral damage as— yup, just an unfortunate side effect. To give Dave credit, he recognized this, and characterized his own argument as sophistry— “but it’s the kind of sophistry in which we all engage to get ourselves through the night”. In contrast, the “Joseph Mengele stuff— that shit’s alien.”

I think he’s onto something there, with his observation that the medical side of the equation is more “direct”, more “alien”. The subjective strangeness of a thing, the number of steps it takes to get from A to B, are not logically relevant (you end up at B in both cases, after all). But they matter, somehow. Down in the gut, they make all the difference.

I think it all comes down to trolley paradoxes.

You remember those, of course. The classic example involves two scenarios, each involving a runaway trolley headed for a washed-out bridge. In one scenario, its passengers can only be saved by rerouting it to another track—where it will kill an unfortunate lineman. In the other scenario, the passengers can only be saved by pushing a fat person onto the track in front of the oncoming runaway, crushing the person but stopping the train.

Ethically, the scenarios are identical: kill one, save many. But faced with these hypothetical choices, people’s responses are tellingly different. Most say it would be right to reroute the train, but not to push the fat person to their death— which suggests that such “moral” choices reflect little more than squeamishness about getting one’s hands dirty. Reroute the train, yes— so long as I don’t have to be there when it hits someone. Let my product kill millions— but don’t put me in the same room with them when they check out. Let me act, but only if I don’t have to see the consequences of my action.

Morality isn’t ethics, isn’t logic. Morality is cowardice— and while Star Trek can indulge The Needs of the Many with an unending supply of sacrificial red shirts, here in the real world that cowardice reduces Spock’s “axiomatic” wisdom to a meaningless platitude.

The courage of his convictions.

Trolley paradoxes can take many forms (though all tend to return similar results). I’m going to leave you with one of my favorites. A surgeon has five patients, all in dire and immediate need of transplants— and a sixth, an unconnected out-of-towner who’s dropped in unexpectedly with a broken arm and enough healthy compatible organs to save everyone else on the roster.

The needs of the many outweigh the needs of the few. Everyone knows that much. Why, look: Spock’s already started cutting.

What about you?



[1] “Operation: Annihilate!”, by Steven W. Carabatsos. In case you were wondering.

Sweet Justice. (And puppets.)

According to Rule 34, someone is getting off on this.

Today’s opening act is a left-over I forgot to include in that last post: a bit of flesh sculpture I was not allowed to show off in “Pones & Bones” because it would have risked spoiling a yet-to-be-aired episode of “Hannibal”. That episode recently aired, though, so the embargo is lifted. Behold: the hoofed, flayed, and headless wonder that I have christened Hoofnibal, both under construction at Mindwarp workshop (right) and during its formal debut in the episode “Primavera” (below).

I would like to emphasize that there is no CGI in the sequence: Will’s hallucination is a puppet, moving in real time on the set. Let’s hear it for Practical FX.


More to the point, though: Let’s also hear it for The BUG!

A wee bit of background. Early in our courtship, Caitlin Sweet referred to me as “A DOOFUS” (the caps are hers). Stung, I could only reply “That’s Dr. Doofus to you, Unicorn Girl“— which was a not-too-subtle reminder that I write hard-as-nails SF while she writes fluffy rainbow fantasy.

The thing is, though, Caitlin does not write fluffy rainbow fantasy. The only rainbows you’re likely to see in her novels are those that swirl across the oily film on an open sewer. The Pattern Scars begins with its protagonist, a young girl called Nola, going into a trance at the sight of a bloodstain; the next day her mother sells her to the local brothel as a seer. It gets worse from there. (Oh, it seems to get better for a little while. It seems to get suspiciously, unbelievably better, even. But no. Way worse.) I like to think of myself as Captain Stoneface when it comes to my emotional vulnerability to most fiction; I literally teared up at the end of The Pattern Scars.

Caitlin turns tropes inside out. The Pattern Scars, at its heart, is an inversion of the Cassandra myth: instead of a seer whose truthful prophecies are never believed, Caitlin gives us one doomed to prophesy lies which are always accepted as gospel. The Door in the Mountain— part one of a two-parter which concludes with the imminent The Flame in the Maze— retells the Theseus myth through the eyes of an Ariadne who (in a bizarro twist on the sweet hapless innocence of her archetype) is a manipulative sadist driven by rage and jealousy. The supporting cast might best be described as the twisted love-children of Davids Lynch and Cronenberg (Icarus and Daedalus are two personal favorites). Caitlin is way closer to Martin than to Tolkien; the last thing you can call her is “Unicorn Girl”.

Is this not exactly the face that comes to mind when you imagine a female George RR Martin? (Photo: Martin Springett)

Which is, of course, exactly why she enthusiastically embraced the term the moment she saw it (although the official acronym is BUG— Beloved Unicorn Girl— because “UG” lacks the appropriate resonance. Also: Bed BUG).

My point is: Caitlin’s stuff is gritty, gorgeous, and unsentimental. If it contains anything even approaching cliché, you can be assured that that element exists only to be subverted or blown from the water at a later date. She does not do happy endings; the most you’ll get is an ambiguous one.

Did I mention that Erik Mohr’s cover art is also up for an Aurora?

All of which means she’s not the kind of fantasy author the YA market is likely to swoon over. I think we’ve both lost count of the agents and publishers who’ve turned her down with some variant of You’re a brilliant, brilliant writer but your protagonist is so unlikeable: can’t you make her more like Hermione from Harry Potter?

No. No she can’t, you fucking idiots. She does not write to market. She has never once said I’m going to add a perky sidekick so the popcorn set doesn’t get away. All that matters to the BUG, when she’s writing, is whether the story works the way it’s supposed to. Whether it meets her standards.

And so her stuff gets ignored. Teenyboppers who stumble across it in search of the latest medieval fantasy with a plucky female protagonist scratch their heads and leave, their stomachs vaguely unsettled. When critics find it, they rave; but that doesn’t happen nearly as often as it should.

So I am very glad to point out that Caitlin Sweet’s The Door in the Mountain is a finalist for the Sunburst Award, YA category. That category, I think, is misplaced; but the recognition is not. It is, not to put too fine a point on it, About Fucking Time. And I can say this without fear of vote-skewing, because the award is juried.

Yeah, of course I’m biased. Of course she’s my wife. But she wasn’t always.

Why do you think I fell in love with her in the first place?


Posted in: ink on art, writing news by Peter Watts 12 Comments

Space Invaders.

So, a few assorted and domestic pictures with which to see out the week. To your right, as promised a few weeks back, some Rifters-based fan art from “Toa-lagara” over at Deviant Art (and also, now, in the Rifters Gallery, with her permission). Russians do dark art so beautifully. Immediately below, a special-edition enhanced appearance of Philippe Jozelon’s evocative Echopraxie cover for Fleuve (my French publishers). Interesting side note: the French edition is dedicated to “MICROBE. Qui m’a sauvé la vie” (“Who saved my life”). I know at least some of you will get the joke.

I remember writing this very scene.

I remember writing this very scene. (Click to embiggen.)


Now with 100% fewer distracting alphanumerics! (Click to embiggen.)

And finally…

This is pretty much a typical summer evening on the porch of the Magic Bungalow.

This is “Silverpaw”, aka “TP” because he first came to us with what appeared to be toilet paper stuck on his butt. (You can still see a bit of it stuck to his left flank.)

The sock-clad foot is mine.

Silverpaw is without a doubt the most fearless of the bunch. You do not fuck with Silverpaw.

At approximately 21:58 on the evening of June 18, 2015, while we were watching back episodes of “Bob’s Burgers”, Silverpaw figured out how to open the front door.

He made it as far as the Ponearium before we managed to lure him out the back. We locked the doors.

At approximately 22:02, Silverpaw was back inside. (Photo credit: Micropone Rossiter)

This may be our last transmission.


Posted in: art on ink, misc by Peter Watts 19 Comments

Gallo’s Humor.

Ah Jeez, here we go again.

The gun, it smokes.

The weird thing is, I completely see where Irene Gallo was coming from. I sympathize. I know what it’s like to see the assholes piling up outside the gate, to roll your eyes and shake your head at the inanities and the outright lies— even though it’s obvious that rolling your eyes and shaking your head accomplishes nothing, that reasoned argument accomplishes nothing because those guys didn’t arrive at their positions through reason. Hell, I myself— on this very ‘Crawl— have gleefully fantasized about Stephen Harper getting gunned down in the street, about Liz Cheney’s entrails being strung along a barbed-wire fence.

I get it. Sometimes you just blow up. It’s human. It’s natural.

Still. If we always did whatever came naturally, the only reason I wouldn’t have bashed in a few hundred skulls by now would be because someone else would have bashed in mine before I even hit puberty. Humanity comes with all sorts of primal impulses as standard equipment; I imagine many of Gallo’s defenders would not be especially happy if we let all those drives off the leash just because they were “natural”. One of the first things we point to when lauding Human exceptionalism is our ability to restrain our impulses. And if we fail sometimes— as we’re inevitably bound to— at the very least we can try to walk it back afterward.

So I can see myself in Irene Gallo’s shoes. And if I actually found myself there, I like to think I’d say certain things when those whom I’d intemperately described as Nazis or racists raised their hands to claim that they’d fought against Apartheid during their youth in South Africa, or that they were rabbis, or that they’d exchanged actual gunfire with the brownshirts:

“Holy shit,” (I like to think I’d say,) “You’re right. It’s just— I really hate these guys, you know? And the bile’s been building up for a while now, and when I got that question everything just kind of exploded over the keyboard. I think my anger’s justified, but it called for a sniper rifle and I used a sawed-off shotgun. I really stepped over the line. This is me, stepping back, with apologies to those I impugned.”

What I would not have done, when challenged, is post a series of inane cat photos with the caption KITTEH! emblazoned across the top (although granted, Gallo did dial it back to “kitteh?” after a few iterations, when her strategy did not appear to be having the desired effect).

Things kind of went downhill from there. The internet— or at least, this little genre bubble thereof— blew up again, loud enough for the Daily Dot to notice way out in the real world. Tom Doherty stuck up a boilerplate disclaimer and was immediately vilified for being A) a misogynist asshole because he publicly reprimanded Irene Gallo when he should have given her a medal for speaking Truth to Power, and also for being B) a left-wing libtard pussy who gave Irene Gallo a slap on the wrist when she should have been fired outright. Gallo herself issued one of those boilerplate fauxpologies whose lineage hearkens all the way back to the ancestral phrase “mistakes were made”. None of it seemed to help much.

Blowing up is not the only thing that comes naturally to humans. Tribalism is in there too.

Before we go any further, let me just cover my ass with a disclaimer of my own: I am no great supporter of puppies, regardless of temperament. (Any regular on this blog already knows the kinds of furry quadruped who own my heart.) I understand that of the two breeds under consideration, the Rabids are far more extreme and downright toxic; Theodore Beale, judging by some of his pithier quotes, seems to be Benjanun Sriduangkaew’s bizarro twin, separated at birth. The Sads, in contrast, have enough legitimacy to warrant at least respectful disagreement and engagement from the likes of George Martin and Eric Flint; they have also distanced themselves from their more diseased cousins (although the point that the final Hugo ballot is more representative of the Rabid slate than the Sad one is well-taken). Even so, I don’t find even the Sad Puppies’ arguments especially meritorious.

So let there be no mistake here: I come not to praise Puppies.

I come to bury the rest of you.


As a former marine mammalogist, I feel especially qualified to pass judgment on this meme. Am I the only one who finds it questionable that the heroes of the piece seem to be the Victorian couple who just want to express their bigotry in peace, while the villain is the disenfranchised Otariid who politely challenges their prejudice with a request for evidence?

Eric Flint put forth the most reasonable take I’ve yet seen on why Gallo misstepped. Over on io9 and elsewhere, a lot of people don’t buy it. They’ve made a number of arguments and hurled a number of insults, perhaps the dumbest of which was accusing someone of “sea-lioning” after they’d asked a single, on-point question. (The alleged sea-lion also claimed to be a part-time rabbi, so— assuming, as always, that we can take such claims at face value— you can understand how the whole Nazi-sympathizer thing might not go over especially well.) A lot of other claims were made repeatedly, though. Some, in fact, were repeated often enough to warrant their own subtitles:


You Can’t Handle the Truth

Doherty threw Gallo under the bus [get used to that phrase— it shows up 21 times under Doherty’s post alone, which is a bit ironic given the number of people complaining there about the suspicious similarity of the puppy-sympathisers’ talking points]. He handed a victory to the Puppies when he should have backed her up for having the courage to tell the truth— and everyone knows it’s the truth because noun, verb, Vox Day.

Let’s ignore for the moment the hordes of sad-puppy sympathizers who’ve come out of the woodwork claiming to be anti-apartheid activists, Jews, people of color, married to people of color, queer, veterans— and who do not like being stuck on the same planet as Vox Day, much less the same political clade. I suppose you could call bullshit on most of them— this wouldn’t even be a proper internet argument if accusations of misrepresentation and sock-puppetry weren’t part of the background noise. So let’s set those personal testimonials aside for the moment, and consider a different fact:

Back when the Puppies first seized control of the bridge, Entertainment Weekly (and, I’m pretty sure, The Guardian, although I can’t find the pre-edited version online— maybe I’m thinking Salon) published remarks about the Puppies that were actually milder than Gallo’s. Within hours, it had deleted those remarks and published a meek, surprisingly unconditional retraction which described their own coverage as “unfair and inaccurate”. It was, in tone and content, quite similar to Tom Doherty’s more recent remarks.

I don’t know any Puppies. I don’t know if the people speaking out on their behalf are grass-roots or astroturf (although they can’t all be sock puppets— the gender, ethnicity, and partnerships of some of these folks are a matter of public record, and they’re not all straight white dudes). But I can only assume that these retractions occurred as a response to considered legal opinion. And the fact that different corporations caved so completely, printing such similar apologies, suggests to me that Irene Gallo’s “truth” was, at the very least, legally actionable. This is not a characteristic that usually accrues to Truth, outside Spanish Inquisitions.


The “Personal Space” Perspective.

Well, even if Gallo misspoke, she was just expressing a personal opinion on her personal facebook page. Tor had no right to censor what their employees say and do on their own personal time.

There’s gotta be a word for that— you know, for selecting the negative attributes of a few people you hate on a personal level, and projecting those traits onto an entire demographic. I only wish I could remember what it was…


Go check out Irene Gallo’s personal facebook page. Most of the posts there consist of pimpage for Tor artists, cover reveals for upcoming Tor releases, various bits of Tor-related news, and genre award links. Hell, the very post that got her wrist slapped was a promo for Kameron Hurley’s The Geek Feminist Revolution, soon to be available from (you guessed it) Tor: and the heading she chose to capture eyeballs was “Making the Sad Puppies Sadder— proud to have a tiny part of this”.

The time stamp on that post reads Monday, 11 May 2015, 14:14.

I don’t think there’s anything wrong with using your personal facebook page as a delivery platform for employer pimpage. I think people should feel free to blur the line between their personal and professional lives until the two are nigh-on indistinguishable, if they like. But having erased those boundaries, you don’t get to reassert them at your convenience. And if anyone tries to claim, after the fact, that on this one occasion you weren’t really presenting yourself as a corporate spokesperson— especially when said occasion involves an advertisement for a company product, posted during work hours, presumably while sitting at your work desk— the demographic who takes this claim at face value will be either very small, or very stupid.

Evidently it was that second thing.


The Sexism Scenario

Isn’t it curious how Tor never feels the need to do anything when their male authors say more extreme things than Gallo ever did [Scalzi and Wright and Card get cited a lot in this regard, although I saw at least one lost soul wanting to know why Tor wasn’t calling out Vox Day]. Isn’t it telling how that Frenkel guy got away with harassing women for years before Tor cut him loose— but a woman makes one intemperate comment and they throw her under the bus? Misogyny much?

First, can we at least agree that Jim Frenkel’s tenure at Tor would have been over pretty much the moment he went onto facebook to proudly post selfies of his ongoing harassment of women? He lasted as long as he did because he committed his offenses in the shadows, where they could be more safely ignored by Corporate.

Tor is a colony organism; its fitness is defined in terms of profit margin. Like all corporate entities, it’s at least partially sociopathic. Its immune system responds most emphatically to threats that endanger its bottom line— which, almost by definition, means public threats. I think that anyone who regards Doherty’s response as an act of sexism is looking at the world through polarized lenses; to me, this reads above all else like an act of damage control. If Gallo had been male, I believe Tor’s reaction would have been the same.

As for those who somehow seem to think that authors are employees— that Tor’s legal liability extends not just to what Irene Gallo posts from her office computer during work hours, but to everything posted by anyone Tor has ever published— all I can say is, you’ve been seriously misinformed about the nature of the sacred bond between author and publisher. (Or maybe I have— maybe I should be complaining about Tor’s failure to provide me with health insurance and a regular paycheck.)

At the very least, you should have boycotted those guys the moment they started publishing Orson Scott Card.


Of course, Tom Doherty is not the only one to have come in for a world o’Twitter Rage. Much ire, as always, is directed at the Puppies themselves— much of it justified, in my opinion. But I’m not writing this to jump on that particular bandwagon, nor do I need to; you can’t swing a cat these days without hitting someone’s list of puppycrimes.

The hypocrisy of certain Gallonites, however, doesn’t seem to be getting nearly as much attention (at least, not here in the Civilized World; the Puppies may be all over it, but I tend to avoid those territories).  I’ve seen Sad Puppies go out of their way to distance themselves from the rabid end of the spectrum:

“Vox Day is an A-hole. As a Sad Puppy, I had to look him up on Google.”

— only to get shot down:

“The fact that you joined a movement without adequately understanding what its leaders stood for, compounded by the fact that you continue to identify with that movement AFTER you’ve seen ample evidence of what they stand for, inclines me to give you zero credibility on this issue.”


“you are supporting [Beale’s] agenda.  That makes those who support culpable.  If they didn’t want to be associated with that reprehensible excuse for a human being, they should not have stood to be counted with him.”

Turn this argument around and see how you like it.

Imagine being told that you had no business advocating for social justice issues because you didn’t know about— oh, say, Requires Hate— prior to signing up. Imagine being told with a straight face— nay, with a righteously angry face— that you have “zero credibility” because you continue to advocate for social justice issues, even after learning of that vile creature’s existence.

Yeah, I know RH didn’t start the movement. She merely exploited it. But the analogy holds where it needs to: RH was, in her day, a significant player in the SJ scene, with allies who extended (and, as far as I can tell, continue to extend) into the halls of Tor itself. She was relatively central for such a decentralized movement— but she did not speak for everyone. If anyone told you that you couldn’t advocate for social justice without also supporting RH, how would you respond?

(As a side note, it’s nice to see RH’s influence so greatly diminished in recent months. She still spews the same BS— although her favored target seems to have shifted to “racist white women” in the wake of Laura Mixon’s report— but to far less effect. Think Saruman, reduced to whining in the Shire after being kicked out of Isengard. RH might even provide a valuable social service these days, functioning as a sort of rhetorical flypaper for idiots. As long as they stick to her, the rest of us can get on with our lives.)


Another common talking point is the obvious timing of this whole blow-out, of the fact that Beale sat on his screen-grab for weeks before releasing the hounds just prior to the Nebula Awards. This was manufactured outrage over phantom pain. Nobody was really hurt by Gallo’s comments; they were nothing but a convenient foothold from which to launch an attack.

Well, duh.

Beale is the enemy. That’s what enemies do, if they’re smart; they keep their powder dry. That’s one of the things that makes them enemies, for chrissake. That obvious fact should make it less advisable to play into their hands. Gallo said what she said— and to all those who’d say Jeez, let it go— that was four whole weeks ago, I’d answer Fine: why hasn’t the statute of limitations passed on all those Beale quotes I keep seeing, all of which are much older?

Not that I’m excusing Beale, mind you. I personally have a hard time believing that anyone could make some of his claims with a straight face. (White men don’t rape, so mistrust the victim unless she’s accusing a Black or Hispanic?) Maybe he’s just being ironic, although I’m more inclined to regard such statements as batshit insane. Either way, I’d laugh in the face of anyone who tried to impose a statute of limitations on Theodore Beale quotes; I suspect most of you would as well. By that same token, neither do we get to declare Gallo’s remarks off-limits after a measly month.

I imagine a number of you are already objecting to this equivalence on the grounds that Gallo’s single comment, ill-advised though it may have been, doesn’t come anywhere close to the levels of offensiveness that Theodore Beale manages even on a mild day. I tend to agree. I thought Gallo’s comment fell pretty wide of the mark, but I personally didn’t find anything especially offensive about it.

Then again, I’m not a Jewish person who’s been told he’s in bed with Nazis. It may be wise to defer to such people in matters of offense given and received.


Over the past few days I’ve sampled a fair number of blog posts and editorials dealing with Gallogate. I’ve recognized a number of the folks who’ve posted comments there, who’ve “liked” the relevant links and rejoinders sliding down my Facebook wall. Some I know only from their handles, when they’ve posted here on the ‘Crawl; others are personal friends.

They all support Irene Gallo.

I would too, if she’d only stood up and offered an apology that didn’t read as though it had been crafted by corporate mealworms. She fucked up; we all do, sometimes. She played into enemy hands. It was a minor and a momentary slip. But the real fuck-up was in how she and her supporters dealt with the aftermath.

There are good reasons to repudiate Puppies. There are legitimate arguments to be made against both Sad and (especially) Rabid breeds— which makes it all the more frustrating that so much of what I’ve seen lately boils down to dumb, naked tribalism. Fallacies that would be instantly derided if made by the other side become gospel; any who question are presumed to be With The Tewwowists (or more precisely, the sea lions). I’m reminded of my own observation back when the Mixon report came out: we’re not a community at all. We’re a bunch of squabbling tribes fighting over the same watering hole.

I didn’t want to write this. There’s so much other nifty stuff to talk about. Preserved soft tissue in dinosaur fossils, reported the same week “Jurassic World” premieres. Island nations, finally suing the Fossil Fuel industry for compensation over habitat loss due to climate change. And I still haven’t got around to writing my epic comparison of “Fury Road” and “Kingsman”.

It would have been a lot more fun to write about any of that. But this is just fucked. So many people bend the data to support foregone conclusions; so few put their conclusions on hold until they’ve followed those data to see where they might lead. So much gut reaction. So little neocortical involvement.

Judging by past experience, I could lose some fans over this. There’s even a chance I could lose actual friends (although I think most of the opportunists masquerading as friends got exposed the last time I took an unpopular stand on something). Which, if you look at it a certain way, is a good thing; it would add evidence to my argument about the evils of mindless groupthink. But here it is, for better or worse. I’ve never been much for bandwagons.

Unless I build them myself, I guess.



Posted in: rant by Peter Watts

The 21-Second God.



We lost fifteen million souls that day.

Fifteen million brains sheathed in wraparound full-sensory experience more real than reality: skydiving, bug-hunting, fucking long-lost or imaginary lovers whose fraudulence was belied only by their perfection. Gang-bangs and first-person space battles shared by thousands— each feeding from that trickle of bandwidth keeping them safely partitioned one from another, even while immersed in the same sensations. All lost in an instant.

We still don’t know what happened.

The basics are simple enough. Any caveman could tell you what happens when you replace a dirt path with a twenty-lane expressway: bandwidth rises, latency falls, and suddenly the road is big enough to carry selves as well as sensation. We coalesces into a vast and singular I. We knew those risks. That’s why we installed the valves to begin with: because we knew what might happen in their absence.

But we still don’t know how all those safeguards failed at the same time. We don’t know who did it (or what— rumors of rogue distributed AIs, thinking microwave thoughts across the stratosphere, have been neither confirmed nor denied). We’ll never know what insights arced through that godlike mind-hive in the moments it took to throw the breakers, unplug the victims, wrest back some measure of control. We’ve spent countless hours debriefing the survivors (those who recovered from their catatonia, at least); they told us as much as a single neuron might, if you ripped it out of someone’s head and demanded to know what the brain was thinking.

Those lawsuits launched by merely human victims have more or less been settled. The others— conceived, plotted, and put into irrevocable motion by the 21-Second God in those fleeting moments between emergence and annihilation— continue to iterate across a thousand jurisdictions. The first motions were launched, the first AIgents retained, less than ten seconds into Coalescence. The rights of mayfly deities. The creation and the murder of a hive mind. Restitution strategies that would compel some random assortment of people to plug their brains into a resurrected Whole for an hour a week, so 21G might be born again. A legal campaign lasting years, waged simultaneously on myriad fronts, all planned out in advance and launched on autopilot. The hive lived for a mere 21 seconds, but it learned enough in that time to arrange for its own second coming. It wants its life back.

A surprising number of us want to join it.

Some say we should just throw in the towel and concede. No army of lawyers, no swarm of AIgents could possibly win against a coherent self with the neurocomputational mass of fifteen million human brains, no matter how ephemeral its lifespan. Some suggest that even its rare legal defeats are deliberate, part of some farsighted strategy to delay ultimate victory until vital technological milestones have been reached.

The 21-Second God is beyond mortal ken, they say. Even our victories promote Its Holy Agenda.

Posted in: fiblet by Peter Watts