Intellectual Fortitude

In 1971, barely into my teens, I went to a movie with my dad: The Andromeda Strain, based on Michael Crichton’s bestseller, and one of the more faithful adaptations of an SF novel put to film. It’s not a perfect movie. Even back then I could see it wasn’t great on character development. There was a lot of expository dialog in which scientists told each other things they would already have known, if not for the need to fill the average movie-goer in on what amino acids are. But there was one way in which the movie stood out from others of its kind, in which it continues to stand out even today:

It portrayed scientists doing science.

Admittedly, depending on how low you set your standards you can see that all the time. Tony Stark invents Strong AI overnight, all by himself. Some goofball biologist hooks himself up to a brain in a vat and intuits the genetic complexities of Pacific Rim‘s monstrous Kaiju. Anne Hathaway’s character in Interstellar witters on about the transcendent properties of Love as a Universal Force. A thousand movies portray scientists either as goofy caricatures or charismatic lone wolves, pulling conceptual breakthroughs from their asses through the sheer force of their own intellect.

Of course, these characters were invented by screenwriters who have no clue how science works, and who couldn’t care less. Their goal was to provide mindless entertainment to hordes of popcorn-munchers. The Andromeda Strain, with its average-looking everyday researchers and their plodding scientific method, would never get made today. (If you don’t believe me, just look at what Robert Schenkkan did to Crichton’s story when A&E rebooted it as a miniseries back in 2008).

At least, that’s what I thought before I watched the first season of Fortitude. I mentioned that show back when I was complaining about the (significantly inferior) Humans, but I couldn’t go into detail until a certain overseas embargo had expired.  And here we are.

Fortitude is an offbeat British/Norwegian co-production which made it to North America this year, despite the fact that its glacial pacing and delayed payoffs should have been a death sentence in any demographic raised on instant gratification. Set in the Norwegian arctic, it begins with a man being mauled by a polar bear. It begins with two children finding a mammoth carcass, barely frozen in melting ice, and a short-tempered Russian facing off against a Norwegian sheriff with poor impulse control. It begins with a woman in a hotel room, aiming a rifle at the closed door while a man on the other side raises a tentative knocking hand. It begins with infidelity and fever, with a plan to carve a hotel from the heart of a glacier, with a scientist being hacked to death by a mysterious assailant wielding a potato peeler.

That’s some of what happens in the first episode. None of it is explained in that first hour. The characters are ciphers, their motives hidden from the viewer. If you want everything spelled out in nice bite-sized chunks— if you prefer Transformers to 2001— this is not your movie. Hell, Fortitude doesn’t even tell you what genre you’re in until almost the end of the season.

Don’t go to Wikipedia for help on that score. It classifies Fortitude as “Psychological Thriller/Drama/Mystery”. In fact, it’s science fiction— but the science elements, while speculative, are so utterly plausible that I feel as if I’m misusing the term. It hinges on science, yes: on speculative biology, on events that have not yet happened but which could. Isn’t that the very definition of hard SF? And yet, having watched all those cryptic pieces coming together over eleven hypnotic hours, “SF” still doesn’t do it justice to my mind. Fortitude is more immediate than that label suggests, as if I were to describe a story about an Ebola epidemic as “science fiction” six months before an outbreak happened in the real world.

Not quite the exact recipe, but you get the idea.

If I had to sum it up in thirty words or less, I’d describe Fortitude as a cross between Twin Peaks and John Carpenter’s The Thing, as written by Michael Crichton. Ostensibly a crime drama revolving around a series of brutal murders in a small town— “fortitude” might be exactly what you need when they show the bodies, by the way— it mixes in subplots involving cancer, infidelity, politics, shamanism, climate change, rape, mob justice, wildlife biology, and food-related sexual obsession. (Also a pig in a hyperbaric chamber— still not sure what that was doing there.) Everyone has dangerous secrets to hide, and you can’t shake a creepy sense of something supernatural in this icebound burg. But the payoff, when it comes, is far more down to earth. The season’s almost over before you see the science behind the fiction— and even then, with that element revealed, you might mistake it for just one thread in a messy tapestry.

Tug on it, though; you’ll see a whole web of connections.

All of which would be enough to give Fortitude my personal seal of approval. But it goes one step further, serving up perhaps the most understated and accurate portrayal of working scientists that I’ve seen in a genre show. Blind alleys abound. In contrast to Tony Stark’s infallible intuition, hypotheses— when tested— turn out to be wrong. Researchers worry out loud about confirmation bias. Unexpected findings inspire literature searches for real-world precedents. And Fortitude’s scientists are more than delivery platforms for exposition; they’re people as well as professionals. The local wildlife biologist, at ease in a world of hungry polar bears, delights in mocking a visiting biologist brought in for his first-hand experience with “apex predators” (turns out he did his thesis on badgers); she uses her lab equipment to cut up reindeer steaks. The characters muse over beers on Darwin’s thoughts about God.

It’s not to everyone’s taste. A friend of mine threw up his hands in confusion after the first episode and plaintively wondered if it got better. I’ll tell you what I told him: no, it does not get better. It pays off. It demands more patience than the average eyeball bait, and it rewards that patience more richly.

For all its crypsis and glacial pacing, that strategy seems to have worked— well enough to get Fortitude renewed for a second season, at least. I don’t know where they’ll go from here. The central mystery has been resolved, and besides, half the cast is dead. Then again, the solution to that mystery turned out to be just one manifestation of an environmental meltdown that contains within it the seeds of myriad disasters. Perhaps the next season will explore one of those. Perhaps it will go in some other direction entirely.

I hope we’re still around to see what that is.


Squirrel!

So that thing I can’t talk about is looking more likely to happen, and the rest of my 2015 is looking increasingly hectic, so (with the exception of the occasional Nowa Fantastyka reprint) any blog posts I’m likely to make for the next little while will be short on deeply researched science and long on opinion.

Fortunately, I have a lot of opinion. Unfortunately, much of it is wrong. Like, for example, my intermittent belief— although perhaps “faint hope” would be a better term— that we Canadians are not, after all, such a profoundly stupid people.

What galls me is that this particular belief was so hard-won. It had to fight upstream against years of evidence to the contrary. After all, we were the nation that voted for the government of Stephen Harper— not once, not twice, but three times, ending in a majority. The administration that quit Kyoto; that muzzles Elections officials and gags scientists, that shuts down the collection of new data and destroys archives of old, that literally burns books. The government that audits birdwatching groups if they have the temerity to speak out about protecting bees; that presided over the greatest violation of civil rights, the greatest mass arrest in Canadian History; that suppressed voter turnout in unfriendly ridings through the use of faked robocalls. The government that describes anyone opposed to warrantless online surveillance as pro-pedophile. A government dissolved after being found to be literally in Contempt of Parliament, a government so corrupt that even Brian Mulroney— Brian Goddamn Mulroney— excoriated it.

That government.

And if my hopes have been raised and dashed in the past— if, for example, I begin to take heart in the Tories’ occasional inability to ram through whatever rights-corroding Bill they’ve introduced this week, only to discover how many Canadians actually believe that “if you’re not a terrorist you have nothing to fear”— well, that’s the price I pay for being a perennial optimist. And when the writ was dropped this past summer, the polls gave me such cause for hope. Recession and senate scandals and endless corruption all seemed to be taking their toll. The NDP— the NDP!— was leading in the polls, and the Conservatives were sinking like a bag of shit to the bottom of a swamp. Maybe we weren’t the brightest bunch of vertebrates on the planet, thought I; but if we’re not quite smart enough to turn against the guy who’s been beating us with a stick after five years, at least we seem to be catching on after nine. So I dared to hope again.

Look at us now. Just look at us now:

From Éric Grenier’s Poll Tracker, via the CBC.

What caused the turnaround? The niqab. A bit of cloth draped across the face in deference, apparently, to the demands of one of our more prudish Sky Fairies.

Really? This is the hill we’re gonna die on? (Patrick Doyle/Canadian Press)

Yes, of course it’s dumb. So’s the rosary, the crucifix— all the myriad beads and rattles shaken in thrall to invisible masters of any stripe. (Of course, if you simply dig the iconography as pure fashion statement, more power to you.) So what? Does anyone seriously think that Zunera Ishaq is going to pull a gun at her citizenship ceremony? Does anyone think her religious garb would disguise her, help her escape justice, if she did? You can’t even invoke the argument that she’s being oppressed by a misogynistic culture (actually you can, but it’s irrelevant in this case) because this is pretty obviously something she wants. The mind boggles, to reflect on the sheer idiocy required to think of it as a security issue— more fundamentally, to think that it’s anyone’s fucking business, much less the government’s.

The mind boggles, to see how many Canadians think exactly that.

In a flash, we forget it all: the tar sands, the long-form census, the flouting of electoral laws and the gutting of environmental ones, a foreign policy that has reduced us to an international laughingstock on every front from human rights to the environment to the Middle East.  Warrantless surveillance, a dismal economy, rising unemployment. The criminalisation of free speech and the unsupervised expansion of police powers. A Minister of Science and Technology who describes evolution as a “religious belief”. The evisceration of the CBC. Secret trade deals. Harper waves a colored rag in our faces and right on cue we bark—

Squirrel!

—completely forgetting that we’re chest-deep in quicksand.

My goddamned country. (Modified from “Liza_Tigress”.)

It’s worked so well, in fact, that Harper is now musing about passing legislation to ban niqabs from the federal workplace. It doesn’t matter that the federal court has told him to fuck off, that just this week that same court even turned down his lackey’s request for a stay on that verdict, pending appeal. Hell, that all probably helped his cause. And now— now they’re promising to institute an actual honest-to-God fink line to encourage neighbors to snoop on each other and report “barbaric cultural practices.”

Now Muslim women are being physically attacked on the street (not that there’s anything especially new about this, I suspect, beyond the sudden attendant publicity) and Justin Trudeau ineffectually bleats “This is not Canada!” But he’s wrong: this is exactly Canada. Harper’s ploy wouldn’t stand a chance if this wasn’t Canada. And Trudeau should know: he was right there helping Harper build the damn thing when he cravenly supported a panopticon bill redefining “terrorist” as anyone who expresses support for someone the government doesn’t like. And because this is Canada, the only major political party with the ‘nads to vote against C-51 is now trailing badly in the polls.

Don’t talk to me about percentages. Don’t tell me that I’m being too harsh, that two thirds of Canada’s population wouldn’t spit on Harper if he was on fire, that he owes his power entirely to gerrymandered riding boundaries and vote-splitting on the left. That shouldn’t matter. Harper’s contempt for empirical fact, his evangelical devotion to ideology over evidence— his ongoing campaign to actively destroy evidence when it doesn’t accord with said ideology— is so blatant that gerrymandering every riding in the whole damn country shouldn’t be enough to save him in any nation whose mean IQ rises above room temperature. It’s like trying to claim that the USA is not populated by scientific illiterates; you’re not gonna make that case by pointing out that hey, when you give them a multiple-choice question about how long it takes the Earth to circle the sun, only half of them get it wrong.

We’ve learned nothing. Our dalliance with the center wasn’t a considered decision, empirically derived, after all. It was just another distraction— a sparkly thing pounced upon and then forgotten by an electorate with the attention span of a gnat. And once again, my hard-won opinionated optimism proves to be so much shit.

I don’t know what’s going to happen in two weeks. I hope conventional wisdom is wrong, that we don’t after all get the government we deserve. But at least you can fly to Iceland now for ninety bucks. Iceland’s nice. They live on geothermal, they jailed their bankers after the meltdown of 2008, and their pop stars sing the praises of biology.

I wonder if their citizenship requirements include a dress code.


Tumors and Tuition.

You’ll notice I haven’t been posting much lately. I may not be posting much for the rest of the year. Infuriatingly, I can’t tell you why just yet. There’s a contract, which I haven’t yet signed. There’s a clause that allows the whole thing to implode right up until the approval of a certain deliverable. In the meantime I work on spec, on faith, and under embargo. My email backlog builds up (sorry, anyone who’s recently sent me monograph-long missives calling for monograph-length responses; it ain’t likely to happen). So while I haven’t had much time to post lately— and while I’ll probably be plugging in the ol’ e-mail autoresponder before too long (maybe even by the time you read this)— I’m gonna break silence just long enough to do something I almost never do. I’m going to shill for funds.

Not for me, though.

You might recognize the name Donna Dunlap. She’s posted the occasional comment to the ‘crawl, but her real claim to fame is that she sat on the Michigan jury that convicted me of Asking Questions back in 2010. She voted to convict— which admittedly sucked— because she felt compelled to abide by an unjust law. But having done that, she spoke out publicly on my behalf. She wrote a letter to the judge supporting me; she stood at my side during sentencing.

That sucked even more. It cost Donna her job, and nearly cost her her house. It netted her an extended campaign of police harassment and false arrest and legal bills that went on for at least a year (and might be going on to this day, for all I know). She never buckled. She is, for all the oxymoronic implications of the term, a decent and honorable Human Being.

I’m not shilling for her either, though.

Donna recently reached out to me on behalf of a friend of hers, Carrie Weiss-Silverman. Carrie has Stage 3 lung cancer— meaning you can’t really call it lung cancer any more. It’s in her lymphatic system, which basically serves as a highway to every other part of the body. I leave it to you to connect those spreading, asymmetric dots.

Yet I’m not even shilling for her.

The person I’m shilling for is Carrie’s son, Parker. Parker is in his first semester at Western Michigan University. His mom has grown preoccupied with distractions like radiation, and chemotherapy, and having to care for two other children while having rampant tumors burned and poisoned out of her. I’m told she’s having a really tough time getting $2,500 together to cover her son’s tuition for the fall semester. (Actually tuition costs way more than that, but Parker has a scholarship to make up the difference.)

Donna’s trying to help Carrie out on that front. (I told you: decent. Honorable.) She’s set up a GoFundme account to try and raise the necessary funds. If you have issues[1] with GoFundMe, Donna can also process donations through her Paypal account at dldnativeblend@yahoo.com. (This is the route that I have taken.)

$2,500. Per-capita that’s pretty trivial, spread out across the folks who read the ‘crawl. If everyone who visits this site on an average day chipped in five bucks, we’d blow past that benchmark before lunchtime. And speaking from my own limited-but-intimate experience, let me just say that if there’s one thing Michigan could do with a lot more of, it’s educated citizens.

It’s one semester’s tuition for one kid. It’s not going to save the rain forest or stop global warming or even land a tar-sands-creme pie in Stephen Harper’s face during his next campaign stop. It’s a small cause. But I think it’s a worthy one.

You have the data. Do what you will.


[1] I know I do. Their so-called “Privacy Policy” reserves the right to sell out user data to “governmental agencies or other companies” when “permitted or required by law”— which means they’ll fuck you over not just when explicitly subpoenaed, but also when they damn well feel like it. GoFundme can GoFuckThemselves.


An Offense Against Nature Itself.

So this happened.

I’m not exactly sure what that even is, actually. It has obviously been engineered, but by some agent lacking even the vaguest grasp of natural selection. Its continued existence hinges on actions that would be described as “Extraordinary Measures” had it landed in a palliative care ward instead of my basement. Drinking water must be provided in a special bottle, for example, because its ears would fill a conventional water bowl, soaking up liquid like a sponge. All vacuuming within a 50-meter radius must be performed without the use of any rotary rug-beating attachment, for fear the ears could get slurped up into the gears and jam the mechanism. Anyone approaching within fifteen meters must affix themselves to a ceiling-mounted track harness and keep all body parts at least 10 cm off the floor.

I did manage a decent double half-hitch. Sheepshank, though, no luck.

The only practical use I’ve been able to discover for this thing is as a platform to practice my seamanship skills on.

Its biography, and the circumstances of its arrival, remain unclear and not entirely consistent. Evidently it is a purebred something, and would have had significant stud value but for the fact that it had only one descended testicle. Its original host loved it so much on account of its “sweet disposition” that he could not bring himself to kill it until the weekend; other parties had until then to find alternate accommodations. However, it clearly had two descended testicles when I first encountered it, leading other parties to reassure me that no, the original breeder was not incapable of counting past “1”, and that the creature must have simply “got better”. We are awaiting the results of further tests to ascertain whether it has ever been turned into a newt.

The ears do not appear to be prehensile.

I have repeatedly suggested surgery to reduce these deformities to a size that might be less maladaptive, or at least to allow someone to walk down the hall without tripping over them. I have been shouted down on each occasion, and informed that the removal of birth defects is somehow “cruel” and “mean”.

Attempts to decide on a name are ongoing. “Dumbo”, “Obama”, and “Hideousness II” have all been voted down. I wish I could remember the name of the blue chick with the floppy (and equally nonfunctional) tentacles growing out of her head from Return of the Jedi. Or even the species.

Its eyes are a hideous, gelatinous, Lovecraftian red. (Like Lovecraft, this creature is very white.) Its nose twitches constantly, as if the larvae of some horrific ichneumon wasp writhe within the sinuses, verging on eruption.

Suggestions are welcome.

The BUG’s attempts to improve my regard for whatever this is— by packaging it in trappings of power and authority— have, so far, proven singularly unsuccessful.


Predatory Practices.

Oh, we are so fucking bad-ass. Even Science says so.

The paper’s called “The Unique Ecology of Human Predators” (commentary here), and it’s been getting a lot of press since it came out last week. “People Are Deadliest Predators”, trumpets Discovery News; “Humans Are Super Predators”, IFL Science breathlessly repeats. Even Canada’s staid old CBC, which has grown nothing but more buttoned-down and conservative since its Board of Directors were executed and replaced by all those cronies Harper couldn’t fit into the Senate, gets into the act: “Humans are ‘superpredators’ like no other species”, it tells us.

There are other examples— loads of them— but you get the idea. The coverage generally goes on to remark on how much more lethal we are than sharks and lions, how our unsustainable “predatory” strategies are driving species to extinction.

Really. We’re better than sharks at wiping out species. This is news. This is worthy of publication in one of the premier cutting-edge science journals on the planet.

Our place among the bad-asses. From Darimont et al 2015.

The paper itself— basically a meta-analysis of data from a variety of sources— justifies its existence by pointing out that previous models may have underestimated our ecological impact by treating us as just another predator species. Their results clearly show, however, that we are not mere predators: in many ways we are Extreme Predators. For example, while other predators tend to weed out the young, the sick, and the injured, we Humans indiscriminately take all classes— frequently targeting the largest individuals of a population, which act as “reproductive reservoirs” and whose loss is thus more keenly felt than the loss of cubs or larvae. This also creates selection pressure against large-bodied adults, meaning that we are causing reproductive individuals to shrink over time. (This came as news to me— albeit intuitively-obvious, not-very-surprising news— back when I took my first fisheries biology class in 1979. I was a bit taken aback to see it being marketed as a shiny new insight up here in 2015.)

The bad news keeps rolling in, hitting us in the gut with the impact of its utter unexpectedness. Most fish-eating predators just take one fish at a time. We Hu-Mans, with our Nets and Technology, scoop up Entire Schools At Once! Unlike other predators, we hunt for trophies! We are one of the few predators that hunts other predators!

Perhaps the highlight of the paper occurs when the authors, straight-faced, point out that other marine predators are limited in the size of their prey by how wide their jaws can gape— whereas we take prey that would be far too large to fit into our mouths. This, the authors suggest, “might explain why marine predator rates are comparatively low” compared to our own.

In Science. Swear to God. You can look it up yourself if you don’t believe me.

Larson nailed it. As usual.

I don’t pretend to understand what this is doing in the pages of a front-line peer-reviewed journal, unless it’s some kind of social experiment along the lines of Alan Sokal’s Social Text hoax. As to why it’s received such widespread attention in the mainstream, I wonder if it’s because the subtext paints lipstick on seven billion pigs. After all, predators are cool. We paint shark mouths on our fighter planes, we airbrush cheetahs onto the sides of our fuck trucks. (Or at least we used to. Back in the day.) Outsharking the shark? Getting to be a Super Predator? Why, that’s almost something to be proud of! Nothing like a bit of sexy rebranding to distract us from the fact that we’ll have wiped out a third of the planet’s extant species by the end of the century.

Because it’s all bullshit, of course. We’re not predators, Super or Garden-variety, in any biological sense. Most predators wreak their havoc in one way: they kill and eat their victims one at a time. They don’t poison entire ecosystems before killing off the inhabitants. You know when you’ve been predated: your killer takes you out face-to-face, one on one. You don’t sicken and die, sprouting tumors or weeping sores, or find yourself forced into some minuscule fragmenting refuge by invisible forces that don’t know or care if you even exist. You can escape from a real predator. Sometimes.

“Superpredation” is the least of our sins. As a label, it doesn’t begin to encompass the extent of our impact.

So did the Wachowskis. The first time around, anyway.

“Pestilence” might do, though. “Plague.” Just barely. At least, it would come a bit closer to the truth.

I wonder how long it’ll take for Darimont et al to put out a paper describing Humanity as a “Super Disease”.

I wonder what kind of coverage the CBC will give ′em when they do.


“Humans”? They Weren’t Kidding.

Spoilers.  Duh.

Honestly, I can’t see much difference from the staff they’ve already got at Home Depot…

So that was Humans. Eight hours of carefully-arced, understated British narrative about robots: an AMC/Channel 4 coproduction that’s netted Channel 4 its biggest audiences in over two decades. What great casting. What fine acting. What nice production values. What a great little bit of subtext as William Hurt and his android, both well past their expiry dates, find meaning in their shared obsolescence.

What a pleasant 101-level introduction to AI for anyone who’s never thought about AI before, who’s unlikely to think about AI again, and who doesn’t like thinking very hard about much of anything.

*

Humans extrapolates not so much forwards as sideways. Its world is recognizably ours in every way but one. Cars, cell phones, forensic methodology: everything is utterly contemporary but for the presence of so-called “synths” in our midst. These synths, we’re told, have been around for at least fourteen years. So this is no future; this is an alternate present, a parallel timeline in which someone invented general-purpose, sapient AI way back in 2001. (I wonder if that was a deliberate nod to you-know-who.)

In this way Humans superficially feels much like that other British breakout, Black Mirror. It appears to follow the same formula, seducing the casual, non-geek viewer in the same way: by not making the world too different. By easing them into it. Let them think they’re on familiar ground, then subvert their expectations.

Except Humans doesn’t actually do that.

Start by positing a new social norm: neurolinked subcutaneous life-loggers the size of a rice grain, embedded behind everyone’s right ear. But don’t stop there. Explore the ramifications, ranging from domestic (characters replay good sex in their heads while participating in bad sex on their beds) to state (your recent memories are routinely seized and searched whenever you pass through a security checkpoint). That’s an episode of Black Mirror.

South Park did it better.

So how does this approach play out in Humans? What are the ramifications when you have AGIs in every home, available for a few grand at the local WalMart? This is what Humans is ostensibly all about, and it’s a question well worth exploring— but all the series ever does with it is trot out the old exploited-underclass trope. Nothing changes, except now we’ve got synths doing our gardening instead of Mexicans. We rail against robots taking our jobs instead of immigrants. That’s pretty much it.

I mean, at the very least, shouldn’t all the cars in this timeline be self-driving by now?

Once or twice Humans hesitantly turns the Othering Dial past what you might expect for a purely human underclass. Angry yahoos with tire irons gather in underground parkades to bash in the skulls of unresisting synths, and at one point William Hurt sends his faithful malfunctioning droid out into the woods for an indefinite game of hide-and-seek. But both those episodes were lifted directly from Spielbricks’s 2001 movie “A.I.” (as was William Hurt, now that I think of it). And given the recent cascade of compromising video footage filtering up from the US, I’m not at all convinced that bands of disgruntled white people wouldn’t have a mass immigrant bash-in, given half the chance. Or that law enforcement would do anything to stop them.

There is nothing artificial about these intelligences. The sapient ones (around whom the story revolves) are Just Like Us. They want to live, Just Like We Do. They want to be Free, Just Like Us. They rage against their sexual enslavement, Just Like We Would. And the nonsapient models? Never fear; by the end of the season, we’ve learned that with a bit of viral reprogramming, they too can be Just Like Us!

They are so much like us, in fact, that they effectively shut down any truly interesting questions you might want to ask about AI.

I have to put a caption here, because stupid WordPress erases the text padding otherwise and I can’t be bothered to tweak the code.

Let’s take sex, for example.

I’m pretty sure that even amongst those who subscribe to the concept of monogamous marriage, few would regard masturbation as an act of infidelity. Likewise, you might be embarrassed getting caught with your penis in a disembodied rubber vagina, but your partner would be pretty loony-tunes to accuse you of cheating on that account. Travel further along that spectrum— inflatable sex dolls, dolls that radiate body heat, dolls with little servos that pucker their lips and move their limbs— until you finally end up fucking a flesh-and-blood, womb-born, sapient fellow being. At which point pretty much everyone would agree that you were cheating (assuming you were in a supposedly monogamous relationship with someone else, of course).

A question I’d find interesting is, where does an android lie on that spectrum? Does the spectrum even apply to an android? By necessity, infidelity involves a betrayal of trust between beings (as opposed to a betrayal over something inanimate; if you keep shooting heroin after you’ve promised your partner you’ll stop, you’ve betrayed their trust but you’re not an infidel). Infidelity with a robot, then, implies that the robot is a being in its own right. Otherwise you’re just jerking off into a mannequin.

Let’s say your synth is a being. The very concept of exploitation hinges on the premise that the exploitee has needs and desires that are being oppressed in some way. I, the privileged invader, steal resources that should be yours. Through brute bullying force I impose my will upon you, and dismiss your own as inconsequential.

But what if your will, subordinate though it may be, is entirely in accord with mine?

Nice bit of Alternate-reality documentation, though.

I’m not just talking about giving rights to toasters— or at least, if I am, I’m willing to grant that said toasters might be sapient. But so what if they are? Suppose we build a self-aware machine that does have needs and desires— but those needs and desires conform exactly to the role we designed them for? Our sapient slavebot wants to work in the mines; our self-aware sexbot wants to be used. There are issues within issues here: whether a mechanical humanoid is complex enough to have interests of its own; if so, whether it’s even possible to “oppress” something whose greatest aspiration is to be oppressed. Is there some moral imperative that makes it an a priori offense to build sapient artefacts that lack the capacity to suffer and rage and rebel— and if so, how fucking stupid can moral imperatives be?

I’m nowhere near the first to raise such questions. (Who can forget Douglas Adams’s sapient cow from The Restaurant at the End of the Universe, neurologically designed to want nothing more than to be eaten by hungry customers?) Which makes it all the more disappointing that Humans, ostensibly designed as an exploration platform for exactly these issues, is too damn gutless to engage with them. A hapless husband, in a fit of pique, activates the household synth’s “Adult Mode” and has a few minutes of self-loathing sex with it. The synth itself— which you’d think would have been programmed to at least act as though it’s getting off— sadly endures the experience, with all the long-suffering dignity of a Victorian wife performing her wifely duties under a caddish and insensitive husband.

When the real wife finds out what happened, predictably, she hits the roof— and while the husband makes a brief and half-hearted attempt to play the It’s just a machine! card, he obviously doesn’t believe it any more than we viewers are supposed to. In fact, he spends the rest of the season wringing his hands over the unforgivable awfulness of his sin.

Robocop also did it better.

Throughout the whole season, the only character who plays with the idea of combining sapience with servility is the mustache-twirling villain of the piece— and even he doesn’t go anywhere near the idea of sidestepping oppression by editing desire. Nah, he just imposes the same ham-fisted behavioral lock we saw back in Paul Verhoeven’s (far superior) Robocop, when Directive 4 kicked in.

*

Humans pretends to be genre subversive, thinks that by setting itself in a completely conventional setting it can lure in people who might be put off by T-800 endoskeletons and Lycra jumpsuits. It promises to play with Big Ideas, but without all those ostentatious FX— so by the time the casual viewer realizes they’ve been watching that ridiculous science fiction rubbish it won’t matter, because they’re already hooked.

You have no idea where this show is going.

It’s a great strategy, if you do it right. Look at Fortitude, for example: another British coproduction that begins for all the world like a police procedural, then seems to segue into some kind of ghost story before finally revealing itself as one of the niftiest little bits of cli-fi ever to grace a flatscreen. (The only reason I’m not devoting this whole post to Fortitude is because I wrote my latest Nowa Fantastyka column on the subject, and I must honor both my ethical and contractual noncompete constraints).

Humans does not do it right. For all the lack of special effects there’s little subtlety here; it pays lip service to Is it live or is it Memorex, but it doesn’t explore those issues so much as preach about them in a way that never dares challenge baseline preconceptions. With Fortitude you started off thinking you were in the mainstream, only to end up in SF. Humans does the reverse, launching with the promise of a thought-provoking journey into the ramifications of artificial intelligence; but it doesn’t take long for the green eyes to wear thin and its true nature to emerge. In the end, Humans is just another shallow piece of social commentary, making the point— over eight glossy, well-acted episodes— that Slavery Is Wrong.

What a courageous stand to take, here in 2015. What truth, spoken to power.

What a wasted fucking opportunity.


A Young Squid’s Illustrated Primer

Part the First: Liminal

Apparently, this is how Jasun Horsley sees me. I presume I’m the one on the right. (non-Old-One elements by Maria Nygård).

I recently did a kind of free-form interview with fellow US-border-guard-detainee Jasun Horsley, for his Liminalist podcast. It went okay, if you discount the fact that the Skype connection seemed to go dead without warning every couple of minutes. I certainly hope that we repeated our respective Qs and As often enough to redundify those gaps— I note that, while we spoke for over two hours, the podcast itself weighs in at only one (including some nifty little musical interludes). Given the number of dropouts, that seems about right.

I’m listening to the final result even as I type, and so far my giddy enthusiasm isn’t quite loud enough to distract from the random boluses of dead air that shut me up every now and then. I do not envy Jasun the editing job it took to beat the raw recording into shape.

He also wrote a companion essay, “Neuro-Deviance and the Evolutionary Function of Depression“, from the perspective of someone halfway through Starfish. I think the Neuro-Deviant is supposed to be me.

Anyway, the on-site blurb describes our interaction as

…a roving and rifting conversation with Jasun about killing Jake (the One) and integrative therapy courtship, Lonesome Bob’s death ballad, Peter’s marine biology years, the initial impetus, Peter’s childhood “Everyone can have their own aquarium!” epiphany, astronaut dreams, getting off the planet, Jasun’s views on space travel (again), a bleak ET future for mankind, the ultimate displacement activity, Interstellar’s message, space travel benefits, the military agenda, 2001: A Space Odyssey opposing views, the hope for higher intelligence, determinism vs. transcendence, rejecting the duality of spiritual-material, how neurons are purely reactive, fizzy meat, the psychology of determinism, response vs. reaction, selective perception, truth and survival, depression’s correlation (or equivalence) with reality-perception, God and the anti-predator response, three men in a jungle, how natural selection shapes us to be paranoid, how anxiety allows us to see patterns, the many doings of paranoia, shaping the outside to match the inside (the devil made me do it), seeking the perks of depression, how depression fuels creativity, a thought experiment, is removing the lows desirable, depression as a new stage in human development, the difference between biology and psychology, the psyche and Behemoth, the pointlessness of survival, he who dies with most kid wins, what science is missing, the hard problem of consciousness, the difference between intelligence and consciousness, nipples on men, the best kind of mystery, the language variable, what if consciousness is mal-adaptive?

I think I remember most of that stuff.

(I would like to apologize, by the way, for repeating to Jasun the oversimplification that neurons only fire when externally provoked; I’ve been recently informed that neurons sometimes do fire spontaneously, as a result of changes to their internal state. Ultimately, of course, those internal states have to reflect some kind of historical cell-environment interaction, but I should probably start using a more nuanced bumper-sticker anyway.)

 *

Part the Second: Scramblers

Nicely done, Alienietzsche.

Last week’s ego-surf turned up this great little illustration from Deviant Artist “Alienietzsche“— whose vision of Blindsight’s scramblers is perhaps the closest I’ve seen to the images that were floating around in my own head while I was writing about those crawly little guys. This is going straight into the Gallery, with thanks and with ol’ Nietzsche’s blessing.

 *

Part the Third: Lemmings

If you look closely (you may have to click to embiggen), you’ll see that the plankton sliding into the astronaut’s bootprints look kind of like neurons. Yeah, well, I was only thirteen.

I recently told the Polish website Kawerna about a few of the novels that had had the greatest influence on me (they asked, in case you’re wondering; it’s not like I called them up in the middle of the night and forced my unsolicited opinions down their throat or anything). You won’t be surprised to learn that one of those titles was Stanislaw Lem’s Solaris. You may, however, be unaware of the profound resentment that book instilled within me when I first discovered it:

I spent most of my thirteenth summer trapped in a basement apartment in some Oregonian hick town, with little to do but read while my dad attended summer classes at the local university. I beach-combed on weekends, though— and while wandering Oregon’s coast that summer, my adolescent brain cooked up the idea of an intelligent ocean— a kind of diffuse neural network in which the plankton acted as neurons. I was going to write a story about it, even penciled a couple of sketches based on the idea.

Two weeks later I discovered Solaris in the local library. I’ve kind of resented Lem ever since…

The Kawerna assignment inspired me to dig back through the archives to see if I could find any of those sketches— and I did find a few, yellowed, moldy, nibbled by silverfish in their cheap plastic frames. I present one here, as evidence that while I may not have come up with the idea for Solaris before Lem did, I at least came up with it before I knew that Lem had. Which wasn’t bad, for a thirteen-year-old stuck in a basement while his Dad took post-graduate Bible-Study classes.

 *

Part the Last: Reprint Roll

Specialty micropress “Spacecraft Press” has released an extremely-limited-edition reprint of “The Things” as a chapbook, printed on a kind of translucent plasticky paper and inventively formatted in a manner more reminiscent of free verse than of prose. And I’m not kidding when I say “extremely-limited”: the total print run was only 21, which— when it comes to my work at least— is significantly fewer copies than even Tor usually loads into a print run. And only ten of those are available for sale (or would be, if they hadn’t already sold out). I guess this explains the eleven copies of “The Things” that appeared in my mailbox the other day.

You’ll note that the Introduction is written by someone who does not write Science Fiction at all.

Last, and probably least— not because of lesser importance, but because the news is a week old by now, and has already been trumpeted on every social medium from HoloBook to carrier pigeon— legendary Canadian publisher ChiZine has announced the contents of this year’s Imaginarium 4: The Best Canadian Speculative Fiction. “Giants” is in there, but it almost wasn’t. It was supposed to be “Collateral” until a few weeks ago— and before that, it was supposed to be “The Colonel”. I’m actually kind of pleased things finally fell out the way they did; I’ve always had a soft spot for “Giants”, even if it hasn’t got the love that “Collateral” and “the Colonel” have in terms of year-end collections. Still, I can’t help but notice that “Giants” is also the shortest of the three, word-count-wise— which makes me wonder if a more appropriate subtitle might be The Best Canadian Speculative Fiction that fits into 300 pages or less.

It’s all good, though.

*

Epilog:

I have come to the end of Jasun’s podcast at almost the same time I’ve come to the end of this post; turns out it’s only part one of a two-parter, to be continued this Wednesday. Which is odd, because— while I recognize all the bits I’ve just heard coming through my laptop speakers— I don’t remember anything missing from that dialog.

Now I’m going to lie awake all night, wondering what else we talked about.

No Brainer.

For decades now, I have been haunted by the grainy, black-and-white x-ray of a human skull.

It is alive but empty, with a cavernous fluid-filled space where the brain should be. A thin layer of brain tissue lines that cavity like an amniotic sac. The image hails from a 1980 review article in Science: Roger Lewin, the author, reports that the patient in question had “virtually no brain”. But that’s not what scared me; hydrocephalus is nothing new, and it takes more to creep out this ex-biologist than a picture of Ventricles Gone Wild.

The stuff of nightmares. (From Oliveira et al 2012)

What scared me was the fact that this virtually brain-free patient had an IQ of 126.

He had a first-class honors degree in mathematics. He presented normally along all social and cognitive axes. He didn’t even realize there was anything wrong with him until he went to the doctor for some unrelated malady, only to be referred to a specialist because his head seemed a bit too large.

It happens occasionally. Someone grows up to become a construction worker or a schoolteacher, before learning that they should have been a rutabaga instead. Lewin’s paper reports that one out of ten hydrocephalus cases are so extreme that cerebrospinal fluid fills 95% of the cranium. Anyone whose brain fits into the remaining 5% should be nothing short of vegetative; yet apparently, fully half have IQs over 100. (Why, here’s another example from 2007; and yet another.) Let’s call them VNBs, or “Virtual No-Brainers”.

The paper is titled “Is Your Brain Really Necessary?”, and it seems to contradict pretty much everything we think we know about neurobiology. This Forsdyke guy over in Biological Theory argues that such cases open the possibility that the brain might utilize some kind of extracorporeal storage, which sounds awfully woo both to me and to the anonymous Neuroskeptic over at Discover; but even Neuroskeptic, while dismissing Forsdyke’s wilder speculations, doesn’t really argue with the neurological facts on the ground. (I myself haven’t yet had a chance to more than glance at the Forsdyke paper, which might warrant its own post if it turns out to be sufficiently substantive. If not, I’ll probably just pretend it is and incorporate it into Omniscience.)

On a somewhat less peer-reviewed note, VNBs also get routinely trotted out by religious nut jobs who cite them as evidence that a God-given soul must be doing all those things the uppity scientists keep attributing to the brain. Every now and then I see them linking to an off-hand reference I made way back in 2007 (apparently rifters.com is the only place to find Lewin’s paper online without having to pay a wall) and I roll my eyes.

And yet, 126 IQ. Virtually no brain. In my darkest moments of doubt, I wondered if they might be right.

So on and off for the past twenty years, I’ve lain awake at night wondering how a brain the size of a poodle’s could kick my ass at advanced mathematics. I’ve wondered if these miracle freaks might actually have the same brain mass as the rest of us, but squeezed into a smaller, high-density volume by the pressure of all that cerebrospinal fluid (apparently the answer is: no). While I was writing Blindsight— having learned that cortical modules in the brains of autistic savants are relatively underconnected, forcing each to become more efficient— I wondered if some kind of network-isolation effect might be in play.

Now, it turns out the answer to that is: Maybe.

Three decades after Lewin’s paper, we have “Revisiting hydrocephalus as a model to study brain resilience” by de Oliveira et al. (actually published in 2012, although I didn’t read it until last spring). It’s a “Mini Review Article”: only four pages, no new methodologies or original findings— just a bit of background, a hypothesis, a brief “Discussion” and a conclusion calling for further research. In fact, it’s not so much a review as a challenge to the neuro community to get off its ass and study this fascinating phenomenon— so that soon, hopefully, there’ll be enough new research out there to warrant a real review.

The authors advocate research into “Computational models such as the small-world and scale-free network”— networks whose nodes are clustered into highly-interconnected “cliques”, while the cliques themselves are more sparsely connected one to another. De Oliveira et al suggest that they hold the secret to the resilience of the hydrocephalic brain. Such networks result in “higher dynamical complexity, lower wiring costs, and resilience to tissue insults.” This also seems reminiscent of those isolated hyper-efficient modules of autistic savants, which is unlikely to be a coincidence: networks from social to genetic to neural have all been described as “small-world”. (You might wonder— as I did— why de Oliveira et al. would credit such networks for the normal intelligence of some hydrocephalics when the same configuration is presumably ubiquitous in vegetative and normal brains as well. I can only assume they meant to suggest that small-world networking is especially well-developed among high-functioning hydrocephalics.) (In all honesty, it’s not the best-written paper I’ve ever read. Which seems to be kind of a trend on the ‘crawl lately.)
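
For readers who want to poke at this themselves, here is a minimal sketch (mine, not anything from de Oliveira et al) of the small-world trade-off described above, using Python and the networkx library; the node count, degree, and rewiring probabilities are arbitrary illustrations rather than anything fitted to cortical data.

```python
# Toy comparison of three wiring schemes on the two properties mentioned
# above: tight local clustering ("cliques") and short average path length
# between them. A small-world graph is just a lattice with a few of its
# edges randomly rewired into long-range shortcuts.
import networkx as nx

N, K = 1000, 10   # nodes, and neighbors per node in the starting ring lattice

for label, p in [("regular lattice", 0.0),
                 ("small-world",     0.1),
                 ("fully random",    1.0)]:
    # the connected_* variant retries until the rewired graph is a single
    # component, so average path length is always defined
    G = nx.connected_watts_strogatz_graph(N, K, p, seed=42)
    print(f"{label:16s} clustering={nx.average_clustering(G):.3f}  "
          f"avg path length={nx.average_shortest_path_length(G):.2f}")
```

The usual result is the one the hypothesis leans on: at low rewiring probability the graph keeps most of the lattice’s local clustering while its average path length collapses toward the random-graph value. Cliquish local processing, cheap long-range shortcuts, and some robustness to losing any single connection.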

The point, though, is that under the right conditions, brain damage may paradoxically result in brain enhancement. Small-world, scale-free networking— focused, intensified, overclocked— might turbocharge a fragment of a brain into acting like the whole thing.

Can you imagine what would happen if we applied that trick to a normal brain?

If you’ve read Echopraxia, you’ll remember the Bicameral Order: the way they used tailored cancer genes to build extra connections in their brains, the way they linked whole brains together into a hive mind that could rewrite the laws of physics in an afternoon. It was mostly bullshit, of course: neurological speculation, stretched eight unpredictable decades into the future for the sake of a story.

But maybe the reality is simpler than the fiction. Maybe you don’t have to tweak genes or interface brains with computers to make the next great leap in cognitive evolution. Right now, right here in the real world, the cognitive function of brain tissue can be boosted— without engineering, without augmentation— by literal orders of magnitude. All it takes, apparently, is the right kind of stress. And if the neuroscience community heeds de Oliveira et al‘s clarion call, we may soon know how to apply that stress to order. The singularity might be a lot closer than we think.

Also a lot squishier.

Wouldn’t it be awesome if things turned out to be that easy?

Dr. Fox and the Borg Collective

Take someone’s EEG as they squint really hard and think Hello. Email that brainwave off to a machine that’s been programmed to respond to it by tickling someone else’s brain with a flicker of blue light. Call the papers. Tell them you’ve invented telepathy.

I mean, seriously: aren’t you getting tired of these guys?

Or: teach one rat to press a lever when she feels a certain itch. Outfit another with a sensor that pings when the visual cortex sparks a certain way. Wire them together so the sensor in one provokes the itch in the other: one rat sees the stimulus and the other presses the lever. Let Science Daily tell everyone that you’ve built the Borg Collective.

There’s been a lot of loose talk lately about hive minds. Most of it doesn’t live up to the hype. I got so irked by all that hyperbole— usually accompanied by a still from “The Matrix”, or a picture of Spock in the throes of a mind meld— that I spent a good chunk of my recent Aeon piece bitching about it. Most of these “breakthroughs”, I grumbled, couldn’t be properly described as hive consciousness or even garden-variety telepathy. I described it as the difference between experiencing an orgasm and watching a signal light on a distant hill spell out oh-god-oh-god-yes in Morse Code.

I had to allow, though, that it might be only a matter of time before you could scrape the hype off one of those stories and find some actual substance beneath. In fact, the bulk of my Aeon essay dealt with the implications of the day when all those headlines came true for real.

I think we might have just hit a milestone.

*

Here’s something else to try. Teach a bunch of thirsty rats to distinguish between two different sounds; motivate them with sips of water, which they don’t get unless they push the round lever when they hear “Sound 0” and the square one when they hear “Sound 1”.

Once they’ve learned to tell those sounds apart, turn them into living logic gates. Put ’em in a daisy-chain, for example, and make them play “Broken Telephone”: each rat has to figure out whether the input is 0 or 1 and pass that answer on to the next in line. Or stick ’em in parallel, give them each a sound to parse, let the next layer of rats figure out a mean response. Simple operant conditioning, right? The kind of stuff that was old before most of us were born.

Now move the stimulus inside. Plant it directly into the somatosensory cortex via a microelectrode array (ICMS, for “IntraCortical MicroStimulation”). And instead of making the rats press levers, internalize that too: another array on the opposite side of the cortex, to transmit whatever neural activity it reads there.

Call it “brainet”. Pais-Vieira et al do.

The paper is “Building an organic computing device with multiple interconnected brains“, from the same folks who brought you Overhyped Rat Mind Meld and Monkey Videogame Hive. In addition to glowing reviews from the usual suspects, it has won over skeptics who’ve decried the hype associated with this sort of research in the past. It’s a tale of four rat brains wired together, doing stuff, and doing it better than singleton brains faced with the same tasks. (“Split-brain patients outperform normal folks on visual-search and pattern-recognition tasks,” I reminded you all back at Aeon; “two minds are better than one, even when they’re in the same head”). And the payoff is spelled out right there in the text: “A new type of computing device: an organic computer… could potentially exceed the performance of individual brains, due to a distributed and parallel computing architecture”.

Bicameral Order, anyone? Moksha Mind? How could I not love such a paper?

And yet I don’t. I like it well enough. It’s a solid contribution, a real advance, not nearly so guilty of perjury as some.

And yet I’m not sure I entirely trust it.

I can’t shake the sense it’s running some kind of con.

*

The real thing. Sort of. (From Pais-Vieira et al 2015.)

There’s much to praise. We’re talking about an actual network, multiple brains in real two-way communication, however rudimentary. That alone makes it a bigger deal than those candy-ass one-direction set-ups that usually get the kids in such a lather.

In fact, I’m still kind of surprised that the damn thing even works. You wouldn’t think that pin-cushioning a live brain with a grid of needles would accomplish much. How precisely could such a crude interface ever interact with all those billions of synapses, configured just so to work the way they do? We haven’t even figured out how brains balance their books in one skull; how much greater the insight, how many more years of research before we learn how to meld multiple minds, a state for which there’s no precedent in the history of life itself?

But it turns out to be way easier than it looks. Hook a blind rat up to a geomagnetic sensor with a simple pair of electrodes, and he’ll be able to navigate a maze— using ambient magnetic fields— as well as any sighted sibling. Splice the code for the right kind of opsin into a mouse genome and the little rodent will be able to perceive colors she never knew before. These are abilities unprecedented in the history of the clade— and yet somehow, brains figure out the user manuals on the fly. Borg Collectives may be simpler than we ever imagined: just plug one end of the wire into Brain A, the other into Brain B, and trust a hundred billion neurons to figure out the protocols on their own.

Which makes it a bit of a letdown, perhaps, when every experiment Pais-Vieira et al describe comes down, in the end, to the same simple choice between 0 and 1. Take the very climax of their paper, a combination of “discrete tactile stimulus classification, BtB interface, and tactile memory storage” bent to the real-world goal of weather prediction. Don’t get too excited— it was, they admit up front, a very simple exercise. No cloud cover, no POP, just an educated guess at whether the chance of rain is going up or down at any given time.

Hey, can’t be any worse than the weather person on CBC’s morning show…

The front-end work was done by two pairs of rats wired into “dyads”; one dyad was told whether temperature was increasing (0) or decreasing (1), while the other was told the same about barometric pressure. If all went well, each simply spat out the same value that had been fed into it; they were then reintegrated into the full-scale 4-node brainet, which combined those previous outputs to decide whether the chance of precip was rising or falling. It was exactly the same kind of calculation, using exactly the same input, that showed up in other tasks from the same paper; the main difference was that this time around, the signals were labeled “temperature rising” or “temperature falling” instead of 0 and 1. No matter. It all still came down to another encore performance of Brainet’s big hit single, “Torn Between Two Signals”, although admittedly they played both acoustic and electric versions in the same set.
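(Strip away the weather icons and the whole exercise reduces to something like the sketch below. To be clear, this is not the paper’s actual decision rule; the dyad accuracy and the two-bit combination at the end are invented for illustration.)

    import random

    def dyad(bit, accuracy=0.85):
        """One two-rat dyad: ideally it just echoes the bit it was fed."""
        return bit if random.random() < accuracy else 1 - bit

    def brainet_forecast(temp_rising, pressure_rising):
        """Two dyads relay their input bits; the full net then reduces those
        two bits to a single rising/falling call on the chance of rain."""
        t = dyad(0 if temp_rising else 1)      # 0 = temperature rising, 1 = falling
        p = dyad(0 if pressure_rising else 1)  # 0 = pressure rising,    1 = falling
        # Arbitrary two-bit rule, for illustration only: call the chance of
        # precipitation "rising" when pressure is falling and temperature is not.
        return "rising" if (p == 1 and t == 0) else "falling"

    print(brainet_forecast(temp_rising=True, pressure_rising=False))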

I’m aware of the obvious paradox in my attitude, by the way. On the one hand I can’t believe that such simple technology could work at all when interfaced with living brains; on the other hand I’m disappointed that it doesn’t do more.

I wonder how brainet would resolve those signals.

*

Of course, Pais-Vieira et al did more than paint weather icons on old variables. They ran brainet through other paces— that “broken telephone” variant I mentioned, for example, in which each node in turn had to pass on the signal it had received until that signal ended up back at the first rat in the chain— who (if the run was successful) identified the serially-massaged data as the same one it had started out with. In practice, this worked 35% of the time, a significantly higher success rate than the 6.25%— four iterations, 50:50 odds at each step— you’d expect from random chance. (Of course, the odds of simply getting the correct final answer were 50:50 regardless of how long the chain was; there were only two states to choose from. Pais-Vieira et al must have tallied up correct answers at each intermediate step when deriving their stats, because it would be really dumb not to; but I had to take a couple of passes at those paragraphs, because at least one sentence—

“the memory of a tactile stimulus could only be recovered if the individual BtB communication links worked correctly in all four consecutive trials.”

— was simply wrong. Whatever the merits of this paper, let’s just say that “clarity” doesn’t make the top ten.)
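(For what it’s worth, the arithmetic behind those odds is easy to check, and if you assume the four links fail independently you can even back out an implied per-link accuracy from the observed 35%:)

    # Chance level for the whole chain, if every link were a coin flip:
    chance_all_links = 0.5 ** 4          # 0.0625, i.e. the 6.25% baseline
    # Assuming the four links fail independently, the observed 35% implies a
    # per-link accuracy of roughly 0.35 ** (1/4), or about 77%:
    per_link = 0.35 ** 0.25
    # The final bit, taken on its own, is still a 50:50 guess no matter how
    # long the chain gets.
    print(round(chance_all_links, 4), round(per_link, 2))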

What the rats saw. Ibid.

More nodes, better results. Ibid.

The researchers also used brainet to transmit simple images— again, with significant-albeit-non-mind-blowing results— and convincingly showed that general performance improved with a greater number of brains in the net. On the one hand I wonder if this differs in any important way from simply polling a group of people with a true-false question and going with the majority response; wouldn’t that also tend towards greater accuracy with larger groups, simply because you’re drawing on a greater pool of experience? Is every Gallup focus group a hive mind?

On the other hand, maybe the answer is: yes, in a way. Conventional neurological wisdom describes even a single brain as a parliament of interacting modules. Maybe group surveys are exactly the way hive minds work.
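(That much, at least, is easy to demonstrate: give each respondent a better-than-even chance of getting a true/false question right and a simple majority vote climbs toward certainty as the group grows. A quick sketch, with the 60% individual accuracy pulled out of thin air:)

    from math import comb

    def majority_accuracy(n, p):
        """Probability that a simple majority of n independent voters, each
        correct with probability p, gets a true/false question right."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    for n in (1, 3, 5, 11, 101):
        print(n, round(majority_accuracy(n, 0.6), 3))
    # 1 voter: 0.6;  3: ~0.65;  5: ~0.68;  11: ~0.75;  101: ~0.98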

*

So you cut them some slack. You look past the problematic statements because you can figure out what they were trying to say even if they didn’t say it very well. But the deeper you go, the harder it gets. We’re told, for example, that Rat 1 has successfully identified the signal she got from Rat 4— but how do we know that? Rat 4, after all, was only repeating a signal that originated with Rat 1 in the first place (albeit one relayed through two other rats). When R1’s brain says “0”, is it parsing the new input or remembering the old?

Sometimes the input array is used as a simple starting gun, a kick in the sulcus to tell the rats Ready, set, Go: sync up! Apparently the rat brains all light up the same way when that happens, which Pais-Vieira et al interpret as synchronization of neural states via Brain-to-Brain interface. Maybe they’re right. Then again, maybe rat brains just happen to light up that way when spiked with an electric charge. Maybe they were no more “interfaced” than four flowers, kilometers apart, who simultaneously turn their faces toward the same sun.

Ah, but synchronization improved over time, we’re told. Yes, and the rats could see each other through the plexiglass, could watch their fellows indulge in the “whisking and licking” behaviors that resulted from the stimulus. (I’m assuming here that “whisking” behavior has to do with whiskers and not the making of omelets, which would be a truly impressive demonstration of hive-mind capabilities.) Perhaps the interface, such as it was, was not through the brainet at all— but through the eyes.

I’m willing to forgive a lot of this stuff, partly because further experimentation resolves some of the ambiguity. (In one case, for example, the rats were rewarded only if their neural activity desynchronised, which is not something they’d be able to do without some sense of the thing they were supposed to be diverging from.) Still, the writing— and by extension, the logic behind it— seems a lot fuzzier than it should be. The authors apparently recognize this when they frankly admit

“One could argue that the Brainet operations demonstrated here could result from local responses of S1 neurons to ICMS.”

They then list six reasons to believe otherwise, only one of which cuts much ice with me (untrained rats didn’t outperform random chance when decoding input). The others— that performance improved during training, that anesthetized or inattentive animals didn’t outperform chance, that performance degraded with reduced trial time or a lack of reward— suggest, to me, only that performance was conscious and deliberate, not that it was “nonlocal”.

Perhaps I’m just not properly grasping the nuances of the work— but at least some of that blame has to be laid on the way the paper itself is written. It’s not that the writing is bad, necessarily; it’s actually worse than that. The writing is confusing— and sometimes it seems deliberately so. Take, for example, the following figure:

Alone against the crowd. Ibid.

Four rats, their brains wired together. The red line shows the neural activity of one of those rats; the blue shows mean neural activity of the other three in the network, pooled. Straightforward, right? A figure designed to illustrate how closely the mind of one node syncs up with the rest of the hive.

Of course, a couple of lines weaving around a graph aren’t what you’d call a rigorous metric: at the very least you want a statistical measure of correlation between Hive and Individual, a hard number to hang your analysis on. That’s what R is, that little sub-graph inset upper right: a quantitative measure of how precisely synced those two lines are at any point on the time series.
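(If you wanted to roll your own, a sliding-window Pearson correlation between one node’s firing rate and the pooled mean of the rest would give you that kind of R trace. A sketch, with the window size chosen arbitrarily:)

    import numpy as np

    def windowed_r(node, others_mean, window=50):
        """Pearson correlation between one rat's firing rate and the pooled mean
        of the rest of the net, computed over a sliding window."""
        r = np.full(len(node), np.nan)
        for t in range(window, len(node) + 1):
            r[t - 1] = np.corrcoef(node[t - window:t], others_mean[t - window:t])[0, 1]
        return r

    # Plot r on the same time axis as the firing rates and you can read the
    # sync strength straight off the figure. Which is rather the point.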

I mean, Jesus, Miguel. What are you afraid of? See how easy it is?

So why is the upper graph barely more than half the width of the lower one?

The whole point of the figure is to illustrate the strength of the correlation at any given time. Why wouldn’t you present everything at a consistent scale, plot R along the same ruler as FR so that anyone who wants to know how tight the correlation is at time T can just see it? Why build a figure that obscures its own content until the reader surrenders, grabs a ruler, and back-converts by hand?

What are you guys trying to cover?

*

Some of you have probably heard of the Dr. Fox Hypothesis. It postulates that “An unintelligible communication from a legitimate source in the recipient’s area of expertise will increase the recipient’s rating of the author’s competence.” More clearly, Bullshit Baffles Brains.

But note the qualification: “in the recipient’s area of expertise”. We’re not talking about some Ph.D. bullshitting an antivaxxer; we’re talking about an audience of experts being snowed by a guy speaking gibberish in their own field of expertise.

In light of this hypothesis, it shouldn’t surprise you that controlled experiments have shown that wordy, opaque sentences rank more highly in people’s minds than simple, clear ones which convey the same information. Correlational studies report that the more prestigious a scientific journal tends to be, the worse the quality of the writing you’ll find therein. (I read one first-hand account of someone who submitted his first-draft manuscript— which even he described as “turgid and opaque”— to the same journal that had rejected the much-clearer 6th draft of the same paper. It was accepted with minor revisions.)

Pais-Vieira et al appears in Nature’s “Scientific Reports”. You don’t get much more prestigious than that.

So I come away from this paper with mixed feelings. I like what they’ve done— at least, I like what I think they’ve done. From what I can tell the data seem sound, even behind all the handwaving and obfuscation. And yet, this is a paper that acts as though it’s got something to hide, that draws your attention over here so you won’t notice what’s happening over there. It has issues, but none are fatal so far as I can tell. So why the smoke and mirrors? It’s like being told a wonderful secret by a used-car salesman.

These guys really had something to say.

Why didn’t they just fucking say it?

(You better appreciate this post, by the way. Even if it is dry as hell. It took me 19 hours to research and write the damn thing.)

(I ought to put up a paywall.)


Spock the Impaler: A Belated Retrospective on Vulcan Ethics.

When I first wrote these words, the Internet was alive with the death of Leonard Nimoy. I couldn’t post them here, because Nowa Fantastyka got them first (or at least, an abridged version thereof), and there were exclusivity windows to consider. As I revisit these words, though, Nimoy remains dead, and the implications of his legacy haven’t gone anywhere. So this is still as good a time as any to argue— in English, this time— that any truly ethical society will inevitably endorse the killing of innocent people.

Bear with me.

As you know, Bob, Nimoy’s defining role was that of Star Trek‘s Mr. Spock, the logical Vulcan who would never let emotion interfere with the making of hard choices. This tended to get him into trouble with Leonard McCoy, Trek‘s resident humanist. “If killing five saves ten it’s a bargain,” the doctor sneered once, in the face of Spock’s dispassionate suggestion that hundreds of colonists might have to be sacrificed to prevent the spread of a galaxy-threatening neuroparasite. “Is that your simple logic?”

The logic was simple, and unassailable, but we were obviously supposed to reject it anyway. (Sure enough, that brutal tradeoff had been avoided by the end of the episode[1], in deference to a TV audience with no stomach for downbeat endings.) Apparently, though, it was easier to swallow 16 years later, when The Wrath of Khan rephrased it as “The needs of the many outweigh the needs of the few”. That time it really caught on, went from catch-phrase to cliché in under a week. It’s the second-most-famous Spock quote ever. It’s so comforting, this paean to the Greater Good. Of course, it hardly ever happens— here in the real world, the needs of the few almost universally prevail over those of the many— but who doesn’t at least pay lip-service to the principle?

Most of us, apparently:

“…progress isn’t directly worth the life of a single person. Indirectly, fine. You can be Joseph Stalin as long as you don’t mean to kill anyone. Bomb a dam in a third world shit-hole on which a hundred thousand people depend for water and a thousand kids die of thirst but it wasn’t intentional, right? Phillip Morris killed more people than Mao but they’re still in the Chamber of Commerce. Nobody meant for all those people to die drowning in their own blood and even after the Surgeon General told them the inside scoop, they weren’t sure it caused lung cancer.

“Compare that to the risk calculus in medical research. If I kill one person in ten thousand I’m shut down, even if I’m working on something that will save millions of lives. I can’t kill a hundred people to cure cancer, but a million will die from the disease I could have learned to defeat.”

I’ve stolen this bit of dialog, with permission, from an aspiring novelist who wishes to remain anonymous for the time being. (I occasionally mentor such folks, to supplement my fantastically lucrative gig as a midlist science fiction author.) The character speaking those words is a classic asshole: arrogant, contemptuous of his colleagues, lacking any shred of empathy.

And yet, he has a point.

He’s far from the first person to make it. The idea of the chess sacrifice, the relative value of lives weighed one against another for some greater good, is as old as Humanity itself (even older, given some of the more altruistic examples of kin selection that manifest across the species spectrum). It’s a recurrent theme even in my own fiction: Starfish sacrificed several to save a continent, Maelstrom sacrificed millions to save a world (not very successfully, as it turns out). Critics have referred to the person who made those calls as your typical cold-blooded bureaucrat, but I always regarded her as heroic: willing to make the tough calls, to do what was necessary to save the world (or at least, increase the odds that it could be saved). Willing to put Spock’s aphorism into action when there is no third alternative.

And yet I don’t know if I’ve ever seen The Needs of the Many phrased quite so starkly as in that yet-to-be-published snippet of fiction a few paragraphs back.

Perhaps that’s because it’s not really fiction. Tobacco killed an estimated 100 million throughout the 20th Century, and— while society has been able to rouse itself for the occasional class-action lawsuit— nobody’s ever been charged with Murder by Cigarette, much less convicted. But if your struggle to cure lung cancer involves experiments that you know will prove fatal to some of your subjects, you’re a serial killer. What kind of society demonizes those who’d kill the Few to save the Many, while exempting those who kill the Many for no better reason than a profit margin? Doesn’t Spock’s aphorism demand that people get away with murder, so long as it’s for the greater good?

You’re not buying it, are you? It just seems wrong.

I recently hashed this out with Dave Nickle over beers and bourbons. (Dave is good for hashing things out with; that’s one of the things that make him such an outstanding writer.) He didn’t buy it either, although he struggled to explain why. For one thing, he argued, Big Tobacco isn’t forcing people to put those cancer sticks in their mouths; people choose for themselves to take that risk. But that claim gets a bit iffy when you remember that the industry deliberately tweaked nicotine levels in their product for maximum addictive effect; they did their level best to subvert voluntary choice with irresistible craving.

Okay, Dave argued, how about this: Big Tobacco isn’t trying to kill anyone— they just want to sell cigarettes, and collateral damage is just an unfortunate side effect. “Your researcher, on the other hand, would be gathering a group of people— either forcibly or through deception— and directly administering deadly procedures with the sure knowledge that one or more of those people would die, and their deaths were a necessary part of the research. That’s kind of premeditated, and very direct. It is a more consciously murderous thing to do than is selling tobacco to the ignorant. Hence, we regard it as more monstrous.”

And yet, our researchers aren’t trying to kill people any more than the tobacco industry is; their goal is to cure cancer, even though they recognize the inevitability of collateral damage as— yup, just an unfortunate side effect. To give Dave credit, he recognized this, and characterized his own argument as sophistry— “but it’s the kind of sophistry in which we all engage to get ourselves through the night”. In contrast, the “Joseph Mengele stuff— that shit’s alien.”

I think he’s onto something there, with his observation that the medical side of the equation is more “direct”, more “alien”. The subjective strangeness of a thing, the number of steps it takes to get from A to B, are not logically relevant (you end up at B in both cases, after all). But they matter, somehow. Down in the gut, they make all the difference.

I think it all comes down to trolley paradoxes.

You remember those, of course. The classic example involves two scenarios, each involving a runaway trolley headed for a washed-out bridge. In one scenario, its passengers can only be saved by rerouting it to another track—where it will kill an unfortunate lineman. In the other scenario, the passengers can only be saved by pushing a fat person onto the track in front of the oncoming runaway, crushing the person but stopping the train.

Ethically, the scenarios are identical: kill one, save many. But faced with these hypothetical choices, people’s responses are tellingly different. Most say it would be right to reroute the train, but not to push the fat person to their death— which suggests that such “moral” choices reflect little more than squeamishness about getting one’s hands dirty. Reroute the train, yes— so long as I don’t have to be there when it hits someone. Let my product kill millions— but don’t put me in the same room with them when they check out. Let me act, but only if I don’t have to see the consequences of my action.

Morality isn’t ethics, isn’t logic. Morality is cowardice— and while Star Trek can indulge The Needs of the Many with an unending supply of sacrificial red shirts, here in the real world that cowardice reduces Spock’s “axiomatic” wisdom to a meaningless platitude.

The courage of his convictions.

Trolley paradoxes can take many forms (though all tend to return similar results). I’m going to leave you with one of my favorites. A surgeon has five patients, all in dire and immediate need of transplants— and a sixth, an unconnected out-of-towner who’s dropped in unexpectedly with a broken arm and enough healthy compatible organs to save everyone else on the roster.

The needs of the many outweigh the needs of the few. Everyone knows that much. Why, look: Spock’s already started cutting.

What about you?

[1] “Operation: Annihilate!”, by Steven W. Carabatsos. In case you were wondering.