Near the end of the recent German movie “Er ist wieder da” (“Look Who’s Back”), Adolf Hitler— transported through time to the year 2015— is picking up where he left off. On the roof of the television studio that fueled his resurgence (the network thought they were just exploiting an especially tasteless Internet meme for ratings), the sad-sack freelancer who discovered “the world’s best Hitler impersonator” confronts his Frankenstein’s monster— but Hitler proves unkillable. Even worse, he makes some good points:
“In 1933, people were not fooled by propaganda. They elected a leader who openly disclosed his plans with great clarity. The Germans elected me… ordinary people who chose to elect an extraordinary man, and entrust the fate of the country to him.
What do you want to do, Sawatzki? Ban elections?”
It’s a good movie, hilarious and scary and sociologically plausible (hell, maybe sociologically inevitable), and given that one of Hitler’s lines is “Make Germany Great Again” it’s not surprising that it’s been rediscovered in recent months. Imagine a cross between “Borat”, “The Terminator”, and “Springtime for Hitler”, wrapped around a spot-on re-enactment of that Hitler-in-the-Bunker meme.
But that rooftop challenge: that, I think, really cuts to the heart of things: What do you want to do, Sawatzki? Ban elections?
I feel roughly the same way every time I read another outraged screed about Cambridge Analytica.
The internet’s been all a’seethe with such stories lately. The details are arcane, but the take-home message is right there in the headlines: The Rise of the Weaponized AI Propaganda Machine; Will Democracy Survive Big Data and Artificial Intelligence?; Robert Mercer: the big data billionaire waging war on mainstream media.
The executive summary goes something like this: An evil right-wing computer genius has developed scarily effective data-scraping techniques which— based entirely on cues gleaned from social media— know individual voters better than do their own friends, colleagues, even family. This permits “behavioral microtargeting”: campaign messages customized not for boroughs or counties or demographic groups, but for you. Individually. A bot for every voter.
Therefore democracy itself is in danger.
Put aside for the moment the fact that the US isn’t a functioning democracy anyway (unless you define “democracy” as a system in which— to quote Thomas Piketty— “When a majority of citizens disagrees with economic elites and/or with organized interests, they generally lose”). Ignore any troublesome doubts about whether the same folks screaming about Cambridge Analytica would be quite so opposed to the tech if it had been used to benefit Clinton instead of Trump. (It’s not as though the Dems didn’t have their own algorithms, their own databased targeting systems; it’s just that those algos really sucked.) Put aside the obvious partisan elements and focus on the essential argument: the better They know you, the more finely They can tune their message. The more finely They tune their message, the less freedom you have. To quote directly from Helbing et al. over on the SciAm blog,
“The trend goes from programming computers to programming people.” [breathless italics courtesy of the original authors]
Or from Berit Anderson, over at Medium.com:
“Instead of having to deal with misleading politicians, we may soon witness a Cambrian explosion of pathologically-lying political and corporate bots that constantly improve at manipulating us.”
You’d expect me to be all over this, right? What could be more up my alley than Machiavellian code which treats us not as autonomous beings but as physical systems, collections of inputs and outputs whose state variables show not the slightest trace of Free Will? You can almost see Valerie tapping her arrhythmic tattoos on the bulkhead, reprogramming the crew of the Crown of Thorns without their knowledge.
And I am all over it. Kind of. I shrugged at the finding that it took Mercer’s machine 150 Facebook “Likes” to know someone better than their parents did (hell, you’d know me better than my parents did based on, like, three), but I was more impressed when I learned that 300 “Likes” is all it would take to know me better than Caitlin does. And no one has to convince me that sufficient computing power, coupled with sufficient data, can both predict and manipulate human behavior.
But so what? ‘Twas ever thus, no?
No, Helbing and his buddies assert:
“Personalized advertising and pricing cannot be compared to classical advertising or discount coupons, as the latter are non-specific and also do not invade our privacy with the goal to take advantage of our psychological weaknesses and knock out our critical thinking.”
Oh, give me a fucking break.
They’ve been taking advantage of our psychological weaknesses to knock out our critical thinking skills since before the first booth babe giggled coquettishly at the Houston Auto Show, since the first gurgling baby was used to sell Goodyear radials, since IFAW decided they could raise more funds if they showed Loretta Swit hugging baby seals instead of giant banana slugs. Advertising tries to knock out your critical thinking by definition. Every tasteless anti-abortion poster, every unfailingly cute child suffering from bowel disease in the local bus shelter, every cartoon bear doing unnatural things with toilet paper is an attempt to rewire your synapses, to literally change your mind.
Ah, but those aren’t targeted to individuals, are they? Those are crude hacks of universal gut responses, the awww when confronted with cute babies, the hubba hubba when tits are shoved in the straight male face. (Well, almost universal; show me a picture of a cute baby and I’m more likely to vomit than coo.) This is different, Mercer’s algos know us personally. They know us as well as our friends, family, lovers!
Maybe so. But you know who else knows us as well as our friends, family and lovers? Our friends, family, and lovers. The same folks who sit across from us at the pub or the kitchen table, who cuddle up for a marsupial cling when the lights go out. Such people routinely use their intimate knowledge of us to convince us to see a particular movie or visit a particular restaurant— or, god forbid, vote for a particular political candidate. People who, for want of a better word, attempt to reprogram us using sound waves and visual stimuli; they do everything the bots do, and they probably still do it better.
What do you want to do, Sawatzki? Ban advertising? Ban debate? Ban conversation?
I hear that Scotsman, there in the back: he says we’re not talking about real debate, real conversation. When Cambridge Analytica targets you there’s no other being involved; just code, hacking meat.
As if it would be somehow better if meat were hacking meat. The prediction that half our jobs will be lost to automation within the next couple of decades is already a tired cliché, but most experts don’t react to such news by demanding the repeal of Moore’s Law. They talk about retraining, universal basic income— adaptation, in a word. Why should this be any different?
Don’t misunderstand me. The fact that our destiny is in the hands of evil right-wing billionaires doesn’t make me any happier than it makes the rest of you. I just don’t see the ongoing automation of that process as anything more than another step along the same grim road they’ve been driving us down for decades. Back in 2008 and 2012 I don’t remember anyone howling with outrage over Obama’s then-cutting-edge voter-profiling database. I do remember a lot of admiring commentary on his campaign’s ability to “get out the vote”.
Curious that the line between grass-roots activism and totalitarian neuroprogramming should fall so neatly between Then and Now.
Cambridge Analytica’s psyops tech doesn’t so much “threaten democracy” as drive one more nail into its coffin. For anyone who hasn’t been paying attention, the corpse has been rotting for some time now.
‘Course, that doesn’t mean we shouldn’t fight back. There are ways to do that, even on an individual level. I’m not talking about the vacuous aspirations peddled over on SciAm, by folks who apparently don’t know the difference between a slogan and a strategy (Ensure that people have access to their data! Make government accountable!) I’m talking about things you can do right now. Easy things.
The algos eat data? Stop feeding them. Don’t be a Twit: if all Twitter’s other downsides aren’t enough to scare you off, maybe the prospect of starving the beast will lure you away. If you can’t bring yourself to quit Facebook, at least stop “liking” things— or even better, “Like” things that you actually hate, throw up chaff to contaminate the data set and make you a fuzzier target. (When I encounter something I find especially endearing on Facebook, I often tag it with one of those apoplectic-with-rage emojis). Get off Instagram and GotUrBalls. Use Signal. Use a fucking VPN. Make Organia useless to them.
What’s that you say? Thousands of people around the world are just dying to know your favorite breadfruit recipe? Put it in a blog. It won’t stop bots from scraping your data, but at least they’ll have to come looking for you; you won’t be feeding yourself into a platform that’s been explicitly designed to harvest and resell your insides.
The more of us who refuse to play along— the more of us who cheat by feeding false data into the system— the less we have to fear from code that would read our minds. And if most people can’t be bothered— if all that clickbait, all those emojis and upward-pointing thumbs are just too much of a temptation— well, we do get the government we deserve. Just don’t complain when, after wading naked through the alligator pool, something bites your legs off.
I’m going to let Berit Anderson play me offstage:
“Imagine that in 2020 you found out that your favorite politics page or group on Facebook didn’t actually have any other human members, but was filled with dozens or hundreds of bots that made you feel at home and your opinions validated? Is it possible that you might never find out?”
I think she intends this as a warning, a dire If This Goes On portent. But what Anderson describes is the textbook definition of a Turing Test, passed with flying colors. She sees an internet filled with zombies: I see the birth of True AI.
Of course, there are two ways to pass a Turing Test. The obvious route is to design a smarter machine, one that can pass for human. But as anyone who’s spent any time on a social platform knows, people can be as stupid, as repetitive, and as vacuous as any bot. So the other path is to simply make people dumber, so they can be more easily fooled by machines.
I’m starting to think that second approach might be easier.