You all remember Starship Troopers, right?
That slim little YA contained a number of beer-worthy ideas, but the one that really stuck with me was the idea of earned citizenship— that the only people allowed to vote, or hold public office, were those who’d proven they could put society’s interests ahead of their own. Heinlein’s implementation was pretty contrived— while the requisite vote-worthy altruism was given the generic label of “Federal Service”, the only such service on display in the novel was the military sort. I’ll admit that thrusting yourself to the front lines of a war with genocidal alien bugs does show a certain willingness to back-burner your own interests— but what about firefighting, or disaster relief, or working to clean up nuclear accidents at the cost of your genetic integrity? Do these other risky, society-serving professions qualify? Or are they entirely automated now (and if that tech exists, why isn’t the Mobile Infantry automated as well)?
But I digress. While Heinlein’s implementation may have been simplistic and his interrogation of it wanting, the basic idea— that the only way to get a voice in the group is to be willing to sacrifice yourself for the group— is a fascinating and provocative one. If every member of your group is a relative, you’d be talking inclusive fitness. Otherwise, you’re talking about institutionalized group selection.
Way back when I was in grad school, “group selection” wasn’t even real, not in the biological sense. It was worse than a dirty phrase; it was a naïve one. “The good of the species” was a fairy tale, we were told. Selection worked on individuals, not groups; if a duck could grab resources for herself at the expense of two or three conspecifics, she’d damn well do that even if fellow ducks paid the price. Human societies could certainly learn to honour the needs of the many over the needs of the few, but that was a learned response, not an evolved one. (And even when learned, it wasn’t internalized very well— just ask any die-hard capitalist why communism failed.)
I’ve lost count of the papers I read (and later, taught) which turned a skeptical eye to cases of so-called altruism in the wild— only to find that every time, those behaviors turned out to be selfish when you ran the numbers. They either benefited the “altruist”, or someone who shared enough of the “altruist’s” genes to fit under the rubric of inclusive fitness. Dawkins’ The Selfish Gene— which pushed the model incrementally further by pointing out that it was actually the genes running the show, even though they pulled phenotypic strings one-step-removed— got an especially warm reception in that environment.
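For anyone who never ran those numbers themselves, the usual yardstick is Hamilton’s rule: an “altruistic” act pays off, gene-wise, whenever the benefit to the recipient, discounted by relatedness, outweighs the cost to the actor. A toy check in Python, purely illustrative— the function name and the fitness figures are mine, not from any particular paper:

```python
def hamiltons_rule(relatedness, benefit, cost):
    """Hamilton's rule: an 'altruistic' trait can spread when r * B > C,
    where r is genetic relatedness to the recipient, B the recipient's
    fitness gain, and C the actor's fitness cost."""
    return relatedness * benefit > cost

# Made-up numbers: risking one offspring-equivalent to gain three for a relative.
print(hamiltons_rule(relatedness=0.5, benefit=3.0, cost=1.0))    # True: full sibling, "altruism" pays
print(hamiltons_rule(relatedness=0.125, benefit=3.0, cost=1.0))  # False: first cousin, it doesn't
```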
But the field moved on after I left it; as it subsequently turned out, the models discrediting group selection hinged on some pretty iffy parameter values. I’m not familiar with the details— I haven’t kept up— but as I understand it the pendulum has swung a bit closer to the midpoint. Genes are still selfish, individuals still subject to selection— but so too are groups. (Not especially radical, in hindsight. It stands to reason that if something benefits the group, it benefits many of that group’s members as well. Even Darwin suggested as much way back in Origin. Call it trickle-down selection.)
So. If group selection is a thing in the biological sense, then we need not look to the Enlightened Society to explain the existence of the martyrs, the altruists, and the Johnny Ricos of the world. Maybe there’s a biological mechanism to explain them.
Enter oxytocin, back for a repeat performance.
You’re all familiar with oxytocin. The Cuddle Hormone, Fidelity in an Aerosol, the neuropeptide that keeps prairie voles monogamous in a sea of mammalian promiscuity. You may even know about its lesser-known dark side— the kill-the-outsider imperative that complements love-the-tribe.
Now, in the Proceedings of the National Academy of Sciences, Shalvi and De Dreu pry open another function of this biochemical Swiss Army Knife. Turns out oxytocin makes you lie— but only if the lie benefits others. Not if it benefits only you.
[Figure: one of several illustrations which are clearer than the text.]
The experiment was almost childishly simple: your treatment groups snort oxytocin, your controls snort a placebo. You tell each participant that they’ve been assigned to a group, that the money they get at the end of the day will be an even third of what the whole group makes. Their job is to predict whether the toss of a virtual coin (on a computer screen) will be heads or tails; they make their guess, but keep it to themselves; they press the button that flips the coin; then they report whether their guess was right or wrong. Of course, since they never recorded that guess prior to the toss, they’re free to lie if they want to.
Call those guys the groupers.
Now repeat the whole thing with a different group of participants— but this time, although their own personal payoffs are the same as before, they’re working solely for themselves. No groups are involved. Let’s call these guys soloists.
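To make the payoff structure concrete, here’s a minimal simulation sketch of the setup. The participant counts, toss counts, and lie rates are made-up placeholders, not the paper’s actual numbers— the point is only that self-reported hit rates above 50% are the signature of lying:

```python
import random

def run_condition(n_participants=60, n_tosses=10, lie_rate=0.3):
    """Simulate mean self-reported 'hit rate' for one condition.

    Each participant privately guesses a fair coin toss, then self-reports
    whether the guess was correct. Honest reporting averages ~50%; lie_rate
    is the (hypothetical) chance a miss gets upgraded to a hit when money
    is on the line.
    """
    reported_rates = []
    for _ in range(n_participants):
        hits = 0
        for _ in range(n_tosses):
            guess_correct = random.random() < 0.5   # fair coin, honest outcome
            if guess_correct or random.random() < lie_rate:
                hits += 1                            # report a hit, truthfully or not
        reported_rates.append(hits / n_tosses)
    return sum(reported_rates) / n_participants

# Hypothetical lie rates, just to show the shape of the reported result:
# oxytocin groupers lie more than placebo groupers; soloists don't differ.
print("placebo groupers :", run_condition(lie_rate=0.20))
print("oxytocin groupers:", run_condition(lie_rate=0.35))
print("placebo soloists :", run_condition(lie_rate=0.20))
print("oxytocin soloists:", run_condition(lie_rate=0.20))
```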
I’m leaving out some of the methodological details because they’re not all that interesting: read the paper if you don’t believe me (warning: it is not especially well-written). The baseline results are pretty much what you’d expect: people lie to boost their own interests. If high predictive accuracy gets you money, bingo: you’ll report a hit rate well above the 50:50 ratio that random chance would lead one to expect. If high accuracy costs you money, lo and behold: self-reported accuracy drops well below 50%. If there’s no incentive to lie, you’ll pretty much tell the truth. This happens right across the board: groupers and soloists, controls and treatments. Yawn.
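If you want a sense of what “well above 50:50” means in practice, a quick binomial back-of-the-envelope does it— again assuming ten tosses per participant, which is just the placeholder figure from the sketch above:

```python
from math import comb

def prob_at_least(k, n=10, p=0.5):
    """Chance of honestly scoring at least k hits out of n fair 50:50 guesses."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# An honest guesser reports 8+ hits out of 10 only ~5.5% of the time,
# so a whole condition averaging that high is a clear signature of lying.
print(round(prob_at_least(8), 4))   # ~0.0547
```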
But here’s an interesting finding: although both placebo and oxytocin groupers highball their hit rates when they stand to gain by it, the oxytocin groupers lie significantly more than their placebo controls. Their overestimates are more extreme, and their response times are shorter. If you’re a grouper, oxytocin makes you lie more, and lie faster.
If you’re a soloist, though, oxytocin has no effect. You lie in the name of self-interest, but no more than the controls do. The only difference is, this time you’re working for yourself; the groupers were working on behalf of themselves and other people.
So under the influence of oxytocin, you’ll only lie a little to benefit yourself. You’ll lie a lot to benefit a member of “your group”— even if you’ve never met any of “your group”, even if you have to take on faith that “your group” even exists. You’ll commit a greater sin for the benefit of a social abstraction.
I find that interesting.
There are caveats, of course. The study only looked at whether we’d lie to help others at no cost to ourselves; I’d like to see them take the next step, test whether the same effect manifests when helping the other guy actually costs you. And of course, when I say “You” I mean “adult Dutch males”. This study draws its sample, even more than most, from the WEIRD demographic— not just Western, Educated, Industrialized, Rich, and Democratic, but exclusively male to boot. I don’t have a problem with this in a pilot study; you take what you can get, and when you’re looking for subtle effects it only makes sense to minimize extraneous variability. But it’s not implausible that cultural factors might leave an imprint even on these ancient pathways. The effect is statistically real, but the results will have to replicate across a far more diverse sample of humanity before scientists can make any claims about its universality.
Fortunately, I’m not a scientist any more. I can take this speculative ball and run with it, anywhere I want.
As a general rule, lying is frowned upon across pretty much any range of societies you’d care to name. Most people who lie do so in violation of their own moral codes— and those codes cover a whole range of behaviors. Most would agree that theft is wrong, for example. Most of us get squicky at the thought of assault, or murder. So assuming that Shalvi and De Dreu’s findings generalize to anything that might induce feelings of guilt— which, I’d argue, is more parsimonious than a trigger so specific that it trips only in the presence of language-based deceit— what we have here is a biochemical means of convincing people to sacrifice their own morals for the good of the group.
Why, a conscientious objector might even sign up to fight the Bugs.
Once again, the sheer abstractness of this study is what makes it fascinating: the fact that the effect manifests in a little white room facing a computer screen, on behalf of a hypothetical tribe never even encountered in real life. When you get down to the molecules, who needs social bonding? Who needs familiarity, and friendship, and shared experience? All that stuff just sets up the conditions necessary to produce the chemical; what need of it, when you can just shoot the pure neuropeptide up your nose?
It’s only the first step, of course. I’m sure we can improve it if we set our minds to the task. An extra amine group here, an excised hydroxyl there, and we could engineer a group-selection molecule that makes plain old oxytocin look like distilled water.
A snort of that stuff and everyone in the Terran Federation gets to vote.