I’ve got this friend, known her since we were both grad students back in the eighties: well-regarded in her profession, well-published, even coauthored a few texts on evolutionary ecology. Keeps getting best-teacher awards for her work in the classroom. Occasionally she appears as the resident expert at the local Café Scientifique’s Valentine’s Day edition, where she talks about the myths and science of sex and reproduction. Staunch feminist (although granted, it would have been pretty tough to look around at the faculty gender ratio in mid-eighties UBC and not be one of those). James Tiptree, Jr. fan; I like to think I had something to do with that. The last person you’d expect to catch with a pair of gender blinkers — but of course the very reason I’m bringing all of this up is because she caught herself wearing one, back when she was working on her M.Sc.
She couldn’t figure out why there weren’t any females in the wild stickleback populations she kept sampling, you see. It didn’t make sense. She seined and seined, and all she ever got were males. She knew they were males, because every last one of them changed color during mating season, which is what male sticklebacks of that species do.
Except as it turns out, it’s also what females of that species do. It’s just that over however many years and however many research projects, nobody had actually noticed that before. It even came as a surprise to her supervisor, who was regarded throughout the department and beyond as Doctor Stickleback.
I knew this other guy too, back around the same time. Not so much of a friend; the kind who’d turn up his nose when he discovered you were designing an AD&D campaign instead of polishing your laundry lists for eventual publication in Nature. (Debbie, in contrast, was a kick-ass cleric with a deathly fear of zombie flesh-eaters). This other guy did his doctorate on damselflies, I think. Maybe crane flies. One of those insects with veined transparent wings anyway, little patches of which are sometimes blotched with dark pigment. He was doing a dispersal study as part of his thesis; how many flies returned to their pond of origin, how many headed out into the wild blue yonder never to return, that sort of thing. And in order to keep track of individuals, he dabbed teensy dots of paint in unique patterns onto the wings of each fly.
In the course of his defense, one of his examiners asked about the functional significance of those little blotches of pigment — the natural pre-existing ones, not the spots that had been applied experimentally. Did he know what purpose those spots served? (He did not; in fact, he hadn’t really even thought about it.) Turns out those patches act as a combination ballast/trim-tab system, to keep the insect stabilized in flight. By applying extra pigment with no thought for that system, the candidate had doomed some of his subjects to lopsided flight in endless circles, or weighed them down to the point that they might well have dropped from exhaustion shortly after escaping custody. Bottom line, any attempt to draw conclusions about “natural dispersal patterns” would be about as valid as those you could draw by letting a bunch of people out of prison after amputating their left legs below the knee. (Notwithstanding which, the dude passed. Last I heard, he even had tenure.)
So when my attention is drawn to this paper by Gowaty et al. (good popsci summary in ScienceDaily — thanks for that, Cate), my response is not OMG the entire field of behavioral ecology is undone. Rather, it’s more like Geez, you guys give scientists way too much credit. And you don’t give science nearly enough.
The short version: back in the forties, in the days before molecular genetics, some dude named Bateman needed to track fruit fly lineages in the course of a study on reproductive success. He settled on a set of mutations that were highly visible (good), but which fucked up the reproductive success of afflicted individuals (bad): curly wings (which made certain mating behaviors impossible), deformed eyes (which sailed right over the “beer goggles” phase directly into “functional blindness”). Bateman ultimately concluded that males were more promiscuous than females, based on counts of offspring so hideously deformed that a lot of them probably died before being enumerated.
Believe it or not, this is not bad science. It’s bad experimental design, which is a whole other thing — and to be fair, there weren’t a lot of methodological options back in the forties. Bateman’s technique was pretty cutting-edge given the state of the art at the time. We don’t insist on perfect experimental design.
So it’s not bad science that he designed a flawed experiment. It’s not even bad science that his cultural blinkers apparently blinded him (and others) to those flaws; the scientific method is designed to compensate for the inevitable biases that accompany any field of human endeavor. No, the bad science in this story can be found in the fact that Bateman’s study appeared in 1948, and no one got around to replicating it, and failing to confirm its findings, until sixty-four years later. In the meantime it became a classic in its field, racking up somewhere in the neighborhood of two thousand citations.
Setting aside its classic-paper status, though, we’re really just talking about another iteration of my wing-pigment anecdote. Stuff like this happens all the time; usually you hope it gets caught before you go to defense. (If you want an example of science chugging along at a more reasonable pace, check out the recent dust-up about the possibility of arsenic-based life. The paper reporting those findings came out in 2010, and the one rebutting them came out just last week — not bad, given the experimental work involved.)
Anyway. As Gowaty herself says, Bateman’s study should’ve been replicated decades ago, as soon as molecular techniques were an option. Its limitations should have been obvious to anyone who read the paper, but apparently the original results were so unsurprising — of course males fuck around more than females! — that nobody saw the point in beating such an obviously-dead horse. (By the flip side of the same token, the arsenic-microbe paper ran into such furious and immediate resistance because its findings were so completely at odds with conventional wisdom.)
That said, though, tossing out one bad study hardly destroys the field of behavioral ecology (much less threatens the Darwinian evolutionary model, as those idiots over at the Discovery Institute would like to claim). In fact, we really can’t toss out the study, not in its entirety: as Tang-Martinez and Ryder point out in their intro to a symposium on this very issue:
“Bateman’s basic theoretical insight relating mating success to RS, and predicting that the sex which has greater variance in RS will be the sex that experiences stronger sexual selection, is undeniably correct.”
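Bateman’s core insight — that the sex with greater variance in reproductive success is the one under stronger sexual selection — is easy to see with a toy calculation. The numbers below are invented purely for illustration, not drawn from Bateman’s data or anyone else’s:

```python
# Toy illustration of Bateman's insight: the sex whose reproductive
# success (RS) varies more is the sex under stronger sexual selection.
# All offspring counts below are made up for demonstration only.
from statistics import variance

# Hypothetical offspring counts for ten individuals of each sex:
# a few males monopolize matings while many get shut out entirely,
# whereas most females end up with broadly similar brood sizes.
male_rs = [0, 0, 0, 1, 2, 2, 4, 6, 9, 12]
female_rs = [2, 3, 3, 3, 4, 4, 4, 4, 5, 4]

print(f"variance in male RS:   {variance(male_rs):.2f}")
print(f"variance in female RS: {variance(female_rs):.2f}")

# Under Bateman's prediction, the higher-variance sex (here, males)
# is the one where winning or losing the mating game matters most.
assert variance(male_rs) > variance(female_rs)
```

Note that this is exactly the part of Bateman that survives replication: the variance logic itself is sound. What fell apart was the assumption that every species slots males and females into those two distributions the same way.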
What Gowaty’s paper (and a whole shitload of others) has destroyed is the justification for overgeneralization. Different species have different reproductive strategies suited to their own traits, and the relevant energetics have not changed; differential cost of reproduction still leads to different behavioral strategies. What has changed is the set of assumptions we made in applying those energetics. Turns out a lot of them were unwarranted. Sperm (or rather, ejaculate) isn’t as trivially inexpensive to produce as everyone once thought. Females across a whole range of species exert way more control over who they mate with, and how often, than anyone ever gave them credit for in the days before molecular genetics. Choosy-female-Indiscriminate-male was a decent first-cut approximation (and still applies in some cases), but the real picture is far more nuanced and complex. If you’re interested, I’d recommend Tang-Martinez and Ryder’s paper as a concise, readable précis of all the stuff I got taught in grad school that turned out to be dead wrong by the time I’d graduated. The principles survive, but the presuppositions we applied to them were all over the map.
In fact, I’m a little disappointed that the basic principles have survived — because wouldn’t it be amazing if our understanding of the whole process did turn out to be completely ass-backwards? The most exciting scientific breakthrough I can imagine would be one that shows that we were Just. Dead. Wrong. I’d even go so far as to suggest that most practicing scientists would agree with me.
Sure, people get invested. Nobody wants to throw away the findings and theories that made their careers. The history of science is jam-packed with entrenched viewpoints, bitter rivalries, and vicious attacks on those who dared challenge the approved paradigm. The astronomer Fred Hoyle went to his grave grumbling about what an idiotic theory the “Big Bang” was. (In fact, he was the one who coined that term; he meant it as an expression of ridicule). But science is a huge honking tent, and the number of people personally involved in any given feud is bound to be pretty small. The rest of us watch from the sidelines, munching popcorn, waiting to see how it all shakes out. I submit this quote from a story about the recent discovery of the Higgs:
De Roeck said he would find it a “little boring at the end if it turns out that this is just the Standard Model Higgs.” Instead, he was hoping it would be a “gateway or a portal to new physics, to new theories which are actually running nature” …
Maybe I’m naïve, but I’d like to think that this is more typical of the scientists’ attitude. Ask any kid who wants to grow up to be a scientist (caveat to my American readers: if you can find any). They all want to make discoveries. They all want to find out new stuff. When was the last time any budding science-fair contestant ever said “I want to be a marine biologist so I can confirm Ford’s findings on orca vocalization”?
Discovering something new is way cooler than confirming something old. And discovery, by definition, involves the unexpected; it involves surprise. You can’t be surprised if all you ever learn is that you were right all along.
And this, I think, is the essence of the fuckuppedness of institutional science. Because while I like to think that most people go into science with that attitude, they learn pretty quickly to shut up about it. Being consistently wrong is no way to forge an academic career. The road to success is paved with papers that refine the current model: fill in a gap here and there, stick another brick in the wall, don’t do anything to piss off the architects (even if they don’t sit on all the funding and review boards, they sure as shit hand-picked the guys who do). The road to success is studded with traffic cops and safety rails and Maximum 30kph signs. It is risk-averse.
It is also boring as hell. Because a career in science is not the same thing as the thrill of science; and while your career may hinge on being right, the rush comes from discovery. And you can’t truly discover anything unless you start out by being wrong.
Maybe that’s idealistic. Maybe it’s downright naïve. So here’s a more pragmatic take: paradigms don’t last forever, no matter who lines up in their defense. Steady State lost to the Big Bang. Lamarck lost to Darwin. Conventional wisdom is like a climax forest: it is vast, and it reduces visibility, and it starves new growth in the shade of an upper canopy that grabs all the light. But eventually the deadwood accumulates past some critical threshold. Inevitably, lightning strikes.
Someday, maybe, someone will discover arsenic-based life, and the data will be solid. Someday, maybe, we’ll prove that God exists. Some day — if we’re very, very lucky — we’ll discover that we were wrong about everything. Think of the opportunities that open up when a dominant ecosystem collapses. Think of all the sunlight and space and nutrients available for new growth in the wake of a devastating forest fire. Think of the grant potential, the opportunities for professional advancement, in a field where all the established authorities have just had their asses handed to them in pieces.
Why, even someone who’s been out of the field for decades might have a shot at tenure.
There are also, admittedly, those very rare cases where the mystery is so grand, the question so open-ended, that there’s been neither the time nor the data for truly entrenched views to develop. Nobody shat on Watson and Crick when they published the structure of DNA, for example (although they did belatedly shit on them for marginalizing the contributions of Rosalind Franklin).