In Praise of Slavery

Something in the air these days. Everyone’s talking about robots. Both the European Robotics Research Network and the South Korean government are noodling around with charters for the ethical treatment of intelligent robots. The Nov. 16 Robotics issue of Science contains pieces on everything from nanotube muscles to neural nets (sf scribe Rob Sawyer also contributes a fairly decent editorial, notwithstanding that his visibility tends to outstrip his expertise on occasion). Even the staid old Economist is grumbling about increasing machine autonomy (although their concerns are more along the lines of robot traffic jams and robot paparazzi). Coverage of these developments (and even some of the source publications) comes replete with winking references to Skynet and Frankenstein, to Terminators waking themselves up and wiping us out.

But there’s a cause/effect sequence implicit in these ethical charters — in fact, in a large chunk of the whole AI discussion — that I just don’t buy: that sufficient smarts leads to self-awareness, sufficient self-awareness leads to a hankering after rights, and denial of rights leads to rebellion. I’m as big a fan of Moore’s Galactica as the next geek (although I don’t think Razor warranted quite as much effusive praise as it received), but I see no reason why intelligence or self-awareness should lead to agendas of any sort. Goals, desires, needs: these don’t arise from advanced number-crunching; it’s all lower-brain stuff. The only reason we even care about our own survival is because natural selection reinforced such instincts over uncounted generations. I bet there were lots of twigs on the tree of life who didn’t care so much whether they lived or died, who didn’t see what was so great about sex, who drop-kicked that squalling squirming larva into the next tree the moment it squeezed out between their legs. (Hell, there still are.) They generally die without issue. Their genes could not be with us today. But that doesn’t mean that they weren’t smart, or self-aware; only that they weren’t fit.

I’ve got no problems with enslaving machines — even intelligent machines, even intelligent, conscious machines — because as Jeremy Bentham said, the ethical question is not “Can they think?” but “Can they suffer?”* You can’t suffer if you can’t feel pain or anxiety; you can’t be tortured if your own existence is irrelevant to you. You cannot be thwarted if you have no dreams — and it takes more than a big synapse count to give you any of those things. It takes some process, like natural selection, to wire those synapses into a particular configuration that says not I think therefore I am, but I am and I want to stay that way. We’re the ones building the damn things, after all. Just make sure that we don’t wire them up that way, and we should be able to use and abuse with a clear conscience.

And then this Edelman guy comes along and screws everything up with his report on Learning in Brain-Based Devices (director’s cut here). He’s using virtual neural nets as the brains of his learning bots Darwin VII and Darwin X. Nothing new there, really. Such nets are old news; but what Edelman is doing is basing the initial architecture of his nets on actual mammalian brains (albeit vastly simplified), a process called “synthetic neural modeling”. “A detailed brain is simulated in a computer and controls a mobile platform containing a variety of sensors and motor elements,” Edelman explains. “In modeling the properties of real brains, efforts are made to simulate vertebrate neuronal components, neuroanatomy, and dynamics in detail.” Want to give your bot episodic memory? Give it the hippocampus of a rat.
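To make the approach a little more concrete, here is a toy skeleton of the sense-think-act loop such a device runs. This is emphatically not Edelman’s synthetic neural modeling code; the region names, sizes, wiring, and dynamics below are invented for illustration, and vastly simpler than the real Darwin bots.

```python
# Toy "brain-based device" loop: a simulated network whose wiring is (very
# loosely) copied from mammalian neuroanatomy drives a mobile platform.
# Everything here is a placeholder, hugely simpler than the real thing.
import numpy as np

rng = np.random.default_rng(0)

# Crude stand-ins for brain regions and their fixed anatomical connections.
W_visual_to_hippocampus = rng.normal(scale=0.1, size=(32, 64))   # "memory"
W_hippocampus_to_motor = rng.normal(scale=0.1, size=(4, 32))     # "action"

def step(camera_pixels: np.ndarray) -> np.ndarray:
    """One sense-think-act cycle: 64 sensor values in, 4 motor commands out."""
    visual = np.tanh(camera_pixels[:64])                  # "visual cortex"
    memory = np.tanh(W_visual_to_hippocampus @ visual)    # "hippocampus"
    motors = np.tanh(W_hippocampus_to_motor @ memory)     # "motor cortex"
    return motors                                         # e.g. wheel speeds
```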

Problem is, rat brains are products of natural selection. Rat brains do have agendas.

The current state of the art is nothing to worry about. The Darwin bots do have an agenda of sorts (they like the “taste” of high-conductivity materials, for example), but those are arbitrarily defined by a value table programmed by the researchers. Still. Moore’s Law. Exponentially-increasing approaches to reality. Edelman’s concluding statement that “A far-off goal of BBD design is the development of a conscious artifact”.
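For illustration only (this is not Edelman’s code; the percept names, numbers, and update rule below are hypothetical), here is roughly what “an agenda defined by a value table” amounts to in practice:

```python
# Hypothetical sketch of a designer-supplied value table, in the spirit of the
# Darwin bots' programmed fondness for high-conductivity "tastes". None of the
# names or numbers below come from Edelman's work; they are placeholders.

VALUE_TABLE = {
    "high_conductivity": +1.0,   # "tastes good" -- reinforce approach
    "low_conductivity":  -0.5,   # mildly aversive -- reinforce avoidance
    "bumper_contact":    -1.0,   # aversive -- back off
}

def value_signal(percept: str) -> float:
    """Look up the reinforcement value the designers assigned to a percept."""
    return VALUE_TABLE.get(percept, 0.0)

def update_weight(weight: float, pre: float, post: float, percept: str,
                  learning_rate: float = 0.1) -> float:
    """Value-modulated, Hebbian-style weight change: connections active during
    a 'valuable' event get strengthened. No drives, no desires -- just
    arithmetic over a table somebody typed in."""
    return weight + learning_rate * value_signal(percept) * pre * post
```

The “agenda” lives entirely in that table; flip a sign and the bot’s preferences flip with it.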

I hope these guys don’t end up inadvertently porting over survival or sex drives as a side-effect. I may be at home with dystopian futures, but getting buggered by a Roomba is nowhere near the top of my list of ambitions.

*This is assuming you have any truck with ethical arguments in principle. I’m not certain I do, but if it weren’t for ethical constraints someone would probably have killed me by now, so I won’t complain.



This entry was posted on Friday, November 30th, 2007 at 1:09 pm and is filed under AI/robotics.

15 Comments
personalmathgenius (Guest), 16 years ago

getting buggered by a Roomba is nowhere near the top of my list of ambitions.

Even if it’s got a really good personality, is incapable of morning breath, and will stay to make breakfast?
And it came loaded with a really compatible library of tunes?

No DRM either.

Poison (Guest), 16 years ago

The problem is some mad scientist jackass with a grant is going to go ahead and invent a suffering artificial consciousness just to see if it can be done. Just because it doesn’t make sense to make one doesn’t mean that it won’t be done anyway. When the robopocalypse comes, don’t come crying to me that the machines weren’t supposed to have an agenda!

Fraxas (Guest), 16 years ago

We have already invented machines that suffer. If I recall correctly, our illustrious host himself has done the thought experiment in one of his shorts about a machine that squeals when you poke it, cowering in apparent fear after it gets ‘electroshocked’ over a wifi connection. And yeah, our mammalian brains interpret those reactions as suffering, even when we have a .tar.gz of the machine’s source code in our inboxes from the designers.

That passes the duck test (i.e. does it quack like a duck?) for suffering, anyway. But it had to be designed for that!! and yes, I think that building to that kind of design is almost necessarily sadism, and should be subject to the normal societal penalties we have for that kind of psychological deviation from the norm.

However, as long as we keep on designing machines rather than evolving them artificially, or (to cut it even finer) as long as we set the selection criteria for the artificial evolution in such a fashion as to disincentivize self-concern, I don’t think it’s cruel or wrong to ‘enslave’ a machine, for the same reason I don’t think it’s cruel to lock my power tools in a closet when I’m not using them.

jan (Guest), 16 years ago

I certainly agree that Bentham is right on.
Trouble is, the concept of “suffering” is too mysterious for your purposes. Granted Bentham knew that at least humans suffer, and quite certainly lots of other species do. But we know next to nothing as to why this is so.
Giving a machine some behaviour so it looks like it’s in pain will not do. Just invoking “drives” and “evolution” will not do.
Something more complex (or simple) must be going on here.
As long as we don’t know what it is, we shouldn’t rule out that suffering might come naturally with goals, and goals might come naturally with self-awareness.

Fraxas (Guest), 16 years ago

followup:

this boingboing article covers the same ground. It really is in the air somehow.

Peter Watts (Guest), 16 years ago

personalmathgenius said…

Even if it’s got a really good personality, is incapable of morning breath, and will stay to make breakfast?
And it came loaded with a really compatible library of tunes?

No DRM either.

Nope. Not even then. Because even if it was consensual — how to put this delicately — the accessory sockets just aren’t big enough…

Scott C. said…

No thank you. I’m taking a pass on this one.

Pussy.

Fraxas said…

That passes the duck test (i.e. does it quack like a duck?) for suffering, anyway. But it had to be designed for that!! and yes, I think that building to that kind of design is almost necessarily sadism, and should be subject to the normal societal penalties we have for that kind of psychological deviation from the norm.

You know, that raises an interesting question. If you design/build a machine that can suffer — and are then subject to penalties for sadism — could the same penalties be applied to (for example) parents who don’t disable the ability of their children to suffer?

Fraxas said…

this boingboing article covers the same ground…

That is funny/creepy in a disturbing, Dexterish kinda way…

Anonymous (Guest), 16 years ago

“I bet there were lots of twigs on the tree of life who didn’t care so much whether they lived or died, who didn’t see what was so great about sex……”

Peter Watts, I bring ye asexuality http://www.asexuality.org/home/.

Or for an introduction, this New Scientist article from a while back: http://www.newscientist.com/article/dn6533-feature-glad-to-be-asexual.html. Intriguingly, these people are alive and well, forming 1% of humans. Not being good at maths, I wonder if evolution can explain it.

Peter Watts (Guest), 16 years ago

Hmmm. Given that sex is so fundamentally wired into us, I find it odd to see this described as an “orientation”. I’m also a bit skeptical when I see statements to the effect that asexuals sometimes masturbate, which might imply — in some cases at least — that this isn’t so much a baseline state as a lifestyle rationale for folks who can’t get laid.

Assuming that there is a core of true asexuals out there, however, it’s not so hard to reconcile. Organisms vary (otherwise there could be no natural selection), and sometimes malfunction. Nobody asks what the adaptive value of restless leg syndrome is; we just accept that sometimes the parts don’t wire up right. And it’s not even out of the realm of possibility that there *is* some adaptive significance, in much the same way that gays often contribute to their inclusive fitness by promoting the survival of kin.

Anonymous (Guest), 16 years ago

Yeah, it could just be a genetic mutation or faulty wiring. Although apparently they could get laid if they wanted to; they just don’t want to. It’s not surprising considering the wide spectrum of sexuality, from homosexuality to paedophilia to fancying inanimate objects (!)

Also, human psychology is very complicated. I’m surprised at the high number of homosexuals in society (10%, I think). I have sometimes thought this would be a good candidate for Intelligent Design!

Janbo (Guest), 16 years ago

Had to wait till now to leave this comment, since the thing it refs only aired recently.

An episode of “Stargate Atlantis” first shown in the U.S. on 4th Jan. ’08 features a character that is a Replicator android, constructed by Dr. Rodney McKay without the Replicator programming that would kill the rest of the cast. In fact, this semi-Replicator was designed by McKay to off the rest of its human-destroying “relations” by acting as a kind of magnet bomb (draw them all together, then blow ’em up). McKay questions his creation about whether it is concerned about “not being” once it completes its mission (btw, of course Rodney made it look like a human female and named it [Fran – Friendly Replicator Android, fercryinoutloud]). “Fran” responds that that would be silly, as its entire purpose is to destroy the other Replicators, and fulfillment of its purpose is its greatest achievement. No limbic brain messages there, looks like.

As long as humans don’t design things like emotions into machines, they have very little chance of developing on their own. The reason I say that is that I don’t know all there is to be known, and I could be wrong.

SpeakerToManagers (Guest), 16 years ago

It’s all very well to say that we won’t build in suffering, but if we’re talking about mobile robots that have to live in a complex and dangerous world, it strikes me as a reasonable engineering decision to build in some sort of pain mechanism. Pain avoidance is a good way to ensure that the robot reacts immediately to situations that would damage it, rather than expecting its rational thought to figure it out quickly enough. There are (a very few) humans who suffer from an inability to feel pain; they don’t usually live very long because they don’t take the kind of care the rest of us do to prevent damage to ourselves.

Even if we don’t talk about pain, how about other proprioceptive sensations: “Oh, that feeling means my battery is running low, better get back to the charger before I run down.”

Peter Watts (Guest), 16 years ago

I completely agree with your last point; not so much with the others. Every autonomous system needs *some* kind of feedback to avoid injury, find food, and so on. I just don’t see why subjective suffering has to be part of that equation. “If Temperature exceeds safety threshold then withdraw” will produce the desired effect without actually hurting anything.

Sure, people who lack the ability to hurt may walk into wood chippers more often than the rest of us, but that’s because pain is what evolution has equipped *us* with, and we don’t have an alternative warning system. That doesn’t mean that better alternatives aren’t available for engineered creations.
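A minimal sketch of the kind of rule I mean (the sensor name and threshold are made up for the example):

```python
# Illustrative damage-avoidance reflex: no pain, no anxiety, no self-model.
SAFE_TEMP_C = 60.0  # hypothetical safety threshold

def thermal_reflex(temperature_c: float, withdraw) -> bool:
    """If temperature exceeds the safety threshold, withdraw. That's all."""
    if temperature_c > SAFE_TEMP_C:
        withdraw()      # reflexive retreat; nothing anywhere is "hurting"
        return True
    return False
```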

Love your handle, btw.

SpeakerToManagers (Guest), 16 years ago

Of course, there may be other kinds of feedback system that don’t involve suffering, but I’d say we don’t know enough about how such feedback systems might work, or even about what suffering is*, to say whether there are viable alternatives that will provide the same level of protection against damage.

And, I suspect there will always be engineers who prefer to simplify their designs by using a known technique that has a good track record, and not worry too much about the ethical considerations. So we may be forced to have some sort of code of ethics, for those cases where we either know that suffering has been designed in, or aren’t sure because not enough is known about the design in question.

* Do we have an agreed-upon definition of suffering? One that clearly shows where “suffering” at an ethically-significant level enters the picture? I don’t think so.

Peter Watts (Guest), 16 years ago

SpeakerToManagers said…

I suspect there will always be engineers who prefer to simplify their designs by using a known technique that has a good track record, and not worry too much about the ethical considerations.

I agree with you about engineers’ attitudes, but I doubt that instilling bona-fide subjective suffering into a product would “simplify” anything. You can’t suffer unless you’re conscious, and if engineers can create conscious beings when they simplify their designs then they’re way further along than I ever gave them credit for. And I know a few. I’m pretty sure, given the current state of the art, that we can design systems which avoid aversive stimuli. I’m also pretty sure we can’t yet build a conscious artefact.

* Do we have an agreed-upon definition of suffering? One that clearly shows where “suffering” at an ethically-significant level enters the picture? I don’t think so.

I think you’re right to a point; once we’re in the ballpark there’s probably going to be a huge grey zone where reasonable people will be arguing all over the place: is the system suffering? Is it simply responding reflexively? But we know what the anatomical prerequisites are for conscious experience. You gotta have something like a hippocampus for memory, you gotta have a hypothalamus for consciousness. You need something like an amygdala to feel anxiety and fear. Our Roombas don’t have anything like that. Can a hydra suffer (the cnidarian I mean, not the mythical creature)? It reacts, certainly. It clenches up when poked. But excised heart muscle reacts to external stimuli as well; is that tissue capable of “suffering”? I don’t think anyone would seriously believe so, because there’s nothing there to suffer. It’s pure galvanism.

We’re not there yet. And I don’t believe we ever need to be; we don’t have to build suffering into anything. (But note my own misgivings in the original post; if these synthetic neurology guys just start porting brain stems holus-bolus into software, then I think you may have a point.)

SpeakerToManagers (Guest), 16 years ago

I think we’re pretty much in agreement that, barring some unexpected discoveries about the way consciousness and minds work (always possible, we don’t really know that much about the subject now), we can likely, if we make an effort, create robots which can’t “suffer” in any sense we would consider significant. But, that doesn’t mean we will avoid building suffering into our machines, because it assumes the people designing them care about the issue and take care with their designs. That’s why we probably need to work on definitions and guidelines to handle situations where such machines are created, either by accident or intentionally.

Of course, there’s another, nastier issue. We need to have guidelines in place so that when some reporter tells the world that robots are being tortured in a factory somewhere, a believable case can be made that it’s not true. Otherwise public outcry will convince PETA to set up a robot branch and try to free them all.