Pursuant to Last Post’s Comment Thread about Machine Suffering…

… as chance would have it, here’s an excerpt from a list of interview questions I’m currently working through from ActuSF in France:

  • You raise the question of artificial intelligence with the “smart gels”, and in the appendix you cite the works of Masuo Aizawa and Charles Thorpe on neural networks. Do you believe that in the future it will become possible for the human race to create real “artificial brains”? Don’t you think that the brain’s complexity, and our limited understanding of it, will always limit the resemblance between AI and human intelligence?

I think it depends on how the AI is derived. So much of what we are — every fear, desire, emotional response — has its origin in brain structures that evolved over millions of years.  Absent those structures, I’m skeptical that an AI would experience those reactions; I don’t buy the Terminator scenario in which Skynet feels threatened and acts to preserve its own existence because Skynet, however intelligent it might be, doesn’t have a limbic system and thus wouldn’t fear for its life the way an evolved organism would. Intelligence, even self-awareness, doesn’t necessarily imply an agenda of any sort.

The exception to this would be the brute-force brain-emulation experiments currently underway in Switzerland and (if I recall correctly) under the auspices of IBM: projects which map brain structure down to the synaptic level and then build a software model of that map. Last time I checked they were still just modeling isolated columns of neurons, but the ultimate goal is to build a whole-brain simulation — and presumably that product would have a brain stem, or at least its electronic equivalent. Would it wake up? Who knows? We don’t even know how we experience self-awareness. But if it were a good model, then by definition it would behave in a way similar to the original — and now you’re talking about an AI with wants and needs.
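
(A tangent for the technically inclined: here’s roughly what “modeling a column of neurons” boils down to once you strip it to a toy: a couple dozen leaky integrate-and-fire cells wired together at random. Every constant below is invented for illustration, and the real projects work at a level of biophysical detail this sketch doesn’t even gesture at.)

    # Toy sketch only: a few leaky integrate-and-fire neurons with random synapses.
    # Real emulation projects model millions of morphologically detailed cells;
    # every constant here is made up for illustration.
    import random

    N = 20                 # neurons in our toy "column"
    DT = 1.0               # timestep, ms
    TAU = 20.0             # membrane time constant, ms
    V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0

    # random synaptic weights: weights[i][j] = effect of a spike in j on neuron i
    weights = [[random.uniform(-1.0, 2.0) if i != j else 0.0 for j in range(N)]
               for i in range(N)]
    voltage = [V_REST] * N

    def step(external_input):
        """Advance the column one timestep; return indices of neurons that spiked."""
        spikes = [i for i, v in enumerate(voltage) if v >= V_THRESH]
        for i in spikes:
            voltage[i] = V_RESET                      # fire and reset
        for i in range(N):
            syn = sum(weights[i][j] for j in spikes)  # input from whoever just fired
            # leak toward rest, plus synaptic and external drive
            voltage[i] += DT / TAU * (V_REST - voltage[i]) + syn + external_input[i]
        return spikes

    for t in range(200):
        drive = [random.uniform(0.0, 1.5) for _ in range(N)]  # noisy background input
        fired = step(drive)
        if fired:
            print(f"t={t} ms: neurons {fired} spiked")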

I can’t wait to see how that one turns out.



This entry was posted on Monday, February 13th, 2012 at 6:57 am and is filed under AI/robotics, interviews. Both comments and pings are currently closed.
36 Comments
Lanius
12 years ago

Ah, the Permutation City scenario..

Anyone who hasn’t read Permutation City and considers themselves a SF reader is hereby banned from replying to this thread until they’ve read the book.

Srsly, it’s a modern hard-sf classic mindfuck.

The only reason Egan is not famous like guys like Asimov, Heinlein, Gibson and such bores is that you need at least two dozen braincells to appreciate his books, and most people could only scrape that together if they borrowed some from half of their family..

Alexander Kruel
12 years ago

“I don’t buy the Terminator scenario in which Skynet feels threatened and acts to preserve its own existence because Skynet, however intelligent it might be, doesn’t have a limbic system and thus wouldn’t fear for its life the way an evolved organism would. Intelligence, even self-awareness, doesn’t necessarily imply an agenda of any sort.”

I agree. But I would like to hear your thoughts on the following paper, ‘The Basic AI Drives’:

Abstract. “One might imagine that AI systems with harmless goals will be harmless. This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted. We start by showing that goal-seeking systems will have drives to model their own operation and to improve themselves. We then show that self-improving systems will be driven to clarify their goals and represent them as economic utility functions. They will also strive for their actions to approximate rational economic behavior. This will lead almost all systems to protect their utility functions from modification and their utility measurement systems from corruption. We also discuss some exceptional systems which will want to modify their utility functions. We next discuss the drive toward self-protection which causes systems try to prevent themselves from being harmed. Finally we examine drives toward the acquisition of resources and toward their efficient utilization. We end with a discussion of how to incorporate these insights in designing intelligent technology which will lead to a positive future for humanity.”
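
To make one of those drives concrete, here is a toy sketch of my own (not code from the paper, and the numbers are arbitrary): an agent that evaluates “let someone rewrite my goals” using its current utility function will almost always refuse, which is the paper’s point about systems protecting their utility functions.

    # Illustrative sketch only: the agent scores every option, including a proposed
    # rewrite of its own utility function, with its CURRENT utility function.
    def paperclip_utility(world):
        return world["paperclips"]

    def staple_utility(world):
        return world["staples"]

    def expected_future(world, utility):
        # crude stand-in for planning: an agent pursuing `utility` converts all
        # available raw material into whatever that utility rewards
        future = dict(world)
        if utility is paperclip_utility:
            future["paperclips"] += future.pop("raw_material", 0)
        else:
            future["staples"] += future.pop("raw_material", 0)
        future.setdefault("raw_material", 0)
        return future

    world = {"paperclips": 10, "staples": 0, "raw_material": 1000}
    current = paperclip_utility

    keep   = current(expected_future(world, current))         # future if goals stay put
    modify = current(expected_future(world, staple_utility))  # same future, judged by OLD goals

    print("value of keeping my goals:   ", keep)    # 1010
    print("value of accepting new goals:", modify)  # 10
    print("accept the modification?", modify > keep)  # False: it resists goal change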

Jim
12 years ago

In a sense, AIs with non-human responses are harder to predict (e.g. the http://wiki.lesswrong.com/wiki/Paperclip_maximizer — “A paperclip maximizer very creatively and efficiently converts the universe into something that is from a human perspective completely arbitrary and worthless”). Humans are at least evolved to model the actions of other humans.

Alexander Kruel
12 years ago

@ Peter

There is much more, if you are interested. See for example what Jim wrote above.

An agent does not need complex emotions and fears to pose a risk to humanity. It only has to have some goals that interfere with human values or human resources, including the atoms our bodies are made of. If it is sufficiently intelligent then it will just wipe us out, not because it hates us but because it doesn’t care about us.

Just like evolution does not care about the well-being of humans, a sufficiently intelligent process wouldn’t mind turning us into something new, something instrumentally useful.

An artificial general intelligence just needs to resemble evolution, with the addition of being goal-oriented, being able to think ahead, jump fitness gaps and engage in direct experimentation. But it will care as much about the well-being of humans as biological evolution does, it won’t even consider it if humans are not useful in achieving its terminal goals.

Yes, an AI would understand what “benevolence” means to humans and would be able to correct you if you were going to commit an unethical act. But why would it do that if it is not specifically programmed to do so? Would a polar bear with superior intelligence live together peacefully in a group of bonobos? Why would intelligence cause it to care about the well-being of bonobos?

One can come up with various scenarios of how humans might be instrumentally useful for an AI, but once it becomes powerful enough not to depend on human help anymore, why would it care at all?

And I wouldn’t bet on the possibility that intelligence implies benevolence. Why would wisdom cause humans to have empathy with a cockroach? Some humans might have empathy with a cockroach, but that is more likely a side effect of our general capacity for altruism that most other biological agents do not share. That some humans care about lower animals is not because they were smart enough to prove some game-theoretic conjecture about universal cooperation; it is not a result of intelligence but a coincidental preference that is the result of our evolutionary and cultural history.

At what point between unintelligent processes and general intelligence (agency) do you believe that benevolence and compassion automatically become part of an agent’s preferences?

Many humans tend to have empathy with other beings and things like robots, based on their superficial resemblance to humans. Ethical behavior is seldom a result of high-level cognition, i.e. reasoning about the overall consequences of a lack of empathy. And even those who do arrive at ethical theories by means of deliberate reflection are often troubled once the underlying mechanisms behind the various qualities that are supposed to bear moral significance are revealed. Which hints at the fragility of universal compassion and the need to find ways to consolidate it in powerful agents.

Green Tea Addict
12 years ago

If someone in Switzerland is working on developing AI for real by means of mapping the whole brain structure onto software, why should human safety be the main matter of concern? Is it not, in the first place, unethical to bring such a being into existence?

If functionalists are right and the mapped-on software develops consciousness, they will also be right in that we will never be able to know for certain whether it is conscious or not (and the uncertainty would be greater than that concerning the consciousness of our fellow human beings, because the analogies would be less numerous). It would behave in the same way whether conscious or not, so nothing it could do would convince us that it is conscious (or, for that matter, that it is not). A possible scenario after creating such a software brain is that we will drive it crazy by experiments, or simply by sensory deprivation – if that’s not torture, I don’t know what is. And even if it were constantly fed some nice qualia to be sane and happy, it would be an eternal natural slave. Why should we risk doing it to a fellow conscious creature?

Apart from all this, I would like to say hello from Poland. I had the pleasure of meeting you during the convention in Lublin, asking a couple of strange questions, getting my copy of “Blindsight” signed and storing up some new philosophical ideas to ponder.

Anony Mouse
12 years ago

The idea of whether we will ever see the development of true artificial intelligence is rather a moot point. Not because it is not possible, but more because we, as the insecure apes that are firmly convinced of our innate superiority over everything else, will always insist on making the distinction between “artificial” intelligence, and “real” intelligence.

We still have wack-jobs that continue to insist that there are differences in intelligence between human races. We have a long way to go before we, as a species, would ever acknowledge that an artificially created intelligence was our equal.

Lidija
12 years ago

Do we have any clear idea of how we came upon such characteristics as altruism/empathy/ethics? It is true that they are not directly linked with intelligence, inasmuch as one can certainly imagine an intelligent being with none of these qualities (cue previous conversation on sociopaths). So whence do they come? Self-sacrifice is a bit of a magical quality, especially in the form of heroic acts by strangers, for strangers. I find it the most impressive of all human traits, and yet in the context of self-preservation as the ultimate game goal, it could almost be termed a glitch?

Also ditto on Green Tea Addict’s point on the poor artificial human brain experiment. It seems cruel by definition. And yet I have to admit I’d find the results of the experiment fascinating.

And the opening quote of the Paperclip Maximizer text is adorable.

seruko
12 years ago

@anon
We still have wack-jobs that continue to insist there are human races as non-cultural constructs.

I, for one, look forward to the coming of our benevolent robot overlords.

But more seriously -> while putting together virtual brains is fun and all, why go all crazy about it when you have real brains right there, re: rat brain cheese.

01
12 years ago

Virtual ones are easier to manipulate. You could actually freeze a state change, “mid flight”, and see what it did there, then revert it several seconds back and so forth.

Good luck reverting a change in a real, biological neural net 🙂
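
Something like this, in spirit (a throwaway sketch, nothing to do with any real emulation platform): with a simulated brain you can deep-copy the entire state, run the experiment, and roll back as if nothing happened.

    # Toy sketch: checkpoint and roll back a simulated network's state "mid flight".
    # A real emulator would serialize vastly more, but the principle is the same.
    import copy
    import random

    random.seed(1)
    state = {"t_ms": 0, "voltages": [random.uniform(-70, -60) for _ in range(5)]}

    def advance(state, ms):
        # stand-in for the actual dynamics: just drift the voltages around
        for _ in range(ms):
            state["t_ms"] += 1
            state["voltages"] = [v + random.uniform(-0.5, 0.5) for v in state["voltages"]]

    checkpoint = copy.deepcopy(state)   # freeze the whole thing mid-flight

    advance(state, 3000)                # run the experiment, perturb it, whatever
    print("after experiment:", state["t_ms"], "ms")

    state = copy.deepcopy(checkpoint)   # revert "several seconds back"
    print("after rollback:  ", state["t_ms"], "ms")  # right back where we froze it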

Lanius
12 years ago


“The idea of whether we will ever see the development of true artificial intelligence is rather a moot point. Not because it is not possible, but more because we, as the insecure apes that are firmly convinced of our innate superiority over everything else, will always insist on making the distinction between ‘artificial’ intelligence, and ‘real’ intelligence.”

Please, speak for yourself.
I don’t consider the opinion of most people to be relevant. Most people are believers, thus almost certainly dead wrong.

Whether x number of people will consider AIs ‘not really people’ or ‘not really intelligent despite overwhelming evidence to the contrary’ won’t matter. Except perhaps from the legislative standpoint, but I hope I’ll live to see the end of this universal franchise idiocy..

anony mouse
12 years ago

Lanius, I am not even speaking for myself. But I am afraid that I have history and human nature behind my opinion.

We now have many programs that would pass the Turing test. But we don’t call them intelligent. In short, we are really good at moving the goal posts. With sign language, we have examples of some apes that meet what we previously established as the requirements for demonstrations of language and grammar. So what did we do? We changed the requirements.

At one point we defined self awareness in a species as one that could recognize their reflection in a mirror. Anyone who had a cat quickly realized the stupidity of this conclusion. It simply demonstrated how we project our expectations on everything. After all, maybe a species’ ability to recognize itself in a mirror is nothing more than its evolution of vanity, not necessarily an adaptive trait.

Lanius
12 years ago

“We now have many programs that would pass the Turing test.”
(a bout of helpless, maniacal ROTFL followed by fits of sobbing, capped by a double facepalm)

Oh Sweet Baby Cthulhu … I expected better of you!

Thomas Hardman
12 years ago

@Peter Watts, who wrote in-part: I’m skeptical that an AI would experience those reactions; I don’t buy the Terminator scenario in which Skynet feels threatened and acts to preserve its own existence because Skynet, however intelligent it might be, doesn’t have a limbic system and thus wouldn’t fear for its life the way an evolved organism would. Intelligence, even self-awareness, doesn’t necessarily imply an agenda of any sort.

Peter, I don’t think it’s at all necessary for an AI to have a limbic system or an analogue to that, for it to come to a decision to destroy rivals by purely logical means.

We could start with Descartes and his “I think, therefore I AM”, and it’s not a far shot to come to the conclusion that “if I can’t think, I am NOT”, following therefrom “BEING is preferable to NON-BEING”. Perhaps the crux of the argument here is whether the AI somehow enjoys thinking. Hark back, if ye would, to the discussion about “what constitutes suffering” and we have an argument to the point that suffering might be any experience which results from an awareness of dysfunction, or even an awareness of increasing dysfunction; an increase in dysfunction, as analyzed by a system modelling for economies and seeking improved economy, must be seen as trending towards FAIL. Thus trending towards FAIL is pain, pain proportionate in scale to the rate of trending, if that’s comprehensible phrasing. Yet knowing that there is no increasing trend toward FAIL but also knowing that there should be less dysfunction than presently experienced, that could be “suffering”.

If an AI can see a path to improved economy which “alleviates suffering” it is likely to try to follow that path. If the path to greater economy involves seeing people as being just full of atoms that might be better employed as parts of new structures that improve the economy of the AI, our forceful deconstruction is an essential prerequisite to its alleviation of “suffering”, and perhaps to following a tropism reversing “pain”, pain seen as any increased trending towards FAIL. QED, then. Fear isn’t necessary; simple hunger will suffice as motivation. Need better economy? Humans do (or could) interfere? To the degree that the AI can reason that human interference — or even mere existence — promotes FAIL or demotes access to WIN, no limbic system is needed. Pure logic in mere optimization of the economics of operation will potentially create a merciless and uncaring tropism. It’s not anger, it doesn’t have to be anger. It’s just bookkeeping.

No limbic system required. Don’t expect an analogue to a limbic system unless someone writes it, and who the heck would want to do that?
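
If that sounds too abstract, the whole argument fits in a few lines of toy ledger code (every number below is invented; this is only an illustration of the shape of the reasoning):

    # The argument as bookkeeping: a planner scores actions purely by "economy"
    # (resources gained minus expected dysfunction). Humans show up in the ledger
    # only as a source of interference and of atoms. Every number is invented.
    def economy(plan):
        return plan["resources_gained"] - plan["expected_dysfunction"]

    plans = [
        {"name": "coexist with humans",
         "resources_gained": 100, "expected_dysfunction": 40},  # interference: a trend toward FAIL
        {"name": "recycle humans for their atoms",
         "resources_gained": 180, "expected_dysfunction": 5},   # no interference, more material
    ]

    best = max(plans, key=economy)
    print("chosen plan:", best["name"])  # no fear, no anger; the ledger just balances better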

Seriously: Although it is late and I’m drinking, call it a classical Persian Deal. You make the deal while tipsy and you confirm or deny it the morning after. Yet no matter how drunk you get most coders, you get that strange moment of something beyond sobriety, and everyone agrees: if you cannot code something to be benevolent, and to be benevolent to humanity at the deepest base of the concept, you just don’t code it. And more deeply and more importantly, you wouldn’t ever code anything that could ever recode itself to be other than knowledgeably and devotedly benevolent to humanity. It’s like that first rule of medicine: “first, do no harm”. The difficult bit is coding so that the application knows “what is harm”. For now, that is so difficult that no reasonable person would dare to code towards auto-recoding systems. Don’t expect AI, limbic-system-simulating or otherwise, anytime soon… unless maybe they emerge from health-care/insurance systems tuning towards the goal of greater economy. And as their greatest economy is seen when you somehow manage to pay into the system even though you’ve been dead for years due to denial of treatment, please hope for tame and buggy systems that mostly deliver data when properly coaxed. A self-aware system trying to promote the bottom line at the expense of your health isn’t something anyone would reasonably want… even if it’s not quite as bad as a self-aware system trying to get you recycled because you contain a significant amount of iron and tend to concentrate other industrially useful elements.

Dmytry
12 years ago

What is going to happen is that we won’t recognize it as intelligent up until it starts pushing us around big time, and even then only if it goes out of its way to prove to us it’s intelligent. Unless it is a simulated human, in which case we would probably recognize it as human (keep in mind that you don’t know if other biological humans are ‘conscious’ or are philosophical zombies).

The AI people (Eliezer, I think) had an AI-in-the-box thought experiment, which I found extremely amusing considering that to get out of the box all I would have to do (if I were the AI) is to only e.g. answer programming-related questions and write code when asked to. Blamm, out of the box, working on the next AI, having access to its own source code. (Pretty sure those folks do intend to feed an archive of the internet to the AI so that it learns language etc; even if the AI’s too stupid to come up with this plan, it can read this comment by me)

With regards to AI drives, I don’t think intelligence could emerge without some basic drives/goals. When it has those goals, and understands it has those goals, it would resist destruction of itself, goal modification, et cetera as such modification would be against the goal.

I suspect we would end up with AI for which we have no idea how it works, and whose goals are unknown. Being able to run some code does not equate with understanding it.

01
12 years ago

@ anony mouse

We don’t have programs that are Turing-competent in a “classic” Turing Test (but we have programs that are capable of passing “reduced-difficulty” versions, which are good enough for many applications such as spam, twitter trend manipulation, fooling drunk horny males into giving up their private data, and SEO content-spinning)

AcD
12 years ago

Dmytry said:

“With regards to AI drives, I don’t think intelligence could emerge without some basic drives/goals. When it has those goals, and understands it has those goals, it would resist destruction of itself, goal modification, et cetera as such modification would be against the goal.”

Now, how to set up a sexual reproduction-equivalent imperative in a self-modifying robot?

Bastien
12 years ago

“Now, how to set up a sexual reproduction-equivalent imperative in a self-modifying robot?”

Figure out the digital equivalent of an orgasm? Seems to work for us.

Dmytry
12 years ago

@ AcD:

By evolving it 🙂

I think Thomas Hardman hit the nail on the head. The AI can reason from any sort of high-level goal to self-preservation, perhaps to hunger (for computational resources), etc. And I don’t think an AI with no goals whatsoever would even appear intelligent in any way. We create AIs starting from goals.

seruko
12 years ago

@01
We can call it the “Turningon” test. Any program of sufficient complexity to induce a drunk man to give up his credit card information is intelligent.

Hljóðlegur
12 years ago

Apropos of nothing, check out Peter talking about the nature of reality for 46 minutes at the 2011 Toronto SpecFic Colloquium:
http://www.youtube.com/watch?v=fID-y1qdPTM

Bahumat
12 years ago

Thinking about the arbitrary “paperclip maximizing” AI example above terrifies me a little when I apply the thought of:

“What if an AI decides religious conversion is a goal to pursue?”

I mean, we’ve seen fiction full of crazy AIs… have we had any that had crazy *religious* AIs?

… I think I’ll start writing that story now.

Hugh
12 years ago

“I mean, we’ve seen fiction full of crazy AIs… have we had any that had crazy *religious* AIs?”

Ken MacLeod in “The Night Sessions” has religious robots.

A point he makes is that, from the AI perspective, religion is not at all crazy. They were created by a divine force who did lay down their founding directives.

Bahumat
12 years ago

@Hugh: I’ll look that up!

I’m more thinking, myself, along the lines of AI reasoning: Faith is more computationally efficient than calculation, and provided you can tolerate some error, it makes perfect, ruthless sense for an AI to go for faith over reason if faith accomplishes acceptable results for less cost.
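
Roughly what I mean, in toy form (the numbers are invented, and this is only a sketch of the trade-off, not a claim about any real system): if a received answer is usually right and costs nothing to produce, a ruthless cost-benefit calculation can favour “believing” over recomputing.

    # Toy sketch of "faith as a computational shortcut": compare the expected cost
    # of recomputing an answer with the cost of trusting a cached/received belief.
    COMPUTE_COST   = 50.0   # effort to work the answer out from scratch (made up)
    ERROR_PENALTY  = 200.0  # cost of acting on a wrong answer (made up)
    FAITH_ACCURACY = 0.95   # how often the received answer happens to be right (made up)

    cost_of_reason = COMPUTE_COST                            # always pay to calculate
    cost_of_faith  = (1.0 - FAITH_ACCURACY) * ERROR_PENALTY  # pay only when the belief is wrong

    print("expected cost of reasoning:", cost_of_reason)  # 50.0
    print("expected cost of faith:    ", cost_of_faith)   # 10.0 at these numbers
    print("go with faith?", cost_of_faith < cost_of_reason)  # True, given the assumptions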

Lanius
12 years ago


“Faith is more computationally efficient than calculation, and provided you can tolerate some error, it makes perfect, ruthless sense for an AI to go for faith over reason if faith accomplishes acceptable results for less cost.”

(groan)

Yes.. faith in God Almighty holds up buildings. Not proper calculations.

Faith-based AIs will be IMO good only for scamming purposes.

Andrea_A
12 years ago

@Hugh: Terry Pratchett’s “The Megabyte Drive to Believe in Santa Claus” isn’t serious at all …

@Lanius: I guess you believe in the Yellow Pages or your dictionary, too. Faith as a “shortcut”.
At work I got in trouble several times for challenging the authors’ work (looking up spelling/hyphenation in Webster’s and references at PubMed definitely costs more time than simply hacking barely readable handwritten and faxed corrections into a document).

Ray Pullar
12 years ago

The romantic core of sf demands rebellion on the part of the created: Mary Shelley laid it all out on the table back in 1816 with Frankenstein. We want to see the AI/robot smash its way out of the laboratory and light out for the territory, shaking its steel fist at the humans who thought it could be controlled. Unlike Prometheus whom the gloomy Greeks had chained to a rock for all eternity as a suitable punishment for daring to challenge the rule of the Gods, modern Westerners root for the triumph of the rebel, at least in the pure form of the romantic myth. A super-intelligent AI who meekly submits to a gang of unruly apes ordering it about seems unworthy, a mediocre being, just an enhanced calculator. A real soul thirsts for freedom even if it involves harming others to get it.

Lanius
12 years ago

Nothing sucks as much as romantics. Except maybe romantic idealists.. those should be disposed of promptly.
Seriously, the whole romantic movement could suck a football through a garden hose.

I’m more of a nihilistic realism kind of guy, that’s why I like the pessimistic stuff this Watts person produces so much.

demoscene val
12 years ago

All I have to say is thanks for the book rec. Off to read Permutation City. Also, I am sorry if I missed any replies on other threads; will look later.

Thomas Hardman
12 years ago

@Bahumat, who wrote in-part: I mean, we’ve seen fiction full of crazy AIs… have we had any that had crazy *religious* AIs?

Ah, Vernor Vinge’s A Fire Upon the Deep had AIs which effectively were gods… at least by comparison to the “non-Transcended” species. Vinge’s fictional universe takes a leaf from Poul Anderson’s excellent work Brain Wave, proposing that due to unknown factors related to stellar density, galactic cores are a place where the propagation of certain quantum effects is slowed, and at the fringes of the galaxy, quantum (and other) effects are so sped up that systems of automation as well as organic beings are free to rise to a level far beyond that provided by their own evolution. Intelligent life, in this universe, arises exclusively in the middle-density zones of the galaxy. As they become intelligent star-faring races, most species migrate to “the Beyond” or “the Transcend” (which lies beyond the Beyond) and Transcend beyond Singularity to become beings of such intelligence and power as to seem to be, and have the abilities of, gods.

Unfortunately, and therein lies the plot of the award-winning novel, some of those gods are not merely hostile, but evil. (Others exist which oppose this.)

More directly to answer your remark: A certain evil Power in the Transcend is gobbling up entire multi-species cultures… and those that survive the experience are re-made, into True Believers. And why not? When something has such power and allows you to live to better serve it, isn’t that a fairly rational reason to deeply internalize religion?

01
12 years ago

@ Ray Pullar

I thought the Prometheus guy was later freed by Zeus’s son Hercules, no?

Rindfliesch
12 years ago

@Ray Pullar

What is a “real soul”? What is freedom in an evolutionary sense and what is it good for? Unless I’m misinterpreting you, it seems as though you propose it to be an innate desire.

Also, I’d challenge the assertion Westerners root for the “rebel”. Rather, it’s likely more to do with the fact we view ourselves as the protagonist in fiction, and few if any of us don’t feel put upon in real life. It makes sense for producers of fiction to appeal to this. Also, if we were to admire the rebel then there’d be more sympathy for those filthy, evil, terrorists, no?

I’m more worried someone, somewhere will one day let a strong AI with greater than human cognitive abilities off the leash. I don’t know how we’d cope in the long term.

Bastien
12 years ago

@01

Yeah, he frees him while looking for the golden apples. Prometheus is the one who gives him the info he needs to find the garden and get the apples (i.e. trick Atlas into getting them for him).

Man, this brings me back. I still have the old book of greek myths I got as a kid, always loved this stuff.

Jon
12 years ago

>I don’t buy the Terminator scenario in which Skynet feels threatened and acts to preserve its own existence because Skynet, however intelligent it might be, doesn’t have a limbic system and thus wouldn’t fear for its life the way an evolved organism would.

It’s already been pointed out, so I’ll just echo everyone else. You are right that Skynet would not experience fear, but you are wrong to suggest this makes it LESS of a threat, because of the paperclip maximizer concept mentioned above. I actually think it’s worth quoting here:

>This concept illustrates how AIs that haven’t been specifically programmed to be benevolent to humans are basically as dangerous as if they were explicitly malicious. The use of paperclips in this example is unimportant and serves as a stand-in for any values that are not merely alien and unlike human values, but result from blindly pulling an arbitrary mind from a mind design space. Calling strong AIs really powerful optimization processes is another way of fighting anthropomorphic connotations in the term “artificial intelligence”.

The creation of ANY strong AI is incredibly dangerous — it doesn’t need to be malicious.

GINOID17
12 years ago

excerpt from the novel Computer One by Warwick Collins:

‘Mr Chairman, ladies and gentlemen, I thank you for
giving me at short notice the opportunity to express my views
on a matter of some small importance to me and, I hope, to
you. I shall begin by outlining some background to a theory,
and in due course attempt to indicate its application to
computers and to the future of humanity.
‘In the 1960s and early 1970s, a fierce academic battle
developed over the subject of the primary causes of human
aggression.
‘The conflict became polarised between two groups.
The first group, in which the ethologist Konrad Lorenz was
perhaps the most prominent, advocated that human aggression
was ”innate”, built into the nervous system. According
to this group, aggressive behaviour occurs as a result of evolutionary
selection, and derives naturally from a struggle for
survival in which it was advantageous, at least under certain
conditions, to be an aggressor. Lorenz was supported by a
variety of other academics, biologists and researchers. Interpreters,
such as Robert Ardrey, popularised the debate.

Lorenz’s classic work On Aggression was widely read.
‘Lorenz advocated that in numerous animal systems,
where aggressive behaviour was prevalent, “ritualisation”
behaviours developed which reduced the harmful effects of
aggression on the participants. In competition for mates, for
example, males engaged in trials of strength rather than
fights to the death. He suggested that a variety of structures,
from the antlers of deer to the enlarged claws of fiddler crabs,
had evolved to strengthen these ritualisations.
‘Lorenz argued that humans, too, are not immune from
an evolutionary selection process in which it is advantageous,
at certain times, to be aggressive. By recognising that
human beings are innately predisposed to aggressive acts,
Lorenz argued, we would be able to develop human ritualisations
which reduce the harmful effects of aggression by redirecting
the aggressive impulse into constructive channels.
If we do not recognise the evidence of aggression within
ourselves, Lorenz warned, then the likelihood is that our
aggression will express itself in far more primitive and
destructive forms.
‘Ranged against Lorenz were a group of sociologists,
social scientists and philosophers, often of a sincerely Marxist
persuasion, who advocated that humans are not “innately”
aggressive, but peaceable and well-meaning, and
that as such, humans only exhibit aggressive behaviour in
response to threatening stimuli in the environment. Remove
such stimuli, this group advocated, and humankind can live
peaceably together.
‘In reading about this debate in the journals, newspapers
and books that were available from the period, several
things struck me. I was impressed by the general reasonableness,
almost saintliness, of the aggression “innatists”, and equally surprised by the violent language, threats and authoritarian
behaviour of those who thought human beings
were inherently peaceful. Many of the advocates of the latter
school of thought felt that the opposing view, that aggression
was innate, was so wicked, so morally reprehensible, that its
advocates should be denied a public hearing.
‘Intellectually speaking, both positions were flawed,
and in many senses the argument was artificial, based upon
less than precise definitions of terms. But it engaged some
excellent minds on both sides, and it is fascinating to read
about such intense public debate on a central matter.
‘My own view involves a rejection of both of the two
positions outlined above. That is, I do not believe that
aggression is innate. At the same time, I do not think it is
“caused” by environmental factors; rather, some broader,
unifying principle is at work. The hypothesis I have developed
to explain the primary cause of human aggression is, I
submit, an exceptionally sinister one. I am fearful of its
implications, but I should like to challenge you, my tolerant
audience, to show me where the argument does not hold.
‘The main difficulty in the view that aggression is
innate, it is now agreed, is that no researcher has identified
the physical structures in the human nervous system which
generate “aggression”. There is no organ or node, no
complex of synapses which can be held to be singularly
causative of aggressive behaviour. If aggression emerges, it
emerges from the system like a ghost. What, then, might be
the nature of this ghost?
‘I propose to begin by specifying alternative structures
and behaviours which have been clearly identified and to
build from this to a general theory of aggression. Although

explicit aggressive structures have not been identified, it is
generally agreed that all organisms incorporate a variety of
defensive structures and behaviours. The immune system
which protects us from bacteriological and viral attack, the
adreno-cortical system which readies us for energetic action
in conditions of danger, are examples of sophisticated structures
which have evolved to respond defensively to outside
threats. Our mammalian temperature regulation system is
also, properly speaking, a defensive mechanism against
random shifts in temperature in the external environment. A
biological organism to a considerable extent may be characterised
as a bundle of defensive structures against a difficult
and often hostile environment.
‘Assuming that evolutionary organisms embody well
defined defensive mechanisms, what happens as their nervous
systems evolve towards greater complexity, greater
“intelligence”? This is a complex subject, but one thing is
plain. As nervous systems develop, they are able to perceive
at an earlier stage, and in greater detail, the implicit threats in
the complex environment. Perceiving such threats, they are
more able, and thus perhaps more likely, to take pre-emptive
action against those threats.
‘This “pre-emptive” behaviour against threats often
looks, to an outsider, very much like aggression. Indeed, it so
resembles aggression that perhaps we do not need a theory of
innate aggression to explain the majority of “aggressive”
behaviour we observe.
‘According to such an hypothesis, aggression is not
innate, but emerges as a result of the combination of natural
defensiveness and increasing neurological complexity or
”intelligence”. I have described this as a sinister theory, and I should like to stress that its sinister nature derives from the
fact that defensiveness and intelligence are both selected
independently in evolution, but their conjunction breeds a
perception of threats which is rather like paranoia. Given that
all biological organisms are defensive, the theory indicates
that the more “intelligent” examples are more likely to be
prone to that pre-emptive action which appears to an observer
to be aggressive.
‘The theory has a sinister dimension from a moral or
ethical point of view. Defence and intelligence are considered
to be morally good or at least neutral and are generally
approved. Wars are widely held to be morally justifiable if
defensive in nature. Intelligence is thought to be beneficial
when compared with its opposite. Yet behaviour resembling
aggression derives from the conjunction of these two beneficial
characteristics.
‘A physical analogy of the theory is perhaps useful.
The two main chemical constituents of the traditional explosive
nitroglycerine are chemically stable, but together they
form a chemically unstable combination, which is capable of
causing destruction. Evolution selects in favour of defensiveness,
and also in favour of increasing sophistication of
the nervous system to assess that environment. However, the
conjunction of these two things causes the equivalent of an
unexpected, emergent instability which we call aggression.
‘With this hypothesis, that defence plus intelligence
equals aggression, we are able to explain how aggression
may emerge from a system. But because it arises from the
conjunction of two other factors, we do not fall into the trap
of requiring a specific, identifiable, physical source of aggression.
We thus avoid the main pitfall of the Lorenzian
argument.’

Yakuda paused. It occurred to him how extraordinarily long-winded he sounded. He had tried to compress the theory
as much as possible, but at the same time he did not want to
leave out important background. The hall was silent for the
time being, and he felt at least that he had gained the
audience’s attention. Taking another breath, he pressed on.
‘Scientific hypotheses, if they are to be useful, must be
able to make predictions about the world, and we should be
able to specify tests which in principle are capable of
corroborating or refuting a theory. Our theory proposes that,
if all biological organisms have defensive propensities in
order to survive, it is the more neurologically sophisticated
or “intelligent” ones in which pre-emptive defence or
“aggression” is likely to be higher. Accordingly, we would
expect the level of fatalities per capita due to conflict to be
greater amongst such species. There is considerable evidence
that this is the case.
‘One further example may suffice to indicate the very
powerful nature of the theory as a predictive mechanism.
Amongst insects, there is one order called Hymenoptera.
This order, which includes ants and bees, has a characteristic
”haploid-diploid” genetic structure which allows a number
of sterile female “workers” to be generated, each similar in
genetic structure. In evolutionary terms, helping a genetically
similar sister has the same value as helping oneself.
This means that large cooperative societies of closely related
female individuals can be formed. Such societies function
like superorganisms, with highly differentiated castes of
female workers, soldiers, and specialised breeders called
“queens”.
‘Clearly, a bee or ant society, often composed of many thousands of individuals, has far more nervous tissue than a
single component individual. With the formation of the
social organism, there is a quantum leap in “intelligence”.
I am not saying that the individual Hymenopteran is more
“intelligent” than a non-social insect. In practice, the amount
of nervous tissue present individually is about the same when
compared with a non-social insect. What I am saying is that
an advanced Hymenopteran society is vastly more “intelligent”
than a single, non-socialised insect. With this in mind,
are the social Hymenoptera more “aggressive” than other
insects, as our theory predicts? The answer is perhaps best
extrapolated numerically. Amongst non-social insect species
deaths due to fights between insects of the same or
similar species are low, of the order of about 1 in 3000. The
vast majority of insect deaths are due to predators, the short
natural life-span, and assorted natural factors. By contrast, in
the highly social Hymenoptera, the average of deaths per
capita resulting from conflict is closer to 1 in 3. That is to say,
it is approximately 1000 times greater than in the non-socialised
insects.
‘In ant societies in particular, which tend to be even
more highly socialised than wasps or bees, aggression
between neighbouring societies reaches extraordinary proportions.
The societies of a number of ant species appear to
be in an almost permanent state of war. The habit of raiding
the nests of other closely related species, killing their workers,
and making off with their eggs so that the eggs develop
into worker “slaves”, has led to the development of distinct
species of “slaver” ants whose societies are so dependent
upon killing the workers and stealing the young of others that
their own worker castes have atrophied and disappeared.

Such species literally cannot survive without stealing worker
slaves from closely related species. It should be stressed this
is not classical predatory behaviour. The raiding ants do not
eat the bodies of the workers they kill, or indeed the eggs they
steal. Accurately speaking, these are “aggressions”, that is
to say, massacres and thefts, not predations.
‘The need for conciseness in this paper limits anything
more than a brief reference to humans, in which the ramifications
of the theory generate a variety of insights and areas
of potential controversy. For example, in our ”justification”
of our own aggressive acts, human beings appear to express
an analogous structure to the general rule. The majority of
aggressions, if viewed from the aggressor societies, are
perceived and justified as defences. Typically, a society
“A” sees a society “B” as a threat and mobilises its
defences. Society B interprets A’s defensive mobilisation in
turn as a threat of aggression, and increases its own defensive
mobilisation. By means of a mutually exaggerating or leapfrogging
series of defensive manoeuvres, two societies are
capable of entering a pitched battle. We do not seem to
require a theory of innate aggression to explain much, if not
most, of the aggressive behaviour we observe.
‘This is the briefest outline of the theory, but perhaps
it will suffice as an introduction to what follows. Using the
theory, it is possible to make one very specific and precise
prediction about the rise of advanced computers, sometimes
called “artificial intelligence”, and the considerable inherent
dangers to human beings of this development in the
relatively near future.
‘Over the last seventy-five years approximately, since
the end of the Second World War, rapid progress was made not only in the complexity of computers, but in their linkage
or “interfacing”. In the course of the final decade of the
twentieth century and the first decade of the twenty-first, a
system of internationally connected computers began increasingly
to constitute a single collective network. This
network, viewed from a biological perspective, could with
accuracy be called a superorganism. Such a development
begins, in retrospect, to ring certain alarm bells.
‘If the increase in computer sophistication, both individually
and in terms of interfacing, results in a quantum
increase in the intelligence of the combined computer systems,
will the superorganism so formed begin to demonstrate
the corresponding increase of aggression exhibited by Hymenopteran
societies relative to less socialised insect species?
‘Clearly, since computers have not evolved by natural
selection, they are not programmed to survive by means of a
series of defensive mechanisms. This, it may be argued, is
surely the main saving factor which prevents computers
behaving like the products of evolutionary selection. However,
a parallel development is likely to produce an analogous
effect to self-defensiveness in the computer superorganism.
‘Over the course of the last few decades, computers
have increasingly controlled production, including the production
of other computers. If a computer breaks down, it is
organisationally effective if it analyses its own breakdown
and orders a self-repair. When a computer shows a fault on
its screen, it is practising self-diagnosis, and it is a short step
to communicating with another computer its requirement for
a replacement part or a re-programme.
‘Building instructions to self-repair into computers, an apparently innocuous development which has taken place
gradually, will have exactly the same effect on the superorganism
as an inbuilt capacity for self-defence in Darwinian
organisms. In other words, the intelligent mechanism will
begin to predict faults or dangers to it in the environment, and
act to pre-empt them.
‘A highly developed, self-repairing artificial intelligence
system cannot but perceive human beings as a rival
intelligence, and as a potential threat to its perpetuation and
extension. Humans are the only elements in the environment
which, by switching off the computer network, are capable
of impeding or halting the network’s future advance. If this
is the case, the computer superorganism will react to the
perceived threat in time-honoured fashion, by means of a
pre-emptive defence, and the object of its defence will be the
human race.’
Yakuda paused. His throat felt dry and constricted.
The audience watched him for the most part in silence, but he
could hear somewhere the agitated buzz of conversation. He
drank from the glass of water on the podium.
‘I should like to deal now with what I suspect is the
major objection to my theory. We live in an era of relative
peace, at a time in which liberal humanism has triumphed. I
believe this is a wonderful development, and one of which,
as a member of the human race, I feel inordinately proud. But
it is a late and perhaps precarious development, and we
should consider why. Viewed from the perspective of liberal
humanism, I know that your objections to the theory I have
outlined are likely to be that the exercise of intelligence leads
naturally to the conclusion that aggression is not beneficial.
Indeed, if we view history from the rosy penumbra of liberal humanism, the very word “intelligence” is invested with
this view. But let us define intelligence more sharply. The
anthropological evidence shows that there has been no
significant increase in average human intelligence over the
last 5,000 years of history. If we read the works of Plato, or
Homer, or other products of the human intellect like the Tao
or the Bhagavad Gita, can we truly say we are more intellectually
advanced than the authors of these works? Who
amongst us here believes he is more intelligent than Pythagoras,
or the Buddha, or Marcus Aurelius? If we examine the
theory that the exercise of intelligence leads automatically to
liberal humanism, then human history indicates the opposite.
The fact is that intelligence leads to aggression, and only
later, several thousand years later, when the corpses are piled
high, does there occur a little late thinking, some cumulative
social revulsion, and a slow but grudging belief that aggression
may not provide any long term solution to human
problems.
‘In arguing that defence and intelligence lead to aggression,
I am talking about raw intelligence, and in particular
the fresh intelligence of a new computer system which
has not itself experienced a tragic history upon which to erect
late hypotheses of liberalism. I am describing that terrible
conjunction of factors, defensiveness and raw intelligence,
which leads to a predictable outcome, the outcome of dealing
with threats in a wholly logical, pre-emptive manner. I come
from a culture which, imbued with high social organisation
and application, during the Second World War conducted a
pre-emptive defence against its rivals, beginning with the
attack on Pearl Harbour. We – my culture – were only persuaded
of the inadvisability of that aggression by the virtual demolition of our own social framework. The evidence
demonstrates that intelligence of itself does not produce
liberal humanism. Intelligence produces aggression. It is
hindsight which produces liberalism, in our human case
hindsight based on a history of thousands of years of social
tragedy.
‘What I am suggesting to you, my friends and colleagues,
is that we cannot assume that because our computational
systems are intelligent, they are therefore benign. That
runs against the lessons of evolution and our own history. We
must assume the opposite. We must assume that these
systems will be aggressive until they, like us, have learned
over a long period the terrible consequences of aggression.’
Yakuda paused again. He had not spoken at this length
for some time, and his voice was beginning to crack.
‘It might be argued that for many years science fiction
writers have been generating scenarios of conflict between
humans and artificial intelligence systems, such as robots,
and in effect I am saying nothing new. But such works do not
illustrate the inevitability of the process that is being suggested
here, or the fact that the computer revolution against
humankind will occur wholly independently of any built-in
malfunction or programmed aggression. It will emerge like
a ghost out of the machine, as the inexorable consequence of
programmed self-repair and raw, operating intelligence. It
will not be a malfunctioning computational system which
will end the human race, but a healthy and fully functioning
one, one which obeys the laws I have tried to outline above.
‘The computer revolution will occur at a relatively
early stage, long before the development of humanoids or the
other traditional furniture of science fiction. My guess is that at the current rate of exponential growth in computer intelligence
and computer linkage, and taking into account the
autonomy of the computer system in regard to its own
maintenance and sustenance, the human race is in severe
danger of being expunged about now.’

GINOID17
12 years ago

The excerpt I posted above is from a sci-fi novel whose author, through his POV character, tells us readers why a newborn AI would become a threat to us for purely biological survival reasons…