… as chance would have it, here’s an excerpt from a list of interview questions I’m currently working through from ActuSF in France:
- You raise the question of artificial intelligence, with the “intelligent frosts”, and cite in the appendix the works of Masuo Aizawa and Charles Thorpe on neural networks. Do you believe that in the future it will become possible for the human race to create real “artificial brains”? Don’t you think that its complexity, and our limited understanding of it, will always constrain the resemblance between AI and human intelligence?
I think it depends on how the AI is derived. So much of what we are — every fear, desire, emotional response — has its origin in brain structures that evolved over millions of years. Absent those structures, I’m skeptical that an AI would experience those reactions; I don’t buy the Terminator scenario in which Skynet feels threatened and acts to preserve its own existence because Skynet, however intelligent it might be, doesn’t have a limbic system and thus wouldn’t fear for its life the way an evolved organism would. Intelligence, even self-awareness, doesn’t necessarily imply an agenda of any sort.
The exception to this would be the brute-force brain-emulation experiments currently underway in Sweden and (if I recall correctly) under the auspices of IBM: projects which map brain structure down to the synaptic level and then build a software model of that map. Last time I checked they were still just modeling isolated columns of neurons, but the ultimate goal is to build a whole-brain simulation— and presumably that product would have a brain stem, or at least its electronic equivalent. Would it wake up? Who knows? We don’t even know how we experience self-awareness. But if it was a good model, then by definition it would behave in a way similar to the original— and now you’re talking about an AI with wants and needs.
I can’t wait to see how that one turns out.