Mind Reading Technology…

…has been a staple of every low-budget piece of celluloid skiffy going back at least to that early-sixties Gerry Anderson puppet show Stingray (which no one with any dignity will admit to having watched, although I clearly remember the episode with the mind-reading chair). The Prisoner also featured an episode in which No. 6’s dreams could be probed, and the various incarnations of Star Trek must have had a half-dozen such episodes among them, although they all seem to run together after a while (the episode I’m thinking of had aliens with bumpy foreheads; does that help at all?).

Now here comes Kendrick Kay and his buddies in Nature with “Identifying natural images from human brain activity”, and if they haven’t actually vindicated all those cheesy narrative gimmicks, they’ve made a damn good first pass at it. They used fMRI scans to infer which one of 120 possible novel images a subject was looking at. “Novel” is important: the system trained up front on a set of nearly 2,000 images to localize the receptive fields, but none of those were used in the actual mind-reading test. So we’re not talking about simply recognizing a replay of a previously-recorded pattern here. Also, the images were natural: landscapes and still-lifes and snuff porn, none of this simplified star/circle/wavy-lines bullshit.
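The identification trick described above can be sketched in a few lines: fit a per-voxel encoding model on the training images, then, for a novel scan, predict the voxel pattern each candidate image *would* evoke and pick the best match. Everything below is a toy stand-in — random linear “receptive fields”, Gaussian noise, correlation matching, and made-up dimensions — for the Gabor-wavelet models the actual paper fits:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_features, n_candidates = 500, 50, 120

# Toy "encoding model": each voxel responds linearly to image features.
# (The real study fits a receptive-field model per voxel on ~1,750
# training images; here the weights are just random.)
W = rng.normal(size=(n_voxels, n_features))

# 120 novel candidate images, expressed as feature vectors.
images = rng.normal(size=(n_candidates, n_features))

# The subject views image #42; the measured fMRI pattern is the
# model's prediction plus scanner noise.
true_idx = 42
measured = W @ images[true_idx] + rng.normal(scale=2.0, size=n_voxels)

# Identification: predict the voxel pattern for every candidate and
# pick the image whose prediction best correlates with the measurement.
predicted = images @ W.T                      # shape (120, n_voxels)
corrs = [np.corrcoef(p, measured)[0, 1] for p in predicted]
guess = int(np.argmax(corrs))
print(guess)  # recovers 42
```

Note that nothing here reconstructs the image; the machine only ranks a known candidate set, which is exactly the limitation the post goes on to discuss.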

The system looked into the minds of its subjects and figured out what they were looking at, with accuracies ranging from 32% to 92%. While the lower end of that range may not look especially impressive, remember that random chance — one pick out of 120 — would yield an accuracy of just 0.8%. These guys are on to something.
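A quick sanity check on that baseline, using only the numbers from the post:

```python
# Chance accuracy when guessing one image out of 120 candidates at random.
chance = 1 / 120
print(f"chance accuracy: {chance:.1%}")        # 0.8%

# Even the worst subject's 32% is roughly 38 times better than chance.
print(f"lift at the low end: {0.32 / chance:.0f}x")
```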

Of course, they’re not there yet. The machine only had 120 pictures to choose from; tagging a card from a known deck is a lot easier than reconstructing an image from scratch. But Kay et al. are already at work on that; they conclude that “it may soon be possible to reconstruct a picture of a person’s visual experience from measurements of brain activity alone.” And in a recent interview, Kay went further, suggesting that a few decades down the road we’ll have machines that can read dreams.

He was good enough to mention that we might want to look into certain privacy issues before that happens…



This entry was posted on Sunday, March 9th, 2008 at 6:45 pm and is filed under biology, neuro, relevant tech. You can follow any responses to this entry through the RSS 2.0 feed. Both comments and pings are currently closed.
7 Comments
Matt McCormick
16 years ago

What I can’t get unstuck from my craw in these sorts of “mind-reading” studies is that fMRI scans are only picking up really macro-events like increased blood flow, right? But consciousness, or thinking, or whatever it is that we’re trying to read is a neural event that’s molecular. So isn’t using an fMRI to try to read minds kind of like throwing trucks at the Mona Lisa and trying to figure out what it looks like and the colors by checking how they bounce off?

MM

Anonymous
16 years ago

…”machines that can read dreams”… Hmmm… Didn’t I see one in Wenders’ Until the End of the World (1991)?

Nothing is new under the sun.

Mike
16 years ago

Movies like Brainstorm and Strange Days also covered that more explicitly.

That said, those are easy SFnal predictions to make, and have undoubtedly been made many times earlier.

Keith David It's-a-Taylor-Series! Smeltz
16 years ago

to Matt McCormick:

I’m not sure that consciousness is a molecular neural event. I think it probably is, but maybe it’s a more holistic thing.

Even if consciousness is defined by invisible microstates, maybe it can still be correlated with macro-events, making conscious thought readable to the machine in the same way that polygraphs detect the physical symptoms of stress.

While we all know polygraphs can’t see lies, their hit rate is better than random. An occasionally useful tool.

In that event, an fMRI could use the brain itself as the sensor/amplifier of its own microstates.

John Henning
16 years ago

Interesting.

Personally, it seems less advanced than this “artificial hippocampus” chip test 3-4 years ago.

http://www.newscientist.com/article.ns?id=dn6574

From New Scientist:
The microchip, designed to model a part of the brain called the hippocampus, has been used successfully to replace a neural circuit in slices of rat brain tissue kept alive in a dish. The prosthesis will soon be ready for testing in animals.

In previous work, Berger’s team had recorded exactly what biological signals were being produced in the central part of the hippocampal circuit and had made a mathematical model to mimic its activity. They then programmed the model onto a microchip, roughly 2 millimetres square (New Scientist, 12 March 2003).

Now the team has tested whether its chip can work like the real thing. They cut out the central part of the circuit in real rat brain slices and used a grid of miniature electrodes to feed signals in and out of their microchip. “We asked if output from an intact slice was the same as from a slice with the substituted chip,” says Berger. “The answer was yes. It works really well.”

They should be ready to start testing on living mice about now.

Jacqie
16 years ago

Speaking of mind-reading technology: just read your short story in the new Solaris book. Nicely done; I didn’t know whether to empathize with your protagonist or not! Should make some people think.

Peter Watts
16 years ago

Matt McCormick said…

What I can’t get unstuck from my craw in these sorts of “mind-reading” studies is that fMRI scans are only picking up really macro-events like increased blood flow, right? But consciousness, or thinking, or whatever it is that we’re trying to read is a neural event that’s molecular. So isn’t using an fMRI to try to read minds kind of like throwing trucks at the Mona Lisa and trying to figure out what it looks like and the colors by checking how they bounce off?

We’re looking at a resolution of a couple of millimeters here, so yeah: you’re a long way from targeting individual neurons. On the other hand, the visual cortex is a pretty big stretch of real estate, and image-processing gets spread widely across it, so I’m guessing the image you derive, while crude, is fine-grained enough to select from the available options. In fact, I don’t have to guess, because the study wouldn’t work otherwise.

Mike said…

Movies like Brainstorm and Strange Days also covered that more explicitly.

I liked Strange Days quite a bit, right up to the ending where the snowballing millennial riot was defused by one guy jumping up and shouting why-can’t-we-all-just-get-along. It really lost me there. Seems to me, once a riot has that kind of momentum, Times Square would’ve ended up burned to the fucking ground.

John Henning said…

Personally, it seems less advanced than this “artificial hippocampus” chip test 3-4 years ago.

Oooh, they’ve got way further than that:

http://www.seedmagazine.com/news/2008/03/out_of_the_blue.php?page=all&p=y