Blind Spots

So DARPA’s feeling a little overwhelmed by the blizzard of data at its disposal. All that telemetry. All those intercepted signals. All those eyes in the sky and ears to the ground, sucking up the terabytes so fast they can barely slap new storage into place in time to catch it all. And that’s just collecting the stuff. What about actually analyzing it?

Well, DARPA’s a government bureaucracy, so obviously the first step is to create a whole new department: the Mathematics of Sensing, Exploitation, and Execution Program. Then you call for bids on a specific deliverable, to wit: a unified mathematical language for everything the military sees or hears, to get the “economy and efficiency that derives from an intrinsic, objective-driven unification of sensing and exploitation”.

Wired puts it more eloquently:

“existence is, to a sensor, what William James called a ‘blooming, buzzing confusion’: an unmediated series of events to be vacuumed up, leaving an analyst overloaded with unsorted data. Wouldn’t it be better if a sensor could be taught how to filter the world through a perceptual prism, anticipating what the analyst needs to know?”

Um, no. Or at least, maybe not. At least I think I disagree.

The thing is, we’re pretty much dealing with a description of how the human brain works: a filter, which by definition excludes most of your data. A prism, which refracts and distorts reality for the sake of more elegant and conspicuous highlights. And the problem with that is, well, the same problem we have with our brains. Unseen blind spots in the middle of the visual field. Phantom limbs. The unconscious rejection of anything the brain’s front lines regard as anomalous or improbable; if you’ve hung around here long enough, you already know how easily the conscious mind simply ignores everything from disappearing buildings to people in gorilla suits (if you haven’t hung around long enough, check out these demos from the University of Illinois’s Visual Cognition Lab). Hell, if the wrong part of your brain bleeds out you’ll even deny the existence of half your own body.

Brains are optimized (insofar as natural selection “optimizes” anything) for short-term survival, not objective truth. If believing absurd falsehoods increases the odds of getting laid or avoiding predators, your brain will believe those falsehoods with all its metaphorical little heart. And there’s a difference between parsing the Pleistocene savannah with an eye to remaining uneaten, and processing a million tactical inputs from landlines and geostationary satellites and everything in between with an eye to maintaining political stability in the Middle East. It may even be a significant difference, requiring fundamentally different modes of pattern-matching.

Because that’s what DARPA’s really talking about here: not analysis per se (which will still be done by the generals), but the preliminary massaging, the distillation of an Executive Summary to highlight the salient points. Our brains already filter out ninety percent of sensory input in the name of high-grading the Important stuff; now we’re going to stick software in front of the analysts to filter out ninety percent of the raw input before a human ever sees it. Maybe it’s an inevitable consequence of information overload; the data pipe is so thick that by now it’s physically impossible to analyze even a fraction of it before changing events render it all irrelevant.
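If that sounds abstract, here’s the shape of the thing as a toy Python sketch. Every record type, weight, and threshold in it is invented; the point is only where the blind spot lives:

```python
# Toy sketch of a pre-analysis filter: rank incoming records by a
# "salience" score and keep only the top fraction. Record type,
# weights, and cutoff are all invented for illustration.
from dataclasses import dataclass

@dataclass
class Record:
    source: str      # e.g. "satellite", "landline"
    novelty: float   # 0..1: how unlike recent traffic this is
    payload: str

def salience(r: Record) -> float:
    # The filter's prior: weight whatever it already expects to matter.
    weights = {"satellite": 0.9, "landline": 0.4}
    return weights.get(r.source, 0.1) * r.novelty

def prefilter(records: list[Record], keep: float = 0.1) -> list[Record]:
    """Pass the top `keep` fraction; the other ninety percent vanishes unseen."""
    ranked = sorted(records, key=salience, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep))]

feed = [
    Record("satellite", 0.95, "armor moving near the border"),
    Record("landline", 0.99, "man in a gorilla suit buying fertilizer"),
]
for r in prefilter(feed):
    print(r.payload)   # only the satellite record survives the cut
```

Note that the landline record is the more novel of the two, and it dies anyway, because the prior says landlines don’t matter. The filter doesn’t just reduce the data; it decides, before any human gets involved, what counts as data.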

But sometimes buildings do disappear, and the world can change as a result. And I wonder if adding even more reflexive filters blinding us to such events is the best way to go.

If I ever commit a terrorist attack, maybe I’ll dress up in a gorilla suit first.



This entry was posted on Wednesday, February 2nd, 2011 at 9:55 am and is filed under relevant tech, sentience/cognition.
28 Comments
Blue
Guest
13 years ago

FYI, The Things won a Black Quill Award as Editor’s Choice in the Dark Scribble category (Single work, non-anthology short fiction appearing in a print or virtual magazine; awarded to the author).

http://www.darkscribemagazine.com/winners-of-the-4th-annual-blac/

Congratulations!

seruko
Guest
13 years ago

The post-information age has come;
look on its works ye might, tremble and despair.

Chinedum Richard Ofoegbu
Guest

I realized about two years ago that Google’s search results were getting worse. The reason is obvious: the web has gotten too big. I liken it to the way the volume of a sphere increases faster than the surface area. The more data there is to analyze, the harder it is to analyze it.

dalvian
Guest
13 years ago

The last line made me chuckle and reminded me of the bingo scene from the movie Rampage.

Marek Krysiak
Guest
13 years ago

I’m very sorry, Peter, but with this very statement:

“If I ever commit a terrorist attack, maybe I’ll dress up in a gorilla suit first.”

you’ve just made it onto the US Most Dangerous list. First the “Border Incident”. Now this. They’ll get you. They’ll hunt you down.

I propose you wear the gorilla suit right now.

Andrea_A
Guest
13 years ago

Another bid from DARPA:
https://www.fbo.gov/utils/view?id=b07b63280986a13fd60fb88c8d8debea
In this one, scanning the communications of staff members is planned. The main problem seems to be false positives.
Congrats, too!

Hugh
Guest
13 years ago

My understanding is that the big, big problem for the military is that a normal human being can only watch the feed from a security camera / drone / satellite for twenty minutes tops before zoning out from boredom.

I don’t envy the researchers who take on this job. (Wait, what am I saying? They’re going to get a billion dollars! Of course I envy them!) One of the classic failures in neural networks is the tank recognition program from the 1980s or so. Show a neural net a bunch of photos, some with tanks hidden in the undergrowth, some without, and grade the results. Pretty soon it was almost infallible. Then came the big demo day with new photos, and it failed most spectacularly. As far as we know – even then it wasn’t always clear what the little buffers were actually thinking – the original photos had been taken on two different days, and the neural net had learned the difference between cloudy and sunlit, not tanks and no tanks.
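The story may be apocryphal, but the failure mode is trivially easy to reproduce. Here’s a toy version, every number invented, in which the whole “neural net” is a single brightness threshold:

```python
# Toy reconstruction of the tank/no-tank story: the "classifier" is a
# brightness threshold, and every number here is made up.
import numpy as np

rng = np.random.default_rng(0)

def photo(tank: bool, sunny: bool, n: int = 64) -> np.ndarray:
    """Fake photo: brightness comes from the weather, barely from the tank."""
    img = rng.normal(0.7 if sunny else 0.3, 0.05, (n, n))
    if tank:
        img[20:30, 20:30] += 0.02   # the tank hardly moves the pixels
    return img

# Training set: every tank photo happens to be from a cloudy day.
train = [(photo(tank=True, sunny=False), 1) for _ in range(100)] \
      + [(photo(tank=False, sunny=True), 0) for _ in range(100)]

# "Training": pick the brightness threshold that splits the classes.
threshold = float(np.mean([img.mean() for img, _ in train]))

def predict(img: np.ndarray) -> int:
    return 1 if img.mean() < threshold else 0   # dark photo => "tank"

print("train:", np.mean([predict(img) == y for img, y in train]))  # ~1.0

# Demo day: new photos, weather no longer correlated with tanks.
test = [(photo(tank=True, sunny=True), 1) for _ in range(100)] \
     + [(photo(tank=False, sunny=False), 0) for _ in range(100)]
print("demo: ", np.mean([predict(img) == y for img, y in test]))   # ~0.0
```

Perfect on the training photos, worse than chance on demo day: it learned the weather.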

A gruesome thought. Maybe some kinds of autistics might turn out to be very good at concentrating on data feeds. And if, as in Starfish, you can’t find enough naturally damaged people …

Mirik Smit
Guest
13 years ago

The obvious solution is to disband all espionage operations on its own citizens and allies… (you would think).

Anony Mouse
Guest
13 years ago

I believe that Dawkins wrote about the gorilla suit test. I have seen the video before and I find it really hard to believe that people did not see the gorilla. But the other tests were very interesting.

But we see a similar phenomenon in our everyday lives. I make the same drive every morning to work (I know, sucks to be me). On some occasions, I get to work and can’t remember any details about my commute. Was I sleeping during my drive, or has my brain decided that retaining that memory is a waste of chemicals?

handslive
Guest
13 years ago

I read Metamagical Themas in 1990, where Douglas Hofstadter spends at least some time discussing AI. One of the things I thought then was that we were probably fooling ourselves if we thought we could build anything resembling intelligence that didn’t have some perceptual and cognitive issues related to pattern matching and filtering. It might not be a given that all thinking entities have to filter and have to pattern match loosely to make connections, but it’s how *we* work after all. How do you step away from the conditions that form you?

And then the other thing I can’t help thinking about is Stand on Zanzibar and one of the cleverest hacks of an AI in fiction.

So, what do we end up with here? An AI like the one in Maelstrom that filters and pattern matches the way it’s been taught, but comes to unexpected and harmful conclusions? Or an AI that throws out reality because it just doesn’t make sense?

Hljóðlegur
Guest
13 years ago

Hugh brings up a really important point, imho – automated systems are not magically better at some particular task; they are painstakingly engineered to be better, hence the tank/no-tank snafu.

People are fooled into thinking that when an engineered system does some task really well, it can run unattended. I think it has to do with assumed agency or animism. (Peter recently mentioned the human tendency to assume agency in the inanimate environment as an evolutionary legacy – assuming the rustling in the grass is a lion is a good default.)

If you unconsciously think of a data filter or aggregator as another mind – and you will – then once you “trust” the system, you unconsciously assume it is looking out for your agenda: finding the hidden tank, for instance. It’s all projection on our part, however. It’s a cognitive shortcut that works pretty well lots of the time.

Let me hike up my pants, get out my old man cane, wave it in the air, and say this: a big DARPA data transmogrifier will find some patterns that human eyes could not, and miss some it could, and should never ever be made completely autonomous. Cain’t trust them machines. Not because they have a Skynet agenda of their own, but because no autonomous system really has an “agenda”, so it needs human repair, attention and monitoring. (Notice how in science fiction there is always some self-repairing technology that never fails? Nanobots that flawlessly repair something, over and over, for instance. That isn’t science fiction, that is pure sweet fantasy in a real universe where entropy rules and everything cumulatively drifts.)

In re the gorilla suit. As I have said before, Mr. Watts is his own country, with its own laws and customs. Ergo, no amount of gorilla suit is going to make him invisible. Why? Because the gorillas that wander through psych experiments never speak.

Sheila
Guest
13 years ago

@Hugh: “A gruesome thought. Maybe some kinds of autistics might turn out to be very good at concentrating on data feeds. And if, as in Starfish, you can’t find enough naturally damaged people …”

It’s been a long time since I’ve read A Fire on The Deep, but doesn’t it have something like this?

One I definitely remember, with corporate tools using autistic people (but not damaging them to get that way), is The Speed of Dark by Elizabeth Moon.

Hljóðlegur
Guest
13 years ago

@Sheila – it does. That gripping book has a whole ship of slaves brain-damaged by an engineered virus to attend only to their assigned tasks.

Sheila
Guest
13 years ago

@H: “In re the gorilla suit. As I have said before, Mr. Watts is his own country, with its own laws and customs. Ergo, no amount of gorilla suit is going to make him invisible. Why? Because the gorillas that wander through psych experiments never speak.”

We could swap him out for a different speaking gorilla and people wouldn’t notice due to change blindness.

Sheila
Guest
13 years ago

@Peter: “(although I invented an analogous incident in which everyone thought the AI was keying on the arrival of subway cars when in fact it was only correlating patterns on a wall-clock that happened to be in camera range).”

You got me wondering whether one could build needs like breathing into a system, so that it would do the pattern-matching thing as well as notice when it stopped being able to breathe.

Tangent selfish request: state of the art is different than when Starfish was published, and probably out there somewhere they are publishing papers on neural network/Bayesian/method du jour hybrids. Those things would probably do the subway better. any chance for you blogging about it? you could break it in new and interesting ways. too busy? okay, sorry. I’m selfish.

Peter D
Guest
13 years ago

Sheila & Hljóðlegur:

Actually it was “A Deepness in the Sky” that had ‘focus’, not “A Fire Upon The Deep”. But it’s easy to get them confused. I keep wondering if he’ll release a third book, “A Sky Full of Fire” just to make the conclusion circular.

Hljóðlegur
Guest
13 years ago

Pure. Evil. Genius.

Of course, two gorillas – the first one is Peter, the second a cleverly engineered automated weapon system that talks in his particular idiom. Mid-digression on the neurology of consciousness, Peter ducks out and the weapon system gorilla is subbed in. The switcheroo is transparent to the casual observer.

If anyone gets too close, in which case the ruse is discoverable, the system can kill them.

No, wait, that won’t work.

Still, I like it. It just needs more work, a little refinement.

Hugh
Guest
13 years ago

@Hljóðlegur: “If anyone gets too close, in which case the ruse is discoverable, the system can kill them.
No, wait, that won’t work.”

Sounds like a great way to pass the Turing Test! Yes I’m an AI, but if you tell anyone, I’m going to kill you.

Sheila
Guest
13 years ago

thank you wikipedia. this is the experiment I was remembering:


In a study by Simons and Levin (1998), a study confederate would stop passers-by on a college campus to ask them for directions, only to be replaced suddenly by a different confederate when a visual obstacle (two men carrying a large board) passed between them. Although the two confederates looked different, a large number of people faced with this change failed to detect it.

We set up an interview with Peter in a gorilla suit, distract the interviewer, and swap out Peter with someone else in a gorilla suit. See if they notice.

maybe we could do this over IRC with ascii art due to our limited budget.

Caudoviral
Guest
13 years ago

So let’s say we build another intelligence/pattern recognition system. It begins filtering data. It has blind spots. We then receive the data. We have different blind spots. What happens when the data it selects for with its different blind spots falls within our own blind spots? What does it look like if we construct an intelligence, craft it to provide a specific subset of information, and that information falls into our own blind spot? Do we assume our construct has malfunctioned? Or do we reason that we have a functioning construct providing us with data that we can’t see? What happens if it can’t see and analyse things that we clearly can?

It seems that for perfect utility the two fields, if you will, would have to map to each other near-flawlessly. If you want universal applicability, of course.
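The worry is easy to state in set terms. A minimal sketch, with the event categories and blind spots invented for illustration:

```python
# Two imperfect filters in series, modeled as sets of event types.
# Everything the machine drops never reaches the analyst; everything
# the analyst is blind to dies on their desk.
events = {"tank", "gorilla", "clock", "subway", "wall"}

machine_blind = {"gorilla", "wall"}    # what the sensor filter discards
human_blind = {"gorilla", "clock"}     # what the analyst tunes out

machine_sees = events - machine_blind
human_sees = events - human_blind

pipeline_sees = machine_sees & human_sees       # survives both stages
mutual_blind = machine_blind & human_blind      # invisible to everyone
delivered_unseen = machine_sees & human_blind   # flagged by the machine,
                                                # unseeable by the analyst

print("reaches a decision:", pipeline_sees)     # {'tank', 'subway'}
print("nobody can see:", mutual_blind)          # {'gorilla'}
print("flagged but ignored:", delivered_unseen) # {'clock'}
```

Perfect utility, in those terms, means the two “sees” sets coincide; every mismatch either starves the analyst or delivers data the analyst is constitutionally unable to evaluate.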

And Sheila? That just gave a very odd mental image.

sheila
Guest
13 years ago

@Caudoviral:

I don’t have good speculative replies for all of your paragraph, but thinking on the idea of not completely overlapping mental abilities brings up the question of which mistakes one would prefer: the mistakes of the humans or the mistakes of the vat.

In Starfish they relinquish so much control that the mistakes of the head cheese have huge consequences. I don’t know when, in reality, a human has made an equivalently drastic mistake in judgment. can someone think of examples?

C, as for the image, do you mean the image from thinking of a head cheese that needs some of the same nutrients we do? (which maybe even wouldn’t prevent the error in the book)


Chinedum Richard Ofoegbu
Guest

@Peter D: “Actually it was ‘A Deepness in the Sky’ that had ‘focus’, not ‘A Fire Upon The Deep’. But it’s easy to get them confused. I keep wondering if he’ll release a third book, ‘A Sky Full of Fire’ just to make the conclusion circular.”

I’ve thought for a long time that the third book will be A Skyness Preposition the Noun or something to that effect.

Hljóðlegur
Guest
13 years ago

PW: Sheila & Hljóðlegur:

Actually it was “A Deepness in the Sky” that had ‘focus’, not “A Fire Upon The Deep”.

I stand gently corrected. The one with the giant Spiders at Princeton, not the psychic pack dogs at Camelot. I really really wanted to hate these books, but those giant spiders, man, they become real somehow.

Hank Roberts
Guest
13 years ago

Hey, I see no problem with using automated tools to organize our information. Clearly they’re reliable and useful.

Look at Google Scholar for evidence of how information goes into our computers. These “intellects vast and cool and unsympathetic” will be able to rely on their data to manage our future.

Oh, yes.

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.170.2780

Brycemeister
Guest
13 years ago

This dovetails nicely with what I tell me friends, some of whom think I’m one of them conspiracy theorists. I’ve educated them on the fact that while I am fascinated by conspiracies, I do not necessarily believe most of them, or any of them. But I loves me some wackadoodle nutsy stuff; it gives you a real feel for the so-called underbelly. Anyhows, I wind up telling people that massive superfast parallel processing and oodles of surveillance tech mean pretty much diddly squat. It means lots of money for various people. Note that the three or four million CCTV cameras, allegedly hooked to smart software that, y’know, looks for stuff, haven’t lowered the crime rate an iota. Possibly it gives the corporations an edge on advertising.
This is all for the simple reason that if you gots a fancy patooter looking for specific words and phrases on cell phones, you also have to have somebody looking into all that data and trying to decide relevance and importance. Ouch, the head, it hurts to think.
Oh well, at least lots of people are gainfully employed.

Brycemeister
Guest
13 years ago

Craps and a half! Forgot to mention it’s London I was thinking of, with the cameras. Okay, bedtime for me.