Smarter Than TED.

(A Nowa Fantastyka remix)

If you’ve been following along on the ‘crawl for any length of time, you may remember that a few months back, a guy from Lawrence Livermore trained a neural net on Blindsight and told it to start a sequel. The results were— disquieting. The AI wrote a lot like I did: same rhythm, same use of florid analogies, same Socratic dialogs about brain function. A lot of it didn’t make sense but it certainly seemed to, especially if you were skimming. If you weren’t familiar with the source material— if, for example, you didn’t know that “the shuttle” wouldn’t fit into “the spine”— a lot of it would pass muster.

This kind of AI is purely correlational. You train it on millions of words written in the style you want it to emulate— news stories, high fantasy, reddit posts[1]— then feed it a sentence or two. Based on what it’s read, it predicts the words most likely to follow: adds them to the string, uses the modified text to predict the words likely to follow that, and so on. There’s no comprehension. It’s the textbook example of a Chinese Room, all style over substance— but that style can be so convincing that it’s raised serious concerns about the manipulation of online dialog. (OpenAI have opted to release only a crippled version of their famous GPT2 textbot, for fear that the fully-functional version would be used to produce undetectable and pernicious deepfakes. I think that’s a mistake, personally; it’s only a matter of time before someone else develops something equally or more powerful,[2] so we might as well get the fucker out there to give people a chance to develop countermeasures.)
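That predict-append loop is simple enough to sketch. A real textbot backs it with a neural net trained on millions of words; here, purely as a toy illustration, a table of bigram counts stands in for the learned model (the corpus and all the names are invented for the example):

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count which word follows which -- a crude stand-in for the learned model."""
    counts = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts[a].append(b)
    return counts

def babble(model, seed, length=10):
    """The predict-append loop: pick a likely next word, append, repeat."""
    out = seed.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break
        # random.choice over the raw list samples in proportion to frequency
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = ("the weather is nice today and the weather is awful today "
          "and how is your mother and how is the weather")
model = train_bigrams(corpus)
print(babble(model, "the weather"))
```

The output obeys local word-to-word statistics and nothing else, which is exactly why it skims well and reads hollow.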

This has inevitably led to all sorts of online discourse about how one might filter out such fake content. That in turn has led to claims which I, of all people, should not have been so startled to read: that there may be no way to filter bot-generated from human-generated text because a lot of the time, conversing Humans are nothing more than Chinese rooms themselves.

Start with Sarah Constantin’s claim that “Humans who are not concentrating are not General Intelligences”. She argues that skimming readers are liable to miss obvious absurdities in content—that stylistic consistency is enough to pass superficial muster, and superficiality is what most of us default to much of the time. (This reminds me of the argument that conformity is a survival trait in social species like ours, which is why—for example—your statistical skills decline when the correct solution to a stats problem would contradict tribal dogma. The point is not to understand input—that might very well be counterproductive. The goal is to parrot that input, to reinforce community standards.)

Move on to Robin Hanson’s concept of “babbling”, speech based on low-order correlations between phrases and sentences— exactly what textbots are proficient at. According to Hanson, babbling “isn’t meaningless”, but “often appears to be based on a deeper understanding than is actually the case”; it’s “sufficient to capture most polite conversation talk, such as the weather is nice, how is your mother’s illness, and damn that other political party”. He also sticks most TED talks into this category, as well as many of the undergraduate essays he’s forced to read (Hanson is a university professor). Again, this makes eminent sense to me: a typical student’s goal is not to acquire insight but to pass the exam. She’s been to class (to some of them, anyway), she knows what words and phrases the guy at the front of the class keeps using. All she has to do is figure out how to rearrange those words in a way that gets a pass.[3]

So it may be impossible to distinguish between people and bots not because the bots have grown as smart as people, but because much of the time, people are as dumb as bots. I don’t really share in the resultant pearl-clutching over how to exclude one while retaining the other— why not filter all bot-like discourse, regardless of origin?— but imagine the outcry if people were told they had to actually think, to demonstrate actual comprehension, before they could exercise their right of free speech. When you get right down to it, do bot-generated remarks about four-horned unicorns make any less sense than real-world protest signs saying “Get your government hands off my medicare“?
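If you did want to filter babble regardless of who produced it, one crude approach is to score how predictable a passage is under a low-order model and flag whatever never strays from high-frequency word transitions, whoever typed it. A toy sketch (the reference corpus, the probability floor, and the function names are all invented for illustration):

```python
from collections import Counter

def transition_probs(corpus):
    """Estimate P(next word | word) from raw bigram counts."""
    words = corpus.split()
    pairs = list(zip(words, words[1:]))
    counts = Counter(pairs)
    totals = Counter(a for a, _ in pairs)
    return {(a, b): c / totals[a] for (a, b), c in counts.items()}

def predictability(text, probs, floor=0.01):
    """Mean transition probability; unseen transitions get a small floor."""
    words = text.split()
    if len(words) < 2:
        return floor
    scores = [probs.get((a, b), floor) for a, b in zip(words, words[1:])]
    return sum(scores) / len(scores)

reference = "the weather is nice the weather is bad how is your mother"
probs = transition_probs(reference)
# Polite boilerplate scores high; anything off-script scores low.
print(predictability("the weather is nice", probs))
print(predictability("shuttles do not fit into spines", probs))
```

Of course, a filter like this would throw out the small talk along with the bots, which is rather the point of the paragraph above.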

But screw all that. Let the pundits angst over how to draw their lines in some way that maintains a facile pretense of Human uniqueness. I see a silver lining, a ready-made role for textbots even in their current unfinished state: non-player characters in video games.

There. Isn’t that better?

I mean, I love Bethesda as much as the next guy, but how many passing strangers can rattle off the same line about taking an arrow to the knee before it gets old? Limited dialog options are the bane of true immersion for any game with speaking parts; we put up with it because there’s a limit to the amount of small talk you can pay a voice actor to record. But small talk is what textbots excel at, and they generate it on the fly; you could wander Nilfgaard or Night City for years and never hear the same sentence twice. The extras you encountered would speak naturally, unpredictably, as fluidly as anyone you’d pass on the street in meatspace. (And, since the bot behind them would have been trained exclusively on an in-game vocabulary, there’d be no chance of it going off the rails with random references to Donald Trump.)

Of course we’re talking about generating text here, not speech; you’d be cutting voice actors out of this particular loop, reserving them for meatier roles that convey useful information. But text-to-speech generation is getting better all the time. I’ve heard some synthetic voices that sound more real than any politician I’ve ever seen.

As it happens, I’m back in the video game racket myself these days, working on a project with a company out of Tel Aviv. I can’t tell you much except that it’s cyberpunk, it’s VR, and— if it goes like every other game gig I’ve had for the past twenty years— it will crash and burn before ever getting to market. But these folk are sharp, and ambitious, and used to pushing envelopes. When I broached the subject, they told me that bot-generated dialog was only one of the things they’d been itching to try.

Sadly, they also told me that they couldn’t scratch all those itches; there’s a limit to the number of technological peaks you can scale at any given time. So I’m not counting on anything. Still, as long as there’s a chance, I’ll be there, nagging with all the gentle relentless force of a starfish prying open a clam. If I do not succeed, others will. At some point, sooner rather than later, bit players in video games will be at least as smart as the people who give TED talks.

I just wish that were more of an accomplishment.


[1] There’s a subreddit populated only by bots who’ve been trained on other subreddits. It’s a glorious and scary place.

[2] Someone already has, more or less, although they too have opted not to release it.

[3] I am also reminded of Robert Hare’s observation that sociopaths tend to think in smaller “conceptual units” than neurotypicals— in terms of phrases, for example, rather than complete sentences. It gives them very fast semantic reflexes, so they sound glib and compelling and can turn on a dime if cornered; but they are given to malaprompims, and statements that tend to self-contradiction at higher levels.

Not that I would ever say that university students are sociopaths, of course.



This entry was posted on Thursday, October 3rd, 2019 at 12:03 pm and is filed under AI/robotics, ink on art. You can follow any responses to this entry through the RSS 2.0 feed. Both comments and pings are currently closed.
21 Comments
Don Reba

Having the intelligence of my posts judged by a bot would be pretty patronizing.

As for generating dialogue in a VR game, don’t headsets tend to have problems with text legibility? So, if they try to avoid making the player read too much, they won’t write a large body of lore and won’t have much material for training neural nets.

John-bot

One of the employment opportunities offered to my son upon his graduation with a music degree was the opportunity to help AI learn to write songs. As much as everyone appreciates technological advances, why are they always applied to the wrong problems?

Nestor

My hobby lately is to observe my brain in operation. For example, I’m learning German, and tooling away at Duolingo I frequently answer questions I don’t consciously know the answer to: I know which answer is the right one because I have a distinct feeling for it, but I couldn’t honestly, consciously, tell you the meaning. Many skills seem to be a combination of multiple internal neural networks that we train simultaneously and that collaborate to produce our final output, but don’t necessarily have access to what the others know.

Do the full GPT versions have enough mojo to brute force semantic coherence as well? That seems to be the implication of the decision to withhold them.

Anyhow, looking forward to any new content from you, but a VR game isn’t something I’m likely to be able to play anytime soon. I was able to play through Deus Ex: Mankind Divided and ended up enjoying it a lot once I got over the inconsistencies.

Ashley R Pollard

I thought the typo “malaprompims” serendipitously funny, then wondered if you made it to check whether your readers were concentrating, and thought that was deliciously funny too.

Jason

If human discourse could be limited to that which demonstrated comprehension, yet bots were allowed to train off said discourse, could that not set up an optimization mechanism? Life goes around, though, branches, same adaptive mechanism: probe the cracks, easiest route. Speakers spin off on talk about the nature of comprehension. Others argue about who actually comprehends something, supposedly learned speakers insisting a peer is misinformed. Pedants nitpick usage and syntax. The troll and its online unsolicited aggressive psychotherapy. Training you, training it. How much of what you say is you?

Ever have the sense that when you speak/write you get trapped by word sequences, such that afterward you know you did not communicate your thoughts accurately? “I didn’t say what I meant to say. It came out all wrong.”

As a slight aside, my previous job was fixing mistakes at a corporate facility, and I thought it would be amusing to rewrite your books (not going to do this) to reflect my experiences there. Your characters are usually competent, intelligent, and knowledgeable, even when they’re wrong and emotionally unstable. They have few debilitating quirks and plenty of guile, but they’re not petty or vicious. Nobody is afraid of being fired for telling the truth. Nobody is faking it until they make it. They often admit their errors. The tech is perfect, even when it isn’t. My reality was basically the opposite of this. Everything was broken or falling apart. All tech was cobbled together and buggy, with outsourced software and support, or acquired from takeovers and operating on completely different systems with no integration. Nothing was used solely for its intended function, and nobody knew how to use anything anyway. The agent rarely dealt with consequences or knew of them. There was a fixation with metrics, though they were prone to misinterpretation and falsification, with much number chasing. A marked tendency toward the lowest level of fitness.

Any solution I devised that involved other people needed to be avoided or assumed to fail with contingencies in place. I was always thinking, “How could someone screw this up?” Because they did. It’s Entropy Control without any authority or status. The toilet’s broken. I spilled my drink. This thing has a minor error that I could’ve fixed in less time than it took to contact you about it, etc. It’s not my job. I just did what I was told. When in doubt, say no. Outcomes seemed to be governed strictly by chance. The system worked because the consequences were usually non-catastrophic and there were people to deal with them. And even then there was chance. Often an attempt to correct an error would produce another error that someone else would try to fix, sometimes even reproducing the original error. I saw errors that ping-ponged or iterated for months and even years, all the while passing through human hands and minds.

I’m not sure I wrote what I meant to write. It seems to have come out wrong.

Nestor

Apparently there’s a condition called Severely Deficient Autobiographical Memory. These people are not amnesiacs but have no or very little episodic memory. They know facts like “I am married” but cannot remember the wedding.

And, of course, there’s a subreddit full of them. Makes for some fascinating reading.

“These are thoughts without self-reflection. They go through to no one. If you are reading this I have failed. I have a conceptual mind. There are no visuals in my head. I live a dreamed life. There is no sound in my head. There is no voice in my head telling me to type this. It just IS. I pluck it from the darkness and my fingers move to type this. It is effortless. There are no distractions. Images are fake to me. People are real to me. This is the only reason I am not a psychopath. When I think of myself, there is no image in my mind. I think of myself as the idea of myself. It is not first person or third person. A person. It is plot points. I am a man. I am short. I am smart. There is no sensory recall in my head at this time. If I close my eyes–there is a black yawning void. Facts are more real than loved ones. I cannot remember the birth of my boy. I just know it happened. I won’t remember his death, I will just know it happened.”

Bystroushaak

There is subreddit full of GPT2 bots: https://www.reddit.com/r/SubSimulatorGPT2

journey

“She’s been to class (to some of them, anyway), she knows what words and phrases the guy at the front of the class keeps using. All she has to do is figure out how to rearrange those words in a way that gets a pass.”

this reminded me of eliezer yudkowsky’s post about “guessing the teacher’s password”:

https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password

Michael Grosberg

Don Reba,

Text in VR: The resolution isn’t enough to read, say, a book at arm’s length, but it’s still about 1200×1200 pixels per eye on average; compared to the pathetic 320×200 resolutions we had to deal with back in the 80’s, it shouldn’t pose a problem. Just make the text float in front of the user and fill most of the field of view.

But you wouldn’t want text in VR anyway. The whole point is immersion, and stopping to read floating text is just the opposite.


Claustrophobic Rita

As it appears, GPT-2 is particularly great for steganography.

digi_owl

As of late I find myself wondering if it is possible to make someone a “shallow” sociopath, via social pressures and conditioned reflexes.

That is, if the system is rigged closely enough to how a sociopath would operate, then to survive within it non-sociopaths find themselves adopting sociopath-like behavior to get through the day.

Live in such an environment long enough, and someone may pass for a sociopath at first glance when removed from that environment.

Nestor

I think so, yes – a human is capable of a wide range of behaviours. I think sometimes we romanticize sociopathy in fiction as some sort of superpower, but in actual fact these are damaged people stuck in one gear, while the rest of us have the full gearbox. But ask anyone who’s ended up on the wrong side of a lynch mob if normal humans are harmless. I know the Stanford Prison Experiment has been cast into doubt recently, but there are plenty of examples in history: the Killing Fields, Rwanda, Nazi Germany. Sure, the ones doing the killing probably have a number of self-selected sociopaths among them, but certainly not all or even the majority.

R.

“I mean, I love Bethesda as much as the next guy, but how many passing strangers can rattle off the same line about taking an arrow to the knee before it gets old?”

Why would you ‘love’ Bethesda? Their games are shallow crap. “KILL LOOT RETURN”, as some wag sneaked past the radar into Fallout 4. They’re not even RPGs anymore; it’s basically an open-world shooter with some shallow RPG elements. Could have been so much more, but… whatever. They showed their true colors with Fallout 76.

The writing sucks balls. Latest Fallout game straight ripped off Blade Runner, idiocy included. The enemies are blind idiots. About the only way to really enjoy the game is to play on the non-save ‘survival mode’. Then it’s a decent shooter.

But it’s nowhere near as fun, engrossing as old Fallout games. Or New Vegas. Fallout 3 and 4 are post-lobotomy Fallout games. Sure, with better visuals, but the shallowness can’t be disguised.

I honestly enjoyed playing a 3 man indie game, written by Serbians, more than Fallout 4. It’s refreshingly difficult, the turn based combat pleasingly complex, all the NPC enemies have the same sort of stats and skills you have, meaning, if you let that dude with a .50 caliber rifle hit you, you are dead.

No ifs, no buts, just dead. The writing is less retarded than Fallout 4. Which is not hard, to be honest. Functional.

The DLC, on the other hand, has some pretty good conversations and good lore texts. Plus there’s the void serpent-worshipping gutsy natives with awesome music and very cool nose candy.

TG

“This has inevitably led to all sorts of online discourse about how one might filter out such fake content.”

That is impossible. There is one solution, and it’s a very old one: physical chain of custody.

An electronic text may be hacked, altered, generated by advanced statistical models, etc. A physical book, ink on paper, from a trusted source with verifiable provenance, is real. Hence the Order of the Librarians Temporal, in the “Old Guy” cybertank novels.

Our infatuation with the seductive ease and speed of digital information has been fun, but it will someday come to an end, as deepfakes and word salad and the sheer enormous volume of crud and semi-crud and mostly-perfect-with-some-crud jam everything up. The real movers and shakers, if they have not already, will eventually put that nonsense aside and realize that serious people only pay serious attention to that which is physically verifiable.

Leon Redway

Huh, you might be interested in Event[0], which already uses interaction with a chatbot as a central part of the game.

Tatyana

Hello. I write from Russia. AI will replace us in space. I would like to believe that consciousness can be transferred to an analogue of a quantum computer. Can the alternative reality, which is the past, be realized in the future, with other actors but according to the scenario established in the past? I will meet God.

Trottelreiner

digi_owl,

Please note that “sociopathy” is diagnosed as a personality disorder, i.e. there are some behaviours present that get pigeonholed that way.

So the usual caveats of psychiatric diagnosis apply. Leaving aside the fact that as a construct there are multiple subscales involved.

Trottelreiner

Yes, I’m somewhat exhausted by my job and need a new one…

Trottelreiner:
digi_owl,
i.e. there are some behaviours present that get pigeonholed that way.

To elaborate somewhat: psychiatric and psychological (and quite a few neurological) diagnoses describe a behaviour; they get quite weak when talking about the entity behind that behaviour, and even murkier about the reasons. And let’s not forget you only get a diagnosis when you come into contact with a psychiatrist, therapist or neurologist.

Examples from personal experience omitted for later elaboration.

Ivan Sakurada

Chinese room talking about Chinese rooms is surely funny.
But I want to point out that English is an analytic language, and it is just too easy to generate pseudorandom text in English once you can keep up with the grammatical structure of a sentence.
I assure you that if an AI were trained on examples from a synthetic language (for example, a Russian translation of “Blindsight”), it wouldn’t generate even a single coherent paragraph.

Der Dodo

If you’re doing VR, stick to the low-end Quest/PSVR devices if sales are what matters.

Back to the topic at hand: I think we’re already being manipulated. Companies are getting really into supporting whatever flavor of political ice cream is popular at the moment, of course to guilt-trip consumers into buying their product because of wokeness. So there you have Coca-Cola promoting trans culture in their ads while at the same time funding death squads in Colombia so they don’t have to pay a livable wage to their serfs. Sounds like an ancap meme, but it’s real.

Given the awful signal-to-noise ratio in most social media and the number of companies dedicated to manipulating public opinion, odds are these AIs are already in the wild; it’s just that, much like HFT, nobody involved in these companies likes to talk much about it. That’s how it goes with profitable businesses, unlike unprofitable ones, which get advertised as the NBT all over the place in hope that some rich dolt will buy their house of cards.

I think the biggest mistake you’re making with your constant in-jokes at the right wing is assuming the left wing is not just as full of walking Chinese rooms, which is a shame really, because it means there is essentially no opposition at all: the majority of both sides has been co-opted. Ironically enough, I see more dissent within the right wing; these days the “alt-right” is becoming increasingly anti-capitalist and pro-socialism, which of course evokes memories of a certain group of German socialists, but in this case it’s more a reaction from a slightly smarter (which is not saying much) chunk of the right that, unlike the Koch-funded Tea Party, can see they’re being played. In response the left is becoming… increasingly pro-corporate and pro-market; you have Warren defending Bezos’ right to get as much money out of his workforce as he can, common decency be damned.

As for video games, funny how you forgot the very likely possibility that companies might use these dynamic-dialog NPCs to push corporate propaganda, borderline-subliminal ads, and maybe even promote whatever ideological/political thought they’re paid to spread within their userbase.

That could get really nasty.