Terry Gilliam’s Air Canada

“What happens to the very concept of a war crime when every massacre can be defined as an industrial accident?”
—“Collateral”

“It’s not our mistake!”
—Sam Lowry, Brazil


Being who I am, I tend to portray my futures in the spirit of Orwell’s Nineteen Eighty-Four or Brunner’s The Sheep Look Up. Sometimes, though, reality turns out more like Gilliam’s Brazil: just as grim, but hysterically so.

Take my short story “Collateral”: a tale that (among other things) asks about culpability for decisions made by machines. Or “Malak”, about an autonomous drone with “no appreciation for the legal distinction between war crime and weapons malfunction, the relative culpability of carbon and silicon, the grudging acceptance of ethical architecture and the nonnegotiable insistence on Humans In Ultimate Control.” Both stories are military SF; both deal with realpolitik and the ethics of multi-digit kill counts in contexts ranging from conventional warfare to mass shootings. Serious stuff—because I R Serious Writer, and these R Serious Things.

And then the real world comes along and makes the whole issue utterly ridiculous.

If you’re Canadian, you’ll know about Air Canada. They’re our sole remaining major airline, after swallowing up the competition a few decades ago. You may have seen them in the news recently for such customer-friendly acts as forcing a disabled passenger to drag himself off the plane while the flight crew stood around watching, refusing to divert to an emergency landing while another passenger was inconveniently dying in Economy, refusing to board people named “Mohammad” because their names were “Mohammad”, and coming in dead last for on-time flights among all major North American airlines. It’s none of those noteworthy accomplishments I’m here to talk about today, though; rather, I’m here to remark upon their historic accomplishment in AI. In one fell swoop they’ve leapt over the fears of such luminaries as Geoffrey Hinton and David Chalmers, who opine that AI might become dangerously autonomous in the near future.

Air Canada has claimed, in court, that they’ve created a chatbot which is already an autonomous agent, and hence beyond corporate control.

It’s a bold claim, with case history to back it up. Back in 2022 one Jake Moffat, planning a flight to attend his grandmother’s funeral, went online to inquire about bereavement fares. Air Canada’s chatbot helpfully informed him that he could apply for the bereavement discount following the trip; when he tried to do that, his claim was denied because bereavement rates couldn’t be claimed for completed travel. When Moffat presented screenshots of Air Canada’s own chatbot saying the exact opposite, Air Canada politely told him to fuck off.

So Moffat sued them.

The case played out in Small Claims Court, over such a trifling sum (less than a thousand dollars) that it must have cost the airline far more to defend their position than it would have to simply fork over the money they owed. But this wasn’t about money: this was apparently a matter of principle, and Air Canada puts principle above all. They made the case that they weren’t responsible for erroneous chatbot claims (what those in the know might call “hallucinations”) because—let me make sure I’ve got this right—

Ah yes. Because the chatbot was “a separate legal entity that is responsible for its own actions.”

Apparently the Singularity happened, and Air Canada’s attorneys were the only ones to notice.

The judge, visionless Luddite that he was, didn’t buy it for a second. No word yet on whether Air Canada will appeal. But it seems strangely, stupidly appropriate that the momentous and historic claim of AI autonomy (dare I say sapience?) emerged not from some silicon Cambridge Declaration, not from any UN tribunal on autonomous military drones, but from petty corporate bean-counters trying to shaft some grieving soul for $812 Canadian.

When it comes to cool, bleeding-edge tech, William Gibson once observed that “The street finds its own uses for things”. What he forgot to mention, apparently, is that at least one of those uses is “being a dick”.




This entry was posted on Sunday, February 18th, 2024 at 12:19 pm and is filed under AI/robotics, legal. You can follow any responses to this entry through the RSS 2.0 feed. Both comments and pings are currently closed.
30 Comments
Lars · 2 months ago

Good to see that Air Canada continues to live up to its motto – “If you want to get somewhere in the worst possible way…”

Phil · 1 month ago · Reply to Lars

Or, “We’re not happy until you’re unhappy.”

Phil · 1 month ago · Reply to Phil

Too late to edit, but that should read:

“We’re not happy until you’re not happy.”

Paulygon · 1 month ago · Reply to Peter Watts

Excellent Psych reference, intended or not. Sorry, I’m done now.

george · 1 month ago

Next up, if your phone’s autocomplete/autocorrect/autostupidity feature writes something objectionable… You can take _it_ to court 😀

I mean, I get it, you’re a lawyer, you _must_ use any and all arguments no matter how absurd… but there has to be _some_ limit. Some basis in reason. Right?

Right? I am pressing tab why is it not tabbing out help!

Martin · 1 month ago

Apparently, if they put a disclaimer saying “The information from our customer helpline chatbot may be incorrect.”, they’ll be covered in the future.‍♂️

Martin · 1 month ago · Reply to Martin

Apparently, this website converts a “shrug” emoji to a male symbol. *shrug*

SomeHistoryGuy · 1 month ago

To be honest, shackling Wintermute so you can use it to dodge refunds is exactly the sort of myopic nonsense I’ve come to expect from large corporations. Much more realistic than trying to live forever.

Andy · 1 month ago

Re: “being a dick” might I remind you of one Peter Riviera from “Neuromancer”? The guy with a holographic projector he uses to act like a complete douchebag? Guy was basically a troll 40 years before the term gained its current meaning (and 60 years before it becomes a capital offence; I hope).

Gibson got a lot of stuff wrong but what he got right, he got right.

CHIMP · 1 month ago

Reminds me of that car dealership using ChatGPT, where a user was able to convince it to sell a car for $1. Apparently not legally binding, but I wonder: if it’s truly AGI, then isn’t it a “no takesies backsies” kind of deal?

listedproxyname · 1 month ago

Oh, I believe this “legal” discussion has been pretty hot for years now, especially around self-driving cars. If the robot is driving the car and the driver is at the wheel, who is responsible for accidents: the driver? The company? Nobody at all? Maybe we should detain the immaterial soul of the car and hold it accountable?

The industry is developing quickly, and amazing new tools come out every month, but they are still tools, and very imperfect ones at that. These tools are supposed to be much better than us, maybe outperform people at certain tasks and reduce human error (without reducing humanity), but we’re not even remotely there.

But here’s the scary thing: human souls aren’t perfect either, and there’s (allegedly) no quality control department that can even attempt to run checks on that. The discussion instantly turns political, and loses all appeal to progress altogether.

What if the most pressing concern for human capital in the current economy is not that robots will somehow catch up to humans, but rather a lot of humans discovering that their bullshit jobs and bullshit education have made them useless and redundant? What a catastrophe that would be!

Fatman · 1 month ago

“but rather a lot of humans themselves discovering that their bullshit jobs and bullshit education has made them useless and redundant.”

Pretty sure we caught onto that a couple of centuries ago. As long as we keep getting paid for those bullshit jobs, most of us will be fine with the idea.

The K · 1 month ago · Reply to Fatman

Exactly. Not everyone wants to derive the meaning of life from their work. I, for example, like my work, but I work because it pays enough to live on.

If I won the lottery I wouldn’t work, but would dedicate myself to something else entirely. Hobbies, my cats, charity work for cats, and so on.

Now the more interesting problem for me is: when (not if) we have outsourced pretty much all paid work to machines, with a small overseer caste of specialists, who exactly will buy all the nifty stuff the factories churn out? How long will capitalism last without a consumer class?

Fatman · 1 month ago · Reply to The K

That’s one of the big contradictions of terminal-stage capitalism. Techbros, “free market absolutists”, and their nutswingers don’t really seem to be able to wrap their minds around the concepts of “money” and “wealth”, save for the foggy notion that “we want more” (or “we’re envious of those who have more”, in the nutswingers’ case).

Also known as the “Galt’s Gulch Gibberish” theory.

Andrei · 1 month ago · Reply to The K

Abundance is not a problem at all, see “The Midas Plague”. Scarcity of the stuff that you need (like in the Soviet Union etc), that is a real problem.

Terebrus · 1 month ago

And that’s the most optimistic scenario for the near future!

Phil · 1 month ago

How dare you, sir! This is our national airline you’re talking about. In its defense:

  1. He was able to drag himself off the plane without assistance (or his wheelchair) so where is the problem?
  2. Economy is…

Actually, never mind. It’s a bad airline run by idiots. That they went to court over information given by their own website – sorry, by their sentient AI – is just fucking unbelievable. I’d love a transcript of legal’s discussion leading to that decision, especially the part where they drill down into how, while on the one hand the bereavement angle was certain to make this national news, on the other hand they could save a six-billion-dollar company $812.

In their defense, you can fire a human, but they’re stuck with the tech, and if it really is sentient, maybe they’re rightly afraid.

Hugh · 1 month ago · Reply to Phil

My guess is that the legal team asked the chatbot whether they should settle or take it to court.

Phil · 1 month ago · Reply to Hugh

Ah, that’s probably what happened. It would explain the decision, and be true to form.

Fatman · 1 month ago · Reply to Phil

“In their defense, you can fire a human, but they’re stuck with the tech”

To mangle a paraphrase from Ian McDonald, “machines can’t be punished, only people can”. He writes quite a bit about the implications and (ab)uses of fully sentient AI.

Bogdanow · 1 month ago

“What he forgot to mention, apparently, is that at least one of those uses is “being a dick”.”

Well, it could be argued that he put out a couple of novels that were essentially about using tech to be a dick.

Andrei · 1 month ago

Talking about “Malak” – does it actually add much to “Watchbird”?

trackback

[…] Terry Gilliam’s Air Canada […]

Kris · 1 month ago

It’s quite an argument for the company to be making: This chatbot is a separate legal entity responsible for its own actions, and we’ve intentionally given it access to our systems to allow it to use those systems to commit fraud, which makes us an accessory.

Jack · 1 month ago · Reply to Kris

Arrest that AI! And lock him away!

Dr. Dumasse · 1 month ago

To quote John Scalzi, “Fuck-Fuck-Fuck-Fuckity-Fuck!”

I wish I’d known about this before it went to court. To wit, a plan:

  1. Contact Mr. Moffat.
  2. Pay him what Air Canada didn’t, in exchange for delaying the suit because “he has to think about it.”
  3. Pay his attorney an hour’s fee to send a nice letter to Air Canada re the delay, with an ambiguous attitude about the bot’s legal standing.
  4. Figure out which foundation model the bot is based on, and spend a week or two doing “prompt engineering research” with the bot and its base model.
  5. Have Mr. Moffat execute a newly-crafted “right conversation” with the Air Canada bot about a hypothetical scenario, where the bot states that in that hypothetical-but-impossible case it would pay out $10M for denied claims in which it had contradicted its own prior statements.
  6. Contact Air Canada and present them with both bot conversations – after first sending them to some journalists, and perhaps to a pack of hungry lawyers.

NB, security researchers do stuff like this all the time to (for example) get ‘safe’ chatbots to describe how to 3D-print a handgun.

GPT-3, GPT-4, Claude, Gemini – and all other chatbots run by grownups with money at stake – have big fat disclaimers at the bottom of every input prompt saying “answers may be wrong.” The companies that run these GenAI/LLM chatbots go out of their way to make it clear that their bots are not mouthpieces for company policy – for exactly the reason illustrated above.

But, hey, some dicks at Air Canada think they know more about GenAI and LLMs than OpenAI, Google, Meta, and Anthropic do. Heh.

The real tragedy here is a missed opportunity to profit from the punishment and suffering of the dicks at Air Canada by exploiting their cruelty/stupidity in the most ironic way possible.

Antrax · 1 month ago

Dear Peter, thank you for your creativity. On the question of war: Putin’s elections are coming, mobilization is coming; I will remain a man if I refuse. Brazil and Terry Gilliam – thanks.