
Air Canada Claims It’s Not Responsible For Its Own Chatbot’s Hallucinations


When are we going to call it, fam? When are we going to finally admit that chatbots and AI can’t actually do the job of human beings? How many scams, PR disasters, and public debacles is it going to take before we finally say, “Yeah, you know what? Relying on mindless robots for all of our art and knowledge maybe isn’t the great idea we thought it was.”

The latest AI disaster? Well, it’s a couple of weeks old, so it’s not as recent as the AI-generated Willy Wonka knockoff, but it’s still pretty astonishing. A man named Jake Moffatt needed to fly to Toronto after his grandmother died, so he asked Air Canada’s chatbot how bereavement rates worked. The chatbot told him to buy a full-priced ticket and then request a partial refund within 90 days, which he did—making sure to screenshot the chatbot’s guidance.

When it came time to get the refund, though, Air Canada told Moffatt that its chatbot was wrong. You see, generative AI—the technology behind things like customer service chatbots—is susceptible to what are called "hallucinations," or information that the AI simply makes up. Last year, for instance, a couple of lawyers got into deep shit when they had ChatGPT write a legal brief for them, only to find out that it cited a bunch of court cases that didn't actually exist. AI can't tell the difference between true and false information. All it can do is remix existing text into something that sounds plausible.

Incredibly, the airline claimed that the mess was Moffatt’s fault because he hadn’t checked the chatbot’s guidance against the airline’s official policy. According to the airline’s reasoning, Moffatt should have known that the customer service tool Air Canada provided couldn’t be trusted. Presumably, if the wings fell off the plane mid-flight, Air Canada would chide customers for boarding it in the first place.

Air Canada refused to issue the refund, so Moffatt took the airline to British Columbia's Civil Resolution Tribunal, and won. Air Canada's defense was even more absurd than its initial refusal to issue the refund. According to the airline, the chatbot—which, let me stress, the airline uses on its own website to answer customers' questions—is "a separate legal entity that is responsible for its own actions."

If a human employee gave a customer false information, then Air Canada could conceivably give the customer the refund, and hold the employee accountable in turn. But to argue that a bot, which doesn’t earn a paycheck and only does what you program it to do, is its own legal entity? Get outta here with that.

Could we please just stop? Stop producing mountains of AI gobbledegook, stop replacing capable humans with incompetent bots, stop pretending that AI is going to somehow make society better? It’s not working, friends. Call it off. AI is only making everything worse.

(via Ars Technica, featured image: Paramount Pictures)


Author
Julia Glassman
Julia Glassman (she/her) holds an MFA from the Iowa Writers' Workshop, and has been covering feminism and media since 2007. As a staff writer for The Mary Sue, Julia covers Marvel movies, folk horror, sci fi and fantasy, film and TV, comics, and all things witchy. Under the pen name Asa West, she's the author of the popular zine 'Five Principles of Green Witchcraft' (Gods & Radicals Press). You can check out more of her writing at https://juliaglassman.carrd.co/.
