
Air Canada chatbot promised a discount. Now the airline has to pay it

34 points | nocommandline | 2 years ago | washingtonpost.com

21 comments


advael|2 years ago

This is correct. For any law to function, responsibility needs to propagate through the use of any tool. If a company is the legal entity responsible for making the decision to deploy a chatbot as a support service, it must be responsible for what that chatbot says. This responsibility should also flow through corporations to the people who had the power to make decisions about how the corporation operates, but I'll take it as a small blessing that we're seeing an unwillingness to set precedent that further allows indirection to evaporate responsibility.

kristianp|2 years ago

This sounds similar to the guy that xeeted about fooling a car dealership chatbot into selling him a car for $1. Different jurisdiction though.

https://venturebeat.com/ai/a-chevy-for-1-car-dealer-chatbots...

doix|2 years ago

I wonder if intent matters. I think it's pretty obvious from the $1 car case that the guy was intentionally trying to break the bot.

In this case, it's much more plausible that it was a genuine misunderstanding.

I'm obviously not a lawyer, I have no idea if that matters. But going by my gut feeling, I agree with the outcome of both cases.

nocommandline|2 years ago

… But when Moffatt later attempted to receive the discount, he learned that the chatbot had been wrong. Air Canada only awarded bereavement fees if the request had been submitted before a flight. The airline later argued the chatbot was a separate legal entity “responsible for its own actions,”….

How exactly do you go about making a chatbot a legal entity?

RandomBacon|2 years ago

Maybe the airline accidentally hired lawyers that have a conscience and wanted to help regular people, so they intentionally used a stupid argument?

Animats|2 years ago

This gets close to the law of principal and agent. Are "intelligent agents" agents in the legal sense? That is, is the principal responsible for their actions? That's usually the case for employees, unless the employee clearly acted outside the scope of their employment. AI systems operated on behalf of a business should be held to the same standard.

There's an economic theory of accounting for mistakes of agents.[1] There's a cost of mistakes, and a cost of decreasing the error rate. So it's something that can be priced into the cost of running the business.

[1] https://www.pthistle.faculty.unlv.edu/WorkingPapers/pamistak...
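A minimal sketch of that pricing argument, with made-up numbers and a hypothetical cost curve (nothing here comes from the linked paper): the business picks the error rate that minimizes expected mistake payouts plus the spend needed to reach that error rate.

    # Toy sketch of pricing chatbot mistakes into the cost of doing business.
    # All figures are hypothetical, chosen only to show the tradeoff.

    def total_cost(error_rate: float, conversations: int,
                   cost_per_mistake: float, base_reliability_spend: float) -> float:
        """Expected annual cost of running the bot at a given error rate.

        Assumes, purely for illustration, that reliability spend scales like
        1/error_rate (halving the error rate doubles the spend).
        """
        expected_payouts = conversations * error_rate * cost_per_mistake
        reliability_spend = base_reliability_spend / error_rate
        return expected_payouts + reliability_spend

    if __name__ == "__main__":
        # Sweep a few target error rates for a hypothetical 1M conversations/year
        # and a $500 average payout per mistake.
        for rate in (0.0001, 0.001, 0.01, 0.1):
            cost = total_cost(rate, conversations=1_000_000,
                              cost_per_mistake=500.0,
                              base_reliability_spend=200.0)
            print(f"error rate {rate:.4%}: expected annual cost ${cost:,.0f}")

With these invented numbers the total cost is U-shaped: driving errors toward zero eventually costs more than the payouts it prevents, which is the sense in which mistakes can be "priced in".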

vsnf|2 years ago

Moreover, how does one hold a chatbot responsible?

from-nibly|2 years ago

Companies will probably stop using these kinds of chatbots if people keep exploiting them. Not that anyone should do that. But I'm just saying that's probably what they would do.

orionblastar|2 years ago

A human would not fall for that. Chatbots can be tricked and exploited.

booi|2 years ago

Humans will absolutely fall for that.