top | item 39400437

ooboe | 2 years ago

"In an argument that appeared to flabbergast a small claims adjudicator in British Columbia, the airline attempted to distance itself from its own chatbot's bad advice by claiming the online tool was 'a separate legal entity that is responsible for its own actions.'

"'This is a remarkable submission,' Civil Resolution Tribunal (CRT) member Christopher Rivers wrote."

From https://www.cbc.ca/news/canada/british-columbia/air-canada-c...

jasonjayr|2 years ago

IANAL, but it's astounding they took that as their defense, rather than pointing to a line (I hope?) in their ToS that says "This agreement is the complete terms of service, and cannot be amended or changed by any agent or representative of the company except by ... (some very specific process the bot can't follow)". I've seen this mentioned in several ToSs, I expect it to be standard boilerplate at this point ...

Drakim|2 years ago

That does make sense, but on the flipside, let's say that they start advertising discounts on TV, but when people try to pay the reduced rate they say "according to our ToS that TV ad was not authorized to lower the price".

Obviously that wouldn't fly. So why would it fly when the AI chatbot advertises discounts?

JCM9|2 years ago

Courts often rule that you can’t use ToS to overcome common sense. ToS are not a get out of jail free card if your company just does stupid things.

dataflow|2 years ago

How do those clauses actually work? If a rep does something nice for you (like give you something for free), could the airline say it never agreed to that in writing or whatever and demand it back? How are you supposed to know if a rep has authority to enter into an agreement with you over random matters?

But, to your question, my guess is that invoking such a clause would basically be telling people to avoid their chatbot, which they don't want to do.

easyThrowaway|2 years ago

I guess the original issue pointed out by the judge would still stand: how am I supposed to know which terms are to be assumed true and valid? Why would I assume a ToS hidden somewhere (Is it still valid? Does it apply to my case? Is it relevant and binding in my jurisdiction?) is more trustworthy than an Air Canada agent?

AlexandrB|2 years ago

How is that enforceable? In many cases this is carte blanche for company representatives to lie to you. No one is going to read the ToS and cross reference it with what they're being told in real time. Moreover, if a customer was familiar with the ToS they would not be asking questions like this of a chatbot. The entire idea of having a clause like this while also running a "help" chatbot that can contradict it seems like bad faith dealing.

sdwr|2 years ago

Those ToS statements overreach their capabilities a lot of the time. They're ammunition against the customer, but don't always hold up in the legal system.

kreek|2 years ago

Beyond the chatbot's error and the legal approach they took, this bad PR could have been avoided by any manager in the chain doing the right thing: overriding the policy, just giving the passenger the bereavement fare, and then fixing the bot and updating the policy.

skywhopper|2 years ago

The claim is so outrageous that I wish there were a way (I assume there probably isn't) for the company or the lawyers to have been sanctioned outside what the plaintiff was asking for.

animex|2 years ago

Straight out of I, Robot.

onlyrealcuzzo|2 years ago

How is this different from me getting one of my friends to work at Air Canada and promise me a billion dollars to cancel my flight?

Will Air Canada be liable for my friend going against company policy?

thsksbd|2 years ago

That's fraud because you're in cahoots with your friend.

If a random AC employee gave you a free flight, on the other hand, you'd be entitled to it.

Anyway, the chat bot has no agency except that given to it by AC; unlike a human employee, therefore, its actions are 100% AC actions.

I don't see how this is controversial? Why do people think that laws no longer apply when fancy high-tech pixie dust is sprinkled?

eirikbakke|2 years ago

The legal concept is called "Apparent authority". The test is whether "a reasonable third party would understand that an agent had authority to act".

("Chatbot says you can submit a form within 90 days to get a retroactive bereavement discount" sounds perfectly reasonable, so the doctrine applies.)

https://en.wikipedia.org/wiki/Apparent_authority

vkou|2 years ago

>How is this different from me getting one of my friends to work at Air Canada and promise me a billion dollars to cancel my flight?

There is a common misconception about law that software engineers have. Code is not law. Law is not code. Just because something that looks like a function exists, you can't just plug in any inputs and expect it to have a consistent outcome.

The difference between these two cases is that even if a chat bot promised that, the judge would throw it out, because it's not reasonable. Also, the firm would have a great case against at least the CS rep for this collusion.

If a CS agent you weren't colluding with promised you a bereavement refund (as the chatbot did), even though it went against company policy, you'd have good odds of winning that case, because the judge would find it reasonable for you to believe that a policy described by a CS rep would actually be honored. (And the worst that would happen to the CS rep would be termination.)

weego|2 years ago

Likely because the claim was considered to be within the reasonable expectations of real policy.

carlosjobim|2 years ago

Law and justice are not like a computer program that you can exploit and control without limits by being a hacker.

If the chatbot had told them they'd get a billion dollars, the courts would not have held Air Canada responsible for it, just as they wouldn't if a programmer misplaced a decimal point and prices became obviously wrong. In this case, the chatbot described a policy within reason, and the court awarded the passenger what the bot had promised, which is a completely correct judgement.

vundercind|2 years ago

The computer only does what they told it to.

What they told it to do was to behave very unpredictably. They shouldn’t have done that.

TheCoelacanth|2 years ago

No, if you conspire with your friend to get them to tell you an incorrect policy, then you have no reasonable expectation that what they tell you is the real policy. If you are promised a billion dollars even without a pre-existing relationship with the agent, you have no reasonable expectation that what they are promising is the real policy because it's an unbelievably large amount.

If you are promised something reasonable by an agent of the company who you are not conspiring with, then the company is bound to follow through on the promise because you do have a reasonable expectation that what they are telling you is the real policy.

spamizbad|2 years ago

Your example is significantly different.

The chatbot instructed the passenger to pay full price for a ticket but stated they could get a partial refund later. That refund policy was a hallucination. The victim here just walked away with the discounted fare as promised, not a billion dollars.

nneonneo|2 years ago

Because that would not be reasonable, and nobody would be surprised if Air Canada reneged on it. See, for instance, Leonard v. PepsiCo.

If your friend promised you something reasonable in the course of carrying out their duties, and you honestly believed them, I think that would be legal and enforceable just as this case suggests.

dataflow|2 years ago

> How is this different from me getting one of my friends to work at Air Canada

One major difference is the AI wasn't your friend, another is that you didn't get it hired at Air Canada, another is that the promise wasn't $1B, etc...

BiteCode_dev|2 years ago

Your friend is not trained by Air Canada. The bot is Air Canada's property.

If they decide it is reliable enough to put in front of customers, they must accept all the consequences: the benefits, like getting to hire fewer people, and the cons, like having to make it work correctly.

Otherwise: whoopsy, we made our AI handle our accounting and it cheated, sorry IRS. That won't fly.

oliwary|2 years ago

No, it is more similar to Air Canada hiring a monkey to push buttons to handle customer complaints. In that case, the company knows (or should know) that the given information may be wrong, but accepts the risk.

Fnoord|2 years ago

The AI has authorization from higher up to be presented 1:1 as the truth. Your friend does not have authorization to promise you a billion dollars in their employer's name.

BadHumans|2 years ago

Chatbots aren't people and people are actually separate legal entities responsible for their own actions.

nrmitchi|2 years ago

What you are describing is 1) fraud, 2) conspiracy, and 3) not a policy that a reasonable person would take at face value.

It is very different from an employee making, in writing, a statement that a reasonable person would find reasonable.

hiddencost|2 years ago

Weird straw man...

So replacing all their customer support staff with AI that misleads customers is OK? That's pants-on-head insane, so why spend time trying to justify it?

willcipriano|2 years ago

You didn't get your friend to do it, an employee just decided to. There is no conspiracy.