
Insurers launch cover for losses caused by AI chatbot errors

136 points| jmacd | 9 months ago |ft.com

64 comments


loeber|9 months ago

Insurance tech guy here. This is not the revolutionary new type of insurance that it might look like at first glance. It's an adaptation of already-commonplace insurance products that are limited in their market size. If you're curious about this topic, I've written about it at length: https://loeber.substack.com/p/24-insurance-for-ai-easier-sai...

em-bee|9 months ago

while i am not a fan of the AI craze, and regardless of what i think of the practices of certain insurers, my first thought was that the current state of AI naturally lends itself to insurance. there is a chance that AI gives you a right or wrong answer, and a lesser chance that a wrong answer will lead to damages. but risk-averse users will want to protect themselves. so as long as the income insurers make is higher than the payouts, it's a sound business model.
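The economics described above reduce to a back-of-envelope expected-value calculation. A minimal sketch, with all numbers invented purely for illustration:

```python
# Illustrative math for "income > payouts = sound business model".
# Every number here is a made-up assumption, not real actuarial data.
p_wrong = 0.05         # assumed chance a chatbot answer is wrong
p_damage = 0.01        # assumed chance a wrong answer causes a covered loss
avg_payout = 50_000    # assumed average payout per claim, in dollars
queries_per_year = 100_000

# Expected annual loss = claims frequency * severity
expected_annual_loss = queries_per_year * p_wrong * p_damage * avg_payout

# The insurer prices the premium above expected loss, plus a loading
# for expenses and profit (50% loading assumed here).
premium = expected_annual_loss * 1.5

print(f"expected annual loss: ${expected_annual_loss:,.0f}")
print(f"break-even-plus premium: ${premium:,.0f}")
```

The business stays sound only while the hallucination and damage rates stay near the assumed values; a model update that doubles `p_wrong` silently doubles the insurer's expected loss.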

trod1234|9 months ago

How would any insurance company even begin to control costs on this? It seems like a fast-track to insolvency.

AI models hallucinate, and by their blackbox nature can't have any kind of safeguards put in, as has been evidenced by the large number of paths in research to prompt jailbreaking.

Inherently also, AI is operating in a non-deterministic environment, but its architecture for computation is constrained by determinism and decidability. The two are foundationally incompatible for reliable operation.

Language is also one of those trouble areas, since meaning is floating. It seems quite likely that a chatbot will get stuck in an infinite loop (halting problem), with the paying customer failing to be served and, worse, the company imposing personal costs on them in the process (frustration and lack of resolution). If the company eliminates all but that single point of contact, either in structure or in informal process, I don't see any way you can actually control costs sufficiently when the lawsuits start piling up.

omoikane|9 months ago

Was it also commonplace to have insurance covering human errors? For example:

> A tribunal last year ordered Air Canada to honour a discount that its customer service chatbot had made up.

If a human sales representative had made that mistake instead of a chatbot, I wonder whether companies would try to recover that cost through insurance. Or perhaps AI insurance won't cover the chatbot for that either?

conartist6|9 months ago

Man I wish I could get insurance like that. "Accountability insurance"

You were responsible for something, say, child care, and you just decided to go for a beer and leave the child with an AI. The house burns down, but because you had insurance you are not responsible. You just head along to your next child care job and don't worry too much about it.

alexriddle|9 months ago

Lots of insurance covers these types of situations, which are the result of careless acts...

Don't take the right safety precautions and burn down a customer's house - liability insurance

Click on a link in a phishing email and open up your network to a ransomware attack - cyber insurance

Forget to lock your door and get burgled - property insurance

Write buggy software which leads to a hospital having to suspend operations - PI (or E&O) insurance

Fail to adequately adhere to regulatory obligations and get sued - D&O insurance

Obviously there will be various conditions etc. which apply, but I've been in insurance a long time, and cover for carelessness and stupidity is one of the things which keeps the industry going. I've dealt directly with (paid) claims for all of the above situations.

It doesn't absolve responsibility though, it just protects against the financial loss. I suspect if you leave a child alone with an AI and the house burns down that's going to be the least of your problems.

thallium205|9 months ago

Crime Insurance (Criminal Acts) is exactly what this is for - when an employee does something criminal while on the clock and the company is facing liability as a result of their actions.

Justin_K|9 months ago

It's called errors and omissions and it's as basic an insurance as it gets.

kube-system|9 months ago

Insurance can’t go to jail for you but it can and often does pay your legal fees and/or civil liabilities regardless of fault.

Suppafly|9 months ago

>Man I wish I could get insurance like that. "Accountability insurance"

You could. Insurance companies will sell you insurance for just about anything; in custom situations they figure out the risk somehow. You likely wouldn't like how much it'd cost you, though.

john-h-k|9 months ago

This feels like a pretty far-fetched straw man. If someone had invented medical malpractice insurance yesterday, you could use the exact same argument.

More generally, I think "if something is bad, we should not be able to insure it, because then we incentivise it" is not right.

0xDEAFBEAD|9 months ago

>You just head along to your next child care job and don't too much worry about it.

Aside from the fact that your insurance rate just went up, possibly by a lot.

WrongAssumption|9 months ago

Being covered does not mean you are not responsible.

delfinom|9 months ago

Insurance doesn't mean you are not responsible, my dude; way to completely misunderstand insurance.

Insurance just covers financial damage. The insurer is making a bet with you that they will profit off the premiums they calculated for your particular coverage, rather than have you cause a payout that would put them in the red.

And if you intentionally committed an act that caused a payout, the insurer would almost certainly void your coverage and deny the claim.

caulkboots|9 months ago

Not sure insurance will take the rap for criminal negligence.

imoverclocked|9 months ago

At best, this screams, “you’re doing it wrong.”

We know this stuff isn't ready, is easily hacked, is undesirable to consumers... and will fail. Somehow, it's still more efficient to cover losses and degrade service than to approach the problem differently.

rchaud|9 months ago

That assumes that insurers will readily pay out when such claims are made. Insurers don't make money doing that.

nickff|9 months ago

Customer service personnel are expensive to train properly, and often quit very quickly because they are treated very poorly by customers. The alternative to AI customer service is often no customer service (like Google).

john-h-k|9 months ago

> At best, this screams, “you’re doing it wrong.”

If you’re doing it wrong to a meaningful extent you won’t be able to get insurance or it will be very expensive

Neywiny|9 months ago

No mercy. Had to deal with one when looking for apartments and it made up whatever it thought I wanted to be right. Good thing they still had humans around in person when I went for a tour.

DonHopkins|9 months ago

Can consumers get AI insurance that covers eating a pizza with glue on it, or eating a rock?

https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-...

How about MAGA insurance that covers injecting disinfectant, or eating horse dewormer pills, or voting for tariffs?

20after4|9 months ago

I'd think it's the rest of us that need MAGA insurance, to cover the cost of therapy after realizing how cruel and stupid the voting public actually is. And maybe to cover the increased costs of everything due to tariffs.

fsfod|9 months ago

I wonder if the premiums scale up depending on the temperature used for the model output.

JumpCrisscross|9 months ago

Oooh, the foundation-model developers could offer to take first losses up to X if developers follow a rule set. This would reduce premiums and thus increase uptake among users of their models.
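The layered cover proposed above is just a first-loss split of each claim: the foundation-model developer absorbs everything up to a limit, the insurer pays the excess. A minimal sketch of the mechanics (function name and numbers are hypothetical, not from the article):

```python
# Hypothetical first-loss layering: the model developer pays the first
# `first_loss_limit` of each claim, the insurer pays the remainder.
def split_claim(claim: float, first_loss_limit: float) -> tuple[float, float]:
    """Return (developer_share, insurer_share) for a single claim."""
    developer_share = min(claim, first_loss_limit)
    insurer_share = max(claim - first_loss_limit, 0.0)
    return developer_share, insurer_share

# A small claim stays entirely within the developer's layer;
# a large claim spills over to the insurer.
print(split_claim(30_000, 50_000))
print(split_claim(120_000, 50_000))
```

Because the insurer only sees losses above the limit, its expected payout (and hence the premium) drops, which is exactly the uptake incentive the comment describes.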

85392_school|9 months ago

Reading the actual article, this seems odd. It only covers cases where the models degrade, but there hasn't yet been evidence of an LLM pinned to a checkpoint degrading.

yieldcrv|9 months ago

AI that hallucinates, yet is accurate often enough, should just carry Errors and Omissions insurance like human contractors do.

hoistbypetard|9 months ago

Who in their right mind would underwrite that? Hallucinations are a necessary part of the process, and there's no way to estimate whether the hallucinations are "accurate enough" or not. It'd be like a reverse lottery ticket for the insurance company.

AzzyHN|9 months ago

I wonder who makes more errors, underpaid & undertrained employees, or AI chatbots.

otabdeveloper4|9 months ago

Whew. Somebody finally figured out how to make money off the nu-AI bubble.

vfclists|9 months ago

Pretty sure it will wind up like insurance against malware such as NotPetya.

aatd86|9 months ago

And now with MCP... one should make sure not to give agents access to sensitive capabilities.