Insurance tech guy here. This is not the revolutionary new type of insurance that it might look like at first glance. It's an adaptation of already-commonplace insurance products that are limited in their market size. If you're curious about this topic, I've written about it at length: https://loeber.substack.com/p/24-insurance-for-ai-easier-sai...
While I am not a fan of the AI craze, and regardless of what I think of the practices of certain insurers, my first thought was that the current state of AI naturally lends itself to insurance. There is a chance that AI gives you a right or wrong answer, and a lesser chance that a wrong answer will lead to damages. Risk-averse users will want to protect themselves, so as long as the income insurers make is higher than the payouts, it's a sound business model.
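That "income higher than payouts" condition is just an expected-value inequality. A toy sketch in Python, where every number (error rate, claim probability, severity, premium) is invented for illustration, not real actuarial data:

```python
# Toy expected-value check for insuring an AI deployment.
# All numbers below are hypothetical illustrations.

def expected_annual_payout(p_wrong_answer: float,
                           p_damage_given_wrong: float,
                           avg_damage: float,
                           interactions_per_year: int) -> float:
    """Expected claims cost: chance of a wrong answer, times the chance
    a wrong answer causes a compensable loss, times average severity."""
    return (interactions_per_year * p_wrong_answer
            * p_damage_given_wrong * avg_damage)

payout = expected_annual_payout(
    p_wrong_answer=0.02,         # 2% of answers are materially wrong
    p_damage_given_wrong=0.001,  # 0.1% of those lead to a claim
    avg_damage=5_000.0,          # average claim size in dollars
    interactions_per_year=1_000_000,
)
premium = 150_000.0              # hypothetical annual premium income

print(f"expected payout: ${payout:,.0f}")               # $100,000
print(f"underwriting margin: ${premium - payout:,.0f}")  # $50,000
```

If the margin is positive across the book of policies, the business model works; the hard part, as the rest of the thread argues, is whether those probabilities can be estimated at all.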
How would any insurance company even begin to control costs on this? It seems like a fast-track to insolvency.
AI models hallucinate, and their black-box nature means no reliable safeguards can be built in, as evidenced by the large number of prompt-jailbreaking paths found in research.
Inherently, AI also operates in a non-deterministic environment, while its computational architecture is constrained by determinism and decidability.
The two are foundationally incompatible for reliable operations.
Language is also one of those trouble areas, since meaning is fluid. It seems quite likely that a chatbot will get stuck in an infinite loop (shades of the halting problem), with the paying customer failing to be served; worse, the company involved imposes a personal cost on them in the process (frustration and lack of resolution). If the company eliminates all but that as a single point of contact, whether by structure or by informal process, I don't see any way you can control costs sufficiently once the lawsuits start piling up.
Was it also commonplace to have insurance covering human errors? For example:
> A tribunal last year ordered Air Canada to honour a discount that its customer service chatbot had made up.
If a human sales representative had made that mistake instead of a chatbot, I wonder if companies will try to recover that cost through insurance. Or perhaps AI insurance won't cover the chatbot for that either?
Man I wish I could get insurance like that. "Accountability insurance"
You were responsible for something, say, child care, and you just decided to go for a beer and leave the child with an AI. The house burns down, but because you had insurance you are not responsible. You just head along to your next child care job and don't worry too much about it.
Lots of insurance covers these types of situations, which are the result of careless acts:
Don't take the right safety precautions and burn down a customer's house - liability insurance
Click on a link in a phishing email and open up your network to a ransomware attack - cyber insurance
Forget to lock your door and get burgled - property insurance
Write buggy software which leads to a hospital having to suspend operations - PI (or E&O) insurance
Fail to adequately adhere to regulatory obligations and get sued - D&O insurance
Obviously there will be various conditions etc. which apply, but I've been in insurance a long time, and cover for carelessness and stupidity is one of the things that keeps the industry going. I've dealt directly with (paid) claims for all of the above situations.
It doesn't absolve responsibility though, it just protects against the financial loss. I suspect if you leave a child alone with an AI and the house burns down that's going to be the least of your problems.
Crime Insurance (Criminal Acts) is exactly what this is for - when an employee does something criminal while on the clock and the company is facing liability as a result of their actions.
>Man I wish I could get insurance like that. "Accountability insurance"
You could. Insurance companies will sell you insurance for just about anything; in custom situations they figure out the risk somehow. You likely wouldn't like how much it'd cost you, though.
Insurance doesn't mean you are not responsible, my dude; way to completely misunderstand insurance.
Insurance just covers financial damage, and it's the insurer making a bet with you that they will profit off the premiums they calculated for your particular coverage instead of you causing an insurance payout that would be in the red for them.
And if you intentionally committed an act that would cause a payout, the insurance would almost certainly void your coverage and claim.
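The insurer's "bet" described above reduces to a simple pricing identity: the premium has to cover expected claims plus the insurer's own expenses, or the policy is written at a loss. A minimal sketch, with an invented claim probability, severity, and expense ratio:

```python
def breakeven_premium(p_claim: float,
                      avg_severity: float,
                      expense_ratio: float = 0.3) -> float:
    """Smallest premium at which the insurer isn't in the red:
    premium * (1 - expense_ratio) must cover p_claim * avg_severity."""
    return p_claim * avg_severity / (1.0 - expense_ratio)

# e.g. a 1% annual chance of a $50,000 claim, with 30% of premium
# going to the insurer's own overhead (all numbers hypothetical)
print(f"${breakeven_premium(0.01, 50_000):,.2f}")  # $714.29
```

Anything the insurer charges above that break-even figure is the profit side of the bet; anything below it means the policyholder's risk was underpriced.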
We know this stuff isn’t ready, is easily hacked, is undesirable by consumers… and will fail. Somehow, it’s still more efficient to cover losses and degrade service than to approach the problem differently.
Customer service personnel are expensive to train properly, and often quit very quickly because they are treated very poorly by customers. The alternative to AI customer service is often no customer service (like Google).
No mercy. Had to deal with one when looking for apartments, and it made up whatever it thought I wanted to hear. Good thing they still had humans around in person when I went for a tour.
I'd think it's the rest of us that need MAGA insurance, to cover the cost of therapy after realizing how cruel and stupid the voting public actually is. And maybe to cover the increased costs of everything due to tariffs.
Oooh, the foundation-model developers could offer to take first losses up to X if developers follow a rule set. This would reduce premiums and thus increase uptake among users of their models.
Reading the actual article, this seems odd. It only covers cases where the models degrade, but there hasn't yet been evidence of an LLM pinned to a checkpoint degrading.
Who in their right mind would underwrite that? Hallucinations are a necessary part of the process, and there's no way to estimate whether the hallucinations are "accurate enough" or not. It'd be like a reverse lottery ticket for the insurance company.
john-h-k|9 months ago
More generally I think “if something is bad, we should not be able to insure it because then we incentivise it” is not right
0xDEAFBEAD|9 months ago
Aside from the fact that your insurance rate just went up, possibly by a lot.
john-h-k|9 months ago
If you're doing it wrong to a meaningful extent, you won't be able to get insurance, or it will be very expensive.
DonHopkins|9 months ago
https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-...
How about MAGA insurance that covers injecting disinfectant, or eating horse dewormer pills, or voting for tariffs?