If it's autonomous or self-driving, then why is the person in the car paying for the insurance? Surely if it's Tesla making the decisions, they need the insurance?
Generally speaking, liability for a thing falls on the owner/operator. That person can sue the manufacturer to recover the damages if they want. At some point, I expect it to become somewhat routine for insurers to pay out, then sue the manufacturer to recover.
The product you buy is called "FSD Supervised". It clearly states you're liable and must supervise the system.
I don't think there's a law that would allow Tesla (or anyone else) to sell a passenger car with an unsupervised system.
If you take Waymo or Tesla Robotaxi in Austin, you are not liable for accidents, Google or Tesla is.
That's because they operate under limited state laws that allow them to provide such a service, but the law doesn't allow selling such cars to people.
That's changing. Quite likely this year we will have federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not person present in the car.
> Quite likely this year we will have federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not person present in the car.
You raise an important point here. Is it economically feasible for system makers to bear the responsibility of self-driving car accidents? It seems impossible, unless the cars are much more expensive to cover the potential future costs. I'm very curious how Waymo insures their cars today. I assume they have a bespoke insurance contract negotiated with a major insurer. Also, do we know the initial cost of each Waymo car (to say nothing of ongoing costs from compute/mapping/etc.)? It must be very high (2x?) given all of the special navigation equipment that is added to each car.
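To the feasibility question, here's a rough, purely illustrative sketch of what prefunding lifetime liability coverage could add to a car's sticker price. The premium, service life, and discount rate below are assumptions for the sake of the arithmetic, not real Waymo or Tesla figures:

```python
# Back-of-envelope: sticker-price increase needed to prefund a
# maker-funded lifetime liability policy. All numbers are assumptions.

annual_premium = 2400.0   # assumed per-vehicle liability premium ($/year)
vehicle_life = 12         # assumed service life in years
discount_rate = 0.05      # assumed discount rate on future premiums

# Present value of an annuity: premium * (1 - (1+r)^-n) / r
npv = annual_premium * (1 - (1 + discount_rate) ** -vehicle_life) / discount_rate
print(f"Sticker-price increase to prefund insurance: ${npv:,.0f}")
```

Under these assumptions the maker would need to bake roughly $21k into the purchase price, which is in line with the "cars must be much more expensive" intuition above.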
Tacking "Supervised" on the end of "Full Self Driving" is just contradictory. Perhaps if it was "Partial Self Driving" then it wouldn't be so confusing.
> Quite likely this year we will have federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not person present in the car.
This is news to me. This context seems important to understanding Tesla's decision to stop selling FSD. If they're on the hook for insurance, then they will need to dynamically adjust what they charge to reflect insurance costs.
You can sell autonomous vehicles to consumers all day long. There's no US federal law prohibiting that, as long as they're compliant with FMVSS as all consumer vehicles are required to be.
Without LIDAR and/or additional sensors, Tesla will never be able to provide "real" FSD, no matter how wonderful their software controlling the car is.
Also, self driving is a feature of a vehicle someone owns, I don't understand how that should exempt anyone from insuring their property.
Waymo and others are providing a taxi service where the driver is not a human. You don't pay insurance when you ride Uber or Bolt or any other regular taxi service.
If your minor child breaks something, or your pet bites someone, you are liable.
This analogy may be more apt than Tesla would like to admit, but from a liability perspective it makes sense.
You could in turn try to sue Tesla for defective FSD, but the now-clearly-advertised "(supervised)" caveat, plus the lengthy agreement you clicked through, plus lots of lawyers, makes you unlikely to win.
> Surely if it's Tesla making the decisions, they need the insurance?
Why surely? Turning on cruise control doesn't absolve motorists of their insurance requirement.
And the premise is false. While Tesla does "not maintain as much insurance coverage as many other companies do," there are "policies that [they] do have" [1]. (What it insures is a separate question.)
The coders and sensor manufacturers need the insurance for wrongful-death lawsuits, and Musk for removing lidar, since the car keeps swerving across high-speed traffic at shadows because the visual cameras can't perceive true depth.
99% of the people on this website are coders and know how even one small typo can cause random failures, yet you trust them to make you an alpha/beta tester at high speed?
Risk gets passed along until someone accepts it, usually an insurance company or the operator. If the risk was accepted and paid for by Tesla, then the cost would simply be passed down to consumers. All consumers, including those that want to accept the risk themselves. In particular, if you have a fleet of cars it can be cheaper to accept the risk and only pay for mandatory insurance, because not all of your cars are going to crash at the same time, and even if they did, not all in the worst way possible. This is how insurance works, by amortizing lots of risk to make it highly improbable to make a loss in the long run.
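The amortization point can be sketched numerically: for a fleet of independently crashing cars, total losses swing less and less relative to their mean as the fleet grows, which is why a large pool can absorb risk an individual can't. The crash probability and cost below are illustrative assumptions:

```python
import math

# Fleet of n independent cars, each crashing with probability p at cost
# `cost`. Total losses have mean n*p*cost and standard deviation
# cost*sqrt(n*p*(1-p)); their ratio shrinks as the fleet grows.

p = 0.03        # assumed annual crash probability per car
cost = 30_000   # assumed average cost per crash ($)

def relative_volatility(n):
    """Std deviation of total losses divided by expected total losses."""
    mean = n * p * cost
    std = cost * math.sqrt(n * p * (1 - p))
    return std / mean

for n in (1, 100, 10_000, 1_000_000):
    print(f"fleet of {n:>9,}: losses swing about "
          f"{relative_volatility(n):.1%} around the mean")
```

With one car the loss is wildly unpredictable (several hundred percent of the mean); with a million insured cars it's under one percent, so the insurer can price premiums close to expected cost.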
Not an expert here, but I recall reading that certain European countries (Spain???) allow liability to be put on the autonomous driving system, not the person in the car. Does anyone know more about this?
That is the case everywhere. It is common when buying a product for the contract to include who has liability for various things. The price often changes by a lot depending on who has liability.
Cars are traditionally sold as the customer has liability. Nothing stops a car maker (or even an individual dealer) from selling cars today taking all the insurance liability in any country I know of - they don't for what I hope are obvious reasons (bad drivers will be sure to buy those cars since it is a better deal for them and in turn a worse deal for good drivers), but they could.
Self driving is currently sold as the customer has liability because that is how it has always been done. I doubt it will change, but it is only because I doubt there will ever be enough advantage as to be worth it for someone else to take on the liability - but I could be wrong.
I think there is an even bigger insurance problem to worry about: if autonomous vehicles become common and are a lot safer than manually driven vehicles, insurance rates for human-driven cars could wind up exploding as the risk pool becomes much smaller and statistically riskier. We could go from paying $200/month to $2,000/month if robotaxis start dominating cities.
> if autonomous vehicles become common and are a lot safer than manual driven vehicles, insurance rates for human driven cars could wind up exploding as the risk pool becomes much smaller and statistically riskier.
The assumption there is that the remaining human drivers would be the higher risk ones, but why would that be the case?
One of the primary movers of high risk driving is that someone goes to the bar, has too many drinks, then needs both themselves and their car to get home. Autonomous vehicles can obviously improve this by getting them home in their car without them driving it, but if they do, the risk profile of the remaining human drivers improves. At worst they're less likely to be hit by a drunk driver, at best the drunk drivers are the early adopters of autonomous vehicles and opt themselves out of the human drivers pool.
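Both sides of this argument come down to a weighted average: premiums track the expected claims of whoever remains in the human-driver pool, so the outcome depends on which segment adopts autonomous vehicles first. The segment shares and claim costs below are illustrative assumptions:

```python
# Sketch of the pool-composition argument. Each segment is
# (share_of_pool, expected annual claim $); numbers are assumptions.

def avg_expected_claim(segments):
    """Average expected claim per driver over the remaining pool."""
    total = sum(share for share, _ in segments)
    return sum(share * claim for share, claim in segments) / total

# Assumed starting pool: mostly low-risk drivers, a small high-risk tail.
pool = [(0.90, 800), (0.10, 5_000)]
print(f"everyone drives:           ${avg_expected_claim(pool):,.0f}")

# High-risk drivers (e.g. the bar crowd) adopt robotaxis first:
print(f"high-risk leave the pool:  ${avg_expected_claim([(0.90, 800)]):,.0f}")

# Low-risk commuters adopt first, leaving a riskier pool:
print(f"low-risk leave the pool:   ${avg_expected_claim([(0.10, 5_000)]):,.0f}")
```

If the risky segment leaves first, the average claim (and hence the fair premium) falls; if the safe segment leaves first, it rises, which is the scenario the rate-explosion worry assumes.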
Because the operator is liable? Tesla as a company isn't driving the car; it's an ML model running on something like HW4, on bare metal in the car itself. Would that make the silicon die legally liable?
The point is if the liability is always exclusively with the human driver then any system in that car is at best a "driver assist". Claims that "it drives itself" or "it's autonomous" are just varying degrees of lying. I call it a partial lie rather than a partial truth because the result more often than not is that the customer is tricked into thinking the system is more capable than it is, and because that outcome is more dangerous than the opposite.
Any car has varying degrees of autonomy, even the ones with no assists (it will safely self-drive you all the way to the accident site, as they say). But the car is either driven by the human with the system's help, or is driven by the system with or without the human's help.
A car can't have 2 drivers. The only real one is the one the law holds responsible.
> If it autonomous or self-driving then why is the person in the car paying for the insurance? Surely if it's Tesla making the decisions, they need the insurance?
Suppose ACME Corporation produces millions of self-driving cars and then goes out of business because the CEO was embezzling. They no longer exist. But the cars do. They work fine. Who insures them? The person who wants to keep operating them.
Which is the same as it is now. It's your car so you pay to insure it.
I mean, think about it. If you buy an autonomous car, would the manufacturer have to keep paying to insure it forever, as long as you can keep it on the road? The only real options for making the manufacturer carry the insurance are that the answer is no, and then they turn off your car after e.g. 10 years, which is quite objectionable; or that the answer is yes, but then you have to pay a "subscription fee" to the manufacturer which is really the insurance premium, which is also quite objectionable because you're then locked into the OEM instead of having a competitive insurance market.
I like your thesis, but what about this: all this self-driving debate is nonsense unless you require Tesla to pay all damages, plus additional damages "because you were hit by a robot!". That would make sure Tesla improves the system and that it operates above human safety levels. Then one can forget about legislation and Tesla can do its job.
So to circle back to your thesis: when the car is operating autonomously, the manufacturer is responsible. If it goes broke then what? Then the owner will need to insure the car privately. So Tesla insurance might have to continue to operate (and be profitable).
The question this raises is if Tesla should sell any self-driving cars at all, or instead it should just drive them itself.
It isn't fully autonomous yet. For any future system sold as level 5 (or level 4?), I agree with your contention -- the manufacturer of the level 5 autonomous system is the one who bears primary liability and therefore should insure. "FSD" isn't even level 3.
(Though, there is still an element of owner/operator maintenance for level 4/5 vehicles -- e.g., if the owner fails to replace tires below 4/32", continues to operate the vehicle, and it causes an injury, that is partially the owner/operator's fault.)
Wouldn't that requirement completely kill any chance of a L5 system being profitable? If company X is making tons of self-driving cars, and now has to pay insurance for every single one, that's a mountain of cash. They'd go broke immediately.
I realize it would suck to be blamed for something the car did when you weren't driving it, but I'm not sure how else it could be financially feasible.
[1] https://www.sec.gov/ix?doc=/Archives/edgar/data/0001318605/0...