
Lazarus_Long | 9 months ago

In general my smoke test for this kind of thing is whether the company (or whatever) gladly accepts full liability for the AI usage.

Cases like:

- The AI replaces a salesperson, but the sales are not binding or final, in case the client gets a bargain at $0 from the chatbot.

- It replaces drivers, but it disengages one second before hitting a tree so the human takes the blame.

- Support wants you to press cancel so the reports say "client canceled" and not "self-driving is doing laps around a patch of grass".

- AI is better than doctors at diagnosis, but in any case of misdiagnosis the blame is shifted to the doctor, because "AI is just a tool".

- AI is better at coding than old meat devs, but when the unmaintainable security hole goes to production, the downtime and breaches can't be blamed on the AI company producing the code; it was the old meat devs' fault.

AI companies want to have their cake and eat it too. Until I see them eating the liability, I know, and I know they know, it's not ready for the things they say it is.


odyssey7|9 months ago

Most doctors have insurance for covering their mistakes. We might expect an AI medical startup to pay analogous premiums when it’s paid analogous fees.

treetalker|9 months ago

Exactly: skin in the game, and to underscore the point, make any debt non-dischargeable in bankruptcy.

_alternator_|9 months ago

The obvious next step is not that LLMs replace doctors; it's that LLMs become part of the "standard of care", a component of the triage process. You go to the emergency room, and an LLM assessment becomes routine, if not required. This study shows that doing that would significantly increase accurate diagnoses, for a start. Everyone wins.

OutOfHere|9 months ago

That's completely missing the point. The LLM scored substantially higher than the clinicians. Statistically, this means the clinicians will have many more misdiagnoses.

The point is that clinicians don't really get sued for misdiagnoses most of the time anyway. With AI, all one has to do is open up a new chat, tell the AI that its last diagnosis isn't really helping, and it will eagerly give an updated assessment. Compared to a clinician, the AI dramatically lowers the bar for iteratively working with it to address an issue.

As for drug prescriptions, they are to be processed through an interactions checker anyway.

inopinatus|9 months ago

If you tell an LLM that its last effort was bad, it won't give you a better outcome. It will get worse at whatever you asked for.

The reason is simple: they are trained as plausibility engines. It's more plausible that a bad diagnostician gives you a worse outcome than a good one, and you have just prompted it that it's bad at diagnosis.

Sure, you might get another text completion. Will it be correct, actionable, reliable, safe? Even a stopped clock. Good luck rolling those dice with your health.

In summary, do not iterate with prompts for declining competence.

ncgl|9 months ago

Jesus they're calling us meat devs?

pragmatic|9 months ago

Reminds me of the assassin droid in KOTOR 2 that called everyone meatbags.

We're getting there!