(no title)
Lazarus_Long|9 months ago
Cases like:
- The AI replaces a salesperson, but the sales are not binding or final, in case the client gets a bargain at $0 from the chatbot.
- It replaces drivers, but it disengages one second before hitting a tree so the human takes the blame.
- Support wants you to press cancel so the reports say "client cancel" and not "self-drive is doing laps around a patch of grass".
- AI is better than doctors at diagnosis, but in any case of misdiagnosis the blame is shifted to the doctor, because "AI is just a tool".
- AI is better at coding than old meat devs, but when the unmaintainable security hole goes to production, the downtime and breaches can't be blamed on the AI company producing the code; it was the old meat devs' fault.
AI companies want to have their cake and eat it too. Until I see them eating the liability, I know, and I know they know, that it's not ready for the things they say it is.
OutOfHere|9 months ago
The point is that clinicians rarely get sued for misdiagnoses anyway. With AI, all one has to do is open a new chat, tell the AI that its last diagnosis isn't really helping, and it will eagerly give an updated assessment. Compared to a clinician, the AI dramatically lowers the bar for iterating on an issue.
As for drug prescriptions, they should be run through an interactions checker anyway.
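For illustration, a minimal sketch of the pairwise lookup such a checker performs; the drug names and interaction notes below are a toy stand-in, not clinical data.

```python
# Toy sketch of a drug-interactions screen: check every pair in a
# prescription against a known-interactions table. The table entries
# here are illustrative placeholders, not a clinical reference.
from itertools import combinations

KNOWN_INTERACTIONS = {  # hypothetical entries for illustration only
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"sildenafil", "nitroglycerin"}): "severe hypotension",
}

def check_prescription(drugs: list[str]) -> list[str]:
    """Return a warning for every known-interacting pair in the list."""
    warnings = []
    for a, b in combinations(drugs, 2):
        issue = KNOWN_INTERACTIONS.get(frozenset({a.lower(), b.lower()}))
        if issue:
            warnings.append(f"{a} + {b}: {issue}")
    return warnings

print(check_prescription(["Warfarin", "Aspirin", "Metformin"]))
# -> ['Warfarin + Aspirin: increased bleeding risk']
```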
inopinatus|9 months ago
The reason is simple: they are trained as plausibility engines. It's more plausible that a bad diagnostician gives you a worse outcome than a good one, and you have literally just told it, in your prompt, that it's bad at diagnosis.
Sure, you might get another text completion. Will it be correct, actionable, reliable, safe? Even a stopped clock. Good luck rolling those dice with your health.
In summary: do not iterate with prompts that prime the model for declining competence.
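To make the conditioning concrete, a toy sketch; `complete` is a stub standing in for any chat-completion API, and the messages are invented examples. The point is only that the model samples its next reply conditioned on the entire history, so the negative feedback becomes part of what it completes against.

```python
# Toy sketch of chat-style conditioning. `complete` is a hypothetical
# stand-in for a real chat-completion API, which would return a sample
# from p(next reply | entire message history).

def complete(messages: list[dict]) -> str:
    # Stub: illustrates only that the full history is the input.
    return f"<completion conditioned on {len(messages)} prior turns>"

messages = [
    {"role": "user", "content": "Symptoms: fatigue, joint pain, rash."},
    {"role": "assistant", "content": "Most plausible diagnosis: X."},
    # This turn adds no new clinical facts; it only signals that the
    # previous answer was bad, which primes "a different answer"
    # rather than "a more correct answer".
    {"role": "user", "content": "That diagnosis isn't really helping."},
]

print(complete(messages))
```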
pragmatic|9 months ago
We're getting there!