top | item 46596145

myhf | 1 month ago

If an app makes a diagnosis or a recommendation based on health data, that's Software as a Medical Device (SaMD) and it opens up a world of liability.

https://www.fda.gov/medical-devices/digital-health-center-ex...

eastbound | 1 month ago

How do you suggest dealing with Gemini? It's extremely useful for understanding whether something is worrying or not. Whether we like it or not, it's a main participant in the discussion.

jessetemp | 1 month ago

Ideally, hold Google liable until their AI stops confabulating medical advice.

Realistically, sign a EULA waiving your rights, because their AI confabulates medical advice.

derbOac | 1 month ago

Apparently we should hire the Guardian to evaluate LLM output accuracy?

Why are these products being put out there for these kinds of things with no attempt to quantify accuracy?

In many areas AI has become this toy that we use because it looks real enough.

It sometimes works for some things in math and science because we test its output, but Gemini doesn't tell you "there's an 80% chance this is correct." At least then you could evaluate that claim.

There's a kind of task LLMs aren't well suited to because there's no intrinsic empirical verifiability, for lack of a better way of putting it.

arkh | 1 month ago

> How do you suggest dealing with Gemini?

Don't. I do not ask my mechanic for medical advice, why would I ask a random output machine?

overfeed | 1 month ago

> How do you suggest dealing with Gemini?

Robust fines based on a percentage of revenue whenever it breaks the law would be my preference. I'm not here to attempt solutions to Google's self-inflicted business-model challenges.

ndsipa_pomu | 1 month ago

If it's giving out medical advice without a license, it should be banned from giving medical advice and the parent company fined or forced to retire it.

atoav | 1 month ago

As a certified electrical engineer, the number of times Google's LLM has suggested something that would, at minimum, have started a fire is staggering.

I have the capacity to know when it is wrong, but I teach this at university level. What worries me are the people on the starting end of the Dunning-Kruger curve who follow that wrong advice and start "fixing" things in spaces where this might become a danger to human life.

No information is superior to wrong information presented in a convincing way.