sksksk | 1 month ago

>This article gave an LLM a bunch of health metrics and then asked it to reduce it to a single score, didn't tell us any of the actual metric values, and then compared that to a doctor's opinion. Why anyone would expect these to align is beyond my understanding.

This gets at one of LLMs' core weaknesses: they blindly respond to your requests and rarely push back against the premise.

next_xibalba | 1 month ago

I read somewhere that LLM chat apps are optimized to return something useful rather than something correct or comprehensive (where "useful" means the user accepts the answer). I found this a useful (ha!) way to explain to friends and family why they need to be skeptical of LLM outputs.