mxwsn|1 year ago
OK - there's always a nonzero chance of hallucination. There's also a nonzero chance that macroscale objects can quantum tunnel, but no one is arguing that we "need to live with" that fact. A theoretical proof that a 0% probability of some event is unreachable is nice, but in practice it says little about whether we can exponentially decrease the probability of that event and thereby effectively mitigate the risk.
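To make the mitigation point concrete, here is a minimal sketch. The per-check miss rate and the independence of checks are illustrative assumptions, not measurements; the point is only that residual risk can fall exponentially in the number of checks even though it never reaches exactly zero.

    # Sketch: if each of n independent checks misses an error with
    # probability p, the chance that all n miss it is p**n.
    # p = 0.1 is an assumed, illustrative per-check miss rate.
    p = 0.1

    for n in range(1, 6):
        print(f"{n} independent checks: residual risk = {p**n:.0e}")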
unshavedyak|1 year ago
People rag on LLMs constantly and I get it, but then they give humans way too much credit imo. The primary difference I feel we see between LLMs and humans is complexity. No, I don't personally believe LLMs can scale to human "intelligence". But at the moment it feels like comparing a worm brain to a human brain and saying that's evidence that neurons can't reach human-level intelligence, despite the worm having a fraction of the underlying complexity.
threeseed|1 year ago
a) They don't give detailed answers for questions they have no knowledge about.
b) They learn from their mistakes.
amelius|1 year ago
True, but it is defeatist and goes against a good engineering/scientific mindset.
With this attitude we'd still be practicing alchemy.
panarky|1 year ago
LLMs will sometimes be inaccurate. So are humans. When LLMs are clearly better than humans for specific use cases, we don't need 100% perfection.
Autonomous cars will sometimes cause accidents. So do humans. When AVs are clearly safer than humans for specific driving scenarios, we don't need 100% perfection.
krapp|1 year ago
Yet any arbitrary degree of error can be dismissed in LLMs because "humans do it too." It's weird.
talldayo|1 year ago
People didn't stop refining the calculator once it was fast enough to beat a human. It's reasonable to expect absolute idempotent perfection from a robot designed to manufacture text.
digger495|1 year ago
FtFY.