top | item 41541148


mxwsn | 1 year ago

OK - there's always a nonzero chance of hallucination. There's also a non-zero chance that macroscale objects can do quantum tunnelling, but no one is arguing that we "need to live with this" fact. A theoretical proof of the impossibility of reaching 0% probability of some event is nice, but in practice it says little about whether we can exponentially decrease the probability of it happening or not to effectively mitigate risk.
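The exponential-mitigation point is easy to make concrete. A minimal sketch (mine, not the commenter's), assuming each verification pass misses a given hallucination independently with probability `p_miss` — a strong, illustrative assumption:

```python
# Illustrative sketch: if each independent review pass misses a given
# hallucination with probability p_miss, then k passes all miss it with
# probability p_miss**k -- the residual error decays exponentially in k.
# Independence between passes is assumed purely for illustration.

def residual_error(p_miss: float, k: int) -> float:
    """Probability a hallucination survives all k independent checks."""
    return p_miss ** k

if __name__ == "__main__":
    for k in (1, 2, 4, 8):
        print(f"{k} passes -> residual error {residual_error(0.1, k):.0e}")
```

The independence assumption rarely holds exactly in practice, but the arithmetic shows why driving an error rate down by orders of magnitude can be enough even when exactly zero is unreachable.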


unshavedyak | 1 year ago

Plus, why do we care about that degree? If we could make it so humans don't hallucinate either, that would be great, but it isn't happening. Human memory gets polluted the moment you feed it new information, as evidenced by how much care we have to take when extracting information when it matters, e.g. in law enforcement.

People rag on LLMs constantly, and I get it, but they then give humans way too much credit, IMO. The primary difference I see between LLMs and humans is complexity. No, I don't personally believe LLMs can scale to human "intelligence." But at the moment it feels like comparing a worm brain to a human brain and calling that evidence that neurons can't reach human-level intelligence, despite the worm having a fraction of the underlying complexity.

threeseed | 1 year ago

Humans have two qualities that make them infinitely superior to LLMs for similar tasks.

a) They don't give detailed answers to questions they know nothing about.

b) They learn from their mistakes.

amelius | 1 year ago

> there's always a nonzero chance of hallucination. There's also a non-zero chance that macroscale objects can do quantum tunnelling, but no one is arguing that we "need to live with this" fact.

True, but resigning ourselves to it is defeatist and goes against a good engineering/scientific mindset.

With this attitude we'd still be practicing alchemy.

panarky | 1 year ago

Exactly.

LLMs will sometimes be inaccurate. So are humans. When LLMs are clearly better than humans for specific use cases, we don't need 100% perfection.

Autonomous cars will sometimes cause accidents. So do humans. When AVs are clearly safer than humans for specific driving scenarios, we don't need 100% perfection.

krapp | 1 year ago

If we only used LLMs for use cases where they exceed human ability, that would be great. But we don't. We use them to replace human beings in the general case, and many people believe that they exceed human ability in every relevant factor. Yet if human beings failed as often as LLMs do at the tasks for which LLMs are employed, those humans would be fired, sued and probably committed.

And yet any arbitrary degree of error gets dismissed in LLMs because "humans do it too." It's weird.

talldayo | 1 year ago

> When AVs are clearly safer than humans for specific driving scenarios, we don't need 100% perfection.

People didn't stop refining the calculator once it was fast enough to beat a human. It's reasonable to expect deterministic, near-perfect output from a machine designed to manufacture text.

digger495 | 1 year ago

LLMs will always have some degree of inaccuracy.

FtFY.