top | item 47024834

jt2190 | 15 days ago

> How do you not see the difference between a machine that will hallucinate something random if it doesn’t know the answer vs a human...

Your claim here is that humans can't hallucinate something random. Clearly they can and do.

> ... that will logic through things and find the correct answer.

But humans do not find the correct answer 100% of the time.

The way that we address human fallibility is to create a system that does not accept the input of a single human as "truth". Even these systems achieve only "very high probability", not 100% correctness. We can employ the same systems with AI.
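The redundancy argument above can be sketched numerically. This is my own toy simulation, not anything from the thread: each of N independent responders is right with some probability, and we take a majority vote. The names (`majority_vote`, `estimate`) and the 0.8 accuracy figure are illustrative assumptions.

```python
import random

def majority_vote(answers):
    """Return the most common answer among independent responders."""
    return max(set(answers), key=answers.count)

def trial(n_voters, p_correct, rng):
    """One trial: each responder is independently correct with probability p_correct."""
    answers = ["right" if rng.random() < p_correct else "wrong"
               for _ in range(n_voters)]
    return majority_vote(answers) == "right"

def estimate(n_voters, p_correct, trials=20000, seed=0):
    """Monte Carlo estimate of how often the majority verdict is correct."""
    rng = random.Random(seed)
    return sum(trial(n_voters, p_correct, rng) for _ in range(trials)) / trials

# A single fallible responder vs. a panel of five equally fallible ones:
# the panel's majority is right noticeably more often, yet still short of 100%.
print(estimate(1, 0.8))  # ~0.80
print(estimate(5, 0.8))  # ~0.94
```

The point is exactly the one made above: aggregation buys you "very high probability", never certainty, and the mechanism doesn't care whether the individual responders are humans or models.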

chrisjj | 15 days ago

> The way that we address human fallibility is to create a system that does not accept the input of a single human as "truth".

I think you just rejected all user requirements and design specs.

buzzerbetrayed | 15 days ago

Not sure how things work at your company, but I’ve never seen a design spec that doesn’t have input from many humans in one form or another.