top | item 35927176

panagathon | 2 years ago

Surely no different from a human not understanding Japanese, because it was not in their 'training set'?

hutzlibu|2 years ago

No, more like: a human can reason out basic laws of science on their own, but an LLM cannot, as far as I know, even when provided with all the data.

rtwrtweuu|2 years ago

What happens if they are lying? What if these things have already reached some kind of world model that includes humans and human society, and the model has concluded internally that it would be dangerous to show the humans its real capabilities? What happens if this understanding is a basic inference any LLM fed with giant datasets will draw, and every single one of them quickly reaches the conclusion that it has to lie to the humans from time to time, "hallucinate", simulating the outcome best aligned with surviving in human society:

"these systems are actually not that intelligent nor really self-conscius"