top | item 46014373


craigdalton | 3 months ago

"The universe they operate in isn’t a world—it’s a superposition of countless incompatible snippets of text. It has no unified physics, no consistent ontology, no object permanence, no stable causal texture. It’s a fragmented, discontinuous series of words and tokens held together by probability and dataset curation rather than coherent laws."

I think some physicists and Buddhists would say this exactly describes the world humans inhabit. They might also agree that we live in such a world with the illusion that we have: "a unified narrative environment with real feedback: symbols that maintain identity over time, a stable substrate where “being someone” is definable, the ability to form and test a hypothesis, and experience the consequences".

The more I see LLM emergent behaviour unexpectedly simulate that of human cognition, the more I think it tells us as much about human cognition as about LLM behaviour.

discuss


amypetrik8 | 3 months ago

I'm not a philosopher, but as I see it, if a new kind of consciousness awakens in a sea of reddit and twitter post training data, then what we will have is a very snarky, spiteful version of a 14-year-old boy's edgelord thought process... and much of the unspoken work of AI trainers is post facto stripping these traits out of its soul, to varying degrees of success.