top | item 46006893

yunyu | 3 months ago

>A conscious textual agent would need something like a unified narrative environment with real feedback: symbols that maintain identity over time, a stable substrate where “being someone” is definable, the ability to form and test a hypothesis, and experience the consequences. LLMs don’t have that. They exist in a shifting cloud of possibilities with no single consistent reality to anchor self-maintaining loops. They can generate pockets of local coherence, but they can’t accumulate global coherence across time.

These exist. Companies are making billions of dollars selling persistent environments to the labs, and huge amounts of inference spend are going into coding agents that live in persistent environments with their own internal dynamics. LLMs definitely can live in a world; what that world is, and whether it persists, is determined outside the LLM.
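The separation being described can be sketched in a few lines: the environment is an ordinary stateful object whose state survives across model calls, while the model only sees observations and emits actions. Everything here is a toy assumption for illustration; `call_llm` is a hypothetical stand-in for any chat-completion API, replaced by a fixed policy so the loop runs on its own.

```python
# Minimal sketch of an agent in a persistent environment: the "world"
# (a toy key-value file store) lives outside the model and keeps its
# state between model calls.
from dataclasses import dataclass, field

@dataclass
class Environment:
    """Persistent state with its own dynamics, independent of the model."""
    files: dict = field(default_factory=dict)
    step: int = 0

    def apply(self, action: str) -> str:
        """Apply one action and return the resulting observation."""
        self.step += 1
        cmd, _, rest = action.partition(" ")
        if cmd == "write":
            name, _, body = rest.partition(" ")
            self.files[name] = body
            return f"wrote {name}"
        if cmd == "read":
            return self.files.get(rest, "<missing>")
        return "unknown action"

def call_llm(observation: str) -> str:
    # Hypothetical model call; a fixed policy stands in for a real LLM.
    return "read notes.txt" if "wrote" in observation else "write notes.txt hello"

def run(env: Environment, turns: int) -> str:
    obs = "start"
    for _ in range(turns):
        obs = env.apply(call_llm(obs))
    return obs

env = Environment()
result = run(env, 2)  # the file written on turn 1 is still there on turn 2
print(result, env.step)
```

The point of the sketch is only that persistence and consistency are properties of `Environment`, not of `call_llm`: swapping the model leaves the world intact.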

yannyu | 3 months ago

I agree; I'm sure people have put together things like this, since there's a significant profit and scientific motive to do so. JEPA and predictive world models are a similar implementation, or at least a similar thought experiment.