top | item 45703652


ilmenit | 4 months ago

Considering that even simple neural networks are universal approximators, and that most intelligent tasks require predicting the next state(s) from previous states, aren't biological and artificial brains "just" universal approximators of an extremely complex function of the world?
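The universal-approximation claim can be illustrated with a toy sketch (my own example, not from the thread): a single hidden layer of random tanh units, with output weights fit by least squares, already approximates a nonlinear 1-D function well on a bounded interval.

```python
import numpy as np

# Toy illustration of universal approximation: one hidden layer of
# 200 fixed random tanh units, output weights solved by least squares.
# The target function here is arbitrary; any smooth function works.
rng = np.random.default_rng(0)

x = np.linspace(-3, 3, 400).reshape(-1, 1)
y = np.sin(x) + 0.5 * x  # stand-in for a "complex function of the world"

W = rng.normal(size=(1, 200))   # random input-to-hidden weights (kept fixed)
b = rng.normal(size=200)        # random hidden biases
H = np.tanh(x @ W + b)          # hidden activations, shape (400, 200)

# Fit only the output layer; no backprop needed to make the point.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = H @ beta

mse = float(np.mean((pred - y) ** 2))
print(f"in-range MSE: {mse:.6f}")  # small: good fit on the training interval
```

This only shows representational capacity on the interval the network was fit on, which is exactly the gap the replies below poke at.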


sadid | 4 months ago

That’s true in a narrow functional sense, but it misses the role of a world model. Intelligence isn’t just about approximating input-output mappings; it’s about building structured, causal models that let an agent generalize, simulate, and plan. Universal approximation only says you could represent those mappings, not that you can efficiently construct them. Current LLMs seem intelligent because they encode vast amounts of knowledge already produced by biological intelligence. The real question is whether an LLM, on its own, can achieve the same kind of efficient causal and world-model building rather than just learning existing mappings. It can interpolate new intermediate representations within its learned manifold, but it still relies on the knowledge base produced by biological intelligence. As an analogy: it’s more of an interpolator than an extrapolator.
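The interpolator-vs-extrapolator distinction shows up even in the simplest approximators. A toy sketch (my own example, under the assumption that a random-feature network stands in for "a learned manifold"): fit on x in [-3, 3], the model matches the target inside that range but diverges badly on inputs it never saw.

```python
import numpy as np

# Toy illustration: a random-feature tanh network fit on [-3, 3]
# interpolates well in-range but fails to extrapolate, because the
# fit constrains behavior only where training data existed.
rng = np.random.default_rng(1)

def target(x):
    return np.sin(x)

W = rng.normal(size=(1, 200))
b = rng.normal(size=200)

def features(x):
    return np.tanh(x @ W + b)

# Fit output weights by least squares on the training interval.
x_train = np.linspace(-3, 3, 400).reshape(-1, 1)
beta, *_ = np.linalg.lstsq(features(x_train), target(x_train), rcond=None)

x_in = np.linspace(-2, 2, 100).reshape(-1, 1)   # inside training range
x_out = np.linspace(5, 8, 100).reshape(-1, 1)   # outside training range

err_in = float(np.mean((features(x_in) @ beta - target(x_in)) ** 2))
err_out = float(np.mean((features(x_out) @ beta - target(x_out)) ** 2))
print(f"interpolation MSE: {err_in:.6f}, extrapolation MSE: {err_out:.4f}")
```

The out-of-range error is orders of magnitude larger: the tanh units saturate beyond the training interval, so the model has no basis for tracking the target there, which is the "interpolator, not extrapolator" point in miniature.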

smokel | 4 months ago

Perhaps they are, but what does that tell you?

Note that you'd also have to be somewhat more precise about what the "state" and "next state" are. Most likely, the state is everything that enters the brain (i.e. by means of sensing: what we see, hear, feel, introspect, etc.). However, parts of this state enter the brain at various places and at various frequencies, and abstracting all of that away might be problematic.