_acco | 7 months ago
i.e. AGI is a philosophical problem, not a scaling problem.
Though we understand them only a little, we know the default mode network and sleep play key roles. That is likely because they support some universal property of general intelligence. Concepts we don't yet understand, like motivation, curiosity, and qualia, are likely part of the picture too. Evolution is far too efficient for these to be mere side effects.
(And of course LLMs have none of these properties.)
When a human solves a problem, their search space is not random - just as a chess grandmaster does not consider candidate moves at random.
How our brains prune the search space so efficiently while still generating genuine novelty remains a mystery.
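To make the point concrete, here's a minimal sketch (my own toy example, not anything from the comment) contrasting blind search with heuristic-guided search on an empty grid. Uninformed BFS fans out in every direction, while a greedy best-first search steered by a Manhattan-distance heuristic heads almost straight for the goal, expanding a small fraction of the states. The grid setup, function names, and the Manhattan heuristic are all illustrative assumptions.

```python
import heapq
from collections import deque

def neighbors(pos, n):
    # 4-connected moves inside an n x n grid
    x, y = pos
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < n and 0 <= ny < n:
            yield (nx, ny)

def uninformed_search(n, start, goal):
    """BFS: explores blindly, layer by layer, in all directions."""
    seen, frontier, expanded = {start}, deque([start]), 0
    while frontier:
        pos = frontier.popleft()
        expanded += 1
        if pos == goal:
            return expanded
        for nb in neighbors(pos, n):
            if nb not in seen:
                seen.add(nb)
                frontier.append(nb)

def heuristic_search(n, start, goal):
    """Greedy best-first: a Manhattan-distance heuristic steers the search."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    seen, frontier, expanded = {start}, [(h(start), start)], 0
    while frontier:
        _, pos = heapq.heappop(frontier)
        expanded += 1
        if pos == goal:
            return expanded
        for nb in neighbors(pos, n):
            if nb not in seen:
                seen.add(nb)
                heapq.heappush(frontier, (h(nb), nb))

blind = uninformed_search(20, (0, 0), (19, 19))
guided = heuristic_search(20, (0, 0), (19, 19))
print(blind, guided)  # the guided search expands far fewer nodes
```

On a 20x20 grid the blind search touches essentially every cell before reaching the far corner, while the guided one expands roughly one cell per step of the path. The gap only widens with problem size, which is the whole argument: whatever the brain does, it is not enumerating the space.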