GuB-42|5 months ago

The neural networks we use today have really terrible numerical precision, and we tend to make it worse, not better, since having more neurons beats having more precision. Human brains are also a mess, but somehow they work, and we are usually able to correct our own mistakes.

Since by AGI we usually mean human-like, such a system should be able to self-correct the same way we do.
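To make the neurons-vs-precision point concrete, here is a minimal sketch (my own illustration in NumPy, not something from the thread; the weight scale and quantization scheme are assumptions) of symmetric per-tensor int8 quantization. The weights lose only around a percent of relative accuracy at 8 bits, while the same memory budget holds 4x as many of them as float32:

```python
# Hypothetical illustration: symmetric per-tensor int8 quantization of
# normally distributed "weights", showing how little accuracy 8 bits costs.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=100_000).astype(np.float32)  # assumed weight scale

# Map [-max|w|, +max|w|] onto the int8 range [-127, 127].
scale = np.abs(w).max() / 127.0
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dq = w_q.astype(np.float32) * scale  # dequantize for comparison

rel_err = np.linalg.norm(w - w_dq) / np.linalg.norm(w)
print(f"relative quantization error: {rel_err:.4f}")  # typically around 1%
# Same memory budget holds ~4x more int8 weights than float32 ones.
```

Under these assumptions the per-weight error is tiny compared to the spread of the weights themselves, which is one way to read the claim that spending bits on more neurons beats spending them on precision.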
gs17|5 months ago

> Since by AGI we usually mean human-like, such a system should be able to self-correct the same way we do.

I'd presume it could reason around the wrong answer, at least enough to realize something was off. Current LLMs will sometimes hallucinate that this has happened while they're "thinking".

nenenejej|5 months ago