ssnistfajen | 1 year ago
Attempts at autonomous AI agents are still failing spectacularly because the models don't actually have any thought or memory. Context is provided to them by prefixing the prompt with all previous prompts, which obviously causes significant information loss after a few interaction loops. The level of intellectual complexity at play here is on par with nematodes in a lab (which, btw, still can't be digitally emulated after decades of research). This isn't a diss on all the smart people working in AI today, because I'm not talking about the quality of any specific model available today.
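(The context-carrying scheme described above can be sketched roughly like this. All names and the toy tokenizer are illustrative, not any real agent framework's API; the point is just that when the transcript outgrows the window, the oldest turns silently fall off.)

```python
MAX_CONTEXT_TOKENS = 8  # deliberately tiny window so truncation is visible

def count_tokens(text):
    # crude stand-in for a real tokenizer: one whitespace-separated word = one token
    return len(text.split())

def build_prompt(history, new_message, max_tokens=MAX_CONTEXT_TOKENS):
    """Prefix the new message with as much prior transcript as fits,
    dropping the oldest turns first -- this is where information is lost."""
    kept = []
    budget = max_tokens - count_tokens(new_message)
    for turn in reversed(history):      # walk from most recent to oldest
        cost = count_tokens(turn)
        if cost > budget:
            break                        # everything older is discarded
        kept.append(turn)
        budget -= cost
    return "\n".join(list(reversed(kept)) + [new_message])

history = [
    "user: my name is Ada",
    "bot: hi Ada",
    "user: pick a color",
    "bot: blue",
]
prompt = build_prompt(history, "user: what is my name?")
# The turn establishing the user's name no longer fits in the window,
# so the model literally cannot answer the question.
```

After a few loops the facts from early turns are simply gone from the prompt, which is the "info loss" the comment is pointing at.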
atleastoptimal | 1 year ago
LLMs do have memory and thought. I've invented a few somewhat unusual games, described them to Sonnet 3.5, and it reproduced them in code almost perfectly. Likewise, their memory has been scaling: just a couple of years ago context windows were 8,000 tokens maximum; now they're reaching the millions.
I feel like you're approaching all these capabilities with a myopic viewpoint, then playing semantic judo to dismiss these increases as "not counting" because they can be vaguely mapped to something with a negative connotation.
>A lot of people don't even consider the ability to solve problems to be a reliable indicator of intelligence
That's a very bold statement, as lots of smart people have said that the very definition of intelligence is the ability to solve problems. If fear of the effectiveness of LLMs in behaving genuinely intelligently leads you to make extreme sweeping claims about what doesn't count as intelligence, then you're forcing yourself into a smaller and smaller corner as AI SOTA capabilities predictably increase month after month.