bwood | 2 years ago
What is the actual argument for why that’s true?
(I realize you could turn the question around and ask why I think it might be possible in the first place, but my expectations have been blown out of the water so regularly, and increasingly frequently, that I can't default to being a naysayer anymore.)
mianos | 2 years ago
Just increasing the size of pre-trained LLMs is not considered a likely or simple path to AGI by most professionals working on the technical side of the field.
bwood | 2 years ago
However, it did not provide a convincing argument for why LLMs cannot be a component of a "doomer" AI. In fact, I got the opposite impression from Andrej's explanation of expected future developments. The whole section on System 2 thinking sounds like a layer constructed around dumb LLMs that would result in vastly improved and more generalizable intelligence.
I agree that just scaling up LLMs is probably not sufficient for AGI, but that seems like only one relatively minor piece of all the possible ways it might be achieved.
wddkcs | 2 years ago
Besides expert consensus, is there any other actual argument against LLMs achieving generalizability?
cubefox | 2 years ago