bwood|2 years ago

As much as I appreciate Doctorow’s exasperation, I find his dismissal of the doomer vs. accelerationist debate rather glib. I would love to be convinced that “dumb” LLMs can never gain sentience (or be finagled into sentience with a wrapper).

What is the actual argument for why that’s true?

(I realize you could turn the question around and ask why I think it might be possible in the first place, but my expectations have been blown out of the water so regularly, and with such increasing frequency, that I can’t default to being a naysayer anymore.)

mianos|2 years ago

That has been covered quite well in multiple places. Andrej Karpathy, as usual, gives some good reasons in his recent talk: https://www.youtube.com/watch?v=zjkBMFhNj_g

Most professionals working on the technical side of the field do not consider simply scaling up pre-trained LLMs a likely path to AGI.

bwood|2 years ago

Thank you, that was a fascinating talk and I learned quite a bit.

However, it did not provide a convincing argument as to why LLMs cannot be a component of a "doomer" AI. In fact, I got the opposite impression from Andrej's description of expected future developments. The whole section on System 2 thinking sounds like a deliberation layer constructed around dumb LLMs that could yield vastly improved and more generalizable intelligence (a rough sketch of the idea follows below).

I agree that just scaling up LLMs is probably not sufficient for AGI... but scaling seems like only one relatively minor piece among all the possible ways it might be achieved.
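
To make that concrete, here is a minimal sketch of what such a System 2 wrapper might look like. Everything in it is hypothetical: llm() and score() are stand-in stubs (not anything from the talk or any real API), and the loop simply samples several fast "System 1" answers and keeps the one the critique step rates highest.

    import random

    def llm(prompt: str) -> str:
        # Hypothetical stand-in for a single fast "System 1" model call.
        # A real wrapper would call an actual model API; this stub only
        # keeps the sketch self-contained and runnable.
        return f"candidate answer to {prompt!r} #{random.randint(0, 999)}"

    def score(question: str, answer: str) -> float:
        # Hypothetical self-critique step: in a real wrapper this would be
        # a second model call asked to rate the answer; here it is faked
        # with a random number.
        return random.random()

    def system2_answer(question: str, n_candidates: int = 5) -> str:
        # Deliberate "System 2" loop: sample several quick answers,
        # critique each one, and return the highest-rated candidate.
        candidates = [llm(question) for _ in range(n_candidates)]
        return max(candidates, key=lambda a: score(question, a))

    print(system2_answer("Can a wrapper make an LLM more deliberate?"))

Whether a loop like this (or fancier variants with search trees, tool use, and self-correction) actually buys generalizable intelligence is exactly the open question.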

wddkcs|2 years ago

Most professionals didn't think we were close to surpassing human capability in chess, Go, or Dota until after it happened. I've seen little evidence that expert domain knowledge improves AI forecasting ability; if anything, the experts are often late to the party.

Besides expert consensus, is there any other actual argument against LLMs achieving generalizability?

cubefox|2 years ago

I watched the talk and I didn't see him give those reasons.