top | item 41712309

baanist | 1 year ago

What is he wrong about?

krallistic | 1 year ago

"Deep learning is hitting a wall now with just scaling"

"Deep learning is only good for perception" (with language being one of the areas where it's not good)

baanist | 1 year ago

I don't have any subscriptions to the latest models, but what improvements have you noticed in scaling and language understanding? Last I checked, people were still discussing "9.9 > 9.1" and "How many 'r's are in 'strawberry'?".
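For what it's worth, both of the tasks mentioned above are trivial in ordinary code, which is part of why the failures get so much attention:

```python
# Exact answers to the two tasks LLMs were reportedly tripping over:
print(9.9 > 9.1)                 # numeric comparison -> True
print("strawberry".count("r"))   # letter counting    -> 3
```

The models fail not because the tasks are hard, but because they operate on tokens rather than digits or characters.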

hyperG | 1 year ago

I have never read anything by him that seemed correct.

He really does seem like quite a good contrarian indicator if anything.

baanist | 1 year ago

He was right that scaling would not achieve abstract reasoning, and he's been right so far on basically every new hyped development. The closed research labs like OpenAI are hoping to close the gaps in their models by constantly patching in out-of-distribution data sets, but this clearly cannot achieve any sort of general intelligence unless they somehow manage to obsolete themselves and the entire company by automating the out-of-distribution patching itself, which they now perform by burning lots of cash and energy.

There are people who need to write a lot of emails, so for those people I'm sure OpenAI will continue to deliver some kind of value, but everyone else will still have to keep thinking for themselves regardless of what Sam Altman keeps promising.