itkovian_ | 1 year ago
How can anyone think he's arguing in good faith at this point? That essay was published after GPT-3 and prior to GPT-4, and he's claiming it was correct!
sytelus | 1 year ago
While writing this, it occurred to me that he might even get goosebumps reading this comment because, after all, I am giving him attention.
baobabKoodaa | 1 year ago
My impression is the opposite: I would describe Gary Marcus as having all his opinions perfectly aligned to a singular viewpoint at all times, regardless of the weather (or the evidence).
anonylizard | 1 year ago
GPT-3: Useful as autocomplete. Still error-prone, but vastly better than any pre-AI autocomplete.
GPT-4: Already capable of independently coding up simple functions from natural language.
o3-mini: Can code at, say, the top 5% level on Codeforces.
There's a two-year gap between each of them.
Moreover, intelligence has superexponential returns: going from 90 IQ to 100 IQ yields less than going from 100 IQ to 110 IQ.
mquander | 1 year ago
Gary Marcus didn't make a lot of specific criticisms or concrete predictions in his essay [0], but some of his criticisms of GPT-3 were:
- "For all its fluency, GPT-3 can neither integrate information from basic web searches nor reason about the most basic everyday phenomena."
- "Researchers at DeepMind and elsewhere have been trying desperately to patch the toxic language and misinformation problems, but have thus far come up dry."
- "Deep learning on its own continues to struggle even in domains as orderly as arithmetic."
Are these not all dramatically improved, no matter how you measure them, in the past three years?
[0] https://nautil.us/deep-learning-is-hitting-a-wall-238440/