
garymarcus | 9 months ago

If you think AI is “smart” or “PhD level” or that it “has an IQ of 120”, take five minutes to read my latest newsletter (link below), as I challenge ChatGPT to the incredibly demanding task of drawing a map of major port cities with above-average income.

The results aren’t pretty. 0/5, no two maps alike.

“Smart” means understanding abstract concepts and combining them well, not just retrieving and analogizing in shoddy ways.

No way could a system this wonky actually get a PhD in geography. Or economics. Or much of anything else.


rvz|9 months ago

> If you think AI is “smart” or “PhD level” or that it “has an IQ of 120”...

It's not there yet, it's still learning™, but a lot of progress in AI has happened recently; I'll give them that.

However, as you point out in your newsletter, there are also lots of misleading and dubious claims, along with too much hype in hopes of raising VC capital, which comes with the overpromising in AI as well.

One of them is the true meaning of "AGI" (right now it is starting to look like a scam), since there are several conflicting definitions coming directly from those who stand to benefit.

What do you think it truly means given your observations?

enjoylife|9 months ago

"It's still learning" is a misnomer. The model isn't learning—we are. LLMs are static after training; all improvement comes from human iteration in the outer loop: fine-tuning, prompt engineering, tool integration, retrieval. Until the outer loop itself becomes autonomous and self-improving, we're nowhere near AGI. Current hype confuses capability with agency.
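The split between a frozen model and a human-driven outer loop can be sketched in a few lines. Everything here is illustrative: `frozen_model` is a toy stand-in for a static LLM, and the prompt-revision step stands in for the human iteration (prompt engineering, tooling) the comment describes — none of it is a real API.

```python
# Sketch of the "outer loop" idea: the model's weights are frozen, so the
# same prompt always yields the same output. All improvement comes from
# humans revising the prompt around it. Names here are illustrative.

def frozen_model(prompt: str) -> str:
    """Stand-in for a static LLM: deterministic, never updated."""
    # Toy behavior: it only "succeeds" when the prompt spells out the format.
    if "as a markdown table" in prompt:
        return "| city | income |"
    return "Here are some cities..."

def outer_loop(task: str, acceptable, max_iters: int = 3) -> str:
    """Humans iterate on the prompt; the model itself never changes."""
    prompt = task
    output = frozen_model(prompt)
    for _ in range(max_iters):
        if acceptable(output):
            return output
        # Human-driven revision step (prompt engineering), not model learning.
        prompt = task + " Answer as a markdown table."
        output = frozen_model(prompt)
    return output

result = outer_loop(
    "List major port cities with above-average income",
    acceptable=lambda out: out.startswith("|"),
)
```

The model function is pure and unchanging; every bit of "improvement" lives in the loop around it, which is the distinction the comment draws.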

some_random|9 months ago

This is a really surface-level investigation that happens to rely exclusively on the part of the current multimodal model that is genuinely bad at the task presented: generating precise images, graphs, charts, etc. Try asking for tables of data or matplotlib code to generate the same visualizations and it will typically do far better. That said, if you actually use even the latest models day to day, you'll inevitably run into even stupider mistakes/hallucinations than this. But the point you're trying to make is undermined by appearing to have picked up ChatGPT with the exclusive goal of making a Substack post dunking on it.
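For concreteness, this is the kind of script one might ask the model to emit instead of a rendered image: plot cities by coordinates and label them. The city list and coordinates below are placeholder entries for illustration, not verified data about port-city incomes.

```python
# Sketch of "ask for matplotlib code instead of an image":
# scatter-plot some cities by longitude/latitude and label each point.
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display required
import matplotlib.pyplot as plt

# (city, longitude, latitude) -- illustrative entries only
ports = [
    ("Rotterdam", 4.48, 51.92),
    ("Singapore", 103.85, 1.29),
    ("Hamburg", 9.99, 53.55),
]

lons = [lon for _, lon, _ in ports]
lats = [lat for _, _, lat in ports]

fig, ax = plt.subplots()
ax.scatter(lons, lats)
for name, lon, lat in ports:
    ax.annotate(name, (lon, lat))
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
ax.set_title("Major port cities (illustrative)")
fig.savefig("ports.png")
```

A script like this is checkable and re-runnable, which is exactly why text/code outputs tend to fare better than one-shot image generation for precise charts.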

ben_w|9 months ago

I very much appreciate all the ways we're improving our ideas of what "smart" means.

I wouldn't call LLMs "smart" either, but with a different definition than the one you use here: to me, at the moment, "smart" means being able to learn efficiently, with few examples needed to master a new challenge.

This may not be sufficient, but it does avoid any circular arguments about whether any given model has any "understanding" at all.

knowsuchagency|9 months ago

I don't believe ChatGPT has an IQ of 120, and after reading the linked article, I don't think the author does either.

lillecarl|9 months ago

Not arguing with anything, but the (link below) doesn't exist.

jqpabc123|9 months ago

The only thing an LLM does really well is statistical prediction.

As should be expected, sometimes it predicts correctly and sometimes it doesn't.
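The "statistical prediction" point can be made concrete with a toy bigram model: it predicts the next word purely from counted frequencies, so the most common continuation is sometimes right and sometimes not. (This is a deliberately tiny caricature, not how a real LLM is built.)

```python
# Toy bigram "language model": predict the next word from raw counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- the most common word after "the" here
```

Whether "cat" is the right continuation depends entirely on what the prompt actually meant, which is the gap between frequency and understanding the comment is pointing at.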

It's kinda like FSD mode in a Tesla. If you're not willing to bet your life on it (and why would you?), it's really not all that useful.