item 45621948

hax0ron3 | 4 months ago

If the transcript is accurate, Karpathy does not actually ever, in this interview, say that AGI is a decade away, or make any concrete claims about how far away AGI is. Patel's title is misleading.

dang | 4 months ago

Hmm good point. I skimmed the transcript looking for an accurate, representative quote that we could use in the title above. I couldn't exactly find one (within HN's 80 char limit), so I cobbled together "It will take a decade to get agents to work", which is at least closer to what Karpathy actually said.

If anyone can suggest a more accurate and representative title, we can change it again.

Edit: I thought of using "For now, autocomplete is my sweet spot", which has the advantage of being an exact quote; but it's probably not clear enough.

Edit 2: I changed it to "It will take a decade to work through the issues with agents" because that's closer to the transcript.

Anybody have a better idea? Help the cause of accuracy out here!

hax0ron3 | 4 months ago

To be fair to the OP of the thread, he's just using Patel's title word for word. It's Patel who is being inaccurate.

tim333 | 4 months ago

He says, regarding agents:

>They don't have enough intelligence, they're not multimodal enough, they can't do computer use and all this stuff. They don't do a lot of the things you've alluded to earlier. They don't have continual learning. You can't just tell them something and they'll remember it. They're cognitively lacking and it's just not working.

>It will take about a decade to work through all of those issues. (2:20)

hax0ron3 | 4 months ago

Him saying that it will take a decade to work through agents' issues isn't the same as him saying that there will be AGI in a decade, though.

bamboozled | 4 months ago

Couldn't even be bothered to watch ~2 minutes of the interview before commenting.

Sateeshm | 4 months ago

And the decade timeframe is mostly based on his intuition and previous experience with such leaps.

whiplash451 | 4 months ago

Patel did the same with Sutton ("LLMs are a dead end") when Sutton never said this in the conversation.

jobs_throwaway | 4 months ago

He didn't say those words exactly, but he did say:

"The scalable method is you learn from experience. You try things, you see what works. No one has to tell you. First of all, you have a goal. Without a goal, there’s no sense of right or wrong or better or worse. Large language models are trying to get by without having a goal or a sense of better or worse. That’s just exactly starting in the wrong place."

and a bunch of similar things implying LLMs have no hope of reaching AGI.

nextworddev | 4 months ago

There's a lot of salt here.

dang | 4 months ago

> Hey, podcast bro needs to get clicks

Please don't cross into personal attack. It's not what this site is for, and destroys what it is for.

Edit: please don't edit comments to change their meaning once someone has replied. It's unfair to repliers whose comments no longer make sense, and it's unfair to readers who can no longer understand the thread. It's fine, of course, to add to an existing comment in such a case, e.g. by saying "Edit:" or some such and then adding what else you want to say.