(no title)
dwohnitmok | 16 days ago
Amodei does not mean that things are plateauing (i.e. that the exponential will no longer hold), but rather uses "end" closer to the notion of "endgame": that is, we are getting to the point where all benchmarks pegged to human ability will be saturated and AI systems will be better than any human at any cognitive task.
Amodei lays this out here:
> [with regards to] the “country of geniuses in a data center”. My picture for that, if you made me guess, is one to two years, maybe one to three years. It’s really hard to tell. I have a strong view—99%, 95%—that all this will happen in 10 years. I think that’s just a super safe bet. I have a hunch—this is more like a 50/50 thing—that it’s going to be more like one to two [years], maybe more like one to three.
This is why Amodei opens with:
> What has been the most surprising thing is the lack of public recognition of how close we are to the end of the exponential. To me, it is absolutely wild that you have people — within the bubble and outside the bubble — talking about the same tired, old hot-button political issues, when we are near the end of the exponential.
Whether you agree with him is of course a different matter altogether, but a clearer phrasing would probably be "We are near the endgame."
rramadass | 15 days ago
It is a 2+ hour video, so a summary of the main themes is welcome.
adrian_b | 15 days ago
Nothing that I have seen described here on HN or elsewhere, by the most enthusiastic users of AI who claim that their own productivity has been multiplied, demonstrates performance in cognitive tasks even remotely comparable with that of a competent human, much less better performance.
All that I see is that the AI systems outperform humans at various tasks only because their training data gave them access to much more information than most humans are allowed to access: most humans do not have enough money to obtain such access, both because of the various copyright paywalls and because of the actual cost of storage and retrieval systems.
Using an AI agent may be faster than running conventional search tools over the same training data yourself, but the speed may be illusory. When I search for something and have access to the original sources, I can validate the results faster and with much more certainty than when I must ponder the correctness of what the AI has provided, e.g. whether a program it produced really does what I requested and is bug-free (compared with having access to the programs it was trained on and choosing myself what to copy and paste).
I hope that paid access to AI tools gives better results, but the AI replies that popular search engines like Google and Bing force upon their users have made Internet searches much worse, not better: their answers always contain something other than what I want, and that is in the best case, when the answers are not plainly wrong.
menaerus | 14 days ago
You should get yourself a paid subscription. Honest advice. The difference between an agentic workflow and single-shot questions in free-tier services is night and day. Building context and letting the model access your code is the largest differentiator between "wtf, I don't need this" and "wtf".
> All that I see is that the AI systems outperform humans for various tasks only because they had access in their training data to much more information than most humans are allowed to access, because they do not have enough money to obtain such access
Humans cannot even theoretically read and consume the volume of data the models can, so it's not really about the money - it's more about the infinite amount of time humans would need and the extremely large cognitive load it would impose on them. How many people can even synthesize so many diverse topics at a high and constant pace? None, or very few.
Also, models are proven to generalize very well so having access to your codebase during the training phase is not necessary for them to provide you with the correct answers. Give it a try.
Models being able to generalize very well is one of the ways AI labs think they may reach the "AI systems will be better than any human at any cognitive task" goal. I am not convinced that this will be the only sauce needed, but I am also not too skeptical about it, given the speed at which AI capabilities have unfolded, especially during 2025.
I think we have already reached the point where it's safe to say that "AI systems are better than many humans at most cognitive tasks". I can see it myself on the project I am currently working on. These are not top-tier developers. And when I talk to the top-tier ones I have previously worked with, we share a similar sentiment. The only difference might be that "AI systems are much faster than many humans at most cognitive tasks".
generallyjosh | 14 days ago
I give it a question (narrow or really broad), and the model does a bunch of web searches using subagents to try to get a comprehensive answer from current results.
The important part is that when the model answers, I have it cite its sources using direct links, so I can directly confirm the accuracy and quality of any info it finds.
It's been super helpful. I can give it super broad questions like "Here's the architecture and environment details I'm planning for a new project. Can you see if there are any known issues with this setup?" Then it'll give me direct links plus summaries for any relevant pages.
Saves a ton of time manually searching through the haystack, and so far, the latest models are pretty good about not missing important things (and catching plenty of things I missed)
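The fan-out pattern described above can be sketched roughly like this. Everything here is hypothetical: the function names, the stubbed subagent, and the URLs are illustrative only, not any real API - a real version would call an LLM and a web-search service inside each subagent.

```python
# Hypothetical sketch: split a broad question into narrower sub-queries,
# run one search "subagent" per sub-query concurrently, and merge the
# findings together with a source link per claim so the user can verify
# each answer against its origin.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Finding:
    summary: str   # short answer produced by one subagent
    source: str    # direct link, so the claim can be checked by hand

def search_subagent(sub_query: str) -> Finding:
    # Stub: a real subagent would run web searches and summarize results.
    return Finding(
        summary=f"summary for: {sub_query}",
        source=f"https://example.com/search?q={sub_query.replace(' ', '+')}",
    )

def research(question: str, sub_queries: list[str]) -> list[Finding]:
    # Fan the sub-queries out to concurrent subagents, then collect findings.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(search_subagent, sub_queries))

findings = research(
    "Any known issues with this architecture?",
    ["postgres connection pooling limits", "nginx websocket timeouts"],
)
for f in findings:
    print(f"- {f.summary} ({f.source})")
```

The key design point is the one the comment makes: every finding carries its own source link, so verification cost stays low even when the model did the searching.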