BinaryIgor | 1 month ago
There are lots of technologies that have been 99% done for decades; it might be the same here.
Philpax|1 month ago
> My co-founders at Anthropic and I were among the first to document and track the “scaling laws” of AI systems—the observation that as we add more compute and training tasks, AI systems get predictably better at essentially every cognitive skill we are able to measure. Every few months, public sentiment either becomes convinced that AI is “hitting a wall” or becomes excited about some new breakthrough that will “fundamentally change the game,” but the truth is that behind the volatility and public speculation, there has been a smooth, unyielding increase in AI’s cognitive capabilities.
> We are now at the point where AI models are beginning to make progress in solving unsolved mathematical problems, and are good enough at coding that some of the strongest engineers I’ve ever met are now handing over almost all their coding to AI. Three years ago, AI struggled with elementary school arithmetic problems and was barely capable of writing a single line of code. Similar rates of improvement are occurring across biological science, finance, physics, and a variety of agentic tasks. If the exponential continues—which is not certain, but now has a decade-long track record supporting it—then it cannot possibly be more than a few years before AI is better than humans at essentially everything.
> In fact, that picture probably underestimates the likely rate of progress. Because AI is now writing much of the code at Anthropic, it is already substantially accelerating the rate of our progress in building the next generation of AI systems. This feedback loop is gathering steam month by month, and may be only 1–2 years away from a point where the current generation of AI autonomously builds the next. This loop has already started, and will accelerate rapidly in the coming months and years. Watching the last 5 years of progress from within Anthropic, and looking at how even the next few months of models are shaping up, I can feel the pace of progress, and the clock ticking down.
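The "scaling laws" referenced in the quote are usually expressed as a power law: loss falls smoothly and predictably as training compute grows. A minimal illustrative sketch (the constants `a` and `b` are made up for illustration, not fit to any real model family):

```python
# Hypothetical power-law scaling: loss(C) = a * C^(-b)
# a and b here are arbitrary; real values are fit empirically.
a, b = 10.0, 0.05

def loss(compute: float) -> float:
    """Predicted loss for a given training-compute budget."""
    return a * compute ** (-b)

# Each 10x increase in compute yields the same multiplicative
# drop in loss (a factor of 10**(-b) ~ 0.89) -- a smooth trend
# rather than discrete "breakthroughs".
for c in [1e20, 1e21, 1e22, 1e23]:
    print(f"compute={c:.0e}  loss={loss(c):.3f}")
```

The point of the power-law form is exactly the one the quote makes: progress looks smooth on the right axes, even when public perception swings between "wall" and "breakthrough".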
torginus|1 month ago
It's quite likely they train on CC output too.
Yeah, there's synthetic data as well, but how you generate that data is a very good question, and one that many people have lost a lot of sleep over.
minimaltom|1 month ago
What convinces me is this: I live in SF and have friends at various top labs, and even ignoring architecture improvements, the common theme is that any time researchers have spent time improving understanding of some specific part of a domain (whether via SFT or RL or whatever), it's always worked. Not superhuman, but measurable, repeatable improvements. In the words of Sutskever, "these models.. they just wanna learn".
Inb4 all natural trends are sigmoidal or whatever, but so far the trend is roughly linear, and we haven't seen a trace of a plateau.
There's the common argument that "Ghipiti 3 vs 4 was a much bigger step change", but it's not if you consider the progression from much earlier, i.e. BERT and such; then it looks fairly linear /w a side of noise (fries).
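The sigmoid caveat the commenter preempts can be made concrete: near its midpoint, a logistic curve is locally almost linear, so a linear-looking trend by itself can't distinguish "still on the exponential" from "approaching a plateau". A quick numerical check (illustrative only):

```python
import math

def logistic(x: float) -> float:
    """Standard logistic (sigmoid) curve."""
    return 1.0 / (1.0 + math.exp(-x))

def tangent(x: float) -> float:
    """Tangent line to the logistic at x = 0: slope 1/4, intercept 1/2."""
    return 0.5 + 0.25 * x

# Maximum deviation between the sigmoid and its tangent line
# over the window [-1, 1] around the midpoint.
max_gap = max(abs(logistic(x / 10) - tangent(x / 10)) for x in range(-10, 11))
print(f"max deviation from linear on [-1, 1]: {max_gap:.4f}")
```

The gap stays under 0.02 on that window, i.e. within about 2% of the curve's full range, which is easily lost in measurement noise. That doesn't argue for either side; it just shows why "the trend still looks linear" is weak evidence against an eventual plateau.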
ctoth|1 month ago
Bicycles? carbon fiber frames, electronic shifting, tubeless tires, disc brakes, aerodynamic research
Screwdrivers? impact drivers, torque-limiting mechanisms, ergonomic handles
Glass? gorilla glass, smart glass, low-e coatings
Tires? run-flats, self-sealing, noise reduction
Hell even social technologies improve!
How is a technology "done?"
nancyminusone|1 month ago
A can opener from 100 years ago will open today's cans just fine. Yes, enthusiasts still make improvements; you can design ones that open cans easier, or ones that are cheaper to make (especially if you're in the business of making can openers).
But the main function (opening cans) has not changed.
basch|1 month ago
What used to require specialized integration can now be accomplished by a generalized agent.
storystarling|1 month ago