insignificntape | 6 months ago
Look at Claude Code. Unless they hacked into private GitHub/GitLab repos... (which, honestly, I wouldn't put past these tech CEOs; see what Cloudflare recently found out about Perplexity, for example) — but unless they really did that, they trained Claude 4 on approximately the same data as Claude 3. Yet for some reason its agentic coding skills are stupidly better than in previous iterations.
Data no longer seems to be the bottleneck. Which is understandable: at the end of the day, data is just a way to get the AI to make a prediction and run gradient descent on the result. If you can generate, for example, a bunch of unit tests, you can let the AI freewheel its way into getting them to pass. A kid learns to catch a baseball not by seeing a million examples of people catching balls, but by testing their skills in the real world and gathering feedback on whether each attempt to catch the ball succeeded. If an AI can try to achieve goals and assess whether its actions led to success or failure, who needs more data?
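To make the "unit tests as feedback" idea concrete, here's a toy sketch — not how any lab actually trains a model. The "model" is just random search over a few hand-written candidates, and the test harness (`unit_tests`, `search`) is entirely hypothetical; in real RL training, the pass/fail signal would drive a policy update instead of a retry loop.

```python
import random

def unit_tests(fn):
    """Hypothetical test suite: does fn behave like abs()?
    Pass/fail here is the 'verifiable reward' — no labeled data needed."""
    cases = [(-3, 3), (0, 0), (5, 5)]
    return all(fn(x) == y for x, y in cases)

# Stand-in for a model's sampled outputs: a pool of candidate programs.
CANDIDATES = [
    lambda x: x,                     # wrong
    lambda x: -x,                    # wrong
    lambda x: x if x >= 0 else -x,   # correct
]

def search(seed=0, max_attempts=100):
    """Keep sampling candidates until the tests pass — the AI
    'freewheels its way' to a solution, scored only by the tests."""
    rng = random.Random(seed)
    for attempt in range(max_attempts):
        fn = rng.choice(CANDIDATES)
        reward = 1.0 if unit_tests(fn) else 0.0  # environment feedback
        if reward == 1.0:
            return attempt, fn
    return None, None

attempt, solution = search()
```

The point of the sketch: the only supervision is the environment's verdict (tests pass or they don't), which is the baseball-catching feedback loop rather than a pile of labeled examples.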