Yes, that's exactly what I'm implying; otherwise they would have done it a long time ago, given that the fundamental transformer architecture hasn't changed since 2017. This bubble is like watching first-year CS students trying to brute-force homework problems.
whynotminot|1 year ago
Now they just have to make it cheap.
Tell me, what has this industry been good at since its birth? Driving down the cost of compute and making things more efficient.
Are you seriously going to assume that won’t happen here?
YeGoblynQueenne|1 year ago
Like they've been making it all this time? Cheaper and cheaper? Less data, less compute, fewer parameters, but the same or improved performance? That's not what we can observe.
>> Tell me, what has this industry been good at since its birth? Driving down the cost of compute and making things more efficient.
No. Actually, the cheaper compute gets, the more of it they need to use, or their progress stalls.
Jensson|1 year ago
> What they’ve proven here is that it can be done.
No, they haven't; these results do not generalize, as mentioned in the article:
"Furthermore, early data points suggest that the upcoming ARC-AGI-2 benchmark will still pose a significant challenge to o3, potentially reducing its score to under 30% even at high compute"
Meaning they haven't solved AGI; the tasks themselves do not represent programming well, and these models do not perform that well on engineering benchmarks.
peepeepoopoo97|1 year ago
O3 is multiple orders of magnitude more expensive for only a marginal performance gain. You could hire 50 full-time PhDs for the cost of using O3. You're witnessing the blowoff top of the scaling hype bubble.
MVissers|1 year ago
This type of compute will be cheaper than Claude 3.5 within 2 years.
It's kinda nuts. Give these models tools to navigate and build on the internet and they'll be building companies and selling services.
fspeech|1 year ago
That's a very static view of affairs. Once you have a master AI, at a minimum you can use it to train cheaper, slightly less capable AIs. At the other end, the master AI can train itself to become even smarter.
Bolwin|1 year ago
The high-efficiency version got 75% at just $20/task. When you count the time it takes to fill in the squares, that doesn't sound far off from what a skilled human would charge.
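A rough back-of-the-envelope check of that comparison; the minutes-per-task and hourly-rate figures below are illustrative assumptions, not numbers from the thread or the ARC report:

    # Rough cost comparison: o3 high-efficiency vs. a skilled human on one ARC task.
    # Both inputs below are assumptions for illustration only.
    minutes_per_task = 12          # assumed time for a skilled human to solve one task
    hourly_rate_usd = 90.0         # assumed billing rate for a skilled human
    human_cost = hourly_rate_usd * minutes_per_task / 60   # -> 18.00
    o3_cost = 20.0                 # reported high-efficiency cost per task
    print(f"human: ${human_cost:.2f} per task, o3: ${o3_cost:.2f} per task")

Under those assumed figures a human comes out around $18/task, which is roughly in line with the $20/task quoted for the high-efficiency run.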