top | item 42476926

why_only_15 | 1 year ago

how is this a plateau since gpt-4? this is significantly better

csomar|1 year ago

First, this model is yet to be released. This is a momentum "announcement". When o1 was announced, it was billed as a "breakthrough", but I use Claude and o1 daily and 80% of the time Claude beats it. I also see it as a highly fine-tuned/targeted GPT-4 rather than something with complex understanding.

So we'll find out whether this model is real within 2-3 months. My guess is that it'll turn out to be another flop like o1. They needed to release something big because they are momentum-based and their ability to raise funding is contingent on their AGI claims.

XenophileJKO|1 year ago

I thought o1 was a fine-tune of GPT-4o. I don't think o3 is, though; it's likely using the same techniques on what would have been the "GPT-5" base model.

crazylogger|1 year ago

Intelligence has not been LLMs' major limiting factor since GPT-4. The original GPT-4 reports in 2023 already established that it's well beyond an average human in professional fields: https://www.microsoft.com/en-us/research/publication/sparks-.... They have failed to outright replace humans at work, but not for lack of intelligence.

We may have progressed from a 99%-accurate chatbot to one that's 99.9%-accurate, and you'd have a hard time telling them apart in normal, real-world (dumb) applications. A paradigm shift is needed from the current chatbot interface to a long-lived stream-of-consciousness model (e.g. a brain that constantly reads input and produces thoughts at a 10ms refresh rate, remembers events for years while keeping the context window from exploding, and is paired with a cerebellum driving robot motors at even higher refresh rates).

As long as we're stuck with chatbots, LLMs' impact on the real world will be very limited, regardless of how intelligent they become.
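The long-lived loop described above, in contrast to a one-shot chatbot exchange, could be sketched roughly like this (all class and method names here are hypothetical stand-ins, not any real API):

```python
import collections

class StreamAgent:
    """Hypothetical sketch of a long-lived agent: a fixed-rate
    perceive/think loop with a bounded short-term context window and a
    separate long-term store, rather than a request/response chatbot."""

    def __init__(self, context_limit=8, tick_ms=10):
        self.context = collections.deque(maxlen=context_limit)  # bounded window
        self.long_term = []        # events remembered beyond the window
        self.tick_ms = tick_ms     # target refresh rate of the think loop

    def summarize(self, event):
        # Stand-in for compressing old context into long-term memory.
        return f"summary:{event}"

    def step(self, observation):
        """One tick: archive what would fall out of the window, then think."""
        if len(self.context) == self.context.maxlen:
            # Oldest event is about to be evicted: archive a summary of it.
            self.long_term.append(self.summarize(self.context[0]))
        self.context.append(observation)
        # Stand-in for the model producing a thought on each tick.
        return f"thought about {observation}"
```

The point of the sketch is only the shape: the context stays bounded while a summarization step keeps older events retrievable, instead of the conversation history growing without limit.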

peepeepoopoo97|1 year ago

o3 is multiple orders of magnitude more expensive for a marginal performance gain. You could hire 50 full-time PhDs for the cost of using o3. You're witnessing the blow-off top of the scaling hype bubble.

whynotminot|1 year ago

What they’ve proven here is that it can be done.

Now they just have to make it cheap.

Tell me, what has this industry been good at since its birth? Driving down the cost of compute and making things more efficient.

Are you seriously going to assume that won’t happen here?

MVissers|1 year ago

I would agree if the cost of AI compute per unit of performance hadn't been dropping by 90-99% per year since GPT-3 launched.

This type of compute will be cheaper than Claude 3.5 within 2 years.
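As a quick check on the compounding that claim implies (the starting cost of 100 is arbitrary, just for illustration):

```python
def cost_after(initial_cost, annual_drop, years):
    """Cost remaining after compounding an annual fractional price drop."""
    return initial_cost * (1 - annual_drop) ** years

# At a 90% annual drop, cost falls to ~1% of today's within two years;
# at a 99% annual drop, to ~0.01%.
print(cost_after(100.0, 0.90, 2))
print(cost_after(100.0, 0.99, 2))
```

So even at the conservative end of the quoted range, two years of that trend cuts cost by two orders of magnitude.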

It's kinda nuts. Give these models tools to navigate and build on the internet and they'll be building companies and selling services.

fspeech|1 year ago

That's a very static view of affairs. Once you have a master AI, at a minimum you can use it to train cheaper, slightly less capable AIs. At the other end, the master AI can train itself to become even smarter.

Bolwin|1 year ago

The high-efficiency version got 75% at just $20/task. When you count the time it takes to fill in the squares, that doesn't sound far off from what a skilled human would charge.
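A rough version of that comparison (the human solve time here is an assumption for illustration; only the $20/task figure is from the comment):

```python
# Convert o3's per-task price into an implied hourly rate, to compare
# against what a skilled human would charge.
price_per_task = 20.0      # dollars per task, from the comment
minutes_per_task = 30.0    # hypothetical human solve time, assumed

implied_hourly = price_per_task * (60.0 / minutes_per_task)
print(implied_hourly)  # 40.0 dollars/hour under these assumptions
```

Under that assumed solve time, $20/task is in the same ballpark as a modest professional hourly rate, which is the comparison the comment is gesturing at.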

kenjackson|1 year ago

People act as if GPT-4 came out 10 years ago.

Jensson|1 year ago

> how is this a plateau since gpt-4? this is significantly better

Significantly better at what? A benchmark? That isn't necessarily progress. Many report preferring GPT-4 to the newer o1 models and their hidden reasoning text. The hidden reasoning makes the model more reliable, but more reliable is bad if it is reliably wrong at something, since then you can't ask it over and over until you find what you want.

I don't feel it is significantly smarter; it is more like the same dumb person spending more time thinking, rather than the model getting smarter.