top | item 42913417


DigitalSea | 1 year ago

Not sure if people picked up on it, but this is being powered by the unreleased o3 model. Which might explain why it leaps ahead in benchmarks considerably, and aligns with the claims that o3 is too expensive to release publicly. Seems to be quite an impressive model, and the leading one among the offerings from Google, DeepSeek and Perplexity.


lordofgibbons | 1 year ago

> Which might explain why it leaps ahead in benchmarks considerably and aligns with the claims o3 is too expensive to release publicly

It's the only tool/system (I won't call it an LLM) in their released benchmarks that has access to tools and the web. So, I'd wager the performance gains are strictly due to that.

If an LLM (o3) is too expensive to be released to the public, why would you use it in a tool that has to make hundreds of inference calls to it to answer a single question? You'd use a much cheaper model. Most likely o3-mini or o1-mini, combined with 4o-mini for some tasks.

famouswaffles | 1 year ago

>why would you use it in a tool that has to make hundreds of inference calls to it to answer a single question? You'd use a much cheaper model.

The same reason a lot of people switched to GPT-4 when it came out even though it was much more expensive than 3: it doesn't matter how cheap a model is if it's much worse, or simply not good enough.

xbmcuser | 1 year ago

It was expensive because they wanted to charge more for it, but DeepSeek has forced their hand.

willy_k | 1 year ago

They’ve only released o3-mini, which is a powerful model but not the full o3 that is being claimed as too expensive to release. That being said, DeepSeek for sure forced their hand to release o3-mini to the public.

Sparkyte | 1 year ago

Rightfully so, some models are getting super efficient.

bbor | 1 year ago

Interesting, thanks for highlighting! Did not pick up on that. Re:"leading", tho:

Effectiveness in this task environment depends on far more than the specific model involved, no? Plus they'd be fools (IMHO) to use only one size of model for every step in a research task -- sure, o3 might be an advantage when synthesizing a final answer or choosing between conflicting sources, but there are many, many steps required to get to that point.

xendipity | 1 year ago

I don't believe we have any indication that the big offerings (claude.ai, Gemini, operator, tasks, canvas, chatgpt) use multiple models in one call (other than for different modalities like having Gemini create an image). It seems to actually be very difficult technically and I'm curious as to why.

I wonder how much of an impact it has that we're still so early in the productization phase of all this. It takes a ton of work, training, and coordination to get multiple models synced up into one offering, and I think the companies are still optimizing for getting new ideas out there rather than truly optimizing them.

mistercheph | 1 year ago

I'm sure o3 will be a generation ahead of whatever deepseek, google and meta are doing today when it launches in 10 months, super impressive stuff.

petesergeant | 1 year ago

I’m not sure if you’re implying this subtly in your comment or not, as it’s early here, but it does of course need to be a generation ahead of what its competitors will have built after 10 months of moving forward, too. Nobody is standing still.

bitshiftfaced | 1 year ago

> but this is being powered by the unreleased o3 model

What makes you believe that?

_bin_ | 1 year ago

They explicitly stated it in the launch.

ai-christianson | 1 year ago

Has anyone here tried it out yet?

nycdatasci | 1 year ago

Pro user. No access like everyone else.

OpenAI is very much in an existential crisis, and their poor execution is not helping their cause. Operator or “deep research” should be able to assume the role of a Pro user, run a quick test, and reliably report on whether this is working before the press release, right?