item 42890681


pookieinc | 1 year ago

Can't wait to try this. What's amazing to me is that when this was revealed just one short month ago, the AI landscape looked very different than it does today, with more AI companies jumping into the fray with very compelling models. I wonder how that shift has affected this release internally, their future releases, and their mindset moving forward: how does it change the efficiency and scope of their models?


patrickhogan1|1 year ago

I thought it was o3 that was released one month ago and received high scores on ARC Prize - https://arcprize.org/blog/oai-o3-pub-breakthrough

If they were the same, I would have expected explicit references to o3 in the system card and how o3-mini is distilled or built from o3 - https://cdn.openai.com/o3-mini-system-card.pdf - but there are no references.

Excited at the pace all the same. Excited to dig in. The model naming all around is so confusing. Very difficult to tell what breakthrough innovations occurred.

nycdatasci|1 year ago

Yeah - the naming is confusing. We're seeing o3-mini now; o3 yields marginally better performance given exponentially more compute. Unlike OpenAI, customers will not have the option to throw an endless amount of money at specific tasks/prompts.

echelon|1 year ago

There's no moat, and they have to work even harder.

Competition is good.

lesuorac|1 year ago

I really don't think this is true. OpenAI has no moat because they have nothing unique; they're mostly using other people's architectures (like Transformers) and other companies' hardware.

Their value prop (moat) is that they've burnt more money than everybody else. That moat is trivially circumvented by lighting a larger pile of money, and less trivially by lighting the pile more efficiently.

OpenAI isn't the only company without one. The tech companies being massively outspent by Microsoft on H100 purchases are the ones with a moat. Google and Amazon, with their custom AI chips, are going to have better performance per cost than everyone else, and that will be a moat. If you want to match that performance per cost, you need to spend the time making your own chips, which is years of effort (= moat).

lumost|1 year ago

Capex was the theoretical moat, same as for TSMC and similar businesses. DeepSeek poked a hole in this theory. OpenAI will need to deliver massive improvements to justify a $1 billion training cost relative to $5 million.

dutchbookmaker|1 year ago

It is still curious, though: what is actually being automated?

I find huge value in these models as an augmentation of my intelligence and as a kind of cybernetic partner.

I can't think of anything that can actually be automated though in terms of white collar jobs.

The white-collar test case I have in mind is a bank analyst working under a bank operations manager. I have done both jobs in the past, but something is really lacking in the idea of the operations manager replacing the analyst with a reasoning model, even though right now DeepSeek annihilates the reasoning of every bank analyst I ever worked with.

If you can't even arbitrage the average bank analyst, there might be these really non-intuitive no-AI-arbitrage conditions in white-collar work.

wahnfrieden|1 year ago

Collaboration is even better, per open source results.

It is the closed competition model that’s being left in the dust.