> The way work gets done has changed, and enterprises are starting to feel it in big ways.
Why do they say all of this fluff when everyone knows it’s not exactly true yet? It just makes me cynical about the rest.
When can we say we have enough AI, even for enterprise? I would guess that for the majority of power users you could stop now and people would generally be okay with it, maybe pushing further into medical research or other things that are actually important.
For Sam Altman and microslop, though, it seems to be a numbers game: just get everyone in and own everything. It doesn’t even feel like it’s about AGI anymore.
For classic engineering it's been a boon, in a pretty similar vein to the gains mathematicians have been making with AI.
These models can pretty reliably bang out what were once long mathematical derivations for hypothetical systems in incredibly short periods of time. They also make second- and third-order approximations way easier. What was a first-order approach that would take a day is now a second-order approach taking an hour.
And to top it off, they're also pretty damn competent in at least pointing you in the right direction (if nothing else) for getting information about adjacent areas you need to understand.
I've been doing an electro-optical project recently as an electronics guy, and LLMs have been enormously useful in helping with the optics portion (on top of the math speed-up on the electronics side).
It's still "trust, but verify" for sure, but damn, it's powerful.
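To make the order-of-approximation point concrete, here's a toy sketch (my own illustration, not from any real project): going from a first-order to a second-order Taylor expansion of exp(x) cuts the approximation error substantially, which is the kind of gain moving up an order buys you.

```python
import math

# Toy example: approximating exp(x) near 0 at two different orders.

def first_order(x):
    return 1 + x                 # 1st-order Taylor expansion of exp

def second_order(x):
    return 1 + x + x**2 / 2      # 2nd-order adds the x^2/2 term

x = 0.5
exact = math.exp(x)
print(abs(first_order(x) - exact))   # error ~0.149
print(abs(second_order(x) - exact))  # error ~0.024
```

The error drops by roughly a factor of six at x = 0.5; the same pattern holds for the higher-order system approximations described above, just with far more algebra that an LLM can grind through quickly.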
I disagree with your sentiment and genuinely think something big is coming. It doesn't need to be perfect now, but it could be good enough to disrupt SaaS market.
> say all of this fluff when everyone knows it’s not exactly true yet
How do you know it's not exactly true? I'm already seeing enterprise employees rely heavily on LLMs instead of using other SaaS vendors.
* Want to draft an email and fix your grammar -> LLMs -> Grammarly is dying
* Want to design something -> Lovable -> No need to wait for a designer or to get access to Figma; let the designer design and present, and for anything else use Lovable or an alternative
* Want to code -> obviously LLMs -> I sometimes feel like JetBrains is probably in code red at the moment, because I barely open it anymore (saying this as a heavy user in the past)
To keep this message short, I'll share my vision in a reply.
> Why do they say all of this fluff when everyone knows it’s not exactly true yet.
There isn’t an incentive not to lie when people will read a lie, understand it to be a lie, and then characterize the lie as “not true yet”. Like if your audience has already invented a term to excuse your lies before you start talking, you categorically do not need to tell the truth.
If people judged OpenAI/Sam Altman’s statements under the premise that they are either true or untrue and that there’s no third thing I imagine that we wouldn’t hear as much about OpenAI.
To be frank, I don’t think your worldview is directionally accurate. OpenAI is certainly trying to sell something, but with every incremental update to these models, more avenues of value generation are unlocked. For sure it’s not what the talking heads in the industry hyped it up to be, but there are a lot of interesting ways to use these tools, and it’s not just for generating slop.
I am already tired of the disaster that is social media. Hilariously, we’ve gotten to the point that multiple countries are banning social media for under 18s.
The costs of AI slop are going to be paid by everyone, social media will ironically become far less useful, and the degree of fraud we will see will be… well, cyber fraud is already terrifying; what’s the value of infinity added to infinity?
I would say that tech firms are definitely running around setting society on fire at this point.
God, they built all of this on absurd amounts of piracy, and while I am happy to dance on the grave of the MPAA and RIAA, the farming of content from people who have no desire to be harvested is absurd. I believe Wikipedia has already started seeing a drop in traffic, which will lead to a reduction in donations. Smaller sites are going to have an even worse time.
> At a major semiconductor manufacturer, agents reduced chip optimization work from six weeks to one day.
I call BS right there. If you could actually do that, you’d spin up a “chip optimization” consultancy and pocket the massive efficiency gain, not sell model access at a couple of bucks per million tokens.
There should be a massive “caveats and terms apply” on that quote.
So far the AI productivity gains have been all bark and no bite. I’ll believe it when I see faster product development, higher quality, or lower prices (which indeed happened with other technological breakthroughs, whether the printing press or the loom). If anything, software quality is going down, suggesting we aren’t there yet.
I'm willing to bet "chip optimization work" doesn't mean "the work required to optimize a chip" but "some work tasks performed as part of chip optimization". Basically, they sped up some unknown subset of the work from six weeks to one day, which could be big or could be negligible.
I have a hard time believing that the right move for most organizations not already bought into an OpenAI enterprise plan is to build their entire business around something like this. It ties you to one model provider that has been having issues keeping up with the other big labs, and it provides what superficially look like some extremely useful tools, but with unclear amounts of rigor. I don't think I would want to build my business on this if I were an AI-native company starting right now, unless they figure out how to make it much more legible and transparent.
This is a crowded solution space with participation from cloud, SaaS, and data infrastructure vendors. All of these players and their customers have been trying to operationalize LLMs in enterprise workflows for 2+ years. Two big challenges are business ontology and fitting probabilistic tools into processes requiring deterministic outcomes. Overcoming these problems requires significant systems integration and process engineering work. What does OpenAI have that makes them specifically capable of solving these problems over Azure, Databricks, Snowflake, etc., who have all been working on them for quite a while? I don't think the press release really addresses any of this, which makes it seem more like marketing copy than anything else.
The question of lock-in is also a major one. Why tether your workflow automation platform to your LLM vendor when that may just be a component of the platform, especially when the pace of change in LLMs specifically is so rapid in almost every conceivable way. I think you'd far rather have an LLM-vendor neutral control plane and disaggregate the lock-in risk somewhat.
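A vendor-neutral control plane can be sketched roughly like this (a minimal illustration of the idea; the class and method names are hypothetical, not any real SDK):

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Vendor-neutral interface: workflow code depends on this,
    never on a specific vendor's SDK."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class StubProvider(LLMProvider):
    # Stand-in for a real adapter wrapping OpenAI, Anthropic,
    # or a self-hosted model behind the same interface.
    def complete(self, prompt: str) -> str:
        return f"[stub reply to] {prompt}"

def run_workflow(provider: LLMProvider, task: str) -> str:
    # Swapping LLM vendors means swapping the provider object; the
    # workflow layer itself never changes.
    return provider.complete(f"Summarize: {task}")

print(run_workflow(StubProvider(), "quarterly report"))
```

With this shape, the lock-in risk lives in a thin adapter per vendor rather than in every workflow, which is what "disaggregating the lock-in risk" would look like in practice.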
As someone who would be in a position to advise enterprises on whether to adopt Frontier, there is simply not enough information for me to follow the "Contact Sales" CTA.
We need technical details, example workflows, case studies, social proof and documentation. Especially when it's so trivial to roll your own agent.
I’m imagining this is like AI-native Slack, which would be a super useful thing. But I’m with you, who knows? I had a CEO sign up; I’m curious to see one of my companies try it out.
> "75% of enterprise workers say AI helped them do tasks they couldn’t do before."
> "At OpenAI alone, something new ships roughly every three days, and that pace is getting faster."
- We're seeing all these productivity improvements, and it seems as though devs/"workers" are being forced to output so much more. Are they now being paid proportionally for this output? Enterprise workers now have to move at the pace of their agents and essentially manage 3-4 workers at all times (we've seen this in dev work). Where are the salary bumps to reflect this?
- Why do AI companies struggle to make their products visually distinct? OpenAI Frontier looks exactly the same as the OpenAI Codex app, which looks exactly the same as GPT
- OpenAI going for the agent management market share (Dust, n8n, crewai)
> Why do AI companies struggle to make their products visually distinct? OpenAI Frontier looks the exact same as OpenAI Codex App which looks the exact same as GPT
Because that requires human thought, and it might take a couple of weeks more to design and develop. Doing something fast is the mantra, not doing something good.
> "At OpenAI alone, something new ships roughly every three days, and that pace is getting faster."
This is a weird flex. Organizations have long strived to ship multiple times per day, it’s even one of the main business metrics for “high” performance orgs in DORA.
The fact that the premier “AI” company is barely able to deliver at a rate that is considered “high” instead of “medium” (the line is at shipping once per week) tells me that even at OpenAI writing the code is not the bottleneck.
Organizational inefficiency is as usual the real culprit.
I imagine the salary bumps occur when the individuals who have developed these productivity boosting skills apply for jobs at other companies, and either get those jobs or use the offer to negotiate a pay increase with their current employer.
I doubt that there is much risk at all. For one, this is probably a minor aspect of the business for these AIs. The AIs I've seen deployed for real work so far do best in side business offerings that can tolerate a high false-positive/false-negative rate, but that also aren't price-sensitive enough that building a fully automated pipeline classically is worth it or possible.
I think that’s sort of right. Said differently, and the way I process these tools: you have mundane tasks that a human does where there are clear guidelines on acceptance, and the underlying tools have clear APIs for automation. You can use natural language to automate such a task without a full-blown engineer in the loop. To non-business engineers this sounds silly, but it can save business users a lot of time.
There are many ways to interpret it. What’s your interpretation?
It is also interesting to contrast calling them by name vs. the other example, “a major semiconductor company”, not called by name. Though of course, there are also different reasonable ways to interpret that.
This is OpenAI taking the concept of AI coworkers seriously down to the level of “identity” for these agents.
This reminded me of Kairos, which came up a few days ago (https://www.kairos.computer/). However, I actually feel much better about, and more inspired by, the angle OpenAI took than the angle Kairos took. OpenAI’s genuinely feels like a platform for a coworker, while Kairos is yet another cool landing page, yet another agent platform with X data integrations. The use cases in OpenAI’s article also felt more concrete and impressive, to be honest.
The claim that “as agents have gotten more capable, the opportunity gap between what models can do and what teams can actually deploy has grown” is definitely true. An analogy whose source I have forgotten observed that we have F1 cars driving at 60 km/h: a lot of enterprises are not even at the deployment limit where improving benchmarks matters. They are still at the level of not being able to provide the right info, not having the right evaluation and improvement frameworks, etc.
Using “Opening the AI Frontier” as a heading would have been in really poor taste before OpenAI released their OSS models (having earned their ClosedAI moniker), but I guess it’s a bit less offensive now. I think this product combined with OpenAI FDEs is going to make a lot of large industries inaccessible to startups, but there may still be value in companies like Kairos watching what OpenAI does in this space and copying them.
> “Partnering with OpenAI helps us give thousands of State Farm agents and employees better tools to serve our customers. By pairing OpenAI’s Frontier platform and deployment expertise with our people, we’re accelerating our AI capabilities and finding new ways to help millions plan ahead, protect what matters most, and recover faster when the unexpected happens.”
— Joe Park, Executive Vice President and Chief Digital Information Officer at State Farm
Ok how about you tell us one thing this shit is actually doing instead of vague nonsense.
> This is happening for AI leaders across every industry, and the pressure to catch up is increasing.
> Enterprises are feeling the pressure to figure this out now, because the gap between early leaders and everyone else is growing fast.
> The question now isn’t whether AI will change how work gets done, but how quickly your organization can turn agents into a real advantage.
FOMO at its finest. "Quick, before you're left behind for good this time!"
The idea itself is sensible. It's the kind of AI application I've been pitching to companies, though without going all in on agents. But I think it would be foolish for any CEO to build this on top of OpenAI instead of a self-hosted model trained for them. You're just externalizing your internal knowledge this way.
If your employee does (with intent/malice) something very egregious, you can always fire and sue them for the damage done. Out of curiosity, what will the option be if some AI agent does the same?
The animations look nice, but why does OpenAI want to be the substrate for intelligence? It's at a disadvantage there vs competitors with strong domain experience.
Why didn't your VC friend drop some seed money on you back then, if the stealth startup was doing 25MM ARR? They probably could've gotten a better deal with you!
Well, even working as an AI engineer is no longer secure. It may soon be the case that all humans work for bots created by others. Is that the universal salary we are talking about?
Great, some more bullshit our founders are going to force onto the company while they never use it, ignore everyone’s feedback that it doesn’t work, and expect everything to be done twice as fast now
Another day, another blog post about managing agents. It's for pretend companies who think they are doing something worthwhile if they run 4,000 agents at once.
It's not going to "trigger" mass layoffs; it'll be used as a convenient scapegoat for mass layoffs that were always going to happen anyway to make room for more stock buybacks. Business as usual. Same shit, different hat.
Waterluvian|24 days ago
I think two things:
1. Not everyone knows.
2. As we've seen at a national scale: if you just lie lie lie enough it starts being treated like the truth.
throwaw12|24 days ago
Why stop, though? Google didn't say Altavista and Yahoo were good enough for the majority of power users, so let's not build something better.
When you have something good in hand and you see other possibilities, would you say: let's stop, this is enough?
moritzwarhier|23 days ago
Because it's easy to paraphrase in a myriad of ways without having any real information.
A renowned scientist coined the term "bullshitting" for it, I think.
intended|24 days ago
I’m good for now.
pier25|24 days ago
They're desperate?
vessenes|24 days ago
Increased efficiency benefits capital, not labor; it's always good to remember which side you prefer to be on.
boppo1|24 days ago
Revenue bumps and ROI bumps both have to come first. IIRC, there's a struggle with the first one.
throwaw12|24 days ago
"Let me increase salaries 2x for all my employees, because productivity is 4x'ed now," said no capitalist ever.
Insanity|24 days ago
OpenAI might burn through all their money, and end up dropping support for these features and/or being sold off for parts altogether.
simianwords|24 days ago
In our company we have a list of long tail "workflows" or "processes" that really just involves reading a document and filling a form.
For example, how do I even get access to a new DB? Or a new AWS account?
Can this tool help us create an agent that can automate this with some reasonable accuracy?
I see OpenAI Frontier as a quick way to automate these long-tail processes.
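The "read a document, fill a form" step can be sketched minimally like this (the field names and the regex extractor are my own hypothetical stand-in; a real agent would use an LLM plus a human review step before anything is provisioned):

```python
import re

def extract_request(document: str) -> dict:
    """Toy extractor for an access-request document.
    Pulls out labeled fields so they can be dropped into a form."""
    fields = {}
    for key in ("requester", "database"):
        # Match lines like "Requester: alice" (case-insensitive)
        m = re.search(rf"{key}:\s*(\S+)", document, re.IGNORECASE)
        if m:
            fields[key] = m.group(1)
    return fields

doc = "Requester: alice\nDatabase: analytics"
print(extract_request(doc))  # {'requester': 'alice', 'database': 'analytics'}
```

Even this toy version shows why the "reasonable accuracy" caveat matters: the extraction step is the part you'd hand to an agent, while the approval itself stays with a human.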
chairhairair|24 days ago
Downside: your employees’ agents decide that they should collectively bargain.
mobiuscog|24 days ago
Because for many of us, AI is "not approved until legal say so".
1899-12-30|24 days ago
Weird amounts of overlap between the two.
ImPostingOnHN|24 days ago
You can of course read it yourself, interpret it, verify it is correct, and post in your own words.