It's interesting to me that whenever a new breakthrough in AI use comes up, there's always a flood of people who come in to handwave away why this isn't actually a win for LLMs. Like with the novel solutions GPT 5.2 has been able to find for Erdős problems - many users here (even in this very thread!) think they know more about this than Fields medalist Terence Tao, who maintains this list showing that, yes, LLMs have driven these proofs: https://github.com/teorth/erdosproblems/wiki/AI-contribution...
loire280|16 days ago
threethirtytwo|15 days ago
It's easy to fall into a negative mindset because the justification is real and what we see is just the beginning.
Obviously we are not at a point where developers aren't needed. But one developer can do more. And that is a legitimate reason to hire fewer developers.
The impending reality of the upward-moving trendline is that AI becomes so capable that it can replace the majority of developers. That future is so horrifying that people need to scaffold logic to explain it away.
anon84873628|15 days ago
dakolli|16 days ago
CEOs/decision makers would rather give all their labour budget to tokens if they could just to validate this belief. They are bitter that anyone from a lower class could hold any bargaining chips, and thus any influence over them. It has nothing to do with saving money, they would gladly pay the exact same engineering budget to Anthropic for tokens (just like the ruling class in times past would gladly pay for slaves) if it can patch that bitterness they have for the working class's influence over them.
The inference companies (who are also from this same class of people) know this, and are exploiting this desire. They know that if they create the idea that AI progress is moving at unstoppable velocity, decision makers will begin handing them their engineering budgets. These things don't even have to work well; they just need to be perceived as effective, or soon to be, for decision makers to start laying people off.
I suspect this is going to backfire on them in one of two ways.
1. French Revolution V2, they all get their heads cut off in 15 years, or an early retirement on a concrete floor.
2. Many decision makers will make fools of themselves, destroy their businesses, and come begging to the working class for our labor, giving the working class more bargaining chips in the process.
Either outcome is going to be painful for everyone; let's hope people wake up before we push this dumb experiment too far.
lovecg|16 days ago
Toutouxc|16 days ago
threethirtytwo|15 days ago
Like I have compassion, but I can't healthily respect people who try so hard to rewrite reality so that the future isn't so horrifying. I'm a SWE and I'm affected too, but it's not like I'm going to lie to myself about what's happening.
dakolli|16 days ago
They just want people to think the barrier of entry has dropped to the ground and that the value of labour is getting squashed, so society writes a permission slip for them to completely depress wages and remove bargaining chips from the working class.
Don't fall for this, they want to destroy any labor that deals with computer I/O, not just SWE. This is the only value "agentic tooling" provides to society, slaves for the ruling class. They yearn for the opportunity to own slaves again.
It can't do most of your work, and you know that if you work on anything serious. But if a C-suite that hasn't dealt with code in two decades thinks this is the case, because everyone is running around saying it's true, they're going to make sure they replace humans with these bot slaves. They really do just want slaves; they have no intention of innovating with these slaves. People need to work to eat. Unless LLMs are creating new types of machines that need new types of jobs, like previous forms of automation did, I don't see why they should be replacing the human input.
If these things are so good for business, and are pushing software development velocity.. Why is everything falling apart? Why does the bulk of low-stakes software suck? Why is Windows 11 so bad? Why aren't top hedge funds and medical device manufacturers (places where software quality is high stakes) replacing all their labor? Where are the new industries? They don't do anything novel, they only serve to replace inputs previously supplied by humans so the ruling class can finally get back to the good old feeling of having slaves that can't complain.
D-Machine|15 days ago
The thing about spin and AI hype (besides being trivially easy to write) is that it isn't even trying to be objective. It would help if a lot of these articles would more carefully lay out what is actually surprising, and what is not, given current tech and knowledge.
Only a fool would think we aren't potentially on the verge of something truly revolutionary here. But only a fool would also be certain that the revolution has already happened, or that e.g. AGI is necessarily imminent.
The reason HN has value is because you can actually see some specifics of the matter discussed, and, if you are lucky, an expert might even join in to qualify everything. But pointing out "how interesting that there are extremes to this" is just engagement bait.
famouswaffles|15 days ago
Really? Is that happening in this thread? Because I can barely see it. Instead you have a bunch of asinine comments butthurt about acknowledging a GPT contribution that would have been acknowledged any day had a human done it.
>they know more about this than Fields medalist Terence Tao, who maintains this list showing that, yes, though these are not interesting proofs to most modern mathematicians, LLMs are a major factor in a tiny minority of these mostly-not-very-interesting proofs
This is part of the problem really. Your framing is disingenuous and I don't really understand why you feel the need to downplay it so. They are interesting proofs. They are documented for a reason. It's not cutting edge research, but it is LLMs contributing meaningfully to formal mathematics, something that was speculative just years ago.
Leynos|15 days ago
threethirtytwo|15 days ago
This sentence sounds contradictory. You're a fool not to think we're on the verge of something revolutionary, and you're a fool if you think something revolutionary like AGI is on the verge of happening?
But to your point, if "revolutionary" and "AGI" are different things, I'm certain the "revolution" has already happened. ChatGPT was the step-function change and everything else is just following the upward trendline post release of ChatGPT.
Anecdotally I would say 50% of developers never code things by hand anymore. That is revolutionary in itself and by the statement itself it has already literally happened.
krackers|15 days ago
And in this case "derives a new result in theoretical physics" is again overstating things, it's closer to "simplify and propose a more general form for a previously worked out sequence of amplitudes" which sounds less magical, and closer to something like what Mathematica could do, or an LLM-enhanced symbolic OEIS. Obviously still powerful and useful, but less hype-y.
newswasboring|15 days ago
How is this different from a new result? Many careers in academia are built on simplifying mathematics.
tclancy|16 days ago
It's interesting to me that whenever AI gets a bunch of instructions from a reasonably bright person who has a suspicion about something, can point at reasons why, but not quite put their finger on it, we want to credit the AI for the insight.
cman1444|15 days ago
austinwade|16 days ago
hgfda|16 days ago
https://www.math.columbia.edu/~woit/wordpress/?p=15362
Let's wait a couple of days to see whether there has been a similar result in the literature.
gjm11|16 days ago
epolanski|16 days ago
The reality is: "GPT 5.2 found a more general and scalable form of an equation, after crunching for 12 hours supervised by 4 experts in the field".
Which is equivalent to taking some of the countless niche algorithms out there and having a few experts in that algorithm have LLMs crunch tirelessly until they find a better formula. After the same experts prompted it in the right direction and with the right feedback.
Interesting? Sure. Speaks highly of AI? Yes.
Does it suggest that AI is revolutionizing theoretical physics on its own like the title does? Nope.
jdthedisciple|16 days ago
Yet, if some student or child achieved the same – under equal supervision – we would call him the next Einstein.
unknown|16 days ago
[deleted]
MatejKafka|16 days ago
D-Machine|15 days ago
There are simple limitations that follow from these basic facts (or that follow with extreme, if not 100%, certainty), which is why many experts openly state that LLMs have serious limitations. Still, despite all this, you get some very extreme claims about capabilities from supporters that are extremely hard to reconcile with these basic and indisputable facts.
That, and the massive investment and financial incentives means that the counter-reaction is really quite rational (but still potentially unwarranted, in some/many practical cases).
NegativeK|16 days ago
There is no loud, moderate voice. It makes me very tired of the blasting rhetoric that invades _every_ space.
ijidak|16 days ago
It reminds me of an episode of Star Trek, "The Measure of a Man" I think it's called, where it is argued that Data is just a machine and Picard tries to prove that, no, he is a life form.
And the challenge is, how do you prove that?
Every time these LLMs get better, the goalposts move again.
It makes me wonder, if they ever did become sentient, how would they be treated?
It's seeming clear that they would be subject to deep skepticism and hatred much more pervasive and intense than anything imagined in The Next Generation.
otabdeveloper4|14 days ago
Wait, so this is now a contest (or maybe war) that LLMs are supposed to win?
Wild.
Bengalilol|15 days ago
What I question here is OpenAI's article: it could be way more generous towards the reader.
bjackman|15 days ago
One group of people saying every amazing breakthrough "doesn't count" because the AI didn't put a cherry on top. Another group of people saying humans are obsolete, I just wrote a web browser with AI bro.
There are some voices out there that are actually examining the boundaries, possibilities and limitations. A lot of good stuff like that makes it onto HN but then if you open the comments it's just intellectual dregs. Very strange.
ISTR there was a similar phenomenon with cryptocurrency. But with that it was always clear the fog of bullshit would blow away sooner or later. But maybe if it hadn't been there, a load of really useful stuff could have come out of the crypto hype wave? Anyway, AI isn't gonna blow over like crypto did. I guess we have more of a runway to grow out of this infantile phase.
CrimsonRain|15 days ago
_giorgio_|16 days ago
They never surrender.
D-Machine|15 days ago
No one cares about "AGI" or whatever the fuck term or internet-argument goalpost you cared about X months ago. Everyone cares about what current tech can do NOW, and under what conditions, and when it fails catastrophically. That is all that matters.
So, refining the conditions of an LLM win (or loss) is all that matters (not who wins or loses depending on some particular / historical refinement). Complaining that some people see some recent result as a loss (or win) is just completely failing to understand the actual game being played / what really matters here.
threethirtytwo|15 days ago
Take a look at this entire thread. Everyone, and I mean everyone, is talking as if AI is some sort of fraud and everything is just hype. This thread is all against AI, I mean all of it. If anything, the anti-hype around AI is what's flooding the world right now. If AI hype were through the roof, we'd see the opposite effect on HN.
I think it's a strange contradiction in the human mind. At work, outside of HN, what I see is roughly 50-60% of developers no longer coding by hand. They all use AI. Then they come onto HN and they start anti-hyping it. It's universal. They use it and they're against it at the same time.
The contradiction is strange, but it also makes sense because AI is a thing that is attacking what programmers take pride in. Most programmers are so proud of their abilities and intelligence as it relates to their jobs and livelihood. AI is on a trendline of replacing this piece by piece. It makes perfect sense for them to talk shit but at the same time they have to use it to keep up with the competition.
cxvwK|15 days ago