The headline may make it seem like AI just discovered some new result in physics all on its own, but reading the post, humans started off trying to solve some problem, it got complex, and GPT simplified it and found a solution with the simpler representation. It took GPT Pro 12 hours to do this. In my experience LLMs can make new things when they are some linear combination of existing things, but I haven’t been able to get them to do something totally out of distribution from first principles yet.
CGMthrowaway|16 days ago
Humans have worked out the amplitudes for integer n up to n = 6 by hand, obtaining very complicated expressions, which correspond to a “Feynman diagram expansion” whose complexity grows superexponentially in n. But no one had been able to greatly reduce the complexity of these expressions and provide much simpler forms. And from these base cases, no one was able to spot a pattern and posit a formula valid for all n. GPT did that.
Basically, they used GPT to refactor a formula and then generalize it for all n. Then verified it themselves.
I think this was all already figured out in 1986 though: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.56... see also https://en.wikipedia.org/wiki/MHV_amplitudes
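(For reference, the Parke–Taylor formula from that 1986 paper gives the color-ordered MHV tree amplitude, written schematically here and dropping coupling and momentum-conservation factors, as

    A_n(1^+, \dots, i^-, \dots, j^-, \dots, n^+) =
        \frac{\langle i\,j \rangle^4}{\langle 1\,2 \rangle \langle 2\,3 \rangle \cdots \langle n\,1 \rangle}

where the ⟨ab⟩ are spinor-helicity brackets: a single closed form valid for all n.)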
godelski|16 days ago
I think this is a prime example of how easy it is to think something is solved when looking at it from a high level, reaching an erroneous conclusion due to lack of domain expertise. Classic "Reviewer 2" move. Though I'm not a domain expert either, so if there really is no novelty over Parke and Taylor, I'm pretty sure this will get thrashed in review.
btown|16 days ago
This result, by itself, does not generalize to open-ended problems, though, whether in business or in research in general. Discovering the specification to build is often the majority of the battle. LLMs aren't bad at this, per se, but they're nowhere near as reliably groundbreaking as they are on verifiable problems.
lupsasca|16 days ago
woeirua|16 days ago
helterskelter|16 days ago
Slightly OT, but wasn't this supposed to be largely solved with amplituhedrons?
ericmay|16 days ago
nine_k|16 days ago
torginus|16 days ago
randomtoast|16 days ago
Can humans actually do that? Sometimes it appears as if we have made a completely new discovery. However, if you look more closely, you will find that many events and developments led up to this breakthrough, and that it is actually an improvement on something that already existed. We are always building on the shoulders of giants.
davorak|16 days ago
From my reading yes, but I think I am likely reading the statement differently than you are.
> from first principles
Doing things from first principles is a known strategy, as are guess-and-check, brute-force search, and so on.
For an LLM to follow a first-principles strategy, I would expect it to take in a body of research, come up with some first principles (or guess at them), then iteratively construct a tower of reasoning/findings/experiments.
Constructing a solid tower is where things are currently improving for existing models, in my mind, but when I try the OpenAI or Anthropic chat interfaces, neither does a good job for long, not independently at least.
Humans also often have a hard time with this; in general it is not a skill that everyone has, and I think you can be a successful scientist without ever heavily developing first-principles problem solving.
samrus|16 days ago
These have been identified as various things: eureka moments, strokes of genius, out-of-the-box thinking, lateral thinking.
LLMs have not been shown to be capable of this. They might be in the future, but they haven't been yet.
dotancohen|16 days ago
You could nitpick a rebuttal, but no matter how many people you give credit to, general relativity was a completely novel idea when it was proposed. I'd argue the same for special relativity.
CooCooCaCha|16 days ago
The process you’re describing is humans extending our collective distribution through a series of smaller steps. That’s what the “shoulders of giants” means. The result is we are able to do things further and further outside the initial distribution.
So it depends on if you’re comparing individual steps or just the starting/ending distributions.
tjr|16 days ago
utopiah|16 days ago
So there are actually two different regimes for how to proceed. Both are useful, but arguably breaking away from the current paradigm is much harder and thus rarer.
D-Machine|16 days ago
There are genuine creative insights that come from connecting two known semantic spaces in a way that wasn't obvious before (e.g., a novel isomorphism). It is very conceivable that LLMs could make this kind of connection, but we haven't really seen a dramatic form of this yet. This kind of connection can lead to deep, non-trivial insights, but whether or not it is "out-of-distribution" is harder to answer in this case.
tshaddox|16 days ago
godelski|16 days ago
Seriously, think about it for a second...
If that were true then science should have accelerated a lot faster. Science would have happened differently, and researchers would have optimized for ingesting as many papers as they could.
Dig deep into things and you'll find that there are often leaps of faith that need to be made. Guesses, hunches, and outright conjectures. Remember, there are paradigm shifts that happen. There are plenty of things in physics (including classical) that cannot be determined from observation alone. Or more accurately, cannot be differentiated from alternative hypotheses through observation alone.
I think the problem is that when teaching science we generally teach it very linearly, as if things follow easily. But in reality there are constant iterative improvements that look more like a plateau, and then there are these leaps. They happen for a variety of reasons, but no paradigm shift would be contentious if it were obvious and clearly in distribution. It would always be met with the same response that typical iterative improvements are met with: "well that's obvious, is this even novel enough to be published? Everybody already knew this" (hell, look at the response to the top comment and my reply... that's classic "Reviewer #2" behavior). If everything were always in distribution, progress would be nearly frictionless.

Again, on how we teach the history of science: we make an error in teaching things like Galileo as if the Church was the only opposition. There were many scientists who objected, and on reasonable grounds. It is also a problem we continually make in how we view the world. If you stick with "it works" you'll end up with a geocentric model rather than a heliocentric one. It is true that the geocentric model had limits, but so did the original heliocentric model, and that's the reason it took time to be adopted.
By viewing things at too high of a level we often fool ourselves. While I'm criticizing how we teach, I'll also admit it is a tough thing to balance. It is difficult to get into nuance, and in teaching we must be time-effective and cover a lot of material. But I think it is important to teach the history of science so that people better understand how it actually evolves and how discoveries were actually made. Without that it is hard to learn how to actually do those things yourself, and this is a frequent problem faced by many who enter PhD programs (and beyond).
And it still is. You can still lean on others while presenting things that are highly novel. These are not in disagreement. It's probably worth reading The Unreasonable Effectiveness of Mathematics in the Natural Sciences. It might seem obvious now, but read carefully. If you truly think it is obvious that you can sit in a room armed with only pen and paper and make accurate predictions about the world, you have fooled yourself. You have not questioned why this is true. You have not questioned when this actually became true. You have not questioned how this could be true.
https://www.hep.upenn.edu/~johnda/Papers/wignerUnreasonableE...
stouset|16 days ago
Five years ago we were at Stage 1 with LLMs with regard to knowledge work. A few years later we hit Stage 2. We are currently somewhere between Stage 2 and Stage 3 for an extremely high percentage of knowledge work. Stage 4 will come, and I would wager it's sooner rather than later.
MITSardine|15 days ago
In chess, there's a clear goal: beat the game according to this set of unambiguous rules.
In science, the goals are much more diffuse, and setting those in the first place is what makes a scientist more or less successful, not so much technical ability. It's a very hierarchical field where permanent researchers direct staff (postdocs, research scientists/engineers), who in turn direct grad students. And it's at the bottom of the pyramid that technical ability is most relevant/rewarded.
Research is very much a social game, and I think replacing it with something run by LLMs (or other automatic process) is much more than a technical challenge.
bluecalm|16 days ago
TGower|16 days ago
guluarte|16 days ago
empath75|16 days ago
bpodgursky|16 days ago
We're talking about significant contributions to theoretical physics. You can nitpick, but honestly, go back to your expectations 4 years ago and ask: would I be pretty surprised and impressed if an AI could do this? The answer is obviously yes; I don't really care whether you have a selective memory of that time.
RandomLensman|16 days ago
unknown|16 days ago
[deleted]
outlace|16 days ago
nozzlegear|16 days ago
Whoever wrote the prompts and guided ChatGPT made significant contributions to theoretical physics. ChatGPT is just a tool they used to get there. I'm sure AI-bloviators and pelican bike-enjoyers are all quite impressed, but the humans should be getting the research credit for using their tools correctly. Let's not pretend the calculator doing its job as a calculator at the behest of the researcher is actually a researcher as well.
emil-lp|16 days ago
Probably not something that the average GI Joe would be able to prompt their way to...
I am skeptical until they show the chat log leading up to the conjecture and proof.
Sharlin|16 days ago
jmalicki|16 days ago
Is this so different?
sejje|15 days ago
LLMs surpassed the average human a long time ago IMO. When LLMs fail to measure up to humans, it's that they fail to measure up against human experts in a given field, not the Average Joe.
We are surrounded by NPCs.
hgfda|16 days ago
[deleted]
famouswaffles|16 days ago
slibhb|16 days ago
What's the distinction between "first principles" and "existing things"?
I'm sympathetic to the idea that LLMs can't produce path-breaking results, but I think that's true only for a strict definition of path-breaking (one that is quite rare for humans too).
hellisad|16 days ago
I can claim some knowledge of physics from my degree. Typically the easy part is coming up with complex, dirty equations that work under special conditions; the hard part is the simplification into something elegant, 'natural', and general.
Also, "LLMs can make new things when they are some linear combination of existing things"
doesn't really mean much. To talk about a linear combination of things, you first have to define precisely what a 'thing' is.
epolanski|16 days ago
lovecg|16 days ago
javier123454321|16 days ago
8note|16 days ago
Over long periods of time, checklists are the biggest thing, so the LLM can track what's already done and what's left. After a compact, it can pull the relevant stuff back up and make progress.
Having some level of hierarchy is also useful: requirements, high-level designs, low-level designs, etc.
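For example, a checklist file the agent keeps updated might look something like this (file name and items purely illustrative):

    # plan.md
    - [x] gather requirements
    - [x] high-level design
    - [ ] low-level design: auth module
    - [ ] implementation
    - [ ] tests passing

After each compaction, the agent re-reads this file and picks up the next unchecked item.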
anon291|16 days ago
tedd4u|16 days ago
int_19h|16 days ago
The real question is, what does it cost OpenAI? I'm pretty sure both their plans are well below cost, at least for users who max them out (and if you pay $200 for something, you'll probably do that!). How long before the money runs out? Can they get it cheap enough to be profitable at this price level, or is this going to be a "get them addicted, then jack it up" kind of strategy?
sathish316|16 days ago
Agree with this. I’ve been trying to make LLMs come up with creative and unique word games like Wordle and Uncrossy (uncrossy.com), but so far GPT-5.2 has been disappointing. Comparatively, Opus 4.5 has been doing better on this.
But it’s good to know that it’s breaking new ground in Theoretical Physics!
FranklinJabar|16 days ago
MITSardine|15 days ago
acchow|16 days ago
It seems to me that all “new ideas” are basically linear combinations of existing things, with exceedingly rare exceptions…
Maybe Godel’s Incompleteness?
Darwinian evolution?
General Relativity?
Buddhist non-duality?
unknown|16 days ago
[deleted]
malshe|16 days ago
zaphirplane|16 days ago
arjie|15 days ago
DeathArrow|16 days ago
Aren't most new things linear combinations of existing things (up to a point)?
waynesonfire|16 days ago
Thanks for the summary, but this is a huge hand-wave. Was GPT Pro just spinning for 12 hours before it returned 42?!
Sparkyte|15 days ago
slibhb|15 days ago
But it's worth thinking more about this. What gives humans the ability to discover "new things"? I would say it's due to our interaction with the universe via our senses, not some special powers intrinsic to our brains that LLMs lack. And the thing is, we can feed novel measurements to LLMs (or, eventually, hook them up to camera feeds to "give them senses").
ctoth|16 days ago
[0]: https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-g...
bottlepalm|16 days ago
outlace|16 days ago
But I’ve successfully had it build me a great Poker training app, in a specific form that also didn’t exist, though the ingredients are well represented on the internet.
And I’m not trying to imply AI is inherently incapable, it’s just an empirical (and anecdotal) observation for me. Maybe tomorrow it’ll figure it out. I have no dogmatic ideology on the matter.
fpgaminer|16 days ago
If all ideas are recombinations of old ideas, where did the first ideas come from? And wouldn't the complexity of ideas be thus limited to the combined complexity of the "seed" ideas?
I think it's more fair to say that recombining ideas is an efficient way to quickly explore a very complex, hyperdimensional space. In some cases that's enough to land on new, useful ideas, but not always. A) The new, useful idea might be _near_ the area you land on, but not exactly at it. B) There are whole classes of new, useful ideas that cannot be reached by any combination of existing "idea vectors".
Therefore there is still the necessity to explore the space manually, even if you're using these idea vectors to give you starting points to explore from.
All this to say: Every new thing is a combination of existing things + sweat and tears.
The question everyone has is whether current LLMs are capable of the latter component. Historically the answer has been _no_, because they had no real capacity to iterate. Without iteration you cannot explore. But now that they can reliably iterate, and to some extent plan their iterations, we are starting to see their first meaningful, fledgling attempts at the "sweat and tears" part of building new ideas.
D-Machine|16 days ago
There are in fact ways to directly quantify this, if you are training e.g. a self-supervised anomaly-detection model.
Even with modern models not trained in that manner, looking at e.g. cosine distances of embeddings of "novel" outputs could conceivably provide objective evidence for "out-of-distribution" results. Generally, the embeddings of out-of-distribution outputs will have a large cosine (or even Euclidean) distance from the typical embedding(s). It's just that most "out-of-distribution" outputs will be nonsense/junk, so searching for weird outputs isn't really helpful in general if your goal is useful creativity.
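A minimal sketch of such a check, assuming embeddings are already available as NumPy arrays (the threshold is an illustrative assumption you would calibrate on held-out data):

    import numpy as np

    def cosine_distance(a, b):
        # 1 - cosine similarity; larger means further from "typical"
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def looks_out_of_distribution(candidate_emb, typical_embs, threshold=0.5):
        # Compare a candidate output's embedding to the centroid of
        # embeddings of known in-distribution outputs.
        centroid = typical_embs.mean(axis=0)
        return cosine_distance(candidate_emb, centroid) > threshold

And per the caveat above, a large distance only tells you an output is atypical, not that it is useful.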
amelius|16 days ago
bamboozled|15 days ago
mirsadm|15 days ago
getnormality|16 days ago
[deleted]
verdverm|16 days ago
[deleted]
buttered_toast|16 days ago