The Quanta write-up is a bit more neutral on this announcement. There is a computational result that was not included in the theoretical value the measurement is benchmarked against. Once reviewed, this difference may yet sink back into oblivion.
To clarify, for those not familiar with this topic: this experiment is making measurements at such exquisite precision that even the calculations for the theoretical prediction are extremely non-trivial, and require careful estimation of many, many pieces which are then combined. Which is to say that debugging the theoretical prediction is (almost) as hard as debugging the experiment. So I would expect the particle physics community to be extremely circumspect while the details get ironed out.
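To give a feel for what "combining many pieces" means: the Standard Model value is a sum of separately computed contributions whose independent uncertainties combine in quadrature. A toy sketch - the numbers below are rough magnitudes from memory, illustrative rather than authoritative:

```python
import math

# Rough magnitudes of the main contributions to the SM prediction for the
# muon anomaly a_mu, in units of 1e-11 (illustrative, not authoritative):
contributions = {
    "QED":                     (116584719.0, 0.1),
    "electroweak":             (154.0, 1.0),
    "hadronic vac. pol.":      (6845.0, 40.0),
    "hadronic light-by-light": (92.0, 18.0),
}

total = sum(value for value, _ in contributions.values())
# independent uncertainties add in quadrature:
error = math.sqrt(sum(err ** 2 for _, err in contributions.values()))
print(f"a_mu(SM) ~ {total:.0f}({error:.0f}) x 1e-11")
```

Note that the total error is dominated by the hadronic pieces, which is exactly where the e+e- experimental inputs and the lattice calculations enter.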
The Quanta article explains it quite nicely. To quote their example of what has happened in the past:
> “A year after Brookhaven’s headline-making measurement, theorists spotted a mistake in the prediction. A formula representing one group of the tens of thousands of quantum fluctuations that muons can engage in contained a rogue minus sign; fixing it in the calculation reduced the difference between theory and experiment to just two sigma. That’s nothing to get excited about.”
In the Scientific American article also currently linked on the front page a scientist & professor* at an Italian university is quoted as saying something along the lines of “this is probably an error in the theoretical calculation”. Would this be what the professor was referring to?
Edit: I’m not entirely sure whether they’re a professor, but here’s the exact quote
> “My feeling is that there’s nothing new under the sun,” says Tommaso Dorigo, an experimental physicist at the University of Padua in Italy, who was also not involved with the new study. “I think that this is still more likely to be a theoretical miscalculation.... But it is certainly the most important thing that we have to look into presently.”
As someone who has worked in fields that use lattice calculations (on the experimental side), the new calculation is interesting, but I would not say it’s particularly convincing yet. Lattice calculations are VERY difficult, and are not always stable. I am not questioning whether they did their work well or not, just pointing out that in high energy physics and high energy nuclear physics, many times our experimental results are significantly better constrained and also undergo significantly more testing via reproduction of results by other experiments than our theory counterparts’ work.
Is it possible that all of our previous experiments have had some sort of correlated systematic error in them? Unlikely, but yes. Is it more likely that this lattice calculation may be underestimating its errors? Much more likely. Another interesting option is that one of the theoretical calculations was actually done slightly wrong. My first guess would be the lattice result, since it’s newer, but both procedures are complicated, so it could be either.
As a particle physicist (no longer working in the field, sadly), this is one of the more exciting results in a long time. Muon g-2 has been around, in some form or another, for debate and model building for many years (taken somewhat seriously for 15+?), waiting for better statistics and confirmation. At over 4 sigma this is much more compelling than it has ever been, and the best potential sign of new (non-Standard Model) physics.
I'm not current on what models people like to explain this result, but it has been factored in (or ignored if you didn't trust it) in particle physics model building and phenomenology for years. This result makes it much more serious and something I imagine all new physics models (say for dark matter or other collider predictions or tensions in data) will be using.
Whether or not anything interesting is predicted, theoretically, from this remains to be seen. I don't know off hand if it signals anything in particular, as the big ideas, like supersymmetry, are a bit removed from current collider experiments and aren't necessarily tied to g-2 if I remember correctly.
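For concreteness (my sketch, not from the article): the quantity everyone is fighting over is the anomaly a = (g-2)/2, whose leading QED term is Schwinger's famous alpha/(2*pi); everything beyond that first term is the hard part.

```python
import math

# The anomaly is a = (g - 2) / 2.  Its leading QED term, Schwinger's
# alpha / (2*pi), already gets you most of the way; the higher-order QED,
# hadronic, and electroweak corrections are what make the full prediction
# so difficult.
alpha = 1 / 137.035999  # fine-structure constant
a_leading = alpha / (2 * math.pi)
print(f"leading-order anomaly: {a_leading:.8f}")  # ~0.00116141
# the measured muon value is ~0.00116592; the experiment-vs-theory tension
# lives in digits far beyond these
```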
Alexey Petrov, quoted in the article, subbed in to teach one day in my quantum mechanics class :) It was the first day we were being introduced to the theory of scattering, and I will never forget his intro. He asked the class, “what is scattering?”, waited a moment, and then threw a whiteboard marker against the wall, and answered his own question: “that’s scattering”.
Lots of times, physics classes can be so heavy on math that it’s hard to even remember that you’re trying to describe the real world sometimes, and moments like that were always very memorable to me, because it helped remind me I wasn’t just solving equations for the hell of it :)
My favorite example of this was during a lecture on waveguides, when Michael Schick picked up the section of cylindrical metal pipe he was using to motivate the cylindrical-waveguide problem at hand, looked at the class through the pipe, and said, "clearly, it admits higher-order modes."
That little episode brought great joy to this experimentalist's heart.
I have a theory about how well educated the mass of humans is, could be, and should be.
Bear with me.
Roughly 2000 years ago, the number of people who could do arithmetic and writing was < 1% of the population. By 200 years ago it was maybe what 10%?
Now it is 95% of the world population, and 99.9% of 'Western' world.
Let's say that Alexey Petrov is about as highly educated and trained as any human so far. (A physics PhD represents pretty much 25 years of full-time, full-on education.) But most of us stop earlier, say 20 years, and many have less full-on education, perhaps not doing an hour a day of revision or whatever.
But imagine we could build the computing resources, the smaller class sizes, the gamification, whatever, that meant that each child was pushed as far as they could get (maybe some kind of mastery-learning approach) - not as far as they can get if the teacher is dealing with 30 other unruly kids, but actually as far as their brain will take them.
Will Alexey be that much further ahead when we do this?
Is Alexey as far ahead as any human can be? Or can we go further - how much further? And if every kid leaving university is as well trained as an astronaut, capable of calculus and vector multiplication, will that make a difference in the world today?
I have the opposite experience. Physics classes were always the most interactive and practical. But then again, I only ever studied up to undergrad-level physics.
Oh sweet summer physicist, what do you know of reality? Reality is for the markets, lovely mathy person, when a one-in-a-million chance comes every month, and investment portfolios lie scattered over the floor like corpses on a battlefield. Reality is for when your mortgage and the kid's school fees are riding on it, and quantitative strategies are born and die with the fads of last summer's interns' pet projects.
In some domains 7 sigma events come and go - statistics is not something to be used to determine possibility in the absence of theory. If you go shopping you will buy a dress; just because it's a pretty one doesn't mean it was made for you.
Amusingly - fittingly for our times - in the same issue of the exact same journal (Nature), another paper indicates that the previously much-hyped discrepancy might be due to the theory having been applied inaccurately in the past. When computed with the new method, the experimental and theoretical values agree far more closely.
So now all that matters is what kind of article you want to write: a sensationalist one to get eyeballs, or a realistic one that is far less exciting. Thus the exact same discovery can be presented under two radically different headlines.
I highly recommend the YouTube channel PBS Space Time's coverage of this, it's informative, well organized, and accessible even to someone like me who doesn't have any background in physics.
Wouldn't it be amazing if the universe developed more and more characteristics as you look for them? Or even, if it's pushed to create something when you do?
Sometimes I think about this half-baked theory where physical laws don't exist until they are discovered. Once you catch physics with its pants down, it must maintain those constraints or have its bluff called.
Knowledge - expands,
Space exploration knowledge - expands,
Sub atomic exploration - expands, (muon and we may even find its sub atomic particles as well)
Space - expands,
Number series - expands,
Fibonacci - expands.
Be warned when something expands, it's a trap.
Science expands external knowledge and shrinks self-knowledge,
Spirituality shrinks external-knowledge and expands self-knowledge.
Be warned when something expands.
Be warned when something shrinks.
E=mc^2
where c is not just the speed of light, c is the speed of space expansion as well.
Maybe there isn't anything like "fundamental laws" and all are emergent patterns, like we are, and in other places in the Universe the "fundamental laws" are completely different. In that case, the hermetics had a point when they talked about infinite divisibility.
One thing you find in modern physics is that ideas are often named according to some mathematical analogue to classical physics. You start thinking about forces by imagining a ball being kicked, and after boiling away the conceptual baggage you realise it's all about the exchange of energy.
It turns out that energy exchange is one of the most fundamental mechanisms that drives nature, so it makes sense that this same mathematics appears in deeper theories. Unlike in classical physics, the symbols in quantum equations don't represent simple numbers - they're usually quite complicated and subtle - but remarkably these equations share many properties with their classical counterparts. To be fair, this could just be because phenomena that differ completely from classical physics are incomprehensible to us.
So an electron's "spin", at least mathematically, is governed by equations that are remarkably similar to the classical equations of angular momentum, and so on. Force is in the same category, and really just means "fundamental interaction".
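A small illustration of that spin/angular-momentum parallel (my sketch, with hbar set to 1): the spin-1/2 operators built from the Pauli matrices obey exactly the commutation algebra of classical angular momentum, just realized by matrices instead of numbers.

```python
import numpy as np

# Spin-1/2 operators S_i = (hbar/2) * sigma_i, with hbar = 1.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# The commutator [Sx, Sy] = i*Sz is the angular-momentum algebra:
assert np.allclose(sx @ sy - sy @ sx, 1j * sz)

# S^2 = s(s+1) = 3/4 for s = 1/2, mirroring |L|^2 classically:
assert np.allclose(sx @ sx + sy @ sy + sz @ sz, 0.75 * np.identity(2))
print("spin-1/2 satisfies the angular-momentum algebra")
```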
Yes, very much so. Forces are not really a fundamental thing in the Standard Model. There are symmetry groups attached to spacetime which lead to exchanges of gauge bosons, and those exchanges 'create' forces.
The strong nuclear and electroweak interactions are 'forces' in this sense within theories such as the Standard Model.
Theories that try to unify gravity with the other interactions often treat it as a mediated force as well - this is where particles such as the graviton come into play - but these attempts aren't very successful yet.
It may be that gravity isn't a force at all and is just an emergent phenomenon of the geometric properties of spacetime. Or it could be both: two distinct phenomena that cause attraction between massive objects, where on larger scales it is primarily dominated by the geometry of spacetime, and on quantum scales by a mediated force with its own field and quanta (particles).
> It just seems like one of those inherently anthropocentric concepts that (potentially) holds us back from exploring something different?
This is something I struggle with.
I know that physics originated from an experimental framework. We observe phenomena, then we try to come up with explanations for said phenomena, formulate hypotheses, then test them. That is fine.
But this breaks down when the 'fundamental forces' are involved. What _is_ a force? All the explanations I've ever seen (apart from gravity) seem to treat a 'force' as an atomic concept. They will describe what a force 'does', but not what it 'is'. Maybe that's something unknowable, but it bothers me.
Science is a never-ending series of corrected observations, each disqualifying the previous one while asserting that the latest is axiomatic.
When you're young you get excited each time a new breakthrough is happening. If you manage to grow up, you get tired of the pattern, and the signal to noise ratio starts to look like a good statistical P value.
> "The concordance shows the old result was neither a statistical fluke nor the product of some undetected flaw in the experiment, says Chris Polly, a Fermilab physicist and co-spokesperson for the g-2 team. “Because I was a graduate student on the Brookhaven experiment, it was certainly an overwhelming sense of relief for me,” he says."
A committed scientist should worry about having such feelings, even though it is very human. It represents a possible source of non-independence of tests and of scientific bias.
beezle:
https://www.quantamagazine.org/muon-g-2-experiment-at-fermil...
jessriedel:
https://mobile.twitter.com/dangaristo/status/137982536595107...
From Gordan Krnjaic at Fermilab:
> if the lattice result [new approach] is mathematically sound then there would have to be some as yet unknown correlated systematic error in many decades worth of experiments that have studied e+e- annihilation to hadrons
> alternatively, it could mean that the theoretical techniques that map the experimental data onto the g-2 prediction could be subtly wrong for currently unknown reasons, but I have not heard of anyone making this argument in the literature
https://mobile.twitter.com/GordanKrnjaic/status/137984412453...
beezle:
This is a pre-print https://arxiv.org/abs/2002.12347
This is the link to the Nature publication: https://www.nature.com/articles/s41586-021-03418-1
glofish:
Why is it more likely for the new calculation to be wrong than for the calculation that shows the theory deviating from experiment?
gus_massa:
That is really a lot. It's less than the official arbitrary threshold of 5 sigmas to proclaim a discovery, but it's a lot.
In the past, experiments with 2 or 3 sigmas were later classified as flukes, but AFAIK no experiment with 4 sigmas has "disappeared" later.
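For anyone wanting to translate sigmas into probabilities: assuming a plain Gaussian (which is what these significance claims mean), the one-sided tail probability is a one-liner.

```python
import math

# One-sided Gaussian tail probability for an n-sigma excess.
def p_value(n_sigma: float) -> float:
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

for n in (2, 3, 4.2, 5):
    print(f"{n} sigma -> p ~ {p_value(n):.2e}")
# 4.2 sigma (this result) is roughly a 1-in-75,000 fluke under the null;
# 5 sigma, the discovery convention, is about 1 in 3.5 million (one-sided)
```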
comboy:
It just shows probabilistic significance. Confirmation by independent research teams helps eliminate calculation and execution errors.
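A sketch of what independent confirmation does quantitatively: measurements of the same number get combined by inverse-variance weighting, which shrinks the error. The central values below are the published Brookhaven and Fermilab numbers as I recall them (in units of 1e-11) - treat them as illustrative:

```python
import math

# Inverse-variance weighted average of independent measurements, each given
# as (value, error).  Illustrative BNL and FNAL a_mu values from memory.
def combine(measurements):
    weights = [1 / (err * err) for _, err in measurements]
    wsum = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / wsum
    return value, 1 / math.sqrt(wsum)

bnl  = (116592089e-11, 63e-11)
fnal = (116592040e-11, 54e-11)
value, err = combine([bnl, fnal])
# the combined error is smaller than either input error
print(f"combined: {value:.6e} +/- {err:.1e}")
```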
mjevans:
https://news.fnal.gov/2021/04/first-results-from-fermilabs-m...
glofish:
BBC goes with "Muons: 'Strong' evidence found for a new force of nature" https://www.bbc.com/news/56643677
> "Now, physicists say they have found possible signs of a fifth fundamental force of nature"
ScienceDaily says: "The muon's magnetic moment fits just fine" https://www.sciencedaily.com/releases/2021/04/210407114159.h...
> "A new estimate of the strength of the sub-atomic particle's magnetic field aligns with the standard model of particle physics."
There you have it, the mainstream media is not credible even when they attempt to write about a physics experiment ...
BiteCode_dev:
Infinite playground.
whimsicalism:
Of course we'll perceive things as complex when we move outside of that regime.
imvetri:
Mass expands to form energy (star)
Energy shrinks to form mass (black hole)
irjustin:
5 sigma results have disappeared (even 6-sigma) upon independent testing, so more testing is needed.
[0] https://www.nytimes.com/2021/04/07/science/particle-physics-...
davidivadavid:
I know a bit about how it is reconceptualized as space-time deformation in the context of general relativity, but that's about it.
It just seems like one of those inherently anthropocentric concepts that (potentially) holds us back from exploring something different?
chriswarbo:
The idea of replacing a 'gravitational force' with spacetime curvature gave us General Relativity; extending this same idea to electromagnetism gives us Kaluza-Klein theory https://en.wikipedia.org/wiki/Kaluza%E2%80%93Klein_theory
The current state of the art is Quantum Field Theory (of which the Standard Model is an example) https://en.wikipedia.org/wiki/Quantum_field_theory
In QFT, "particles" and "forces" are emergent phenomena (waves of excitation in the underlying fields, and the couplings/interactions/symmetries of those fields). QFT tends to be modelled using Lagrangian mechanics too.
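On "modelled using Lagrangian mechanics": the common thread from classical mechanics to QFT is the action principle - the physical path makes the action S = ∫ L dt stationary. A purely classical numeric toy (my sketch: harmonic oscillator with m = k = 1, comparing the classical path against an endpoint-preserving wiggle):

```python
import math

# Discretized action S = sum (0.5*v^2 - 0.5*x^2) dt for a harmonic
# oscillator with m = k = 1.  The classical path x(t) = cos(t) should make
# S stationary (here: minimal) against wiggles that keep the endpoints fixed.
def action(path, dt):
    s = 0.0
    for i in range(len(path) - 1):
        v = (path[i + 1] - path[i]) / dt      # velocity on the segment
        x = 0.5 * (path[i + 1] + path[i])     # midpoint position
        s += (0.5 * v * v - 0.5 * x * x) * dt
    return s

n = 2000
T = math.pi / 2                 # quarter period: endpoints x(0)=1, x(T)=0
dt = T / n
classical = [math.cos(i * dt) for i in range(n + 1)]
# a variation that vanishes at both endpoints, so it is an allowed path:
wiggled = [x + 0.05 * math.sin(2 * i * dt) for i, x in enumerate(classical)]

print(action(classical, dt) < action(wiggled, dt))  # True
```

QFT promotes this from paths x(t) to field configurations, but the stationary-action structure is the same, which is part of why so much classical vocabulary survives.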
F* magnets, how do they work.
BlueTemplar:
(F=ma being replaced by Schrodinger's equation.)
zbendefy:
https://www.nature.com/news/has-a-hungarian-physics-lab-foun...