I really appreciate how writers at Quanta turn extremely complex and dry topics into a pleasurable read by mixing simple analogies with history. I really admire the skill it takes to break down these topics and make them fascinating for someone with no understanding of them, such as me.
I like those articles too, but I wish they included a short version with the major takeaways. In another industry that would be an executive summary. I don't always have the time to read the whole story, so I end up skimming it, trying to find the important points, and I'm never sure I really found them. In this case they seem to be the paragraphs after "Here’s an extremely rough cartoon version of the approach:"
(I agree with your point and share your appreciation.)
Man, I know a good bit of graduate-level math well and that is incomprehensible to me. Either it's very poorly written or it's targeted at an audience who are already experts in, specifically, Borel transforms and I guess functional analysis?
I did an undergraduate research project many years ago on Conway's surreal numbers (some people might be aware of Knuth's excellent book on the subject). This alien calculus of resurgence reminded me of that, so I went looking for a connection, and found one: https://arxiv.org/pdf/2208.14331.pdf
I was only barely a math nerd and very far from a physics nerd, but even as a pretty naive bystander the surreal numbers seemed to offer some hope for the common problem of wrangling divergent sums. Is anyone out there able to compare/contrast the approaches and challenges of integration/differentiation on the surreals vs. this topic of resurgence in perturbation theory? Are these things even fundamentally different, or is it more a difference in point of view/branding for the same techniques?
Finding a way around the problem of infinities in perturbative quantum theory/QFT would not only put the science on a more sound mathematical footing but would also greatly benefit it in other ways. It seems to me that with a rigorous mathematical foundation it's likely phenomena will emerge that would never have been obvious without it. We can only hope the 'discovery' of Écalle's work leads us along that path.
> Tunneling is one of many nonperturbative phenomena in quantum physics, but nonperturbative effects are everywhere: The branching growth of snowflakes, the flow of a liquid through a pipe with holes, the orbits of planets in a solar system, the rippling of waves trapped between round islands, and countless other physical phenomena are nonperturbative.
This is beautiful. Any examples of using Écalle's method to solve the three-body problem?
The three-body problem is solved nowadays (the existence and structure of possible solutions is known). It can also be simulated very precisely using numerics. There is no need for perturbation theory.
Sincere question: could it be that the underlying structure of the universe is really simple but since we have no idea what it is we have to use exotic mathematics for it?
That’s quite possible. A lot of what we think of as fundamental physics might turn out to be emergent behaviour.
In fact that’s pretty much the story of the development of physics. It turns out Newtonian mechanics is emergent from relativity, Maxwell's equations are emergent from quantum mechanics, and the behaviour of hadrons is emergent from the behaviour of quarks.
As I understand it there are theoretical reasons to suspect that quarks do not have any decomposition though. We’ll see.
The emergence of complex traits from simple rules still requires exotic mathematics to describe macroscopically.
Everything is simple, given the right notation (and the concepts underlying it).
The original Maxwell theory of electromagnetism is about 10 rather involved equations. The Maxwell-Heaviside form is 4 simpler equations. A formulation using differential forms is 2 simple equations. A formulation using geometric algebra / Clifford algebra is one utterly simple equation.
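To see the progression concretely (schematically; signs, units and constants vary by convention):

```latex
% Maxwell-Heaviside (vector calculus, SI units):
\nabla\cdot\mathbf{E} = \rho/\varepsilon_0 \qquad \nabla\cdot\mathbf{B} = 0
\nabla\times\mathbf{E} = -\partial_t\mathbf{B} \qquad
\nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\,\partial_t\mathbf{E}

% Differential forms (F = field-strength 2-form, J = current 3-form):
dF = 0 \qquad d{\star}F = J

% Spacetime (geometric) algebra, with F = \mathbf{E} + Ic\mathbf{B}:
\nabla F = J
```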
I think this ends up being a question that is a cousin of Bertrand's paradox. In that case, the English words in the original question, despite feeling concrete in what they ask for, leave enough vagueness to allow different ways of solving the problem that all seem to satisfy the query but give incompatible answers. I say this because I see two phrases in your question that seem to carry similar hidden assumptions.
First is the idea of simple. If something has a few very well defined rules that are understood in isolation, but whose emergent behavior is beyond our ability to define, is it simple? Conway's Game of Life is somewhat the default example (a minimal implementation is sketched below): two very simple rules (or perhaps more, depending upon specifically how you count them), yet it gives rise to a Turing-complete system. Math itself is another example, as mathematicians seek to find simple rules from which math arises; yet even for the subsets of math that are limited to such rules, is it really fair to call them simple?
The second idea is that of an underlying structure. Does the universe have an underlying structure, and even if it does, does that structure exist inside some more foreign concept? What happened before the big bang? Why did the big bang happen when it did? Are there other universes, both in the many-worlds interpretation of quantum mechanics and as universes entirely separate from our own? These questions feel almost entirely in the realm of science fiction, not physics, but there are plenty of theoretical physicists who dive into this field even though it currently doesn't produce testable hypotheses and is thus outside the scope of proper science.
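To make the Game of Life example concrete, here is a minimal sketch of the update step (numpy, toroidal wrap-around assumed); the whole rule set is the single boolean expression in the middle:

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One generation of Conway's Game of Life on a wrap-around grid."""
    # Count the 8 neighbours of every cell by summing shifted copies.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3. That's everything.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A glider: five cells whose pattern translates itself across the grid forever.
grid = np.zeros((20, 20), dtype=int)
grid[1:4, 1:4] = [[0, 1, 0], [0, 0, 1], [1, 1, 1]]
for _ in range(4):
    grid = life_step(grid)
```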
Some think the underlying structure of the universe is mathematics. That is, the universe isn’t merely described by mathematics, but it is a mathematical structure.
The universe could be simpler than the current models suggest, but that would require taking a step back too far for the comfort of today’s STEM-oriented mind. For as long as the natural sciences consider philosophy a load of hand-wavy, abstract, inapplicable hogwash, they will be stuck iterating on existing physical models towards a local maximum.
So one question I have about the introduction section: the article seems to miss the difference between the number of Feynman diagrams needed to calculate a_n and the value of a_n. It points out that the number of Feynman diagrams grows ~n!, which is much faster than the rate at which x^n shrinks (given 0<x<1). But if the value a_n calculated from those diagrams doesn't grow in proportion to the number of diagrams, then it is still entirely possible for x^n to shrink faster than a_n grows.
Based on my limited knowledge of particle physics, physicists are currently able to calculate using Feynman diagrams because a_n does grow more slowly than x^n shrinks. There are some equations (I think dealing with specific forces/fields) where the coupling constant is larger than others, which makes calculations much harder: x^n with x ≈ 0.7 shrinks much more slowly than with x ≈ 0.007. Yet even then the general trend holds, and it allows for making calculations which can then be tested against experimental data.
What we find is that our calculations do match the experimental data. It isn't a perfect match; there is room for error and confidence intervals and such. The important point is that what this article suggests doesn't seem to happen. If at some point a_n grew much faster than x^n shrank, then the real-world answer would diverge and our result from calculating n out to 5 wouldn't closely match the data. It almost sounds like the article is suggesting things will diverge only once we calculate out to n>100 or so, but reality doesn't wait for those calculations. If this problem really existed it would show up, because reality is effectively calculating n all the way out to infinity even when physicists cannot.
So I'm left with two possible conclusions.
1. The article is misunderstanding the relationship between the number of Feynman diagrams needed to calculate a_n and a_n itself.
2. The real critique is that the current model is wrong because the model diverges, not that reality itself diverges. Thus while the model is approximately correct for what we currently calculate, it is inherently wrong.
The second possibility is an interesting idea: a model that looks correct, and is correct for all calculations done so far, but which may not be correct for more detailed calculations that we do not, and will not, have the computational power to test.
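For a concrete feel for the numbers: even if a_n really did grow like n!, the individual terms a_n·x^n would keep shrinking until n ≈ 1/x and only then start to grow. A toy sketch (assuming a_n = n! exactly, which is only a caricature of the real coefficients):

```python
import math

def log_term(n: int, x: float) -> float:
    # log(n! * x^n), computed in log-space so large n doesn't overflow
    return math.lgamma(n + 1) + n * math.log(x)

for x in (0.7, 0.1, 0.007):
    # Terms shrink while (n+1)*x < 1, so the smallest term sits near n ~ 1/x.
    smallest = min(range(1, 1000), key=lambda n: log_term(n, x))
    print(f"x = {x}: smallest term at n = {smallest} (1/x ~ {1/x:.0f})")
```

For a small coupling like x ≈ 0.007 the turning point is out near n ≈ 142, so truncating at n = 5 sees no hint of the divergence, consistent with the point above that low-order calculations match experiment.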
I only read the article halfway through, and I am no physicist, but my understanding was that this is a new mathematical method that allows one to calculate past the divergent part.
Possibly because some terms cancel out later? A bit like some limit calculations.
If that's the case, I was picturing it a bit like how imaginary numbers were initially introduced to find real-valued solutions to 3rd-and-higher-degree polynomial equations: step into another realm, perform your transformations, and come back with a real solution. Laplace transforms come to mind as well; there is a plethora of such tools (Fourier, Taylor series, etc.) that let you express the problem in a different space.
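That intuition is close to what the article's Borel transform does. The textbook example: divide the nth coefficient by n! so the series converges, sum it in the transformed world, then come back with an integral (this is the standard Borel-summation recipe, not anything specific to the article's alien calculus):

```latex
f(x) \sim \sum_{n\ge 0} (-1)^n\, n!\, x^n
  \quad\text{(diverges for every } x \ne 0\text{)}

\mathcal{B}f(t) = \sum_{n\ge 0} \frac{(-1)^n\, n!}{n!}\, t^n
  = \sum_{n\ge 0} (-t)^n = \frac{1}{1+t} \quad (|t|<1)

f(x) = \int_0^\infty e^{-s}\, \mathcal{B}f(xs)\, ds
  = \int_0^\infty \frac{e^{-s}}{1+xs}\, ds \quad (x>0)
```

The integral on the last line is perfectly finite for x > 0, even though the series it came from diverges everywhere.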
It is generally believed that the perturbation expansions we see in realistic quantum field theories are what are known as asymptotic expansions. These are series with a radius of convergence of zero (i.e. they only converge when the expansion parameter is exactly zero and diverge for all nonzero values).
There are then two natural questions: 1. If the perturbation series diverges, why doesn't the universe explode? 2. If the series diverges, why can we use it at all?
Let's first talk about the first part: why doesn't the universe explode? Well, it's because the perturbation series is not actually what is going on; the real answer is the solution to the full set of equations. It's just that we're using a perturbation expansion as a crutch. It's sort of as if the universe's function is 1/(1-x) but we constantly insist on using 1+x+x^2+... Clearly the first function is completely well behaved at x=2 but the second one is not. If we notice that our series explodes for x=2, we should not immediately assume that the universe must also explode; it's just that our representation of the true physics is not faithful. This is perhaps a bad example, because the series in question is convergent for some x, just not for x=2. The perturbation expansions in question are more subtle, since they never converge.
This then leads into the second question: if the series diverges, how can we even use it? Well, the idea here is that it's not just any divergent series (like my silly example with 1/(1-x) above) but rather an asymptotic series. This means that as long as you truncate the series at some point, it is in fact reasonably close to the target function for a sufficiently small value of the parameter. It's just that the more terms you want to include, the sooner the approximation breaks in terms of the parameter. So, if you want to include 10 terms it might be a decent approximation until x ~ 0.1, but if you include 100 terms it might only be a good approximation until x ~ 0.01. Now, within the overlapping range (x < 0.01) it's better to have 100 terms than 10 terms, so it's not like including more terms is bad in all ways. But you see the issue: if you include 1000 terms you get a better approximation for your function for values x < 0.001 than you had with 100 terms, but now your approximation breaks much sooner. If you want to include all the terms, your approximation breaks the moment you leave the point x = 0.
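Here is a numerical sketch of exactly that trade-off, using the classic Euler series sum((-1)^n n! x^n) against its Borel-summed value (a standard toy example, not a QFT series). For each x there is an optimal truncation order, roughly N ~ 1/x, past which adding terms makes the approximation worse:

```python
import math
import numpy as np

def exact_value(x: float) -> float:
    # Borel sum of sum((-1)^n n! x^n): integral of exp(-t)/(1 + x*t), t = 0..inf.
    t = np.linspace(0.0, 60.0, 600_001)
    f = np.exp(-t) / (1.0 + x * t)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)  # trapezoid rule

def partial_sum(x: float, n_terms: int) -> float:
    return sum((-1) ** n * math.factorial(n) * x**n for n in range(n_terms))

for x in (0.3, 0.2, 0.1):
    exact = exact_value(x)
    errors = [abs(partial_sum(x, n) - exact) for n in range(1, 31)]
    best = 1 + errors.index(min(errors))
    print(f"x = {x}: best truncation at N = {best} terms (1/x = {1/x:.0f}), "
          f"error there = {min(errors):.1e}")
```

The best achievable error scales like e^(-1/x): tiny for small x, hopeless for large x, and no amount of extra terms gets past it.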
Why do we think that QFT perturbation theories generally have zero radius of convergence? Well, look at QED, the quantum theory of E&M. If the theory had any nonzero radius of convergence, that would mean the theory also has to make sense for negative coupling constants. What would E&M look like for negative coupling? Well, we'd still have electron/positron virtual pair creation from the vacuum, since the interactions of the theory are still the same. But this time around the pairs wouldn't attract each other anymore but instead repel, causing an instability in the vacuum of the theory. We would just constantly be producing these particle/antiparticle pairs, and they'd form two separate clusters where all the electrons attract each other and all the positrons attract each other but the two clusters repel. In other words, the vacuum would break. This suggests that QED with a negative coupling constant doesn't make sense, which contradicts the assumption that the radius of convergence of the perturbative expansion is nonzero.
That's not to say that all QFTs must have zero radius of convergence, but similar arguments can (I think) be made for the type of QFTs that we actually see in nature.
What I find fascinating about this is the question: what other applications (besides saving particle physics from infinities) could this branch of mathematics have?
Ever since I read The Universe Speaks in Numbers[1] I've been fascinated by those scenarios where there's some hard problem in science, and it turns out that the math needed to solve it was invented years before, to solve some other problem. And the eventual solution to the current problem is merely the serendipitous discovery that this other math exists and applies.
[1]: https://www.amazon.com/Universe-Speaks-Numbers-Reveals-Natur...
https://www.imo.universite-paris-saclay.fr/~jean.ecalle/publ...
Unfortunately, as mentioned in TFA, they are in French, which makes them less than ideal for anyone who doesn't read/speak French. I wonder if these will ever see an English translation?
Short of that, I wonder how well it would work to try to read through it using ChatGPT or something to translate the prose bits.
ChatGPT isn't the only AI in town, and there are services optimized for translation that might work better for this use case. Some accept HTML, PDF, etc. and send back a translation of the content. Google's is available here: https://cloud.google.com/translate
That being said, taking something in French that is intensely academic and converting it to English might not be a straight-up translation task - having GPT-4 clean up and correct the translation with the context from the original French document might yield a better final product.
FYI, this is from the French. "Alien" is a bad translation; the better word is "stranger". "Alien" just sounds more sci-fi, akin to how "alien hand syndrome" is the more sci-fi rendering of la main étrangère, which is best translated as "stranger's hand".
https://www.webmd.com/brain/what-is-alien-hand-syndrome
The alien derivative is an operation which moves you from one sector to another. A bit like crossing a branch cut.
For example, alternating current in the presence of capacitors and induction coils is well handled by switching to imaginary (complex) resistance/current/voltage calculations, and transient processes in electric circuits are handled by operational calculus.
If we used differential equations to solve these, which indeed look natural for such tasks, we would not be able to accomplish any calculations without a PhD.
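For instance, here is the complex-impedance shortcut in a few lines: a first-order RC low-pass filter analysed with ordinary complex arithmetic instead of solving the ODE (component values are made up for illustration):

```python
import math, cmath

R = 1_000.0    # ohms
C = 100e-9     # farads
f = 1_000.0    # drive frequency, hertz

w = 2 * math.pi * f
Z_C = 1 / (1j * w * C)      # capacitor as a complex "resistance"
H = Z_C / (R + Z_C)         # plain voltage-divider algebra, no ODE

print(f"|H| = {abs(H):.3f} ({20 * math.log10(abs(H)):.2f} dB), "
      f"phase = {math.degrees(cmath.phase(H)):.1f} deg")
# Cutoff where |Z_C| = R: f_c = 1/(2*pi*R*C), about 1.59 kHz here.
```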
> If we used differential equations to solve [RC networks], we would not be able to accomplish any calculations without a PhD.
That's overstating the argument[1]. In fact "differential equations" are naturally abstractable too. Decades and decades of workaday electrical engineers have been writing SPICE models successfully. The fact that one abstraction looks like "math" and the other "software" is just an aesthetics thing.
That's not to say that there's no point in teaching complex impedance. There absolutely is. But abstraction works in mysterious ways and some abstractions are more "beautiful dead ends" than others.
[1] Also needs to be mentioned that linear RC networks are a pretty small subset of the actual problem that needs solving. Transistors are kinda important too.
I learned how to use differential equations to solve circuits as an undergraduate in electrical engineering, and also how to derive the more efficient methods from the differential equations by assuming solutions are (possibly complex) exponentials. No PhD required.
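For anyone curious, that derivation is short. For a series RL loop driven at frequency ω, assume an exponential solution and the ODE collapses into algebra (a textbook sketch, not tied to any particular course):

```latex
L\frac{di}{dt} + R\,i = v(t), \qquad
\text{try } i(t) = I e^{j\omega t},\; v(t) = V e^{j\omega t}

\Rightarrow\; (j\omega L + R)\, I e^{j\omega t} = V e^{j\omega t}
\;\Rightarrow\; \frac{V}{I} = R + j\omega L

% The inductor acts as a complex "resistance" Z_L = j\omega L,
% and the differential equation has become Ohm's law.
```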
Can those Feynman diagrams be calculated by a computer? I understand that the number grows exponentially, but why can't the 891 and 320 thousand diagrams be solved by some tool?
Moreover, we know the perturbative series does not converge for many theories.
For QED, for example, the radius of convergence must be 0, by an argument due to Dyson. Roughly:
1. The perturbative series is an analytic function of the fine structure constant α, which is proportional to the electron charge squared.
2. Like charges repel, and therefore α>0. If like charges attracted, the vacuum would be unstable against the continuous creation of electron-positron pairs from the vacuum, with the electrons going to one part of the universe and the positrons to the other.
3. Because of this instability the series converges for no α<0, and since a power series converges on a disk around the origin, it cannot converge for any α>0 either.
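Spelling out step 3, since it's the part that trips people up (a standard fact about power series, not specific to QED):

```latex
f(\alpha) = \sum_{n\ge 0} c_n\,\alpha^n
\quad\text{converges for all } |\alpha| < r \text{ (its radius of convergence).}

r > 0 \;\Longrightarrow\; f(-\epsilon) \text{ converges for } 0 < \epsilon < r.

% But QED with alpha < 0 has no stable vacuum, so no convergent
% expansion can exist there; the only consistent option is r = 0.
```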
Of course! But the R&D for those tools is very peculiar, at least as I understand it from a friend who works directly in this domain.
Essentially there is a whole variety of different interactions, some of which interact with themselves. You wind up with a tiny zoo of bizarre diagrammatic creatures that you have to figure out herding patterns for - and only then can you get the tool to do the herding for you. I seem to remember it involved a good deal of abstract group theory.
It's way easier to draw the diagrams on paper to work out all cases of an interaction, anyhow; after all that nightmare you wind up with some result. And from what I recall, the number of cases also grows exponentially D:
https://youtu.be/pTcmkBocYZU
The way I read this: this method of finding nonperturbative terms requires calculating perturbative terms first; that’s why it can’t be applied to QED and other more-or-less realistic theories. But of course string theorists can use this method to write more papers.
I'm much more interested in whether it can provide a better way to deal with the strong force. QCD is almost impossible to calculate with in most cases because everything diverges.
> "Resurgent functions" are divergent power series whose Borel transforms converge in a neighborhood of the origin and give rise, by means of analytic continuation, to (usually) multi-valued functions, but these multi-valued functions have merely isolated singularities without singularities that form cuts with dimension one or greater.
Regardless, I think we can all agree super-compact, massively heavy objects do in fact exist. We have pictures of black holes, and we can see infrared time-lapse images spanning decades of stars whipping around an undefined point in space... they certainly do exist. Does all that matter collapse to an asymptotic point beyond the Planck scale? Perhaps not; it could simply be really compact degenerate matter inside the Schwarzschild radius, like a quark-gluon plasma, or whatever might exist above such high energies. And whatever that stuff is, it could perhaps not collapse to a single point; it just gets really hot, and really dense.
Recently Eric Weinstein has been making the rounds on internet podcasts, for example Joe Rogan and the like... getting what we might call academically belligerent about singularities, and all the "(re)normalisation" that gets explained away to balance equations. His characterisation of the situation is charismatic, and to some extent persuasive. But I dunno, he seems kinda weird.
Is the implication that each nonperturbative term represents a specific kind of field interaction? Thus would there be a finite set of nonperturbative terms that, if exhaustively found, would render the trans-series convergent?
That's an interesting idea but I think if it was the case, the article would have said so. What I got from it is that they haven't even got this to work in anything except for simplified toy universes. So I doubt anyone can answer your question yet.