As an ex-physicist (quantum physics; now data science), I am still puzzled why computer scientists consider computational complexity something metaphysical. Or, even worse, a rule the universe should care about; look up considerations of non-local boxes and computational complexity (a generalization of quantum correlations; see the Popescu-Rohrlich box).
First, even the CS nightmare scenario (where P=NP) may make little practical difference (e.g. if the fastest algorithm for an NP-complete problem runs in O(n^1000)).
Second, for many problems even O(n^2) is too much (think of big-data problems). (Also, since computation requires physical resources, you cannot scale it arbitrarily.) So even within P, the actual exponent matters a lot.
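To put rough numbers on how much the exponent matters, here is a back-of-the-envelope sketch (the billion-record input and the 10^9 operations-per-second machine speed are made-up round figures):

```python
import math

# Back-of-the-envelope running times at "big data" scale.
# Illustrative only: constants, memory and I/O effects are ignored,
# and the machine speed (1e9 simple operations/second) is a round guess.
n = 10**9  # a billion records

def runtime_seconds(ops):
    return ops / 1e9  # operations divided by operations-per-second

times = {
    "n log n": runtime_seconds(n * math.log2(n)),
    "n^2": runtime_seconds(n**2),
}
for label, sec in times.items():
    print(f"{label:>8}: ~{sec:,.0f} s (~{sec / (86400 * 365):,.1f} years)")
```

On these assumptions an n log n pass finishes in about half a minute, while the n^2 algorithm needs decades; both are "polynomial".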
One interesting physical case is the spontaneous emission of a photon from an excited atom. It looks like an exponential decay, but the theory shows that the decay has to be polynomial at long times (I don't remember its power, whether more like 1/t^3 or 1/t^6, but not too high). To my knowledge, we haven't detected the polynomial tail (because by the time it dominates, the probabilities are extremely low).
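The claimed crossover can be illustrated numerically with a toy model (unit decay rate, and a power-law tail with a made-up coefficient and a made-up 1/t^4 power, since the true exponent is uncertain here):

```python
import math

C = 1e-12  # made-up tiny coefficient for the power-law tail

def survival(t):
    # toy survival probability: exponential decay plus a 1/t^4 tail
    return math.exp(-t) + C / t**4

# scan for the time (in lifetimes) where the tail overtakes the exponential
t = 1.0
while math.exp(-t) > C / t**4:
    t += 0.5
print(f"tail dominates after t ~ {t:.1f} lifetimes, survival ~ {survival(t):.1e}")
```

With these toy numbers the tail only wins after roughly forty lifetimes, by which point the survival probability is around 10^-19, consistent with the tail never having been observed.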
So, is there a physical effect that cares about computational complexity? Or a phenomenon in which it matters that we can simulate it in "only" O(n^100), not O(2^n)?
> So, is there a physical effect that cares about computational complexity?
The No-SuperSearch Postulate (there is no physical means to solve NP-complete problems in polynomial time) is not a purely mathematical conjecture. It's also a claim about the laws of physics. The postulate includes P != NP as a special case, but it is stronger.
If you assume the postulate is true, you can use it to explain things like why quantum mechanics is linear, why adiabatic systems have small spectral gaps, why closed timelike curves don't exist, the black hole information paradox, the impossibility of relativity computers, and why QM amplitudes are not directly observable. The question of how much physics you could derive from the No-SuperSearch Postulate is interesting.
> First, even the CS nightmare scenario (where P=NP) may make little practical difference (e.g. if the fastest algorithm for an NP-complete problem runs in O(n^1000)).
Sure, it may. But there are precious few examples in the wild where we're dealing with crazy powers like that. I think the bigger (but still quite small) failure in P's ability to measure "ease" of computation is that it only considers the worst case. A good example is graph isomorphism. A quasipolynomial-time algorithm for graph isomorphism was recently developed, the best bound we've found for the problem yet. Yet even before that, it was extremely difficult to find instances on which the existing algorithms didn't run in polynomial time, even when actively looking for them. But again, this is the exception, not the rule.
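The "easy in practice" phenomenon shows up even in a naive checker (a toy sketch; real GI solvers use far stronger invariants and refinement):

```python
from itertools import permutations

# Toy graph-isomorphism test on adjacency matrices.  The worst case tries n!
# permutations, but a cheap invariant (the degree sequence) instantly rejects
# most non-isomorphic pairs: a small-scale analogue of "hard in the worst
# case, easy on instances you actually meet".
def degree_sequence(adj):
    return sorted(sum(row) for row in adj)

def isomorphic(a, b):
    if degree_sequence(a) != degree_sequence(b):
        return False  # the typical fast path
    n = len(a)
    for p in permutations(range(n)):  # exponential fallback
        if all(a[i][j] == b[p[i]][p[j]] for i in range(n) for j in range(n)):
            return True
    return False

# path vs star on 4 vertices: degree sequences differ, rejected instantly
path = [[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]]
star = [[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]]
print(isomorphic(path, star))
```

The instances that are actually hard for real algorithms are precisely those (such as strongly regular graphs) where cheap invariants fail to distinguish anything.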
> Second, for many problems even O(n^2) is too much (think of big-data problems). (Also, since computation requires physical resources, you cannot scale it arbitrarily.) So even within P, the actual exponent matters a lot.
Very true, but TCS also spends a lot of time shrinking exponents. Many of the methods and skills needed to get problems into P in the first place also apply to moving them around within P. And many of the same tactics for figuring out exactly how P and NP fit together apply to figuring out the same for P and BQP.
Since you have a background in quantum physics and (I assume) care about that field, are you aware that quite a bit of the TCS community works on quantum problems? There have been plenty of results that stand to be very physically relevant. A good example is the recent (quite rapid) progress in analyzing symmetric cryptography schemes' vulnerability to attacks by quantum computers [1]. Interestingly, many start from the same jumping-off point that the asymmetric attacks did; namely Simon's algorithm (an important stepping stone towards Shor's algorithm).
[1]: https://arxiv.org/abs/1602.05973
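For a sense of the quantum/classical gap that makes Simon's algorithm interesting, here is a classical toy instance (the function construction is invented purely for illustration):

```python
import random

# Simon's problem: a two-to-one function with f(x) = f(x XOR s) for a hidden
# mask s.  Classically, recovering s takes roughly 2^(n/2) queries via a
# birthday-style collision search; Simon's quantum algorithm needs only O(n).
n = 16
s = random.randrange(1, 1 << n)
perm = list(range(1 << n))
random.shuffle(perm)
f = lambda x: perm[min(x, x ^ s)]  # hides s: inputs x and x^s collide

seen, queries, recovered = {}, 0, None
for x in random.sample(range(1 << n), 1 << n):  # query in random order
    queries += 1
    y = f(x)
    if y in seen:  # collision found: the two preimages differ by exactly s
        recovered = x ^ seen[y]
        break
    seen[y] = x
print(f"recovered s after {queries} queries (classical scale ~2^{n//2} = {1 << (n//2)})")
```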
> I am still puzzled why computer scientists consider computational complexity something metaphysical.
In what sense do you mean "metaphysical"?
Personally, I would say computational complexity is "physical", in the sense that it gives fundamental limits to what can and cannot be done in the physical world using some amount of resources (say, anything obeying Blum's axioms). I would say it's also "metaphysical", in the sense that those limits are derived purely mathematically and are "imposed on" the physical world, rather than, for example, obtained by observing the physical world and imposing consistency with those observations on some mathematical model. I think of these computational "laws of nature" in much the same way as the "law of nature" that e^(i*pi) + 1 = 0.
In other words, we can imagine (AKA develop a consistent mathematical model of) a world operating by Newtonian physics; we can do the same for a world operating by special or general relativity; and for the standard model; we can do it for worlds with N dimensions; with Euclidean, hyperbolic or spherical geometry; and so on. All of these "make sense", in a self-consistent way, and it's up to physicists to figure out which of those models best describes our physical world.
On the other hand, there don't seem to be self-consistent models of universal systems which can solve their own halting problem, or models which don't possess the hierarchical complexity classes we've found (although, of course, the machine models may differ, we may introduce hierarchies of halting oracles, etc.). In that sense, these seem to be "laws of nature" intrinsic to our models by virtue of their mathematical definition, rather than imposed from the outside in order to agree with observations.
The section about closed timelike curves is, I think, a good example of how computational complexity has metaphysical consequences yet struggles to bridge the physical and the metaphysical; see what Aaronson writes about the "grandfather paradox".
> the theory shows that the decay has to be polynomial
I thought this was the other way around: the theory predicts exact exponential decay, but experiments aren't entirely clear that that is always the case. Depending on which textbook or paper you read, some say the deviations from exact exponential decay are just experimental error, while others say they are evidence of additional effects that the exponential decay law does not account for.
> As an ex-physicist (quantum physics; now data science), I am still puzzled why computer scientists consider computational complexity something metaphysical.
Please don't generalize one or a few computer scientists to the whole group.
By the way, a similar thing can be said about the halting problem. We may not be able to decide mechanically whether an arbitrary program halts, but if we can do it for every program appearing in practice, that may be good enough. Still, the proof is of immense theoretical value.
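The "good enough in practice" idea can be made concrete with a fuel-bounded checker (a toy sketch; real termination provers reason about ranking functions rather than just running the program):

```python
# A semi-decision procedure for halting: run the program with a step budget.
# It can answer "halts" with certainty, but never "loops forever"; only
# "don't know".  This asymmetry is exactly what undecidability permits.
def halts_within(step_fn, state, fuel=10**6):
    """step_fn maps a state to the next state, or to None when it halts."""
    for _ in range(fuel):
        state = step_fn(state)
        if state is None:
            return "halts"
    return "don't know"

# a countdown loop provably halts within the budget
print(halts_within(lambda k: k - 1 if k > 0 else None, 10))
# the identity loop never halts, but the checker can only say "don't know"
print(halts_within(lambda k: k, 0))
```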
> So, is there a physical effect that cares about computational complexity? Or a phenomenon in which it matters that we can simulate it in "only" O(n^100), not O(2^n)?
Personally, I consider it metaphysical because it says something about provability (since, deep down, proofs and programs are the same thing). Truth beyond proof (or at least beyond what's feasible to prove) is bizarre and mysterious.
Metaphysics (and physics) is concerned with uncovering the fundamental truths of the universe. Computational complexity indicates this may be unachievable.
Is there anyone on Hacker News who could explain section 6 of this article to me? Overall the article is very interesting, but I am not sure the waterfall argument is valid.
The waterfall conjecture was stated as:
'Given any chess-playing algorithm A that accesses a "waterfall oracle" W, there is an equally good chess-playing algorithm A', with similar time and space requirements, that does not access W.'
From this we can assume that the article treats us, the external observers, as giving semantics to the computation. Thus, by providing the reduction, we show that the computation in and of itself does not have semantics. However, why would we assume that A' exists? And even if A' exists, can't we reduce A -> A'? And if we can reduce W to A, then by proxy W -> A'?
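One way to see why A' should exist, at least when the "oracle" carries no structure of its own, is a toy sketch like the following (all names invented; this illustrates the reduction idea, not Aaronson's actual construction):

```python
import random

# If the waterfall "oracle" W is just an arbitrary relabeling of states, any
# algorithm A that consults it via encode/decode tables can be replaced by an
# A' that skips W entirely: the chess semantics lived in the tables (supplied
# by the external interpreter), not in the waterfall itself.
states = list(range(100))
relabel = states[:]
random.shuffle(relabel)
W = lambda st: relabel[st]             # the "waterfall": meaningless shuffling
decode = {W(st): st for st in states}  # inverse table built by the interpreter

def A(position):
    return decode[W(position)] % 10    # consults W, but W contributes nothing

def A_prime(position):
    return position % 10               # same answers, no oracle, same cost

print(all(A(p) == A_prime(p) for p in states))
```

Here decode composed with W is the identity, so stripping both out changes nothing: that is the sense in which the waterfall does no computational work.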
Well, the author tries to prove it's not. And, by the way, an oracle is more than just an external observer: it can be used, for example, to solve the halting problem for classes of machines for which it would otherwise be impossible, even with nondeterminism.
The main problem with that section is that the author tries to engage with a particular objection (the waterfall argument) that is itself a bit obscure.
I think philosophers should focus on novel, contemporarily-unfalsifiable ideas: are we living in a simulation? If you were a person outside time, with instant access to all moments of history (past, present, future), how much power would you have? How would you create time (as continuous steps? with adaptive resolution, depending on what the beings inside time decide to observe?), etc. Some of these ideas might later become falsifiable and move science forward, even if they sound super sci-fi or even insane right now. Or they would make a nice background for a movie or game with some non-trivial ideas already applied.
Unfalsifiability was a philosophical breakthrough first, and it is now the de facto test of whether an idea has any real-world consequence.
To say philosophers should focus on unfalsifiable ideas is to say philosophers should be antipragmatic and inconsequential. One could easily argue that many are, but to encourage it? That's not necessary.
We need to focus on everything, and think about everything. We shouldn't just ask "our philosophers" to "think" about things; we should all learn how to think, become better philosophers ourselves, and find a way to openly contribute to the progress of our thoughts.
Very interesting. I'll be honest: I expected this to be vague and self-important, but it was very concrete and made a lot of discussed-to-death topics interesting again. He sort of lost me about halfway through the paper, but I suppose that's because he has a PhD and I don't.
Not related to the content of the PDF, but I seriously dislike the width of those margins. While the LaTeX looks marvelous as always, there's just too much text per page.
todd8 | 9 years ago:
http://www.informit.com/articles/article.aspx?p=2213858&WT.m...
jgeada | 9 years ago:
So yes, there is a physical cost to non-reversible computation.
Natanael_L | 9 years ago:
http://phys.org/news/2015-12-quantum-physics-problem-unsolva...
http://www.nature.com/nature/journal/v528/n7581/full/nature1...
im3w1l | 9 years ago:
Most cryptography rests on open problems so I don't think the question is settled yet. In either case, the answer is of immense importance.
gone35 | 9 years ago:
Firewalls[1,2], for starters.
[1] http://arxiv.org/abs/1301.4504v4
[2] http://arxiv.org/abs/1402.5674
eru | 9 years ago:
There's also a slatestarcodex post about Scott Aaronson: http://slatestarcodex.com/2015/01/01/untitled/
LionessLover | 9 years ago:
What does that even mean? The entire question only makes sense from a human point of view.
klez | 9 years ago:
From the FAQ:
> If a story has had significant attention in the last year or so, we kill reposts as duplicates.
So, with more than a year having passed, I guess this is OK?
eru | 9 years ago:
Alas, they are sort-of a standard.