
The Scientific Case for P≠NP

125 points | apsec112 | 8 years ago | scottaaronson.com | reply

100 comments

[+] mixedmath|8 years ago|reply
This is from 2014. In this post, Scott describes various lines of reasoning that lead one to suspect that P!=NP.

A surprising amount of this post's verbiage is devoted to responding to/shaming a particular person who used to comment frequently on Scott's blog, and who thinks it is not safe to assume that P!=NP. I find this off-putting. But sometimes people engage in mini-crusades on the internet... the interested reader can find the prequel and subsequent exchanges between Scott and Lubos throughout the comment sections of a number of blogs.

[+] lisper|8 years ago|reply
I can personally attest to the truth of Scott's characterization of Lubos Motl's personality. Lubos once publicly called me a "category 5 loon" [1] for something I said in a presentation on quantum mechanics [2]. What he didn't seem to realize was that the thing he was attacking me for was presented as a straw man. There's a lengthy introduction where I said, "The thing I am about to tell you is clearly false, but it seems to follow logically from the things they tell you in quantum mechanics 101. The trick is to figure out where the flaw in the reasoning lies." (And BTW, I can still fool card-carrying physicists with this puzzle even today.)

AFAIK Lubos has never apologized (not that I ever really gave a lot of weight to his opinion).

[1] https://motls.blogspot.com/2012/10/evading-quantum-mechanics...

[2] https://www.youtube.com/watch?v=dEaecUuEqfc

[+] acqq|8 years ago|reply
He does no shaming. If you claim that, you have to quote the words with which he does it, and I believe it will be obvious that your statement is biased. He responds, and explains why:

"Anyway, my goal here was never to convince Luboš. I was writing, not for him, but for my other readers: especially for those genuinely unfamiliar with these interesting issues, or intimidated by Luboš’s air of certainty. I felt like I owed it to them to set out, clearly and forcefully, certain facts that all complexity theorists have encountered in their research, but that we hardly ever bother to articulate."

[+] omegaham|8 years ago|reply
It reminds me of disses in old rap songs. Eminem yelling at Lynne Cheney[1]. "Who? Oh yeah, she had some kind of thing about rap lyrics 15 years ago."

The dispute is long since forgotten, but the bickering is still there, and it's confusing and pointless to people who weren't around when it happened.

[1]https://www.youtube.com/watch?v=hsBYblypL38

[+] rcthompson|8 years ago|reply
More generally, we can show that the attitude that regards any unproven statement as having a 50% probability of being true is trivially self-contradictory. Suppose X is an unknown number, and all we know is that it's between 0 and 100. We can make the following statements about X, none of which is provable with the information we currently have:

1. X < 1

2. 49 < X < 51

3. X > 99

If we assume that each of these statements has a 50% probability of being true, then we trivially get a contradiction: the three statements are mutually exclusive, so their probabilities are additive, and the probability of their union becomes 150%.

So the idea that a 50% chance of truth represents a state of no knowledge about the truth of a statement is just wrong. If we actually knew that any one of the above statements had a 50% probability of being true, we would clearly have more information about X than we would otherwise. There is no sense in which a 50% probability of truth is a useful default.
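The counting argument above can be checked in a couple of lines (the dictionary below is just a transcription of the three statements):

```python
# Treat each unproven statement about an unknown X in [0, 100] as 50% likely.
# The three statements are mutually exclusive (no X satisfies two of them),
# so their probabilities must sum to at most 1 -- but they sum to 1.5.
p = {"X < 1": 0.5, "49 < X < 51": 0.5, "X > 99": 0.5}
total = sum(p.values())
print(total)  # 1.5 -- exceeds 1, so the "50% default" is inconsistent
```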

[+] newen|8 years ago|reply
You are misapplying probabilities here. If you assume no knowledge, and you have two options for X, true or false, then it's a 50% probability of X being true.

Similarly, if there are N possibilities for X, and no prior knowledge about X, then there is a 1/N probability of X being any one of those possibilities.

The key is what prior knowledge you bring to the table.

[+] lokerfoi|8 years ago|reply
What made me realize that P?=NP is not that important for practical issues is the unbelievable effectiveness of heuristics. They often get far closer to the optimum than the best approximation algorithm known for the problem.
[+] lorenzhs|8 years ago|reply
That's not entirely true. I'm a PhD student in theoretical computer science, and some of my colleagues work on graph and hypergraph partitioning, both of which are NP-complete. For most real-world instances, nobody has any clue what the optimal cut/imbalance/... is. It would be very useful to be able to evaluate the quality of the heuristics, though. A 0.5% improvement is suddenly much more impressive if you can show that you're only half as far from the optimum as your closest competitor. Or is it that you're not seeing any improvement on instance X because whatever you're trying isn't working, or because you've already hit the optimum? Etc.
[+] bbatha|8 years ago|reply
Unfortunately, complexity theorists have studied heuristics as well. If our heuristics were actually that good, we would effectively have shown P=NP, so there is a strong upper bound on how good our heuristics can be. Though the jury is still out as to whether that bound matters for practical purposes.
[+] UncleMeat|8 years ago|reply
This is false. There are NP-complete problems for which, assuming P!=NP, provably no fast and good heuristics exist.
[+] Grue3|8 years ago|reply
Ok, but how long did it take to find a polynomial algorithm for primality checking (published in 2002)? [1] And NP-complete problems are even harder than that! The history of mathematics is rife with conjectures that would be considered true if viewed "scientifically", but have an extremely large counterexample that disproves the whole thing. [2]

[1] https://en.wikipedia.org/wiki/AKS_primality_test [2] https://en.wikipedia.org/wiki/P%C3%B3lya_conjecture

[+] CJefferson|8 years ago|reply
Nothing comes close to P=NP. Again and again, across thousands of problems, almost everything ends up being in P or NP-hard, and never both.

It is certainly possible that P=NP, but it would mean basically redoing most of the last 30 years of theoretical computer science.

[+] Sniffnoy|8 years ago|reply
But this is another area where, if P=NP, you might expect hints to leak through. As Aaronson points out elsewhere, there are a number of problems where people had found practically efficient algorithms, or polynomial-time randomized algorithms, or something, long before they found an actual polynomial-time worst-case algorithm. (In addition to primality, linear programming is another example.) For NP-complete problems there's none of that, no such hints leaking through.
[+] zzazzdsa|8 years ago|reply
We had plenty of evidence that a deterministic polynomial-time primality testing algorithm exists, though. The Miller-Rabin test has been known to run in randomized polynomial time since the 70s, and assuming the generalized Riemann hypothesis it can be made deterministic and polynomial.

And while there was some numerical evidence that the Polya conjecture was false, there really wasn't all that much. The first counterexample is around 906 million -- you could find it yourself if you wanted.
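For reference, a compact sketch of the Miller-Rabin test mentioned above (the round count of 40 is a conventional choice, not something from this thread):

```python
import random

def miller_rabin(n, rounds=40):
    """Probabilistic primality test: False means n is definitely
    composite; True means n is prime with error probability < 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # quick trial division for small factors
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # a**d mod n via fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

print(miller_rabin(2**61 - 1))  # True: 2^61 - 1 is a Mersenne prime
```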

[+] wtallis|8 years ago|reply
> Ok, but how long did it take to find a polynomial algorithm for primality checking

Proving P=NP doesn't require finding a polynomial time algorithm to solve NP-complete problems. Finding such an algorithm is a much greater challenge than proving that one can exist.

[+] amelius|8 years ago|reply
Nice post, but he loses sympathy with me for shaming that guy.
[+] throwaway8537|8 years ago|reply
Normally I would agree, but it should be noted that Lubos is a major asshole, and deserves every bit of shaming he gets. He's easily in the top 3 biggest assholes I've had to deal with in my academic career, and that's saying something.

He has it all: extreme arrogance, narcissism, pedantry, an absolute lack of empathy, extreme aggressiveness. He is also a notable science denialist: climate change denier, evolution denier...

[+] mcguire|8 years ago|reply
"Telling truth to competence."

Brilliant.

[+] justifier|8 years ago|reply
Are there other theories that rely on the assumption that p≠np?

Like how there are theories that begin "assuming the Riemann hypothesis|Goldbach conjecture is correct.."

Disclaimer.. my intended inference is to show that p=np

[+] SAI_Peregrinus|8 years ago|reply
The existence of one-way functions is one of those things that requires P≠NP. If P=NP, one-way functions don't exist, and all cryptographic hash functions would be broken.
[+] SomeStupidPoint|8 years ago|reply
> Because, like any other successful scientific hypothesis, the P≠NP hypothesis has passed severe tests that it had no good reason to pass were it false.

Is this the crux of the post?

That seems like an extremely crap reason to believe a mathematical theorem. What am I missing?

I also don't find his heuristic convincing -- to the point that his description makes me question whether P=NP is actually possible after all: a close, complex border that seems to yield increasing numbers of "close calls" sounds exactly like the situation where there's a lurking, sparse class of "touches" that we haven't thought of, particularly when under 100 years of research has been put into the topic.

[+] nl|8 years ago|reply
> That seems like an extremely crap reason to believe a mathematical theorem. What am I missing?

What's the alternative?

As I see it, the possible ways to think about it are as follows:

1) You must think of it as 50/50 possibility of being true until an absolute proof is found.

2) You look at places where it could fail, and if it passes then you update the likelihood based on that.

To me (and as this post argues) it makes sense to follow (2). Not only is it more useful practically (you can use the assumption to solve other things, conditional on it being true), but it also reflects the reality of the world. Even if it turns out not to be true in every case, it is clear that in the majority of cases it is true.

Maybe it is untrue, but it is possible to define the set of cases where it is untrue. If so, it seems likely that class is very small, so it is mostly true. That is the opposite of a proof, of course, but it does seem reasonable to think of your confidence in it being true as somewhat proportional to the likely size of that class.

> close complex border that seems to yield increasing numbers of "close calls" sounds exactly like the situation where there's a lurking, sparse class of "touches" that we haven't thought of

That's not really how it works though. If it keeps happening and those classes don't touch, then it typically means they aren't as close as they seem (ie, there is an extra dimension in which they are a long way apart). In this case it seems like the P/NP divide is actually a thing which means things which appear close actually aren't, so if you analyze them in that space they don't appear close together at all.

[+] rocqua|8 years ago|reply
The argument isn't that P≠NP is true because it has passed tests. The argument is that it seems to be true because it has passed tests. Specifically, it seems much more likely to be true than false.

Consider this. You have to bet on either P=NP or P≠NP. Where do you put your money? That is the question we are trying to answer.

[+] postramus|8 years ago|reply
His earlier post (linked from this) has stronger reasons.

I think it’s advisable to keep an open mind on the issue, and not simply on the actual question itself; it would not be surprising to me if the solution comes from something like adopting a different framework.

For example, time bounds and complexity classes smell a lot like conservation laws: “transforming such and such arrangement into such and such arrangement requires at least this much computation.”

That said, it’s possible (a) we’re currently failing to consider some “term”, (b) generally ignoring this term doesn’t cause problems when proving lower bounds but (c) the complex border and your “sparse touches” correspond to situations where the missing term plays a more significant role.

That’s a case where P probably isn’t equal to NP but keeping an open mind at least leads in more interesting directions, imho.

I also think people don’t take seriously the possibility of P being equal to NP but with intrinsically high degree. I say this not to be cute—“what if p is np but still de-facto intractable?”—but because I don’t think anyone has a great intuition for, say, what kinds of algorithms have polynomial solutions of minimum degree, say, 8...at least not in the same way we have good intuition for which algorithms are linear, nlogn, n^2, n^3, and so on.

It’s hard for me, at least, to feel overly confident in the “we’ve been working on finding a fast algorithm for seventy years and gotten nowhere” argument when our algorithmic intuition vis-a-vis higher polynomial degrees seems so under-developed.

Even if you don’t consider it likely that p equals np, you can still follow this line of thought and consider the possibility that these “sparse touches” may be cases where the slippery problems like graph isomorphism (etc) correspond to problems with polynomial running times of (unexpectedly) high degree...and thus we keep finding these sparse touches along a seemingly-complex border because we don’t yet have a solid intuition for the capabilities of polynomial algorithms of high degree.

And so on and so forth. P probably isn’t NP but being dogmatic about it is neither fun nor interesting.

[+] jcranmer|8 years ago|reply
The "idiot's explanation of P=NP" that refers to "easy" or "practical" is at this point fairly well established as disproven. It's possible for P=NP, but for that to happen, the algorithm pretty much has to be completely impractical.

To summarize the argument in more general terms: the class of problems in P is those where you can derive some global property from incremental local decisions. For example, you can build the shortest path between two nodes in a graph by always picking the closest node. By contrast, NP-hard problems don't admit such incremental local decisions: you might need to recolor an entire optimally-colored subgraph to accommodate a new node.
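That greedy local-decision structure is exactly what Dijkstra's shortest-path algorithm exploits; a minimal sketch (the toy graph and weights are made up for illustration):

```python
import heapq

def dijkstra(graph, start):
    """Greedy shortest paths: always settle the closest unsettled node.
    graph: {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; u was already settled closer
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd  # local improvement is always globally safe
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 2}
```

The key property -- absent for NP-hard problems like graph coloring -- is that each local choice never has to be revisited.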

Furthermore, we know from research that there tends to be a very sharp transition from P to NP-hard: relaxing a single condition takes you straight from P to NP-hard, without any intermediate "we don't know which it is" space. This tends to suggest that it's not really so much a question of problems being P or NP, but rather of instances being intrinsically easy or hard.

The question of P=NP then is really about whether these hard instances are genuinely exponentially hard, or whether we can embed them into a simpler instance using some combinatorial construct that merely makes them look exponentially hard. So it's still possible for P=NP, but the practical question of "will we get an efficient, guaranteed-for-all-instances solution to these problems?" is answered in the negative. When you do the combinatorial embeddings to make the polynomial-time algorithms work, you end up with constants that look like 3^2^2^4, and there's no way to shrink those to practical size.

[+] closed|8 years ago|reply
Hmm.. good point. It'd be interesting to think of examples where something with a "complex border" turned out true, and examples where it turned out false.

(On a side note, I love this kind of scientific approach to believing unproved theorems, and Polya has a lot of thoughts on the topic)

[+] ubernostrum|8 years ago|reply
Mathematics is full of things which show asymptotic behavior, so I'm not sure why the ability to find points where two things are arbitrarily "close" to each other would be read as indicating the likelihood of a point where they "touch".