top | item 17206115


babypistol | 7 years ago

Well, I'm trying to understand why this question is such a big deal.

The fact that the exponents tend to be small in the algorithms that our puny brains were able to produce does not convince me one bit that all problems in P have algorithms with small exponents.

Why are you so sure that P vs NP is important? Does it really relate to so many parts of computer science?

I do agree that complexity analysis is a very (perhaps the most) important part of CS. But that does not necessarily mean that P vs NP is a fundamental question.

RSA wouldn't break even if P = NP. Even if we came up with a polynomial algorithm for integer factorisation, it could have a much higher exponent than the multiplication required to verify the result.

This is what bothers me the most: people saying that P = NP implies that solving a problem is as difficult as verifying a solution. That's simply not true. What P = NP actually implies is that solving a problem is as difficult as verifying a solution (modulo polynomial reductions).
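To make the "verifying" half of that statement concrete (a minimal sketch of my own; the clause encoding is an arbitrary choice, not anything from the thread): checking a candidate assignment against a 3SAT formula is a single linear pass over the literals, whereas no known algorithm finds a satisfying assignment in polynomial time.

```python
# Toy 3SAT verifier. A formula is a list of clauses; each clause is a
# tuple of signed variable indices (positive = the variable, negative =
# its negation). Verification is one pass over the literals: O(formula size).
def verify_3sat(formula, assignment):
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

# (x1 or x2 or not x3) and (not x1 or x3 or x2)
formula = [(1, 2, -3), (-1, 3, 2)]
print(verify_3sat(formula, {1: True, 2: False, 3: True}))   # True
print(verify_3sat(formula, {1: False, 2: False, 3: True}))  # False
```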

The part about ignoring polynomial reductions seems very important to me. Otherwise I could also say that: "all even numbers are the same", when I actually mean: "all even numbers are the same (modulo 2)".

Would you care to convince me why we chose to ignore polynomial reductions and not some other complexity class?

The best explanation I see is the fact that different Turing machine formulations (single-tape, multi-tape, one-way infinite, two-way infinite, ...) and also the random access machines have polynomial reductions between them. But even this does not justify the importance we give to P vs NP in my eyes.


myWindoonn | 7 years ago

There is a reasonable-to-read paper [0] that explains everything; I will only share a few responses to your concerns.

"Our puny brains" have no problem constructing O(n^100) or similarly ridiculous algorithms. Here is an informal survey [1] of several good examples. In my previous message, I indicated that matrix multiplication and CFG parsing are good examples; they are both cubic-time, but we have come up with even cheaper algorithms for special cases, like sparse matrices or LR(k) grammars.

Whether P equals NP is not only a fundamental question, but the first of infinitely many questions about the "polynomial hierarchy" (PH), a tower of ever-more-complex classes. Part of why the question is so important is that deciding whether P = NP is itself a problem on a higher rung of PH! P != NP is like the first of an infinite family of very, very difficult meta-proofs about complexity.

"RSA wouldn't break" because PRIMES is already known to be in P, via a famous paper titled "PRIMES is in P", and RSA keys are already sized to have impractical exponents.

What P=NP implies is much more powerful. First, let's consider the actual cost of a poly-time reduction. P is self-low, which means that P with a P oracle is equivalent to P. So if P=NP, then the cost of a poly-time reduction can be rolled in, and the entire problem is still in P.
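A toy way to see why the reduction cost "rolls in" (my own illustration, not from the paper): if the reduction inflates an input of size n to an instance of size O(n^a) and the solver then runs in O(m^b), the composition runs in O(n^(a*b)). A bigger polynomial, but still a polynomial.

```python
def composed_degree(reduction_degree, solver_degree):
    # If |reduce(x)| = O(n^a) and solve takes O(m^b) steps on inputs of
    # size m, then solve(reduce(x)) takes O((n^a)^b) = O(n^(a*b)) steps.
    return reduction_degree * solver_degree

# A cubic reduction feeding a quadratic solver is degree 6: large, but
# the composition never leaves P, which is what self-lowness buys us.
print(composed_degree(3, 2))  # 6
```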

Now, which problems would be transformable via P=NP? Well, we typically assume that the solution would include a poly-time reduction for 3SAT instances into some P-complete problem. Okay, great. However, we know that 3SAT can't be transformed opaquely; if this transformation exists, it must be able to examine and specialize the structure within each specific 3SAT instance. So we'd get one powerful theorem about logic (specifically, about SAT). But that wouldn't be the end.

We'd also get transformations for the following NP-complete problems, and each transformation would embody a free theorem about the nature of the structure of all instances of the problem ([2] for a bigger list):

* Graph theory: CLIQUE, HAMILTONIAN, K-COLORING (faster compilers!), COVER

* Number theory: TSP (faster packet routing!), KNAPSACK, SUBSET-SUM, PARTITION

* Formal systems: BOUNDED-POST-CORRESPONDENCE (holy shit, bounded Turing-complete analysis in P!?)

That's some serious free theorems! There's no reason to expect that whatever would give us so much stuff for free would be easy to prove. In fact, it's quite the opposite: The fact that any NP-complete problem, if it yielded, would give us deep insight into all of the others, should give us extreme pause before hoping that P=NP.
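One concrete instance of such a transformation, sketched as a runnable toy (the reduction itself is the standard textbook one; the brute-force checkers are mine, only there so the snippet is self-contained): SUBSET-SUM reduces to PARTITION by padding the instance with two extra elements.

```python
from itertools import combinations

def brute_subset_sum(items, target):
    # Exponential-time checker: does any subset of items sum to target?
    return any(sum(c) == target
               for r in range(len(items) + 1)
               for c in combinations(items, r))

def subset_sum_to_partition(items, target):
    # Classic poly-time reduction (positive integers, 0 <= target <= sum):
    # the padded multiset splits into two equal-sum halves iff some subset
    # of the original items sums to target.
    total = sum(items)
    return items + [total + target, 2 * total - target]

items, target = [3, 1, 5, 9], 8              # 3 + 5 = 8, so "yes"
padded = subset_sum_to_partition(items, target)
half = sum(padded) // 2                      # PARTITION: hit the half-sum
print(brute_subset_sum(items, target))       # True
print(brute_subset_sum(padded, half))        # True: the answer is preserved
```

Running the reduction is cheap; the point is that a fast PARTITION solver would hand us a fast SUBSET-SUM solver for free, and similarly all the way down the list.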

[0] https://www.scottaaronson.com/papers/pnp.pdf

[1] https://cs.stackexchange.com/questions/87073/what-are-the-ex...

[2] https://en.wikipedia.org/wiki/List_of_NP-complete_problems

babypistol | 7 years ago

Thanks for the explanation.

But, I do have some comments:

1)

As far as I can tell, the fact that PRIMES is in P doesn't mean breaking RSA is. The problem in RSA is integer factorisation, for which we do not know whether it is in P. (But we also do not know it to be NP-complete.)

But here you are actually supporting my point: even in that case we could choose sufficiently sized keys.

2)

And yes, the fact that P is self-low does seem to be the best justification we have to give such importance to P vs NP. That's also somewhat related to what I was saying about Turing machines and random access machines having polynomial reductions between them. But that's just because we decided to ignore polynomial reductions.

Continuing my example with even numbers being the same: we can see that multiplying an even number by any number yields an even number. Does that in any way justify ignoring the multiplication altogether?

P with a P oracle really is equivalent to P, but O(n) with an O(n) oracle is not O(n); it's O(n^2).
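That asymmetry is easy to demonstrate with a toy operation counter (my own illustration): a linear driver that makes one linear-time "oracle" call per element does quadratic work overall, even though each piece is linear on its own.

```python
def driver_with_linear_oracle(n):
    # n driver steps, each invoking a subroutine that does O(n) work:
    # n calls * O(n) work per call = O(n^2) total operations.
    ops = 0
    for _ in range(n):        # the O(n) driver loop
        for _ in range(n):    # one "oracle call": O(n) work
            ops += 1
    return ops

print(driver_with_linear_oracle(10))  # 100, i.e. n^2, not n
```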

3)

We have many NP-complete problems with practical algorithms that run fast on real-world inputs. I don't see how getting a free theorem about K-COLORING would practically mean faster compilers if the polynomial exponent were galactic.
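For scale, "galactic" is not an exaggeration. Python's exact big integers show that an n^100 algorithm only overtakes a 2^n one around n ≈ 1000 (a quick numeric check of my own, not a claim from the thread):

```python
def poly_beats_exp(n, degree=100):
    # True once the "galactic" polynomial n^degree drops below 2^n.
    # Python ints are arbitrary precision, so the comparison is exact.
    return n ** degree < 2 ** n

print(poly_beats_exp(500))    # False: 500^100 still dwarfs 2^500
print(poly_beats_exp(1000))   # True: the polynomial finally wins
```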