
babypistol | 7 years ago

Thanks for the explanation.

But, I do have some comments:

1)

As far as I can tell, the fact that PRIMES is in P doesn't mean breaking RSA is. The problem in RSA is integer factorization, for which we do not know whether it is in P. (But we also do not know it to be NP-complete.)

But here you are actually supporting my point: even in that case we could choose sufficiently large keys.
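To make the distinction concrete: primality testing is cheap even for huge numbers, while factoring by any elementary method blows up with the bit-length. A rough Python sketch (function names are mine; Miller–Rabin is a fast probabilistic test standing in for the deterministic AKS algorithm that actually puts PRIMES in P):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: polynomial in the bit-length of n."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

def trial_factor(n):
    """Naive trial-division factoring: exponential in the bit-length of n."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors
```

The asymmetry is the point: the first function stays fast as numbers grow, the second does not, and no known algorithm closes that gap.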

2)

And yes, the fact that P is self-low does seem to be the best justification we have for giving such importance to P vs NP. That's also somewhat related to what I was saying about Turing machines and random-access machines having polynomial reductions between them. But that's just because we decided to ignore polynomial reductions.

Continuing my example with even numbers: multiplying an even number by any number yields an even number. Does that in any way justify ignoring multiplication altogether?

P with a P oracle really is equivalent to P, but O(n) with an O(n) oracle is not O(n); it's O(n^2).
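A toy illustration of that composition cost (the names and setup are invented for the sketch): a linear-time caller that makes one query per element to a linear-time "oracle" does O(n^2) total work, even though each piece alone is O(n).

```python
def oracle(xs):
    # A linear-time "oracle": one pass over its input.
    return sum(xs)

def caller(xs):
    # Linear-time driver making n oracle queries, one per prefix:
    # n queries x O(n) work per query = O(n^2) total.
    return [oracle(xs[:i + 1]) for i in range(len(xs))]
```

Squaring keeps you inside P, which is exactly the self-lowness point above, but it does not keep you inside O(n).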

3)

We have many NP-complete problems with practical algorithms that run fast on real-world inputs. I don't see how getting a free theorem about K-COLORING would practically mean faster compilers if the polynomial exponent were galactic.


myWindoonn | 7 years ago

Last reply; the sun's up and work starts soon. I'm leaving breadcrumbs.

On RSA: You're right, but this is only the tip of the iceberg. There's a concept, "Impagliazzo's five worlds," which lays out in fine detail how the various P!=NP and P=NP scenarios impact cryptography. Basically, whether P equals NP could be an easier question than another open one: whether one-way functions exist at all. There are practical cryptographic implications.

On multiplication: You are correct that DTIME(n) ⊆ DTIME(n^2). However, P = DTIME(n) ∪ DTIME(n^2) ∪ DTIME(n^3) ∪ …

If you would like to argue that only some small slices of P are tractable, that's an understandable stance to take. There is, as I've mentioned before, an entire subfield of computer science dedicated to trying to move matrix multiplication from DTIME(n^3) toward DTIME(n^2).
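For reference, the DTIME(n^3) baseline that subfield is trying to beat is just the schoolbook algorithm; a minimal Python sketch:

```python
def matmul(A, B):
    # Schoolbook matrix multiplication: three nested loops, O(n^3)
    # arithmetic operations for n x n matrices.
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            a = A[i][k]
            for j in range(p):
                C[i][j] += a * B[k][j]
    return C
```

Strassen's algorithm already brings the exponent down to about 2.81, and the best known exponent today is below 2.38, but whether it can be pushed arbitrarily close to 2 is open.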

On approximation: Integer programming is simply harder than approximation. That's all. The fact that the approximations are tractable doesn't slake our thirst for the precise answers. Moreover, we don't yet know how BPP and NP relate, so it could be that there are ways to solve problems in NP exactly by randomized algorithms. We've found a few hints at this already, but derandomization is a hard process. (We don't even know whether BPP = P, although P ⊆ BPP and BPP is self-low. The holy grail of derandomization would be to derandomize BPP to P.)
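As a concrete taste of what randomness can buy: Freivalds' algorithm checks whether A·B = C in randomized O(n^2) time per round, while no deterministic O(n^2) check is known. A sketch (the function name and round count are my choices):

```python
import random

def freivalds(A, B, C, rounds=10):
    # Freivalds' check: verify A*B == C using only matrix-vector
    # products, O(n^2) work per round. If C is wrong, each round
    # catches it with probability >= 1/2, so the error probability
    # after all rounds is at most 2^-rounds.
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # definitely a wrong product
    return True  # probably a correct product
```

If the product is correct, the check always passes; the randomness only risks (rarely) missing a wrong one. That one-sided error structure is typical of the randomized algorithms derandomization tries to tame.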

Finally, I really encourage you to take a bit of time to read the Aaronson paper I previously linked, because it explains all this in much more detail. You are free to continue to think that the problem is not interesting, but I encourage you to at least get a sense of why it is hard.

babypistol | 7 years ago

I don't understand why you are responding to me in such a condescending tone, but thanks anyway for the discussion.

I'm well aware that P vs NP is a very hard question (I have studied a fair amount of complexity theory, though I'm a bit rusty). I'm saying that it does not seem as fundamental (or interesting) to computer science as people believe it to be.