I'm not sure it's really a misunderstanding, when in 99% of cases it has turned out that if a problem is in P, then it has a polynomial-time algorithm with a quite small exponent.
Maxatar|1 year ago

It's certainly not true in principle that most problems in P have a small exponent. There is no shortage of graph-related problems in P whose best known algorithms have absolutely absurd exponents, on the order of 10^100. It is true that practical decision problems in P have small exponents, but that's exactly the point being made: problems in P with large exponents are not practical and hence don't get much attention.
Laakeri|1 year ago

What are examples of natural graph-related problems in P with absurd exponents? I think the reason they don't get attention is not that the algorithms are impractical, but that the problems are not natural. Really, the only examples of such problems I can think of are something like "finding a clique of size 222222 is in P, because we can try all possibilities in n^222222 time".
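To make the n^k scaling concrete, here is a minimal sketch (names and graph representation are my own, not from the thread) of the brute-force approach the comment describes: for a fixed clique size k, trying every k-subset of vertices takes O(n^k) time, which is polynomial for any constant k but useless in practice once k is large.

```python
from itertools import combinations

def has_clique(adj, k):
    """Return True if the graph contains a clique of size k.

    `adj` maps each vertex to the set of its neighbors. We try all
    C(n, k) vertex subsets, so for constant k this runs in O(n^k)
    time -- polynomial, but astronomically slow for large k.
    """
    vertices = list(adj)
    for subset in combinations(vertices, k):
        # A subset is a clique iff every pair within it is adjacent.
        if all(v in adj[u] for u, v in combinations(subset, 2)):
            return True
    return False

# Example: a triangle on vertices 0, 1, 2 plus an isolated vertex 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}
print(has_clique(adj, 3))  # True  (the triangle)
print(has_clique(adj, 4))  # False (vertex 3 is isolated)
```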
jltsiren|1 year ago

As the amount of data grows, exponents that used to be small become large.

In many applications, the data grows at least as quickly as computer performance. If the fastest known algorithm is superlinear, today's computers take longer to solve today's problems than yesterday's computers took to solve yesterday's problems. While O(n^2) time algorithms used to be pretty fast in the '80s, today they are often unusable.
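The arithmetic behind that observation can be sketched as follows (a simple back-of-the-envelope model, not anything from the thread): if input size and machine speed both grow by the same factor g, a Theta(n^e) algorithm's wall-clock time changes by g^e / g = g^(e-1), so linear algorithms keep up while quadratic ones fall behind by the full growth factor.

```python
def relative_runtime(growth, exponent):
    """Factor by which wall-clock time changes when both the input
    size and the machine speed grow by `growth`, for a Theta(n^exponent)
    algorithm: work grows by growth**exponent, speed by growth."""
    return growth ** (exponent - 1)

# Linear algorithms keep pace with hardware:
print(relative_runtime(1000, 1))  # 1
# Quadratic algorithms become 1000x slower in wall-clock terms:
print(relative_runtime(1000, 2))  # 1000
```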
Laakeri|1 year ago

I completely agree with you. My point was that the P vs. NP distinction matters in practice, but of course the subquadratic vs. quadratic distinction also matters a lot.