babypistol | 7 years ago
But in the article, there is an interesting section on that:
> If P = NP, even with a high exponent on the polynomial, that means that checking strategies from the past becomes only polynomially harder as time passes and data is aggregated. But so long as the combined computational power of financial market participants continues to grow exponentially, either through population growth or technological advances, then there will always come a time when all past strategies can be quickly backtested by the then-prevailing amount of computational power. In short, so long as P = NP, the markets will ultimately be efficient, regardless of how high the exponent on the polynomial is; the only question is when, and the answer depends on the available computational power.
So this section, if I understand it correctly, says that problems in P count as easy because computational power in the world grows exponentially, so we can assume each of them will eventually become feasible to solve.
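The arithmetic behind the quoted claim can be sketched. A minimal back-of-the-envelope calculation, assuming (hypothetically, this is not stated in the article) that available compute doubles once per fixed period:

```python
import math

def doublings_until_feasible(n, k, budget_ops=1e9):
    """How many compute doublings until an n**k-step backtest fits in a
    fixed per-period budget of operations. Assumes, purely for
    illustration, that available compute doubles each period."""
    cost = n ** k
    if cost <= budget_ops:
        return 0
    return math.ceil(math.log2(cost / budget_ops))
```

The point of the passage is that for a polynomial cost the required number of doublings grows only like k * log2(n) — e.g. n = 10**6 with exponent k = 3 needs about 30 doublings — whereas an exponential cost 2**n would need on the order of n doublings, which exponential growth in compute never catches up with in any reasonable time.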
That's an interesting way of looking at it. Is this really the reason why we consider polynomial problems much easier than NP-hard ones?
mannykannot|7 years ago
The importance of the difference between P and NP appears to be much greater in domains that are starkly discrete, such as cryptography, than in those that are approximately continuous, like finance, where there are often approximate methods in P that are very effective and efficient, at least for most real-world cases.
[1] Only in the last section does the author look at real market data at all, and there only to look for evidence supporting a weak prediction of the thesis. That analysis is so riddled with uncontrolled factors that it tells us nothing about the quantitative consequences of P != NP for market efficiency.
myWindoonn|7 years ago
We give importance to this question because it is open, it is big, it relates to many other parts of computer science, it has real-world implications, etc. The fact that you don't think this question is important likely means that you don't have the complexity-theoretic background required to appreciate the sheer depth of the question. This isn't bad, but it's myopic.
babypistol|7 years ago
The fact that the exponents tend to be small in the algorithms that our puny brains were able to produce does not convince me one bit that all problems in P have algorithms with small exponents.
Why are you so sure that P vs NP is important? Does it really relate to so many parts of computer science?
I do agree that complexity analysis is a very (perhaps the most) important part of CS. But that does not necessarily mean that P vs NP is a fundamental question.
RSA wouldn't break even if P = NP. Even if we came up with a polynomial algorithm for integer factorisation, it could have a much higher exponent than the multiplication required to verify the result.
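The asymmetry being described is easy to state concretely: verifying a claimed factorisation costs a single multiplication, while the obvious way of finding one is a search. A minimal sketch (trial division here stands in for "solving" — it is not how RSA would actually be attacked):

```python
def verify_factorisation(n, p, q):
    # Verification: one multiplication -- polynomial with a tiny exponent.
    return p > 1 and q > 1 and p * q == n

def factor_trial_division(n):
    # Solving: naive search, exponential in the bit-length of n.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime
```

Even if a polynomial-time factoring algorithm existed, its exponent could dwarf the exponent of the verification step, which is the commenter's point.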
This is what bothers me the most: people saying that P = NP implies that solving a problem is as difficult as verifying a solution. That's simply not true as stated. P = NP implies that solving a problem is as difficult as verifying a solution only up to a polynomial overhead (i.e., modulo polynomial reductions).
The part about ignoring polynomial overheads seems very important to me. Otherwise I could also say "all even numbers are the same" when what I actually mean is "all even numbers are the same (modulo 2)".
Would you care to convince me why we chose to quotient out polynomial overheads, rather than some other class of overheads?
The best explanation I can see is the fact that the various Turing machine formulations (single-tape, multi-tape, one-way infinite, two-way infinite, ...) and random-access machines all have polynomial reductions between them. But even this does not justify, in my eyes, the importance we give to P vs NP.
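One standard answer to "why polynomial?" is closure under composition: if a reduction runs in roughly c * n**a steps (so its output has size at most c * n**a) and the target solver runs in roughly m**b steps on a size-m input, the whole pipeline is still polynomial in n, with exponent about a * b. A toy sketch of that bound (the constant and the function name are illustrative, not from the thread):

```python
def pipeline_cost_bound(n, a, b, c=1):
    """Upper bound on total steps for: reduce (c * n**a steps, output
    size <= c * n**a), then solve the reduced instance (m**b steps on
    a size-m input). The dominant term is ~ n**(a*b): polynomial in,
    polynomial out, which is why the class P is closed under
    polynomial-time reductions."""
    m = c * n ** a              # size of the reduced instance
    return c * n ** a + m ** b  # reduction cost + solver cost
```

The same closure property fails for, say, exponential overheads, which is one reason polynomial overhead is the conventional dividing line — the machine-model robustness mentioned above is another.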
leereeves|7 years ago
QML|7 years ago
However, even in an exponential world, I am reminded of a quote:
"exponential algorithms make polynomially slow progress, while polynomial algorithms advance exponentially fast".
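The quote can be made concrete: if available operations double each period (a hypothetical ops(t) = 2**t), the largest solvable instance grows exponentially in t for a polynomial-cost algorithm, but only linearly in t for an exponential-cost one. A small sketch:

```python
def max_instance_size(total_ops, cost):
    """Largest n with cost(n) <= total_ops (linear scan; illustration only)."""
    n = 0
    while cost(n + 1) <= total_ops:
        n += 1
    return n

ops = 2 ** 30                                      # compute after 30 doublings
poly_n = max_instance_size(ops, lambda n: n ** 3)  # ~ 2**(30/3) = 1024
exp_n = max_instance_size(ops, lambda n: 2 ** n)   # ~ 30
```

Each extra doubling multiplies the feasible instance size by about 1.26 for the n**3 algorithm but adds just 1 to it for the 2**n algorithm — "polynomially slow progress" versus "exponentially fast" advance.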
babypistol|7 years ago
pbhjpbhj|7 years ago
So, not only do you have the resource problem highlighted in sibling posts but also a recursion problem:
Viz.: when the problem becomes solvable, the optimisation must add the market transactions for the sale of the resources used to solve it, so now one needs to optimise that larger problem ... which again involves transactions, which shift the original optimisation.
If you can predict the perturbations needed to solve the enlarged system, then you might be able to break the cycle ... but won't you need to transact the processing of that perturbation, the optimisation of which will again shift the inner optimisation?
The solution itself would be marketable, perturbing the market and negating the optimisation?
My instinct may be wrong, but it suggests the market can't be [perfectly] optimised (though in practice that probably doesn't matter).
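The cycle described above can be phrased as a fixed-point problem: a plan "breaks the cycle" only if executing it leaves unchanged the very market state it was computed against. A toy sketch, where `optimise` and `perturb` are hypothetical placeholders (nothing here models a real market):

```python
def self_consistent_plan(optimise, perturb, state, max_iter=1000, tol=1e-9):
    """Iterate: compute a plan for the current market state, then apply
    the perturbation that executing the plan would cause. Returns
    (plan, state) at a fixed point, or None if the loop never settles --
    matching the instinct above that a perfect optimisation may not
    exist."""
    for _ in range(max_iter):
        plan = optimise(state)
        new_state = perturb(state, plan)
        if abs(new_state - state) < tol:
            return plan, new_state
        state = new_state
    return None
```

Whether the iteration converges depends entirely on how strongly executing the plan perturbs the market: a contraction settles to a fixed point, while a stronger feedback can oscillate forever.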
Optimising markets might be the end goal of capitalism, but it's not the end goal of the rich (in general), who would rather remain rich than have a perfect allocation of resources.