item 1940573

Is Complexity Theory On The Brink?

56 points | wglb | 15 years ago | rjlipton.wordpress.com

13 comments

[+] amichail | 15 years ago
Do you think complexity theory gets more attention than it deserves given its impact on practical computing?
[+] ihodes | 15 years ago
Though I'm no complexity theorist, I can assure you that complexity theory is one of the biggest math/CS fields that has everyday, practical impact on "practical computing."

If you use any sort of non-trivial algorithms handling a non-trivial amount of data, you damn well better be interested in the relationship between the runtime of the algorithm and the number of things you're asking it to handle (Big O).
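To make the growth rates concrete, here's a toy sketch (operation counts, not wall-clock time; the helper names are made up for illustration):

```python
import math

def quadratic_ops(n):
    """Operation count for a naive all-pairs algorithm, e.g. a bubble-style sort."""
    return n * n

def nlogn_ops(n):
    """Rough operation count for a divide-and-conquer sort like mergesort."""
    return int(n * math.log2(n)) if n > 1 else 1

# Going from n = 1,000 to n = 1,000,000 multiplies the O(n^2) cost by a
# factor of 10^6, but the O(n log n) cost by only about 2,000x.
for n in (1_000, 1_000_000):
    print(n, quadratic_ops(n), nlogn_ops(n))
```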

If you're designing a novel algorithm (or at least, novel to you) you should at least have a passing familiarity with complexity classes. It's funny how often I hear about (or see first-hand) someone trying to figure out why their code won't always work, only to have someone point out that their problem is NP-complete.
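For a concrete sense of what hitting an NP-complete wall looks like, here's a brute-force solver for subset sum, a classic NP-complete problem (a toy of my own, not anyone's production code). It's correct, but it enumerates up to 2^n subsets, which is why code like this works in testing and dies on real inputs:

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute force for subset sum (NP-complete): try all 2^n subsets."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

# Fine for 6 numbers; hopeless for 60, since 2^60 is about 10^18 subsets.
print(subset_sum([3, 34, 4, 12, 5, 2], 9))
```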

On top of all that, complexity theory supplies all sorts of better algorithms (or rather, methods to improve whole classes of algorithms) that can be applied, immediately or eventually, to your everyday problems.

Perhaps you don't need to deal with a lot of these problems on a daily basis, but chances are that some of the tools you use were built by people who worried about them.

It's a neat field, and I'm excited to see these advances.

[+] mturmon | 15 years ago
Here's an example of the kind of research illustrating the gap between the priorities of complexity theorists and those of people who just want efficient algorithms:

It's a hard problem to approximate the volume of a convex body in n-dimensional space, given a membership oracle. (An oracle is an O(1) function that tells you whether a point is inside the convex body.) Is it exponential time in the number of dimensions (e.g., requiring an exponential number of calls to the oracle)? This is a fundamental question, and it turns out to be important in lots of applications.
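A minimal sketch of the setup (all names here are mine): naive Monte Carlo estimation against a membership oracle. It works fine in 2 or 3 dimensions, but the fraction of bounding-cube samples that land inside the body decays exponentially with dimension, which is exactly why the naive approach needs exponentially many oracle calls and why the randomized polynomial-time results were such a surprise:

```python
import random

def estimate_volume(oracle, dim, n_samples=100_000, seed=0):
    """Naive Monte Carlo: sample the bounding cube [-1,1]^dim, count oracle hits.

    The hit rate collapses exponentially as dim grows, so this only
    works in low dimension -- the whole point of the hardness result."""
    rng = random.Random(seed)
    hits = sum(
        oracle([rng.uniform(-1.0, 1.0) for _ in range(dim)])
        for _ in range(n_samples)
    )
    return (2.0 ** dim) * hits / n_samples

# Oracle for the unit ball: an O(1) membership test, nothing else exposed.
in_unit_ball = lambda p: sum(x * x for x in p) <= 1.0

print(estimate_volume(in_unit_ball, 2))  # should land close to pi
```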

The following 1991 paper:

http://www.stat.duke.edu/~scs/Courses/Stat376/Papers/Converg...

caused quite a stir in the field. It has 380 citations. It proved, without explicit construction, that there is a (randomized) approximation algorithm that is order n^23, with a huge, and unknown, leading constant. It was totally useless in applications, but totally exciting for complexity theorists.

Since then, people have been plugging away. The most recent result I can find, by Laszlo Lovasz, appeared in 2003 and gives an O(n^4) algorithm. It's also randomized and very complex, and let's just say that the constant has got to be huge as well.

The problem for practitioners is, you never know when all that plugging away is going to produce a good algorithm that is easy enough to implement. If it happens, it can totally change the computational landscape.

The FFT is my favorite example; its discovery has revolutionized all sorts of computations.
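To show what that kind of revolution looks like in code, here's a bare-bones radix-2 Cooley-Tukey FFT sketch of my own (input length must be a power of two). It computes the same DFT as the obvious O(n^2) double loop, but in O(n log n), which is the speedup that made large-scale spectral computation practical:

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    # Recurse on even- and odd-indexed halves, then combine with twiddle factors.
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

print(fft([1, 0, 1, 0]))
```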