FTA: "Freedman was 30 when he solved a version of one of the longest-standing problems in mathematics, the Poincaré conjecture."
Not true; he apparently made contributions to the Poincaré conjecture in dimension 4. Also, the article's link from "Poincaré conjecture" to a Clay Institute webpage is broken.
This is what it says about him on the official list of Fields medalists: "Developed new methods for topological analysis of four-manifolds. One of his results is a proof of the four-dimensional Poincaré Conjecture".
The article's statement you quoted is accurate enough to not be "not true".
Can anyone expound further on the practical applications of quantum computing? In my limited understanding, I think the following are definitely candidates (presented in no particular order), but I'm sure there are others:
1. Shor's algorithm could break all encryption schemes whose security rests on the hardness of integer factorization.
2. Quantum simulation could open new avenues of research into how our universe operates at the quantum level. This could lead to advancements in materials science, for example.
3. Quantum computing could open new avenues of research into the P versus NP problem.
4. Quantum computing could open the door to the possibility of instantaneous communication via an understanding of action at a distance / quantum entanglement.
Edit: Thanks for all of the great responses, clarifications and links to further reading.
3. P vs. NP is a problem phrased in the language of classical Turing machines, so in that sense quantum computing has no bearing on it. It is also unknown whether NP-complete problems can be solved efficiently on quantum computers.
4. This is a common misunderstanding about entanglement. Unfortunately, while entanglement does allow non-classical long-distance correlations, it does not allow communication. Think of it this way: suppose we took two random number generators that start with the same seed, and thus produce the same stream of bits. You take yours over there while I keep mine over here. This doesn't help us communicate faster, because neither of us gets to pick the bits that come out of the generator. So too with quantum entanglement.
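The seeded-generator analogy is easy to demo. A minimal Python sketch (the names are mine):

```python
import random

# Two parties share the same seed ahead of time -- the analogue of
# sharing entangled pairs before separating.
alice = random.Random(42)
bob = random.Random(42)

alice_bits = [alice.randint(0, 1) for _ in range(10)]
bob_bits = [bob.randint(0, 1) for _ in range(10)]

# The two streams are perfectly correlated...
assert alice_bits == bob_bits
# ...but neither party chose the bits, so the streams carry no message.
print(alice_bits)
```

Perfect correlation, zero communication: exactly the distinction the no-communication theorem formalizes for entanglement.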
> Can anyone expound further on the practical applications of quantum computing?
My research is in molecular dynamics simulations and quantum chemistry. A lot of progress has been made in the field in the last few decades; however, there is a roadblock right now, called the fermion sign problem, that no one really knows how to get around. If we want to simulate atoms or molecules as accurately as we can, the number of resources we require scales exponentially with the number of particles. This is basically due to the Coulombic (electrostatic) interactions between the electrons in the atom. Accounting for these interactions gives many materials their interesting properties, but we've only been able to come up with approximations for solving these systems.
Quantum computing would allow these systems to be solved in polynomial time. The advantage would be enormous; in some cases, atomic simulation results would match experimental ones exactly (glossing over a huge number of details here), and that is a very exciting notion.
Quantum machine learning would be drastically sped up, as many linear-algebra operations could be done in O(log N) time: http://arxiv.org/abs/1307.0411
A quantum machine learning computer could make scientific advances simply by processing large datasets at rate dramatically faster than humans and/or current machine learning systems.
Does anyone know, or can anyone explain simply, why we can't get desktop-computer-like power from 1 or 2 qubits if we can get supercomputer-like power from 100 qubits? I assume that the computational capacity increases exponentially with the number of qubits, but how?
So, the deal is that quantum computers don't do everything for you. They can do some things faster than any classical algorithm, but they're not known to, say, do NP-complete problems in polynomial time. (Friendly reminder: we don't even know whether classical computers can do that or not.)
Supposing that you've got, say, three qubits which can each be 0 or 1, a classical computer can be in one of eight states 000, 001, ..., 110, 111, and we can easily store one of those numbers in memory. For a quantum computer, we instead have to store 16 doubles in memory, because it is allowed to be in any state

|S> = a0|000> + a1|001> + a2|010> + a3|011> + a4|100> + a5|101> + a6|110> + a7|111>

such that the complex numbers a0, a1, ..., a7 square-sum to 1 (|a0|^2 + |a1|^2 + ... + |a7|^2 = 1). In other words, for n qubits you generally need to store 2^n complex numbers.
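To make the bookkeeping concrete, here is a small sketch (plain NumPy; the state chosen is my own example) of storing such a state and watching the memory cost blow up:

```python
import numpy as np

n = 3                                  # number of qubits
amps = np.zeros(2**n, dtype=complex)   # the 2^n complex amplitudes a0..a7
amps[0b010] = 1 / np.sqrt(2)           # put weight on |010>
amps[0b111] = 1 / np.sqrt(2)           # and on |111>

# The amplitudes must square-sum to 1.
assert np.isclose(np.sum(np.abs(amps)**2), 1.0)

# Each amplitude is one complex double = 16 bytes, so memory grows as 16 * 2^n.
for n_bits in (3, 30, 50):
    print(n_bits, "qubits ->", 16 * 2**n_bits, "bytes")
```

At 50 qubits the naive representation is already around 18 petabytes, which is why classical simulation hits a wall so quickly.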
There are rough-and-ready ways to get around a bit of this in simulation with a sort of 'lossy compression' approach. For example: you form the "state matrix" (take your row-vector and form an outer product with its conjugate transpose), diagonalize it, and keep only the eigenvectors for the largest eigenvalues. If your system is simple enough, you might be able to reduce the bulk of the calculation to "what's happening to these eigenvalues?" and leave out any explicit further representation of the vectors themselves.
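A sketch of that truncation trick (my toy example; for a pure state the "state matrix" has rank 1, so keeping the top eigenpair is already exact, but the same machinery is what gets truncated for messier states):

```python
import numpy as np

# A 3-qubit state vector: 8 amplitudes, here the uniform superposition.
psi = np.ones(8, dtype=complex) / np.sqrt(8)

# "State matrix": outer product of the vector with its conjugate transpose.
rho = np.outer(psi, psi.conj())

# Diagonalize (rho is Hermitian); eigh returns eigenvalues in ascending order.
evals, evecs = np.linalg.eigh(rho)
order = np.argsort(evals)[::-1]         # sort descending instead

k = 2                                   # keep only the top-k eigenpairs
kept_vals = evals[order[:k]]
kept_vecs = evecs[:, order[:k]]

# Rebuild the matrix from the kept eigenpairs: sum_i lambda_i v_i v_i^dagger.
rho_approx = (kept_vecs * kept_vals) @ kept_vecs.conj().T

# For this rank-1 rho, the truncation loses nothing.
assert np.allclose(rho, rho_approx)
```

The savings come when most eigenvalues are negligible: you store k eigenvectors instead of a full 2^n x 2^n matrix.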
Now, here's where this gets interesting: the world is quantum, especially when we're talking about molecular chemistry. So, we have these chemicals which obey quantum mechanics and they have this huge quantum state which is cumbersome to compute with. But with a quantum computer, we can just program those rules and that behavior into the computer and see what it does with the physics that our classical computers can only barely simulate. So that's why these quantum computers, when they're doing what they're good at, beat out our supercomputers.
But why can't they beat out our supercomputers unilaterally? Because that's just not what they do, and for the most part these 2^n complex numbers which tell us the quantum state are not directly observable.
Let me put it like this: at the end of the day, when you measure the |S> above in the computational basis, you will get 3 classical bits of information and the quantum state will be destroyed: you measured, you found |010>, and that is the state of the system now. Prepare millions of these and you will find some statistics for |000>, |001>, |010>, and so on, and those statistics will give you a few decimal places for |a0|, |a1|, |a2|, ..., with no information about the phase factors of the complex numbers involved. It's not terribly enlightening by itself.
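You can mimic this prepare-and-measure loop classically for a small state. A sketch (the amplitudes are my made-up example, including a phase that the statistics cannot see):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 3-qubit state: amplitudes a0..a7 for |000>..|111>. Note the j on a2:
# that phase is real information, but measurement statistics never show it.
amps = np.array([0.6, 0, 0.8j, 0, 0, 0, 0, 0], dtype=complex)
probs = np.abs(amps)**2                 # Born rule: P(outcome i) = |a_i|^2
assert np.isclose(probs.sum(), 1.0)

# Each "shot" prepares the state and measures once, destroying it.
shots = 1_000_000
outcomes = rng.choice(8, size=shots, p=probs)
counts = np.bincount(outcomes, minlength=8)

# The statistics recover |a0|^2 and |a2|^2 to a few decimal places.
print(counts[0] / shots)   # close to 0.36
print(counts[2] / shots)   # close to 0.64
```

A million shots buys you roughly three decimal places of |a_i|^2 and exactly zero bits about the phases, which is the point.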
But then there are some generic problems, like "what is the only n-bit input to this set of logic gates which makes the result bit come out 1?", where you can do better. The answer is only n bits, but our only classical approach searches through all 2^n settings of those bits to find it. It turns out that if you can make your logic gates act their normal way on a quantum state, you can pass one of these a-little-of-everything superpositions as input and then repeatedly nudge the amplitudes toward the answer (this is Grover's algorithm); after only about sqrt(2^n) queries to the quantum versions of those logic gates, instead of 2^n, a measurement will, with high probability, give you the bits which make the gates output 1. For problems with more structure, like the period-finding at the heart of factoring, a quantum Fourier transform lets you extract the answer from even fewer queries.
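For small n you can simulate that amplitude-nudging directly. A sketch of Grover's algorithm on 4 qubits (my example, not from the thread); note the roughly sqrt(N) query count:

```python
import numpy as np

n = 4                        # qubits
N = 2**n                     # size of the search space
marked = 0b1011              # the one input that makes the circuit output 1

# Start in the uniform superposition over all N inputs.
state = np.full(N, 1 / np.sqrt(N))

# Optimal iteration count is about (pi/4) * sqrt(N) -- not N.
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1                   # oracle: flip the marked phase
    state = 2 * state.mean() - state      # diffusion: invert about the mean

p_success = abs(state[marked])**2
print(iterations, p_success)   # 3 iterations, success probability ~0.96
```

For N = 16, just 3 oracle queries get you a ~96% chance of reading out the marked input, versus an average of 8 classical tries. The speedup is quadratic, not exponential.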
So for some problems, like factoring integers, you really do get superpolynomial speedups from quantum computers, and for searching for special inputs to a process you get a quadratic one. However, these algorithms are really special-purpose stuff; quantum computers are not known to make arbitrary computations faster. They will make some specific computations, like chemistry calculations, faster.
With four qubits we have been able to factor the 4-bit number 15 into 3 * 5. That's the biggest that I know we've built, although some companies (like D-Wave) claim that they have built bigger systems under a different model of quantum computation (and hence can't do the factoring algorithm straightforwardly). The problem right now is that you need these big experimental apparatuses to cool, manipulate, and read out values from these quantum computers, and while you're doing those tests the clock is ticking -- states don't stay 'coherent' for more than a fraction of a second, after which they act pretty classical. There's just not much we can do with these systems until we get them to be always-online and stuff.
Finally, 1 and 2 qubits don't really show off all of the diversity of quantum mechanics; for one thought-experiment that I like to use to show people the strangeness of quantum mechanics ("Betrayal"), I determined that you needed three people playing the game together, no less, to really get the paradoxical result.
A quantum computer isn't really much like a super-powered classical computer. There are certain specific things it's amazingly good at (most notably, factorizing large integers), some other things it's quite good at (e.g., some kinds of search operation), and lots of things for which no one knows how to make quantum computers any faster than classical ones.
So, if you have a quantum computer, you're probably going to want to use it for breaking RSA or similar cryptography by factorizing large numbers.
There are (ordinary, non-quantum) algorithms for factorizing large-ish numbers. They work pretty well for numbers of modest size, but they don't scale well as the numbers get very large: something like exp(c · (log n)^(1/3) · (log log n)^(2/3)) steps to factorize a number near n, for some constant c. Or, in terms of the number of bits b, something like exp(c' · b^(1/3) · (log b)^(2/3)).
So this works fine for numbers up to, let's say, 200 bits on a single machine or 1000 bits for a team with a substantial network. (Note: all numbers here are kinda made up but of the right order of magnitude.)
With a quantum computer, you can factorize a b-bit number with something like 2b qubits, b^3 quantum gates, and a runtime that scales more manageably with b.
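Plugging those rough formulas into code shows how differently the two costs grow with b (the constants here are made up too, in the same spirit as the comment's numbers):

```python
import math

def gnfs_ops(b):
    """Rough classical cost to factor a b-bit number, GNFS-style scaling:
    exp(c * b^(1/3) * (log b)^(2/3)), with c ~ 1.9 chosen arbitrarily."""
    return math.exp(1.9 * b**(1/3) * math.log(b)**(2/3))

def shor_gates(b):
    """Rough quantum gate count for Shor's algorithm: ~ b^3."""
    return b**3

for b in (200, 1000, 2000):
    print(f"{b} bits: classical ~ {gnfs_ops(b):.2e} ops, "
          f"quantum ~ {shor_gates(b):.2e} gates")
```

The classical column is sub-exponential but still explodes; the quantum column stays polynomial, which is why the crossover happens over a narrow range of key sizes.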
So there's not much point in using a quantum computer until the number of qubits it has gets large enough to use for factorizing numbers bigger than you can do on your PC. Which means somewhere on the order of 100 qubits. As the number goes up, the advantage of the quantum computer over the classical computer increases. The transition from "slower than your PC" to "faster than any classical computer that will ever be built" takes place over a fairly small range of qubit-count.
At much smaller numbers of qubits, though, the quantum computer can maybe manage to find the factors of, say, 15. Which is, shall we say, quite a lot less than the computer on your desktop can do.
Well, we don't really know if the speedup is exactly exponential; we also don't really know that a quantum computer is fundamentally more powerful than a classical one (and we don't know whether P == NP, which is related).
That said, yes, the observed speedup on some [1] problems is exponential. It's easy to wave hands and say that when you place the qubits in a coherent state together, the operations you do on them apply to all of the values they can possibly carry, and those grow exponentially with the number of qubits. But the long explanation makes things much clearer, and is worth reading if you want to really get it. Wikipedia is a good starting point [2][3].
[1] Not all by a wide margin. Quantum computers will be specialized machines for a long time, if not forever. But cryptography will take a huge hit.
With 1 or 2 qubits you wouldn't even be able to compute 2 + 2 because storing 4 in binary requires 3 bits. 100 qubits is also not particularly useful, not because they aren't fast but because most of the interesting stuff simply doesn't fit. RSA keys are thousands of bits long, for example.
You need enough qubits to store the problem, and the working state of the algorithm you're using, and the overhead introduced by error correcting codes, and the overhead from your gates not being ideal, and so on. Austin Fowler gave a talk a few months ago where he goes through the overheads in a particular problem (finding molecular ground state energies) and ends up optimistically requiring tens of millions of qubits [1].
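A back-of-the-envelope illustration of the "it has to fit" point (the 2b-qubits-for-a-b-bit-number figure is the rough rule quoted earlier in the thread; 2048 bits is just a typical modern RSA key size):

```python
# You can't even represent a value in fewer (qu)bits than its binary length.
assert (4).bit_length() == 3          # 4 = 0b100, so "2 + 2" needs 3 bits

rsa_modulus_bits = 2048               # a typical RSA key size today

# Shor's algorithm needs roughly 2b work qubits for a b-bit modulus,
# before any error-correction or gate-imperfection overhead is added.
logical_qubits = 2 * rsa_modulus_bits
print(logical_qubits)                 # 4096 logical qubits, minimum
```

And that 4096 is the optimistic logical count; with error-correction overhead it balloons by orders of magnitude, which is where Fowler's tens-of-millions estimate comes from.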
DubiousPusher | 11 years ago:
http://www.sciencedaily.com/releases/2014/10/141002141757.ht...
Zaheer | 11 years ago:
http://www.wired.com/2014/03/quantum-crypto-google/
thinkling | 11 years ago:
http://research.microsoft.com/en-us/labs/default.aspx
powrtoch | 11 years ago:
http://en.wikipedia.org/wiki/No-communication_theorem
Strilanc | 11 years ago:
http://strilanc.com/quantum/2014/05/03/Storing-Bandwidth-wit...
ianstallings | 11 years ago:
http://www.zdnet.com/microsoft-to-close-microsoft-research-l...
Do they have others?
[2] http://en.wikipedia.org/wiki/Quantum_computer
[3] http://en.wikipedia.org/wiki/Shor%27s_algorithm
1: https://www.youtube.com/watch?v=XbFoXT73xVQ&feature=youtu.be...