I'm going to try to explain this in programming terms :)
There's a longstanding interpretation that wave-particle duality is actually caused by particles (e.g. electrons) riding invisible waves. This interpretation (Bohmian mechanics) has been largely ignored as "valid but uninteresting."
A key reason this interpretation hasn't historically been that appealing is that the state of the "invisible waves" in the neighborhood of a single particle is mathematically dependent on the instantaneous state of every other particle in the universe.
If you assume the universe is a simulation, then the programmer would be a freshman computer science student who unnecessarily made stepping forward in time quadratic in the number of particles!
The oil-droplet experiments are an accidental existence proof that you can get quantum-like behavior in particle+wave systems without the quadratic global update rule.
Is it enough? There are still tons of open questions, and it's possibly (likely) yet another dead end.
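To make the complexity point concrete, here's a toy sketch of the two update rules (entirely made-up kernels, field, and names on my part, not anything from the actual physics): a quadratic all-pairs step in the Bohmian spirit versus a cheap local-field step in the oil-droplet "walker" spirit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
pos = rng.normal(size=(n, 2))

def quadratic_step(pos, dt=1e-3):
    # O(n^2): the guiding "wave" at each particle depends on the
    # instantaneous positions of all other particles (toy kernel).
    disp = pos[None, :, :] - pos[:, None, :]          # (n, n, 2)
    dist = np.linalg.norm(disp, axis=-1)              # (n, n)
    np.fill_diagonal(dist, np.inf)                    # ignore self-interaction
    drift = (disp / dist[..., None] ** 3).sum(axis=1)
    return pos + dt * drift

def walker_step(pos, field, dt=1e-3, decay=0.99):
    # O(n): each droplet reads and writes only the local wave field it has
    # itself excited; no global, instantaneous all-pairs dependence.
    field *= decay                                    # standing wave fades
    ij = np.clip(((pos + 4) * 16).astype(int), 0, 127)
    gr, gc = np.gradient(field)                       # local slope of the field
    drift = np.stack([gr[ij[:, 0], ij[:, 1]], gc[ij[:, 0], ij[:, 1]]], axis=1)
    np.add.at(field, (ij[:, 0], ij[:, 1]), 1.0)       # deposit a new ripple
    return pos + dt * drift, field

field = np.zeros((128, 128))
pos_local, field = walker_step(pos, field)            # cheap, local
pos_global = quadratic_step(pos)                      # expensive, global
```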
My understanding of 'natural' physics is that it's a relation between each particle and every other particle in the universe, and that our physics calculations, in trying to understand and predict those natural events, wave away whichever relations are valued insignificant relative to the desired probability threshold.
A contrived but applicable example: "assume you are on a frictionless plane with zero wind resistance".
Or, from one of the most amazing results of applied physics in recent news:
"Fred Jansen, Rosetta's mission manager at ESA, said officials predicted a 70 to 75 percent probability of a successful landing by Philae before the mission's launch in 2004. But that number assumed the comet was a rounded body, not the oddly-shaped world found by Rosetta."
I feel as though this hypothesis was recently disproven - a measurement confirmed that the waves and particles were really the same thing, not one thing riding on the carrier wave of another.
Is there a good resource out there for laymen to explain "quantum stuff" in a meaningful way?
When I read articles that use words like "spooky" and "weird" as terms of art, and use Schrödinger's cat as a way to clarify a topic, I get nothing out of them. And I'd like to be able to explain to my mom what this stuff is.
I read this book in high-school and learned about as much as I did in 4 undergrad physics courses about quantum mechanics.
There's no mathematics in this book, so it doesn't teach you how to calculate anything (for this, a physics degree helps), but the principles are presented very nicely.
http://www.askamathematician.com/2011/11/q-according-to-the-...
http://www.askamathematician.com/2010/10/q-copenhagen-or-man...
http://www.askamathematician.com/2013/08/q-are-there-example...
http://www.askamathematician.com/2011/11/entanglement-omnibu...
http://www.askamathematician.com/2010/06/q-how-it-is-that-be...
† The Nature article characterises the Many-Worlds Interpretation like this:
"In the many-worlds picture, the wavefunction governs the evolution of reality so profoundly that whenever a quantum measurement is made, the Universe splits into parallel copies."
The first of the links above explains that this is a mischaracterisation.
It depends on what you mean by layman. I found the book Quantum Computing Since Democritus by Scott Aaronson to be very easy to understand, but it does require some linear algebra and complex numbers. I don't think you can get a good overview of entanglement (where the madness lies) without that basic level.
Check http://www.scottaaronson.com/democritus/ and see if it works for you.
Otherwise, you are in the realm of popularizers like John Gribbin, who I found entertaining but not very enlightening.
It's very hard to ELI5 quantum mechanics. The only thing we can definitively say is that we have a mathematical system that describes the universe very well at certain time/energy/distance scales. Trying to ascribe meaning to this math is an ongoing struggle.
As an aside, the terms "spooky" and "weird" are used to describe phenomena that "have no classical analogue." Physicists don't agree on what Schrödinger's cat means; I find that it muddles more than it clarifies.
In the early 20th century, math ordinarily used to describe waves was applied to very small systems of particles. This was both strange and extraordinarily successful at modeling systems (if only probabilistically). Applying physical meaning to the math has proven to be more or less intractable. Because the models are so successful in their domain, most physicists tend to brush aside ascribing meaning to the math.
There are a lot of examples in the book "In Search of Schrödinger's Cat" that are explained in layman's terms: the double-slit experiment, radioactive decay rates, and the polarization of light are the ones I can remember right now, but the book has many more.
Feynman goes into greater depth in the Douglas Robb Memorial Lectures series: http://vega.org.uk/video/subseries/8
If you want to go all the way down the rabbit hole there's an exhaustive list of Feynman videos here :) http://www.richard-feynman.net/videos.htm
https://www.youtube.com/watch?v=DbCl4p5TDPc
One way of understanding it, which I currently favour, is that by "observing" the result you yourself also end up in a superposition of states, one for each possible "observation". And since by observing it you've "amplified" the result, there is suddenly a large difference between those states. These states are then no longer (as heavily) "coherent", and so you will only see the effects from a small region of the states. Most quantum effects will have vanished, since those are a consequence of the "coherence" between those states. So from your perspective it looks like the wave function has "collapsed".
For instance, say you're performing some slit experiment. Directly after you launch the particle there is still a very broad spectrum of possible positions/states, which are all heavily correlated, since states that differ only by the position of one particle are in some sense "close". However, if the particle hits something, then suddenly a very large number of particles can end up with different positions depending on where the initial particle hit, so the difference between those states increases immensely, causing them to become decoherent. Hence, from the perspective of any of the resulting states, there is only a very small region where the particle could have hit; anything else becomes vanishingly unlikely.
This interpretation has some rather interesting issues when you try to interpret what consciousness is, but it's the most consistent one I've found so far.
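Here's a minimal numerical sketch of that decoherence story (my own toy model with made-up wave packets, assuming numpy): each slit's branch gets tagged by an environment state, and the interference term at the screen is weighted by the overlap of those environment states. When the overlap drops to zero, the fringes vanish and the statistics look "collapsed".

```python
import numpy as np

# Two branches of the particle's wavefunction at the screen: toy Gaussian
# wave packets coming from slit A and slit B.
x = np.linspace(-5, 5, 2001)
psi_a = np.exp(-(x - 0.5) ** 2 + 3j * x)
psi_b = np.exp(-(x + 0.5) ** 2 - 3j * x)

def screen_intensity(env_overlap):
    """Detection density when each branch is correlated with an environment
    state; env_overlap = <E_A|E_B>: 1 = identical environments (fully
    coherent), 0 = orthogonal environments (fully decohered)."""
    direct = np.abs(psi_a) ** 2 + np.abs(psi_b) ** 2
    interference = 2 * np.real(np.conj(psi_a) * psi_b * env_overlap)
    return direct + interference

coherent = screen_intensity(1.0)    # visible fringes
decohered = screen_intensity(0.0)   # fringes gone: a classical-looking mix
mid = len(x) // 2
print(coherent[mid], decohered[mid])  # dark fringe at x=0 vs. no fringe at all
```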
What if we don't have a human observe? What if all data is only ever interpreted by canines and bovines (dogs and cows)? Will the wave function still collapse?
How would you isolate an experiment from the potentially-wave-function-collapsing influence of human consciousness, and yet still produce a usable contribution to science?
> Why does observation cause the wave function to collapse?
According to the Many-Worlds Interpretation, it doesn't. Both the thing being observed and the observer exist in many states simultaneously (a superposition of states). Before the observation, those states were independent; the act of observation causes the observer and observed to become entangled, so each state of the observer correlates to a single state of the thing observed. From the observer's point of view, there appears to have been a collapse; but the result of the "collapse" will be different for each of the observer's states.
(Caveat: I am not a physicist, and it's possible the above is not even wrong.)
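That picture is easy to see in a toy linear-algebra sketch (my own illustration, not from any textbook treatment): model "observation" as a unitary interaction that correlates a two-state observer with a qubit. Nothing collapses; the joint state simply stops being a product.

```python
import numpy as np

system = np.array([1, 1]) / np.sqrt(2)   # qubit in an equal superposition
observer = np.array([1, 0])              # observer in a "ready" state

before = np.kron(system, observer)       # product state: still independent

# "Observation" as a unitary interaction: flip the observer's state iff the
# system is |1> (a CNOT gate). No collapse happens anywhere in this step.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
after = cnot @ before

print(before)  # [0.707, 0, 0.707, 0]: |0,ready> + |1,ready>
print(after)   # [0.707, 0, 0, 0.707]: |0,saw-0> + |1,saw-1> -- entangled,
               # each observer state correlates with one system state
```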
Observer is a poor choice of words. A single atom can cause a wave function to collapse, and the next interaction with that atom then propagates that collapse down the chain. So your desk chair, being a complex system, operates just fine as an observer. *
Though, this being HN: think of it like the edge of the Bitcoin blockchain, where two new blocks can be in a contested state, but over time the longest chain wins.
* Not that this is a useful model, but it's much closer than the often-repeated voodoo mysticism people spout.
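For the blockchain analogy, a tiny sketch of the fork-choice idea (illustrative names only, nothing Bitcoin-specific):

```python
# Two competing branch tips coexist for a while; the history that "wins" is
# simply whichever chain grows longest, and the other tip gets abandoned.
chains = {
    "tip_a": ["genesis", "b1", "b2", "b3a"],
    "tip_b": ["genesis", "b1", "b2", "b3b", "b4b"],
}

def canonical_chain(chains):
    return max(chains.values(), key=len)   # longest-chain rule

print(canonical_chain(chains))  # tip_b's branch becomes the settled history
```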
>Why does observation cause the wave function to collapse?
It becomes much clearer (at least to me) once "observation" is replaced with "interaction", because the former is really not possible without at least a quantum of the latter.
> Why does observation cause the wave function to collapse?
It doesn't. Classical behavior is an emergent property of any large system of mutually entangled particles. For macroscopic systems classical behavior is a damn good approximation, but it's only an approximation. See:
http://www.flownet.com/ron/QM.pdf
Or the movie version:
https://www.youtube.com/watch?v=dEaecUuEqfc
I think it may ultimately turn out that the best explanation for quantum mechanics is simply the set of postulates we already have, minus the Born rule. It's well known that a quantum system continues to evolve unitarily (deterministically) via the propagation operator regardless of whether any subsystems have collapsed. How can you have a discontinuous subsystem within a larger continuous system? You can't.
The explanation for the appearance of collapse lies in the phenomenon of decoherence, which basically says that subsystems tend to quickly evolve into something resembling an eigenstate. This evolution must necessarily occur on an incredibly short timescale. It might be possible to design an experiment that would test the assumption that collapse is instantaneous.
I think the best definition of "collapse" is that it is the moment in time in which a particular system can no longer be described (to good approximation) as the direct product of two subsystems (see http://en.wikipedia.org/wiki/Separable_state). The concept of a "good approximation" is of course subjective, but it can always be objectively metricized (totally made that word up) by using some kind of error term.
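That error term is easy to make concrete for pure states (a sketch of my own, assuming numpy; this is just the Schmidt decomposition, and mixed states would need more machinery): reshape the state vector into a matrix, take its singular values, and measure the weight outside the largest Schmidt term.

```python
import numpy as np

def separability_error(psi, d_a, d_b):
    """0 when a pure state on A (dim d_a) x B (dim d_b) is a direct product;
    grows as entanglement increases."""
    schmidt = np.linalg.svd(psi.reshape(d_a, d_b), compute_uv=False)
    return 1.0 - schmidt[0] ** 2     # weight outside the best product term

plus = np.array([1, 1]) / np.sqrt(2)
product = np.kron(np.array([1, 0]), plus)      # |0> (x) |+>  -- separable
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # maximally entangled

print(separability_error(product, 2, 2))  # ~0.0: still a direct product
print(separability_error(bell, 2, 2))     # 0.5: no product description --
                                          # the moment we'd call "collapse"
```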
Epigrammatically, collapse is not so much a physical process as it is a characterization of the capability to represent a quantum state in a specific mathematical form.
(Of course, this doesn't preclude you from categorizing physical processes as "collapse events"; it just means that collapse isn't a fundamental phenomenon so much as it is an emergent one. Kind of like quasiparticles.)
So what about the randomness? I think it's better to refer to it as unpredictability. The difference is subtle but crucial. True randomness (assuming it exists) is the result of absolute indeterminism. On the other hand, if eigenstate selection is merely "unpredictable", then that implies collapse is in fact a deterministic process (specifically e^(-iHt) applied to Ψ over some time interval that we've decided to call a "measurement"); however, we're unable to extract enough information from the environment to make exact predictions because we ourselves constitute the required missing information. In other words, the information necessary for absolute predictive capability is trapped in the subsystem constituting the measuring environment, and it becomes lost when that subsystem becomes entangled with the subsystem being measured. And there's not really any way to prevent that from occurring, because entanglement must occur in order to learn anything about a system.
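To make the deterministic-process claim concrete, here's a sketch of e^(-iHt) doing all the work (a toy two-level Hamiltonian of my own choosing, assuming numpy/scipy, hbar = 1):

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.3],
              [0.3, -1.0]])             # any Hermitian matrix will do
psi0 = np.array([1.0, 0.0], dtype=complex)

def evolve(psi, t):
    # The whole story: apply e^{-iHt}. Deterministic, continuous, unitary.
    return expm(-1j * H * t) @ psi

psi1 = evolve(psi0, 2.0)
print(np.abs(psi1) ** 2)        # outcome probabilities shift smoothly
print(np.linalg.norm(psi1))     # still 1.0: no discontinuous jump anywhere
# Rerunning from psi0 gives the identical state every time; a "measurement"
# interval is just more of this same deterministic map.
```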
This even applies classically. The only difference is that classical entanglement occurs between localized physical boundaries instead of between subspace boundaries in an abstract Hilbert space.
To somewhat reify this, assume (for the sake of argument) that a classical description of physics is enough to describe a human. Then perform a large MD simulation of all the atoms inside a physics lab, including those of a physicist. The evolution of this simulated system is provably deterministic. Yet the physicist appears to have free will, and it appears like he is deciding which measurements to perform on his environment. But he's just an arbitrary collection of atoms that we've labeled "human", and he obeys the same time-transformation rules that the unlabeled atoms in the system obey. Mathematically, it's simply impossible for him to predict everything that occurs within the virtual system -- not because of indeterminism -- but because he isn't so much "choosing" what to measure as he is "appearing to choose". There's a limit to the amount of information any system can obtain about itself (well, maybe there's some fractals that are exceptions, but generally speaking, it holds true.)
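As a sketch of that thought experiment's premise (a toy MD integrator of my own, not a lab-scale simulation): velocity Verlet is a pure function of the current state, so rerunning it reproduces every trajectory exactly, physicist-atoms included.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    # Lennard-Jones pair forces (a standard toy potential).
    disp = pos[:, None, :] - pos[None, :, :]
    r2 = (disp ** 2).sum(axis=-1)
    np.fill_diagonal(r2, np.inf)
    inv6 = (sigma ** 2 / r2) ** 3
    mag = 24 * eps * (2 * inv6 ** 2 - inv6) / r2
    return (mag[..., None] * disp).sum(axis=1)

def verlet_step(pos, vel, dt=1e-3):
    # Velocity Verlet: a pure, deterministic map (pos, vel) -> (pos, vel).
    vel_half = vel + 0.5 * dt * lj_forces(pos)
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * lj_forces(pos_new)
    return pos_new, vel_new

# Eight atoms on a small cubic lattice; no randomness anywhere.
pos = 1.5 * np.array([[i, j, k] for i in range(2)
                      for j in range(2) for k in range(2)], dtype=float)
vel = np.zeros_like(pos)
for _ in range(100):
    pos, vel = verlet_step(pos, vel)
print(pos[0])  # bit-for-bit identical on every rerun
```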
That said, experiment is always the ultimate arbiter of truth, and I wonder if there might yet be some clever way to tell whether our universe is merely unpredictable rather than random, despite the possibility that both potential mechanisms might impose the same limits on predictive capability (in fact, Colbeck and Renner recently proved that QM is already maximally predictive, independent of whatever underlying mechanism governs eigenstate selection -- see http://www.nature.com/ncomms/journal/v2/n8/abs/ncomms1416.ht...)
Great analysis. Is there even a theory of what causes randomness? Where does it come from? Why should we believe it is distinct from "unpredictability"? Even in random number generators, the game is about pulling from wildly unpredictable sources to generate entropy; the word "random" may be a misnomer.
I've never believed in anything besides unpredictability in various scopes of systems. By scopes of systems, I mean, in some contexts of analysis it makes sense to deem a system temporarily closed to analyze certain parts of it. For example, the earth is not a closed system, but for some discussions and analysis it makes sense to simply treat it like one.
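The deterministic-but-unpredictable distinction shows up even in a one-line system. A quick sketch (the logistic map in its chaotic regime): two initial conditions differing by one part in 10^15 diverge completely within a few dozen iterations, so any finite knowledge of the state exhausts its predictive power, no "true randomness" required.

```python
def logistic(x, r=4.0):
    # Fully deterministic map; r = 4.0 puts it in the chaotic regime.
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-15   # indistinguishable to 15 decimal places
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(step, abs(a - b))  # the gap roughly doubles every iteration
```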
I don't know why it is so hard for this description - or paradigm - to proliferate to the masses and various pop writers. Writers are so often tying human consciousness to QM experimentation as if it were something special. The fact of the matter is that in each QM experiment the only things really interacting with the experiment are the atoms of the measurement apparatuses, sensors, and so on. In the case of the double-slit experiment, we could have the results "interpreted" automatically - say, kill a cat if an interference pattern is created and spare it if one is not. Making the discussion about consciousness is a distraction from the core issues.
The exploration of what the wavefunction "means" in terms of the world around us has been one of the more interesting things to watch. When I took quantum physics in school (under the generic heading "Modern Physics"), psi and quantum mechanics were treated as little more than a mathematical formalism, unlike thermodynamics, which was actually visible and "real". Between the teleportation work and the recent wavefunction work, it seems like a lot more layers of the universe are being revealed.
One question I have: if a pilot wave description of quantum mechanics were in fact the "right" one, would quantum computers be impossible? It's confusing because many articles claim that the various interpretations of QM yield the same predictions, and yet the irreducible randomness of the Copenhagen interpretation seems to be a prerequisite for quantum computing.
My own layman theory on this is that time has a wave-like nature to it, not the particles. The idea is to rewrite the wave equation solving for time, using the Minkowski metric (space and time are related through relativity).
The results should be the same, but it is a different interpretation.
Someone more knowledgeable about solitons can chime in on this...
A friend of mine modeled soliton interactions. When one soliton passes through another, information can be exchanged such that colliding solitons contain bits of each other when they move apart. No matter how far apart these solitons get, the soliton "children" still "chat" with their "parents". One soliton can contain multiple elements of other solitons, all of them interacting at a distance. Their behavior can mimic "spooky action at a distance". Also, solitons can have wave like behavior or particle like behavior depending on how they are observed.
Also, check out this fascinating debate about these matters on Scott Aaronson's blog: http://www.scottaaronson.com/blog/?p=1255
Brady and Anderson have many posts in this debate, and IMHO, wind up winning.
It's looking more and more like physics took a wrong turn with the Copenhagen interpretation of quantum mechanics.
I am biased as a Scott A. fan and as somebody who knows essentially nothing about QM. However I see no way to conclude that Anderson and Brady even know what they're talking about, much less that they win the debate on the linked Scott Aaronson page. I just read through it again to confirm my recollection (skipping the Motl and Sidles posts fwiw).