Wasn't expecting my question to hit top of HN. I guess I'll give some context for why I asked it.
I work in quantum error correction, and was trying to collect interesting and quantitative examples of repetition codes being used implicitly in classical systems. Stuff like DRAM storing a 0 or 1 via the presence or absence of 40K electrons [1], undersea cables sending X photons per bit (don't know that one yet), some kind of number for a transistor switching (haven't even decided on the number for that one yet), etc.
A key reason quantum computing is so hard is that by default repetition makes things worse instead of better, because every repetition is another chance for an unintended measurement. So protecting a qubit tends to require special physical properties, like the energy gap of a superconductor, or complex error correction strategies like surface codes. A surface code can easily use 1000 physical qubits to store 1 logical qubit [2], and I wanted to contrast that with the sizes of implicit repetition codes used in classical computing.
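For intuition on the classical side of that contrast: under independent bit-flips, a classical repetition code with majority-vote decoding suppresses errors exponentially in the number of copies, which is exactly the free lunch quantum systems don't get by default. A minimal sketch (the per-copy flip probability p = 1e-3 is an illustrative assumption, not a number from any real device):

```python
from math import comb

def logical_error_rate(n: int, p: float) -> float:
    """P(majority of n independent copies is wrong) for bit-flip prob p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 1e-3
for n in (1, 3, 5, 11):
    print(n, logical_error_rate(n, p))
```

Going from 1 to 11 copies takes the logical error rate from 1e-3 down to the 1e-16 scale, which is why classical hardware can afford huge implicit repetition.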
Subsea cables don't use repetition codes (repetition codes are very much suboptimal); they typically use large-overhead (~20%) LDPC codes, as do satellite comms systems for that matter (the DVB-S2 standard is a good example). Generally, to get anywhere close to the Shannon limit we always need sophisticated coding.
Regarding the sensitivity of subsea systems: they are still significantly above 1 photon/bit. The highest-sensitivity experiments have been done for optical space comms (look e.g. for the work from MIT Lincoln Laboratory; David Geisler, David Kaplan and Bryan Robinson are some of the people to look for).
> Stuff like DRAM storing a 0 or 1 via the presence or absence of 40K electrons
I'd assume that these days it's a couple of orders of magnitude fewer than that (the cited source is from 1996). Incidentally, 40k e- is roughly the capacity of a single electron well ("pixel") in a modern CMOS image sensor [1] – but those 40k electrons are able to represent a signal of up to ~14 bits, around 10k distinct luminance values, depending on temperature and other noise sources.
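The ~14-bit figure can be sanity-checked from the ratio of well capacity to noise floor; a sketch where the 3 e- read noise is an assumed, illustrative value:

```python
from math import log2

def well_bits(full_well_e: float, read_noise_e: float) -> float:
    """Usable 'bits' in one pixel well: log2(capacity / noise floor)."""
    return log2(full_well_e / read_noise_e)

# 40k e- full well; the 3 e- read noise is an assumed illustrative value.
print(round(well_bits(40_000, 3), 1))  # ~13.7, consistent with "up to ~14 bits"
```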
I worked in quantum optics for a while. Our DARPA grant once had the "mission" to see how many bits of information could be theoretically crammed into 1 photon. It turns out to be an uninteresting question because you can theoretically cram infinite bits into one photon, encoded in the relative timing of the photon in a pulse train, limited only by the dispersion of your medium (in space, effectively zero).
Even dispersion is a boring question because it is possible to reverse dispersion by sending the light through a parametric amplifier to conjugate the phases and then running it through the dispersion medium a second time locally.
I believe that a classical radio receiver is measuring a coherent state. This is a much lower level notion than people normally think about in QEC since the physical DoF are usually already fixed (and assumed to be a qubit!) in QEC. The closest analogue might be different choices of qubit encodings in a bosonic code.
In general, I'm not sure that the classical information theory toolkit allows us to compare a coherent state with some average occupation number N to say, M (not necessarily coherent) states with average occupation number N' such that N' * M = N. For example, you could use a state that is definitely not "classical" / a coherent state or you could use photon number resolving measurements.
A tangential remark: The classical information theory field uses this notion of "energy per bit" to be able to compare more universally between information transmission schemes. So they would ask something like "How many bits can I transmit with X bandwidth and Y transmission power?"
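For concreteness, that question is answered by the Shannon-Hartley formula; a sketch with invented bandwidth/power/noise numbers (not any real link's budget):

```python
from math import log2

def capacity_bps(bandwidth_hz: float, signal_w: float, noise_psd: float) -> float:
    """Shannon-Hartley capacity: C = B * log2(1 + P / (N0 * B))."""
    return bandwidth_hz * log2(1 + signal_w / (noise_psd * bandwidth_hz))

# Invented numbers, not a real link budget:
B, P, N0 = 1e6, 1e-12, 4e-21   # 1 MHz, 1 pW, ~kT at room temperature (W/Hz)
C = capacity_bps(B, P, N0)
print(C / 1e6, "Mbit/s")
print(P / C, "J per bit at capacity")
```

Dividing transmit power by capacity gives exactly the "energy per bit" currency used to compare schemes.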
Isn't sending more than one photon always "repetition" in that sense? Classical systems probably don't do that because of the engineering complexity of sending a single photon at a time -- we had oscillators and switches, not single photon emitters.
> by default repetition makes things worse instead of better
Can you elaborate on this a bit? My intuition is that, by default, statistical models benefit from larger N. But I have no experience in quantum physics.
Thanks for asking it. Thanks too to the person that provided the thorough answer.
After reading that answer - seeing all the math and physics that cover several disciplines - I wonder how some people can just hand-wave "science" away as a conspiracy to fool the masses. They clearly have little idea of the amount of knowledge that works together to answer questions like this.
Actually, the limit predicted by Shannon can be significantly beaten, because Shannon assumes Gaussian noise; if we use photon-counting receivers we need to use a Poisson distribution instead. This is the Gordon-Holevo limit.
To beat Shannon you need PPM formats and photon counters (single photon detectors).
One can do significantly better than the numbers from Voyager in the article using optics, even without photon counting. Our group has shown 1 photon/bit at 10 Gbit/s [1], but others have shown even higher sensitivity (albeit at much lower data rates).
The Deep Space Optical Communications (DSOC) link between Earth and the Psyche spacecraft uses large-M PPM for this reason! This mission is currently ongoing.
They send optical pulses in one of up to 128 possible time slots, so each pulse carries 7 bits. And each optical pulse may arrive at Earth as only 5-10 photons.
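The arithmetic behind those numbers: log2(128) = 7 bits per pulse position, and dividing by the detected photons gives bits per photon. A sketch (the 5-photon pulse is taken from the 5-10 range above):

```python
from math import log2

def ppm_bits_per_photon(slots: int, photons_per_pulse: float) -> float:
    """M-ary PPM: one pulse in M slots carries log2(M) bits."""
    return log2(slots) / photons_per_pulse

print(log2(128))                    # 7.0 bits per pulse
print(ppm_bits_per_photon(128, 5))  # 1.4 bits per detected photon
```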
Can’t you calculate the CRLB (Cramér-Rao lower bound) for any given distribution if you wanted? That’s what my lab did for microscopy, anyway. Saying you’re beating the Shannon limit sounds to me like saying you’re beating the second law of thermodynamics... but I could be wrong.
Very interesting. I studied telecommunications and I thought the Shannon limit was the absolute limit. I wonder now if this Gordon-Holevo limit is applicable to "traditional" telecommunications (like 5G), as opposed to photon-counting a deep space probe.
I am not sure you can use 1 photon per bit, because (as I understand it) emitting and capturing photons is a probabilistic process, and when you have 1 photon there is a probability that it will not be captured by the antenna but rather reflected, or turned into heat. Or am I wrong here?
Is there some fundamental limit to the number of bits per photon that can be communicated via EM radiation? I think there is not, because photons aren't all equal: we can use very high frequencies, and an X-ray photon can probably carry much more information than an RF photon.
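There's no hard frequency-derived cap on bits per photon (as discussed elsewhere in the thread), but the energy cost per photon does scale linearly with frequency, E = h*f. A quick comparison, with the ~keV X-ray frequency chosen as an illustrative value:

```python
H_PLANCK = 6.626e-34  # Planck constant, J*s

def photon_energy_j(freq_hz: float) -> float:
    """Energy of a single photon, E = h * f."""
    return H_PLANCK * freq_hz

print(photon_energy_j(2.3e9))  # S-band microwave photon, ~1.5e-24 J
print(photon_energy_j(1e18))   # ~keV-scale X-ray photon, ~6.6e-16 J
```

So an X-ray photon costs roughly 10^8-10^9 times the energy of an S-band photon, which is the relevant trade-off even if the per-photon information is unbounded in principle.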
For anyone who is interested in the ultimate limits to communications the seminal paper by Jim Gordon is quite easy to understand even without a physics degree (unlike the Holevo paper IMO). He was incredibly good at writing in an accessible manner (apart from probably being the person who most deserved a Nobel prize but didn't get it).
The overwhelming loss in this calculation is from the antenna’s radiated energy spreading out over a larger and larger area (despite the directional “gain” factor).
I’m wondering: would a probe launched today instead employ a laser to communicate? This would seem to offer many orders of magnitude improvement in the directionality of the signal.
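The directionality gain comes from diffraction: beam divergence scales like wavelength over aperture diameter. A rough sketch comparing an S-band dish with an optical beam from the same aperture (dish size, wavelengths, and range are assumed round numbers, and theta ~ lambda/D is only an order-of-magnitude rule):

```python
def spot_diameter_m(wavelength_m: float, aperture_m: float, range_m: float) -> float:
    """Order-of-magnitude beam spot at range: divergence theta ~ lambda / D."""
    theta = wavelength_m / aperture_m
    return 2 * theta * range_m

R = 24e12   # assumed ~Voyager 1 distance, metres
D = 3.7     # assumed dish/telescope aperture, metres
print(spot_diameter_m(0.13, D, R))     # S-band (13 cm): spot ~1e12 m wide
print(spot_diameter_m(1.55e-6, D, R))  # 1550 nm laser: spot ~2e7 m wide
```

At fixed aperture the optical beam is ~10^5 times narrower in angle, so ~10^10 times more of the radiated power lands near the receiver.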
The main challenge is the Earth-to-probe comms for distant probes, since the Earth is often very close (in an angular sense) to the Sun from the probe's perspective, and the Sun gives out a lot of black-body radiation.
However, due to the shape of the black-body radiation curve, the Sun gives out relatively less microwave radiation than it does visible light, which might outweigh the advantage of the greater directionality of a laser.
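Planck's law lets you put a number on that: per hertz of bandwidth, a 5778 K black body is billions of times brighter near its visible/near-IR peak than at S-band. A sketch (the two frequencies are chosen as representative values):

```python
from math import exp

H = 6.626e-34; KB = 1.381e-23; C_LIGHT = 3.0e8

def planck(f_hz: float, temp_k: float) -> float:
    """Planck spectral radiance B(f, T) in W / (m^2 sr Hz)."""
    return (2 * H * f_hz**3 / C_LIGHT**2) / (exp(H * f_hz / (KB * temp_k)) - 1)

T_SUN = 5778.0
ratio = planck(1.93e14, T_SUN) / planck(2.3e9, T_SUN)  # ~1550 nm vs S-band
print(ratio)  # a few billion: far more solar background per Hz at optical
```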
All space agencies have optical comms in their road maps. Largely they are thinking about inter-satellite links (the atmosphere causes significant issues when going back to Earth), so the main application is to have some relay satellite that can then transmit to Earth via RF. The application is not mainly deep space probes but LEO or MEO satellites: they typically have only very short transit times over the ground stations, so they can't get all their measurement data down. By using e.g. a GEO relay, they can transmit lots of data optically, and the GEO relay can more slowly transmit the data to Earth until the LEO satellite comes back into view.
I'm curious about the feasibility of combining the two problems of propulsion away from Earth and communication with Earth into beam-powered propulsion aimed directly at Earth, pulsed for use as communication.
Probably infeasible for several reasons (it's only useful when accelerating DIRECTLY away from Earth, and the incoming light powering the spaceship is probably coming from the Sun, and therefore likely also from the direction of Earth, so at best you get net-zero acceleration from firing the photons back towards the Sun), but it'd be pretty neat.
An interesting thing about photons (which may not be true; I just enjoy this stuff amateurishly, that is, without the effort or rigor to actually understand it) is that they might not exist. The EM field is not quantized, or at least is not quantized at the level of photons. A "photon" only exists where the EM field interacts with matter, where the electrons that create the disturbance can only pulse in discrete levels.
Not that this changes anything - we can only detect or create light with matter - but it does make me curious about single-photon experiments and what they are actually measuring.
One thing that seems missing to me is that while the probe might send 160 bits/sec of useful data, those bits are not sent directly as such[1]:
> The TMU encodes the high rate data stream with a convolutional code having constraint length of 7 and a symbol rate equal to twice the bit rate (k=7, r=1/2).
So the effective symbol rate is 320 baud[2], and thus a factor of two should be included in the calculations from what I can gather.
Note that the error correction was changed after Jupiter to use Reed-Solomon[3] (255,223) to lower the effective bit error rate, so effectively I guess the data rate is more like 140 bps.
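Putting the two code layers together is straightforward arithmetic; treating the quoted 160 bit/s as the rate going into the rate-1/2 convolutional encoder is my reading of the sources:

```python
# Layered coding on the downlink: a rate-1/2 convolutional code doubles
# the symbol rate, and RS(255,223) carries 223 data bytes per 255 sent.
info_rate_bps = 160
conv_rate = 1 / 2
rs_rate = 223 / 255

symbol_rate_baud = info_rate_bps / conv_rate
effective_data_bps = info_rate_bps * rs_rate

print(symbol_rate_baud)            # 320.0 symbols/s
print(round(effective_data_bps))   # 140 bit/s of user data
```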
It looks like a pretty reasonable order of magnitude estimate to me. Energy arguments tend to be quite neat for that because if the efficiency is at all reasonable they constrain things well with fairly simple calculations. The antenna directionality is also reasonably well understood and characterised. The exact noise level discussed later on is probably where things get a bit more uncertain (but aren't directly needed to answer the question).
Wow, I never thought about how Voyager communicates with Earth. But now I wonder: if Voyager just sends photons towards the Earth, at the receiving end how are we recognizing which photons are coming from Voyager and how is the "signal" decoded?
Super interesting! But I feel like there is a bit of a conclusion missing for me.
So 1500 photons hit the receiver per bit sent, but this is obviously way too few to process the signal directly, and it will just be drowned out by noise? Where do we go from here? Does Voyager repeat its signal gazillions of times so we can average out the noise on our end? Where can I find more information on what is done with these few photons?
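On the "average out the noise" intuition: if you could integrate N independent looks at the same weak signal, the estimate's standard error shrinks as 1/sqrt(N). A toy simulation (all numbers invented; Voyager's real answer is narrow bandwidths plus the coding discussed elsewhere in the thread, not literal repetition):

```python
import random

random.seed(0)
signal, noise_sigma, n_looks = 0.1, 1.0, 10_000

# Average many noisy looks at the same weak signal; the noise cancels,
# the signal does not: standard error falls as noise_sigma / sqrt(n).
samples = [signal + random.gauss(0.0, noise_sigma) for _ in range(n_looks)]
estimate = sum(samples) / n_looks
print(estimate)  # close to 0.1 even though each look has SNR of only 0.1
```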
You know, I never really thought of longer wavelengths than light as being carried by photons, but I suppose it's all EM. Antennas are technically just really red light bulbs.
This is true enough, though remember that material properties change dramatically as you move through wavelengths by orders of magnitude. Silicon is transparent in the mid-infrared, which is what makes silicon photonics possible [1].
I am confused. I thought photons were just visible light, but I guess these little buggers are everywhere. Also very surprised Voyager is using 2.3 GHz - that band is crazy saturated on Earth due to Wi-Fi. How these engineers make this all work is magic to me.
What a lovely question. The estimate is 10-100 photons/bit (minimum).
If you’re curious about how many bits a single photon can carry: in controlled settings (tabletop quantum optics) a single photon can carry log2(n) bits, where n is the size of the state space of the photon, which theoretically is infinite and in practice can reach into the hundreds or thousands.
This makes me wonder: are the bits sent as the power being turned on for exactly 1/320th of a second, every 1/160th of a second? Or is the on/off ratio something different? Does it vary by protocol? What are the pros and cons?
Strilanc | 1 year ago:
1: https://web.mit.edu/rec/www/dramfaq/DRAMFAQ.html
2: https://arxiv.org/abs/1208.0928
Sharlin | 1 year ago:
[1] https://www.princetoninstruments.com/learn/camera-fundamenta...
dheera | 1 year ago:
We later ended up working on other things.
fsckboy | 1 year ago:
Wouldn't you also want to know how many photons are transmitted, and how many of the transmitted bits are received?
cycomanic | 1 year ago:
[1] https://www.nature.com/articles/s41377-020-00389-2
aptitude_moo | 1 year ago:
EDIT: This paper seems to answer my question [1]
[1] https://opg.optica.org/directpdfaccess/8711ab35-bbc2-4d51-8e...
nico | 1 year ago:
It seems there might be multiple ways to go beyond Shannon’s limit, depending on what you are trying to do.
cycomanic | 1 year ago:
https://doi.org/10.1109%2FJRPROC.1962.288169
prof-dr-ir | 1 year ago:
You probably want to read up a bit on the remarkable life of Lise Meitner.
Out_of_Characte | 1 year ago:
https://www.jpl.nasa.gov/news/nasas-deep-space-optical-comm-...
https://en.m.wikipedia.org/wiki/Laser_Interferometer_Space_A...
somat | 1 year ago:
https://www.youtube.com/watch?v=ExhSqq1jysg
magicalhippo | 1 year ago:
[1]: https://web.archive.org/web/20130215195832/http://descanso.j...
[2]: https://destevez.net/2021/09/decoding-voyager-1/
[3]: https://destevez.net/2021/12/voyager-1-and-reed-solomon/
gjstein | 1 year ago:
[1] https://en.wikipedia.org/wiki/Silicon_photonics
lordnacho | 1 year ago:
And can we beat the Shannon limit somehow, e.g. collect for longer, put the dish outside the atmosphere, and so on?
asdfman123 | 1 year ago:
This covers a lot of technical detail at a level a person without much specific training can understand:
https://voyager.gsfc.nasa.gov/Library/DeepCommo_Chapter3--14...
basil-rash | 1 year ago:
https://www.ecfr.gov/current/title-47/chapter-I/subchapter-A...
RachelF | 1 year ago:
They are closer, but in the radar equation the received power is inversely proportional to range to the fourth power, not range squared as with Voyager. Anything proportional to 1/R^4 degrades very quickly.
ggm | 1 year ago:
E.g. send a continuous wave with little signal beyond its phase, and measure at a rate of (digital) bits per month?