Unfortunately this is somewhat misleading and even wrong at times.
A crucial aspect that many I/Q explanations miss is that the I/Q representation (which is really phasor notation) is a feature of modulation onto a carrier wave, not of baseband.
So the example starts off by drawing a sine wave and saying we don't know the full wave, because we don't know whether frequencies are positive or negative and because it's hard to determine the power (I'm not even sure what they mean there). That is wrong and directly contradicts the Nyquist sampling theorem: if we sample a signal at twice its maximum frequency we can fully reconstruct it. The reason the explanation goes wrong is that it brings in negative frequencies. Negative frequencies do not exist!
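The Nyquist point is easy to check numerically. A minimal NumPy sketch (the 3 Hz tone, 10 Hz sample rate, and FFT-zero-padding interpolation are all just illustrative choices, not anything from the article):

```python
import numpy as np

# A real 3 Hz tone sampled at 10 Hz (above the 6 Hz Nyquist rate) is
# fully recoverable.  For a periodic signal, zero-padding the spectrum
# performs the ideal (sinc) interpolation, so the continuous waveform
# comes back exactly -- no I/Q needed for a plain real baseband signal.
f, fs, N = 3, 10, 10                        # 1 second of 10 Hz sampling
n = np.arange(N)
coarse = np.sin(2 * np.pi * f * n / fs)     # just 10 real samples

up = 100                                    # interpolate to 1 kHz
X = np.fft.fft(coarse)
Xpad = np.zeros(N * up, dtype=complex)
Xpad[:N // 2] = X[:N // 2]                  # keep the low-numbered bins
Xpad[-(N // 2):] = X[-(N // 2):]            # ...and the upper half of the DFT
fine = np.fft.ifft(Xpad).real * up

t = np.arange(N * up) / (fs * up)
err = np.max(np.abs(fine - np.sin(2 * np.pi * f * t)))
print(err < 1e-9)  # True: reconstruction is exact to rounding error
```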
This is where the phasor/IQ representation comes in. If we modulate a signal onto a carrier wave, i.e. we take a wave at some frequency and modulate its amplitude/phase/frequency, we generate frequencies higher and lower than that carrier frequency. The modulation can have components that correspond to the sine or the cosine of that carrier wave. Importantly, these components are orthogonal, so they can carry independent data. That's where we use the phasor representation (IQ modulation): we remove the carrier wave and represent this "artificial" baseband signal using complex numbers, sine and cosine components, or positive/negative frequencies (all of these are equivalent). But it is important to remember that this is just a representation of a signal modulated onto a carrier, as if there were no carrier.
Exactly. I wrote this explanation, but you beat me to it, so I'll just post it here:
The real reason we use I/Q sampling is because we want to frequency-shift a signal.
Why do we want to frequency-shift a signal? In radio frequency applications the signal of interest almost always has a much lower bandwidth than its highest frequency. In other words, the signal has a small bandwidth (say 40 MHz) centered around a high center-frequency (say 2.4 GHz). If we want to digitize the signal, then one way would be to use a very high sample-rate ADC (e.g. a 2.4 GHz ADC). But these are very expensive, and a much better way of digitizing the signal is to use a mixer (a frequency shifter) to shift the signal to be centered around 0 Hz and then use a relatively low sample-rate ADC (e.g. a 40 MHz ADC).
The way frequency shifting is done is by multiplying the signal by a sine wave, which can be done in hardware. But this introduces a distortion, because multiplying by a sine is not actually a frequency shift. It just so happens that this distortion is cancelled out by adding another copy of the signal multiplied by another sine delayed by 90°. But this addition needs to be complex (due to the relationship between sine functions and true frequency shifts), so what we do is sample the two distorted signals and do this complex addition with the digital signals.
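A toy NumPy sketch of that argument, with made-up frequencies and an idealized lowpass (an FFT-bin mask standing in for the analog filter ahead of the ADCs): two tones, 5 Hz above and below a 200 Hz LO, are indistinguishable after a single real mixer but resolved by the I + jQ pair:

```python
import numpy as np

# One real mixer maps tones at LO +/- 5 Hz to the SAME ambiguous 5 Hz
# output; combining the cos/sin mixer outputs as I + jQ keeps the sign
# of the shift.  All numbers here are illustrative.
fs, N, fc = 1000.0, 4000, 200.0
t = np.arange(N) / fs
lo_c = np.cos(2 * np.pi * fc * t)
lo_s = np.sin(2 * np.pi * fc * t)
freqs = np.fft.fftfreq(N, 1 / fs)
lowpass = np.abs(freqs) < 50.0          # keep only the baseband bins

results = {}
for fsig in (205.0, 195.0):
    x = np.cos(2 * np.pi * fsig * t)                   # incoming real signal
    spec_i = np.abs(np.fft.fft(x * lo_c)) * lowpass    # real mixer + lowpass
    spec_iq = np.abs(np.fft.fft(x * (lo_c - 1j * lo_s))) * lowpass
    results[fsig] = (float(abs(freqs[np.argmax(spec_i)])),  # |f|: ambiguous
                     float(freqs[np.argmax(spec_iq)]))      # signed: resolved
print(results)  # {205.0: (5.0, 5.0), 195.0: (5.0, -5.0)}
```

In a real receiver the lowpass is an analog filter ahead of the two ADCs; masking FFT bins here just keeps the example short.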
So the reason we have complex samples is because that's the best way we've found to do frequency shifting using real-only sine waves (this explains why we don't use complex numbers in audio signal processing; there's no need to do frequency shifting!). This tutorial goes into the details and is the best explanation I've seen on quadrature sampling (another term for I/Q sampling): https://www.dsprelated.com/showarticle/192.php
I think engineers (myself included) tend to get confused because using complex numbers makes the math simpler, and so they think that's the real reason we use them. All the talk about ambiguous frequencies or negative frequencies or needing to know the phase of a sample is true, but all of those problems could be solved without complex numbers simply by sampling twice as fast and then doing some math (again, audio DSP does just fine without quadrature sampling), so it's not a "real" reason to do this strange kind of sampling.
This may be true for RF applications, but it isn't universally true. Three-phase power is also modeled this way and both positive and negative frequencies definitely exist.
Three-phase/three-wire electric power doesn't have three linearly independent currents or line-to-line voltages. Rather than fight the constraint that Ia+Ib+Ic == 0, we transform the system into one in which the constraint doesn't exist. The three (redundant!) 120-degree three-phase signals are transformed into a virtual two-phase reference system and we operate on the two phases that way. Sometimes we even "frequency shift" the signal into the reference frame of a physical (or virtual) rotor.
There are some interesting side-effects. In the alpha-beta (stator) frame, an unbalanced three-phase load looks like the sum of a dominant positive 60 Hz signal and a smaller negative 60 Hz signal. Diode-rectified loads are dominated by the negative fifth and positive seventh harmonics in the stator reference frame, but are frequency-shifted to the negative and positive sixth in the rotor's reference frame.
Three-phase four-wire (A, B, C, and Neutral) does have three linearly independent currents and voltages. But their disturbances are so different from each other that it is frequently convenient to model this system as the sum of a complex alpha-beta line-to-line current plus a virtual common-mode phase (the zero phase).
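The alpha-beta construction can be sketched in a few lines. This assumes the amplitude-invariant Clarke transform and balanced 60 Hz currents; the sample rate is an arbitrary choice:

```python
import numpy as np

# With ia + ib + ic == 0, two axes carry everything, and alpha + j*beta
# rotates at +60 Hz for positive sequence and at -60 Hz when two phases
# are swapped -- both signs of frequency are physically meaningful here.
f, fs, N = 60.0, 8192.0, 2048
t = np.arange(N) / fs
ia = np.cos(2 * np.pi * f * t)
ib = np.cos(2 * np.pi * f * t - 2 * np.pi / 3)
ic = np.cos(2 * np.pi * f * t + 2 * np.pi / 3)
assert np.allclose(ia + ib + ic, 0)        # the redundancy constraint

def clarke(a, b, c):
    """Project three constrained phases onto two independent axes."""
    return (2 * a - b - c) / 3 + 1j * (b - c) / np.sqrt(3)

freqs = np.fft.fftfreq(N, 1 / fs)
pos_f = float(freqs[np.argmax(np.abs(np.fft.fft(clarke(ia, ib, ic))))])
neg_f = float(freqs[np.argmax(np.abs(np.fft.fft(clarke(ia, ic, ib))))])
print(pos_f, neg_f)  # 60.0 -60.0
```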
I think the issues you mention are mostly just a matter of differing perspectives. The author starts from the perspective that the underlying signals are all fundamentally complex, and a real signal (the projection of the 'true' signal onto the real axis, or upconverted complex baseband) is transmitted for physical convenience (i.e. possible to build). It sounds like you start from the perspective that the underlying signals are real, and the "artificial" analytic signal is only introduced as a mathematical convenience ("negative frequencies do not exist!").
IMO either perspective is fine - neither is "wrong" or "right."
> because it's hard to determine the power (not even sure what he/she means there)
I think they are referring to the fact that having the analytic representation makes envelope detection trivial.
> The reason the explanation goes wrong is because they bring in negative frequencies. Negative frequencies do not exist!
I agree that that's a place where the explanation goes wrong, but negative frequencies do exist; they cancel out the imaginary part of the positive frequency. A real-valued signal has equal amplitude at f and -f, if you're talking about Fourier transforms. Admittedly, Fourier transforms depart slightly from the more elementary definition of frequency as one divided by the period, but that's a linguistic snag, not a conceptual one.
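That symmetry is easy to verify numerically for any real-valued signal (the random signal and the bin index below are arbitrary):

```python
import numpy as np

# For a real signal the DFT satisfies X[-k] == conj(X[k]): the +f and
# -f components have equal magnitude, and their imaginary parts cancel
# when the pair is summed back into the time domain.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)            # any real-valued signal
X = np.fft.fft(x)
n, k = np.arange(64), 5

assert np.isclose(abs(X[k]), abs(X[-k]))     # equal amplitude at +/- f
assert np.isclose(X[-k], np.conj(X[k]))      # conjugate symmetry
pair = (X[k] * np.exp(2j * np.pi * k * n / 64)
        + X[-k] * np.exp(-2j * np.pi * k * n / 64)) / 64
assert np.allclose(pair.imag, 0)             # the imaginary parts cancel
```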
Coincidentally I was planning to write a blog post about what I/Q are, inspired by this very terrible explanation. I recently started playing around with SDR and went to look up what I & Q are, found this page (it is one of the very few explanations) and got completely confused. My main objections:
1. It's actually a fairly simple concept but this page jumps into pages of formulas and irrelevancies. It is also missing motivation for why I/Q exist. No way is this "for dummies"
2. This: "I'd say the true signal is complex, and the real signal is an incomplete projection of it" is factually incorrect. The signal coming down the antenna is real.
I feel like I could explain it better in 1/4 of the text. Would anyone be interested in that?
Yeah, it doesn't even explain the image frequency problem in traditional superheterodyne receivers, which I'm sure is pretty fundamental to why I/Q exists. (Loosely speaking, a superheterodyne receiver transforms the input frequency to some fixed intermediate frequency by shifting everything by an adjustable local oscillator frequency. However, there are two different input frequencies that produce the same frequency on the output: one above the local oscillator by the same amount as that intermediate frequency, and one below it by the same amount. Those are basically the positive and negative frequencies the article talks about I/Q data being able to distinguish.)
Sure, if you can do that, why not? I was also searching for a good explanation on I/Q data when I found this. I posted it since I found the page interesting, but also, I did not dive into the details. I'm more interested in the conceptual explanation, and not the hands-on details.
If you could add another (potentially better explanation), that could be beneficial for future learners (not necessarily limited to the HN community).
I think the author does a good job of looking at some of the practical aspects of IQ data but I see the question of "Why IQ" quite differently.
When I started with SDR and wrote some Fourier Transform code I wanted to know why if I fed it a sine wave at frequency X, I got two peaks in the transform. Experts said, "Well that is because you have a real signal only, if you add the quadrature component you'll see just one peak." Which is true, if you do that you see one peak, but WHY was my question.
The answer to that question was that there is a difference between discrete mathematics and continuous mathematics. More specifically, when you operate using discrete mathematics, you need to understand both the slope and the magnitude of the function you are working with.
A discrete version of sin(ωt) (where ω = 2πf) that is "real only", as some would say, has zero values on the quadrature side. Such a signal is the sum of two signals that are complex conjugates of each other (their quadrature, or imaginary, parts differ only in sign). The discrete Fourier transform correctly picks up both of those signals and identifies them as +f and -f.
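The experiment described above fits in a few lines of NumPy (the length and the tone's bin are arbitrary choices):

```python
import numpy as np

# A real sine shows up at +f AND -f (two peaks), while the same tone
# with its quadrature component, exp(jwt) = cos + j*sin, shows a
# single peak at +f.
N, k = 64, 7
n = np.arange(N)
real_tone = np.sin(2 * np.pi * k * n / N)        # "real only" input
iq_tone = np.exp(2j * np.pi * k * n / N)         # with quadrature part

def peaks(x):
    """Indices of non-negligible DFT bins."""
    return np.flatnonzero(np.abs(np.fft.fft(x)) > 1e-6)

print(peaks(real_tone))   # bins 7 and 57, i.e. +f and -f (57 == 64 - 7)
print(peaks(iq_tone))     # bin 7 only
```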
I/Q is really amazing, and it's an offshoot of Euler's formula.
The best book for understanding I/Q if your background is not engineering is probably "Digital Signal Processing in Modern Communication System" by Andreas Schwarzinger [1].
This book teaches the basic principles of communication, from I/Q up to OFDM: basically most of the essential techniques of modern communication, as its title suggests.
There are some ham radio folks who put together really great videos for building an intuition on all sorts of electrical engineering / signal processing topics that require minimal math.
For I/Q signals in particular, W2AEW has a couple great videos that explain the intuition and then demonstrate things in action on real instruments:
The thing is, this is all really not complicated, but unfortunately the way we (or at least I) were taught complex numbers is terrible. They essentially fall out of thin air without much explanation, and sure, the concept as a mathematical construct is easy enough to understand, but that doesn't really explain the why, or the elegance of what you can do with them.
I really only understood this when I started doing communications (and that was after doing a PhD in physics mind you).
Good news, you don't really have to do the math to get an intuition of what's going on... which will likely get you through it.
Try out GNU Radio... you can just use your audio I/O instead of messing with RF, etc. Build a few graphs where you use complex signals, multiply them together, etc... and you can get the hang of it, for no $$ and only a few hours time.
There is math, sure, but you could treat it as a target to work towards. You really just need calculus to start with for DSP work. Remember that it's very easy to use Python or Matlab to check your answers.
My only concrete advice is to avoid the Proakis book.
>> I/Q Data is a signal representation much more precise
Is "precise" the right word here? It's more complete: less information is lost (though sampling is never lossless), but there's no difference in precision.
If I have I/Q data, then I can derive the data that would be preserved by any other modulation (phase, frequency, or amplitude shifting). I can't recover the original I/Q samples from a sampled FM signal.
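That one-way relationship can be illustrated directly (all parameters below are made up): from I/Q samples the AM, PM, and FM views of the signal each fall out in one line:

```python
import numpy as np

# From complex samples you can read off what AM, PM, and FM detectors
# would each recover, but none of those detector outputs alone is
# enough to get back to the original I/Q samples.
fs, N = 1000.0, 1000
t = np.arange(N) / fs
# a 50 Hz complex tone with a 2 Hz amplitude wobble
iq = (1 + 0.5 * np.cos(2 * np.pi * 2 * t)) * np.exp(2j * np.pi * 50 * t)

amplitude = np.abs(iq)                           # AM envelope
phase = np.unwrap(np.angle(iq))                  # PM
inst_freq = np.diff(phase) * fs / (2 * np.pi)    # FM (instantaneous Hz)
print(round(float(np.mean(inst_freq)), 6))       # 50.0
```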
I believe the notation used (complex exponential) is confusing for most people. In a physical implementation the waveform will be computed as I+Q (writing I+jQ seems to exclude this). I found this very helpful: https://www.youtube.com/watch?v=h_7d-m1ehoY
I think this is one of the best visualizations of I/Q signals I've ever seen. The idea of negative audio frequencies takes some getting used to, but works out very well in some DSP systems.
Negative frequencies are like when a 3 phase generator runs backwards... everything plugged into it also runs backwards... the sign of the frequency tells you the direction of rotation, and the frequency is the speed.
If you had 2 antennas 1/4 wave apart and used them for I and Q, you would then be able to tell which direction the signal was coming from along the I/Q antenna axis. With 3 or more antennas, you can determine the compass direction of the source.
The same is true for 2 microphones, at any frequency where the microphones are 1/4+N wavelengths apart.
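A toy sketch of the direction-finding idea (frequency and sample rate are made up; the quarter-period delay stands in for the quarter-wavelength spacing):

```python
import numpy as np

# With sensors a quarter wavelength apart, the wave reaches one sensor
# a quarter period before the other, and the sign of that delay is set
# by the arrival direction.  Treating the sensor pair as I and Q puts
# the tone at +f or -f accordingly.
f, fs, N = 50.0, 1000.0, 1000
t = np.arange(N) / fs
freqs = np.fft.fftfreq(N, 1 / fs)

seen = {}
for direction in (+1, -1):               # which side the wave arrives from
    s_i = np.cos(2 * np.pi * f * t)                           # sensor 1
    s_q = np.cos(2 * np.pi * f * (t - direction * 0.25 / f))  # sensor 2, +/- T/4
    iq = s_i + 1j * s_q
    seen[direction] = float(freqs[np.argmax(np.abs(np.fft.fft(iq)))])
print(seen)  # {1: 50.0, -1: -50.0}
```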
cycomanic | 5 years ago:
That's where this explanation goes wrong.
awelkie | 5 years ago:
brandmeyer | 5 years ago:
See also the Clarke and Park transforms.
rrss | 5 years ago:
whatshisface | 5 years ago:
IshKebab | 5 years ago:
dkozel | 5 years ago:
A few that I think are useful or have interesting components are:
* https://pysdr.org/content/sampling.html#quadrature-sampling
* https://visual-dsp.switchb.org/
* Understanding Digital Signal Processing by Richard Lyons
makomk | 5 years ago:
pabo | 5 years ago:
severino | 5 years ago:
I would, as I read this article, but I also think a clearer explanation could be made.
vmilner | 5 years ago:
https://pysdr.org/content/sampling.html# first.
ChuckMcM | 5 years ago:
teleforce | 5 years ago:
[1] https://www.amazon.com/Digital-Signal-Processing-Communicati...
the_only_law | 5 years ago:
I've tried to get into communication theory to work on some moderately complex SDR projects, but it seems I can't escape math for this.
newhouseb | 5 years ago:
- https://www.youtube.com/watch?v=h_7d-m1ehoY
- https://www.youtube.com/watch?v=5GGD99Qi1PA
cycomanic | 5 years ago:
mikewarot | 5 years ago:
mhh__ | 5 years ago:
CraigJPerry | 5 years ago:
SonOfThePlower | 5 years ago:
mikewarot | 5 years ago:
threeme3 | 5 years ago: