item 34859883

womod | 3 years ago

Such multi-site, same-frequency broadcasting setups are usually called simulcast, and are still used for some public safety trunked radio systems in the U.S. The biggest issue with simulcasting is the need for extremely precise synchronization of the clocks and the audio carrier generation, to the point where something like using a slightly different sound card can result in massively distorted received audio due to the effects of FM doubling. Clock synchronization is usually achieved via GPS; there's a good number of off-the-shelf solutions available. But when it comes to the audio path, it's typical to use the exact same hardware across all sites, because otherwise controlling for even the smallest of differences when attempting to troubleshoot becomes a huge hassle. At the beginning of this blog post[1] is a brief overview of some things to take into consideration when building a simulcast system.
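For a sense of scale (my own back-of-envelope arithmetic, not from the linked post): the over-the-air skew a receiver sees between two sites is just the path-length difference divided by the speed of light, so even kilometer-scale geometry differences amount to mere microseconds. The hard part is keeping the clock and audio chains from adding much more than that.

```python
# Back-of-envelope: differential arrival delay at one receiver from two
# simulcast sites, given the two path lengths. Numbers are illustrative.
C = 299_792_458.0  # speed of light in vacuum, m/s

def differential_delay_us(path_a_m: float, path_b_m: float) -> float:
    """Microseconds of skew between the signals from two sites."""
    return abs(path_a_m - path_b_m) / C * 1e6

# A 3 km path-length difference is only ~10 microseconds of RF skew:
skew = differential_delay_us(10_000, 13_000)
```

So the propagation geometry is already within the microsecond budget; it's the studio-to-transmitter audio path and local oscillators that have to be disciplined (hence GPS and identical hardware everywhere).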

[1] - https://www.hamradiodx.net/building-a-simulcasting-voting-re...

tudorw | 3 years ago

Is this still an issue if it's low power nodes in a mesh network that has infrequent overlap and a shared system that pre-emptively mitigates this with geolocation data?

myself248 | 3 years ago

If you don't do synchronization, then you just have a bunch of independent transmitters that're transmitting roughly the same program. -ish. Sorta.

Suppose you put an RDS/RBDS subcarrier under the audio, so receivers can display metadata or whatever. If a given receiver drives between transmitters, or if FM capture and phasing effects mean it's constantly bouncing between transmitters, then it corrupts the data frame(s) being transmitted at the time.
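For scale (standard RDS framing numbers, arithmetic mine): an RDS group is four 26-bit blocks sent at 1187.5 bit/s, so each group occupies roughly 88 ms on the air, which is plenty of time for a source hop to land mid-group and garble it.

```python
# RDS framing arithmetic: one group = 4 blocks x 26 bits at 1187.5 bit/s.
# Any switch between unsynchronized sources inside this window can
# corrupt the group being received.
RDS_BITRATE = 1187.5        # bits per second (RDS standard rate)
GROUP_BITS = 4 * 26         # one RDS group: 4 blocks of 26 bits each

group_duration_ms = GROUP_BITS / RDS_BITRATE * 1000
```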

Or if the desynchronization is more than about 20 ms, it can be audibly annoying to the listener, disorienting and unpleasant if it happens too often. (We've all probably experienced the phenomenon where, creeping forward at a red light, the audio fades, comes back, and fades again. Same effect, but imagine the program glitching back and forth a bit each time, because every few inches the capture effect switches sources.)

If you synchronize, then you have effectively one transmitter with a bunch of antennas. The program is exactly the same except for microsecond-level discontinuities when moving between transmitters, and that's not enough to corrupt either the subcarrier data or the audio. (In the creeping-forward scenario above, it would behave exactly like a normal radio station -- fading and restoring, but not glitching.)

Now, in Europe, they do the many-transmitters-same-audio thing, but they do 'em on different frequencies, and use RDS to inform the receiver of all the alternative frequencies (AF list) carrying the same program. When RSSI on the current signal drops below a threshold, the receiver checks the others, double-checks their RDS program identification (PI field) to make sure they really are the same program, then selects the strongest one and makes the switch. This only happens once in a while though, say every 10-30 minutes as you drive through a valley, not several times a second, so the timing glitch isn't problematic.
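The receiver-side logic described there can be sketched roughly like this (the `Station` type, names, and the RSSI threshold are all illustrative, not from any receiver's firmware):

```python
# Hedged sketch of RDS AF (alternative frequency) switching: stay put while
# the current signal is strong enough; otherwise pick the strongest
# alternate whose PI code matches the current program.
from dataclasses import dataclass

@dataclass
class Station:
    freq_mhz: float
    rssi_dbm: float
    pi_code: int  # RDS programme identification

def pick_frequency(current: Station, af_list: list[Station],
                   rssi_floor_dbm: float = -95.0) -> Station:
    """Return the station the receiver should tune to."""
    if current.rssi_dbm >= rssi_floor_dbm:
        return current  # signal still fine, don't switch
    # Only alternates carrying the same program are eligible.
    same_program = [s for s in af_list if s.pi_code == current.pi_code]
    # Fall back to the current station if no valid alternate beats it.
    return max(same_program + [current], key=lambda s: s.rssi_dbm)
```

The PI check is what keeps the radio from hopping onto a frequency that happens to be in the AF list but is actually carrying a different program in that region.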