item 40918152

Ask HN: How do browsers isolate internal audio from microphone input?

238 points | dumbest | 1 year ago

I've noticed an interesting feature in Chrome and Chromium: they seem to isolate internal audio from the microphone input. For instance, when I'm on a Google Meet call in one tab and playing a YouTube video at full volume in another tab, the video’s audio isn’t picked up by Google Meet. This isolation doesn’t happen if I use different browsers for each task (e.g., Google Meet on Chrome and YouTube on Chromium).

Does anyone know how Chrome and Chromium achieve this audio isolation?

Given that Chromium is open source, it would be helpful if someone could point me to the specific part of the codebase that handles this. Any insights or technical details would be greatly appreciated!

99 comments

[+] padenot|1 year ago|reply
The way this works (and I'm obviously taking a high level view here) is by comparing what is being played to what is being captured. There is an inherent latency between what is called the capture stream (the mic) and the reverse stream (what is being output to the speakers, be it people talking or music or whatever), and by finding this latency and comparing, one can cancel the music from the speech captured.

Within a single process, or tree of processes that can cooperate, this is straightforward (modulo the actual audio signal processing, which isn't) to do: keep what you're playing for a few hundred milliseconds around, compare to what you're getting in the microphone, find correlations, cancel.
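That correlate-and-cancel loop can be sketched like this (a toy illustration, not Chromium's actual code; function names, signal sizes, and the simulated delay/gain are all made up for the sketch):

```javascript
// Toy echo canceller: find the delay at which the reference signal
// (what we played) best correlates with the mic capture, then
// subtract an amplitude-matched, aligned copy of it.
function cancelPlayback(mic, ref, maxDelay) {
  const n = mic.length;
  // 1. Estimate the playback-to-capture delay: argmax of cross-correlation.
  let bestDelay = 0, bestCorr = -Infinity;
  for (let d = 0; d < maxDelay; d++) {
    let c = 0;
    for (let i = d; i < n; i++) c += mic[i] * ref[i - d];
    if (c > bestCorr) { bestCorr = c; bestDelay = d; }
  }
  // 2. Least-squares gain for the aligned reference signal.
  let num = 0, den = 1e-12;
  for (let i = bestDelay; i < n; i++) {
    num += mic[i] * ref[i - bestDelay];
    den += ref[i - bestDelay] ** 2;
  }
  const gain = num / den;
  // 3. Subtract the estimated echo from the capture.
  const out = mic.slice();
  for (let i = bestDelay; i < n; i++) out[i] -= gain * ref[i - bestDelay];
  return { out, delay: bestDelay, gain };
}

// Simulated capture: local "speech" plus "music" leaking in 50 samples
// late at half volume (deterministic pseudo-random signals).
let seed = 1;
const rand = () => ((seed = (seed * 48271) % 2147483647) / 2147483647) - 0.5;
const speech = Array.from({ length: 4000 }, rand);
const music = Array.from({ length: 4000 }, rand);
const mic = speech.map((v, i) => v + (i >= 50 ? 0.5 * music[i - 50] : 0));
const { out, delay, gain } = cancelPlayback(mic, music, 200);
// delay comes out as 50, gain close to 0.5, and `out` close to `speech`.
```

Real cancellers work adaptively and in the frequency domain, but the core idea is the same: you can only subtract the playback because you know exactly what was played.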

If the processes aren't related, there are multiple ways to do this. Either the OS provides a capture API that does the cancellation; this is what happens e.g. on macOS for Firefox and Safari, and you can just use that. The OS knows what is being output. This is often available on mobile as well.

Sometimes (Linux desktop, Windows) the OS provides a loopback stream: a way to capture the audio that is being played back, and that can similarly be used for cancellation.

If none of this is available, you mix the audio output and perform cancellation yourself, and the behaviour you observe happens.

Source: I do that, but at Mozilla, and we unsurprisingly have the same problems and solutions.

[+] Johnie|1 year ago|reply
This reminds me of:

>The missile knows where it is at all times. It knows this because it knows where it isn't. By subtracting where it is from where it isn't, or where it isn't from where it is (whichever is greater), it obtains a difference, or deviation

https://knowyourmeme.com/memes/the-missile-knows-where-it-is

[+] wormius|1 year ago|reply
For a little more context on negative feedback, for those who want to know more (I believe this is what you're referring to?):

Here's a short historical interview with Harold Black from AT&T on his discovery/invention of the negative feedback technique for noise reduction. It's not super explanatory but a nice historical context: https://youtu.be/iFrxyJAtJ7U?si=8ONC8N2KZwq3Jfsq

Here's a more in-depth circuit explanation: https://youtu.be/iFrxyJAtJ7U?si=8ONC8N2KZwq3Jfsq

IIRC the issue was AT&T was trying to get cross-country calling working, but to make the signal carry further you needed a louder signal. Amplifying the signal also amplified the distortion.

So Harold came up with this method that ultimately reduced distortion enough to allow calls to cross the country within the power constraints available.

For some reason I recall something about Denver being a cut-off point before the signal was too degraded... But I'm too old and forgetful, so I could be misremembering something I read a while ago. If anyone has more specific info/context/citations, that'd be great, since this is just "hearsay" from memory; I think it's something like this.

[+] gpvos|1 year ago|reply
It just seems more logical for the OS to do that, rather than the application. Basically every application that uses microphone input will want to do this, and will want to compensate for all audio output of the device, not just its own. Why does the OS not provide a way to do this?
[+] Log_out_|1 year ago|reply
At the lowest level it's a Fourier transform over a system (your room; the echo chamber's response is known from some test sound), and the expected output, passed through that transform on its way to the mic, is subtracted. Most SoCs and machines have dedicated systems for that. The very same chip produces the echo estimate for the surroundings.
[+] generalizations|1 year ago|reply
Is there any way to apply this outside the browser? Like, is there a version of this that can be used with Pulseaudio?
[+] umutisik|1 year ago|reply
It's called Acoustic Echo Cancellation. An implementation is included in WebRTC, which is included in Chrome. A FIR filter (1D convolution) is applied to what the browser knows is coming out of the speakers, and this filter is continually optimized to cancel out as much as possible of what's coming into the microphone (this is a first approximation; the actual algorithm is more involved).
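A minimal sketch of that adapt-a-FIR-filter idea, using NLMS (normalized least mean squares) as the update rule; tap count, step size, and names here are illustrative, not WebRTC's actual parameters:

```javascript
// The FIR weights `w` model the echo path; each sample, the weights are
// nudged (NLMS update) so the filtered reference tracks the echo in the
// mic signal, and the residual `e` is the cleaned capture.
function nlmsCancel(mic, ref, taps, mu) {
  const w = new Array(taps).fill(0);      // echo-path estimate (FIR weights)
  const out = new Array(mic.length);
  for (let n = 0; n < mic.length; n++) {
    // Echo estimate: FIR filter over the most recent reference samples.
    let y = 0, energy = 1e-6;
    for (let k = 0; k < taps; k++) {
      const x = n - k >= 0 ? ref[n - k] : 0;
      y += w[k] * x;
      energy += x * x;
    }
    const e = mic[n] - y;                 // residual after cancellation
    out[n] = e;
    // NLMS update: step size normalized by the reference energy.
    for (let k = 0; k < taps; k++) {
      const x = n - k >= 0 ? ref[n - k] : 0;
      w[k] += (mu * e * x) / energy;
    }
  }
  return { out, w };
}

// Echo-only capture: the "speaker" signal comes back 3 samples late at
// 0.6x volume, so the filter should learn w[3] close to 0.6 and the
// residual should decay toward zero.
let s = 7;
const rnd = () => ((s = (s * 48271) % 2147483647) / 2147483647) - 0.5;
const ref = Array.from({ length: 4000 }, rnd);
const micIn = ref.map((_, n) => (n >= 3 ? 0.6 * ref[n - 3] : 0));
const { out, w } = nlmsCancel(micIn, ref, 8, 0.5);
```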
[+] sojuz151|1 year ago|reply
Remember that convolution is multiplication in the frequency domain, so this also handles different responses at different frequencies, not just delays
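That equivalence is easy to check numerically; here's a small demo with a naive DFT (not an FFT, and circular convolution for simplicity):

```javascript
// Naive O(N^2) DFT over separate real/imag arrays.
// sign = -1 gives the forward transform, +1 the (unscaled) inverse.
function dft(re, im, sign) {
  const N = re.length;
  const outRe = new Array(N).fill(0), outIm = new Array(N).fill(0);
  for (let k = 0; k < N; k++) {
    for (let n = 0; n < N; n++) {
      const a = (sign * 2 * Math.PI * k * n) / N;
      outRe[k] += re[n] * Math.cos(a) - im[n] * Math.sin(a);
      outIm[k] += re[n] * Math.sin(a) + im[n] * Math.cos(a);
    }
  }
  return [outRe, outIm];
}

// Direct circular convolution: y[n] = sum_m x[m] * h[(n - m) mod N].
function circularConvolve(x, h) {
  const N = x.length;
  return x.map((_, n) =>
    x.reduce((sum, xm, m) => sum + xm * h[(n - m + N) % N], 0));
}

const x = [1, 2, 3, 4];
const h = [1, 0, 0.5, 0];
const zeros = [0, 0, 0, 0];

// Multiply the two spectra pointwise, then invert and scale by 1/N.
const [Xr, Xi] = dft(x, zeros, -1);
const [Hr, Hi] = dft(h, zeros, -1);
const Pr = Xr.map((v, k) => v * Hr[k] - Xi[k] * Hi[k]);
const Pi = Xr.map((v, k) => v * Hi[k] + Xi[k] * Hr[k]);
const [invRe] = dft(Pr, Pi, 1);
const viaFreq = invRe.map(v => v / 4);
// viaFreq matches circularConvolve(x, h): [2.5, 4, 3.5, 5]
```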
[+] meindnoch|1 year ago|reply
Search for the compilation flag "CHROME_WIDE_ECHO_CANCELLATION" in the Chromium sources, and you will find your answer.

Can't tell you anything else due to NDAs.

[+] Wowfunhappy|1 year ago|reply
It's kind of nuts that (I'm assuming) the source code is publicly available but the developers who wrote it can't talk about it.

(I realize this situation isn't up to you and I appreciate that you chimed in as you could!)

[+] geor9e|1 year ago|reply
Thanks, I see it's a user toggle too: chrome://flags#chrome-wide-echo-cancellation or edge://flags/#edge-wide-echo-cancellation . All these years I was praising my MacBook, thinking it was the hardware doing the cancellation, but it was Chromium the whole time.
[+] codetrotter|1 year ago|reply
Side note, this can also cause a bit of difficulty in some situations apparently as seen in a HN post from a few months ago that didn’t get much attention

https://news.ycombinator.com/item?id=39669626

> I've been working on an audio application for a little bit, and was shocked to find Chrome handles simultaneous recording & playback very poorly. Made this site to demo the issue as clearly as possible

https://chrome-please-fix-your-audio.xyz/

[+] filleokus|1 year ago|reply
Not sure if it's the whole story, but the latest response in the linked Chrome ticket seems to indicate that the APIs were used incorrectly by the author

> <[email protected]>

> Status: Won't Fix (Intended Behavior)

> Looking at the sample in https://chrome-please-fix-your-audio.xyz, the issue seems to be that the constraints just aren't being passed correctly [...]

> If you supply the constraints within the audio block of the constraints, then it seems to work [...]

> See https://jsfiddle.net/40821ukc/4/ for an adapted version of https://chrome-please-fix-your-audio.xyz. I can repro the issue on the original page, not on that jsfiddle.

https://issues.chromium.org/issues/327472528#comment14
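Concretely, the difference the Chromium engineer is pointing at looks like this (my guess at the shapes, based on the standard getUserMedia constraints format; not the exact code from that site):

```javascript
// Likely ignored: a track constraint placed at the top level of the
// constraints object instead of inside `audio`.
const wrong = { audio: true, echoCancellation: true };

// Honored: the constraint nested inside the `audio` block.
const right = { audio: { echoCancellation: true } };

// In a browser you would then call:
//   navigator.mediaDevices.getUserMedia(right)
//     .then(stream => { /* echo-cancelled mic track */ });
```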

[+] supriyo-biswas|1 year ago|reply
The technical term that you're looking for is acoustic echo cancellation[1].

It's a fairly common problem in signal processing, and comes up in "simple" devices like telephones too.

[1] https://www.mathworks.com/help/audio/ug/acoustic-echo-cancel...

[+] mananaysiempre|1 year ago|reply
I seem to remember analog telephone lines used a very simple but magic-looking transformer-based circuit of some sort for this purpose. Presumably that worked because they didn’t need to worry about a processing delay?
[+] kajecounterhack|1 year ago|reply
Google Meet uses source separation technology to denoise the audio. It's a neural net that's been trained to separate speech from non-speech and ensure that only speech is being piped through. It can even separate different speakers from one another. This technology got really good around 2021 when semi-supervised ways of training the models were developed, and is still improving :)
[+] atoav|1 year ago|reply
A side effect of echo cancellation. The browser knows what audio it is playing and can correlate that with whatever comes in through the mic, maybe even by outputting inaudible test signals, or by picking widely supported defaults.

This is needed because many people don't use headphones, and if you have more than one endpoint with mic and speakers open you will get feedback galore if you don't do something to suppress it.

[+] j45|1 year ago|reply
Have used audio a lot on windows/mac for a long time, and a bit of linux too.

I'd say it depends on the combination of hardware/software/OS, each of which handles pieces of how the audio routing comes together.

Generally you have to see what's available, how it can or can't be routed, what software or settings could be enabled or added to introduce more flexibility in routing, and then making the audio routing work how you want.

More specifically some datapoints:

SOUND DRIVERS: Part of this can be managed by the sound drivers on the computer. Applications like web browsers can access those settings or list of devices available.

Software drivers can let you pick what's playing on a computer, and then specifically in browsers it can vary.

CHANNELS: There are often different channels for everything. Physical headphone/microphone jacks, etc. They all become devices with channels (input and output).

ROUTING: The input into a microphone can be just the voice, and/or system audio. System audio can further be broken down to be specific ones. OBS has some nice examples of this functionality.

ADVANCED ROUTING: There are some audio drivers that are virtual audio drivers that can also help you achieve the audio isolation or workflow folks are after.

[+] alihesari|1 year ago|reply
So Chrome and Chromium got this cool trick where they block internal audio from your mic. Like, you can be on a Google Meet call and blast a YouTube vid in another tab, and Meet won’t pick it up. No clue how they do it exactly, but since Chromium’s open-source, someone can probably dig into the code for the deets. If anyone knows the techy stuff behind this, spill the beans!
[+] _flux|1 year ago|reply
I think this would be part of echo cancellation: in a meeting you don't want the data from the meeting to be fed back into it. I suppose it uses all the streams from the browser then, though I think in general it would be even better to cancel out everything that comes from the speakers. Maybe it can work this way on some other platforms?

E.g. PulseAudio and Pipewire have a module for echo cancellation.
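For the PulseAudio case specifically, the stock module can be loaded like this (module and argument names per PulseAudio's module-echo-cancel documentation; the device names here are made up):

```shell
# Load PulseAudio's echo-cancel module with the WebRTC AEC backend.
# This creates a new source/sink pair with cancellation applied.
pactl load-module module-echo-cancel aec_method=webrtc \
    source_name=ec_mic sink_name=ec_out

# Point applications at "ec_mic" / "ec_out" (e.g. via pavucontrol).
```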

[+] exabrial|1 year ago|reply
"Echo cancellation" is what it's called; there are a few general-purpose (non-AI) algos out there!

What's really interesting is I can get the algorithm to "mess up" by using external speakers a foot or two away from my computer's mic! Just that little bit of travel time is enough to screw with the algo.

[+] meatmanek|1 year ago|reply
Echo cancellation is often disabled if you have headphones plugged in, under the assumption that headphones won't be audible in the microphone, and it's better to disable it to avoid it degrading your microphone signal.

It might be that whatever program you're using doesn't know the difference between speakers and headphones (possibly because you're using the 3.5mm jack?)

[+] hpen|1 year ago|reply
Since chrome has the data from both sources: the microphone, and the audio stream from YouTube, I imagine you can construct a filter from the impulse response of the YouTube source and then run the microphone through it
[+] bigbones|1 year ago|reply
Guessing it's a feature of the WebRTC stack if it's to be found anywhere, there's always a requirement for cancelling feedback from other meeting participants
[+] sciencesama|1 year ago|reply
Windows 11 does this! Videos playing in the browser (Edge; tested on YouTube and LinkedIn videos) won't be heard by the meeting folks, or folks on a call in Teams!
[+] blharr|1 year ago|reply
That's interesting. Following up, is there any reason why it wouldn't be possible for other browsers/applications? It seems like the operating system should be able to generally access the audio from any application
[+] sciencesama|1 year ago|reply
Windows 11 does that!! Videos playing on Edge won't be heard by the folks on Teams!!