f5ve|2 years ago
As far as I can tell, though, it doesn't mention what may be the most important reason (especially to the folks here at Hacker News): resampling and processing.
This is why professional-grade audio processing operates at sample rates many multiples higher than human hearing. It's not because of any audible quality difference between, say, 192 and 96 kHz, but because if you're resampling or iterating a process dozens of times at those rates, artifacts will eventually form and work their way down into the range of human hearing (below 20 kHz).
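A quick numerical sketch of that accumulation effect (my own illustration, not from the article): round-trip a 1 kHz tone between 44.1 kHz and 48 kHz using a deliberately naive linear-interpolation resampler. Real converters use windowed-sinc filters and fare far better, but the trend (error growing with each iteration) is the point:

```python
import numpy as np

def resample_linear(x, sr_from, sr_to):
    # Naive linear-interpolation resampler -- illustration only;
    # real sample-rate converters use proper anti-imaging filters.
    t_from = np.arange(x.size) / sr_from
    t_to = np.arange(int(x.size * sr_to / sr_from)) / sr_to
    return np.interp(t_to, t_from, x)

def round_trips(x, n, sr_a=44100, sr_b=48000):
    # Convert 44.1 kHz -> 48 kHz -> 44.1 kHz, n times over.
    for _ in range(n):
        x = resample_linear(resample_linear(x, sr_a, sr_b), sr_b, sr_a)
    return x

t = np.arange(4410) / 44100          # 100 ms at 44.1 kHz
tone = np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone

def rms_error(n):
    y = round_trips(tone, n)
    m = min(y.size, tone.size)
    return float(np.sqrt(np.mean((y[:m] - tone[:m]) ** 2)))

print("after  1 round trip:", rms_error(1))
print("after 20 round trips:", rms_error(20))
```

The error after twenty round trips is well above the error after one; a poor resampler iterated enough times turns into audible damage.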
shampto3|2 years ago
In my opinion, a high sample rate only really matters during the production phase and has no noticeable effect on the final deliverable. If the producer works at a high sample rate during the creation process, I see no reason why the listener would care whether the file they're listening to is anything higher than 44.1 kHz, unless they're planning to use it for their own production.
duped|2 years ago
All because Sony and Philips wanted 80 minutes of stereo audio on CDs decades ago.
hunter2_|2 years ago
However, any type of subsequent processing in the digital domain, even just a volume change by the listener if it's applied digitally in the 16 bit realm (i.e., without first upscaling to 24 bits), completely destroys the benefit of dithering. For that reason, we might say that additional processing isn't confined to the recording studio and can happen at the end user level.
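To make that concrete, here's a small sketch (my own, with numpy): a tone below half a 16-bit step truncates to pure digital silence without dither, but survives as signal-correlated noise with TPDF dither. A gain change applied directly in the 16-bit domain, with no fresh dither, can push quiet detail below the step and lose it the same way:

```python
import numpy as np

rng = np.random.default_rng(1)
STEP = 1 / 32768  # one 16-bit quantization step (full scale = +/-1.0)

def quantize16(x, dither):
    # Round to the 16-bit grid, optionally adding TPDF dither
    # (sum of two uniform distributions) before rounding.
    if dither:
        x = x + (rng.uniform(-0.5, 0.5, x.size)
                 + rng.uniform(-0.5, 0.5, x.size)) * STEP
    return np.round(x / STEP) * STEP

t = np.arange(48000) / 48000
quiet = 0.3 * STEP * np.sin(2 * np.pi * 440 * t)  # tone below half a step

# Without dither the tone truncates to digital silence...
silent = quantize16(quiet, dither=False)

# ...while with dither it survives as signal-correlated noise.
dithered = quantize16(quiet, dither=True)

print("undithered is all zeros:", bool(np.all(silent == 0)))
print("dithered/signal correlation:", np.corrcoef(dithered, quiet)[0, 1])
```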
I'm unsure whether the same logic applies to sampling frequency, but it probably does? I'd guess post-mastering processing of amplitude is far more common than time-based changes, but maybe DJs doing beat matching are the exception?
scns|2 years ago
https://www.youtube.com/watch?v=-jCwIsT0X8M
jmsgwd|2 years ago
Everything you said about sample rate applies more to bit depth. Higher bit depth (bits per sample) results in a lower noise floor. When audio is digitally processed or resampled, a small amount of noise ("quantization distortion") is added, which accumulates with further processing. This can be mitigated by working at higher bit depths - which is why professional grade audio processing routinely uses 24 bit formats (for storage) and 32-bit or 64-bit floating point internally (for processing), even if the final delivery format is only 16 bit.
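The bit-depth/noise-floor relationship is simple arithmetic: each extra bit halves the quantization step, lowering the noise floor by about 6 dB (the standard figure for a full-scale sine is SNR ≈ 6.02·N + 1.76 dB). A trivial sketch:

```python
def quantization_snr_db(bits):
    # Theoretical SNR of an ideal N-bit quantizer driven by a
    # full-scale sine wave: 6.02 * N + 1.76 dB.
    return 6.02 * bits + 1.76

for bits in (16, 24, 32):
    print(f"{bits}-bit: ~{quantization_snr_db(bits):.1f} dB dynamic range")
```

16-bit works out to about 98 dB and 24-bit to about 146 dB, which is why 24-bit storage leaves so much headroom for accumulated processing noise before it reaches the 16-bit delivery floor.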
Sample rate, on the other hand, affects bandwidth. A higher sample rate recording will contain higher frequencies. It doesn't have any direct effect on the noise floor or level of distortion introduced by resampling, as I understand. (It could have an indirect effect - for example, if certain hardware or plugins work better at particular sample rates.)
A survey of ~2,000 professional audio engineers conducted in May 2023 showed that 75% of those working in music use 44.1 kHz or 48 kHz, while for those working in post production the figure is 93%.[1] These are the basic CD-derived and video-derived sample rate standards.
From this it's clear that even in professional audio, higher sample rates are a minority pursuit. Furthermore, the differences are extremely subjective. Some audio engineers swear by higher sample rates, while others say it's a waste of time unless you're recording for bats. It's very rare (and practically, quite difficult) to do proper tests to eliminate confirmation bias.
[1] https://www.production-expert.com/production-expert-1/sample...
EDIT: add link to survey.
Espressosaurus|2 years ago
Which makes sense I suppose.