This is such an awesome video. I really hope there's a follow-up at some point, because as someone with a decent amount of experience in computer graphics, but only a fairly general understanding of the sampling theorem, the stuff that would naturally come next is what most interests me, namely:
- What happens when you start combining waves to create more complex signals? This is pretty important, since any real instrument produces hundreds of harmonics with complex attack and decay properties, so decomposition into fundamental frequencies won't be as accurate as with toy examples.
- Following on from that: the effects of aliasing. It definitely exists and I have a very good understanding of aliasing with respect to computer graphics, but what effect does aliasing really have on an audio signal? In CG it's something that's talked about all the time and there are tons of papers about it, but it seems (from the outside at least) that audio guys only ever talk about aliasing in very hand-wavy terms.
- Although we can't hear sounds (much) above 20kHz, we can detect artefacts such as "beats" produced when harmonics are slightly mismatched. Is it possible to show that such information either isn't lost, or that what is lost is either below the noise floor or outside the audible frequency range? This particular one is a fairly common complaint made by audiophiles about 44.1kHz/Nyquist, so it would be nice to see it addressed head-on.
FWIW, I'm a bit of an audiophile myself, but not one of those people who thinks they can hear a difference between 192kbps MP3 and uncompressed, let alone 16/44 vs 24/192. Generally the noise floor in the original recording is too high to tell the difference, even if you could hear a difference in artificially constructed pathological cases. But I am interested in really understanding what information is lost, what isn't, and why that may or may not make any difference. In other words, what are those pathological cases? This video is a really good start and clears up a lot, but in some ways it only scratches the surface.
Wow! Just amazing! I can't believe the amount of confusing science this video dispelled for me. This should be seen by everybody who thinks they know what a digital signal is.
Wow, I haven't learned that much in 20 minutes in years! Too bad this newfound wisdom has such limited application to anything I'm actively involved with.
The argument for mainstream 24bit dynamic range shouldn't be for higher fidelity reproduction for audiophiles. It should be for enabling more sophistication in our playback devices. In the YouTube era not every recording is perfectly mastered by audio engineers to fill the available range. And I could be listening on high end home theater equipment or the tiny speaker in my smart phone.
I watched the full 20 minutes and have absolutely no idea what he was talking about for most of it. I did have that misconception about the stepping in digital signals though - so I did learn something!
Is it just me, or is anyone else hearing something that sounds like wind noise over the narration? (It goes away when he plays sample sounds, and other audio on my system sounds normal.)
That was an awesome video, resulting in some follow-up questions. Is it correct to say that lowering the bit depth raises the noise floor? If so, why? (Basically, is quantization applied to the amplitude of the wave?)
If lowering the number of bits still produces the same output wave as the input wave, what and where is the lower limit that changes the sound?
Finally, how do 24-bit and 16-bit relate to my 128kbps MP3s?
there is a reason they like you to think DAC (digital to analog converter) is a moot problem.
I'd like to point out the potentially non-obvious here. This is a test of delivery format.
There still are benefits to using 24-bit audio in the recording and processing stages. This is in large part because most recording systems expect 0 dBVU = -18 dBFS, and subsequent processing can bring the noise floor well into the audible range (dynamic range processors are notoriously effective at this, and heavily used in modern music). Take a simple example of a snare drum recorded at 16-bit, EQ'd with a +6 dB boost anywhere in the spectrum, then compressed with gain reduction peaking around 10 dB (not uncommon). After brickwall limiting in the final mix, this track will easily have a noise floor (from quantization error alone) above -68 dBFS, best case (-96 dB starting; the snare peaks at -12 dBFS, so limiting it to full scale adds +12 dB; the +6 EQ and +10 compression also raise the noise floor; assuming the limiter itself applies no further gain reduction). -68 dBFS is already audible in a critical listening scenario. With dozens (sometimes hundreds) of uncorrelated signals subjected to similar processing, this noise floor rises well into the audible range on even a modest playback system.
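To make that gain staging concrete, here's a minimal back-of-the-envelope sketch (plain Python; the numbers are the snare example's, and the +12 dB term is the make-up implied by limiting a -12 dBFS peak to full scale):
    # back-of-the-envelope: where the -68 dBFS figure above comes from
    noise_floor = -96.0    # dBFS: ideal 16-bit quantization noise floor
    eq_boost = 6.0         # dB: EQ boost applied to the track
    comp_gain = 10.0       # dB: compression gain reduction / make-up
    limit_gain = 12.0      # dB: limiting brings the -12 dBFS snare peak to 0 dBFS
    print(noise_floor + eq_boost + comp_gain + limit_gain)   # -68.0 (dBFS)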
While I realize that delivery format is the only thing important to most people, it is important to differentiate since the article does make a point to separate out musicians, sound engineers and hardware reviewers. These are groups of people that _should_ be aware of the benefits of higher sample resolution. Since it's fairly obvious that most people in these categories are confused about their ability to discern delivery formats, it's not beneficial to confuse them even further about working formats.
To be more succinct, the difference between 16bit and 24bit is largely inaudible when the source material is worked in a higher resolution format and properly converted.
> These are groups of people that _should_ be aware of the benefits of higher sample resolution.
I don't think the author is necessarily dismissing the idea of high-fidelity audio, especially for the reasons you point out. Rather, the author is claiming that if you're simply _listening_ to the playback, it won't make a bit of difference, regardless of how your ears are trained.
Edit: Also note that several of the respondents were really confident that they heard a difference. This small study demonstrates that their confidence was misplaced. This, I believe, is what the author is trying to drive home.
> There still are benefits to using 24-bit audio in the recording and processing stages.
An analogy of what you said:
It's similar to HDR for audio[1] (but not exactly like it). HDR can be used in photography so that, once composed and edited, the image presents more realistic information to our eyes - for example, with HDR you wouldn't have an overexposed sky. However, the HDR data is only used to get to that final 16-bit image (and even with 16 bits your eyes have a hard time discerning adjacent colors).
The same applies to audio. Listening to 24-bit is pointless, however, if you are editing something you want to retain as much information as possible until the final render so that you don't run into clamping issues as you described.
Therefore, sites that provide 192/24 downloads are valuable. If I'm a DJ getting music for my gig I do want those production quality files, as I cross-fade between two songs I don't want artefacts popping (excuse the pun) up.
On to my own opinion: 24-bit is still not good enough. DAWs should be working in floats. Audio needs to go true HDR; 24-bit is a cop-out. Why would you even use a 24-bit int when floats are there and ready to go? Imagery went floating point, what, 10 years ago? Why can't audio catch up? Being able to exceed the clip point in one channel of my DAW, then wrangle it back down in another, would be awesome.
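A minimal numpy sketch of why float headroom matters (a hypothetical example: float samples carry an exponent, so an intermediate bus can exceed full scale and be pulled back later, while an integer-style path clips and destroys the waveform):
    import numpy as np

    t = np.arange(44100) / 44100.0
    hot_mix = 1.8 * np.sin(2 * np.pi * 440 * t)    # intermediate bus peaking above full scale

    as_int_style = np.clip(hot_mix, -1.0, 1.0)     # fixed-point path: overs are flattened
    as_float = hot_mix.astype(np.float32)          # float path: overs survive

    restored = as_float * 0.5                      # "wrangle it back down" on a later channel
    print(np.max(np.abs(as_int_style)))            # 1.0 - clipped, information gone
    print(np.max(np.abs(restored)))                # ~0.9 - intact, just quieter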
Unrelated: that Xiph video really amazes in terms of what nature does. We rarely think about it (we do in terms of e.g. intercontinental fibre cables), but nature does all of this when we send a signal to a speaker. Even ordinary sound is effectively band-limited and behaves (albeit with far higher dynamic range) exactly the same way, automatically. Shoot a signal down a fibre cable that can't handle it, and you'll hit a Nyquist-style band limit. Too high a frequency to propagate through the air? Expect distortion (that we can't hear). You don't even have to involve electronics to get nature to impose these limitations for you; it takes no extra work. Completely amazing - a deeper level of logic that is mind-boggling.
[1]: http://www.slideshare.net/DICEStudio/audio-for-multiplayer-b...
That's true. But I understood this as talking about playback, not about mastering. The article linked below makes this distinction more clearly: https://news.ycombinator.com/item?id=8727591
Slight problem - the DXD used here isn't a native PCM recording. It's downsampled DSD, which makes it of debatable value in a bit-depth test.
DSD uses single-bit delta-sigma modulation at a very high sample rate. You have to downconvert it before you can hear it, and this adds noise/dither/distortion. One of the problems with DSD is that it's not entirely clear what useful bit depth you're left with after downsampling, because there are theoretical reasons for criticising one-bit sampling. See e.g.
http://sjeng.org/ftp/SACD.pdf
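For intuition, here's a minimal sketch of a first-order one-bit delta-sigma modulator (an illustrative toy, not real DSD, which uses much higher-order noise shaping and proper decimation filters; "downconversion" here is just a moving-average low-pass over the bitstream):
    import numpy as np

    fs = 2_822_400                              # DSD64-style oversampled rate
    t = np.arange(fs // 100) / fs               # 10 ms of signal
    x = 0.5 * np.sin(2 * np.pi * 1000 * t)      # 1 kHz tone at half scale

    bits = np.empty_like(x)
    acc, fb = 0.0, 0.0
    for i, s in enumerate(x):
        acc += s - fb                           # integrate the error vs. last output
        fb = 1.0 if acc >= 0.0 else -1.0        # one-bit quantizer with feedback
        bits[i] = fb

    # crude downconversion: a moving-average low-pass recovers the tone
    recovered = np.convolve(bits, np.ones(64) / 64, mode="same")
    print(np.max(np.abs(recovered - x)))        # small residual: the shaped noise left over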
A useful test would start with high quality unmastered and unprocessed 24-bit PCM recordings and A/B them with 16-bit downconversions. (Remember, even orchestral recordings are mixed in a studio and the individual stems usually have some dynamic processing and gain riding, even if it's not as obvious as dance music pumping.)
I'd expect a test like this to use a bit meter like Bitter to confirm there's useful information in the lower bits, and not just rely on a vague estimate of the dynamic range.
http://www.stillwellaudio.com/plugins/bitter/
Ironically, all of the reviews of the Bozza track say that the Blu-ray audio version sounds cleaner than the SACD source used here. (I have no idea if this is true. But if someone has both and wants to do a blind A/B, that would be interesting.)
It's also worth mentioning there are easy-to-find test tones you can use to check how clean your audio hardware is at extreme sample rates. They're not directly relevant to bit-depth tests, but they're a good torture test for audio gear.
http://www.audiocheck.net/testtones_highdefinitionaudio.php
I'm really glad there have been more audio-related posts on HN lately. Maybe it's just my own bias, but it seems that print/online media related to audio is dying off, now limited to forums and hobbyist sites. The few that are still around are difficult to take seriously, given some of the snake oil that gets reviewed. (Carbon fiber disc stabilizers, anyone?)
I would love to see a serious attempt at an enthusiast "magazine" done again...
There's a lot of great data here, and the author obviously tried to cover all of the bases. Unfortunately I'm still bothered by a couple of aspects.
Firstly, the question of "can you hear a difference" is completely orthogonal to the question of "which do you think is 24 bit". By using the answer to the second question to infer an answer to the first, you're entangling them. If someone could reliably hear the difference but on half the songs they preferred 16 bit and on the other half preferred 24 bit, their own answers would cancel each other out.
Secondly, all it takes is ONE PERSON who can reliably tell the difference [1] to prove that the difference is audible, even if it's only to a very small subset of the population. The test was structured to detect the abilities of a group, not a single person. I'm perfectly willing to believe that as a group, people on average can't tell a difference, but that doesn't tell me whether I can tell a difference.
[1] Reliably telling the difference would mean being consistent on double-blind A/B testing, repeated enough times to achieve statistical significance.
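As a concrete illustration, a minimal one-sided binomial test for a single listener's trials (plain Python; doubling the same sum gives a two-sided test, which would also flag someone scoring consistently *below* chance):
    from math import comb

    def abx_p_value(correct, trials):
        # chance of doing at least this well by coin-flipping (p = 0.5)
        return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

    print(abx_p_value(14, 16))   # ~0.002 - 14/16 is strong evidence of an audible difference
    print(abx_p_value(9, 16))    # ~0.40  - 9/16 is indistinguishable from guessing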
I think the conclusion ("there was no evidence that 24-bit audio could be appreciably differentiated from the same music dithered down to 16-bits") is not correct.
EDIT: I'm not sure if the conclusion is correct or not, but the logic that led to the conclusion has flaws.
50%/50% accuracy means random guessing - people can't distinguish 24-bit from 16-bit.
But accuracy significantly below 50% on a large enough sample also means the difference between 24-bit and 16-bit is heard: you can't be consistently wrong without perceiving something.
Article said: "As a subgroup (total of 31 respondents), the self identified respondents with a "good amount" of musical background did not do well. In fact, this group of respondents consistently scored worse than the combined result."
People with presumably better ears ("musicians" and "hardware reviewers") were less accurate than regular people, especially on Vivaldi. I think this means they did well. It is not possible to be significantly less accurate than 50% without hearing a difference. They failed at deciding which one was "better", but they were able to differentiate 16-bit music from 24-bit.
Perhaps the 16-bit sample really did sound better in some cases?
The lowest bits of a D/A converter are the most non-linear. By avoiding them you might get a more accurate waveform overall.
This would explain the people who were confident that they knew which was which, even when they got it consistently wrong. It would depend greatly on the specific D/A converter so you'd expect it to go both ways.
is still a better approximation of a linearly increasing list of numbers than
1.0 1.0 2.0 2.0 2.0 3.0 3.0
and I doubt that any 24-bit DAC or ADC in existence will interpolate as badly as I did in this example. Whatever distortion due to quantisation the latter creates, the former will create less of it.
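A minimal numpy sketch of that claim (hypothetical: quantize the same ramp at two step sizes and compare RMS error - the finer quantizer always leaves less residual, roughly step/sqrt(12)):
    import numpy as np

    x = np.linspace(-1.0, 1.0, 100_001)          # a smooth ramp as the "true" signal
    for bits in (16, 24):
        step = 2.0 / 2**bits                     # quantizer step size
        err = x - np.round(x / step) * step      # residual after quantization
        rms = np.sqrt(np.mean(err**2))           # ~ step / sqrt(12)
        print(f"{bits}-bit RMS error: {rms:.2e}")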
Quality reports on audio depend, as far as I know, entirely on conscious reporting of quality that's accessible to a test subject through introspection.
By definition, this makes it easy to debunk "golden ears." Loudness (energy) determines what we pay attention to in sound, which is why sonic detail that's low in energy relative to the total can be dropped without test subjects being able to report the missing information. And maybe this is valid. If we can't report our experiences, are they really experiences?
But I find this unsatisfactory if only from the point of view of experimental design. Does the brain really throw this information away at a low level? Does our ear "compress" audition on the way to other parts of the brain? Or does our subconscious experience uncompressed music differently?
While the conclusion is in accordance with what I would expect, I think the study suffers a lot from not having a control group. From these results, there is no telling whether participants screwed up the replay, or were all just guessing anyway. This should be redone with at least one sample pair, where one of the samples is deliberately reduced in quality, delivered to a subset of the participants.
Shoot, most people won't be able to tell 8 bit audio from 16 bit audio. Try the following:
sox highres.flac --bits 8 lowres.wav dither
WAV is required because FLAC doesn't do 8-bit, and we want to be 100% certain nothing sneaky is going on. You might be able to notice a slight increase in background hiss if you are in a very quiet room.
That background hiss IS the difference. I'm not sure what you think the difference would be otherwise, but the increase in the noise floor caused by quantization error will be the difference between the formats.
I also don't know what 'lowres.wav' is (is this linked in the article?), but on classical or jazz recordings the difference is very noticeable due to the lower 'average' amplitude of the recordings. If you did this on a modern pop recording that's smashed to hell and back... then yeah, many people won't even notice the noise.
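To put a number on that hiss, a minimal sketch (hypothetical Python/numpy mirroring the sox example above: 8-bit quantization with TPDF dither of a -6 dBFS tone):
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 44100
    t = np.arange(fs) / fs
    x = 0.5 * np.sin(2 * np.pi * 1000 * t)             # -6 dBFS test tone

    step = 2.0 / 2**8                                  # 8-bit quantizer step
    tpdf = (rng.random(fs) - rng.random(fs)) * step    # triangular dither, +/- 1 LSB
    q = np.round((x + tpdf) / step) * step             # dithered quantization

    noise = q - x
    print("noise floor: %.1f dBFS" % (20 * np.log10(np.sqrt(np.mean(noise**2)))))
    # ~ -48 dBFS: plainly audible hiss in a quiet room; at 16-bit the same figure is ~ -96 dBFS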
> Do female audiophiles face the same adversities as women in tech?
How would that work? There isn't even any opportunity to be excluded from being an audiophile.
It's the exact same thing that accounts for a lot of the discrepancy between men and women in professional tech: the fact that men, on average, like gadgets more than women do.
Is your second question independent of the first, or does it follow from it? If the latter, you realize that a lack of certain demographics in a group does not necessarily mean it is due to adversity, right? There could be countless reasons.
As for what those reasons are, tech enthusiast communities of all sorts tend to be predominantly male. I don't think it's anything particular to audio. I agree that I was a bit surprised at how overwhelmingly male the sample was, though.
I'd gamble it's because they're less into bragging about how much money they spent on audio equipment / dick measuring. But that's just my cynical point of view. I'm sure the distribution between men and women that can appreciate good music (and good music equipment) is much less skewed than men/women in tech; it's just the public part where they're not represented as much.
That is, this finally proves women are smarter. Or at least less likely to spend 3-figure sums on an interconnect cable.
It's all about being obsessive-compulsive.
I count myself in that group... I bought every recommended pair of headphones promising more and more magic, but one day you come to notice that it's just a sound equalizer, and some people like it one way... some another.
So: 192kbps at 44kHz and MDR-V6s into a $30 player; that's it.
not really. headphones vary a lot within the frequencies you DO hear.
also, the weight vs. outside sound isolation ratio varies with price.
those are all observable and measurable things.
speakers also vary in those audible frequencies, but past $150 per speaker you are only dealing with quality at very loud volumes.
over $3000 for a home system? just be honest with yourself and confess you are buying the prettiest furniture that matches your decor.
As I said in the other recent 24-bit audio thread, any improvement in sound quality offered by using 24-bit will be inaudible for the vast majority of listeners, the extra low-level detail of the increased dynamic range lost in the noise floor of a typical room.
Is there any individual human who can reliably distinguish between 16 and 24 bit audio? If somebody believes they can, where can I send them to establish whether it's true or not?
According to https://people.xiph.org/~xiphmont/demo/neil-young.html the answer is no. 16 bits is more than enough to cover the entire range a human ear can pick up (assuming the source material was mixed and sampled correctly).
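For reference, the standard rule of thumb behind that claim (a sketch; SNR of an ideal N-bit quantizer driven by a full-scale sine is approximately 6.02*N + 1.76 dB):
    # ideal quantization SNR for a full-scale sine wave, per bit depth
    for n in (8, 16, 24):
        print(f"{n}-bit: {6.02 * n + 1.76:.1f} dB")
    # 8-bit: 49.9 dB, 16-bit: 98.1 dB, 24-bit: 146.2 dB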
you can hear the difference between summing 16 channels of 24bit audio vs. 16bit audio in a daw. 24 sounds better. then when you render it, you can't tell the difference between a 16bit dump and a 24bit dump.
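A minimal numpy sketch of why the summing stage differs (a hypothetical model, treating each track's quantization error as uniform noise; 16 uncorrelated tracks raise the floor by 10*log10(16), about 12 dB, which is the headroom that working at 24-bit preserves):
    import numpy as np

    rng = np.random.default_rng(1)
    lsb16 = 2.0 / 2**16                                  # 16-bit step size
    # model each track's quantization error as uniform noise within +/- half an LSB
    tracks = rng.uniform(-lsb16 / 2, lsb16 / 2, size=(16, 44100))

    def dbfs(sig):
        return 20 * np.log10(np.sqrt(np.mean(sig**2)))

    print("one track's floor: %.1f dBFS" % dbfs(tracks[0]))      # ~ -101 dBFS
    print("16 tracks summed:  %.1f dBFS" % dbfs(tracks.sum(0)))  # ~ -89 dBFS (+12 dB)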
i think the recent aphex twin release was out on 24bit and 16bit, it would be a great test subject for the foobar abx plugin.
with all types of music, you can train your ear to listen for what mp3s hiccup on. i know nothing about classical music, but i can spot a 320kbps mp3 a mile off due to terrible-sounding hi-hats and crashes in genres where they are prominent. disco records also suffer very badly, just something about how they were recorded. i wouldn't know what to listen for in classical.
Take those results with a (large) grain of salt.
This is getting ridiculous. Just call it "sex" already.
and honestly, i can easily tell the samples with dither because they added a LOT of white noise. the original and the 8bit non-dithered one sound practically the same to me.