This sounds like the kind of thing someone would write when the world switched from mono to stereo. Quite a few parts of it are completely wrong too, not to mention insulting ("... more music is being made by individuals in bedrooms, home studios, on a budget. They have neither the equipment nor the skill to mix in Dolby Atmos").
Apple has done quite a poor job showcasing it so far. Some of the tracks they've put in their playlists sound like crap. On the other hand, some of them sound fantastic. Jazz in Atmos is wonderful. It's also important to note that 'head tracking' comes with iOS 15 and in my experience improves spatial audio further.
A lot of music made for stereo won't remix well. If you listen to most of the rock tracks, they don't sound good because they don't have a lot going on. 3 or 4 instruments + 2 or 3 vocal tracks aren't going to take advantage of spatial audio. Listening to some modern pop with lots of synths and layers upon layers of backing vocals, the experience is much better.
Atmos/spatial is a new tool like any other. People will write and record songs that work for it and take advantage of it. They'll use it creatively. That's what will be interesting. Most remixed songs from the '60s aren't the least bit interesting, but the possibilities for the future are.
I could be wrong and it could end up largely ignored; time will tell. But there is the opportunity for a lot of creativity, regardless of what cranky old music critics think.
I'm confused. Did you even read the article? They are not talking about Atmos/spatial audio in general, they are talking about older mixes being "remixed" en masse without any input from the original artists and engineers.
Well, the whole thing is an anonymous email. And the "remixing by the truckload" doesn't seem in evidence given the low number of tracks released thus far; there are many more available on Tidal in Atmos currently.
> Jazz in Atmos is wonderful.
I've been very impressed by the classical tracks thus far, which have mostly used a hint of surround to let you feel as though you're at the front of a concert, with the players in a crescent moon, and a bit of a delay in the rears to create the sensation of space echoing behind.
Some of the rock/pop mixes have been very interesting; the Linkin Park mixes use the surround/rear to create a "voices inside my head" effect where it makes sense with the lyrics. Robert Palmer actually sounds like a much better singer on the remixes than on the originals, with the guitars pushed out to the side rather than stomping all over the singer (whether they applied a little Auto-Tune as well, who knows?). I don't much like The Doors, but the use of surround in Riders on the Storm is remarkable.
There are some very poor remixes too, but that's the case for any re-engineering. As for the idea that only the original engineer or producer can make something work, well, that's just idiotic on a number of levels.
Then why start by screwing up the old music?
I totally agree - there are real possibilities to do interesting new creative work afforded by this, particularly if the barriers to entry are kept low/non-proprietary/open-to-all (…as another poster noted, it would perhaps have been nice if an open standard had been used instead…). I would only add that I think there are even more interesting possibilities for using the same technological capabilities with AR. That was the focus of the iPhone app I wrote, which was more along the lines of Microsoft Soundscape:
https://www.microsoft.com/en-us/research/product/soundscape/
(but to be fair, before Microsoft Soundscape was available :)
The main question for me, beyond the rights of the artists is “can I listen to the version I prefer”? Apple Music has been pretty bad about the issue of multiple mixes of a certain track getting attached to the wrong album (or in odd cases subbing in a live version for an album track). If I don’t like what the Atmos mix of [favorite song] sounds like I’ll be unhappy. But most people probably won’t care.
Totally agree, but wanted to add a point I found interesting while doing some spatial audio work: synths tend to spatialize poorly compared to more organic instruments. You can dirty them up with additional effects like distortion or an exciter, but I think it's something to do with the evenness of the waveforms compared to the real-world variance of a string or vocal cord vibrating.
Many people in the audiophile community believe that measurements and bit-perfect are the entire scope of good sound. The author is clearly someone who appreciates good audio, with mid-range headphones.
The smaller part of the community acknowledges that beauty is in the eye/ear of the beholder/listener.
When it comes to headphones, though, audio is to some degree already spatial. Apple does seem to be spreading butter on butter here.
> Listening to some modern pop with lots of synths and layers upon layers of backing vocals, the experience is much better.
I've listened to quite a few pop songs I know, and without fail they sound much muddier in Atmos. Vocals are recessed, the bass loses definition. The improvement in sound stage feels better when you switch back and forth, but paradoxically the sound quality is worse.
There's nothing wrong with spatial audio; it's beautiful. There's something terribly wrong with Apple trying to force audio that wasn't meant to be spatial into being spatial.
If the original artist intended the audio to be spatial, it has enormous potential and I'm pretty much sure there will be more and more artists doing that, creating a new category. But taking our good ol' perfectly-sounding stereo songs and turning them into spatial just for the sake of marketing, not good.
Wait, are artists being forced to do this? Or are you referring to old, dead artists whose music is being converted? If the latter, then it is hard to say they wouldn't have used spatial had the technology existed.
> “Just for the sake of marketing”
This seems to conflict with the first sentence where you admit it’s beautiful.
It's a bit of an insult to the original artist, in my view. When I create something, it's an expression of my personality, and the choices I make reflect my aesthetic. If you take it and remix it, that's fine, but don't put my name on it; it's "Song X (Apple Atmos Remix)" now.
Spatial audio per se is not a scam, but simulating it effectively is actually potentially rather complicated, and I too worry about how successfully Apple will implement it… Funnily enough, I had plans for a system to implement spatial audio using smartphones/headphones back in 2006, and more recently wrote an iOS app trying to implement it (the prototype/demo I made worked pretty well - it used what I learnt working on games/interactive audio design). I had toyed with starting a business over the years, but life sort of got in the way…

Before all this was announced I even (naively) wrote to Apple, sort of roundaboutly trying for a job in order to improve their spatial audio offering, believe it or not, but only got a polite 'don't call us, we'll call you' kind of reply, which was understandable… (don't be thinking differently! :) Some students I taught/supervised at the university here (I wrote the brief for one of their projects) subsequently started a company making a 3D binaural audio plugin - the business was bought by Facebook, where they were afterwards employed implementing spatial audio into some of its offerings too…

Anyhow, I don't want to bore anyone, but I'm happy to answer any questions about it (I do actually know what I'm talking about in this area, more or less, as opposed to most stuff I read on HN! :) P.S. Jens Blauert wrote (what used to be, at least) the standard reference on this:
https://mitpress.mit.edu/books/spatial-hearing-revised-editi...
From what I understand, to really get spatial audio from stereo headphones, you need to use an HRTF (head-related transfer function) specific to the person.

There's an open dataset of 50 different HRTFs [1] and a long video to compare them [2]. For me personally, only 2 of those 50 samples actually vaguely sound like audio is ever coming from in front of me. Left, right, and behind mostly work, but for most samples, when the sound should be coming from in front of me it instead comes from behind me or from above me.

So it shouldn't really be possible to get good spatial audio without 3D-scanning a person's head, or at least making them go through a calibration step where they rate "where is the sound coming from?". And it definitely shouldn't be possible to get any good results by pre-mixing the audio down to stereo before a user-specific transform is applied.

[1]: http://recherche.ircam.fr/equipes/salles/listen/index.html [2]: https://www.youtube.com/watch?v=VCXQp7swp5k
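The mechanism itself is simple to sketch: binaural rendering is just convolving a mono source with a left/right HRIR (head-related impulse response) pair. A minimal illustration, with crude synthetic impulse responses standing in for the measured, person-specific ones a dataset like the above provides:

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono signal to stereo by convolving it with a
    head-related impulse response (HRIR) pair."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

fs = 44100
# Toy stand-in for a measured HRIR: a source on the listener's left
# reaches the right ear slightly later (ITD) and quieter (ILD).
hrir_left = np.zeros(64); hrir_left[0] = 1.0
hrir_right = np.zeros(64); hrir_right[30] = 0.6  # ~0.7 ms later, quieter

mono = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
stereo = binauralize(mono, hrir_left, hrir_right)
```

The whole point above is that real impulse responses differ per person (ear shape, head size), which is exactly what a generic HRTF can't capture.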
Some mixes are very off, but I’ve loved most of them so far.
In fact I hope that more artists start recording and mastering for this from the start going forward.
That said, as a layman non-audiophile who doesn’t know the technical engineering details, I can’t help but wonder if the biggest win for me is an apparent lack of compression.
I don’t mean “bitrate and file size” compression, but rather the “take the soft stuff and loud stuff and cram them together” compression.
The thing I’ve hated about music from around 2010 onwards is just how much all the instruments and vocals blend together, especially in rock and pop music where all of the drums become as “snappy” as the snare and none of the instruments sound distinct. Everything sounds crisp, to the point that nothing sounds crisp.
At times, and with some albums, it actually becomes exhausting to listen to.
I noticed immediately with Apple’s Atmos mixes I have to crank the volume up a couple extra notches, but when I do I hear everything more distinctly, much like many of the albums from the 70s, 80s and 90s before everything had to be compressed and made louder.
But, like I said, I don’t know what I’m talking about or if there’s anything to this, but I’m so far getting more out of the average Atmos mix than I am of the average stereo mix.
Then the question becomes, if the artist intended for everything to sound one way but I enjoy it more the other way, who’s right?
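For anyone curious, the "cram them together" kind of compression described above is easy to sketch. A toy static, hard-knee compressor (made-up threshold and ratio values; real compressors add attack/release smoothing):

```python
import numpy as np

def compress(x, threshold_db=-20.0, ratio=4.0):
    """Scale down anything above the threshold by `ratio`.
    Values below the threshold pass through unchanged."""
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-9))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * 10 ** (gain_db / 20)

quiet = np.array([0.05])
loud = np.array([0.9])
print(loud[0] / quiet[0])                        # ~18x apart before
print(compress(loud)[0] / compress(quiet)[0])    # much closer after
```

Push the threshold down and the ratio up across a whole mix and everything ends up at roughly the same level - which is the "nothing sounds distinct" effect described above.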
Download foobar2000. Go to Preferences -> Components -> "Install..." and install the Dolby Plugin.
Then get your "dolbyhph.dll". Unfortunately you need to find it on the internet, as it is technically copyrighted and distribution of that file is probably not allowed. SHA1 for dolbyhph.dll v1.20.0.276 is 819FC1EE87B15996B89328061693F4D37FD7DB39
Then go to Preferences -> Playback -> DSP Manager.
Add "Convert stereo to 4 channels" (sounds better and a little closer to the original imo) and "Dolby Headphone" to your active DSPs.
Click on the 3 dots next to "Dolby Headphone" to open its configuration. Select your "dolbyhph.dll", Room model "DH2", lower amplification to ~70% (avoids clipping), no dynamic compression.
Apply, and then listen to any stereo song you want. Enjoy!
Also try experimenting with the DSPs (only Dolby, 4channels+Dolby, Upmix5.1+Dolby, ...) - they change live while you play a song.
According to Apple, wireless headphones like the AirPods do support spatial audio. No idea where the author got the information that they wouldn't.

Apart from that, I get how an audio engineer would think that way, but for me, and I suppose most music listeners, what's important is that the sound is great. And personally I really like spatial audio.
Being misinformed. Apple Lossless HD Audio, which was recently announced, is not currently supported by wireless (read: Bluetooth) headphones.
I don't know how the downmix for headphones is created in this case, but if it spatializes the individual objects correctly, this removes the in-head-localization artifact that unprocessed stereo on headphones suffers from naturally, especially for sounds with a center pan. We have just trained ourselves to accept that headphones make sounds appear inside our heads. And I am pretty certain that the author is mistaking the acceptance of this artifact for a "prominent presence" of the vocals in the demo tracks.
Listening to a track that was intended for a stereo loudspeaker pair on headphones always creates a degraded experience if there is no further processing of the audio signal. Judging spatial audio without consideration of these differences leads to weird (wrong?) conclusions.
And why does it matter how expensive a certain pair of headphones is? If I want to know the price tag I can look it up!
> And why does it matter how expensive a certain pair of headphones is?
“Grown-ups love figures. When you describe a new friend to them, they never ask you about the important things. They never say ‘What's his voice like? What are his favourite games? Does he collect butterflies?’ Instead they demand ‘How old is he? How many brothers has he? How much does he weigh? How much does his father earn?’ Only then do they feel they know him. If you say to the grown-ups: ‘I've seen a lovely house made of pink brick, with geraniums in the windows and doves on the roof’, they are unable to picture such a house. You must say: ‘I saw a house that cost a hundred thousand francs.’ Then they cry out: ‘How pretty!’”
For a more balanced perspective, read the Verge's review. [1] My experience so far matches theirs: the impact varies by album/track, and in some cases, the lossless version remains better.

[1]: https://www.theverge.com/2021/6/9/22525028/apple-music-spati...
It isn't really that central to TFA's main points, but I find it very disappointing that the audio tech world seems to be settling for Dolby Atmos. Once again the choice is made to use licensable technology in preference over non-licensable, libre technology (Ambisonics). Both have their pros and cons, but only one of them is associated with a license-based revenue stream.
Sadly it's probably the patent-pool monopoly using its leverage. No H.264/H.265 without HDMI, HDCP, and all that crap; you'll probably find Ambisonics is locked out for not being a pool contributor.
Yes - it does seem a shame and potentially a loss to developers/the open source community? Surely an open standard like Ambisonics would have had more of a chance of being widely adopted/supported and therefore perhaps longer-lived anyhow (I was just thinking about MIDI)… I guess that's not the end goal for Apple?
From that post it doesn't sound like Atmos itself is the problem, but either the mixing engineers or the source material.
>I compared Spatial Audio tracks to their HD equivalents on Amazon Music and I found exactly what one writer said: the vocal gets lost. Instead of being up front and in your face, it’s buried more in the mix.
That complaint there sounds more like a mixing problem than a technology problem.
It sounds a lot like the original issues that came up when people first started converting music from mono to stereo, or stereo to surround; it's a new technology that requires some time for people to learn the ins and outs of. The early mixes released using this will likely be a mixed bag - like pretty much every other time people have tried to take music written with fewer tracks and make it sound like it was recorded with more.
Just because of the sheer number of tracks there are in the world it is clear the quality here will vary wildly.
I really want to hear jazz like this. Specifically trios/quartets. Even if it is not the original and I can hear each musician separately I think it would be worth it.
Highly recommend listening to jazz in spatial. There's such nice separation between the instruments. With the head tracking in iOS 15 you can isolate stuff a little too by turning your ear towards certain sounds (a little gimmicky I'll admit, but it does let you experience parts of the track you may have missed before).
They have Jazz with spatial. Apple Music on android doesn't have spatial audio support annoyingly but Tidal does if your phone supports Dolby Atmos so I've been checking out a trial of that instead.
At least the Marvin Gaye track is not a good example of just the Atmos difference. It is a completely remastered version that sounds nothing like the original. It also is 3.5 dB louder, which naturally would make the listener think it sounds better.
Overall, I think the OP was on point. It seemed like the mixing engineers thought that the additional spatialization provided by Atmos gives them the license to crank up the volume of all the supporting instruments (e.g. congas). This indeed drowns out the main vocal and puts it on the listener to focus on it (which is of course now easier due to the spatial separation with the instruments).
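As an aside, that kind of level difference is easy to measure, and worth normalizing out before A/B-ing mixes. A rough sketch of checking the RMS level difference between two versions (synthetic signals here; real masters would be decoded from files first):

```python
import numpy as np

def rms_db_difference(a, b):
    """Level difference between two signals, in dB, from their RMS."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(a) / rms(b))

t = np.arange(44100) / 44100
original = np.sin(2 * np.pi * 440 * t)
remaster = original * 10 ** (3.5 / 20)  # simulate a +3.5 dB hotter master

print(round(rms_db_difference(remaster, original), 1))  # 3.5
```

Louder almost always wins a casual comparison, which is why honest A/B tests match levels first.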
I’ve tested a bunch of the spatial audio tracks on Apple Music with multiple high-end headphones - you can make it work with any headphones if you set Atmos to “always on” in Apple Music. They sounded alright, but I think you’ll need a real Dolby Atmos system with many speakers to really experience it properly. On headphones it just gets rendered via a generic binaural HRTF (with 3DoF head tracking on the AirPods Pro/Max).
It’s alright but not a game changer for me at least. Not that different to a regular binaural recording on headphones.
Nice to have Dolby Atmos baked into Logic Pro when it arrives, for creating Atmos mixes easily.
Slightly related, but I’ve had single-sided deafness all my life (which means all I’ve experienced is mono audio), and now there's this shift from stereo to spatial. I was wondering if some kind HN users could fill me in on what I’m missing out on (by listening to stereo tracks in mono and observing how it sounds, or anything that could give me a picture of that - is it crowded? Distorted? Etc.). The only pertinent study I could find on this is https://pubmed.ncbi.nlm.nih.gov/28534734/
I assume you have two eyes, right? What happens when you close one eye and only have monocular vision?
Not much, but it's a lot harder to judge distances correctly. You still can, if you use outside knowledge and perspective, but it's less accurate.
Same for stereo hearing. With two ears it's easier to locate where a sound is coming from. With one ear you can tell if something at constant volume is moving away or moving towards you based on loudness. With two ears you can tell if something at constant volume is stationary or circling around you.
Unfortunately I don't think there are words to describe it. You "just know", because it's an unconscious brain thing.
You could sort of simulate it by rotating your head 180 degrees and noticing how the sound changes. People with two working ears can do this without moving their head.
The effect for music is mostly: it's easier to separate instruments even if they are the same volume, if they are in different spatial locations so the brain can filter against it. Stuff can sound less "cluttered, muddy", but it doesn't help as much as you think in recordings because you as a listener can't move the microphone. I think it would be a bigger deal in small live shows, or music performed in virtual reality where a 3D engine can calculate the audio delay appropriately for each ear, and that difference is relevant because you the listener are close to the musician and possibly moving relative to them.
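The arrival-time cue mentioned above is tiny but measurable. A sketch of the classic Woodworth approximation for interaural time difference, assuming a spherical head of ~8.75 cm radius (both numbers are textbook defaults, not anything specific to these products):

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth model: the extra path length around the head,
    divided by the speed of sound, for a source at a given azimuth."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

for azimuth in (0, 30, 60, 90):
    print(azimuth, round(itd_seconds(azimuth) * 1e6), "us")
```

The largest difference (source directly to one side) works out to roughly 0.65 ms - the brain resolves differences far smaller than that, which is why it "just knows".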
There have been some wonderful recordings that were designed solely/primarily for mono playback (early Motown/rock 'n' roll/blues, for example) that IMHO actually kind of benefit from the lack of stereo separation. If mixed well, the instruments can blend really well and work as one unit, and there's a kind of energy and punch to the speaker(s) working as one...

Early stereo records often had rather drastic left/right panning (e.g. some Beatles stuff) or had to hedge their bets in case they were played back through, say, mono radios or badly set-up stereos… Something similar is still the case when engineers think about what their track might sound like coming through a crappy mono phone speaker with no bass at 128kbps, or what have you. The advice I’ve always heard is that you should make sure your mix would ‘fold down’, so to speak, to mono anyhow. So you should never rely on the listener being in the sweet spot (think of a club where an audience member could be stood directly in front of one speaker but barely hear another, or some listener's weird home setup with one speaker balanced on a bookcase and the other on a table). In most environments you get complicated reflections and build-ups/absorptions of various frequencies due to the acoustics anyhow…

Although with single-sided deafness you won’t get the sense of direction that comes from the delay between the sound hitting one ear before the other, I imagine you would still experience the attenuation of certain frequencies as the sound moves around your head, the height-dependent effects and so on. I suppose it’s not a million miles away from closing one eye - monoscopic as opposed to stereoscopic? Bass frequencies are mostly experienced more or less monophonically, so you’re not missing out there... I wonder if reverb effects/using impulse responses of different environments and volume adjustments might also help give some of the impression?
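The "fold down to mono" check above is easy to demonstrate with the common constant-power pan law. Note the textbook trade-off: the stereo level stays constant wherever you pan, but a centred source sums about 3 dB hotter in mono than a hard-panned one - one of the things engineers listen for:

```python
import numpy as np

def equal_power_pan(mono, pan):
    """pan in [-1, 1]: -1 hard left, 0 centre, +1 hard right.
    cos/sin gains keep left^2 + right^2 constant across the image."""
    angle = (pan + 1) * np.pi / 4  # map pan to [0, pi/2]
    return np.cos(angle) * mono, np.sin(angle) * mono

signal = np.ones(100)  # flat test signal
for pan in (-1.0, 0.0, 1.0):
    left, right = equal_power_pan(signal, pan)
    stereo_power = np.mean(left**2 + right**2)      # same for every pan
    mono_rms = np.sqrt(np.mean((left + right)**2))  # centre comes out hotter
    print(pan, round(stereo_power, 3), round(mono_rms, 3))
```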
What the other comments said, but keep in mind that it's not very precise in normal circumstances: you have an extremely rough estimate of direction (along all axes) and distance.
You can increase the accuracy if you are afforded the opportunity to concentrate on the sound.
Related question, but does spatial audio actually send separate Atmos audio streams with positioning data to the client? How does this impact data sizes?
I have listened to some of the SA samples and they sound good, but I still don't get the technology. If it's sending separate streams and spatially mixing them at the client, it's still ending up with two channels of audio, so why would it differ from normal stereo audio which can be mixed in such a way? And if it doesn't, then it's just a production process?
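As I understand it (this is the general idea of object-based audio, not Apple's exact pipeline), Atmos carries mono stems plus position metadata and the client does the mixing. That's the difference from a baked stereo file: because positions arrive as data, the renderer can re-pan for head tracking or for whatever speaker layout it finds. A toy sketch, with single samples standing in for audio buffers and a simple cos/sin pan:

```python
import math

def render_objects(objects, head_yaw_deg=0.0):
    """Mix (sample, azimuth_deg) 'audio objects' down to stereo.
    Positions are metadata, so the scene can be rotated to follow
    the listener's head before panning -- a fixed mix cannot."""
    left = right = 0.0
    for sample, azimuth_deg in objects:
        rel = math.radians(azimuth_deg - head_yaw_deg)
        angle = (math.sin(rel) + 1) * math.pi / 4  # -90..+90 deg -> L..R
        left += math.cos(angle) * sample
        right += math.sin(angle) * sample
    return left, right

scene = [(0.8, -90.0), (0.5, 0.0)]  # one object hard left, one centre
print(render_objects(scene))                      # head facing forward
print(render_objects(scene, head_yaw_deg=-90.0))  # head turned to the left
```

The end product on headphones is still two channels, as the parent says - the difference is that they're computed per listener, per moment, rather than fixed at mastering time.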
Indeed, it reminds me a bit of the Q-Sound on Madonna's Immaculate Collection. Still the most impressive stereo headphone experience I've enjoyed.
Dolby Atmos makes loads of sense for oddball speaker arrangements -- you tell it that you have three speakers layered above, five spread out across the back, etc, and it can process to that arrangement optimally (versus say 5.1 that was geared for a very specific arrangement). I don't understand how it is relevant for headphones, which is specifically what it is targeting in this case.
It seems weird to me that people would disregard the input of the artist and producer who will have painstakingly mixed the original - surely that is as much a part of the art form as the music in many cases? I'm thinking of records like "Bitches Brew" where you couldn't just go back to the master recordings and recreate them without Miles in the room.
Though I guess the early stereo mixes for e.g. The Beatles were often done without the band's input, simply because no one was interested in the stereo mix - it was all about the mono mix. 2nd engineer Richard Lush said: “The only real version of Sgt. Pepper’s Lonely Hearts Club Band is the mono version.”
I'm not sure what a good analogy might be... a gallery displaying a replica of some famous painting but deciding that part of it needs to be brightened up? I think most people would find that really odd.
I like to listen to old live music recordings on archive.org. One thing I’ve grown to love about such recordings is the spatial nature.
Someone is standing in the audience with a stereo recorder, and when you close your eyes and listen to the recording, there is the sense that you are located in the venue at that same time and place. You can hear people talk and clap to your left or right. The experience of the music is very close to what someone would have felt if they closed their eyes in the arena. It’s a form of time travel that a video recording just can’t touch, because the experience of looking at a screen is so much different from the experience of looking at something in real life. That gap is much smaller with live audio recordings.
I’m curious if folks will be able to go back to these old recordings to improve the spatial nature of the audio.
On a side note I would much rather see spatial audio applied to resolve the unidirectional audio mess of video-conferencing.
The ability to resolve who is saying what in simultaneous conversation is in my opinion a much bigger problem than trying to get really high resolution video during calls.
From what Apple says about spatial audio, you cannot just run it through a converter. You need the individual original tracks, before mixing, to be able to do it properly. So I’m not sure how a handful of chop shops are doing all that.
[+] [-] basisword|4 years ago|reply
Apple has done quite a poor job showcasing it so far. Some of the tracks they've put in their playlists sound like crap. On the otherhand some of them sound fantastic. Jazz in Atmos is wonderful. It's also important to note that 'head tracking' comes with iOS 15 and in my experience improves spatial further.
A lot of music made for stereo won't remix well. If you listen to most of the rock tracks they don't sound good because they don't have a lot going on. 3 or 4 instruments + 2 or 3 vocals tracks aren't going to take advantage of spatial. Listening to some modern pop with lots of synths and layers upon layers of backing vocals, the experience is much better.
Atmos/spatial is a new tool like any other. People will write and record songs that work for it and take advantage of it. They'll use it creatively. That's what will be interesting. Most remixed songs from the 60's aren't the least bit interesting but the possibilities for the future are.
I could be wrong and it could end up largely ignored, time will tell. But there is the opportunity for a lot of creativity regardless of what cranky old music critics think.
[+] [-] globular-toast|4 years ago|reply
[+] [-] rodgerd|4 years ago|reply
Well, the whole thing is an anonymous email. And the "remixing by the truckload" doesn't seem in evidence given the low number of tracks released thus far; there are many more available on Tidal in Atmos currently.
> Jazz in Atmos is wonderful.
I've been very impressed by the classical tracks thus far, which have mostly used a hint of surround to let you feel as though you're at the front of a concert, with the players in a crescnt moon; and a bit of a delay in the rears to create the sensation of space echoing behind.
Some of the rock/pop mixes have been very interesting; the Linkin Park mixes use the surround/rear to create a "voices inside my head" effect where it makes sense on with the lyrics. Robert Palmer actually sounds like a much better singer on the remixes than the originals, with the guitars pushed out to the side rather than stomping all over the singer (whether they applied a little autotune as well, who knows?). I don't much like The Doors, but the use of surround in Riders on the Storm is remarkable.
There are some very poor remixes too, but that's the case for any re-engineering. As for the idea that only the original engineer or producer can make something work, well, that's just idiotic on a number of levels.
[+] [-] jtbayly|4 years ago|reply
Then why start by screwing up the old music?
[+] [-] Cybotron5000|4 years ago|reply
[+] [-] skywhopper|4 years ago|reply
[+] [-] lux|4 years ago|reply
[+] [-] zamalek|4 years ago|reply
The smaller part of the community acknowledges that beauty is in the eye/ear of the beholder/listener.
When it comes to headphones, though, audio is to some degree already spatial. Apple does seem to be spreading butter on butter here.
[+] [-] ricardobeat|4 years ago|reply
I've listened to quite a few pop songs I know, and without fail they sound much muddier in Atmos. Vocals are recessed, the bass loses definition. The improvement in sound stage feels better when you switch back and forth, but paradoxically the sound quality is worse.
[+] [-] can16358p|4 years ago|reply
If the original artist intended the audio to be spatial, it has enormous potential and I'm pretty much sure there will be more and more artists doing that, creating a new category. But taking our good ol' perfectly-sounding stereo songs and turning them into spatial just for the sake of marketing, not good.
[+] [-] voisin|4 years ago|reply
> “Just for the sake of marketing”
This seems to conflict with the first sentence where you admit it’s beautiful.
[+] [-] StavrosK|4 years ago|reply
[+] [-] Cybotron5000|4 years ago|reply
[+] [-] phiresky|4 years ago|reply
There's an open dataset of 50 different HRTFs [1] and a long video to compare them [2]. For me personally, only 2 of those 50 samples actually vaguely sound like audio is ever coming from in front of me. Left, right, and behind mostly works, but for most samples when it should be coming from in front of me it either comes from behind me or from above me.
So it shouldn't be really possible to get good spatial audio without 3d-scanning a persons head or at least making them go through a calibration step where they rate "where is the sound coming from?" . And it definitely shouldn't be possible to get any good results by pre-mixing the audio down to stereo before a user-specific transform is applied.
[1]: http://recherche.ircam.fr/equipes/salles/listen/index.html [2]: https://www.youtube.com/watch?v=VCXQp7swp5k
[+] [-] tinus_hn|4 years ago|reply
[+] [-] kiawe_fire|4 years ago|reply
In fact I hope that more artists start recording and mastering for this from the start going forward.
That said, as a layman non-audiophile who doesn’t know the technical engineering details, I can’t help but wonder if the biggest win for me is an apparent lack of compression.
I don’t mean “bitrate and file size” compression, but rather the “take the soft stuff and loud stuff and cram them together” compression.
The thing I’ve hated about music from around 2010 onwards is just how much all the instruments and vocals blend together, especially in rock and pop music where all of the drums become as “snappy” as the snare and none of the instruments sound distinct. Everything sounds crisp, to the point that nothing sounds crisp.
At times, and with some albums, it actually becomes exhausting to listen to.
I noticed immediately with Apple’s Atmos mixes I have to crank the volume up a couple extra notches, but when I do I hear everything more distinctly, much like many of the albums from the 70s, 80s and 90s before everything had to be compressed and made louder.
But, like I said, I don’t know what I’m talking about or if there’s anything to this, but I’m so far getting more out of the average Atmos mix than I am of the average stereo mix.
Then the question becomes, if the artist intended for everything to sound one way but I enjoy it more the other way, who’s right?
[+] [-] mr_sturd|4 years ago|reply
[+] [-] SirCypher|4 years ago|reply
Download foobar2000. Go to Preferences -> Components -> "Install..." and install the Dolby Plugin.
Then get your "dolbyhph.dll". Unfortunately you need to find it on the internet, as it is technically copyrighted and distribution of that file is probably not allowed. SHA1 for dolbyhph.dll v1.20.0.276 is 819FC1EE87B15996B89328061693F4D37FD7DB39
Then go to Preferences -> Playback -> DSP Manager. Add "Convert stereo to 4 channels" (sounds better and a little closer to the original imo) and "Dolby Headphone" to your active DSPs. Click on the 3 dots next to "Dolby Headphone" to open its configuration. Select your "dolbyhph.dll", Room model "DH2", lower amplification to ~70% (avoids clipping), no dynamic compression.
Apply, and then listen to any stereo song you want. Enjoy! Also try to experiment the DSPs (only Dolby, 4channels+Dolby, Upmix5.1+Dolby, ...) - they change live while you play a song.
[+] [-] bristleworm|4 years ago|reply
Apart from that, I get how an audio engineer would think that way, but for me, and I suppose most music listeners, what's important is that the sound is great. And personally I really like spacial audio.
[+] [-] balls187|4 years ago|reply
Being misinformed.
Apple Lossless HD Audio, which was recently announced, is not currently supported by wireless (read: bluetooth) headphones.
gmueckl|4 years ago|reply
Listening to a track that was intended for a stereo loudspeaker pair on headphones always creates a degraded experience if there is no further processing of the audio signal. Judging spatial audio without consideration of these differences leads to weird (wrong?) conclusions.
And why does it matter how expensive a certain pair of headphones is? If I want to know the price tag I can look it up!
aasasd|4 years ago|reply
“Grown-ups love figures. When you describe a new friend to them, they never ask you about the important things. They never say ‘What's his voice like? What are his favourite games? Does he collect butterflies?’ Instead they demand ‘How old is he? How many brothers has he? How much does he weigh? How much does his father earn?’ Only then do they feel they know him. If you say to the grown-ups: ‘I've seen a lovely house made of pink brick, with geraniums in the windows and doves on the roof’, they are unable to picture such a house. You must say: ‘I saw a house that cost a hundred thousand francs.’ Then they cry out: ‘How pretty!’”
Cybotron5000|4 years ago|reply
thebiss|4 years ago|reply
https://www.theverge.com/2021/6/9/22525028/apple-music-spati...
PaulDavisThe1st|4 years ago|reply
rodgerd|4 years ago|reply
Cybotron5000|4 years ago|reply
grawprog|4 years ago|reply
>I compared Spatial Audio tracks to their HD equivalents on Amazon Music and I found exactly what one writer said: the vocal gets lost. Instead of being up front and in your face, it’s buried more in the mix.
That complaint there sounds more like a mixing problem than a technology problem.
It sounds a lot like the original issues that came up when people first started converting music from mono to stereo, or stereo to surround: it's a new technology that requires some time for people to learn the ins and outs of. The early mixes released using this will likely be a mixed bag, like pretty much every other time people have tried to take music written with fewer tracks and make it sound like it was recorded with more.
tonetheman|4 years ago|reply
I really want to hear jazz like this. Specifically trios/quartets. Even if it is not the original and I can hear each musician separately I think it would be worth it.
basisword|4 years ago|reply
salamandersauce|4 years ago|reply
rodgerd|4 years ago|reply
EMM_386|4 years ago|reply
https://www.dolby.com/technologies/dolby-atmos/
It's pretty impressive with headphones.
mkbosmans|4 years ago|reply
Overall, I think the OP was on point. It seemed like the mixing engineers thought that the additional spatialization provided by Atmos gives them the license to crank up the volume of all the supporting instruments (e.g. congas). This indeed drowns out the main vocal and puts it on the listener to focus on it (which is of course now easier due to the spatial separation with the instruments).
dharma1|4 years ago|reply
It’s alright but not a game changer for me at least. Not that different to a regular binaural recording on headphones.
Nice to have Dolby Atmos baked into Logic Pro when it arrives, for creating Atmos mixes easily.
sidechaining|4 years ago|reply
krapht|4 years ago|reply
Not much, but it's a lot harder to judge distances correctly. You still can, if you use outside knowledge and perspective, but it's less accurate.
Same for stereo hearing. With two ears it's easier to locate where a sound is coming from. With one ear you can tell if something at constant volume is moving away or moving towards you based on loudness. With two ears you can tell if something at constant volume is stationary or circling around you.
Unfortunately I don't think there are words to describe it. You "just know", because it's an unconscious brain thing.
You could sort of simulate it by rotating your head 180 degrees and noticing how the sound changes. People with two working ears can do this without moving their head.
The effect for music is mostly: it's easier to separate instruments even if they are the same volume, if they are in different spatial locations so the brain can filter against it. Stuff can sound less "cluttered, muddy", but it doesn't help as much as you think in recordings because you as a listener can't move the microphone. I think it would be a bigger deal in small live shows, or music performed in virtual reality where a 3D engine can calculate the audio delay appropriately for each ear, and that difference is relevant because you the listener are close to the musician and possibly moving relative to them.
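The "unconscious brain thing" described above is largely interaural time difference: the extra distance sound travels to reach the far ear. A toy sketch of the per-ear delay a VR engine would compute, using Woodworth's spherical-head approximation with assumed round-number constants (nothing like a production HRTF implementation):

```python
import math

HEAD_RADIUS_M = 0.0875     # average adult head radius (assumed round number)
SPEED_OF_SOUND = 343.0     # m/s in air at room temperature

def itd_seconds(azimuth_deg):
    """Interaural time difference for a distant source at a given azimuth
    (0 = straight ahead, 90 = directly to one side), using Woodworth's
    spherical-head approximation: delay = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# Straight ahead there is no delay between the ears, which is one reason
# front/back confusion is common; at 90 degrees the far ear hears the
# sound roughly 0.66 ms late, and the brain turns that into direction.
ahead = itd_seconds(0)     # 0.0 s
side = itd_seconds(90)     # ~0.00066 s
```

A 3D engine doing per-ear delay, as the comment suggests, would be evaluating something like this continuously as the listener moves relative to the source.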
Cybotron5000|4 years ago|reply
zamalek|4 years ago|reply
You can increase the accuracy if you are afforded the opportunity to concentrate on the sound.
BugsJustFindMe|4 years ago|reply
defaultname|4 years ago|reply
I have listened to some of the SA samples and they sound good, but I still don't get the technology. If it's sending separate streams and spatially mixing them at the client, it's still ending up with two channels of audio, so why would it differ from normal stereo audio which can be mixed in such a way? And if it doesn't, then it's just a production process?
Indeed, it reminds me a bit of the Q-Sound used on Madonna's Immaculate Collection. Still the most impressive stereo headphone experience I've enjoyed.
Dolby Atmos makes loads of sense for oddball speaker arrangements -- you tell it that you have three speakers layered above, five spread out across the back, etc, and it can process to that arrangement optimally (versus say 5.1 that was geared for a very specific arrangement). I don't understand how it is relevant for headphones, which is specifically what it is targeting in this case.
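One way to see the distinction: a channel-based mix ships fixed speaker feeds, while an object-based format ships sounds plus position metadata and lets the playback device compute speaker gains for whatever layout it actually has. A toy constant-power panner illustrating that idea (my own simplification; the real Atmos renderer is proprietary and far more sophisticated):

```python
import math

def render_gains(object_azimuth, speaker_azimuths):
    """Render one audio 'object' (position metadata, not a fixed channel)
    to an arbitrary ring of speakers, given as azimuths in degrees:
    find the pair of speakers bracketing the object and split the signal
    between them with a constant-power pan. Assumes at least two speakers."""
    spk = sorted(speaker_azimuths)
    gains = {a: 0.0 for a in spk}
    for i in range(len(spk)):
        lo, hi = spk[i], spk[(i + 1) % len(spk)]
        span = (hi - lo) % 360          # angular width of this pair
        off = (object_azimuth - lo) % 360
        if off <= span:                 # object sits between lo and hi
            frac = off / span
            gains[lo] = math.cos(frac * math.pi / 2)
            gains[hi] = math.sin(frac * math.pi / 2)
            return gains
    return gains

# The same object at 30 degrees, rendered to two different layouts:
stereo = render_gains(30, [-30, 30])                  # all on the right speaker
five_ch = render_gains(30, [-110, -30, 0, 30, 110])   # likewise right front
```

On headphones the renderer does the same kind of thing but resolves the object positions through an HRTF into two binaural channels, which is why the result can differ from an ordinary stereo mix even though two channels come out the end.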
bloke_zero|4 years ago|reply
Though I guess the early stereo mixes for e.g. The Beatles were often done without the band's input, simply because no one was interested in the stereo mix - it was all about the mono mix. 2nd engineer Richard Lush said “The only real version of Sgt. Pepper’s Lonely Hearts Club Band is the mono version.”
https://www.analogplanet.com/content/sgt-peppers-lonely-hear...
The format controversy runs and runs!
chrisseaton|4 years ago|reply
mackrevinack|4 years ago|reply
snikeris|4 years ago|reply
Someone is standing in the audience with a stereo recorder, and when you close your eyes and listen to the recording, there is the sense that you are located in the venue at that same time and place. You can hear people talk and clap to your left or right. The experience of the music is very close to what someone would have felt if they closed their eyes in the arena. It’s a form of time travel that a video recording just can’t touch, because the experience of looking at a screen is so different from the experience of looking at something in real life. That disconnect is much smaller with a live audio recording.
I’m curious if folks will be able to go back to these old recordings to improve the spatial nature of the audio.
DoingIsLearning|4 years ago|reply
The ability to resolve who is saying what in simultaneous conversation is in my opinion a much bigger problem than trying to get really high resolution video during calls.
thies226j|4 years ago|reply
m3kw9|4 years ago|reply