Ask HN: Whatever happened to dedicated sound cards?
220 points | Dracophoenix | 3 years ago | reply
So what other reasons could have caused the decline in interest? Was there nothing left to improve upon? Were there improvements on the software side that made the hardware redundant and/or useless? Is there any company besides Creative, however large or small, still carrying the torch for innovation in this space?
[+] [-] speeder|3 years ago|reply
When DVDs and HDMI were becoming popular and Windows Vista was launched, a lot of restrictions were put on drivers. I saw many people defending them, claiming it was for better stability, avoiding blue screens, and so on.
But a major thing the restrictions did was restrain several of the sound cards' features, most notably their 3D audio calculations, which were then just starting to take off. People were making 3D audio APIs that intentionally mirrored 3D graphics APIs, with the idea that you would have both a GPU and a 3D audio processor, and you would have games where the audio was calculated with reflections, refractions and diffractions...
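To make that concrete, here's the kind of math such a processor would have accelerated: a first-order image-source reflection, where you mirror the source across a wall and turn the path length into a delay and a gain. A minimal illustrative sketch in C++ (all names are mine, not any vendor's actual API):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dist(Vec3 a, Vec3 b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Wall given as the plane dot(n, p) = d, with unit normal n.
    static Vec3 mirror(Vec3 s, Vec3 n, float d) {
        float k = 2.0f * (s.x * n.x + s.y * n.y + s.z * n.z - d);
        return { s.x - k * n.x, s.y - k * n.y, s.z - k * n.z };
    }

    // One early reflection: delay and gain for the bounced path.
    static void reflection(Vec3 src, Vec3 listener, Vec3 n, float d,
                           float& delaySec, float& gain) {
        Vec3 img = mirror(src, n, d);       // virtual source behind the wall
        float path = dist(img, listener);   // reflected path length, meters
        delaySec = path / 343.0f;           // speed of sound ~343 m/s
        gain = 1.0f / (1.0f + path);        // crude distance attenuation
    }

Do that for every wall (and recursively for higher orders) and you have early reflections; the cards of that era promised to do this, plus occlusion and diffraction, in dedicated silicon.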
After that, the only use of sound cards became what the drivers still allowed you to do, which was mostly playing sampled audio, so sound cards became kinda pointless.
Gone are the days of 3D audio chips, or of sound cards full of synthesizers that could create new audio on the fly.
Yamaha still manufactures sound card chips, and their current ones have far fewer features than the ones they made during the sound card era.
EDIT: I also forgot to point out that the same restrictions kinda killed analog video too. Before the restrictions, nothing prevented people from sending arbitrary data to analog monitors, so you could have monitors with non-standard resolutions, non-square pixels, unusual bit depths (for example, SGI made some monitors that happily accepted 48 bits of color), or no pixels at all (think Vectrex), and so on. All of this died, and in a sense it also affected video development: some features that video cards were getting at the time were removed, and hardware design moved down a narrower path, more compatible with MS rules.
As for what the restrictions have to do with DRM: the point was to not let people intercept audio and video as perfect-quality analog signals, since that would have been an easy way around the DRM built into HDMI.
[+] [-] joe91|3 years ago|reply
3D sound and other processing got baked into middleware for games because it became trivial to do all of the processing in software - and the processing became more advanced than anything that the sound card vendors were offering (and they didn't move quickly enough anyway).
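For a sense of how trivial it became: a fully software positional source is a handful of OpenAL (e.g. OpenAL Soft) calls. Trimmed sketch, assuming the device/context and PCM buffer are already set up; the helper name is mine:

    #include <AL/al.h>

    // Place one mono source in 3D space; the software mixer applies
    // distance attenuation, panning, and optionally HRTF on the CPU.
    void playAt(ALuint buffer, float x, float y, float z) {
        ALuint src;
        alGenSources(1, &src);
        alSourcei(src, AL_BUFFER, (ALint)buffer);    // PCM uploaded earlier
        alSource3f(src, AL_POSITION, x, y, z);       // world-space position
        alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);
        alSourcePlay(src);
    }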
Pro audio vastly progressed past anything that is possible to provide in fixed silicon. For input, dedicated USB (and ethernet) audio interfaces progressed to the point where it would be ridiculous to provide such functionality on a general "sound card".
It's just evolution - there just isn't a compelling enough niche for a dedicated sound card any more.
[+] [-] rocket_surgeron|3 years ago|reply
Modern CPUs can either do or emulate this, probably using less power than a sound card.
Very, very few people have their PCs connected to an AV receiver or multichannel speakers, but positional audio is still widely supported in Windows applications using XAudio2.
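For the curious, the CPU-side path looks roughly like this with X3DAudio (the positional-math companion to XAudio2). A minimal sketch; error handling and voice plumbing omitted, and the wrapper function is my own:

    #include <x3daudio.h>

    // Compute left/right gains for one mono emitter, to be applied to
    // an XAudio2 source voice via IXAudio2SourceVoice::SetOutputMatrix.
    void stereoGains(float outMatrix[2]) {
        X3DAUDIO_HANDLE instance;
        X3DAudioInitialize(SPEAKER_STEREO, X3DAUDIO_SPEED_OF_SOUND,
                           instance);

        X3DAUDIO_LISTENER listener = {};
        listener.OrientFront = { 0.0f, 0.0f, 1.0f };
        listener.OrientTop   = { 0.0f, 1.0f, 0.0f };

        X3DAUDIO_EMITTER emitter = {};
        emitter.ChannelCount        = 1;
        emitter.CurveDistanceScaler = 1.0f;
        emitter.Position            = { 3.0f, 0.0f, 4.0f }; // right, ahead

        X3DAUDIO_DSP_SETTINGS dsp = {};
        dsp.SrcChannelCount     = 1;
        dsp.DstChannelCount     = 2;
        dsp.pMatrixCoefficients = outMatrix;

        X3DAudioCalculate(instance, &listener, &emitter,
                          X3DAUDIO_CALCULATE_MATRIX, &dsp);
    }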
The reasons sound cards went away is the use cases went away:
1. People who wanted high-quality recording shifted to FireWire, and later to high-speed USB external audio interfaces. No matter how hard you try, an external metal box with multiple inputs and outputs will always beat a PCI/PCIe card inside a PC for recording. Rare use case in the recording world for sound cards.
2. Gamers who want 3d/positional audio either use headphones, find the 5.1 integrated outputs to be adequate, or like me, run a digital audio cable to a surround sound receiver. Rare use case in the gaming world for sound cards.
Dolby Atmos is awesome for positional audio in games but there are multiple less expensive and more accessible methods for surround audio nowadays. Decent positional audio can be experienced using a laptop and headphones-- no sound card required.
https://www.pcgamingwiki.com/wiki/Glossary:Surround_sound
Back in the sound card days you had to squint at the back of the box and ask "is this Creative 3D? Aureal?" Nowadays you just plug 5.1 speakers into your PC's onboard audio, tell Windows you have 5.1, and it works (mostly).
[+] [-] majormajor|3 years ago|reply
And beyond that, this is the first time I'm seeing the claim that "doing 3D audio calculations" was restricted, or that it had anything to do with intercepting pre-encoded multi-channel DVD/digital media streams. They seem completely separate from each other as far as technical pipelines go.
Sound cards as a general consumer product were dead long before Vista. The last hurrah I remember was the SB Live!/Turtle Beach Santa Cruz era, the 1998-2001 stuff; Vista didn't come out until 2007 (Longhorn was famously botched, etc...).
CPUs just got fast enough that all of that, including 3D calcs, could be done better on a common CPU by the mid-2000s. Do it on a sound card, and you have to buy a new sound card to get improvements. Do it directly in the OS or in-game, and you benefit from improvements by the OS, library, or game devs immediately.
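To illustrate how light that per-source work is on a modern CPU: the core of simple positional audio is an equal-power pan plus distance attenuation, a few multiplies per source per block. Illustrative sketch, not any particular engine's code:

    #include <algorithm>
    #include <cmath>

    // Equal-power stereo pan plus inverse-distance gain.
    void panGains(float azimuthRad,   // -pi/2 = hard left, +pi/2 = hard right
                  float distance,     // meters from the listener
                  float& left, float& right) {
        const float pi = 3.14159265f;
        float pan  = (azimuthRad + pi / 2.0f) / pi;    // map to 0..1
        float gain = 1.0f / std::max(1.0f, distance);  // clamp the near field
        left  = gain * std::cos(pan * pi / 2.0f);      // equal-power law
        right = gain * std::sin(pan * pi / 2.0f);
    }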
[+] [-] jasonwatkinspdx|3 years ago|reply
3D audio on the PC was deliberately killed by Creative.
They sued Aureal into bankruptcy, bought it in the court auction, and the day the sale closed they nuked the support website and took the drivers offline.
They used similar scummy tactics to decapitate other competitors. Then they considered their reverb-based spatial audio solution sufficient, and promptly sat on their heels doing zero innovation while collecting rent.
And then, as chip technology improved, a basic "Sound Blaster 64"-class chip became so cheap that motherboard manufacturers started bundling it in as a selling point (which made a ton of sense for non-gaming PC users, btw). Additionally, MS stepped in and provided some software spatial functionality within DirectX, as processors had improved to the point where dedicated hardware for it wasn't necessary.
Back then I worked in gamedev, and after the Aureal fiasco I briefly considered going into competition with Miles et al. with a 3D audio library, after I stumbled on some interesting papers doing Fresnel zone tracing variations as low-overhead spatial audio, but ultimately I wasn't serious about it vs. other options at the time.
[+] [-] Sharlin|3 years ago|reply
To be fair, realtime synthesis just became obsolete for most purposes once CD quality digitized audio became cheap enough to store (and later, to stream). And for musicians, once CPUs became fast enough, SW synthesis with its limitless possibilities took over from HW synthesis.
[+] [-] com2kid|3 years ago|reply
Creative drivers were a double-digit percentage of all Windows BSODs. Microsoft gave Creative plenty of time to fix their drivers; Creative never did, so sound drivers got booted from the kernel.
[+] [-] justsomehnguy|3 years ago|reply
If you've never seen a system BSOD from its sound drivers, I'm glad for you. I've seen enough sound card driver crashes to tell you that it WAS a problem. Along with network cards, video cards, TV-tuner cards and almost anything that needed a driver.
> After that, the only use of sound cards became what the drivers still allowed you to do, which was mostly playing sampled audio, so sound cards became kinda pointless.
Discrete sound cards became pointless because by 2001 almost every consumer motherboard had an AC'97-compatible audio codec on board.
So if you didn't need a super-extra-fidelity 5.1235435 sound system AND didn't want to shell out an additional ~$100 (SB Live! in 1999) or $200-300 (SB Audigy 2 in its various variants, 2003), you could just use the onboard one.
> having sound cards full of synthesizers that could create new audio on the fly.
NO THANKS: https://youtu.be/3AZI07_qts8?t=9
And this is a Creative card! I had my share of good synthesized music (because computers couldn't yet do proper digitized sound), but the tech should have died, and it did.
[+] [-] Pulcinella|3 years ago|reply
It’s a shame because I would love more audio sources to support HRTF (head related transfer function) and “ray traced” audio.
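The core of HRTF rendering is conceptually small: convolve the mono source with a measured left-ear and right-ear impulse response for its direction. A naive sketch (real engines use FFT-based convolution and interpolate between measured directions; the function names here are mine):

    #include <cstddef>
    #include <vector>

    // Direct-form convolution; fine for short impulse responses.
    std::vector<float> convolve(const std::vector<float>& x,
                                const std::vector<float>& h) {
        std::vector<float> y(x.size() + h.size() - 1, 0.0f);
        for (size_t n = 0; n < x.size(); ++n)
            for (size_t k = 0; k < h.size(); ++k)
                y[n + k] += x[n] * h[k];
        return y;
    }

    // hrirL/hrirR: head-related impulse responses for one direction.
    void renderBinaural(const std::vector<float>& mono,
                        const std::vector<float>& hrirL,
                        const std::vector<float>& hrirR,
                        std::vector<float>& outL,
                        std::vector<float>& outR) {
        outL = convolve(mono, hrirL);
        outR = convolve(mono, hrirR);
    }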
[+] [-] naikrovek|3 years ago|reply
drivers live in the kernel, and prior to Windows Vista, they had the same rights as the kernel. pre-Windows Vista, a driver could easily be malicious and exfiltrate anything it wanted to anywhere it wanted. the driver architecture change fixed this gaping security hole, while still allowing drivers to exist.
drivers needed to be rewritten to accommodate LARGE changes in how they did their work, and the result was that drivers which previously interacted with hardware directly no longer could; they had to ask the kernel to do stuff, and the kernel could say "no." imagine being a driver maintainer and needing to react to this change.
this change often required a complete rewrite of the driver. this is why drivers of the era were so feature-limited.
this architecture change allowed kernel-level DRM drivers to become a thing, but DRM would have happened with or without any changes to driver arch, i assure you.
everyone suddenly needing to rewrite their drivers is what caused drivers to appear limited in the new paradigm. it simply took time to reimplement everything that existed in the old driver model, and people wanted working drivers before everything was implemented in the new drivers.
[+] [-] pjlegato|3 years ago|reply
They're not necessary for consumer apps. Consumer audio applications got "good enough" with mass-produced built-in motherboard "soundcard on a chip" parts that basically replicated the function of the old sound cards at a much lower price point.
If you want to, say, connect 16 microphones at once and record to 16 separate tracks, or you plan to apply a bunch of digital effects and therefore want a much higher sample rate than what your consumer audio chip can do, you can buy an audio interface. [1]
[1] https://www.sweetwater.com/shop/studio-recording/audio-inter...
[+] [-] gonehome|3 years ago|reply
If you want the Shure SM7B you need an audio interface (and probably also a Cloudlifter or a Dynamite inline preamp to bump up the gain).
Lots of streamers, podcasters, and YouTube people use them.
[+] [-] nsxwolf|3 years ago|reply
Now every computer has a little chip that plays back at least CD-quality audio from an effectively infinite pool of storage and RAM. Nobody wants to hear MIDI in their games anymore. I'm not even sure what a better sound card could do for me - reduce line noise, drive high-impedance headphones or something. Boring!
[+] [-] Tangurena2|3 years ago|reply
For games today, I use the audio interface that came with the computer. For dealing with my synthesizers, I use a Scarlett by Focusrite [0].
[0] - https://focusrite.com/en/scarlett
[+] [-] therealplato|3 years ago|reply
I experienced annoying line noise on ASUS, MSI and Dell motherboards; even a $15 USB-to-dual-3.5mm adapter sounded significantly better on those machines.
[+] [-] an_aparallel|3 years ago|reply
This is a massive bugaboo in the audio industry in my opinion.
I have always been a PCI soundcard user - and still am to this day - but industry trends are stopping this. I think a big part of this is due to laptops/iPads and the like becoming more popular devices, as well as a usability thing - companies optimise for successful adoption into a user's system rather than for technical specifications.
I started my DAW with a Terratec soundcard with midi + stereo audio ins and outs roughly 20 years ago.
Fast-forward 7 years - I bought an early USB interface, the NI Audio Kontrol 1, to use with a laptop. I could run everything on it - take it out and about - cool!!
Fast-forward another few years - and I got more serious about audio and bought a Lynx PCIe AES card (now without MIDI) to use with an Apogee Rosetta 800 (8 in / 8 out). Now we're getting there. But - not an all-in-one solution.
In 2022 - surprisingly - the only (?) companies doing full PCIe audio solutions are Lynx and RME. In a fresh session in FL Studio or Ableton, with a buffer size of 64 samples (the lowest), I enjoy latency of 0.72 ms. This can't be beaten by USB. However - that's not a deal breaker for most people, sadly.
It greatly saddens me that audio in general is a second-class citizen with regard to tech advancement. It still blows my mind that the Atari STE, with MIDI built onto its circuit board, still beats a brand-new fully-specced blazing machine for MIDI tightness. We need more development for real-time OSes in the MIDI world.
[+] [-] akx|3 years ago|reply
Anyway, have you actually measured the _true_ latency from when your computer thinks it's sending out a signal to when it comes out of the speakers, comparing a (good) USB interface against your preferred PCIe solution? After all, I have an old Focusrite Scarlett 2i2 gen1, and I too can technically crank the buffer size down to 64 in FL Studio and post about it on the internet...
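One honest way to do that measurement: loop an output back into an input with a cable, play a click, and find the lag that best aligns what you played with what you recorded. Sketch, assuming both buffers were captured at the same sample rate (I/O code omitted, function name mine):

    #include <cstddef>
    #include <vector>

    // Round-trip latency estimate via brute-force cross-correlation:
    // the lag with the highest correlation is the round-trip delay.
    double loopbackLatencyMs(const std::vector<float>& played,
                             const std::vector<float>& recorded,
                             double sampleRate) {
        size_t bestLag = 0;
        double best = -1e30;
        for (size_t lag = 0; lag + played.size() <= recorded.size(); ++lag) {
            double acc = 0.0;
            for (size_t i = 0; i < played.size(); ++i)
                acc += played[i] * recorded[lag + i];
            if (acc > best) { best = acc; bestLag = lag; }
        }
        return 1000.0 * bestLag / sampleRate;   // samples -> milliseconds
    }

Note that a 64-sample buffer is 1.45 ms at 44.1 kHz (0.73 ms at 88.2 kHz) per direction, before the interface's own converter and transport delays - which is exactly why the advertised buffer size understates the true round trip.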
[+] [-] timc3|3 years ago|reply
USB is good enough for a lot of people - it covers the average prosumer - though I am not a fan myself.
MIDI is a totally different subject, but I can run MIDI clock from audio, which is as tight as it gets these days. See the USAMO from Expert Sleepers.
[+] [-] h2odragon|3 years ago|reply
I think another factor is MP3 players and phone audio; people stopped using their computer as the (interface to) media source when other things took that function over for them.
[+] [-] modeless|3 years ago|reply
Add-on sound devices still exist, but they are simple; they don't include extensive hardware acceleration anything like what a GPU has. In fact, if you want hardware acceleration for audio processing algorithms today - really fancy 3D sound propagation, say - GPUs would actually be great at that, and they support digital audio output too.
[+] [-] saltcured|3 years ago|reply
At roughly the same time, more peripheral buses like USB and FireWire were being introduced, which meant that an add-on peripheral did not need to be an internal ISA/PCI card in order to have sufficient bandwidth for rich audio streams. These external devices could also be built with lower noise/interference than boards inside a computer.
And of course, silicon integration kept increasing, so the bundled onboard I/O chip became good enough for many users. So add-on peripherals had to move upmarket or into niche settings. That is a bit like how the iGPU in Intel CPUs killed the market for basic VGA/XGA/etc. graphics cards in office machines.
[+] [-] kllrnohj|3 years ago|reply
These days basic "audio correctness" is readily available from onboard audio, though. Motherboards have gotten much, much better at noise-isolating the audio area, and DACs and amps have generally improved.
[+] [-] magicalhippo|3 years ago|reply
For those that care about noise, you don't want the analog audio anywhere inside the case since it's a horribly noisy place. So you get an external DAC.
[+] [-] theevilsharpie|3 years ago|reply
- During the MS-DOS era, there wasn't really a standard API for sound, so using a cheap, off-brand sound chip (including anything that might be integrated) often meant compatibility problems. Even though it might not necessarily have offered the highest quality sound, Creative's Sound Blaster line was the gold standard for compatibility during this time. Standardized sound APIs have largely eliminated this issue.
- Throughout the '90s, music for games (and a number of other applications) was distributed as MIDI (or MIDI-like) instructions to be rendered by a synthesizer, and the quality of the music was very much dependent on the synthesizer used. The Roland Sound Canvas series was the gold standard at the time (in part due to its quality, and in part because that's what the composers themselves used), but it was very expensive and out of reach for the mass market. Software synthesizers were either too slow or their quality sucked. That gave sound card manufacturers like Creative an opportunity to offer higher-quality hardware synthesizers on their sound cards than what cheap/integrated cards could do. These days, most audio is PCM, and CPUs are perfectly capable of high-quality software synthesis (a minimal sketch follows at the end of this comment), so hardware synthesis has become a non-issue and modern consumer sound hardware doesn't even have hardware synthesis capabilities anymore.
- During the '00s, sound cards began to offer accelerated environmental and positional audio (e.g., Aureal A3D, Creative EAX), which games quickly adopted to improve the sense of immersion. However, changes in the Windows audio architecture introduced with Windows Vista broke this functionality without a replacement. Advances in CPU hardware have since allowed this type of processing to be done on the CPU (e.g., X3DAudio, OpenAL Soft) with acceptable performance.
In the current era, we do have dedicated soundcards, although not in the form of PCIe add-in boards. External DACs (either dedicated USB, or integrated into a display or AV receiver) are popular, as are the DACs used by wireless/USB headphones. Also, there has been some work done to utilize the computational capability of GPUs for real-time audio ray tracing.
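To the hardware-synthesis point above: rendering a note to PCM in software is now trivially cheap, which is the whole reason those chips disappeared. A toy sine "patch" as a sketch (names mine, illustrative only):

    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Render one decaying sine note to 16-bit PCM -- the kind of work
    // an OPL3 or wavetable chip once did in dedicated silicon.
    std::vector<int16_t> renderNote(double freqHz, double seconds,
                                    double sampleRate = 44100.0) {
        std::vector<int16_t> pcm(
            static_cast<size_t>(seconds * sampleRate));
        for (size_t n = 0; n < pcm.size(); ++n) {
            double t = n / sampleRate;
            double env = 1.0 - t / seconds;   // linear decay envelope
            double s = std::sin(2.0 * 3.141592653589793 * freqHz * t);
            pcm[n] = static_cast<int16_t>(32767.0 * env * s);
        }
        return pcm;
    }

    // renderNote(440.0, 1.0) is one second of a fading A4.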
[+] [-] ksec|3 years ago|reply
There was also Aureal. Both Creative and Aureal had their own vendor-specific APIs, trying to create a moat like 3dfx's Glide, but both failed. And then Realtek took over.
Creative could have competed in onboard audio as well. But they were too worried about losing their Sound Blaster revenue, so they somehow diverged into other things like GPUs (3Dlabs), MP3 players, speakers, etc. And every single one of them failed.
If you are looking for modern audio engineering, you could look at the PS5. But a powerful DSP isn't exactly rocket science anymore. A lot of the improvement has to do with software.
Creative used to be the pride of Singapore. It is sad the company was badly managed and never made the leap to the next stage.
[+] [-] tenebrisalietum|3 years ago|reply
This is also around the time it started to be common for pre-built systems to integrate functionality into the motherboard, such as VGA, audio, USB, and in some cases even AGP video all as part of a chipset.
The peak of PC audio probably matches the peak of the "HTPC" wave that happened in the first half of the 2000's - PCs designed to be put under your TV and replace your stereo.
But also, laptops started getting cheaper and more popular as the late 90's turned into the 2000's and beyond - where integration of components was even more valued. Then smartphones started to take over in the 2010's.
The culture is different now. These days young people don't have stereos anymore; they might at best have a TV soundbar, some really good wireless speakers, or a couple of Bluetooth speakers, and the phone is the centerpiece of the personal audio experience now.
Hi-fi that isn't dedicated to making your car rattle or to being blasted at 500 W per channel over a bar/club PA speaker is dead.
Desktop PCs are for businesses which need only good enough audio for business purposes, and gamers who probably want to spend money on a GPU over audio.
[+] [-] rickdeckard|3 years ago|reply
It forced the dedicated soundcard vendors to justify the add-on price by pushing features like multichannel output, surround sound codecs, hardware controls, etc., but none of those features were of mainstream interest.
Total sales volume for dedicated soundcards dropped, economies of scale shrank, prices had to increase, pushing the products even further into the niche...
[+] [-] mmastrac|3 years ago|reply
If you don't like the DAC in the headphones, you can also find a high-quality USB DAC and use the audio cable from there.
[+] [-] anigbrowl|3 years ago|reply
If you're serious about audio you just plug a cable into your breakout box and have your interfaces, converters, and preamps there. Your sound hardware can be anything from pure I/O to an elaborate instrument under computer control. You can do audio synthesis and compositing on the CPU, the GPU (not so different from a DSP), or external hardware.
Soundcards are only 'gone' in the sense that PCI cards are less important because many people use laptops and the audio built into motherboards is more than Good Enough for everyday purposes.
[+] [-] Merad|3 years ago|reply
And while 3D audio is/was a cool concept, most people don't have a sound system that will really take advantage of it. Even most "serious gamers" I know use headphones or stereo speakers... now that I think about it, I'm pretty sure I'm the only person in my friend group with a 5.1 speaker setup on my PC.
[+] [-] oneplane|3 years ago|reply
There is no point to using an add-in card if the facility is now on the main board and can do the task to the user's wishes.
The same thing can be said about many previously modular components, where more and more is now simply a function of the main board itself. Take all the legacy I/O that used to be various chips, often on various add-in boards: it was all condensed into a single Super I/O chip that can do all of it at a fraction of the size, cost and energy usage.
A lot of peripherals used to be implemented in separate chips and sometimes even discrete logic. If we were to try to do that today we'd either have to cut 90% of the currently available standard features or make the mainboard ten times as big to be able to implement it the old school way.
[+] [-] fortym2|3 years ago|reply
For example if you produce music, you probably are good with an external USB audio interface like a Focusrite Scarlett.
[+] [-] spiffytech|3 years ago|reply
1) Most people are happy with good enough. To most people's ears, speaker quality makes a bigger difference than the audio output hardware, and people already settle there. Furthermore, when iTunes was a big deal, it turned out people got so accustomed to low bit rates and mediocre equipment that they thought it sounded better than the good stuff, because that's how they expected their music to sound.
2) With most computing moving to laptops and then to mobile, people generally don't have a choice about the audio processing technology inside their computer.
[+] [-] peteforde|3 years ago|reply
For what it's worth, if you have a small discretionary budget, I would recommend a "top of the low-end" DAC to anyone who listens to a lot of music. I "did my own research" and concluded that for me, the Topping D10s USB DAC was the correct amount of gadget. It has RCA outputs and supports 384kHz audio, which needs to be enabled in your sound settings.
When I got it set up, it was as if my previously disappointing desk stereo speakers and preamp combo took a great sigh of relief and the sound opened up. Everything sounds more defined, I can hear where the instruments are positioned in the soundstage, and I am now one of those people who appreciates Bandcamp allowing FLAC downloads. For me, this was worth CAD$139.
https://www.amazon.ca/gp/product/B08CVBKHFX/ <- currently "unavailable" but likely easy to locate via search
[+] [-] Dracophoenix|3 years ago|reply
Also, I'm certainly a fan of your romplers. Now if only I had the discretionary budget to find a couple of those in working order.