Parallelization is not the only reason to pursue specialized hardware [0]. The real benefit of specialized hardware is that it can do one thing very well. Most of what a CPU spends its energy on is deciding what computation to do, not doing it. If you design a circuit that only ever does one type of computation, you can save vast amounts of energy.
[0] It's not even a particularly good reason. The only reason we don't have massively parallel general purpose CPUs is how specific the problems that can benefit from them are. Even then, modern GPUs are pretty close to being general purpose parallel processors.
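To make the energy point concrete, here's a toy Python sketch (purely illustrative, not a hardware model): a "general purpose" evaluator pays a decode/dispatch cost on every operation, while a "fixed function" path computes the one thing it was built for directly.

```python
def run_general(program, x):
    """Decode and dispatch each instruction -- the 'deciding what to do' work."""
    ops = {"inc": lambda v: v + 1, "dbl": lambda v: v * 2, "sq": lambda v: v * v}
    for opcode in program:
        x = ops[opcode](x)  # table lookup + indirect call on every step
    return x

def run_fixed(x):
    """A 'circuit' that only ever does inc, dbl, sq in that order: no decoding."""
    return ((x + 1) * 2) ** 2

assert run_general(["inc", "dbl", "sq"], 3) == run_fixed(3) == 64
```

Both paths compute the same result; the fixed path just skips all the decision-making, which is where a real fixed-function circuit saves its energy.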
One other item for that list is packet-offloading for networking cards. That is, taking the work of checksum calculation, and even the wrapping/unwrapping of data (converting from streams to/from packets), and pushing that into the NIC's hardware.
I was also thinking about including the work that hard-drive controllers do (like checksumming, and handling 512/4K logical/physical sectors). The difference is that, for NIC offload, the kernel already has that functionality, whereas the kernel never does the hard-drive controller's work itself.
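For a sense of what checksum offload saves, here's a sketch of the RFC 1071 ones'-complement checksum that a NIC with offload computes per packet (simplified; real NICs do this at line rate in dedicated logic):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

# The worked example from RFC 1071 itself:
assert internet_checksum(b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7") == 0x220D
```

A handy property: recomputing the checksum over the data with the checksum appended yields zero, which is how receivers verify packets.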
The list might grow slowly, but the last item on it - ML - grows like crazy right now. It's not unreasonable to expect that in 20 years the vast majority of all computation (from tiniest IoT devices to largest supercomputers) will be running ML models (from simplest classifiers to whole brain simulations).
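At the "simplest classifiers" end of that spectrum sits the perceptron, which is also what the original perceptron hardware implemented directly. A minimal sketch:

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    """Classic perceptron learning rule for two inputs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred           # 0 if correct, else +/-1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND, a linearly separable toy problem.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
assert all((1 if w[0]*x1 + w[1]*x2 + b > 0 else 0) == t for (x1, x2), t in data)
```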
It's worth noting that all four of your examples routinely run on GPUs.
3D graphics? Check (freebie).
Fluid dynamics? Check - supercomputers increasingly get most of their compute from GPUs.
Cryptography? Check - this is the only one that really got specialized hardware.
Machine learning? Check.
So "large quantities of special purpose hardware" wasn't even used for these. Just large quantities of general purpose parallel processors, known for historical reasons as "graphics processing units."
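On the cryptography entry: the inner loop that Bitcoin mining ASICs cast into silicon really is tiny. A simplified sketch (real miners use the actual 80-byte block header format and search the nonce space many orders of magnitude faster):

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(SHA-256(header || nonce))
    has at least `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(header + nonce.to_bytes(4, "little")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"block header", 8)   # 8 zero bits: ~256 tries on average
```

Nothing in that loop needs a general purpose machine, which is exactly why it went to ASICs first.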
Isn't this more appropriately described as "The Era of General Purpose Microprocessors Is Ending"?
A general purpose computer is the entire machine, reprogrammable to perform a variety of tasks, hence general-purpose. While I do think it's potentially coming to an end as well, I think it's doing so for entirely different reasons.
The general purpose computer has become a somewhat niche device in that the public is increasingly interested in consumer-oriented appliances which just happen to contain microprocessors, like phones and tablets. They're often locked-down and only capable of running a blessed subset of applications available from select suppliers through a walled-garden.
That's threatening the demise of the general-purpose computer as we know it. I'm genuinely concerned that we may one day find ourselves limited to very expensive niche machines, produced in low volumes, with general-purpose capabilities targeting STEM-oriented uses. I hope I'm wrong here, but we're already seeing evidence of young people not learning how to type because they've never used a keyboard, so it doesn't seem impossible.
The linked article is talking about processors, not computers.
The general-purpose computer was something most of the public never wanted. It's just that for a while, starting in the mid 90's, you needed a computer to use the Internet, "AOL", or email in any way.
> I'm genuinely concerned that we may find ourselves one day limited to very expensive niche machines produced in low volumes having general-purpose capabilities targeting STEM-oriented uses.
General purpose computers belonged to tech nerds in the 70's, and to them plus professionals/creatives through most of the 80's, and it sounds like they're going back to that crowd. Honestly I can see benefits to this; it was nice to have dirt cheap hardware for a while, but maybe things will get back to being more modular and expandable again.
Regarding general purpose processing, I think RISC-V is going to save us here and keep a general purpose microprocessor around as long as anyone wants.
While they make many valid points, the authors put GPUs in the specialized processor category to seal their argument. That is technically true, however the trend in GPUs is and has always been toward more general computing, and most computers have GPUs in them. I expect to see commercially viable CPUs with SIMD units (Intel tried and bailed, maybe they'll try again...) as well as GPUs with virtual & shared memory any day now.
GPUs already have virtual addressing and the ability to share memory with the CPU and other GPUs in at least some circumstances. What they don't have is automatic page faulting to persistent storage or fully shared memory with the CPU by default but that is for performance reasons. For most applications of GPUs performance is too important to want either behavior by default.
All GPUs are basically full of giant SIMD units and the programming models increasingly expose this. They just don't have a common standard ISA.
> I expect to see commercially viable CPUs with SIMD units (Intel tried and bailed, maybe they’ll try again...)
I don't think you meant what you wrote here. ARM(/AArch64), PowerPC, MIPS, RISC-V, SPARC, and x86 all have SIMD unit ISAs on them. In fact, on x86-64, I'm annoyed because scalar floating point uses the vector registers anyways, so just glancing for use of %xmm is insufficient to tell you if your code got vectorized.
What Intel tried and axed was the many-dumb-x86-cores model of Larrabee and the Xeon Phi stuff. It's arguable that a lot of that failure was due to Intel stupidly asserting that you wouldn't need to rewrite your code to make it run on that sort of architecture.
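For readers unfamiliar with the SIMD model under discussion, a conceptual sketch in pure Python (the lane width of 4 is arbitrary; a real vector unit does each `simd_add` in a single instruction):

```python
LANES = 4  # e.g. 4 x 32-bit lanes in a 128-bit SSE/NEON register

def simd_add(a, b):
    """One 'vector instruction': the same operation on all lanes at once."""
    assert len(a) == len(b) == LANES
    return [x + y for x, y in zip(a, b)]

def vector_sum_arrays(a, b):
    """Add two long arrays a vector at a time, like compiler vectorization."""
    out, i = [], 0
    while i + LANES <= len(a):
        out.extend(simd_add(a[i:i+LANES], b[i:i+LANES]))  # vectorized body
        i += LANES
    out.extend(x + y for x, y in zip(a[i:], b[i:]))       # scalar remainder loop
    return out
```

The throughput win comes entirely from amortizing one instruction's worth of decode/dispatch over all the lanes, which connects back to the energy argument at the top of the thread.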
Perhaps the era of general purpose computing will come to an end. But what I see is a shift away from single CPUs supporting thousands of complex instructions to GPUs with simpler instruction sets capable of running calculations in parallel. It's more of a shift from serial to parallel computing than a shift from general purpose to special purpose computing.
GPUs typically have instruction sets of similar complexity to many CPUs, plus additional specialized instructions related to their SIMD model, plus a bunch of specialized hardware for particular functionality. I don't really think it's accurate to describe GPUs as having simpler instruction sets than CPUs.
Plus, most non-tech people don't need anything high end, so the percentage of Chromebooks, Celerons, etc. goes up.
Then, on the server side, a lot of the chip sales are going direct to the FAANG group, rather than to someone like Dell or HP.
Those two things do take a lot of wind out of better, generally available, general purpose devices for regular people and companies. A shrinking market doesn't usually improve quality.
"Special Purpose" can mean so many things that it really depends on the purpose to tell if they're going to be replaced.
For example, traditional RAID controllers were replaced with software-based solutions once there was surplus compute in the multicore era. If your workload can be viewed as "offload the CPU", it's only a matter of time before general purpose CPU cores are plentiful enough that the need to offload goes away.
Pure compute (be it on traditional CPUs or the vector variants that GPUs/TPUs offer) and latency-sensitive tasks (some networking, plus FPGA- and ASIC-accelerated work, etc.) are the only areas where non-general-purpose hardware can maintain a long term foothold.
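The RAID example is a good illustration of how little compute the offloaded work actually needs. A sketch of RAID-5-style parity in software:

```python
def parity_block(blocks):
    """XOR all data blocks together to form the parity block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def rebuild(surviving_blocks, parity):
    """A lost block is the XOR of the parity with every surviving block."""
    return parity_block(list(surviving_blocks) + [parity])

data = [b"disk0...", b"disk1...", b"disk2..."]
p = parity_block(data)
assert rebuild([data[0], data[2]], p) == data[1]   # recover the lost disk
```

A few XORs per byte is trivial for a modern multicore CPU, which is exactly why the dedicated controller lost its reason to exist.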
The article says developing the TPU was “very expensive for Google” at tens of millions of dollars. That’s between one one-hundredth and one tenth of one percent of Google’s 2018 revenue. Not expensive in my book at that scale.
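The arithmetic checks out (assuming Alphabet's 2018 revenue of roughly $136.8B, a figure from memory rather than from the article):

```python
revenue = 136.8e9                 # assumed Alphabet 2018 revenue, USD
for cost in (30e6, 80e6):         # "tens of millions" for the TPU
    share = cost / revenue * 100  # cost as a percentage of revenue
    print(f"${cost/1e6:.0f}M is {share:.3f}% of revenue")
# Both land between 0.01% and 0.1%, i.e. between one one-hundredth
# and one tenth of one percent, as stated.
```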
It's ending because we're approaching maximum transistor density. The market demands increasingly faster computers, and if we're reaching the limit of how many transistors we can cram into a single CPU, ASICs seem like a logical evolutionary step.
“That’s mainly because the cost of developing and manufacturing a custom chip is between $30 and $80 million.”
I’ve heard figures an order of magnitude smaller for ARM. If so, the processor market needs to move beyond the Intel/x86 market corner before generalizations about cpu/gpu may be made.
The sound card had its own MIDI and sound-effects chips, for example. Now it's reduced to one chip, if that, on an AC97-capable motherboard.
Modems had their own discrete processor to handle communication over the phone line. Now again reduced to a WinModem chip and/or a NIC or WiFi chip.
General Purpose Computing is ending for another reason: Apple controlling its entire supply chain, and dictating what computing can be used for. If market developments continue along this line and competitors follow suit, then soon buying a PC for your research will cost you a lot more.
Based solely on the title, I assumed this article was going to be about Jeff Bezos. We're entering a brave new world where all compute is rented from Bezos and can only be used for the furtherance of his agenda. The recent tabloid scandal kinda speaks to the underlying problem. When given documentary evidence of a tryst between Bezos and a married woman, these people did the right thing and tried to blackmail him. Bezos somehow managed to turn this into a story about his endless accomplishments and his courage in the face of adversity! Bezos isn't even competing against other companies anymore because that would be too easy. Bezos is actually competing against the rest of humanity now. We're entering the era of Bezos Purpose computing.
Animats | 7 years ago:
• 3D graphics - hence GPUs.
• Fluid dynamics simulations (weather, aerodynamics, nuclear, injection molding - what supercomputers do all day.)
• Crypto key testers - from the WWII Bombe to Bitcoin miners
• Machine learning inner loops
That list grows very slowly. Everything on that list was on it by 1970, if you include the original hardware perceptron.
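The fluid-dynamics entry largely boils down to stencil loops like this 1D diffusion step, where every cell updates from its neighbors independently, which is why it maps so well onto parallel hardware. A toy sketch:

```python
def diffuse(u, alpha=0.1, steps=1):
    """Explicit 1D heat-diffusion stencil with fixed boundary cells.
    Each interior cell's update reads only its two neighbors, so all
    cells in a step could be computed in parallel."""
    for _ in range(steps):
        u = [u[0]] + [
            u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1])
            for i in range(1, len(u) - 1)
        ] + [u[-1]]
    return u

u = [0.0] * 5 + [100.0] + [0.0] * 5   # a hot spot in the middle
u = diffuse(u, steps=50)              # heat spreads outward over time
```

Real CFD codes are 3D with far more physics, but the access pattern, a regular grid of independent local updates, is the same, and it is exactly the shape of work GPUs and supercomputers are built for.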
swagasaurus-rex | 7 years ago:
https://herbsutter.com/welcome-to-the-jungle/
votepaunchy | 7 years ago:
The end of Moore’s Law means no additional transistors and therefore no additional cores without simplifying or otherwise reducing the architecture.
ianai | 7 years ago:
One source, not fully vetted: https://www.anandtech.com/show/7112/the-arm-diaries-part-1-h...