Moore's law itself may be dead, but there is still potential left to explore. Right now, CPUs are mostly two-dimensional, with only a few layers stacked on top compared to their extent in the other two dimensions.
Compare this to a human brain, which also has a sulcated, layered structure for its grey matter, but whose white-matter connectome turns it into a fully three-dimensional object. We can't produce such a thing by technological means yet.
The CPUs we produce have to be deployed in datacenters and computers at some distance from each other, mostly for heat-dissipation reasons. We can't just stack N CPUs onto and next to each other until we reach an object the size of a brain; it would not be practical to cool that thing. Compare this to a human brain, which requires far less energy (about 20 watts on average).
Moore's law is dead, but it's about the size of transistors. Human axons have diameters of hundreds of nanometers, and even the synaptic cleft between neurons is approximately 20 nanometers wide. Compare this to the 5 nanometers we have now. We don't need size improvements to model brains; we need improvements in power consumption, heat dissipation, and electrical losses. CPUs are also still very expensive, so we need improvements in manufacturing them more cheaply as well. Neither of those really requires node sizes to shrink further.
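To put rough numbers on that comparison, here is a minimal back-of-envelope sketch in Python. The 20 W brain figure and the ~20 nm cleft are the numbers quoted above; the 200 W server-CPU package power and the 500 nm axon diameter are assumed ballpark values, not figures from this thread:
    # Rough, illustrative arithmetic only.
    brain_power_w = 20          # whole human brain, average (quoted above)
    server_cpu_power_w = 200    # assumed high-end server CPU package power
    axon_diameter_nm = 500      # "hundreds of nanometers" (assumed midpoint)
    synaptic_cleft_nm = 20      # quoted above
    node_label_nm = 5           # current "5 nm" marketing node
    print(f"synaptic cleft ~= {synaptic_cleft_nm / node_label_nm:.0f}x the node label")
    print(f"axon diameter  ~= {axon_diameter_nm / node_label_nm:.0f}x the node label")
    print(f"one server CPU ~= {server_cpu_power_w / brain_power_w:.0f}x a brain's power budget")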
Sadly, there's a different law, Dennard scaling, which was precisely about power consumption, and it has been dead for about 13 years. https://en.wikipedia.org/wiki/Dennard_scaling
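For context, a minimal sketch of what Dennard scaling promised and why its end bites, assuming the textbook dynamic-power relation P ~ C * V^2 * f and the classic idealized scaling factors (not figures from the article):
    # Classic (idealized) Dennard scaling as a toy calculation. In the Dennard
    # era a linear shrink by factor k also scaled C and V down by k and f up
    # by k, so power density stayed flat; once V stopped scaling, it doubled
    # per shrink instead.
    def dynamic_power(c, v, f):
        return c * v * v * f
    k = 1.4                          # one linear shrink step (~2x density)
    p0 = dynamic_power(1.0, 1.0, 1.0)
    density_gain = k * k             # transistors per unit area after the shrink
    p_dennard = dynamic_power(1.0 / k, 1.0 / k, k)   # voltage scales too
    p_post    = dynamic_power(1.0 / k, 1.0,     k)   # voltage stuck
    print(f"power density, Dennard era:      {p_dennard * density_gain / p0:.2f}x")  # ~1.0x
    print(f"power density, post-Dennard era: {p_post * density_gain / p0:.2f}x")     # ~2.0x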
The power-dissipation problem of current chips (performance is almost always power/thermal limited) is closely tied to their 2D structure.
Communication is one giant problem: 3D structures could allow for far better colocation of connected compute elements, so we might not have to spend most of our power budget feeding data in and out of compute clusters.
Scaling in 2D does make this problem easier (in some ways!), much as adding a 3rd dimension would too.
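Luckily it looks like humanity has finally figured out how to do it at scale /sarcasm/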
> Right now, CPUs are mostly two-dimensional, with only a few layers stacked on top compared to their extent in the other two dimensions.
Moving away from that would still be increasing transistor density, and thus would not be rewriting or beating Moore's law; it would still be Moore's law applying.
Stacks of thin silicon chips, 100 layers or so, with microfluidic channels between the layers or in the layers, sound like the way of the future to me.
With the current semiconductor technology, I doubt that 3D is an option for the foreseeable future. You would need a whole new implantation (or doping) technology for that.
The brain's architecture is not the be-all and end-all of computing. Weather prediction, for example, is just about fast math; a brain-like architecture is useless there.
I, for one, welcome the end of our Moore's Law overlords.
This race to increase various aspects of performance has long let the field of architecture, and the software on top of it, be lazy about optimization. When there's no more juice to squeeze from that fruit, I know there is much more to be found elsewhere.
/I can't wait for my non-von neumann SoC FPGA hybrid to use for...I'll figure it out then I'm sure.
Do you think that "FLOPS/OPS/whatever per year" will slow down or speed up now that Moore's law is ending? I.e., do you think that the hardware/architecture/software combo will accelerate more now, or less?
If you think it will accelerate more while hardware gains drop to zero, why were we leaving these absolutely massive gains on the table? This means that, if a new processor gave us double the speed, we could get more than double just from software alone and get the hardware doubling on top of that.
If you think it will decelerate, why are you happy Moore's Law ends? We're worse off than we were before, even though we aren't "lazy" now.
This approach seems to me a bit like saying "Oh I'm happy I lost my job, now I can finally make money by looking for pennies on the ground all day, which the job made me too lazy to do".
It would at least be nice to freeze the hardware for 10 years and give software time to learn how to fully extract what we have. Most large scale software ends up a factor of 10x to 1000x slower than it needs to be.
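As a purely illustrative example of that kind of headroom (a toy micro-benchmark, not anything from the article), rewriting one interpreted loop as a vectorized call already lands in the 10x-100x range on ordinary hardware; exact ratios vary by machine:
    # The same reduction as an interpreted Python loop vs. one NumPy call.
    import time
    import numpy as np
    data = np.random.rand(5_000_000)
    t0 = time.perf_counter()
    total = 0.0
    for x in data:                          # interpreted, one element at a time
        total += x * x
    t_loop = time.perf_counter() - t0
    t0 = time.perf_counter()
    total_vec = float(np.dot(data, data))   # compiled, SIMD-friendly
    t_vec = time.perf_counter() - t0
    print(f"loop: {t_loop:.3f}s  vectorized: {t_vec:.4f}s  ~{t_loop / t_vec:.0f}x faster")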
Me too. As long as reducing feature size is possible, it'll never make sense to try anything really new. Nobody would have ever built a car if horses halved in price and doubled in performance every two years.
> I, for one, welcome the end of our Moore's Law overlords.
I think if you fully understood the implications of this, you wouldn't be so sanguine.
We're already seeing the effects of a stalling out of process improvements.
Starting a couple of years ago, there were articles by enthusiast PC builders who were buying used Xeon processors on eBay and building new gaming PCs around them, with performance comparable to the i7s available at the time. Just to put some round numbers on that, you had people spending $100 or so on a used CPU (none of that money goes directly to Intel) vs. spending nearly $1000 on a brand-new i7.
You can find other instances where a company is competing with their own older, used products... it isn't fun for the company in question.
What's going to happen to Intel's stock price when that sort of thing becomes more common? The price of their stock (and that of all other tech companies) is based on future value, unlike that of companies that pay dividends but otherwise aren't expected to expand significantly.
If investors come to accept the fact that Intel can't really make a chip next year that is better than last year's chip (the kind of improvement that drives hardware and software sales for the entire PC industry), their stock will crash, as will all the others in the tech industry.
New hardware (with more capabilities at a reasonable price) drives new software / new applications. If next year's phone isn't better than last year's phone (assuming you can replace the battery that is losing capacity), that will kill valuations in the mobile segment.
Some tech, like stacking higher in 3-D on the chip, will help applications like NAND flash storage that aren't heat-dissipation limited. But even there, the cost per bit stored won't be going down at nearly the rate we're used to seeing.
A tech crash across the board will tank the entire world economy as well, because so much of our productivity improvement is based on computer technology.
This is a massive, massive upheaval we're looking at. Unless we switch to new technology that can keep the productivity party going for a while...
"After all, there aren’t many numbers left between 5 and 0."
Math purists may roll their eyes at this statement (clearly, even if you are obsessed with integers, you can just switch to picometers), but I wonder if it is time to start counting atoms. The diameter of a silicon atom is about 0.21 nanometers, so a 5nm process is dealing with features only about 20-30 atoms wide.
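That estimate is just one division; a tiny sketch using the 0.21 nm figure quoted above (keeping in mind, as noted elsewhere in the thread, that "5 nm" is a naming convention rather than a literal feature size):
    # The arithmetic behind the ~20-30 atom estimate.
    si_atom_diameter_nm = 0.21
    feature_nm = 5.0
    print(f"~{feature_nm / si_atom_diameter_nm:.0f} silicon atoms across a 5 nm feature")  # ~24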
Wow, this puts things in perspective for me. I knew 5nm was small but I didn't really realize how small until we started getting into "I-have-more-dollars-in-my-wallet-than-this-thing-is-atoms-wide" territory.
The node label (e.g. "5 nm") no longer represents a physical dimension and hasn't for some time -- it's simply a continuation of a naming convention. In fact, while we used to scale down transistors by simply reducing the gate width (where the naming convention originated) we now take far more elaborate steps to do so.
What ways to get faster CPUs remain after this? Nanoscale vacuum-channel transistors [1] seem promising, as they can work at terahertz frequencies, but they do not look anywhere close to production.
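[1] https://en.m.wikipedia.org/wiki/Nanoscale_vacuum-channel_tra...
Nanoscale devices seem like an interesting development, but all of these technologies seem like they're a decade or more away from mainstream use.
[1] http://www.sandia.gov/%7Ejytsao/WCS.pdf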
I think Moore's law has spoiled a generation on what technology can do. It is almost ridiculous how much better lithography has gotten in the last few decades, and IMO it has resulted in a lot of people thinking that exponentials are a) common and b) automatic. Neither of these is true.
There are few, if any, other exponentials in the history of humanity that worked as consistently, for as long, with as massive an impact on technology, and TBH I'm doubtful that we will see anything like it again.
Moore's law worked because loads of engineers and scientists made it their life's work, but everyone watching seems to think it just happened.
There was one magical exponential that changed the world, and it's coming to an end.
There's certainly a lot that can be done in architecture and software, but most applications probably aren't worth the effort.
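Spoiler: the way out is "Domain Specific Architectures".
https://www.youtube.com/watch?v=3LVeEjsn8Ts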
The problem with this is the likely fragmentation of the programming languages and tool support.
E.g., GPUs have been around for ~25 years and there's still no decent way to portably program them. We have Metal, Direct3D, OpenGL, OpenCL, CUDA, Vulkan, ... all of which have small market shares. It's markedly worse than where microprocessors were at 25.
(Not even getting into quality issues like buggy drivers and the closed, proprietary nature of most of the SW stacks.)
The end of Moore's law doesn't bother me too much from a practical point of view. My phone and computer are about as fast as I need them to be. Why it does bother me, though, is how it's an inescapable reminder that there are hard limits to everything, and sooner or later we're going to hit them. The fact we're hitting this limit so fast is damaging my hope for a whiz-bang sci-fi future. I didn't expect to be alive when we invented strong AI and perfect 3D projection systems and retinal display screens and all the rest, but I did hope we eventually would. Slamming into the end of per-volume and per-watt performance before the year 2100 makes me think maybe those things will never happen at all.
It's worth noting that we hit the wall only with respect to silicon-based transistor-packing wafer technology. There are other alternatives (ex. SiGe instead of just Si, memristors, photon instead of electron circuits, etc.) that could allow significant increases in compute capacity. They just haven't been explored or invested in to a large extent because it's been relatively cheap with proven ROI to follow traditional manufacturing paths to their maximum. Other technologies like Li-ion batteries are in the same boat - there are big advancements yet to be made in the broader space in general, but the specific processes, chemistries, etc. being used today are nearing their physical limits.
I'm personally still hopeful for a sci-fi future, just based on a series of ramps and plateaus as we progress through the limits of each specific technology.
Don't despair. We still have economic, energy, and material science boundaries to smash.
Just imagine your current phone costing pennies, compostable, and easily chargeable anywhere. Imagine never again worrying about file size or storage capacity.
We've got a long way to go and I'm very excited about what the future holds.
A bit of a side topic, but can you imagine if there were fewer people in the world? Computing power would never improve because it would be too expensive.
I wonder what other technology could be done, if only there were more people (i.e. larger market), but is too expensive right now.
People always like to talk about the downsides of population, but the high tech world we live in now could not exist otherwise.
These lithography machines are completely wild: the part that fails the most spits out a tiny droplet of molten tin. That's hit with a laser, and the excited electrons emit what's called extreme ultraviolet (EUV) which is very close to the softest of X-rays. Most of the power released is wasted bouncing off a series of mirrors, and then wild stuff happens as these energetic photons hit the semi-optional mask cover (pellicle), mask, and then chip: https://en.wikipedia.org/wiki/EUV_lithography
Probably because it's both patently untrue and still informative in a way. It would be more precisely stated as "there aren't many atoms left between 5(nm) and 0". If my math is right, 5nm is only a width of about 25 silicon atoms.
https://sv.wikipedia.org/wiki/Teknologisk_singularitet#/medi...
In this chart, the show is far from over. It even starts to look like faster-than-exponential progress. The latest data points show GPUs instead of CPUs, but this is most likely fine, since they are the suitable technology for easily parallelized workloads like machine learning applications.
I kind of welcome the death of the "just wait for hardware to improve" approach to optimization. I find computers interesting because the field is so fresh and you have to figure out so many things on your own and through communication with people who are still alive. Thanks to the death of Moore's Law, it's going to stay that way for a while longer, and the end result is going to be more varied.
Nice article with a bit of low-level detail. We are indeed hitting the problem that the light's wavelength is no longer small enough for us, hence using ~13.5nm EUV with multiple patterning to get to 5nm. Only two foundries are left in the entire world that can do such a process (Intel and others are years behind).
It's not clear anyone other than Intel is trying, but Intel's situation is much more complicated. They aimed for a "10nm" node that would stay purely optical (no EUV) for its whole life, more aggressive than TSMC's initial "7nm" node, and failed catastrophically. There are clearly top-level management issues there, a long-standing problem for Intel, and it's extremely worrying that they were allowed to destroy a generation of the company's crown jewel.
But that doesn't tell us much about their "7nm" node, which is very roughly equivalent to Samsung's and TSMC's "5nm" nodes, except that those companies have a lot more real-world EUV experience (though it's not always good to be a pioneer). Intel could conceivably get back in the game in about the same time frame as those two companies exit "5nm" risk production; we just don't know. All we know is that they're buying EUV machines and installing them in multiple fabs.
I keep hearing people talking about them and their benefits over the binary system.
Who are these people?