With all the over-the-top, sensationalized news reports, I hadn't come across understated humor in a long time, and then I saw this in the article! Nice one!
>>IBM states that the technology can fit ’50 billion transistors onto a chip the size of a fingernail’. We reached out to IBM to ask for clarification on what the size of a fingernail was, given that internally we were coming up with numbers from 50 square millimeters to 250 square millimeters.
The author seems to be a professor at Oxford. I took the trouble to look him up because I (correctly) guessed this was typical, dry British irony. The academic link also came as no surprise.
The table in the article suggests to me that instead of this fictional "feature size", we could use the achievable transistor area as a more meaningful measure of process scale.
IBM achieved 50B transistors in 150mm^2, for a per-transistor area of 3000 nm^2.
TSMC's 5nm process (used by Apple's M1 chip) apparently achieves a transistor area of 5837 nm^2, while Intel's 10nm is lagging at roughly 10000 nm^2.
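(For anyone who wants to sanity-check those per-transistor area figures, here's a quick back-of-the-envelope sketch; the density inputs are rough published estimates corresponding to the areas quoted above, not official vendor numbers.)

    # Average area per transistor, from transistor count and die area.
    NM2_PER_MM2 = 10**12  # 1 mm = 1e6 nm, so 1 mm^2 = 1e12 nm^2

    def area_per_transistor_nm2(transistors, die_area_mm2):
        return die_area_mm2 * NM2_PER_MM2 / transistors

    print(area_per_transistor_nm2(50e9, 150))   # IBM demo chip: ~3000 nm^2

    # The same figure derived from an estimated logic density (MTr/mm^2):
    def area_from_density_nm2(mtr_per_mm2):
        return NM2_PER_MM2 / (mtr_per_mm2 * 1e6)

    print(area_from_density_nm2(171.3))         # TSMC N5 estimate: ~5837 nm^2
    print(area_from_density_nm2(100.8))         # Intel 10nm estimate: ~9921 nm^2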
Yes. Transistor density is a good measure that translates into something meaningful, and it can be compared across different process technologies. Bear in mind, though, that the quoted transistor density is not actually a raw transistor density.
It is a weighted number that combines NAND2 transistors/area and scan flip-flops/area:
Tr/mm² = 0.6×(NAND2 Tr/mm²) + 0.4×(scan flip-flop/mm²)
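(A minimal sketch of how that weighted metric works; the cell densities below are made-up placeholder values, purely to show the weighting, not real process data.)

    # Industry "transistor density" = weighted mix of two standard cells,
    # not a raw count of transistors on a shipping die:
    #   Tr/mm^2 = 0.6 * (NAND2 Tr/mm^2) + 0.4 * (scan flip-flop Tr/mm^2)
    def weighted_density(nand2_tr_per_mm2, scan_ff_tr_per_mm2):
        return 0.6 * nand2_tr_per_mm2 + 0.4 * scan_ff_tr_per_mm2

    # Placeholder inputs: 200 MTr/mm^2 for NAND2 cells, 120 MTr/mm^2 for scan FFs.
    print(weighted_density(200e6, 120e6) / 1e6)  # -> 168.0 (MTr/mm^2)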
It is slightly more complicated than that, unfortunately... although the current nomenclature is more cargo cult than meaningful.
I could produce a physically smaller transistor, with a smaller gate, source, and drain. However, depending on the limitations of my process changes for scaling, I may not actually be able to pack the transistors more tightly. Notionally, the smaller transistor could use less energy, which improves the chip design, even if it cannot be packed more tightly.
There is more than one way to improve a semiconductor at the feature, device, and chip level.
The node naming is a useful convention for the industry because saying something like '10nm' efficiently communicates historical context, likely technological changes, timelines, and other things that have nothing to do with the physical size of the devices on the chips. It's basically a form of controlled vocabulary.
https://old.reddit.com/r/ECE/comments/jxb806/how_big_are_tra...
https://old.reddit.com/r/askscience/comments/jwgdld/what_is_...
I'm the author of the article, but I've also made a video on the topic: https://www.youtube.com/watch?v=DZ0yfEnwipo. It covers the same material, but some people prefer video to written text. It also includes a small segment of a relevant Jim Keller talk on the topic of EUV.
This is an amazing step, and the transistor density chart shows you just how big a deal this is: roughly a third of a billion transistors per square mm. Now, for 'grins', take a piece of paper and make a 2 x 2 mm square on it. Then figure out what you can do with 1.3B transistors in that space. Based on this[1], you are looking at a quad-core desktop processor with a GPU.
Of course you aren't really, because you can't fit the 1440 pins that processor needs to talk to the outside world. But it suggests to me that at some point we'll see these things in "sockets" that connect via laser + WDM to provide the I/O. An octagonal device with 8 laser channels off each of the facets would be kind of cool. Power and ground take-offs on the top and bottom. That would be some serious sci-fi shit, would it not?
[1] https://en.wikipedia.org/wiki/Transistor_count
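(Putting rough numbers on the 2 x 2 mm thought experiment, using the ~333M transistors/mm^2 implied by 50B transistors on ~150 mm^2; a sketch, not an exact figure.)

    # How many transistors fit in a 2 mm x 2 mm square at IBM's demo density?
    density_per_mm2 = 50e9 / 150       # ~3.33e8 transistors per mm^2
    area_mm2 = 2 * 2                   # the square drawn on paper
    print(density_per_mm2 * area_mm2)  # ~1.33e9 -- roughly the transistor budget
                                       # of a quad-core desktop CPU with a GPU [1]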
Moore to the rescue.
But can they keep up with the bloatware curve?
If we cook all the horsepower down to glue, and glue ever more horrible libraries together, when will we reach peak horse-pisa-stack?
I stopped keeping an eye out for CPU advances a decade ago.
If we could have a NN-driven prefetcher that is able to anticipate cache misses from instructions and data, 300 cycles ahead of time, that would be some speedup we all could benefit from, if it found its way into hardware.
https://drive.google.com/file/d/17THn_qNQJTH0ewRvDukllRAKSFg...
> If we could have a NN-driven prefetcher that is able to anticipate cache misses
It's not that critical for a memory prefetcher, because the cache hierarchy is already helping a lot. Most software doesn't read random addresses, and the prefetchers in modern CPUs are pretty good already. Another thing: a prefetcher is not a great application of AI, because the address space is huge.
Branch prediction is another story. CPUs only need to predict a single Boolean value, taken/not taken. Modern processors are actually using neural networks for that: https://www.amd.com/en/technologies/sense-mi https://www.youtube.com/watch?v=uZRih6APtiQ
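(For anyone curious what "a neural network for branch prediction" can look like in practice, here is a toy perceptron predictor in the spirit of the published Jimenez/Lin design; an illustrative simplification, not AMD's actual implementation.)

    # Toy perceptron branch predictor: one perceptron per (hashed) branch PC,
    # trained online against the global taken/not-taken history.
    HIST_LEN = 16
    TABLE_SIZE = 1024
    THRESHOLD = int(1.93 * HIST_LEN + 14)    # standard training threshold

    weights = [[0] * (HIST_LEN + 1) for _ in range(TABLE_SIZE)]  # [bias, w1..wn]
    history = [1] * HIST_LEN                 # +1 = taken, -1 = not taken

    def predict(pc):
        w = weights[pc % TABLE_SIZE]
        y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
        return y, y >= 0                     # (raw output, predicted taken?)

    def update(pc, taken):
        y, predicted_taken = predict(pc)
        t = 1 if taken else -1
        w = weights[pc % TABLE_SIZE]
        if predicted_taken != taken or abs(y) <= THRESHOLD:
            w[0] += t
            for i, hi in enumerate(history):
                w[i + 1] += t * hi
        history.pop(0)
        history.append(t)

    # A branch that is almost always taken quickly becomes easy to predict:
    for _ in range(100):
        update(0x400123, taken=True)
    print(predict(0x400123)[1])              # True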
I think it's the glue that keeps up with the CPU power curve, so to speak. You give devs more RAM and more cycles and they'll find a way to use them with inefficient languages, suboptimal code and shiny UI.
I think it's important to remember that for instance Zelda: Link's Awakening was less than 512KiB in size and ran on a primitive 1MHz CPU.
But at the same time, we have to acknowledge how much better it can be to develop for a modern target. We can decide to waste this potential with bloated Electron runtimes, but we can also leverage it to make things we thought impossible before.
I guess you could cheat and use the area of a thumb nail, but then you could take it a step further and use the area of a horse's fingernail, which is at least 100 times larger...
It really depends on when (if at all) this will arrive at Samsung Foundry. TSMC will have 3nm shipping next year, and is currently still looking at 2024 for its 2nm, which would have 50% higher transistor density than this IBM 2nm.
And I really do support Intel's node renaming, if the rumour is true. I am sick and tired of people arguing over it. It is wrong, but it is the naming the industry has decided to use. Adopt it and move on.
Heh, I remember being roundly mocked here about 10 years ago for disputing the idea that 22nm was the end of the road. 'Maybe 16 or 14nm, but then the laws of physics intervene.' I should have put down a bet.
Interesting table showing millions of transistors per mm^2 there. Does Intel's 10nm really have a higher transistor density than TSMC's 7nm? This could mean some big things coming from AMD moving to 5nm next year!
Yes, Intel's 10nm is similar to TSMC's 7nm. Unfortunately for Intel, they have struggled to produce any non-low-power chips on 10nm. That can be alleviated by not cramming the entire area with transistors. Overall, the "X nm" numbers mean very little.
Not much experience in this space: who would use this patent? I see Samsung and Intel as partners; do they simply use IBM's research with their own manufacturing to produce this?
Also curious whether this development will affect Apple silicon or TSMC's bottom line in the near future.
IBM still makes a lot of money on Power, but at the moment it is a managed decline. Every refresh cycle they get a nice big revenue bump, and they have a pretty large Fortune 500/US defense base that won't be leaving Power any time soon.
Power seems to be more and more out of fashion, and other chips seem to perform way better. They can't be making many of those?
Soon IBM will be the biggest consumer of Power as they continue to move customers to Power on IBM Cloud: https://www.ibm.com/cloud/power-virtual-server
Practically? News from the last few years says: it's vaporware.
Every few years we hear "IBM made the actual smallest node", and then what? The x86 cabal just keeps trashing our lives.
It's possible IBM is making CPUs for secret government projects. But if not, then all that "IBM made X node" news is fraud.
Or something else that changes f* nothing. It just scares potential fab builders.
Probably not; they are not in the business of mass-producing this kind of chip. They don't need to be, since they will most likely own a few really hot patents (and they deserve them).
The article mentions they will team up with Samsung and Intel. If Intel managed to produce that chip at scale, it would really help them, given that they are behind Apple/TSMC and AMD.
Intel should pay them to help get stuff into production, maybe?
Napkin math: 50B transistors/chiplet x 6 chiplets/socket x 4 sockets/motherboard = 1,200B transistors per motherboard.
Not necessarily, and most likely no. The cost of cramming so much into such a small area is not linear; it has been getting more and more expensive. There are also factors like yield that come into play and can potentially drive costs up again.
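(A rough sketch of why yield works against simply cramming everything onto one huge die; the wafer cost and defect density below are made-up illustrative numbers, not real foundry figures.)

    import math

    def die_yield(die_area_mm2, defects_per_mm2):
        # Simple Poisson yield model: fraction of dies with zero defects.
        return math.exp(-defects_per_mm2 * die_area_mm2)

    def cost_per_good_die(wafer_cost, wafer_area_mm2, die_area_mm2, defects_per_mm2):
        dies_per_wafer = wafer_area_mm2 / die_area_mm2           # ignores edge loss
        good_dies = dies_per_wafer * die_yield(die_area_mm2, defects_per_mm2)
        return wafer_cost / good_dies

    WAFER_COST = 17000                   # illustrative $ per 300 mm wafer
    WAFER_AREA = math.pi * 150 ** 2      # ~70,686 mm^2

    # Cost per good die grows faster than die area:
    for die_area in (100, 300, 600):
        print(die_area, round(cost_per_good_die(WAFER_COST, WAFER_AREA,
                                                die_area, 0.001), 2))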
Historically this has been true, and falling costs are part of Gordon Moore's original statement.
It has been true for Intel up until at least 2015, and I expect that, ignoring the recent supply-chain weirdness, it will remain true for a while longer.
http://www.imf.org/~/media/Files/Conferences/2017-stats-foru...
Not necessarily, because conversely this also allows packing additional complexity onto the same die size.
Just look back over CPU development and costs over the last couple of decades. The latest-gen stuff got ever more complex and ever more capable, but cost roughly the same at the time it hit the market.
28nm is probably where you want to be for cost - check out what the Raspberry Pi and other lower-cost products use.
https://www.gizmochina.com/2021/03/31/tsmc-price-increase-ru...