Basically, this is about IBM and 3M starting a new research project. Whether it would work, how long it would take, at what cost - these are all open questions.
There is a lot of sensationalist PR from IBM.
The Cat Brain, the Smarter Planet thing, the AI processor and now a 1,000x faster processor... assorted claims and nothing concrete. I wonder what this PR strategy is supposed to accomplish, and who the target audience is.
The point with 3d layout is that you can have shorter interconnects, which means far less resistive heating, and less junction loss. It also means that your path lengths can be shortened, possibly allowing you to bump up the frequency.
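A quick back-of-the-envelope sketch of why shorter wires help so much (all numbers here are my own ballpark assumptions, not from the article): the RC delay of an unbuffered on-chip wire grows with the square of its length, since both resistance and capacitance grow linearly with length.

```python
# Elmore-style delay estimate for an unbuffered distributed RC wire.
# r and c per mm are illustrative placeholders, not real process figures.

def wire_delay_ps(length_mm, r_ohm_per_mm=1000.0, c_ff_per_mm=200.0):
    """Approximate delay of an unbuffered wire, in picoseconds."""
    r = r_ohm_per_mm * length_mm          # total resistance, ohms
    c = c_ff_per_mm * length_mm * 1e-15   # total capacitance, farads
    return 0.5 * r * c * 1e12             # 0.5 factor for distributed RC

# Crossing a 10 mm die edge-to-edge vs. hopping ~0.05 mm up to the next layer:
planar = wire_delay_ps(10.0)
stacked = wire_delay_ps(0.05)
print(planar / stacked)  # (10 / 0.05)^2 = ~40,000x, thanks to the quadratic scaling
```

The exact constants don't matter; the point is that replacing a long planar route with a short vertical hop wins quadratically, which is what makes both the heat and the frequency arguments plausible.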
Stacking stuff means you should be able to run cooler for the same amount of processing power.
1000x? I think that's probably BS. But I can see this being a significant win.
IBM talks about up to 100 layers, not 1000. Not all components have the same power requirements or are even powered up at the same time.
Also, this is IBM Research. They don't need to make robust processors for the consumer market. If the CPU requires water cooling or better, IBM can build and sell that. They already have such solutions in the field for supercomputers and mainframes.
I would assume the heat would be expected to ooze out the sides of each layer -- much like the condiments leaking out of a burger when you squeeze it :-)
First 3D silicon stack I saw was in the late 80s. (Back then it was a stack of 2.) The field has produced a rich string of PR announcements and funding proposals over that time though.
The problems then as now: Heat dissipation and signal integrity as you pass through the stack.
If this is real (and since it's IBM and they're claiming production by 2013, I'm optimistic), this is going to be a huge game changer on so many levels.
This does not look like a big win for Python, since it's most likely to provide you with a ludicrous number of computing cores, and Python is currently a bad tool for concurrent computing.
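For what it's worth, the standard workaround today is process-based parallelism: CPython's GIL keeps threads from running bytecode in parallel, so CPU-bound work has to be spread across processes instead. A minimal stdlib sketch (toy workload, nothing beyond `multiprocessing` assumed):

```python
from multiprocessing import Pool

def burn(n):
    # CPU-bound toy workload: sum of squares below n
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    # One process per core; each process has its own interpreter and GIL,
    # at the cost of serializing arguments and results between them.
    with Pool(processes=4) as pool:
        results = pool.map(burn, [100_000] * 4)
    print(sum(results))
```

This scales across cores, but the serialization overhead is exactly why "ludicrous numbers of cores" is an awkward fit for Python today.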
If we're going to increase processing power by stacking instead of shrinking, then we will double the volume every 18 months. We'll be back at room-sized computers in 40 years :-)
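The joke roughly checks out (the 1 cm^3 starting volume is my own assumption):

```python
# Double the volume every 18 months for 40 years, starting from ~1 cm^3.
years = 40
doublings = years * 12 / 18            # ~26.7 doublings
growth = 2 ** doublings                # ~1e8x
volume_m3 = 1e-6 * growth              # 1 cm^3 = 1e-6 m^3
print(volume_m3)                       # on the order of 100 m^3: room-sized
```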
This is really exciting. I don't see anything about this technology that would limit it to just CPU chips. I'd imagine it would work just as well in a GPU chip. That would make incredibly realistic simulation of graphics and physics possible.
It would obviously revolutionize gaming completely, but most importantly it would revolutionize science. Imagine having the power of the whole Folding@Home network in your laptop! Imagine how much power the future Folding@Home projects and other similar projects will have.
I'm a diehard optimist, but a sudden 100-1000x increase in ordinary CPU speeds could save thousands (if not millions) of lives and advance science a hell of a lot!
The glue helps with heat dissipation, but that heat still comes from consuming electricity. 10,000 systems stacked on top of each other would consume 1 MW, assuming each one draws 100 W at full usage.
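Checking that arithmetic, plus the power density it implies (the ~4 cm^2 die footprint is my own assumption):

```python
layers, watts_per_layer = 10_000, 100.0
total_w = layers * watts_per_layer
print(total_w / 1e6, "MW")                     # 1.0 MW, matching the parent's figure

# Squeezed into a ~4 cm^2 die footprint, that is an absurd power density:
footprint_m2 = 4e-4
print(total_w / footprint_m2 / 1e6, "MW/m^2")  # 2500 MW/m^2
```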
Does anyone know by what metric it will be 1,000x faster? It seems like a technique for packing massive numbers of cores in parallel, but I'm not sure how the adhesive would deliver any specific improvements to the cores themselves.
That said, even if it's just a way to get 1,000-core CPUs or whatever, it sounds like a remarkable breakthrough.
A lot of wait states in a CPU come from accessing memory outside of cache. If you make all the RAM in your system the same as cache, you'd get a huge leap.
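A toy model of that point, with assumed ballpark latencies (4 cycles for a cache hit, 200 for a trip to DRAM -- illustrative numbers, not measurements):

```python
# Average memory access time (AMAT) as a function of cache hit rate.
def amat_cycles(hit_rate, hit_cycles=4, miss_cycles=200):
    return hit_rate * hit_cycles + (1 - hit_rate) * miss_cycles

print(amat_cycles(0.95))   # ~13.8 cycles even with a 95% hit rate
print(amat_cycles(1.0))    # 4 cycles if all of RAM were as fast as cache
```

Even a 5% miss rate more than triples the average access time under these assumptions, which is why stacking DRAM directly on the CPU is such a tantalizing prospect.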
I'm speculating wildly here, but I suspect a major benefit to going 3D like this will be improved chip layout -- no need to take a lengthy route around all the stuff in the middle just to get from the ALU to the L1 cache (for example), just go straight up.
Of course the article doesn't mention if it's possible to make (and align) vias through this special glue, but I would really hope so.
I miss Jim Gray; he really did see things clearly. His response to the Alpha processor was that some day a computer would be a smoking hot, hairy golf ball. The reasoning was straightforward: spheres have the shortest paths between any two points; as density increased you needed more and more connections (wires), which were getting finer and finer in diameter; and power dissipation, well, it wasn't going anywhere -- electrons moving around bump into things, get over it.
My speculation is that the future is carbon, for a variety of reasons. In its many forms it has all the properties you need to make chips, from diamond insulators to graphene conductors and nanotube semiconductors. It can make light, it can trap light, it can conduct heat like there is no tomorrow. Truly, the day we can drop layers of carbon down and control the structure as it drops -- think 'maker-bot with a molecular carbon extruder head that works at nanometer scale' -- it's game on for truly mind-blowing electronics.
Well, 3D processors have always been out there as a research subject. The issue is heat when you crank it up. Stream processors (your GPU) get around it by doing less work per core.
And so the heat dissipation seems to be the major point of this news. Will be interesting to see a more technical writeup or some published papers on this.
So this is a way of dissipating heat more efficiently, such that chips can apparently be stacked on top of each other -- does that really help all that much for consumer (and especially mobile) stuff?
Seems like you're still generating the same total amount of heat, which needs venting from the machine. And still consuming the same amount of power.
I'm sure there's some value here, but it's not going to arise from just stacking up the same chips we use today.
When large advances in processing power become cost-effective and widespread, programming paradigms will change to take advantage of the fact. Cases in point: the Cell processor and modern PS3 games that really take advantage of it, CUDA on graphics cards and all the scientific computing that is starting to shift to it, and even multi-core and the renewed interest in multithreaded programming.
I don't see how stacking chips would do anything but make the heat dissipation problem much much worse. There is no magical "heat dissipating adhesive", only substances that conduct heat better or worse.
A conventional chip is a thin planar heating element attached with maximal surface area to its heat sink and we have great trouble keeping them cool now.
Heat generated inside a cubic object is more difficult to remove because the volume generating the heat grows faster than the surface area available for removing it (heat generation grows with the cube of the unit length, the surface only with the square).
The worst shape of all is spherical which has the highest possible ratio of volume to surface area.
Even if they managed to put heat sinks in contact with all six sides of this stack of dies, they could (at best) only remove six conventional chips' worth of heat.
Unless they have a way of circulating liquid coolant through that volume, stacking chips like that is only practical if they're almost completely off.
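The cube-square argument above, in numbers (the unit-cube baseline is an arbitrary choice; only the ratio matters):

```python
# Scale a die stack up by a factor k: heat tracks volume, cooling tracks surface.
def heat_to_surface_ratio(k, base_edge=1.0):
    volume = (k * base_edge) ** 3       # heat generated ~ volume
    surface = 6 * (k * base_edge) ** 2  # heat removable ~ total surface area
    return volume / surface

print(heat_to_surface_ratio(1))    # baseline
print(heat_to_surface_ratio(10))   # 10x the heat per unit of cooling surface
```

The ratio grows linearly with the scale factor, which is exactly why a tall stack can't just be cooled from the outside like a planar die.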
Sure there's value. The biggest obstacle to cranking up clock speed is heat dissipation. If we can get the heat away from the chip faster, we can increase the clock speed.
The article is not very technical, but I wanted to understand: would this enable non-CPU cores to be glued in as well?
I mean a single stack of, say, 12 CPU layers, 12 GPU layers, 8 memory layers, 2 physics layers and so on... Could that be possible? Then we'd get really good overall throughput, or am I thinking wrong?
The question I have is what would we do with processors that are 1000 times faster than now?
I remember buying my first computer and asking to upgrade to a 30mb hard drive instead of 25mb. The sales guy said I would 'never fill 25mb'. He had no point of reference for how large graphics files would be (this is pre-digital photography).
What new capabilities/industries will be enabled with even 1/10th of that power in a mobile device?
Because for most consumer-grade computing you are limited to Wintel. Multi-socket and multi-core only became popular after the 9x-NT Windows transition. 64-bit didn't become popular until Vista.
I don't understand why TEC hasn't been somehow used to make 3D chips workable by now, even just for specialized applications where size and weight are not major concerns.
"Thermoelectric cooling uses the Peltier effect to create a heat flux between the junction of two different types of materials. A Peltier cooler, heater, or thermoelectric heat pump is a solid-state active heat pump which transfers heat from one side of the device to the other side against the temperature gradient (from cold to hot), with consumption of electrical energy. Such an instrument is also called a Peltier device, Peltier heat pump, solid state refrigerator, or thermoelectric cooler (TEC). The Peltier device is a heat pump: when direct current runs through it, heat is moved from one side to the other. Therefore it can be used either for heating or for cooling (refrigeration), although in practice the main application is cooling. It can also be used as a temperature controller that either heats or cools."
Peltier cooling is very inefficient; water cooling (especially with microchannels, where you could use some layers of the chip for cooling) is far more efficient.
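One way to see why a Peltier stage buried in the stack makes the total heat budget worse rather than better: the electrical power the TEC itself draws also comes out its hot side. A sketch with an assumed COP of 0.5 (real TECs working across a large temperature delta often have a COP below 1):

```python
# Total heat leaving the hot side of a thermoelectric cooler.
def hot_side_watts(heat_pumped_w, cop):
    electrical_w = heat_pumped_w / cop       # power drawn by the TEC itself
    return heat_pumped_w + electrical_w      # all of it exits the hot side

print(hot_side_watts(100.0, cop=0.5))  # pumping 100 W away dumps 300 W outside
```

So a TEC can move the hot spot, but the system as a whole now has to reject three times as much heat.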
[+] [-] zeteo|14 years ago|reply
http://www-03.ibm.com/press/us/en/pressrelease/35358.wss
[+] [-] kylelibra|14 years ago|reply
[+] [-] sek|14 years ago|reply
[+] [-] sliverstorm|14 years ago|reply
To solve this problem, we are going to stack 1000 dies, and produce 1000x as much heat.
Who cares about whether the heat can travel up the column -- where is the improvement in cooling technology to remove 1000x the heat from the die stack?
[+] [-] ori_b|14 years ago|reply
[+] [-] tobiasu|14 years ago|reply
[+] [-] redwood|14 years ago|reply
[+] [-] jpdoctor|14 years ago|reply
[+] [-] michaelchisari|14 years ago|reply
[+] [-] xyzzyz|14 years ago|reply
[+] [-] swah|14 years ago|reply
[+] [-] saintfiends|14 years ago|reply
[+] [-] wcoenen|14 years ago|reply
[+] [-] kristofferR|14 years ago|reply
[+] [-] mahyarm|14 years ago|reply
[+] [-] angrycoder|14 years ago|reply
so, asking questions now results in downvotes?
[+] [-] smhinsey|14 years ago|reply
[+] [-] joezydeco|14 years ago|reply
[+] [-] gmaslov|14 years ago|reply
[+] [-] ChuckMcM|14 years ago|reply
[+] [-] ambertch|14 years ago|reply
[+] [-] starwed|14 years ago|reply
[+] [-] pork|14 years ago|reply
[+] [-] marshray|14 years ago|reply
[+] [-] ramidarigaz|14 years ago|reply
[+] [-] Tycho|14 years ago|reply
[+] [-] ch0wn|14 years ago|reply
[+] [-] DrCatbox|14 years ago|reply
[+] [-] brainless|14 years ago|reply
[+] [-] wmf|14 years ago|reply
[+] [-] moe|14 years ago|reply
Cinemas, TV sets, handheld consoles, and now even our CPUs.
[+] [-] Zolomon|14 years ago|reply
[+] [-] espeed|14 years ago|reply
[+] [-] pedalpete|14 years ago|reply
[+] [-] AshleysBrain|14 years ago|reply
[+] [-] burgerbrain|14 years ago|reply
[+] [-] njharman|14 years ago|reply
[+] [-] idonthack|14 years ago|reply
[+] [-] spitfire|14 years ago|reply
Faster CPUs are important in several areas, but for 99% of problems it's RAM and stable storage that are the bottleneck.
[+] [-] marshray|14 years ago|reply
[+] [-] swah|14 years ago|reply
[+] [-] rbanffy|14 years ago|reply
[+] [-] unknown|14 years ago|reply
[deleted]
[+] [-] 46Bit|14 years ago|reply
[+] [-] ThaddeusQuay2|14 years ago|reply
"Thermoelectric cooling uses the Peltier effect to create a heat flux between the junction of two different types of materials. A Peltier cooler, heater, or thermoelectric heat pump is a solid-state active heat pump which transfers heat from one side of the device to the other side against the temperature gradient (from cold to hot), with consumption of electrical energy. Such an instrument is also called a Peltier device, Peltier heat pump, solid state refrigerator, or thermoelectric cooler (TEC). The Peltier device is a heat pump: when direct current runs through it, heat is moved from one side to the other. Therefore it can be used either for heating or for cooling (refrigeration), although in practice the main application is cooling. It can also be used as a temperature controller that either heats or cools."
http://en.wikipedia.org/wiki/Thermoelectric_cooling
[+] [-] wmf|14 years ago|reply