Slightly unrelated, but is there any way of maintaining temperatures that low (77 K and 10 K, according to the paper's numbers) that doesn't immediately kill perf/W and perf/$?
Otherwise you might as well just buy more CPUs.
The minimum amount of work needed to pump some amount of heat Q from a temperature T0 to a higher temperature T1 is W = Q*(T1/T0 - 1). For example, if your ambient heat sink is at 20C (293K) you need at least 2.8W of electricity to run the cooler for every 1W dissipated at 77K, or 28.3W for 1W dissipated at 10K. This is the thermodynamic lower limit, and practical heat pumps will be less efficient in general. In practice it might be something like 4x and 50x, respectively.
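The Carnot bound above can be sketched numerically (a minimal sketch; the 293 K ambient sink is taken from the comment, and the function name is mine):

```python
# Minimum (Carnot-limit) work to pump Q watts of heat from a cold stage
# at T_cold up to an ambient sink at T_hot: W = Q * (T_hot/T_cold - 1).
def carnot_work(q_watts, t_cold, t_hot=293.0):
    """Thermodynamic lower bound on cooler input power, in watts."""
    return q_watts * (t_hot / t_cold - 1.0)

print(carnot_work(1.0, 77.0))   # ~2.8 W of cooler input per watt dissipated at 77 K
print(carnot_work(1.0, 10.0))   # ~28.3 W per watt dissipated at 10 K
```

Practical cryocoolers land well above this bound, hence the roughly 4x and 50x figures.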
Leakage current is what heats up the chip, and if it drops by five orders of magnitude when it's cool, the energy requirements for refrigeration will be low. Memory chips are already not that power-dense (on the order of 10 W for a DIMM), so we're only talking about extracting 1 mW of heat from the cryo chamber.
>As IOFF at 77 and 10 K decreases by four to five orders [29], the primary constraint of building a large memory array, i.e., leakage current (Ileak), will not be a major concern and will lead to novel design tradeoffs for memory optimization.
This comment assumes that the leakage current is all of the power draw, and not just the majority of it. I find it unthinkable that leakage current is 99.99% of the power draw of SRAM. 95% sounds believable, but then you're talking about removing 500 mW, not 1 mW.
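The two comments' numbers can be combined into a rough wall-power budget (illustrative assumptions only: a 10 W DIMM, and a practical cooler drawing ~4x the extracted heat at 77 K, per the figures above):

```python
# Whatever fraction of the DIMM's power is leakage disappears when cooled;
# the residual (dynamic) power must still be extracted from the cryo chamber,
# at a practical cooler cost of ~4 W of input per W extracted at 77 K.
dimm_power = 10.0           # W at room temperature (illustrative)
practical_multiplier = 4.0  # cooler input per watt extracted at 77 K

for leakage_fraction in (0.9999, 0.95):
    residual = dimm_power * (1.0 - leakage_fraction)  # heat left to extract, W
    cooler_input = residual * practical_multiplier
    print(f"{leakage_fraction:.2%} leakage -> extract {residual * 1000:.0f} mW, "
          f"cooler draws ~{cooler_input * 1000:.0f} mW")
```

At 95% leakage the cooler draws about 2 W per DIMM, which is no longer negligible against the 10 W saved.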
This also gets rather tricky, because the standard way to connect computer chips is with copper traces, which are wildly good conductors of heat. A solution like this will probably need optical interconnects, with the mechanical connections made from a thermal insulator.
Leakage current is generally a rounding error for heat. In CMOS, the power that causes the most heat is the dynamic switching power, P = C * Vdd^2 * f.
Which implies that for the fastest chips, most power is lost simply to running the clock, which has both the highest frequency and the largest capacitive load.
Where leakage current matters is in battery-driven systems where you spend most of your time sleeping.
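The switching-power formula in the comment above can be illustrated with a quick calculation (values are illustrative, not from the paper; alpha is the usual activity factor, which defaults to 1 to match the comment's form):

```python
# Dynamic CMOS switching power: P = alpha * C * Vdd^2 * f, where alpha is
# the fraction of the capacitance switched each cycle. The clock network
# has alpha ~ 1 and a large C, which is why it tends to dominate.
def dynamic_power(c_farads, vdd, freq_hz, alpha=1.0):
    return alpha * c_farads * vdd**2 * freq_hz

# e.g. 1 nF of switched clock capacitance at 0.9 V and 3 GHz:
print(dynamic_power(1e-9, 0.9, 3e9))  # ≈ 2.43 W
```

Note that cooling doesn't eliminate this term: dynamic power is dissipated inside the cryo chamber regardless of leakage.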
77 K is basically the boiling point of liquid nitrogen, and 10 K would call for liquid helium (which boils around 4.2 K). Liquid nitrogen is in ample supply and is not difficult to manufacture; I suppose one could have a facility on site to produce it and use it immediately. It is going to be very energy-intensive, though. To answer your question, I struggle to think of a scenario where it would be better than buying more compute power. I suppose for stubbornly serial workloads... but I'm not sure what that could be? Running Crysis at 20K resolution?
You can make liquid N2, though very inefficiently. So yeah, power is an issue, although we are still making gains on cooling efficiency, so it's not inconceivable the equation could swing towards super-low-temperature coolants.
The idea I heard was to make liquid nitrogen during the day when solar power is abundant and then run the chips at greater efficiency at night using your stored liquid nitrogen.
They mention space, medical, and quantum computing equipment as the target uses, where all of the processing is done at cryogenic temperatures. One of the biggest benefits they have found is that increased density in chips is possible. The researchers behind this paper are only working with approximate numbers, and as mentioned are using the numbers for liquid nitrogen, but space-based cryo pumps use helium, so the actual performance would improve.
https://hackaday.com/2022/05/05/about-as-cold-as-it-gets-the...
On Earth it's difficult, as you have to pay the price of being inside a 300 K environment. But there's no such temperature in space; it's just a question of the size of the radiator you'll need anyway. So there may be a very real performance improvement from doing math in space.
OTOH, you might want to bury your supercomputer deep in the crust of Pluto (or in a permanently shaded lunar crater) with just a radiator sticking out.
Latencies between Earth and Pluto can be a problem for computing, but I would appreciate the impossibility of receiving Teams calls. Also, any AI running on that hardware will have a ton of time to think about... anything.