top | item 42812161

darkfloo | 1 year ago

Slightly unrelated, but is there any way of maintaining that low a temperature (77 K and 10 K, according to the paper's numbers) that doesn't immediately kill perf/W and perf/$? Otherwise you might as well just buy more CPUs.

mppm|1 year ago

The minimum amount of work needed to pump some amount of heat Q from a temperature T0 to a higher temperature T1 is W = Q*(T1/T0 - 1). For example, if your ambient heat sink is at 20C (293K) you need at least 2.8W of electricity to run the cooler for every 1W dissipated at 77K, or 28.3W for 1W dissipated at 10K. This is the thermodynamic lower limit, and practical heat pumps will be less efficient in general. In practice it might be something like 4x and 50x, respectively.
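The arithmetic above is easy to check with a quick sketch (Python; the 293 K ambient, 77 K, and 10 K figures are from the comment):

```python
def min_cooling_power(q_cold_watts, t_cold, t_ambient=293.0):
    """Carnot-limit electrical work (W) needed to pump q_cold_watts of heat
    from a cold reservoir at t_cold up to t_ambient: W = Q * (T1/T0 - 1)."""
    return q_cold_watts * (t_ambient / t_cold - 1.0)

print(min_cooling_power(1.0, 77.0))  # ~2.8 W per watt dissipated at 77 K
print(min_cooling_power(1.0, 10.0))  # ~28.3 W per watt dissipated at 10 K
```

Real cryocoolers run well below this Carnot limit, hence the 4x and 50x practical estimates.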

whatshisface|1 year ago

Leakage current is what heats up the chip, and if it drops by five orders of magnitude when it's cool, the energy requirements for refrigeration will be low. Memory chips are already not that power-dense (on the order of 10W for a DIMM) so we're only talking about extracting 1mW of heat from the cryo chamber.

>As IOFF at 77 and 10 K decreases by four to five orders [29], the primary constraint of building a large memory array, i.e., leakage current (Ileak), will not be a major concern and will lead to novel design tradeoffs for memory optimization.

timerol|1 year ago

This comment assumes that the leakage current is all of the power draw, and not just the majority of it. I find it unthinkable that leakage current is 99.99% of the power draw of SRAM. 95% sounds believable, but then you're talking about removing 500 mW, not 1 mW.
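A back-of-the-envelope sketch of why the leakage fraction dominates the answer (the 10 W DIMM figure and 293 K ambient are from the thread; the leakage fractions are illustrative, not measured):

```python
# Residual heat to extract from the cryo chamber if only a given fraction
# of a DIMM's power is leakage that vanishes at 77 K. Numbers illustrative.
dimm_power = 10.0  # W, rough power of a DIMM (from the thread)
t_cold, t_ambient = 77.0, 293.0
carnot_factor = t_ambient / t_cold - 1.0  # ~2.8 W of work per W pumped out

for leakage_fraction in (0.9999, 0.95):
    residual = dimm_power * (1.0 - leakage_fraction)  # heat that remains
    print(f"leakage {leakage_fraction:.2%}: extract {residual * 1000:.1f} mW, "
          f"cooler draws >= {residual * carnot_factor * 1000:.1f} mW")
```

At 95% leakage the cooler's floor is over a watt per DIMM rather than a few milliwatts, which changes the perf/W picture considerably.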

This also gets rather tricky, because the standard way to connect computer chips is with copper traces, which are wildly good conductors of heat. A solution like this will probably need optical interconnects, with the physical connections made from a thermal insulator.

It's a fun design problem to chew on.

bsder|1 year ago

> Leakage current is what heats up the chip

Leakage current is generally a rounding error for heat. In CMOS, the power that causes the most heat is the dynamic switching power: P = C * Vdd^2 * frequency.

Which implies that for the fastest chips, most power is lost simply to running the clock which has both the highest frequency and largest capacitive load.
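The relation is easy to play with numerically (a sketch; the capacitance, voltage, and frequency values are made-up illustrative figures, not from any real chip):

```python
def dynamic_power(c_load_farads, vdd_volts, freq_hz, activity=1.0):
    """CMOS dynamic switching power: P = a * C * Vdd^2 * f.
    activity=1.0 models a clock net, which toggles every cycle;
    ordinary logic nets switch far less often."""
    return activity * c_load_farads * vdd_volts**2 * freq_hz

# Hypothetical 1 nF of switched capacitance at 1.0 V and 3 GHz:
print(dynamic_power(1e-9, 1.0, 3e9))        # ~3 W for the always-toggling clock
print(dynamic_power(1e-9, 1.0, 3e9, 0.1))   # ~0.3 W for logic at 10% activity
```

The quadratic dependence on Vdd is why voltage scaling, not frequency scaling, gives the biggest power wins.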

Where leakage current matters is for battery driven systems where you spend most of your time sleeping.

I strongly suggest that you go over this lecture "CMOS Power Consumption": https://course.ece.cmu.edu/~ece322/LECTURES/Lecture13/Lectur...

short_sells_poo|1 year ago

77 K is basically the boiling point of liquid nitrogen, and 10 K is probably the same for liquid helium. Liquid nitrogen is in ample supply and is not difficult to manufacture; I suppose one could have a facility on site to produce it and use it immediately. It is going to be very energy intensive, though... To answer your question, I struggle to think of a scenario where it would be better than buying more compute power. I suppose for stubbornly serial workloads... but I'm not sure what that could be? Running Crysis at 20k resolution?

pfdietz|1 year ago

Boiling point of He at 1 bar is 4.222 K, and its critical point is at 5.1953 K. At 10 K helium is a gas.

XorNot|1 year ago

You can make liquid N2, though very inefficiently. So yeah, power is an issue although we are still making gains on cooling efficiency so it's not inconceivable the equation could swing towards super low temperature coolants.

changoplatanero|1 year ago

The idea I heard was to make liquid nitrogen during the day when solar power is abundant and then run the chips at greater efficiency at night using your stored liquid nitrogen.

Tostino|1 year ago

Trading algorithms.

metalman|1 year ago

They mention space, medical, and quantum computing equipment as the target uses, where all of the processing is done at cryogenic temperatures. One of the biggest benefits they have found is that increased density in chips is possible. The researchers behind this paper are only working with approximate numbers, and as mentioned are using the numbers for liquid nitrogen, but space-based cryo pumps use helium, so the actual performance would improve. https://hackaday.com/2022/05/05/about-as-cold-as-it-gets-the...

Out_of_Characte|1 year ago

On Earth it's difficult, as you need to pay the price of being inside a 300 kelvin environment. But there's no such ambient temperature in space; it's just a question of the size of the radiator you'll need anyway. So there may be a very real performance improvement from doing math in space.

rbanffy|1 year ago

Radiation will want to talk to you.

OTOH, you might want to bury your supercomputer deep in the crust of Pluto (or in a permanently shaded lunar crater) with just a radiator sticking out.

Latencies between Earth and Pluto can be a problem for computing, but I would appreciate the impossibility of receiving Teams calls. Also, any AI running on that hardware will have a ton of time to think about... anything.

HPsquared|1 year ago

If you really really want single-thread performance, that's where you go.

unwind|1 year ago

Not sure, but not all tasks are possible/easy to split among multiple CPUs, so it's not always "might as well"... Just saying.