> We use a logic primitive called the adiabatic quantum-flux-parametron (AQFP), which has a switching energy of 1.4 zJ per JJ when driven by a four-phase 5-GHz sinusoidal ac clock at 4.2 K.
The Landauer limit at 4.2 K is kT·ln 2 ≈ 4.019×10^-23 J. So this is only a factor of ~35 away from the Landauer limit.

Admittedly, I haven't done much reading, but I see it is linked from the page on Bremermann's limit: https://en.wikipedia.org/wiki/Landauer%27s_principle
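For anyone who wants to check the arithmetic, a quick sketch in Python (k_B is the Boltzmann constant; the 1.4 zJ figure is from the article):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 4.2              # operating temperature, K
E_landauer = k_B * T * math.log(2)   # minimum energy to erase one bit
E_switch = 1.4e-21                   # reported AQFP switching energy, J

print(f"Landauer limit at {T} K: {E_landauer:.3e} J")              # ~4.019e-23 J
print(f"1.4 zJ vs the limit: {E_switch / E_landauer:.0f}x above")  # ~35x
```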
The interesting thing about this is that if we get close to the Landauer limit, we may have to start seriously thinking about reversible computing [1] paradigms and languages to get optimal performance.
Given the cooling requirements, I suppose it would create a completely impassable rift between datacenter computing and other kinds. Imagine how programming and operating systems might look in a world where processing power is 80x cheaper.

[1] https://en.wikipedia.org/wiki/Reversible_computing

https://youtube.com/watch?v=BBqIlBs51M8
Considering that "data centers alone consume 2% of the world's energy", I think it's worth it.
It seems likely that the more efficient our processors become, the larger the share of the world's energy we'll devote to them [0]. Not that that's necessarily a bad thing, if we're getting more than proportionally more utility out of the processors, but I worry about that too [1].

[0] https://en.wikipedia.org/wiki/Jevons_paradox

[1] https://en.wikipedia.org/wiki/Wirth%27s_law

So, like 2009 compared to 2021? Based on that, I'd say even more inefficient webshit.
Solid-state physics begets both cryogenic technology and cryocooling technology. I wouldn't write off the possibility of making an extremely small cryocooler quite yet. Maybe a pile of solid-state heat pumps could do it.
I don't see an impassable rift. Probably at first, but supercooling something very small is something that could certainly be productized if there is demand for it.
I can see demand in areas like graphics. Imagine real-time raytracing at 8K at 100+ FPS with <10 ms latency.

Just wait 10 years?

UIs will have physically based rendering and interaction.
It’ll all be wasted. When gasoline prices plummet, everyone buys 8 mpg SUVs. If power & performance get cheaper, it’ll be wasted. Blockchain in your refrigerator.
As processing power cheapens, programmers will settle for lazy and inefficient code for everyday consumer applications. It will be easier to be a programmer, because you can get away with writing shitty code. So wages will fall and the prestige of being a software developer will wane. The jobs requiring truly elite skill and understanding will dwindle and face fierce competition for their high pay.
Before this happens, I recommend having your exit strategy for the industry, living off whatever profits you made working as a developer during the early 21st century.
Nice, but it requires liquid-helium temperatures (4.2 K), so it's not very practical.
Once this can be done at the temperature of liquid nitrogen, that will be a true revolution. The difference in cost between producing liquid nitrogen and liquid helium is enormous.

Alternatively, such servers could theoretically be stored in the permanently shaded craters of the lunar South Pole, but at the cost of massive ping.

A quick Google search yields $3.50 for 1 L of liquid helium vs. $0.30 for 1 L of liquid nitrogen. So roughly 10 times more expensive.
If the throughput is fast enough, ~1.3+1.3 ≈ 2.6 seconds of round-trip latency doesn't really sound that bad. There are websites with that kind of lag. You can't use it to build a chat app, but you can use it as a cloud for general computing.
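Back-of-envelope for the lunar ping, assuming the mean Earth-Moon distance and ignoring all processing and queuing delays:

```python
distance_m = 384_400e3   # mean Earth-Moon distance, m
c = 299_792_458          # speed of light in vacuum, m/s

one_way = distance_m / c
print(f"one way: {one_way:.2f} s, round trip: {2 * one_way:.2f} s")
# -> one way: 1.28 s, round trip: 2.56 s at absolute minimum
```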
I'm no physicist, but wouldn't you need some kind of medium to efficiently transfer the heat away?
On the Moon you have no atmosphere, so radiators with fans won't work; I guess you would have to build huge radiators which simply emit the heat away as infrared radiation?
I doubt an 80x difference would make it attractive. If it were 8000x, then maybe.
And that's only if you use the soil for cooling, which is a non-renewable resource. If you use radiators, then you can put them on a satellite instead, with much lower ping.

Astronaut DRIs?
It'll be interesting to see if the cryptocurrency mining industry will help subsidize this work, since their primary edge is power/performance.
During stable price periods, the power/performance of cryptocurrency miners runs right up to the edge of profitability, so someone who can come in at 20% under that would have a SIGNIFICANT advantage.
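To illustrate why a 20% power edge matters so much at the margin, a toy model where every number is invented for illustration:

```python
# At the profitability edge, power is almost the entire marginal cost,
# so a modest efficiency gain multiplies the profit margin.
revenue_per_day = 100.0      # $ per rig per day (made up)
power_cost_per_day = 90.0    # $ per rig per day, near the edge (made up)

margin_baseline = revenue_per_day - power_cost_per_day           # $10/day
margin_efficient = revenue_per_day - 0.8 * power_cost_per_day    # 20% less power

print(margin_baseline, margin_efficient)  # 10.0 vs 28.0 -> ~2.8x the margin
```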
> In this paper, we study the use of superconducting technology to build an accelerator for SHA-256 engines commonly used in Bitcoin mining applications. We show that merely porting existing CMOS-based accelerator to superconducting technology provides 10.6X improvement in energy efficiency.

https://arxiv.org/abs/1902.04641
If something like that happens, it will have far-reaching consequences IMO. I'm not pro-blockchain, but the energy cost is important, and if it goes away significantly, people will just pile on it 10x harder.
Not a physicist so I'm probably getting different concepts mixed up, but maybe someone could explain:
> in principle, energy is not gained or lost from the system during the computing process
Landauer's principle (from Wikipedia):
> any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase in non-information-bearing degrees of freedom of the information-processing apparatus or its environment
Where is this information going, inside of the processor, if it's not turned into heat?
If your computational gates are reversible [1], then in principle, energy is not converted to heat during the computational process, only interconverted between other forms. So, in principle, when you reverse the computation, you recover the entire energy you input into the system.
However, in order to read out the output of computation, or to clear your register to prepare for new computation, you do generate heat energy and that is Landauer's principle.
In other words, you can run a reversible computer back and forth and do as many computations as you want (imagine a perfect ball bouncing in a frictionless environment), as long as you don't read out the results of your computation.
[1] The NOT gate is reversible, and you can create reversible versions of AND and OR by adding some wires to store the inputs.
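A minimal sketch of footnote [1]: the Toffoli (CCNOT) gate computes AND while carrying its inputs along, so it is its own inverse and never erases information:

```python
# Toffoli / CCNOT: (a, b, c) -> (a, b, c XOR (a AND b)).
# With c = 0 the third output is a AND b, and the inputs pass through,
# so running the gate twice restores the original state.
def toffoli(a: int, b: int, c: int) -> tuple:
    return a, b, c ^ (a & b)

for a in (0, 1):
    for b in (0, 1):
        out = toffoli(a, b, 0)               # reversible AND: out[2] == a & b
        assert toffoli(*out) == (a, b, 0)    # applying it again undoes it
        print(f"{a} AND {b} = {out[2]}")
```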
I was curious about this too. This chip is using adiabatic computing, which means your computations are reversible and therefore don't necessarily generate heat.
I'm having trouble interpreting what exactly that means though.
It's still getting turned into heat, just much less of it. The theoretical entropy increase required to run a computer is WAY less than what current computers (and probably even the one in the article) generate, so there is a lot of room to improve.
If it ever gets to home computing, it will get to data center computing far sooner. What does a world look like where data center computing is roughly 100x cheaper than home computing?
Flexible dumb terminals everywhere. But we already have this with things like Google Stadia. Fast internet becomes more important. Tricks like VS Code remote extensions, doing real-time rendering locally but bulk compute (compiling, in this case) on the server, become more common. I don't think any of this results in radical changes from current technology.
Dumb terminals everywhere. A huge upgrade of high-speed infrastructure across the US, since everyone will need high throughput and low latency. Subscriptions will arise first, as people fucking love predictable monthly revenue - and by people I mean vulture capitalists, and to a lesser degree, risk-averse entrepreneurs (which is almost an oxymoron...), both of whom you can see I hold in low regard. Get ready for a "$39.99/mo Office Productivity / Streaming / Web Browsing" package, a "$59.99 PrO gAmEr" package, and God knows what other kinds of disgusting segmentation.
Someone, somewhere, will adopt a Ting-type model where you pay for your compute per cycle, or per trillion cycles or whatever, with a small connection fee per month. It'll be broken down into some kind of easy-to-understand gibberish bullshit for the normies.
In short, it'll create another circle of Hell for everyone - at least initially.
The problem is the capital cost of the cryocooler.
The upfront cost of a cryocooler, spread out over its usable lifetime (they're mechanical, they wear out), vastly exceeds the cost of the electricity you save by switching from CMOS to JJs. Yes, I did the math on this. And cryocoolers are not following Moore's Law. Incredibly, they're actually becoming slightly more expensive over time after accounting for inflation. There was a LANL report about this which I'm trying to find; will edit when I find it. The report speculated that it had to do with raw-materials depletion.

All of the above I'm quite certain of. I suspect (but am in no way certain) that the energy expended to manufacture a cryocooler also vastly exceeds the energy saved over its expected lifetime as a result of its use. That's just conjecture, however, and nobody ever seems to address that point.
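For concreteness, the shape of that math looks something like this; every input below is a placeholder, not the parent's actual figures:

```python
# Toy amortization: cryocooler capital cost vs. electricity saved by
# switching from CMOS to JJ logic. All numbers are illustrative only.
capex = 500_000.0        # cryocooler purchase price, $ (placeholder)
lifetime_years = 5       # mechanical wear-out horizon (placeholder)
kwh_price = 0.10         # $ per kWh (placeholder)

cmos_kw = 20.0           # wall power of the CMOS system, kW (placeholder)
jj_kw = cmos_kw / 80     # superconducting system incl. cooling (80x claim)

hours = lifetime_years * 365 * 24
savings = (cmos_kw - jj_kw) * hours * kwh_price

print(f"electricity saved: ${savings:,.0f}  vs  cryocooler: ${capex:,.0f}")
# With these placeholders the capex dominates; real prices vary enormously.
```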
I'm one of the authors of the published paper that IEEE Spectrum referred to in the post. First off, thanks for posting! We're so delighted to see our work garner general interest! A few friends and relatives of mine mentioned that they came across my work by chance on Hacker News. I've already noticed the excellent questions and excellent responses provided by the community.
This comment might get buried but I'd just like to mention a few things:
- Indeed, we took into account the additional energy cost of cooling in the "80x" advantage quoted in the article. This is based on a cryocooling efficiency of 1000 W at room temperature per watt dissipated at cryogenic temperatures (4.2 K). This 1000 W/W coefficient is commonly used in the superconductor electronics field. The switching energy of 1.4 zJ per device is quite close to the Landauer limit as mentioned in the comments, but this assumes a 4.2 K environment. With cryocooling, the 1000x factor brings it to 1.4 aJ per device (a back-of-envelope version of this arithmetic is sketched after this list). Still not bad compared to SOTA FinFETs (~80x advantage), and we believe we can go even lower with improvements in our technology as well as cryocooling technology. The tables in Section VI of the published paper (open access, btw) go on to estimate what a supercomputer using our devices might look like using helium refrigeration systems commercially available today (which have an even better ~400 W/W cooling efficiency). The conclusion: we may easily surpass the US Department of Energy's exascale computing initiative goal of 1 exaFLOPS within a 20-MW power budget, something that's been difficult using current tech (although HPE/AMD's El Capitan may finally get there, we may be 1-2 orders of magnitude better assuming a similar architecture).
- Quantum computers require very very low temps (0.015 K for IBM vs the 9.3 K for niobium in our devices). With the surge in superconductor-based quantum computing research, we expect new developments in cryocooling tech which would be very helpful for us to reduce the "plug-in" power.
- Our circuits are adiabatic but they're not ideal devices hence we still dissipate a tiny bit of energy. We have ideas to reduce the energy even further through logically and physically reversible computation. The trade-off is more circuit area overhead and generation of "garbage" bits that we have to deal with.
- The study featured only a prototype microprocessor, and the main goal was to demonstrate that these AQFP devices can indeed do computation (processing and storage). The experience of developing this chip helped reveal the practical challenges in scaling up, and our new research directions are aggressively targeting them.
- The circuits are also suitable for the "classical" portion of quantum computing as the controller electronics. The advantage here is we can do classical processing close to the quantum computer chip which can help reduce the cable clutter going in/out of the cryocooling system. The very low-energy dissipation makes it less likely to disturb the qubits as well.
- We also have ideas on how to use the devices to build artificial neurons for AI hardware, and how we can implement hashing accelerators for cryptoprocessing/blockchain. (all in the very early stages)
- Other superconductor electronics showed super fast 700+ GHz gates but the power consumption is through the roof even before taking into account cooling. There are other "SOTA" superconductor chips showing more Josephson junction devices on a chip... many of those are just really long shift-registers that don't do any meaningful computation (useful for yield evaluation though) and don't have the labyrinth of interconnects that a microprocessor has.
- There are many pieces to think about: physics, IC fabrication, analog/digital design, architecture, etc., to make this commercially viable. At the end of the day, we're still working on the tech and trying to improve it, and we hope this study is just the beginning of something exciting.
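A back-of-envelope version of the cooling arithmetic in the first bullet above (the 1000 W/W coefficient, the 1.4 zJ figure, and the 20-MW exascale goal are from this comment; the FinFET number is just what the quoted ~80x implies):

```python
E_device = 1.4e-21     # J per switching event at 4.2 K
cooling_factor = 1000  # W at room temperature per W removed at 4.2 K

E_wall = E_device * cooling_factor
print(f"wall-plug energy per op: {E_wall:.1e} J")            # 1.4e-18 J = 1.4 aJ

print(f"implied FinFET energy per op: {80 * E_wall:.1e} J")  # ~1.1e-16 J

# DOE exascale goal: 1 exaFLOPS within 20 MW = 50 GFLOPS per watt.
print(f"DOE target: {1e18 / 20e6:.1e} FLOPS/W")              # 5.0e+10
```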
Huh, this seemed a bit too good to be true on first reading. But given that the limits on computing power tend to be thermal, and that a superconducting computer presumably wouldn't produce any heat at all, it does kind of make sense.
Sure. But how efficient are they once you include the power used to keep them cold enough to superconduct? I doubt that they're even as efficient as a normal microprocessor would be.
AQFP logic operates adiabatically which limits the clock rate to around 10 GHz in order to remain in the adiabatic regime. The SFQ logic families are non-adiabatic, which means they are capable of running at extremely fast clock rates as high as 770 GHz at the cost of much higher switching energy.
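Rough orders of magnitude for that trade-off (the AQFP energy is from the article; the SFQ figure uses the conventional estimate E ≈ I_c·Φ0 with an assumed 100 µA critical current, so treat it as illustrative):

```python
PHI_0 = 2.068e-15    # magnetic flux quantum, Wb
I_c = 100e-6         # assumed junction critical current, A

E_aqfp = 1.4e-21     # J per switch at 5 GHz (from the article)
E_sfq = I_c * PHI_0  # conventional SFQ switching-energy estimate, ~2.1e-19 J

print(f"SFQ/AQFP energy per switch: {E_sfq / E_aqfp:.0f}x")  # ~150x
# SFQ trades ~100x more energy per switch for up to ~100x the clock rate.
```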
In practice it will need to interface to external memory in order to perform (more) useful work.
Would there be any problems fashioning memory cells out of Josephson junctions, so that the power savings can carry over to the system as a whole?
https://cacm.acm.org/news/232327-the-outlook-for-superconduc...