To get a ballpark power usage, we can look at comparable (for some definition thereof) commercial offerings. Take a public datasheet from Arista[1]: they quote 16W typical for a 400Gbps module with 120km of reach. You would need 2500 of those modems at 16W (40kW) jointly decoding (i.e. very close together) to process this data rate. GPU compute has really pushed the boundaries on thermal management, but this would be far more thermally dense.

[1] https://www.arista.com/assets/data/pdf/Datasheets/400ZR_DCI_...
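A minimal back-of-envelope sketch of that estimate, assuming a 1 Pb/s aggregate rate (the petabit figure discussed downthread) and the Arista 16W / 400Gbps module numbers:

    # Rough power estimate: how many 400 Gb/s modules to carry 1 Pb/s,
    # and what they dissipate at 16 W each (figures from the Arista datasheet).
    aggregate_bps = 1e15    # 1 Pb/s, assumed aggregate rate
    module_bps = 400e9      # 400 Gb/s per module
    module_watts = 16       # typical power per module

    modules = aggregate_bps / module_bps     # 2500 modules
    total_kw = modules * module_watts / 1e3  # 40 kW
    print(f"{modules:.0f} modules, {total_kw:.0f} kW")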
cycomanic|7 months ago
The 40kW is not a very high number btw; the switches at the endpoints of submarine links are already quite a bit more power-hungry.
aDfbrtVt|7 months ago
The main point I was trying to make is the impracticality of MIMO SDM. The topic has been discussed to death (see the endless papers from Nokia), and it has yet to be deployed because the spatial gain is never worth the real-world implementation issues.
eqvinox|7 months ago
[but now that I think about it… I think my estimate is indeed too low; I was assuming commonplace transceivers for the unit factor, i.e. ≤1Tb/s; but a petabit on 19 cores is still 53Tb/s per core (quick check below)…]
¹ note the setup in this paper has separate amplifiers at 86.1km intervals, so the transmitter doesn't need to be particularly high-powered.
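A quick check of that per-core arithmetic, assuming a 1 Pb/s aggregate spread over 19 cores and a ≤1Tb/s transceiver as the unit factor:

    # Per-core rate for a 1 Pb/s aggregate over a 19-core fiber,
    # compared against a commonplace <=1 Tb/s transceiver.
    aggregate_tbps = 1000  # 1 Pb/s expressed in Tb/s
    cores = 19
    per_core_tbps = aggregate_tbps / cores  # ~52.6 Tb/s per core
    transceiver_tbps = 1                    # assumed unit factor
    print(f"{per_core_tbps:.0f} Tb/s per core, "
          f"{per_core_tbps / transceiver_tbps:.0f}x the unit factor")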
quickthrowman|7 months ago
I mean, it’s a shitload more power than a simple media converter that takes in fiber and outputs to an RJ-45, but not all that much compared to other commercial electrical loads. This Eaton/Tripp Lite unit draws ~40W at 120V: https://tripplite.eaton.com/gigabit-multimode-fiber-to-ether...
A smallish commercial heat pump/CRAC unit (~13kW of electrical input) can handle the cooling requirements (assuming a COP of 3).
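A sketch of that cooling arithmetic, assuming the ~40kW heat load from upthread and the stated COP of 3:

    # Electrical input needed to reject a given heat load at a given COP:
    # P_in = Q_heat / COP.
    heat_load_kw = 40  # dissipation from the modem estimate upthread
    cop = 3            # assumed coefficient of performance
    electrical_kw = heat_load_kw / cop  # ~13.3 kW of compressor input
    print(f"~{electrical_kw:.1f} kW electrical to remove {heat_load_kw} kW of heat")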