top | item 46880296

hirsin | 26 days ago

Simply put no, 50MW is not the typical hyperscaler cloud size. It's not even the typical single datacenter size.

A single AI rack consumes 60kW, and there is apparently a single DC that alone consumes 650MW.

When Microsoft puts in a DC, the machines are deployed in units of a "stamp", i.e. a couple of racks together. These aren't scaled by dollars or sqft, but by the MW.

And on top of that... That's a bunch of satellites not even trying to crunch data at top speed. Nowhere near the right order of magnitude.
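A quick back-of-the-envelope check on those numbers (a Python sketch; the 60kW rack and 650MW figures are the ones quoted above):

```python
# Scale check using the figures quoted above.
rack_kw = 60    # one AI rack
dc_mw = 650     # the single large datacenter mentioned

racks_per_dc = dc_mw * 1000 / rack_kw
print(f"~{racks_per_dc:,.0f} racks in a {dc_mw}MW facility")

# And 50MW against that single DC:
print(f"50MW is {50 / dc_mw:.1%} of one {dc_mw}MW datacenter")
```

So 50MW is well under a tenth of that one facility, let alone a hyperscaler's fleet.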


pera | 25 days ago

New GPU-dense racks are going up to 300kW, but I believe the norm at the moment for hyperscalers is somewhere around 150kW. Can someone confirm?

The energy demand of these DCs is monstrous; I seriously can't imagine something similar being deployed in orbit...

stonogo | 25 days ago

Most of the OEMs are past 300kW racks, planning on 600kW racks within a year or two, with realistic plans to hit a megawatt.

synctext | 25 days ago

Could this be about bypassing government regulation and taxation? Silk Road only needed a tiny server, not 150kW.

The Outer Space Treaty (1967) has a loophole. If you launch from international waters (as SpaceX has planned) and the equipment is not owned by a US company or other legal entity, there is significant legal ambiguity. This is Dogecoin with AI. Exploiting this accountability gap and creating a Grok AI plus free-speech platform in space sounds like a typical Elon endeavour.

tensor | 25 days ago

How much of that power is radiated as the radio waves it sends?

hirsin | 25 days ago

Good point - the comms satellites are not even "keeping" some of the energy, while a DC would. I _am_ now curious about the connection between bandwidth and wattage, but I'm willing to bet that less than 1% of the total energy dissipation on one of these DC satellites would be in the form of satellite-to-earth broadcast (keeping in mind that s2s broadcast would presumably be something of a wash).
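A rough sanity check on that bet (every number below is an illustrative assumption, not a published spec; 20% is a typical RF power-amplifier efficiency figure):

```python
# Hypothetical comms-satellite power budget (illustrative assumptions only).
total_bus_power_w = 5_000   # assumed total electrical power of the satellite
tx_amp_draw_w = 250         # assumed DC draw of the downlink power amplifiers
pa_efficiency = 0.20        # assumed RF power-amplifier efficiency

radiated_rf_w = tx_amp_draw_w * pa_efficiency           # power that leaves as RF
fraction_of_budget = radiated_rf_w / total_bus_power_w  # the rest becomes heat
print(f"~{radiated_rf_w:.0f}W radiated, {fraction_of_budget:.1%} of the budget")
```

Under those assumptions the actually-radiated RF is around 1% of the bus power; nearly everything else ends up as heat on the spacecraft.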

mlyle | 25 days ago

I doubt even half the power goes to the transmitter, and radio efficiency is poor -- 20% might be a good starting point.

adgjlsfhk1 | 25 days ago

The majority is likely in radio waves and the inter-satellite laser communication.

nosianu | 25 days ago

The radio receiver and transmitter are additional hardware and energy consumption. They add to the heat, not subtract from it.

mike_hearn | 25 days ago

But the focus on building giant monolithic datacenters comes from the practicalities of ground based construction. There are huge overheads involved with obtaining permits, grid connections, leveling land, pouring concrete foundations, building roads and increasingly often now, building a power plant on site. So it makes sense to amortize these overheads by building massive facilities, which is why they get so big.

That doesn't mean you need a gigawatt of power before achieving anything useful. For training, maybe, but not for inference which scales horizontally.

With satellites you need an orbital slot and launch time, and I honestly don't know how hard those are to get, but space is pretty big and the only reason for denying them would be safety. Once those are obtained, you can make satellite inferencing cubes in a factory and just keep launching them on a cadence.

I also strongly suspect, given some background reading, that radiator tech is very far from optimized. Most stuff we put into space so far just doesn't have big cooling needs, so there wasn't a market for advanced space radiator tech. If now there is, there's probably a lot of low hanging fruit (droplet radiators maybe).

leoedin | 25 days ago

But why would you?

Space has some huge downsides:

* Everything is being irradiated all the time. Things need to be radiation hardened or shielded.

* Putting even 1kg into space takes vast amounts of energy. A Falcon 9 burns 260 MJ of fuel per kg into LEO. I imagine the embodied energy in the disposable rocket and liquid oxygen make the total number 2-3x that at least.

* Cooling is a nightmare. The side of the satellite in the sun is very hot, while the side facing space is incredibly cold. No fans or heat sinks - all the heat has to be conducted from the electronics and radiated into space.

* Orbit keeping requires continuous effort. You need some sort of hypergolic rocket, which has the nasty effect of coating all your stuff in horrible corrosive chemicals.

* You can't fix anything. Even a tiny failure means writing off the entire system.

* Everything has to be able to operate in a vacuum. No electrolytic capacitors for you!

So I guess the question is - why bother? The only benefit I can think of is very short "days" and "nights" - so you don't need as much solar or as big a battery to power the thing. But that benefit is surely outweighed by the fact you have to blast it all into space? Why not just overbuild the solar and batteries on earth?
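To put the cooling bullet in numbers, here is a minimal Stefan-Boltzmann sketch; the radiator temperature and emissivity are assumptions:

```python
# Radiator area to reject waste heat purely by radiation (Stefan-Boltzmann).
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
heat_w = 150_000     # one ~150kW rack's worth of waste heat
t_radiator_k = 320   # assumed radiator surface temperature (~47 C)
emissivity = 0.90    # assumed high-emissivity coating

# Idealized: radiating one face to deep space, ignoring solar/Earth heating.
area_m2 = heat_w / (emissivity * SIGMA * t_radiator_k**4)
print(f"~{area_m2:.0f} m^2 of radiator per {heat_w // 1000}kW rack")
```

That works out to a few hundred square metres per rack. Real designs radiate from both faces but also take solar loading, so it's only an order-of-magnitude figure; either way, a gigawatt-class facility implies radiators measured in square kilometres.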

cogman10 | 25 days ago

> I also strongly suspect, given some background reading, that radiator tech is very far from optimized. Most stuff we put into space so far just doesn't have big cooling needs, so there wasn't a market for advanced space radiator tech. If now there is, there's probably a lot of low hanging fruit (droplet radiators maybe).

You'd be wrong. There's a huge incentive to optimize radiator tech because of things like the International Space Station and Mir. It's a huge part of any deployment because life has pretty narrow thermal bands. The added cost of deploying that tech also incentivizes hyper-optimization.

Making bigger structures doesn't make that problem easier.

Fun fact: heat pipes were invented in the 60s (at Los Alamos, and quickly adopted by NASA) to help address this very problem.

skywhopper | 25 days ago

All of those “huge overheads” you cite are nothing compared to the huge overhead of building and fueling rockets to launch the vibration- and radiation-hardened versions of the solar panels and GPUs and cooling equipment that you could use much cheaper versions of on Earth. How many permitted, regulated launches would it take to get around the one-time permitting and predictable regulation of a ground-based datacenter?

Are Earth-based datacenters actually bound by some bottleneck that space-based datacenters would not be? Grid connections or on-site power plants take time to build, yes. How long does it take to build the rocket fleet required to launch a space “datacenter” in a reasonable time window?
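One way to frame that question is a launch-count estimate; every figure below is an assumption for illustration (the per-MW mass in particular is a guess covering compute, solar, and radiators):

```python
# How many launches for a gigawatt-class orbital "datacenter"?
payload_per_launch_kg = 100_000  # optimistic Starship-class payload to LEO
kg_per_mw = 20_000               # assumed mass per MW: compute + solar + radiators
target_mw = 1_000                # gigawatt-class facility

total_kg = target_mw * kg_per_mw
launches = total_kg / payload_per_launch_kg
print(f"{total_kg / 1000:,.0f} tonnes -> ~{launches:.0f} launches")
```

Even under these generous assumptions that's hundreds of launches, each of which is itself permitted and regulated.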

This is not a problem that needs to be solved. Certainly not worth investing billions in, and definitely not when run by the biggest scam artist of the 21st century.

thephyber | 25 days ago

There is a lot of hand-waving away of the orders of magnitude more manufacturing, more launches, and more satellites that have to navigate around each other.

We still don't have any plan I've heard of for avoiding a cascade of space debris when satellites collide and turn into lots of fast-moving shrapnel. Yes, space is big, but low Earth orbit is a very tiny subset of all space.

The amount of propellant satellites carry before they become unable to maneuver is relatively small, and the more satellite traffic there is, the faster each satellite will exhaust it.

lloeki | 25 days ago

For another reference, the Nvidia-OpenAI deal is reportedly 10GW worth of DC.
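For scale, a sketch comparing that deal to the other figures quoted in this thread (the 650MW and 60kW numbers come from the top comment):

```python
# How the reported Nvidia-OpenAI deal compares to the thread's other figures.
deal_gw = 10
largest_dc_mw = 650  # the single large DC mentioned upthread
rack_kw = 60         # one AI rack, per the top comment

print(f"{deal_gw * 1000 / largest_dc_mw:.0f}x the largest single DC")
print(f"{deal_gw * 1e6 / rack_kw:,.0f} racks at {rack_kw}kW each")
```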