gpt5 | 26 days ago
A Starlink satellite uses about 5 kW of solar power. It needs to dissipate roughly that amount (plus the sunlight absorbed by its body) just to operate. There are around 10,000 Starlink satellites already in orbit, which means the Starlink constellation is already effectively equivalent to a 50 MW facility, in a rough, back-of-the-envelope feasibility sense.
Isn't 50 MW already, by itself, comparable to the energy consumption of a typical hyperscaler cloud?
Why is Starlink possible while other orbital computation is not? Starlink is also already financially viable. Wouldn't it also become significantly cheaper as orbital launch vehicles improve?
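The comment's 50 MW figure can be sanity-checked directly; the per-satellite power and satellite count below are the comment's own assumptions, not official SpaceX numbers:

```python
# Back-of-the-envelope check of the 50 MW constellation figure.
# Assumed inputs (from the comment above): ~5 kW of solar power
# per satellite, ~10,000 satellites in orbit.
per_sat_kw = 5
n_sats = 10_000

constellation_mw = per_sat_kw * n_sats / 1000  # kW -> MW
print(f"Constellation power: {constellation_mw:.0f} MW")  # 50 MW
```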
hirsin|26 days ago
A single AI rack consumes 60kW, and there is apparently a single DC that alone consumes 650MW.
When Microsoft puts in a DC, the machines are done in units of a "stamp", i.e. a couple of racks together. These aren't scaled by dollars or square feet, but by the MW.
And on top of that... that's a bunch of satellites not even trying to crunch data at top speed. Nowhere near the right order of magnitude.
pera|26 days ago
The energy demand of these DCs is monstrous, I seriously can't imagine something similar being deployed in orbit...
tensor|26 days ago
mike_hearn|26 days ago
That doesn't mean you need a gigawatt of power before achieving anything useful. For training, maybe, but not for inference which scales horizontally.
With satellites you need an orbital slot and launch time, and I honestly don't know how hard those are to get, but space is pretty big and the only reason for denying them would be safety. Once those are obtained, you can make satellite inferencing cubes in a factory and just keep launching them on a cadence.
I also strongly suspect, given some background reading, that radiator tech is very far from optimized. Most stuff we put into space so far just doesn't have big cooling needs, so there wasn't a market for advanced space radiator tech. If now there is, there's probably a lot of low hanging fruit (droplet radiators maybe).
lloeki|26 days ago
kimixa|25 days ago
Distributing useful work over so many small objects is a very hard problem, and not even shown to be possible at useful scales for many of the things AI datacenters are doing today. And that's with direct cables - using wireless communication means even less bandwidth between nodes, more noise as the number of nodes grows, and significantly higher power use and complexity for the communication in the first place.
Building data centres in the middle of the Sahara desert is still much better in pretty much every metric than in space, be it price, performance, maintenance, efficiency, ease of cooling, pollution/"trash" disposal, etc. Even things like communication network connectivity would be easier: for the amount of money this constellation mesh would cost, you could lay new fibre optic cables to build an entire new global network to anywhere on Earth, with new trunk connections to every major hub.
There are advantages to being in space - normally around increased visibility for wireless signals, allowing great distances to be covered at (relatively) low bandwidth. But that comes at an extreme cost. Paying that cost for a use case that simply doesn't gain much from those advantages is nonsense.
sandworm101|25 days ago
This is a pump-and-dump bid for investor money. They will line up to give it to him.
ineedasername|25 days ago
Of course this doesn't solve the myriad problems, but it does put dissipation squarely in the category of "we've solved similar problems". I agree there's still no good reason to actually do this unless there's a use for all that compute out there in orbit, but that too is happening with immense growth and demand expected for increased pharmaceutical research and various manufacturing capabilities that require low/no gravity.
abalone|25 days ago
Space changes this. Laser based optical links offer bandwidth of 100 - 1000 Gbps with much lower power consumption than radio based links. They are more feasible in orbit due to the lack of interference and fogging.
> Building data centres in the middle of the sahara desert is still much better in pretty much every metric
This is not true for the power generation aspect (which is the main motivation for orbital TPUs). Desert solar is a hard problem due to the need for a water supply to keep the panels clear of dust. Also the cooling problem is greatly exacerbated.
unknown|25 days ago
[deleted]
space_fountain|26 days ago
1. The capital costs are higher, you have to expend tons of energy to put it into orbit
2. The maintenance costs are higher because the lifetime of satellites is pretty low
3. Refurbishment is next to impossible
4. Networking is harder, either you are ok with a relatively small datacenter or you have to deal with radio or laser links between satellites
For Starlink this isn't as important. Starlink provides something that can't really be provided any other way, but even so, just the US uses 176 terawatt-hours of electricity per year for data centers, so Starlink is about 1/400th of that, assuming your estimate is accurate (and I'm not sure it is - does it account for the night cycle?)
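The 1/400th ratio follows from converting 50 MW of continuous power into annual energy; the 176 TWh/year figure is taken from the comment above:

```python
# Rough check of the "1/400th" claim: 50 MW running continuously,
# compared with ~176 TWh/year of US data-center electricity use.
hours_per_year = 24 * 365
starlink_twh = 50e6 * hours_per_year / 1e12  # 50 MW -> TWh/year
us_dc_twh = 176  # figure quoted in the comment above

fraction = starlink_twh / us_dc_twh
print(f"{starlink_twh:.2f} TWh/year, or 1/{1 / fraction:.0f} of US DC use")
```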
WillPostForFood|26 days ago
smileeeee|26 days ago
At the end of the day I don't really care either way. It ain't my money, and their money isn't going to get back into the economy by sitting in a brokerage portfolio. To get them to spend money this is as good a way as any other, I guess. At least it helps fund a little spaceflight and satellite R&D on the way.
murderfs|26 days ago
Presumably they're planning on doing in-orbit propellant transfer to reboost the satellites so that they don't have to let their GPUs crash into the ocean...
trhway|26 days ago
Putting 1 kW of solar on land: ~$2K. Putting it into orbit on Starship (current ground-grade heavy solar panels, 40 kg of panel per 4 m² producing 1 kW in space): anywhere between $400 and $4K. Add to that that costs on Earth will only grow, while costs in space will fall.
Ultimately Starship's costs will come down toward the bare cost of fuel + oxidizer - about 20 kg of propellant per 1 kg to LEO, i.e. less than $10/kg - if they manage streamlined operations and high reuse. Yet even at $100/kg, space still comes out ahead of the ground.
And for the cooling that people complain so much about without running the numbers - https://news.ycombinator.com/item?id=46878961
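The $400-$4K range follows from the 40 kg/kW panel mass assumption and the two launch-price endpoints the comment uses ($10/kg optimistic, $100/kg conservative):

```python
# Sketch of the launch-cost arithmetic above. Assumptions taken
# from the comment: 40 kg of panel mass per 1 kW of output, and
# Starship launch prices between $10/kg and $100/kg.
panel_kg_per_kw = 40

for usd_per_kg in (10, 100):
    cost = panel_kg_per_kw * usd_per_kg
    print(f"At ${usd_per_kg}/kg: ${cost} per kW in orbit")
```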
>2. The maintenance costs are higher because the lifetime of satellites is pretty low
It will live out the 3-5 years of a GPU's lifecycle.
JumpCrisscross|26 days ago
Minus one big one: permitting. Every datacentre I know of going up right now is spending 90% of its bullshit budget on battling state and local governments.
tw04|26 days ago
Aurornis|26 days ago
xAI’s first data center buildout was in the 300MW range and their second is in the Gigawatt range. There are planned buildouts from other companies even bigger than that.
So data center buildouts in the AI era need 1-2 orders of magnitude more power and cooling than your 50MW estimate.
Even a single NVL72 rack, just one rack, needs 120kW.
javascriptfan69|26 days ago
Datacenters already exist. Putting datacenters in space does not offer any new capabilities.
_fizz_buzz_|26 days ago
jdhwosnhw|25 days ago
The "+ the sun power on it" part is the majority of the energy. Solar panel efficiency is only about 25-30% at beginning-of-life, and the panels absorb essentially all of the rest of the incident light as heat. So your estimate is off by at least a factor of three.
Also, I'm not sure where you got 5 kW from. The area of the satellite is ~100 m², which means it intercepts over 100 kW of bolometric solar power.
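The intercepted-power and waste-heat figures can be sketched from the ~100 m² area and the ~1.36 kW/m² solar constant; the 28% efficiency is an assumed midpoint of the 25-30% range above, and pointing losses and reflection are ignored:

```python
# Rough version of the estimate above: ~100 m^2 of panel in full
# sun, ~1.36 kW/m^2 solar constant (assumed values, no pointing
# or reflection losses).
area_m2 = 100
solar_constant_kw_m2 = 1.36
efficiency = 0.28  # assumed midpoint of the 25-30% range

intercepted_kw = area_m2 * solar_constant_kw_m2   # ~136 kW hits the panels
electrical_kw = intercepted_kw * efficiency        # ~38 kW becomes electricity
waste_heat_kw = intercepted_kw - electrical_kw     # the rest is absorbed heat
print(f"{intercepted_kw:.0f} kW in, {waste_heat_kw:.0f} kW as heat")
```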
pdpi|26 days ago
0. https://www.arccompute.io/solutions/hardware/gpu-servers/sup...
MadnessASAP|26 days ago
The short answer is that ~100 m² of steel plate at 1400 °C (just below its melting point) will shed ~50 MW of power as black-body radiation.
https://news.ycombinator.com/item?id=46087616#46093316
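The figure follows directly from the Stefan-Boltzmann law, treating the plate as an ideal black body radiating from one side:

```python
# Stefan-Boltzmann check of the claim above: power radiated by
# ~100 m^2 of ideal black body at 1400 C (1673 K), one side,
# ignoring emissivity losses and absorbed sunlight.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
area_m2 = 100
temp_k = 1673  # 1400 C

power_mw = SIGMA * area_m2 * temp_k ** 4 / 1e6
print(f"{power_mw:.0f} MW")  # ~44 MW, roughly the 50 MW quoted
```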
adrian_b|25 days ago
So your huge metal plate, at a temperature electronics can survive (~374 K, about 100 °C), would radiate (1673/374)^4 ≈ 400 times less heat, i.e. only ~125 kW.
In reality, it would radiate much less than that, even if made of copper or silver covered with Vantablack, because the limited thermal conductivity will reduce the temperature for the parts distant from the body.
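The 400x factor is just the T⁴ ratio between the two temperatures; the ~373 K radiator temperature is the comment's assumed electronics-survivable limit:

```python
# The T^4 scaling in the comment above: dropping the radiator
# from 1673 K (1400 C) to ~373 K (100 C, an assumed electronics
# limit) cuts the radiated power by (1673/373)^4.
hot_k, cool_k = 1673, 373

ratio = (hot_k / cool_k) ** 4
power_at_100c_kw = 50_000 / ratio  # starting from the ~50 MW figure
print(f"{ratio:.0f}x less, ~{power_at_100c_kw:.0f} kW")  # ~405x, ~124 kW
```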
ViewTrick1002|26 days ago
gclawes|26 days ago
markhahn|26 days ago
If the current satellite model dissipates 5 kW, you can't just add a GPU (+1 kW). Maybe removing most of the downlink hardware lets you fit two GPUs? So if you had 10K of these, you'd have a pretty high-latency cluster of 20K GPUs.
I'm not saying I'd turn down free access to it, but it's also very cracked. you know, sort of Howard Hughesy.
hackernudes|26 days ago
kristjansson|26 days ago
ErroneousBosh|25 days ago
Is that 5kW of electrical power input at the terminals, or 5kW irradiation onto the panels?
Because that sounds like kind of a lot, for something the size of a fridge.
padjo|26 days ago
antonvs|26 days ago
Aside from the point others have made that 50 MW is small in the context of hyperscalers, if you want to do things like SOTA LLM training, you can't feasibly do it with large numbers of small devices.
Density is key because of latency - you need the nodes to be in close physical proximity to communicate with each other at very high speeds.
For training an LLM, you're ideally going to want individual satellites with power delivery on the order of at least about 20 MW, and that's just for training previous-generation SOTA models. That's nearly 5,000 times more power than a single current Starlink satellite, and nearly 300 times that of the ISS.
You'd need radiator areas in the range of tens of thousands of square meters to handle that. Is it theoretically technically possible? Sure. But it's a long-term project, the kind of thing that Musk will say takes "5 years" that will actually take many decades. And making it economically viable is another story - the OP article points out other issues with that, such as handling hardware upgrades. Starlink's current model relies on many cheap satellites - the equation changes when each one is going to be very, very expensive, large, and difficult to deploy.
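The "tens of thousands of square meters" radiator estimate can be sketched with the Stefan-Boltzmann law; the ~350 K radiator temperature, 0.9 emissivity, and two-sided radiation below are illustrative assumptions, and absorbed sunlight and Earth IR (which would push the area higher) are ignored:

```python
# Rough sizing of the radiator claim above: area needed to reject
# 20 MW from a radiator at an assumed ~350 K, emissivity ~0.9,
# radiating from both sides, ignoring absorbed sunlight/Earth IR.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
power_w = 20e6
temp_k = 350
emissivity = 0.9
sides = 2

flux_w_m2 = emissivity * SIGMA * temp_k ** 4 * sides
area_m2 = power_w / flux_w_m2
print(f"~{area_m2:.0f} m^2")  # ~13,000 m^2, i.e. order 10^4
```

With environmental heat loads and structural margins included, the practical figure lands in the tens-of-thousands range the comment gives.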
whiplash451|25 days ago
A data center is nowhere near that and requires constant physical interventions. How do they suggest to address this?
unknown|25 days ago
[deleted]
PurpleRamen|26 days ago
michaelmrose|26 days ago
adgjlsfhk1|26 days ago
This isn't quite true. It's very possible that the majority of that power goes into the antennas/lasers, which technically means the energy is dissipated, but it never became heat in the first place. Also, 5 kW of solar power likely means only ~3 kW of actual electrical consumption (you over-provision a bit, both for time spent in Earth's shadow and for safety margin).
rootnod3|25 days ago
Sharlin|26 days ago
chairmansteve|25 days ago
A single server in a data center will consume 5-10 kW.
phs318u|26 days ago
thebolt00|26 days ago
cjfd|26 days ago
ndsipa_pomu|25 days ago