lancewiggs | 26 days ago

It's exiting the 5th best social network and the 10th (or worse) best AI company and selling them to a decent company.

It probably increases Elon's share of the combined entity.

It delivers on a promise to investors that he will make money for them, even as the underlying businesses are lousy.


gpt5|26 days ago

I'm confused about the level of conversation here. Can we actually run the math on heat dissipation and feasibility?

A Starlink satellite uses about 5 kW of solar power. It needs to dissipate around that amount (plus the solar heating on it) just to operate. There are around 10,000 Starlink satellites already in orbit, which means the constellation is already effectively a 50-megawatt installation (in a rough, back-of-the-envelope sense).

Isn't 50MW already by itself equivalent to the energy consumption of a typical hyperscaler cloud?

Why is Starlink possible while other computations are not? Starlink is also already financially viable. Wouldn't it also become significantly cheaper as we improve our orbital launch vehicles?
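The back-of-envelope figure above can be written out explicitly. Both inputs are the comment's rough assumptions, not official SpaceX specs:

```python
# Rough constellation power estimate, using the comment's assumed figures:
# ~5 kW of solar power per satellite and ~10,000 satellites in orbit.
watts_per_satellite = 5_000
num_satellites = 10_000

total_mw = watts_per_satellite * num_satellites / 1e6
print(f"Constellation total: {total_mw:.0f} MW")  # 50 MW
```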

hirsin|26 days ago

Simply put, no: 50 MW is not the typical hyperscaler cloud size. It's not even the typical size of a single datacenter.

A single AI rack consumes around 60 kW, and there is reportedly a single DC that alone consumes 650 MW.

When Microsoft puts in a DC, the machines come in units of a "stamp", i.e. a couple of racks together. These aren't scaled by dollars or square feet, but by megawatts.

And on top of that, those satellites aren't even trying to crunch data at top speed. Nowhere near the right order of magnitude.

kimixa|25 days ago

Radiated heat scales with the surface area available to dissipate it, and lots of small satellites have a much higher area-to-power ratio than fewer larger ones. Cooling 10k separate objects is orders of magnitude easier than cooling 10 objects at 1000x the power use each, even if the total power output is the same.

Distributing useful work over so many small objects is a very hard problem, and not even shown to be possible at useful scales for many of the things AI datacenters are doing today. And that's with direct cables - using wireless communication means even less bandwidth between nodes, more noise as the number of nodes grows, and significantly higher power use and complexity for the communication in the first place.

Building data centres in the middle of the Sahara desert is still much better in pretty much every metric than in space, be it price, performance, maintenance, efficiency, ease of cooling, pollution/"trash" disposal, etc. Even communication network connectivity would be easier: for the amounts of money this constellation mesh would cost, you could lay new fibre optic cables to build an entire new global network to anywhere on earth, with new trunk connections to every major hub.

There are advantages to being in space, normally around increased visibility for wireless signals, allowing great distances to be covered at (relatively) low bandwidth. But that comes at an extreme cost. Paying that cost for a use case that simply doesn't get much advantage from those benefits is nonsense.

space_fountain|26 days ago

It's like this: everything about operating a datacenter in space is more difficult than operating one on Earth.

1. The capital costs are higher: you have to expend tons of energy to put it into orbit

2. The maintenance costs are higher because the lifetime of satellites is pretty low

3. Refurbishment is next to impossible

4. Networking is harder, either you are ok with a relatively small datacenter or you have to deal with radio or laser links between satellites

For Starlink this isn't as important: Starlink provides something that can't really be provided any other way. But even so, the US alone uses 176 terawatt-hours per year for data centers, an average draw of roughly 20 GW, so the constellation's 50 MW is about 1/400th of that, assuming your estimate is accurate (and I'm not sure it is; does it account for the night cycle?).
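The 1/400 figure checks out. A quick sketch, taking the 176 TWh/year figure quoted above at face value:

```python
# Convert annual US datacenter energy use (176 TWh, per the comment)
# into an average power draw, then compare the 50 MW constellation figure.
us_dc_twh_per_year = 176
hours_per_year = 8766          # average year, including leap days
avg_draw_w = us_dc_twh_per_year * 1e12 / hours_per_year

fraction = 50e6 / avg_draw_w
print(f"Average US datacenter draw: {avg_draw_w / 1e9:.1f} GW")
print(f"50 MW is about 1/{1 / fraction:.0f} of that")
```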

tw04|26 days ago

Amazon’s new campus in Indiana is expected to use 2.2 GW when complete. 50 MW is nothing, and that’s ignoring the fact that most of that power wouldn't actually be used for compute.

Aurornis|26 days ago

> Isn't 50MW already by itself equivalent to the energy consumption of a typical hyperscaler cloud?

xAI’s first data center buildout was in the 300MW range and their second is in the Gigawatt range. There are planned buildouts from other companies even bigger than that.

So data center buildouts in the AI era need 1-2 orders of magnitude more power and cooling than your 50MW estimate.

Even a single NVL72 rack, just one rack, needs 120kW.

javascriptfan69|26 days ago

Starlink provides a service that couldn't exist without the satellite infrastructure.

Datacenters already exist. Putting datacenters in space does not offer any new capabilities.

jdhwosnhw|25 days ago

> A Starlink satellite uses about 5 kW of solar power. It needs to dissipate around that amount (plus the solar heating on it) just to operate.

The solar-heating part is the majority of the energy. Solar panel efficiency is only about 25-30% at beginning-of-life, while the panels' absorptivity is effectively 100%, so nearly all of the intercepted sunlight ends up as heat. Your estimate is therefore off by at least a factor of three.

Also, I’m not sure where you got 5 kW from. The satellite's panel area is ~100 m², which means it intercepts over 100 kW of bolometric solar power.
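To make the factor concrete. The efficiency here is an assumed round number, not a measured value:

```python
# If the panels deliver 5 kW electrical at ~28% conversion efficiency,
# and essentially all non-converted sunlight is absorbed as heat,
# the thermal load is the full intercepted solar power, not just 5 kW.
electrical_w = 5_000
panel_efficiency = 0.28        # assumed beginning-of-life efficiency

intercepted_w = electrical_w / panel_efficiency
print(f"Intercepted sunlight: {intercepted_w / 1e3:.1f} kW")
print(f"Underestimate factor: {intercepted_w / electrical_w:.1f}x")
```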

gclawes|26 days ago

Starlink satellites also radiate a non-trivial amount of the energy they consume from their phased arrays

markhahn|26 days ago

50 MW is on the small side for an AI cluster - probably less than 50k GPUs.

If the current satellite model dissipates 5 kW, you can't just add a GPU (+1 kW). Maybe removing most of the downlink hardware lets you put in 2 GPUs? So with 10k of these, you'd have a pretty high-latency cluster of 20k GPUs.

I'm not saying I'd turn down free access to it, but it's also very cracked. you know, sort of Howard Hughesy.

kristjansson|26 days ago

50MW might be one aisle of a really dense DC. A single rack might draw 120kW.

ErroneousBosh|25 days ago

> A Starlink satellite uses about 5 kW of solar power

Is that 5kW of electrical power input at the terminals, or 5kW irradiation onto the panels?

Because that sounds like kind of a lot, for something the size of a fridge.

padjo|25 days ago

Are Starlink satellites in sun-synchronous orbits? Doesn't constant solar heating change the energy balance quite a bit?

antonvs|26 days ago

> Why is Starlink possible while other computations are not?

Aside from the point others have made that 50 MW is small in the context of hyperscalers, if you want to do things like SOTA LLM training, you can't feasibly do it with large numbers of small devices.

Density is key because of latency - you need the nodes to be in close physical proximity to communicate with each other at very high speeds.

For training an LLM, you're ideally going to want individual satellites with power delivery on the order of at least 20 MW, and that's just for training previous-generation SOTA models. That's roughly 4,000 times the power of a single current Starlink satellite, and hundreds of times that of the ISS.

You'd need radiator areas in the range of tens of thousands of square meters to handle that. Is it theoretically technically possible? Sure. But it's a long-term project, the kind of thing that Musk will say takes "5 years" that will actually take many decades. And making it economically viable is another story - the OP article points out other issues with that, such as handling hardware upgrades. Starlink's current model relies on many cheap satellites - the equation changes when each one is going to be very, very expensive, large, and difficult to deploy.
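The "tens of thousands of square meters" figure can be sanity-checked with the Stefan-Boltzmann law. A minimal sketch, assuming a 300 K double-sided radiator with emissivity 0.9 and ignoring absorbed sunlight (all assumed values):

```python
# Radiator area needed to reject 20 MW, via the Stefan-Boltzmann law:
# P = emissivity * sigma * A * T^4.
# Assumptions: emissivity 0.9, radiator at 300 K, both faces radiate,
# incoming solar flux ignored.
SIGMA = 5.670e-8               # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiator_area_m2(power_w, temp_k=300.0, emissivity=0.9, faces=2):
    return power_w / (faces * emissivity * SIGMA * temp_k**4)

print(f"20 MW load: {radiator_area_m2(20e6):,.0f} m^2 of radiator")
# roughly 24,000 m^2 under these assumptions
```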

whiplash451|25 days ago

Not related to heat, but a comms satellite is built from extremely durable hardware and software that's been battle-tested to run flawlessly for years, with massive MTBF numbers.

A data center is nowhere near that and requires constant physical intervention. How do they propose to address this?

PurpleRamen|25 days ago

A Starlink satellite is mainly just receiving and sending data, the bare minimum of a data-center satellite's abilities; everything else comes on top and would be the real power drain.

michaelmrose|25 days ago

Why would anyone think the unit cost would be competitive with cheap power and land on earth? If that doesn't make sense, how could anything else?

adgjlsfhk1|26 days ago

> A Starlink satellite uses about 5 kW of solar power. It needs to dissipate around that amount (plus the solar heating on it) just to operate.

This isn't quite true. It's very possible that the majority of that power goes into the antennas/lasers, which technically means the energy is being dissipated, but it never became heat in the first place. Also, 5 kW of solar power likely means only ~3 kW of actual electrical consumption (you over-provision a bit, both for when you're behind the Earth and just for safety margin).

rootnod3|25 days ago

Forget heat. Replacing disks alone is a deal breaker on that one.

Sharlin|25 days ago

Square–cube law.

chairmansteve|25 days ago

A typical desktop/tower PC will consume 400 watts, so about 12 PCs equal one Starlink satellite.

A single server in a data center will consume 5-10 kW.

phs318u|26 days ago

Because 10K satellites have a far greater combined surface area than a single space-borne DC would. Stefan-Boltzmann law: the ability to radiate heat scales linearly with surface area (and with the fourth power of temperature).

cjfd|25 days ago

Sure, we can run the math on heat dissipation. The Stefan-Boltzmann law is free and open source, and its application is high-school-level physics. You talk about 50 MW; you are going to need a lot of surface area to radiate that away at anywhere close to reasonable temperatures.
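Running that math for the thread's 50 MW figure, assuming an ideal single-sided black-body radiator and ignoring solar input (both simplifying assumptions):

```python
# Radiator area needed for 50 MW at a few plausible temperatures,
# from P = sigma * A * T^4 (ideal emissivity of 1, single-sided,
# incoming sunlight ignored).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

for temp_k in (280, 300, 330):
    area_m2 = 50e6 / (SIGMA * temp_k**4)
    print(f"{temp_k} K: {area_m2:,.0f} m^2")
```

At 300 K this comes out to roughly 100,000 m², i.e. on the order of a dozen football fields of radiator.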

TurdF3rguson|26 days ago

> 10th (or worse) best AI company

You might only care about coding models, but text is dominating market share right now, and Grok is the #2 model for that in Arena rankings.

mbesto|25 days ago

Arena rankings, lol.

Openrouter is a decent proxy for real world use and Grok is currently 8% of the market: https://openrouter.ai/rankings (and is less than 7% of TypeScript programming)

adventured|26 days ago

Grok is losing pretty spectacularly on the user / subscriber side of things.

They have no path to paying for their existence unless they drastically increase usage. There aren't going to be very many big winners in this segment, and xAI's expenses are really, really big.

ojbyrne|26 days ago

Plus government backstop. The federal government (especially the current one) is not going to let SpaceX fail.

mullingitover|26 days ago

Maybe not, but they might force it to sell at fire sale prices to another aerospace company that doesn't have the baggage.

stogot|26 days ago

xAI includes twitter? I thought twitter was just X?

7bees|26 days ago

xAI acquired twitter in 2025 as part of Musk's financial shell game (probably the same game he is playing with SpaceX/xAI now).

Vaslo|25 days ago

Sounds like Elon hurt someone’s feelings

chairmansteve|25 days ago

Elon's always looking for another Brooklyn Bridge to sell to the rubes...