I asked Google for more information about AI data centers in space. This was the first sentence: 'AI data centers are being developed in space to handle the massive energy demands of AI, using solar power and the vacuum of space for cooling.'
> After laughing at "the vacuum of space for cooling" I closed the page because there was nothing serious there. Basic high school physics student would be laughing at that sentence.
Heat conduction requires a medium, but radiation works perfectly fine in a vacuum; otherwise the Sun wouldn't be able to heat the Earth. The problem for spacecraft is that you're limited by how much IR radiation is passively emitted from your heat sinks; you can't actively expel heat any faster.
There is some medium in low Earth orbit. Not all vacuums are created equal. However, LEO vacuum is still very, very sparse compared to the air and water we use for cooling systems.
I wonder if there should be levels of "in theory". Yes, theoretically black-body radiation exists, and stuff does cool down to near the background-radiation temperature that way. But the next level is theoretical implementation: actually moving the heat away from the source, and so on. Maybe this could be the spherical-cow step...
Reminds me of the hyperloop. Well yes, things in a vacuum tube go fast. Now, do enough things go fast enough for it to make any sense...
I mean, you totally can radiate excess heat energy on Earth, but your comment implies that the parent's idea of radiating off excess "energy", specifically HEAT energy, in space is possible, which it isn't.
You can radiate excess energy for sure, but you'd first have to convert it away from heat energy into light or radio waves or similar.
I don't think we even have that tech at this point in time, and neither do we have any concepts how this could be done in theory.
Yes. And it's an absolutely terrible way to get rid of heat. Cooling in space is a major problem because the actually effective ways to do it are not available.
There's no air, just a negligible thermal medium to convect heat away. The only ways heat leaves are convection into the extremely sparse atmosphere of low Earth orbit (on the order of tens of atoms per cubic millimeter at ISS altitudes) and thermal radiation, both of which are much, much slower than convection with water or air.
Space stations need enormous radiator panels to dissipate the heat from the onboard computers and the body heat of a few humans. Cooling an entire data center would require utterly colossal radiator panels.
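To put rough numbers on those radiator panels, here's a back-of-envelope sketch using the Stefan-Boltzmann law. The load, radiator temperature, and emissivity below are assumptions for illustration, and absorbed sunlight is ignored entirely, so real panels would need to be larger.

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
# Assumed numbers: radiator at 300 K, emissivity 0.9, radiating from
# both faces, absorbed sunlight ignored.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 * K^4)

def radiator_area_m2(power_w, temp_k=300.0, emissivity=0.9, faces=2):
    """Panel area needed to radiate `power_w` watts at `temp_k`."""
    flux_per_face = emissivity * SIGMA * temp_k**4  # W/m^2
    return power_w / (faces * flux_per_face)

print(radiator_area_m2(1e6))  # ~1210 m^2 for a mere 1 MW of IT load
```

For scale, a modest terrestrial data center draws tens of megawatts, so you'd be flying tens of thousands of square meters of panel.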
If you would kindly consult your Human HR Universal Handbook (2025 Edition) and navigate to section 226.8.2F, you’ll be gently reminded that it’s the responsibility of any and all employees to train their replacements.
Typically, these sorts of things are located in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard'.
Please consult your Human HR Universal Handbook (2025 Edition) on how to request a new copy of the Human HR Universal Handbook (2025 Edition). I believe it's in Volume III Section 9912.64.1 or thereabouts.
I don't agree with the logic that "something is hard/can't be done right now" is equivalent to "this is a terrible idea and won't work."
There are dozens of companies solving each problem outlined here; if we never attempt the 'hard' thing we will never progress. The author could have easily taken a tone of 'these are all the things that are hard that we will need to solve first' but actively chose to take the 'catastrophically bad idea' angle.
From a more positive angle, I'm a big fan of Northwood Space and they're tackling the 'Communications' problem outlined in this article pretty well.
Always remember the magic words: dual use technology. The people pushing these aren't saying to you that they want to build data centers in space because conventional data centers are at huge risk of getting bombed by foreign nations or eventually getting smashed by angry mobs. But you can bet they're saying that to the people with the dual-use technology money bag. Or even better, let them draw that conclusion themselves, to make them think it was their idea - that also has the advantage of deniability when it turns out data centers in space was a terrible solution to the problem.
At this point I wouldn't be surprised if a non-zero number of pitch meetings start with, "in order to not disrupt your life too much as the mobs of the starving and displaced beat down your door..."
It is far easier to build them in remote places or in bunkers (or both). Even the middle of the ocean would make more sense and provide better cooling (see Microsoft's attempt at that).
So many ideas involving AI just seem to be built off of sci-fi (not in a good way), including this one. And like sci-fi, few practical considerations are made.
Sci-fi isn't even really about the tech. It's about what happens to us, humans, when the tech changes in dramatic ways. Sci-fi authors dream up types of technology that create new social orders, factions, rifts, types of interpersonal relationships, types of fascism, where the unforeseen consequences of human ingenuity hoist us by our collective petard.
But these buffoons only see the blinky shiny and completely miss the point of the stories. They have a child's view of SF, the way that men in their teens and 20s thought they were supposed to be like Tyler Durden.
As someone with a similar background to the writer of this post (I did avionics work for NASA before moving into more “traditional” software engineering), this post does a great job of summing up my thoughts on why space-based data centers won’t work. The SEU issues were my first thought, followed by the thermal concerns, and both are addressed here fantastically.
On the SEU issue I’ll add in that even in LEO you can still get SEUs - the ISS is in LEO and gets SEUs on occasion. There’s also the South Atlantic Anomaly where spacecraft in LEO see a higher number of SEUs.
The only advantage I can come up with is the background temperature being much colder than Earth surface. If you ignored the capex cost to get this launched and running in orbit, could the cooling cost be smaller? Maybe that's the gimmick being used to sell the idea. "Yes it costs more upfront but then the 40% cooling bill goes away... breakeven in X years"
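That pitch is easy to put in toy form. Every dollar figure here is hypothetical, purely to show the shape of the "breakeven in X years" argument:

```python
# Toy breakeven model for "pay launch capex up front, save the cooling bill".
# Every number here is hypothetical, just to show the shape of the pitch.
def breakeven_years(extra_capex, annual_opex, cooling_fraction=0.4):
    """Years until the saved cooling spend repays the launch premium."""
    return extra_capex / (annual_opex * cooling_fraction)

# e.g. a $500M launch premium vs $50M/yr opex, 40% of which is cooling
print(breakeven_years(500e6, 50e6))  # 25.0 years
```

Even with generous assumptions, the payback period tends to exceed the hardware's useful life, which is the catch in the pitch.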
The only legit thing I can see this being used for is redundant archival storage, or just general research into hardening equipment against radiation or microgravity (e.g. for liquid cooling). But anything that generates significant amounts of heat seems like it'd be a huge problem.
Then again, there's lots of space in space; perhaps it's possible to isolate racks/aisles into their own individual satellites, each with massive radiant heat-shedding panels? It's a problem space that would be very interesting to try to solve, but ultimately I agree with OP when we come back around to "But, why?" Research for the sake of research is a valid answer, but "For prod"? I don't see it.
Google’s paper [1] does talk about radiation hardening and thermal management. Maybe their ideas are naive and it’s a bad paper? I’m not an expert so I couldn’t tell from a brief skim.
It does sound to me like other concepts that Google has explored and shelved, like building data centers out of shipping container sized units and building data centers underwater.
If you think about it, all the existing data centers are in space already. They're just attached to a big ball of rock, water, and air that acts as a support system for them, simplifying cooling and radiation protection.
If humans are going to expand beyond the Earth, we'll certainly need to get much better at building and maintaining things in space, but we don't need to put data centers in space just to support people stuck on the ground.
The one thing that space has going for it is space. You could have way bigger datacenters than on Earth and just leave them there, assuming Starship makes it cheap enough to get them there. I think it would maybe make sense if two things held:
- We are sure we will need a lot of GPUs for the next 30-40 years.
- We can make the solar panels + cooling + GPUs have a great life expectancy, so that we can just leave them up there and accumulate them.
Latency-wise it seems okay for LLM training to put them higher than Starlink, to make them last longer and avoid deceleration from the atmosphere. And for inference, well, if the infra can be amortized over decades then it might make the inference price cheap enough to endure the additional latency.
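For a sense of the latency numbers involved, here's a minimal light-time sketch. It's the straight-up-and-back best case, ignoring routing, queuing, and inter-satellite hops, so real latencies would be higher:

```python
# Best-case one-hop light-time from orbital altitude (straight up and back,
# ignoring ground routing, queuing, and inter-satellite hops).
C = 299_792_458.0  # speed of light, m/s

def round_trip_ms(altitude_km):
    """Ground -> satellite -> ground round trip in milliseconds."""
    return 2 * altitude_km * 1000.0 / C * 1000.0

print(round_trip_ms(550))    # ~3.7 ms (Starlink-ish LEO)
print(round_trip_ms(1200))   # ~8 ms  (above most atmospheric drag)
print(round_trip_ms(35786))  # ~239 ms (geostationary)
```

So going somewhat higher than Starlink costs single-digit milliseconds, which is plausibly fine for training jobs; geostationary is where interactive inference starts to hurt.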
Concerning communication, SpaceX already has inter-satellite laser comms on Starlink, I think, or at least a prototype.
You can't just "leave them there" though. They orbit at high speed, which effectively means they take up vastly more space, with other objects orbiting at high speed intersecting those orbits. The most useful orbits are relatively narrow bands shared with a lot of other satellites and a fair amount of debris, and orbits tend to decay over time (a problem in low Earth orbit because they'll decay all the way into the atmosphere, and a problem in geostationary orbit because you'll lose the stationary position you need for maintaining comms links). This is solvable with propulsion, but that entails bringing the propellant with you and an end-of-life problem (or an expensive refuelling operation) when it runs out. The cost of maintaining real estate in space is vastly more than outright owning land.
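The propellant point can be sketched with the Tsiolkovsky rocket equation. The delta-v budget, specific impulse, and lifetime below are assumptions for illustration, not figures from any real design:

```python
# Station-keeping propellant as a fraction of wet mass, via the Tsiolkovsky
# rocket equation. The delta-v budget, Isp, and lifetime are assumptions
# for illustration, not figures from any real design.
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v_m_s, isp_s):
    """Fraction of initial (wet) mass that must be propellant."""
    return 1.0 - math.exp(-delta_v_m_s / (isp_s * G0))

dv = 50.0 * 10  # assume 50 m/s/yr of drag makeup over a 10-year life
print(propellant_fraction(dv, 1500))  # ~0.033 with electric propulsion
```

A few percent of wet mass sounds small until you multiply it by the launch price per kilogram and by every satellite in the constellation, on top of the mass of the panels themselves.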
Similarly, making stuff have a great life expectancy is much more expensive than optimizing it for cost and operational requirements and keeping it somewhere you can replace individual components as and when they fail; and it's also much easier to maximise life expectancy somewhere bombarded by considerably less radiation.
There is lots and lots and lots of space on Earth where hardly anyone is living. Cheap rural areas can support extremely large datacenters, limited only by availability of utilities and workers.
We also have to build a lot more solar and nuclear in addition to the datacenters themselves, which we need to do anyway, but it would compound the land we use for energy production.
Yet a colossal number of servers on satellites would require the same energy-production facilities to be shipped into orbit (and to receive regular maintenance in orbit whenever they fail), which requires loads of land for launch facilities as well as processing for fuel and other consumable resources. Solar might be somewhat more efficient, but not nearly so much so as to make up for the added difficulty in cooling. One could maybe postulate asteroid mining and space manufacturing to reduce the total delta-v requirement per satellite-year, but missions to asteroids have fuel requirements of their own.
If anything, I'd expect large-scale Mars datacenters before large-scale space datacenters, if we can find viable resources there.
It makes sense. I would be curious to see the cost computations done by the different space-GPU startups and Big Tech; I wonder how they're getting a cheaper cost, or maybe it's just marketing.
Space is not much of an issue for datacenters. For one thing, compute density is growing; it's not uncommon for a datacenter to be capacity limited by power and/or cooling before space becomes an issue; especially for older datacenters.
There are plenty of data centers in urban centers; most major internet exchanges have their core in a skyscraper in a significant downtown, and there will almost always be several floors of colospace surrounding that, and typically in neighboring buildings as well. But when that is too expensive, it's almost always the case that there are satellite DCs in the surrounding suburbs. Running fiber out to the warehouse district isn't too expensive, especially compared to putting things in orbit; and terrestrial power delivery has got to be a lot less expensive and more reliable too.
According to a quick search, Starlink has one 100 Gbps space laser on equipped satellites; that's peanuts for terrestrial equipment.
Falcon Heavy is only $1,500/kg to LEO. This rate is considerably undercut here on Earth by me, a weaselly little nerd, who will move a kilogram in exchange for a pat on the head (if your praise is desirable) or up to tens of dollars (if it isn't).
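Taking that $1,500/kg figure at face value, the lift cost for ordinary server hardware is easy to ballpark. The rack and server masses below are rough assumptions:

```python
# What $1,500/kg to LEO means for ordinary server hardware.
# The rack and server-count figures are rough assumptions.
LAUNCH_COST_PER_KG = 1500  # Falcon Heavy figure quoted above, $/kg

def launch_cost_usd(mass_kg):
    """Launch cost in dollars for a given payload mass."""
    return mass_kg * LAUNCH_COST_PER_KG

rack_kg = 1000  # a fully loaded rack is roughly a metric ton
print(launch_cost_usd(rack_kg))       # 1500000 -> $1.5M just to lift one rack
print(launch_cost_usd(rack_kg) / 40)  # 37500.0 -> per server, at 40 per rack
```

And that's before radiators, solar arrays, structure, and propellant, which could easily multiply the per-server payload mass several times over.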
Launching a datacenter like that carries an absurd cost even with Starship type launchers. Unless TSMC moves its production to LEO it's a joke of a proposal.
Underwater [0] is the obvious choice for both space and cooling. Seal the thing and chuck it next to an internet backbone cable.
> More than half the world’s population lives within 120 miles of the coast. By putting datacenters underwater near coastal cities, data would have a short distance to travel
> Among the components crated up and sent to Redmond are a handful of failed servers and related cables. The researchers think this hardware will help them understand why the servers in the underwater datacenter are eight times more reliable than those on land.
I agree with most of this post and think the problems are harder than the proponents are making them seem.
But, 1) literally the smartest people and AI in the world will be working on this, and 2) man, I want to see us get to a Type II civilisation bad.
The layout of this blog post is also very interesting: it presents a bunch of very hard problems to solve, and funnily enough the last one has recently been solved with Starlink. So we can approach this problem; it requires great engineering, but it's possible. Maybe it's as complicated as CERN's LHC, but we have one of those.
Next up, then, is the strong "why". When you're in space, if you set the cost of electricity to zero, the equation gets massively skewed.
Thermal is the biggest challenge, but if you have unlimited electricity, lots of stuff becomes possible: Fluorinert cooling, piezoelectric pumps, and dual/multi-stage cooling loops with step-ups. We can put liquid cooling with piezos in phones now, so that technology is moving in the right direction.
For a thought experiment: if launch costs were $0/kg, would this be possible? If the answer's yes, then at some point above $0/kg it becomes uneconomical; the challenge is then to beat that number.
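That thought experiment can be phrased as solving for the breakeven launch price: the $/kg at which the orbital option's lifetime cost matches the terrestrial alternative. All inputs here are hypothetical placeholders:

```python
# Solve for the breakeven launch price: the $/kg at which orbital lifetime
# cost matches the terrestrial alternative. All inputs are hypothetical.
def max_viable_launch_price(terrestrial_tco, orbital_non_launch_tco, mass_kg):
    """Highest $/kg at which the orbital option still breaks even."""
    headroom = terrestrial_tco - orbital_non_launch_tco
    return max(headroom / mass_kg, 0.0)

# e.g. $100M terrestrial TCO, $70M orbital non-launch TCO, 100 t to lift
print(max_viable_launch_price(100e6, 70e6, 100_000))  # 300.0 $/kg
```

Note the threshold can be zero: if the orbital option's non-launch costs alone exceed the terrestrial total, no launch price rescues it.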
None of these problems seem intractable, just really hard and probably not being solved soon, but one has to start somewhere... so at least the billionaires will fund some scientists and engineers who will do that work?
One thing I haven't seen talked about at all: how quickly would space heat up?
I presume Earth's gravity largely keeps the exosphere around it, with some modest fractional percentage lost year by year. There is a colossal, vast volume out there! But given that there's so little matter up in space, what, if any, temperature rise would we expect from, say, a constant 1 TW of heat being added?
What about on the Moon? My understanding is that heat is the killer. There you could sink pipes into the surface and use that as a heat sink. There are “peaks of eternal light” near the poles where you could get 24/7 solar power.
Latency becomes high but you send large batches of work.
Probably not at all economical compared to anywhere on Earth, but the physics works better than in orbit, where you need giant heat sinks.
Not if you bury it in regolith. That’s an idea for a Lunar base too. The design is called “Hobbit holes.” Bury the occupied structures in piles of basically any local mass you can bury them in.
It’s another huge problem for orbit though. Shielding would add a ton of mass and destroy the economics.
You'd have most of the problems of building in space, an abrasive quasi-atmosphere of dust, half a month of darkness every month, and not as good of a heat sink as the Earth's atmosphere.
I had this same thought and mentioned it on an ArsTechnica forum. There was a reply suggesting that lunar regolith wouldn't be a good heat sink, and a bit of googling makes me think this is probably true.
That said, anything has to be better than almost literally nothing, so I'm still holding out for datacenters on the moon.
Regardless of how terrible an idea it is, I wouldn't mind some billionaires funding R&D that advances the state of the art in thermal management in space.
Risky/untried things aren't dumb because they're hard, they're dumb when they're more expensive/harder than cheaper/easier alternatives that already exist that do the same thing.
"Mind-bogglingly poorly thought out to the degree of a cynical money-grubbing scheme worthy of the finest Cambodian slave camp" was taken, and is disrespectful to the hard work and education of said slave camp's operators.
Additionally, their distributions were different. People who read Dijkstra circa 1968 started using the phrase in their own publications within a decade, whereas people who read Viorst (or had it read to them) in 1972 and following years had at least a few decades of further delay before publishing anything using the corresponding phrase.
Except you don't build a data center; you add a GPU to an individual Starlink node. If you can do that a couple hundred or thousand times, you've got a lot of compute in space. The next question is: how would you redesign compute around your distributed power and cooling profiles?
The article doesn’t talk about the actual engineering challenges. (Such as scaling down the radiative cooling design, matching compute node to the maximum feasible power profile, etc)
I’m not arguing it’ll be easy or will ultimately work, but articles like this are unhelpful because they don’t address the fundamental insight being proposed.