
The 5 biggest obstacles to AI data centers in space


However you feel about artificial intelligence (AI) — and, in particular, about the large language models and chatbots powered by it — the reality is that humanity is currently building out the infrastructure to support it. This includes large networks of power-hungry, water-intensive data centers whose demands often conflict with the electricity and water needs of the people who live nearby. It’s because of these concerns that some have floated the idea of AI data centers in space, with one company, SpaceX, recently announcing plans to build a literal megaconstellation of one million satellites to further that ambition.

Is this an example of an emerging technology that could provide an off-world solution to the problem of competing demands for limited resources? Or is it, like the hyperloop, an example of grift: a concept that isn’t exactly physically impossible, but that is rendered so impractical by the actual physical constraints of the endeavor that it cannot possibly materialize as advertised? It turns out that there are several challenges to building a functional network of AI data centers in space, and they come on several fronts: economic, engineering-related, and imposed by the laws of physics themselves.

Of the five big obstacles, three might yet be solved by technological developments. The last two, however, are set by the physics of the Universe itself, and are likely to be dealbreakers for the entire endeavor.

This image shows the parabola-like trajectory trail left by a rocket after launch. Currently, rocket launches are the only way to get large-mass payloads above Earth’s atmosphere and into space. However, launch costs have fallen by nearly a factor of 1000 since the dawn of the space age, with launches now approaching the vaunted $1000-per-kg to low-Earth orbit threshold.

Credit: SpaceX/rawpixel

5.) The prohibitive launch costs of satellites.

Two of the most important technological advances in recent years have come in the field of rocketry:

  • the ability to safely land and reuse rockets for launching satellites into orbit,
  • and the associated lowered launch costs of catapulting mass into low-Earth orbit.

The original launch vehicle for satellites used by the United States was the Vanguard rocket, which worked out to a cost of about $1,000,000 per kilogram. Given that a typical satellite payload is around 800 kg (1760 pounds), that meant prices close to a billion dollars (in today’s dollars) per launch in the early days of spaceflight. Over the subsequent decades, however, those launch costs dropped precipitously.

During the Space Shuttle era, that cost dropped to around $50,000 per kg. In the 2010s, as private companies like Arianespace and SpaceX came to the forefront, launch costs dropped to below $10,000 per kg for the first time. At the present time, with reusable rockets that require little maintenance (other than refueling) between launches, launch costs are finally coming down to approach the $1000 per kg milestone. With vehicles from nation-states like Russia and China, as well as private companies like Rocket Lab, SpaceX, Arianespace and others, launch costs are no longer prohibitive. In fact, they’re likely to continue to drop over time, with the largest launch vehicles capable of carrying the heaviest payloads leading to the lowest overall costs at present.
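
To see how dramatically these per-kilogram figures compound, here is a quick sketch multiplying the per-kg costs quoted above by the ~800 kg example payload. The era labels and figures are rounded illustrations drawn from the article, not precise historical values:

```python
# Illustrative launch-cost arithmetic using the rounded per-kg figures
# quoted in the article; not precise historical accounting.

PAYLOAD_KG = 800  # the article's "typical satellite" example mass

cost_per_kg = {
    "Vanguard era": 1_000_000,   # ~$1M/kg (inflation-adjusted)
    "Space Shuttle": 50_000,     # ~$50k/kg
    "2010s commercial": 10_000,  # first dip below $10k/kg
    "Reusable rockets": 1_000,   # approaching the $1000/kg milestone
}

for era, dollars_per_kg in cost_per_kg.items():
    total = dollars_per_kg * PAYLOAD_KG
    print(f"{era:>18}: ${total:,.0f} per ~{PAYLOAD_KG} kg launch")
```

The Vanguard-era line works out to $800 million per launch, matching the "close to a billion dollars" figure above; the reusable-rocket line comes in a thousand times cheaper.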

This one, although often cited as a drawback to data centers in space, is likely the easiest obstacle to overcome, simply through continued improvements and economies of scale.


NASA astronauts Kathryn Thornton (top) and Thomas Akers (bottom) prepare the Corrective Optics Space Telescope Axial Replacement (COSTAR) package for installation aboard the Hubble Space Telescope on the STS-61 mission in December of 1993. Thornton can be seen anchored to a foot restraint on the end of the Remote Manipulator System arm. Since the end of the Space Shuttle program, very few satellites have been, or even could be, serviced in space with existing technology.

Credit: NASA

4.) The inability to repair or upgrade satellites in space.

This objection, raised by OpenAI CEO Sam Altman, is also largely an economic one. AI data centers must be optimized for computationally intensive tasks, including:

  • the training and running of AI and machine learning models,
  • the parallel processing demands of AI/LLM workloads,
  • high-bandwidth memory alongside GPUs and TPUs,
  • and high-speed interconnects.

These specialized AI data centers not only require specialized computer chips and architectures, but are also extremely power-intensive. Whereas most computers use CPUs (central processing units) to perform the majority of their computations, the specialized GPUs (graphics processing units) and TPUs (tensor processing units) used in AI data centers consume several times as much energy per chip as a standard CPU would.

In fact, as of December 2025, the average AI data center used 60 or more kilowatts of power per server rack, as opposed to 5-10 kilowatts for a standard data center. Although power is an issue (we’ll get to that further down the list), a key concern is maintenance, as components wear out over time and must be replaced.


This photograph shows a high-performance computing center at the University of Stuttgart, which leverages an incredible amount of computing power but requires a tremendous expenditure of electrical power to work. The recent rise of AI data centers, which themselves need an enormous amount of power, adds further to humanity’s need for electricity generation, requiring 5-10 times more power per server rack than traditional high-performance computing applications.
Credit: Julian Herzog/Wikimedia Commons

Here on Earth, there’s a whole pipeline for identifying which components need to be replaced or repaired and when, with continuous monitoring transforming maintenance from a reactive endeavor to a predictive one. Load levels, temperatures, events, alarm histories, battery status and more are all monitored in real-time, and once any measured parameter strays outside of normal levels, the urgency of an onsite intervention can be quantified and acted upon. At least, that’s how we do it now: on the ground.
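
A minimal sketch of what such threshold-based monitoring can look like. The parameter names and normal ranges here are hypothetical illustrations, not any real data center’s telemetry schema:

```python
# Hypothetical telemetry limits for illustration only; real monitoring
# systems track far more parameters, trends, and alarm histories.
NORMAL_RANGES = {
    "rack_temp_c": (15.0, 32.0),        # inlet temperature, Celsius
    "load_kw": (0.0, 60.0),             # per-rack power draw
    "battery_charge_pct": (20.0, 100.0) # backup battery state
}

def flag_anomalies(telemetry: dict) -> list:
    """Return the parameters that have strayed outside normal levels."""
    alerts = []
    for param, value in telemetry.items():
        low, high = NORMAL_RANGES[param]
        if not (low <= value <= high):
            alerts.append(param)
    return alerts

reading = {"rack_temp_c": 41.5, "load_kw": 58.0, "battery_charge_pct": 12.0}
print(flag_anomalies(reading))  # → ['rack_temp_c', 'battery_charge_pct']
```

Once a parameter is flagged, ground-based operators can dispatch a technician; the next paragraphs explain why that last step is exactly what space denies us.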

In space, we can have those same sensors, but problems can only be fixed if they fall into the first of two categories. If a problem can be solved remotely, such as with a system reboot, a new set of software commands, or the automated rerouting of certain hardware systems, then we can take those actions for an AI data center in space just as we could here on Earth. However, if a problem requires someone (whether human or robot) to travel to the site to conduct repairs or component replacements, it cannot be fixed in space.

Therefore, many problems in space, once they appear, will require launching a new (replacement) satellite to take over for the original. That, again, is just an economic argument against AI data centers in space; with large enough numbers of satellites and a sufficient cadence of launches, this obstacle, on its own, may not be cost-prohibitive.


This array of solar panels now forms a solar power plant in Minnesota, alongside an old farm field. With current technology, an array of solar panels more than about 240 square meters (around 2500 square feet) in area would be needed to power a single AI data center server rack, which requires around 60 kW of continuous power. Each year, about 600 GW of new solar power capacity is installed across the globe.

Credit: Courtney Celley/USFWS

3.) Providing power to these satellites.

This one is a bit more problematic, as generating power in space is hard. Here on Earth, we typically derive power from combustion reactions, where the chemical energy stored in molecules is released as bonds are broken and new ones are formed. We can also use nuclear reactions, particularly nuclear fission, to generate energy on Earth. Additional options include wind, solar, and hydroelectric power. But not all of these methods work once you leave the environment of Earth, with its natural resources of water, air, and solid ground. At present, we have only two ways to power devices in space:

  • you can use solar panels, and collect energy from the Sun from within the vacuum of space,
  • or you can use a radioisotope thermoelectric generator (RTG), where radioactive material provides energy as the source material decays.

Because the needed radioactive isotopes are difficult to make, RTGs are typically reserved for deep-space missions: to the gas giant worlds and beyond. For large constellations of satellites, therefore, we have no real alternative to solar power. And because there’s a maximum efficiency that solar panels can reach (currently topping out at about 20%), the only way to get larger and larger amounts of power is to build solar arrays that are large enough to provide that essential energy while remaining within mission feasibility.


The TIGERISS (Trans-Iron Galactic Element Recorder for the International Space Station) mission is designed to measure the abundances of ultra-heavy galactic cosmic rays, including elements all the way up to lead (element 82) on the periodic table. The ISS’s solar panels produce a total of 120 kW, making it the largest and most powerful solar array in space at present.
Credit: NASA/Roskosmos

For the power equivalent of a single AI data center server rack, or 60 kilowatts, that would require about a 16 meter (53 foot) by 16 meter (53 foot) square of solar panels, given the Earth-Sun distance and the efficiency of those panels. For comparison, as you can see above, the International Space Station currently holds the record for the largest solar array in space, with its numerous panels putting out about 120 kilowatts in direct sunlight. That’s only about double the power needed for a single AI data center server rack.

For a large solar array, the panels must be foldable, or able to be rolled up and unfurled; otherwise they wouldn’t fit in the launch vehicle. An array of one million satellites, as proposed, would take the energy capacity of this megaconstellation up to 60 GW, which represents about 3% of the total global power generated by solar. Gathering this enormous amount of energy is a tremendous task, and it’s why a whole slew of new off-grid power plants is currently being built: to provide that needed energy to the AI data centers under construction here on Earth. Because of the rare elements needed to build these panels, and the specialized requirements that apply to solar panels in space, an entire new set of industries would need to be developed and scaled up to make this work: from mining to manufacturing to production to assembly and more. Its feasibility, at present, is highly unclear.
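
The arithmetic behind these figures can be checked in a few lines, assuming the ~1361 W/m² solar constant at Earth’s distance and the article’s quoted ~20% panel efficiency. The resulting ~15 m square is slightly smaller than the 16 m figure above, which presumably includes packing and pointing margins:

```python
import math

SOLAR_CONSTANT = 1361.0   # W/m^2 at the Earth-Sun distance
EFFICIENCY = 0.20         # panel efficiency quoted in the article
RACK_POWER_W = 60_000.0   # one AI server rack, per the article
N_SATELLITES = 1_000_000  # the proposed megaconstellation size

# Area needed so that (area * insolation * efficiency) = rack power
area_m2 = RACK_POWER_W / (SOLAR_CONSTANT * EFFICIENCY)
side_m = math.sqrt(area_m2)
total_gw = N_SATELLITES * RACK_POWER_W / 1e9

print(f"Array per rack: {area_m2:.0f} m^2 (a ~{side_m:.0f} m square)")
print(f"Constellation total: {total_gw:.0f} GW")
```

This reproduces both the per-rack array area (a couple hundred square meters) and the 60 GW total for one million satellites.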


The cosmic ray spectrum of the various atomic nuclei found among them. Of all the cosmic rays that exist, 99% are atomic nuclei. Of those nuclei, approximately 90% are hydrogen, 9% are helium, and the remaining ~1%, combined, is everything else. Iron, a low-abundance but important example of the heavy, high-energy atomic nuclei found, may compose the highest-energy cosmic rays of all: found with up to ~10^11 GeV of energy.
Credit: M. Tanabashi et al. (Particle Data Group), Phys. Rev. D, 2019

2.) Cosmic ray errors.

Now, we begin to move from “technological problems” to “problems that arise from the laws of physics themselves.” From the Sun, from stars, from white dwarfs, neutron stars, black holes, accretion disks, and all other forms of hot, accelerated matter, a set of fast-moving particles emerges: cosmic rays. These cosmic rays are all charged particles: mostly protons, with helium nuclei, electrons, positrons, rare antiprotons, and heavier atomic nuclei making up the rest. They typically move close to the speed of light, but here on Earth, they rarely affect us in our day-to-day activities.

There are two reasons for that:

  • our planet’s magnetic field has a protective effect, mostly funneling these particles away from Earth except in a couple of rings toward the poles, which are the same locations where aurorae frequently appear,
  • and our atmosphere has a large amount of “stopping power” when it comes to these cosmic rays, causing them to produce large particle showers that dissipate the energy, ensuring that the secondary cosmic rays that do hit Earth’s surface are low in energy.

When cosmic rays strike a data-containing electronic storage device and get absorbed, what they most often do is cause a single bit to “flip” inside those electronics, turning a 0 into a 1 or a 1 into a 0 in the process.
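
A couple of lines of code show how a single upset can corrupt a stored result; for example, flipping bit 5 (value 32) of the number 5 turns it into 37. This is a toy illustration, not a model of any particular memory hardware:

```python
def flip_bit(value: int, bit: int) -> int:
    """Flip one bit of a stored integer, as an absorbed cosmic ray might."""
    return value ^ (1 << bit)

result = 2 + 3                   # the correct computation: 5
corrupted = flip_bit(result, 5)  # a single upset in bit 5 (value 32)
print(result, "->", corrupted)   # prints: 5 -> 37
```

Flipping the same bit again restores the original value, but without redundancy there is no way to know which bit, if any, was flipped.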


Here on Earth, we have a planet-wide magnetic field and an atmosphere that extends for hundreds of kilometers to help shield us from cosmic rays. From the environment above Earth’s atmosphere, however, cosmic rays come omnidirectionally and with very large energies. When they strike sensitive materials, such as electronics where bits of information are stored, they lead to bit-flip errors, which can have tremendous downstream consequences. There is no shielding that’s effective against these relativistic particles.

Credit: Osaka Metropolitan University/Kyoto University/Ryuunosuke Takeshige

This can be disastrous for a computational application that requires that all mathematical operations be performed correctly. “Flipping a bit” might not sound like a big error, but it can be the difference between 2+3=5 and 2+3=37, or it can be the difference between your bank account balance being positive or negative. In the context of a large language model, it can be the difference between a correct translation and a mistranslation, between a correct and an incorrect medical diagnosis, or between identifying a snake as venomous or non-venomous. The consequences of such an error, in a computation with no cross-checking in place, can range from unnoticeable to catastrophic.

In space, there is no atmosphere to protect your satellites, and the Earth’s magnetic field offers little protection as well. Unless you plan on having doubly or triply redundant AI data centers in space (doubling or tripling your costs), you won’t have a way to guard against these types of errors; once a bit is flipped, it remains so, and there’s no way to detect it without backup, redundant systems to check it against. While these types of errors are exceedingly rare here on Earth, they happen all the time in space, and no amount of physical shielding or other protective measures can stop them. When they occur in space, and they inevitably will, there needs to be a better defense against the computer returning a wholly incorrect answer because of a bit-flip error.
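
In radiation-tolerant computing, the triple redundancy mentioned above is commonly implemented as triple modular redundancy: run the computation on three copies of the hardware and accept the majority answer. A toy sketch of the voting step:

```python
from collections import Counter

def majority_vote(results):
    """Triple modular redundancy: accept the answer most copies agree on."""
    tally = Counter(results)
    answer, votes = tally.most_common(1)[0]
    if votes <= len(results) // 2:
        raise RuntimeError("No majority: multiple copies corrupted")
    return answer

# Three redundant computations of 2 + 3; one suffers a bit flip to 37.
print(majority_vote([5, 37, 5]))  # the corrupted copy is outvoted: prints 5
```

The catch, as the article notes, is cost: every vote requires multiplying the hardware, the power draw, and the launch mass.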

Cosmic rays are real, and the larger and more complex these spaceborne AI data centers become, the more susceptible to these errors they will be.

Here on Earth, many of our largest power consumers and producers rely on cooling to keep their infrastructure from degrading due to heat damage. This brings an associated need for water, which serves as an efficient cooling medium. It affects nuclear, geothermal, biofuel, and all fossil fuel-generated energy. In space, both air cooling and water cooling are impossible, as only radiation can transport heat away in the vacuum of space.

Credit: Michael Kappel/flickr

1.) The problem of cooling.

This is the big one: the core problem with trying to operate a system in space that requires a large amount of power. How will you keep it from overheating, suffering performance degradation and heat-induced damage, and ultimately melting down or shorting out?

Here on Earth, we have two big things that help us out: the ambient atmosphere, where air conducts heat away from hot sources, sometimes aided by fans and increased airflow, and the copious presence of liquid water on our surface, where water cooling is much more efficient than air cooling.

In fact, if you were to stand outside on a cold day, exposed to the air, you’d lose body heat relatively quickly the colder it was. If you moved from the air into a bath of water at the same temperature, you’d lose your heat dozens of times more quickly, which is why hypothermia is such a risk for those who enter bodies of water in near-freezing conditions. (It’s also why flamingos benefit from standing on one leg rather than two, as it better retains their body heat when they’re in the water.) It’s those interactions with molecules that efficiently transport heat away from a hot source. The greater the rate of those interactions, the quicker heat is transported away.


The cryocooler for the Mid-Infrared Instrument (MIRI), as it was tested and inspected back in 2016. This cooler is essential for keeping the MIRI instrument at about 7 K: the coldest part of the James Webb Space Telescope. If it gets warmer, the longest wavelengths will return nothing but noise, as the telescope will actually see itself radiating at higher temperatures. This type of cryogenic cooling is useful for transporting heat from one part of a spacecraft to another, but cannot shed the heat entirely; that heat must ultimately be radiated away.

Credit: NASA/JPL-Caltech

But in space, there’s none of that. You can only cool your spacecraft, overall, through the process of radiation. Even if you have a coolant system on board your spacecraft, that can only transport heat from one location to another. Keeping one part of the system cold means another part gets even hotter, and that part can only shed heat in one way: by radiating it away. Heat radiation is slow, inefficient, and quite frankly insufficient for cooling such a power-intensive system made of sensitive electronics.
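
To put numbers on how demanding radiative cooling is, the Stefan-Boltzmann law (P = εσAT⁴) gives the radiator area needed to shed one rack’s 60 kW of waste heat. This sketch assumes an emissivity of 0.9, radiation from one face only, and ignores absorbed sunlight and Earthshine, both of which make the real problem worse:

```python
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.9        # assumed value for a radiator coating
RACK_HEAT_W = 60_000.0  # waste heat from one AI server rack

def radiator_area(heat_w, temp_k, emissivity=EMISSIVITY):
    """Radiator area (m^2) needed to shed heat_w at temperature temp_k,
    assuming ideal radiation to cold space and no absorbed sunlight."""
    return heat_w / (emissivity * SIGMA * temp_k**4)

for t in (280, 300, 320):
    print(f"{t} K: {radiator_area(RACK_HEAT_W, t):.0f} m^2")
```

Even at a toasty 320 K, a single rack needs on the order of a hundred square meters of radiator, and the cooler you want the electronics, the larger that area grows.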

The costs of insufficient cooling are easy to enumerate:

  • thermal errors,
  • short circuits,
  • broken connections between components,
  • and the eventual melting of the most heat-sensitive parts, such as lead solder.

When electronics get too hot, they fail. When you use a lot of power, you produce a lot of heat as an inevitable by-product. If you do this in space, you cannot use air cooling or water cooling; you can only cool through radiation. And there just isn’t a way to passively cool a 60 kilowatt AI data center rack quickly enough to avoid the problems and associated costs of insufficient cooling.

There’s a common refrain that you should never bet against an innovator, particularly when they already have a track record of things that were previously said to be impossible. But the only thing that’s truly impossible is to defy the physical laws that govern reality at a fundamental level. As Scotty from Star Trek: The Original Series famously pleaded to Captain Kirk, “Ye cannae change the laws of physics.” Until there’s a robust method for addressing the problem of overheating that is bound to arise with an AI data center, we can predict exactly when and how all such endeavors will fail.

This article The 5 biggest obstacles to AI data centers in space is featured on Big Think.