> The maintenance costs are higher because the lifetime of satellites is pretty low
Presumably they're planning on doing in-orbit propellant transfer to reboost the satellites so that they don't have to let their GPUs crash into the ocean...
If anything, considering this plus the limited satellite lifetime, it almost looks like a ploy to deal with the current issue of warehouses full of GPUs, and the questions about overbuild raised by just the currently installed GPUs (which are a fraction of the total that Nvidia has promised to deliver within a year or two).
Just shoot it into space where it's all inaccessible and will burn out within 5 years, forcing a continuous replacement scheme and steady contracts with Nvidia and the like to deliver the next generation at the exact same scale, forever
> Presumably they're planning on doing in-orbit propellant transfer to reboost the satellites so that they don't have to let their GPUs crash into the ocean
Hell, you're going to lose some fraction of chips to entropy every year. What if you could process those into reaction mass?
I believe that a modern GPU will burn out almost immediately. Chips for space use ancient process nodes with chunky components so that they are more resilient to radiation. Deploying a 3nm process into space seems unlikely to work unless you surround it with a foot of lead.
Hah, kill three birds with one stone? The satellites double up as propellant depots for other space missions, that just happen to have GPUs inside? And maybe use droplet radiators to expel the low grade heat from the propellant. I wonder if that can be made safe at all. They use propellant to cool the engine skins so... maybe?
You're describing cryogenic fuels there and dumping heat into them. Dumping heat (sparks, electricity) into liquid oxygen would not necessarily be the best of ideas.
Dumping heat into liquid hydrogen wouldn't be explosive, but rather exacerbate the problem of boil off that is already one of the "this isn't going to work well" problems that needs to be solved for space fuel depots.
> Large upper-stage rocket engines generally use a cryogenic fuel like liquid hydrogen and liquid oxygen (LOX) as an oxidizer because of the large specific impulse possible, but must carefully consider a problem called "boil off", or the evaporation of the cryogenic propellant. The boil off from only a few days of delay may not allow sufficient fuel for higher orbit injection, potentially resulting in a mission abort.
They've already got the problem that the fuel boils off in a matter of days. That is not a long-term place to dump waste heat. Furthermore, the propellant needs to stay at cryogenic temperatures for it to be usable by the spacecraft that the fuel depot is going to refuel.
> In a 2010 NASA study, an additional flight of an Ares V heavy launch vehicle was required to stage a US government Mars reference mission due to 70 tons of boiloff, assuming 0.1% boiloff/day for hydrolox propellant. The study identified the need to decrease the design boiloff rate by an order of magnitude or more.
0.1% boiloff/day is now considered an order of magnitude too large. That's not a place to shunt waste heat.
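To get a feel for why 0.1%/day is a dealbreaker, here is a minimal sketch of cumulative boil-off at a constant daily rate. The 0.1%/day figure comes from the NASA study quoted above; the initial propellant load and the timelines are assumptions for illustration only (the study's 70-ton figure depends on the actual Mars-stage load and schedule, which aren't in the quote).

```python
# Sketch: cumulative hydrolox boil-off at a constant daily rate.
# Rate (0.1%/day) is from the NASA study quoted above; the initial
# load and durations below are assumptions for illustration.

def remaining_propellant(initial_tons: float, boiloff_per_day: float, days: int) -> float:
    """Propellant left after compounding daily boil-off."""
    return initial_tons * (1.0 - boiloff_per_day) ** days

initial = 350.0   # assumed initial hydrolox load, tons
rate = 0.001      # 0.1% per day, from the study
for days in (30, 100, 200):
    left = remaining_propellant(initial, rate, days)
    print(f"after {days:3d} days: {left:6.1f} t left, {initial - left:5.1f} t boiled off")
```

Even at the study's rate, a few hundred tons sitting on orbit for a couple hundred days loses tens of tons, and that's before anyone deliberately pumps GPU waste heat into the tank.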
This brings a whole new dimension to that joke about how our software used to leak memory, then file descriptors, then ec2 instances, and soon we'll be leaking entire data centers. So essentially you're saying - let's convert this into a feature.
Reminds me of the proposal to deorbit end of life satellites by puncturing their lithium batteries :)
The physics of consuming bits of old chip in an inefficient plasma thruster probably work, as do the crawling robots and crushers needed for orbital disassembly, but we're a few years away yet. And while on-orbit chip replacement is much more mass-efficient than replacing the whole spacecraft, radiators and all, it's also a nontrivial undertaking.
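The mass budget for the "chips as reaction mass" idea can be roughed out with the Tsiolkovsky rocket equation. Everything numeric below is an assumption for illustration, not a figure from the thread: satellite mass, annual drag-makeup delta-v, and the specific impulse of a hypothetical chip-fed plasma thruster all vary enormously with altitude and design.

```python
import math

# Sketch: reaction mass for a yearly LEO reboost, via the Tsiolkovsky
# rocket equation. ALL numbers here are assumptions for illustration.

G0 = 9.80665  # standard gravity, m/s^2

def reaction_mass_needed(dry_mass_kg: float, delta_v: float, isp_s: float) -> float:
    """Propellant mass to give dry_mass_kg the requested delta_v at Isp isp_s."""
    ve = isp_s * G0                      # effective exhaust velocity, m/s
    mass_ratio = math.exp(delta_v / ve)  # m0/mf from the rocket equation
    return dry_mass_kg * (mass_ratio - 1.0)

sat_mass = 2000.0   # assumed satellite dry mass, kg
dv_per_year = 50.0  # assumed annual drag-makeup delta-v in LEO, m/s
isp = 1500.0        # assumed Isp for a crude plasma thruster, s

print(f"{reaction_mass_needed(sat_mass, dv_per_year, isp):.1f} kg of ground-up chips per year")
```

Under these assumed numbers the answer comes out to single-digit kilograms per year, which is why the physics "probably work": the hard part is the disassembly robotics, not the propellant budget.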