a) Salt water is highly corrosive. Wouldn't maintenance costs be high?
b) Isn't marine biology highly sensitive to heat pollution?
For high latency services like Amazon Glacier, wouldn't it make sense to host in a place like Iceland? Really cheap geothermal/hydro clean power, a highly educated local talent pool, and relatively consistent cool temperatures. If you're maintaining 80F (http://www.datacenterknowledge.com/archives/2008/10/14/googl...) the ambient temperatures outside should provide ample cooling year round.
For lower latency requirements, wouldn't it be worth it to install efficient cooling powered by (preferably renewable) electricity? With cheap solar, who really cares about grid power loss and the associated inefficiencies?
Essentially, I think it makes sense to get better at solar than to have Steve Zissou as a server admin.
1) In the original article, these server farms would be remote administration only.
2) Such facilities would make the secret construction of one's James Bond Villain underwater base much, much easier. (Wouldn't Elon Musk, Jeff Bezos, and Peter Thiel all make great Bond villains?)
Villains are sometimes just victims of their circumstances and trying to survive.
Julius Caesar is the quintessential example of someone who becomes something just out of trying to survive. He was a hero to many and a villain to many as well. I wish more villains were written that way.
> b) Isn't marine biology highly sensitive to heat pollution?
Impact on the marine environment is briefly mentioned in the last couple of paragraphs of the article, although they're a little dismissive and don't provide any evidence supporting their claim of negligible impact.
Nuclear power plants usually dump their tertiary cooling water into the ocean. That's not a great situation for the local sea life (within a few hundred meters, IIRC), but doesn't have much significance beyond that. That is several orders of magnitude more heat output than could conceivably come off a data centre, so their dismissiveness is likely warranted.
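To put rough numbers on that comparison (both figures are assumptions, not from the article: a large nuclear plant rejects on the order of 2 GW of waste heat, and a pod draws roughly 250 kW, essentially all of which ends up as heat):

```python
# Order-of-magnitude comparison of waste-heat output.
# Both figures below are illustrative assumptions.

nuclear_waste_heat_w = 2e9   # ~1 GWe plant rejects roughly 2 GW of thermal waste heat
pod_heat_w = 250e3           # a ~250 kW pod dissipates essentially all of it as heat

ratio = nuclear_waste_heat_w / pod_heat_w
print(f"one plant rejects ~{ratio:,.0f}x the heat of one pod")
```

That's nearly four orders of magnitude, which is consistent with the dismissiveness being warranted for a single pod.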
A bit more concerning is the notion of relocating these data centre pods. Long-term "reefs" that are relocated around the world could perhaps be a new and different vector for transmitting invasive organisms. I'm not an expert, but I'd want to hear a few of them opine as to whether this creates any risks that ordinary shipping does not.
>Nuclear power plants usually dump their tertiary cooling water into the ocean
The heated water from a power plant can actually provide a refuge for manatees and other wildlife that prefer warm water during cold winter. They're actually known as good fishing spots in Florida!
I don't think the plan is relocations. More than likely, MS see server numbers growing over time, at the very least for the foreseeable future, and they'll be located once, and then decommissioned, not moved.
That may change, but I doubt there will be fewer server farms in 2020 than now.
As a side note, isn't that also Google's plan - the server farm as a single entity, where nothing is really fixed, just turned off and eventually replaced? In such a world, where a server farm exists then dies, underwater seems a great place for them. No regulations, no repairs, locally powered via means that no one can complain about (NIMBYism is a powerful force) just run until decommissioned.
> "A bit more concerning is the notion of relocating these data centre pods"
What are the requirements for cargo ships? I suspect they run into this regularly when in a dock for extended periods or undergoing maintenance that doesn't require a dry-dock.
Nuclear power plants usually dump their tertiary cooling water into the ocean.
Nuclear plants also use cooling towers to lower the temperature before the water is dumped back into a lake/river/ocean. You can do a Google search, but those towers are hollow on the inside: water runs down the inside walls and convection currents move up.
I'm not an expert either, but one should keep in mind that this was an experiment with a single vessel. Even the shiny PR images in the articles show clusters of hundreds of vessels for actual operation. So it's more "few thousandths of a degree times X" and that seems less harmless to me.
A single coal power plant is negligible to the environment too. Half the world powered by coal isn't.
I'm not sure what that means (or if it's really true). The temperature at night doesn't drop to almost (absolute) zero; the oceans don't freeze; so plenty of the Sun's energy must still be present.
The initial statement was power per area, the instantaneous power input from the sun, given as 1 GW/sq km. The power input from the sun at night really is pretty much zero.
The initial statement argued that because organisms are "fine" with 1GW/sq km, they therefore should be fine with the additional heat of a datacenter. I disagreed with the conclusion, not the number.
The second statement was that "OTOH, that 1GW drops to almost zero at night." It may be a literally true statement on its own, but it's misleading in response to a statement that absolute zero isn't a realistic basis for this discussion.
That the ecology is able to handle variations in temperature, even large ones, does not mean it can handle an absolute shift. 10±5 degrees is not the same as 11±5.
For a, you can put a sacrificial anode on the frame. Given that the rate of corrosion should be known, the size of it can be calculated to however long the pod is expected to be in service.
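As a sketch of that sizing calculation, via Faraday's law for zinc consumption. The protection current, lifetime, and utilization factor below are all illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope sacrificial-anode sizing via Faraday's law.
# The 0.5 A protection current, 10-year life, and 85% utilization
# are assumptions for illustration only.

F = 96485          # Faraday constant, C/mol
M_ZN = 0.06538     # molar mass of zinc, kg/mol
N_ELECTRONS = 2    # electrons per atom: Zn -> Zn2+ + 2e-

def anode_mass_kg(current_a: float, years: float, utilization: float = 0.85) -> float:
    """Zinc mass consumed to supply a given protection current over a lifetime."""
    seconds = years * 365.25 * 24 * 3600
    theoretical = current_a * seconds * M_ZN / (N_ELECTRONS * F)
    return theoretical / utilization  # real anodes aren't fully consumable

print(f"{anode_mass_kg(0.5, 10):.1f} kg of zinc")  # ~63 kg for 0.5 A over 10 years
```

So a pod expected to draw half an amp of protection current for a decade would need somewhere around 60 kg of zinc, which is a handful of the ~2 kg anodes mentioned downthread, replaced a few times.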
I will say that my only experience with sacrificial anodes is that they help but don't prevent corrosion entirely. We had screws starting to turn to dust from a couple hundred hours of pool water exposure even with a sizeable anode. Maybe it wasn't in sufficient contact or whatever. They're unlikely to be curealls anyway.
One thing about sacrificial anodes is that multiple seem to be required. E.g. even though the hull of a ship may be a good conductor, you don't just have one sacrificial anode.
Wikipedia ["Galvanic anode"]: The arrangement of the anodes is then planned so as to provide an even distribution of current over the whole structure. For example, if a particular design shows that a pipeline 10 kilometres (6.2 mi) long needs 10 anodes, then approximately one anode per kilometre would be more effective than putting all 10 anodes at one end or in the centre.
I came here to say the exact same thing. The resistance of the metal surface means you want to cover the metal needing protection all around. I think depending on the salinity of the water, different metals should be tested to best suit your environment.
Basically, the reason that things corrode is that there are dissimilar metals, one more willing to give up electrons than the other, creating a voltage between them. So your part is a battery and the ocean is the "wire". Electrons move out of your part (corroding it) toward the less active metal. If you instead attach a sacrificial piece of zinc, the zinc gives up its electrons more readily than your part does, so the zinc corrodes instead. Think of it like a lightning rod, but for corrosion instead of lightning. It's more complicated than this, but that's the general idea.
"They are made from a metal alloy with a more "active" voltage (more negative reduction potential / more positive electrochemical potential) than the metal of the structure. The difference in potential between the two metals means that the galvanic anode corrodes, so that the anode material is consumed in preference to the structure."
https://en.wikipedia.org/wiki/Galvanic_anode
I worked on a 120m old passenger ship for a while. During dry-dock each year we'd put 8 or so 2kg (ish) zinc anodes around the hull, a welded bracket around each one.
The trouble was that they're pretty expensive, and zinc has a great resale value - so we had to do extra watches around the lower decks with rigged firehoses to try and stop local divers stealing them who knew we'd just come out of drydock. (This was in the Philippines). We lost several that way.
Then why don't you "get better at solar" and become rich in the process? What did you say, it's too hard, practically impossible? Yeah, life's a bitch, isn't it?
> Isn't land-based biology highly sensitive to heat pollution?
No, land-based organisms comparatively have a much, much higher tolerance to temperature swings.
Water's high heat capacity and conductivity means that most marine environments have extremely stable temperature ranges (a few degrees one way or other), so most marine species are adapted to live in only that range, and quickly die otherwise. Tropical fish are hard to keep alive precisely for this reason.
Air temperature varies much more widely depending on time of day, season, and weather, so terrestrial organisms have all developed extensive strategies for thermoregulation (e.g. mammals, birds). For example, dogs can survive in both arctic and tropical environments.
Plus, while heat is naturally dissipated upward, underwater thermal plumes are more likely to affect fish than above-ground plumes would affect birds.
FYI, Cornell uses lake water to cool several campus buildings, and they needed to carefully review that the warmer return water wouldn't negatively affect fish [1].
Sunlight imparts about a kilowatt per square meter to the surface of the ocean at noon, I'm not exactly sure how to use that information, but it does seem relevant for bounding the problem.
This gets a lot lower with differing latitudes and seasonal and daily cloud cover/evening time - closer to 100 W/m2 [1]. So over a day you'd get 2.4 kWh. In comparison an electric space heater uses about 1 kWh of energy in an hour.
So how much will 2.4 kWh change the temperature of a cubic meter of water at the ocean surface?
Density of water = 1000kg/1m3 so mass = 1000 kg
2.4 kWh = 8640 kJ
Heat_Energy = specific_heat × mass × change_in_temp
8640 kJ = 4.186 kJ/(kg⋅K) × 1000 kg × x
x ≈ 2 °C
So solar heat will change the first meter of water by about 2 °C over the course of a day. That's a small number because solar radiation is not a huge contributor to temperature swings; instead, what will affect the temperature difference is the seasonal change. If we can calculate the heat energy gained or lost between seasonal extremes, then we can figure out the range of acceptable heat energy for surface water.
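The arithmetic above, written out (same assumptions: a 100 W/m² daily average absorbed by the top 1 m of water):

```python
# Temperature rise of the top 1 m of ocean surface from a day of sunlight,
# assuming a 100 W/m^2 daily-average insolation as in the comment above.

SPECIFIC_HEAT = 4.186   # kJ/(kg*K), water
MASS = 1000.0           # kg in one cubic metre of water

energy_kwh = 0.100 * 24          # 100 W/m^2 over 24 h = 2.4 kWh
energy_kj = energy_kwh * 3600    # 2.4 kWh = 8640 kJ

delta_t = energy_kj / (SPECIFIC_HEAT * MASS)
print(f"{delta_t:.1f} C")  # ~2.1 C
```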
I'm guessing you've never experienced the joys of owning a 30- to 40-ft, completely non-metal (fiberglass hull) sailboat kept in salt water, with all of the great fun of paying to haul it out and have the marine growth scraped off, pressure washed, a double or triple coat of bottom paint re-applied, etc. Just because it's not metal doesn't mean it doesn't attract the category of what the maritime industry lumps together as "marine fouling".
Though marine fouling is primarily of concern for fiberglass boats because it massively increases drag, which admittedly is less of an issue for a datacenter, it's still an issue for the pods, as mentioned in the article. I'd expect it to be less grave than on a boat, though.
GP was quite specifically taking about chemical corrosion, not bio-fouling. Even if the marine industry conflates these (I can't tell if that's what you're implying in the last sentence), we don't have to.
Plastics are good, but you need to have tubes going into the thing, as well as some means of accessing the inside even if you only access it on a ship/land. That means rubber and metal will have to be on the exterior somewhere.
The article mentions that one of the benefits is that they don't have to deal with building codes being different in different places. It's truly an "off the shelf" data center that doesn't have to be customized for the laws, zoning, and codes in that area.
But who is in charge of the seabed around the United States? I imagine that I can't just drop a big metal container on the ocean floor without informing someone or asking permission, otherwise there would be a lot of things just dumped out there.
I also wonder about jurisdiction. If the data center is outside of the Territorial Zone (12 nautical miles from shore) but is inside of the 200nm exclusive economic zone (EEZ), which country would jurisdiction fall under? The country that owns the EEZ? Would the data center fly a flag of its owning country? Perhaps the country where the cabling comes ashore?
Some interesting legal questions. Perhaps the datacenter would be treated like a submarine in terms of jurisdiction. It could allow Microsoft to have their Irish branch open data centers 13 nautical miles offshore of the West Coast to provide storage that's harder for the US to legally get into.
Definitely curious about the jurisdiction aspects here. There aren't any immediate cases that come to mind re stuff outside EEZs (I wonder if there's any precedent associated with oil rigs - they're the only freestanding marine structures that immediately come to mind, besides artificial islands).
One thing that does stand out is the sheer greyness of it all - international legal matters like this move very slowly and moreover the ICJ seems very poorly equipped to answer questions like this.
Let's say climate change gets absolutely horrific, and we see a 1m rise in sea levels over the course of a pod's 10 year lifespan. So your underwater datacenter pod, originally deployed 200m below the surface, is now at 201m. Somehow I don't think this would be an enormous problem.
A similar argument could be made for concerns over rising water temperatures as a result of climate change - what might be considered a dramatic rise to climate scientists and marine biologists would likely have little impact on a datacenter.
Forget climate scientists and marine biologists -- if ocean temperatures rise 3-4 degrees, we will all have much bigger problems than increased datacenter cooling costs.
But Marine Law seems to have a healthily deep section in the legal bookstore I visited too frequently, not long ago.
I imagine this plan means to situate in littoral waters, not treaty open seas.
But what hazard could shipping containers packed with servers be?
I guess there is already plenty of rare-element pollution, oozing from e-waste recycling in countries where having any hard currency at all means life is merely hard rather than harsh. But relativism doesn't appease real concerns.
The idea, I think, has been mentioned a few times by others, if near/offline storage makes any sense at all. But to some imaginable degree, sea storage of data also offers unusual security against all but highly determined assailants.
Is there any thermal maintenance factor, meaning e.g. disk drives last longer with less data loss if kept at stable temperatures? Would that be effective to do at sea? I think this was already mentioned too: the narrow bands of temperature required for marine tropical fish suggest to me that the cost of mitigating heat waves and cold seasons surely wouldn't apply underwater.
Can wave power be harnessed to circulate enough air?
Can an inexpensive, efficient, massive area of the structure be made to act as a heatsink?
(Marine fouling, as mentioned, seems highly challenging, even if access to the surface by buoyancy changes is easily achieved.)
Can something organic, even living like a coral, be coaxed into providing natural dissipation of any heat excess?
Oh, i'm saddened now, thinking about barren, sterile coasts where coral reefs have died. But would such areas be better to offer minimal marine fouling by lack of support for life in those God forsaken waters of shame?
Are any challenges less, say, in the Great Lakes?
Or installed inline in channels parallel to canals, utilizing water flow often controlled by lock systems, adding emergency cooling capacity? The trenches might easily be dug. In London there are fiber rights of way along many canals as well. My company long looked at using arch space under rail bridges and viaducts, for they are naturally very cool places, and next to fiber and power.*
Is there a military application for designing the containers?
For forward deployment?
Oceanic monitoring of SONAR?
Or required replacement civilian facilities you need to park away from looting?
Sounds to me like there is utility in having a reference design, and a potential number of platform customers who habitually fund conceptual capabilities.
*(If anyone is seriously interested, even for long-shot/entertainment value, in containerized colo under London railroad arches, I'd love to hear from you; I'll freely donate some real energy into getting further than we did before recognizing we couldn't sell it to investors as a solo undertaking. (Not meaning selling the idea to investors, but to those among us who would balk at the cost of the diversion of time if undertaken without shared interests.) That said, we appreciated the excellent security possible, as well as a side idea about the possibility of getting planning permission to erect sky poles for microwave links parallel to known HFT line-of-sight paths, as potential backup capacity and even a service to colocated compute. We researched quite a number of projects as thoroughly as possible before spending real money progressing them, i.e. time, thought, desk research, scouting, but nothing like hiring planning lawyers to approach Network Rail et al., just modestly intelligent, motivated daydreaming, as line of sight to a lot of major landmarks seemed very plausible at 1 km and under. Absolutely please do drop me a line if it's even just a daydream you agree has an angle to work; email in profile...)
I really can't see the benefit of submerging these -- the complexity involved with keeping out several hundred PSI of sea water just doesn't seem to make sense to me.
For the cooling aspect, they're already using pumped water and heat exchangers. What is the benefit of submersion vs just pumping sea water from a pipe to a land, or barge-based datacenter and using that to cool?
>the complexity involved with keeping out several hundred PSI of sea water just doesn't seem to make sense to me.
Use immersed electronics.
ROVs and the like have quite complicated onboard control systems. They're immersed in mineral oil (or something like that). They don't get wet and hydrostatic pressure is no longer an issue.
Spinning rust hard drives have an air-filled (or helium-filled sometimes) cavity in which the spinning rust spins. If you're building a computer immersed in oil, you generally need to leave that part outside the oil.
I don't think it would be that complex, since the pods can be permanently sealed and don't need to exchange matter with the environment. I would assume that a lot of the complexity in waterproofing marine vessels comes from the need for openable doors.
Many of the challenges could be reduced or mitigated if the pods could be at least partially pressurized to reduce the pressure differential between inside and out, but that introduces another complexity: can the server hardware handle it?
There are quite a few engineering challenges, none of them unsolvable but also none of them seem necessary when it seems the same benefits can be obtained for less effort by keeping the pods above water.
Hosting companies would be either adding capacity (new locations, new land, new construction) or filling existing capacity. Once capacity is full at a location, then what?
The marginal, incremental cost of adding a single server once capacity is reached is currently quite high. Using bogus numbers, let's say a location can hold 100,000 servers; then the marginal cost to add the 100,001st server will be very high, and might run to millions. These costs include:
- Cost of planning - when? Which countries? Which locations? Do you serve Kenya out of say South Africa? Or do you need to go to the mountains of Kenya where the air is a more consistent temperature?
- Cost of finding locations - doing deals to get the land for server farms is likely costly.
- Actually buying/leasing the land - how far in advance do you do this? Years? Months? Weeks? Leasing land that lies dormant for say 18 months is wasted money.
- Getting approval to put in the required infrastructure (data cables, electricity, redundancy etc)
- Engineering/architectural costs to design buildings each and every time for varying sizes, in different regulatory environments.
- Cost of construction.
- Wasted capacity. There may be a need worldwide for say 50,000 servers, but with multiple continents, that may equate to several locations with 100,000 capacity.
All those costs add up.
Compare that to the marginal cost of adding a standard pod, with a lower server count per pod, mass-manufactured in one location and shipped to a regulation-free spot in the ocean. The marginal cost to go from (bogus numbers of) 100,000 to 100,001 servers is not millions; it is the cost of one additional pod, at a much lower number. E.g. going from 10,000 to 10,001 costs a fraction of the 100,000 to 100,001 example. The waste is also reduced, as multiple countries/continents can each add say an additional X% capacity in smaller increments, rather than in huge leaps of (initially) wasted capacity.
Now, is that overall cheaper? Maybe, maybe not, who knows? But many large hosting businesses might prefer say a 10% increase in theoretical cost per server added in smaller discrete numbers of servers. Factor in adding capacity in smaller increments with less capacity waste, less need to acquire the rights to land in advance and it might be cheaper in aggregate, even if it is theoretically more expensive per server at currently active locations.
Incidentally, that is the core value proposition of cloud computing like AWS: that you can add capacity incrementally. Running your own hardware is theoretically cheaper, but it requires a larger capital outlay, with longer turnarounds to add capacity.
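The shape of that argument can be sketched with a toy model; every number below is invented purely for illustration, matching the "bogus numbers" spirit above:

```python
# Toy model of the marginal-cost argument: adding one server is cheap until
# a site is full, at which point the next server carries the cost of a whole
# new site. All prices and capacities here are invented.

def marginal_cost(servers_now: int, site_capacity: int,
                  server_cost: int, site_cost: int) -> int:
    """Cost of adding one more server, charging a full new site at capacity."""
    if servers_now % site_capacity == 0:  # current site is full
        return server_cost + site_cost
    return server_cost

# Big land site: 100,000 servers per site, $50M to build one more.
print(marginal_cost(100_000, 100_000, 5_000, 50_000_000))  # 50005000
# Small pod: 1,000 servers per pod, $2M per additional pod.
print(marginal_cost(100_000, 1_000, 5_000, 2_000_000))     # 2005000
```

The pod never makes the per-server cost lower, but it caps the worst-case step cost, which is the point the comment is making.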
This is ridiculous. Data centers contain thousands of racks like the single rack shown in the demo "pod" and they all require hands-on human maintenance from time to time. This becomes quite a bit harder when your server rack is in a sealed steel tube on the seafloor.
Splitting the data center into thousands of "pods" will also greatly increase the cost of cabling and connectors for power and data distribution (we're talking $100 per connector here) and the number of cable failures will skyrocket. Did you know that fish like to chew on submarine cables? Unless you get steel-armored cables, which are defended against "fishbite"...
If one server goes down and they are not able to restore it remotely they will just write it off. If a large number go down due to some common failure, it might make sense to haul the whole thing up and refurbish it.
I've heard that this is what many data centers currently do anyhow. That when a client "moves on" (moves out, dissolves, etc) - the machines are just left behind, because it costs more to unrack/transport/etc than it does to leave them in place.
/source: worked for a small cloud computing VPS provider
> The reason underwater data centers could be built more quickly than land-based ones is easy enough to understand. Today, the construction of each such installation is unique. The equipment might be the same, but building codes, taxes, climate, workforce, electricity supply, and network connectivity are different everywhere. And those variables affect how long construction takes. We also observe their effects in the performance of our facilities, where otherwise identical equipment exhibits different levels of reliability depending on where it is located.
In the context of the discussion about cost disease, this is quite remarkable for demonstrating how regulation and local differences have raised costs to the point that a large company is seriously considering building data centres beneath the waves.
This is great. A long time ago I had this crazy idea that if we built data centers at the poles, we could solve a lot of problems. Each one would sit idle 1/2 the year, but you could power it with solar and cool it with the outside air.
Of course the big problem was latency. It was one I couldn't conceive of a good solution for.
This is much better. It takes advantage of the same idea but you can park it right next to a population center and move it around the world fairly quickly as demand requires.
There was an article posted about one of the Antarctic data centers, and it's a horrible place to put one: the air is too dry and too cold, parts are months away, and rubber/cardboard loses the properties it needs. And there's the sheer terror when power goes out, which will destroy a server if it's out too long.
The poles are too cold. The datacenter maintained by researchers in Antarctica ends up having to be insulated against external temperatures, and then ironically has a worse cooling problem than ones on other continents.
I wonder what the cost differential is (any given rack, not just an underwater one) for a system which is designed to NEVER be maintained by people over its service life (or duty). Hard drive goes bad? Board fails? Just push the load elsewhere and shut off power to it. No idea what sort of attrition rate these systems have.
Sure, you might be at 50% capacity after 3 years, but that's just like lithium ion batteries.
Some of the big cloud providers (Google and I think Facebook) do exactly this in their DC's, when hardware dies they just leave it dead until enough has failed in that rack to make it worth swapping out the whole thing.
When you have 100,000 of anything electrical, MTBF will hit you every day.
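A quick way to see this: the expected fleet failure rate is roughly N/MTBF. The fleet size and vendor-quoted MTBF below are illustrative assumptions:

```python
# Expected failures per day across a fleet, given per-device MTBF.
# Both numbers below are illustrative.

def failures_per_day(n_devices: int, mtbf_hours: float) -> float:
    """Approximate fleet-wide failures per day for independent devices."""
    return n_devices / mtbf_hours * 24

# 100,000 drives at a quoted 1M-hour MTBF: a failure roughly every 10 hours.
print(f"{failures_per_day(100_000, 1_000_000):.1f} failures/day")  # 2.4 failures/day
```

And real annualized failure rates tend to be worse than datasheet MTBF, so "every day" is not an exaggeration.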
I'm guessing that someone has done an equation and shown that to be cost effective. Does that calculation apply when the cost of building the data centre and running it are considerably higher due to the location you built in?
Interesting question. Perhaps you should look at the cost of spacecraft engineering for an example. You can't replace the hard drive on a probe billions of miles from earth!
I think what GP is proposing is significantly different from spacecraft: in space, you have limited room for redundancy, so you overbuild to avoid failure of key parts.
I think GP is proposing you start with, say, 11000 servers when you have a planned capacity of 10000, and just cycle off bad ones for spares (or just diminished capacity) until you redo the whole system.
I believe that's how Microsoft operates their server farms, but I could be mistaken.
Space is craaazy expensive for a lot more reasons though (ionization and lack of mission-redundancy are the big ones. Every launch is expensive, so every mission/satellite must have a bazillion nines).
Seems like it would be simpler and easier to just build a datacenter near the ocean and pump seawater to it.
Regardless of how it works, I wonder if it's possible to get significantly more performance out of chips that are explicitly designed to operate at a much lower maximum temperature? Let's say you want to build a datacenter somewhere like the Northern coast of Alaska or Antarctica, and you have year-round reliable and abundant access to sea water that's barely above freezing, and suppose that you're willing to order enough CPUs / GPUs / bitcoin mining ASICs or whatever to justify your supplier to design and manufacture custom chips to operate in a lower temperature range. Is there some potential boon to performance or reduced manufacturing cost (due to looser tolerances) that might make such an effort worthwhile?
> Is there some potential boon to performance or reduced manufacturing cost
Guessing, but in the balance, I would assume no. Power generation costs are higher and service is less reliable. Staffing is more costly. Transport for all other resources is also more costly and at a much higher delay than in many other areas that are traditionally used.
From my own experience with high power (~25kW) FM radio transmitters, some of which we have at the top of mountains in the middle of ski-resorts: while cooling costs are reduced throughout the winter, they /aren't/ totally eliminated, so you still need to size and maintain the equipment for summer loads anyways. Getting to the site to do work and maintenance is an absolute chore; replacing the transmitter was a 10 man job where similar rigs in other locations only required 3.
Although, if it was the summer, after we finished our work, we got to take the alpine slide down the hill. As to why we put it there in the first place, signal coverage was excellent, and they already had the power we required available due to the chair-lifts being nearby. If we could have put it somewhere else and got the same coverage, we would have.
Imagine if you built a data center on a river fed by mountain snowmelt, and you created a diversion that took some of that water, ran it along heat exchangers in your racks, and then out through a hydro-power unit and back into the river on the other side. You might be able to build the first negative-PUE data center in the world.
But you wouldn't tell anyone about it lest you lose your competitive advantage!
The Google datacenter a few miles from my house is right next to the Chattahoochee river. They take water from the treatment plant that was going to be pumped into the river and use it in their heat exchangers. Then they clean it and pump it into the river.
This is a fun concept with potentially even more interesting consequences. Using the ocean as a giant heatsink would result in a water temperature slightly higher than elsewhere. I wonder if over time it would suffice to attract the attention of non-local aquatic life, or affect the migratory habits of the existing species.
Nuclear power plant water exchangers in both California and Sweden have attracted blooms of jellyfish[1], who then clogged the heat exchangers and caused the power plants to cut power output to mitigate the efficiency loss.
Looking at this more closely, it would seem that the primary test they were doing with this was the effectiveness of the external heat exchangers. They had one rack consuming 27kW maximum, but they had 5 large external heat exchangers (of two different designs) and all kinds of external tubing and valves to manage the cooling fluid flow. Seems like overkill unless they were running tests on the effect of different operating temperatures on biofouling and vice versa.
Why dissipate the residual heat into the environment when you could use it for residential heating and/or heating greenhouses? This concept seems to me to be a design for wasting resources.
Thermodynamics. The exhaust temperature of the data centre has to be significantly higher than the temperature of the thing you want to heat, especially if that heat is to be transported any appreciable distance. Otherwise, the heat will not flow in the right direction.
With data centres these days trying to keep chip temperatures as low as possible, this really makes the temperature gradient very low.
An alternative would be to pump in a little bit more energy and use the exhaust heat as a source for a heat pump.
With the temperature level of a datacenter you can easily operate a heat pump with a COP of 5 to 6. Or mix in CHP to raise the temperature to engine-exhaust levels, and use the electricity for the datacenter. Lots of potential.
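A sanity check on that COP figure, assuming a 35 °C datacenter exhaust, a 60 °C heating target, and a real machine achieving ~45% of the Carnot limit (all three numbers are assumptions):

```python
# Heating COP estimate: Carnot limit scaled by an assumed real-machine
# efficiency. Source/sink temperatures and the 45% fraction are assumptions.

def heating_cop(t_source_c: float, t_sink_c: float, carnot_fraction: float = 0.45) -> float:
    """Approximate real heating COP from source to sink temperature."""
    t_sink_k = t_sink_c + 273.15
    ideal = t_sink_k / (t_sink_c - t_source_c)  # Carnot COP for heating
    return ideal * carnot_fraction

print(f"COP ~ {heating_cop(35, 60):.1f}")  # ~6.0
```

Which lands right in the claimed 5 to 6 range: the small temperature lift from a warm exhaust is exactly where heat pumps shine.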
Always fun with people that run with exotic ideas =)
I might be missing some aspects but would it not be much easier to locate the datacenter on land but close to the ocean (or other large body of water), with similar principles as used for cooling nuclear reactors? Then you would get the benefits but would not have the same strict requirements on reliability of the components.
If you locate it on land you would also have the benefit of using the excess heat for house heating (if located in cold climate and the temperature difference to outside is big enough). This is done in a number of cases in Sweden with excess heat from large industrial plants [1].
I would be concerned with the cost of real estate at that point. They didn't talk about it, but I expect they're not paying rent/buying land off shore. Seaside real estate is expensive afaik.
>They didn't talk about it, but I expect they're not paying rent/buying land off shore. Seaside real estate is expensive afaik.
I don't know how it'd work for something like a submerged data center, but blocks of area are indeed sold off by the government bodies that administrate them to O&G producers. This is also not cheap.
I can see the problem of delivering effective data centers to warm climates, but apart from the latency aspect, my guess is that Facebook's strategy of building a traditional mega-data center close to cheap hydro power in a region with year round cool air is more economic than this.
I hope to see them succeed though, especially if the data centers can be self sustained with wind or ocean (wave, tidal, ...) power.
You say that, but it sounds like a win/win from where I'm sitting:
>Our investigation was a great success: not only were we able to reduce our electricity costs by 35%, but we discovered our high-density SSD storage was even more dense at 87atm!
The article doesn't mention crush pressure at a depth of 200 m, so I'm thinking the plan is to replace the air with mineral oil, which is non-conductive and absorbs heat well. But it would also mean the containers will probably not be buoyant.
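For what it's worth, the hydrostatic pressure at that depth is easy to estimate (seawater density and the 200 m depth are assumptions carried over from the comment above):

```python
# Hydrostatic pressure at depth: P = P_atm + rho * g * h
# Assumed constants for seawater.

RHO_SEAWATER = 1025.0   # kg/m^3
G = 9.81                # m/s^2
P_ATM = 101_325.0       # Pa (1 standard atmosphere)

depth_m = 200.0
pressure_pa = P_ATM + RHO_SEAWATER * G * depth_m
pressure_atm = pressure_pa / P_ATM

print(f"Absolute pressure at {depth_m:.0f} m: {pressure_atm:.1f} atm")
```

That works out to roughly 21 atm, nothing like the joke's 87 atm, but still enough that a filled (non-compressible) vessel is much easier to engineer than an air-filled one.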
I would be interested in knowing:
1) % of total operating cost saved from cooling savings
2) % of building costs saved from standardized building and simplified regulations.
Also, it would be awesome if we could moor these to offshore wind turbines.
How long before our computation needs require LEO datacenters with huge solar arrays that always face the sun? Consecutive requests might be sent to different satellites as they move past. More difficult to dump heat in space.
This is cool and all, but salt water is basically cancer. If you want to experience first hand entropy at a rapidly accelerated rate, buy a boat and moor it in salt water. With all of the associated repair/maintenance costs and headaches that go with it. Now add electronics and server equipment.
There are ways to use cold water for cooling without submersing things. Many of the buildings in downtown Toronto (including the 151 Front IX point and datacenter) are cooled from deep lake loop cooling/heat exchange: https://www.google.com/search?q=toronto+deep+lake+cooling&ie...
Lots of comments already, but 100% this is complete bullshit. If they manage to make it a diverse project (branching into other possible fields) and even if not, this is part of a cat-and-mouse game of placing live electronics for purposes of underwater warfare, espionage and such.
For example, smart mines that would be powered but indistinguishable from the signature of these "server farms", then hydrophones and other such technology for locating and tracking submarines, and so on.
"About the Authors" makes clear how no thoughts other than the clown game were entertained by the article, but aren't we past the times that things get written about 20-30-50 years down the line. :)
I'm surprised that they're so dismissive of heat pollution. Google's Finland data center, which is cooled by ocean water, has a holding tank used to cool the ocean water before pumping it back into the ocean[1] in order to prevent a similar problem. I would like to see additional research here rather than baselessly dismissing it.
Isn't it obvious common sense that operating a data center underwater is a red flag? There might be short-term benefits, but in the long term you'll be polluting and damaging the underwater ecosystem.
Energy efficient maybe, but not cost efficient. Computers+salt water = bad things. Ask anyone trying to maintain anything within sight of an ocean. Salt gets into everything eventually.
Build on the land. Put a heat exchanger in the ocean. Pump nice friendly coolant to and fro. Forget the idea of setting up shop literally underwater.
And how does data get into this data center? Where is the power coming from? All those connections still must go through penetrations. It can be done, I just question the worth.
EM isn't going to travel very well or very far through water. You would want something exposed on the surface, and preferably in very close proximity. Every time you transform power, you're going to incur losses.
Edit: Great points! Space would be a terrible location for data center hosting, not only due to latency and heat radiation issues, but also due to the difficulty in mitigating solar radiation.
What about putting the data center in some kind of low Earth orbit? Something within the magnetic field, but far enough out to radiate heat? Would the latency still be a factor if your web server and CDN were hosted "in a cloud" on numerous communications satellites?
Someone correct me if I'm wrong, but I believe that radiating heat is actually really hard in space. It seems counter intuitive, because space is so cold, but it's mostly a vacuum.
To transfer heat, you need a medium for the heat to transfer through. I believe some heat can be lost due to radiation, and so they have these big metal fins as heat sinks to radiate as much heat as possible, and reflective material on the solar facing side to prevent more heat from being absorbed.
The water, on the other hand, is a fantastic absorber of heat, and I'm guessing it's easier to drop one in the ocean, rather than launch one to space.
> I believe some heat can be lost due to radiation.
In hard vacuum all the heat is lost as radiation (minus an incredibly tiny loss from hitting the very few particles in space); the dark side can be treated as a blackbody for a rough approximation.
One of the reasons the side exposed to the sun is mirrored is to reflect as much of the radiation falling on it as possible, since heat is so hard to get rid of. People think the shiny part is gold, but it's usually mylar or something similar stacked into layered insulation.
Space is a horrible environment to make things work in: metals spontaneously weld themselves together (vacuum welding), there are radical temperature differentials depending on orientation, your electronics are constantly hammered by high-energy radiation, and micro-impacts arrive at velocities that make a sniper rifle look like a slingshot. As a layman, I find it fascinating what it takes to keep stuff working reliably up there at all.
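Radiation-only cooling can be sized with the Stefan-Boltzmann law. A rough sketch with assumed numbers (the thread's 27 kW rack, an idealized radiator facing cold space, sun side ignored):

```python
# Stefan-Boltzmann sketch: radiator area needed to reject server heat in space.
# Idealized: radiating to ~0 K deep space, solar input ignored. Numbers assumed.

SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W / (m^2 K^4)

heat_w = 27_000.0        # one 27 kW rack
radiator_temp_k = 330.0  # assumed radiator surface temperature (~57 C)
emissivity = 0.9         # assumed surface emissivity

# P = emissivity * sigma * A * T^4  =>  A = P / (emissivity * sigma * T^4)
area_m2 = heat_w / (emissivity * SIGMA * radiator_temp_k ** 4)
print(f"Radiator area needed: {area_m2:.1f} m^2")
```

That comes out to roughly 45 m^2 of radiator for a single rack, which is why "space is bad at dumping heat" keeps coming up in this thread: seawater carries the same heat away with essentially no dedicated surface area.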
That sounds like just about the worst place to put a datacenter. Getting just a single kilogram to orbit costs about $2000 at its absolute cheapest, so I can think of no better way to send deployment costs through the roof. On top of that, space is bad for heat exchange because there's no convective cooling.
Also, cosmic rays will wreak havoc on all your delicate silicon chips.
There is also the high cost of getting something into space, the risk of it blowing up or crashing on the way up, and dealing with very, very long wires (LEO is ~160 km to 2,000 km up) as opposed to the 50 to 200 meters that'd be used for servers under the water.
There's also the issue of having these very long cables running through airspace; airplanes would have to be made aware of them. I can also imagine a lot less electromagnetic interference going on under the water.
Also, heat transfer becomes more difficult. No longer is there conduction of hot metal to cold water. In space, you have one main option for transferring heat. Radiation. I think this is addressed in your edit, but yeah, there's the Sun adding energy while you're struggling to radiate out your own server heat...