
Why give money to one solution before exploring other options? Decentralization, for example, seems like a more natural way to distribute the meteorological forecasting workload across the planet: colocate sensors and computation on clusters of devices in close proximity. Decentralized computing seems like a natural fit for weather problems.



The mathematics behind weather forecasting isn't embarrassingly parallel in the way that ray tracing is, or in the way that volunteer-computing projects such as Folding@Home exploit. The models attempt to simulate the continually interacting, global state of the atmosphere in order to determine what conditions will be like at various places on the planet.
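To make "embarrassingly parallel" concrete, here is a minimal sketch (not Folding@Home's actual code) of that kind of workload: each work unit can be computed with no knowledge of any other unit, so scattering units across the internet costs essentially nothing.

    # Sketch of an embarrassingly parallel workload (illustrative only).
    # Each task touches only its own input; there is no inter-task traffic.
    from multiprocessing import Pool

    def score_candidate(seed: int) -> float:
        """Stand-in for a long, fully independent computation."""
        x = seed
        for _ in range(100_000):
            x = (x * 6364136223846793005 + 1442695040888963407) % 2**64
        return x / 2**64

    if __name__ == "__main__":
        with Pool() as pool:
            results = pool.map(score_candidate, range(32))
        print(max(results))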

These models rely heavily on "sharing" data between parallel operations: if core 1 is computing one grid cell and core 2 is computing a neighbouring grid cell, core 2 needs to know the state of the cell core 1 is working on. These models run on supercomputers because the interconnect lets each node read the data held by neighbouring nodes quickly, and prompt access to that data is what allows the models to be computed "faster than real time".
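A rough sketch of why neighbouring cells matter, using a toy 1-D stencil (illustrative only, not a real weather model): the grid is split between two "workers", and every single time step each worker needs the edge ("halo") value held by the other before it can update its own cells. On a supercomputer that exchange is a microsecond-scale message; across the internet it would dominate the run time.

    # Toy 1-D stencil split across two workers, with a halo exchange
    # (the edge cells) performed every time step.
    import numpy as np

    def step(chunk, left_halo, right_halo):
        """Update a chunk with a 3-point average that reads the halos."""
        padded = np.concatenate(([left_halo], chunk, [right_halo]))
        return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

    grid = np.random.rand(16)
    left, right = grid[:8].copy(), grid[8:].copy()

    for _ in range(100):
        # Halo exchange: each worker sends its edge cell to the other.
        left_edge_of_right = right[0]
        right_edge_of_left = left[-1]
        left = step(left, left_halo=left[0], right_halo=left_edge_of_right)
        right = step(right, left_halo=right_edge_of_left, right_halo=right[-1])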


Distributed computing "across the planet" only works with low-bandwidth, latency-tolerant tasks. Look at the various distributed internet projects (the Great Internet Mersenne Prime Search, Folding@Home, etc.): they compute for a long time and send back a small amount of data, typically to a single server.
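For a sense of the ratio involved, here is a simplified Lucas-Lehmer primality test of the kind GIMPS distributes (a sketch, not the project's actual client): the arithmetic can run for hours on a modern exponent, yet the only thing that ever crosses the network is the assigned exponent and, essentially, a one-bit answer.

    # Simplified Lucas-Lehmer test, the kind of work GIMPS hands out
    # (a sketch, not the project's actual client).
    def is_mersenne_prime(p: int) -> bool:
        """M_p = 2**p - 1 is prime iff s_{p-2} == 0 (p an odd prime)."""
        m = (1 << p) - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    # Tiny demonstration; real GIMPS exponents have tens of millions of digits.
    print([p for p in (3, 5, 7, 11, 13, 17, 19) if is_mersenne_prime(p)])
    # -> [3, 5, 7, 13, 17, 19]   (M_11 = 2047 = 23 * 89 is composite)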

People use supercomputers for problems like weather forecasting, which depend on high-bandwidth, low-latency interconnects. For example, the Cray CS-Storm (which the Swiss National Supercomputing Centre will be using) supports "QDR or FDR InfiniBand with Mellanox ConnectX®-3/Connect-IB, or Intel True Scale host channel adapters" (quoting http://www.cray.com/sites/default/files/resources/CrayCS-Sto... ).

To prevent congestion, the supercomputer network topology might even be wired so each pair of neighboring nodes has a dedicated connection.
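To put rough numbers on the gap (the figures below are ballpark assumptions for illustration, not measurements): an InfiniBand hop costs on the order of a microsecond, while a round trip between volunteer machines over the public internet costs tens of milliseconds on a far slower uplink, so the same per-timestep halo exchange ends up orders of magnitude slower.

    # Back-of-envelope comparison of one halo exchange per model time step.
    # All figures are ballpark assumptions, not measurements.
    def exchange_time(halo_bytes, latency_s, bandwidth_bytes_per_s):
        """Time to send one halo message: latency plus transfer time."""
        return latency_s + halo_bytes / bandwidth_bytes_per_s

    halo = 1_000_000  # ~1 MB of boundary data per neighbour per step (assumed)

    infiniband = exchange_time(halo, latency_s=2e-6,  bandwidth_bytes_per_s=5e9)   # ~FDR class
    internet   = exchange_time(halo, latency_s=50e-3, bandwidth_bytes_per_s=2.5e6) # ~20 Mbit/s uplink

    print(f"InfiniBand: {infiniband*1e3:.3f} ms per exchange")  # ~0.2 ms
    print(f"Internet:   {internet*1e3:.1f} ms per exchange")    # ~450 ms
    print(f"Slowdown:   ~{internet/infiniband:.0f}x per time step")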





