This is complete fluff without quantifying the properties of the algebraic system. 1D PDEs with billions of variables can easily be solved in seconds on a laptop. Fully 3D highly indefinite equations are another story. The university press should be ashamed.
I almost didn't submit it because there were no details on what "mathematical equations" were used.
I think the emphasis was (or should have been) on the algorithms developed to work with multiple processors, but there wasn't much info on that either.
"When you connect 100 computers and tell them to solve a system of
equations, I need to break it into 100 pieces and ship each piece to a
computer, and then they need to talk to each other," Farhat said.
"They cannot do this independently."
To confront this well-known problem, Farhat and his team – led by Jari
Toivanen, Radek Tezaur and Philip Avery – collaborated with the ARL
DSRC to craft algorithms to divide these calculations among thousands
of computers.
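The quoted description — break the system into pieces, ship each piece to a processor, and have them talk to each other — is the basic shape of any domain decomposition method. Here is a minimal toy sketch of that idea (a block-Jacobi iteration, not the team's actual algorithm): each block stands in for one processor's subdomain, performing a local solve using only its neighbours' values from the previous sweep, which is exactly the information that would be exchanged over the network on a real cluster.

```python
import numpy as np

def block_jacobi(A, b, n_blocks, iters=200):
    """Toy domain decomposition: each block does an independent local
    solve, coupled to the others only through the previous iterate."""
    n = len(b)
    x = np.zeros(n)
    blocks = np.array_split(np.arange(n), n_blocks)
    for _ in range(iters):
        x_new = x.copy()
        for idx in blocks:
            other = np.setdiff1d(np.arange(n), idx)
            # local right-hand side: subtract the coupling to other blocks
            r = b[idx] - A[np.ix_(idx, other)] @ x[other]
            x_new[idx] = np.linalg.solve(A[np.ix_(idx, idx)], r)
        x = x_new  # one "communication step": publish the new iterate
    return x

# Strictly diagonally dominant 1D model problem, so the iteration converges.
n = 64
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = block_jacobi(A, b, n_blocks=4)
print(np.allclose(A @ x, b, atol=1e-8))  # prints True
```

In real codes the blocks run concurrently on separate processors, and only the interface values are communicated each sweep; that communication pattern is the hard part the article gestures at.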
The team members worked around the clock for three weeks to prepare
their software for the test on Excalibur. When the day came last
month, they had access to a significant chunk of the facility's
101,184 processors to divide slices of their equations, share
information and solve the problem. A mere three minutes later, those
thousands of processors had solved over 10 billion calculations
accurately.
It's also troubling when the link to "Army High Performance Computing Research Center (AHPCRC)" gives a Drupal error.
If you're interested, Prof. Farhat is very well known for the FETI method [1], which is likely what the article is alluding to. But as written, the article is devoid of any description of what was interesting about the calculation.
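The core FETI idea is to "tear" the mesh into subdomains, duplicate the interface unknowns, and glue the pieces back together with Lagrange multipliers that enforce continuity across the interfaces. A hedged toy illustration on a tiny 1D Poisson problem (two subdomains, one shared node; the numbers and setup are mine, not from the paper):

```python
import numpy as np

# Global problem: 5 interior nodes of a 1D Poisson equation, Dirichlet ends.
K = 2.0 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
f = np.arange(1.0, 6.0)
u_global = np.linalg.solve(K, f)

# "Tear" at node 2: subdomain 1 owns nodes 0,1,2; subdomain 2 owns 2,3,4
# (each keeps its own copy of the interface node).
K1 = np.array([[2.0, -1, 0], [-1, 2, -1], [0, -1, 1]])
K2 = np.array([[1.0, -1, 0], [-1, 2, -1], [0, -1, 2]])
f1 = np.array([1.0, 2.0, 1.5])   # interface load split half/half
f2 = np.array([1.5, 4.0, 5.0])

# Signed Boolean "jump" operators: B1 u1 + B2 u2 = u1[2] - u2[0] = 0.
B1 = np.array([[0.0, 0.0, 1.0]])
B2 = np.array([[-1.0, 0.0, 0.0]])

# Saddle-point system: local equilibrium plus the continuity constraint,
# with the Lagrange multiplier lam acting as the interface flux.
S = np.block([[K1, np.zeros((3, 3)), B1.T],
              [np.zeros((3, 3)), K2, B2.T],
              [B1, B2, np.zeros((1, 1))]])
sol = np.linalg.solve(S, np.concatenate([f1, f2, [0.0]]))
u1, u2, lam = sol[:3], sol[3:6], sol[6]

# Torn-and-glued solution matches the direct global solve.
print(np.allclose(np.concatenate([u1[:2], u2]), u_global))  # prints True
```

The payoff in real FETI is that the subdomain solves are independent (one per processor) and only a much smaller interface problem in the multipliers needs global communication.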
I may have found more info on what "equations" were used.
This article states:
But before the Army carved up the spoils, it gave Farhat and his team
a chance to harness a significant slice of Excalibur's massive
computational cake and demonstrate the power of algorithms to solve
large-scale equations.
The team succeeded far beyond Farhat's last endeavor.
"The last time we'd had access to a large machine like this one, we
probably ran our algorithms on about 3,000 processors," Farhat said,
referring to the computations that helped his team win a 2002 Gordon
Bell Prize from the Institute of Electrical and Electronics Engineers.
"Now we ran on 22,000."
Of course I can't be certain, but it seems they may have been running the same software and algorithms that won them the Gordon Bell Prize in 2002. [0]
An excerpt from "Salinas: A Scalable Software for High-Performance Structural and Solid Mechanics Simulations" [1]
To the best of our knowledge, Salinas is today the only FE software
capable of computing a dozen eigen modes for a million-dof FE
structural model in less than 10 minutes. Given the pressing need for
such computations, it is rapidly becoming the model for parallel FE
analysis software in both academic and industrial structural
mechanics/dynamics communities.
It appears that they cut that from 10 minutes (3,000 processors) to under 3 minutes (22,000 processors) using the Army Research Laboratory's new Cray XC40.
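Taking those quoted figures at face value, the implied strong-scaling efficiency is easy to work out — with the big caveat that the 10-minute Salinas figure (circa 2002) and the 3-minute Excalibur figure are almost certainly not the same problem, so this is illustrative arithmetic rather than a real scaling measurement:

```python
# Back-of-envelope check of the numbers quoted above (not a real benchmark).
procs_old, time_old = 3_000, 10.0   # processors, minutes
procs_new, time_new = 22_000, 3.0

speedup = time_old / time_new        # ~3.33x faster
proc_ratio = procs_new / procs_old   # ~7.33x more processors
efficiency = speedup / proc_ratio    # perfect strong scaling would give 1.0

print(f"{speedup:.2f}x speedup on {proc_ratio:.2f}x processors "
      f"-> {efficiency:.0%} parallel efficiency")
```

Sub-50% efficiency at that scale would actually be unremarkable for an implicit solver, which is part of why the raw "3 minutes on 22,000 processors" claim says so little without details of the problem.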