Hacker News
Stanford engineers team up with U.S. Army to set computational record (stanford.edu)
27 points by Oatseller on Oct 26, 2015 | 12 comments



This is complete fluff without quantifying the properties of the algebraic system. 1D PDEs with billions of variables can easily be solved in seconds on a laptop. Fully 3D highly indefinite equations are another story. The university press should be ashamed.
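To make the parent's point concrete: a 1D PDE discretizes into a tridiagonal system, which the Thomas algorithm solves in O(n) time, so even millions of unknowns take seconds on one core. A minimal sketch (NumPy, 1D Poisson as the example problem; sizes here are illustrative, and scaling to billions of unknowns is linear in time and memory):

```python
import numpy as np

def thomas_solve(sub, diag, sup, rhs):
    """Solve a tridiagonal system in O(n) time (Thomas algorithm).
    sub[0] and sup[-1] are ignored."""
    n = len(diag)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Poisson problem -u'' = 1 on (0, 1), u(0) = u(1) = 0, discretized
# with n interior points. The exact solution is u(x) = x(1 - x)/2, and
# second-order central differences reproduce it at the nodes.
n = 1_000_000
h = 1.0 / (n + 1)
u = thomas_solve(np.full(n, -1.0), np.full(n, 2.0), np.full(n, -1.0),
                 np.full(n, h * h))
x = np.linspace(h, 1.0 - h, n)
u_exact = 0.5 * x * (1.0 - x)
```

A 3D indefinite problem offers no such luck: the system is no longer banded, direct factorization fill-in explodes, and iterative methods need serious preconditioning, which is where the record-scale machinery actually matters.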


I almost didn't submit it because there were no details on what "mathematical equations" were used.

I think the emphasis was (or should have been) on the algorithms developed to work with multiple processors, but there wasn't much info on that either.

    "When you connect 100 computers and tell them to solve a system of
    equations, I need to break it into 100 pieces and ship each piece to a
    computer, and then they need to talk to each other," Farhat said.
    "They cannot do this independently."

    To confront this well-known problem, Farhat and his team – led by Jari
    Toivanen, Radek Tezaur and Philip Avery – collaborated with the ARL
    DSRC to craft algorithms to divide these calculations among thousands
    of computers.

    The team members worked around the clock for three weeks to prepare
    their software for the test on Excalibur. When the day came last
    month, they had access to a significant chunk of the facility's
    101,184 processors to divide slices of their equations, share
    information and solve the problem. A mere three minutes later, those
    thousands of processors had solved over 10 billion calculations
    accurately.

It's also troubling when the link to "Army High Performance Computing Research Center (AHPCRC)" gives a Drupal error.


If you're interested, Prof. Farhat is very well known for the FETI method [1], which is likely what the article is alluding to. But the article as written is devoid of any description of what was interesting about the calculation.

[1] https://en.wikipedia.org/wiki/FETI
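FETI itself (Finite Element Tearing and Interconnecting) tears a mesh into subdomains and enforces agreement at the interfaces via Lagrange multipliers. A far simpler cousin, a two-subdomain Schur-complement solve, shows the same divide-and-coordinate pattern Farhat describes in the quote. This is a toy sketch (NumPy, 1D Poisson, all sizes illustrative), not FETI:

```python
import numpy as np

# Toy non-overlapping domain decomposition for 1D Poisson on n interior
# nodes: two subdomains solve independently, then coordinate through one
# shared interface node via a Schur-complement equation. In miniature,
# this is the "break it into pieces, then they talk" pattern.
n = 21
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
f = np.full(n, (1.0 / (n + 1)) ** 2)   # -u'' = 1, scaled by h^2

m = n // 2                             # interface node index
L, R = slice(0, m), slice(m + 1, n)

# Each "processor" solves only its own block, independently.
uL0 = np.linalg.solve(A[L, L], f[L])
uR0 = np.linalg.solve(A[R, R], f[R])

# The interface equation (Schur complement) is the "talking" step.
s = (A[m, m]
     - A[m, L] @ np.linalg.solve(A[L, L], A[L, m])
     - A[m, R] @ np.linalg.solve(A[R, R], A[R, m]))
g = f[m] - A[m, L] @ uL0 - A[m, R] @ uR0
um = g / s

# Back-substitute the agreed interface value into each subdomain.
uL = np.linalg.solve(A[L, L], f[L] - A[L, m] * um)
uR = np.linalg.solve(A[R, R], f[R] - A[R, m] * um)
u_dd = np.concatenate([uL, [um], uR])

u_direct = np.linalg.solve(A, f)       # reference: one global solve
```

With thousands of subdomains the interface problem itself becomes large, which is where FETI's Lagrange-multiplier formulation and coarse-space preconditioning earn their keep.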


    Prof. Farhat is very well known for the FETI method
And he's flown with the Blue Angels (I would give anything to do that).

https://www.flickr.com/photos/stanfordeng/sets/7215764813436...


Thanks for the link.

I agree, the article is missing important information.


Very fluffy. Why did they "work around the clock"? Were they trying to beat the Nazis?


No, they were working hard before the machine was put to other uses.


I may have found more info on what "equations" were used.

This article states:

    But before the Army carved up the spoils, it gave Farhat and his team
    a chance to harness a significant slice of Excalibur's massive
    computational cake and demonstrate the power of algorithms to solve
    large-scale equations.

    The team succeeded far beyond Farhat's last endeavor.

    "The last time we'd had access to a large machine like this one, we
    probably ran our algorithms on about 3,000 processors," Farhat said,
    referring to the computations that helped his team win a 2002 Gordon
    Bell Prize from the Institute of Electrical and Electronics Engineers.
    "Now we ran on 22,000."
Of course I can't be certain, but it seems they may have reused the software and algorithms that won them the 2002 Gordon Bell award. [0]

An excerpt from "Salinas: A Scalable Software for High-Performance Structural and Solid Mechanics Simulations" [1]

    To the best of our knowledge, Salinas is today the only FE software
    capable of computing a dozen eigen modes for a million-dof FE
    structural model in less than 10 minutes. Given the pressing need for
    such computations, it is rapidly becoming the model for parallel FE
    analysis software in both academic and industrial structural
    mechanics/dynamics communities
It appears that they cut that from 10 minutes (3,000 processors) to under 3 minutes (22,000 processors) using the Army Research Laboratory's new Cray XC40.

[0] http://www.supercomp.org/sc2002/news_nrp_conclude.html

[1] https://www-it.desy.de/common/documentation/cd-docs/SC2002/p...
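For scale: the "dozen eigen modes" computation the Salinas paper describes is a large sparse symmetric eigenproblem. Salinas uses a parallel FETI-based solver; purely as a serial point of reference, SciPy's shift-invert Lanczos does the same kind of computation on a toy stiffness matrix (sizes here are illustrative, not the million-dof model from the paper):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Toy "structural" model: a 1D stiffness matrix with n degrees of freedom.
n = 2000
K = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
             shape=(n, n), format="csc")

# Twelve lowest eigenmodes via shift-invert Lanczos: sigma=0 targets the
# smallest eigenvalues, which dominate structural dynamics.
vals, modes = eigsh(K, k=12, sigma=0.0, which="LM")
vals = np.sort(vals)

# This matrix has known eigenvalues 2 - 2*cos(k*pi/(n+1)), handy as a check.
exact = 2.0 - 2.0 * np.cos(np.pi * np.arange(1, 13) / (n + 1))
```

The hard part at a million dof isn't the Lanczos iteration, it's the repeated shifted solves inside it, which is exactly what the parallel FETI machinery accelerates.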


Yes, I kept reading trying to figure out what the meat was here. Medium scale HPC initiative at _Stanford_?


What exactly did they solve on those machines?


This is old school monolithic system HPC trying to survive in a world of distributed HPC and GPUs.


Distributed HPC? GPUs? Wake me when distributed HPC or a GPU machine wins a Gordon Bell.



