Eighty Years of the Finite Element Method (springer.com)
179 points by leephillips on Nov 5, 2022 | 45 comments



My first programming role built on work by one of the engineers in this article. The core of the solver was a FORTRAN implementation of a paper on p-convergence. It was really amazing seeing our software predict how a small crack in a part of an aircraft would propagate. The 3D model produced matched the photograph shared later.

The lead developer (at the time) once said that the biggest software failure we can have is not incorrect results, but incorrect results without the user knowing. This is probably why I am so bothered by silent failures in my big company role now.


I know it's not a simple answer, but how would you embed checks to flag or highlight potentially incorrect results to the user?


For physical processes, you can sometimes lean on conservation laws; for financial models, no-arbitrage "laws". Sometimes it can help to run two different models, or different numerical methods, and compare their results. More generally, it's very, very hard. I am fairly sure there are multiple published results in computational fluid dynamics that are subtly wrong.
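To make that concrete, here's a toy Python sketch of both ideas (my own illustration; the oscillator problem and the tolerances are arbitrary choices, not anyone's production checks): run the same problem through two different integrators, monitor a quantity that physics says must be conserved, and warn the user if either check trips.

    def simulate_oscillator(dt, steps, method):
        # Harmonic oscillator x'' = -x; the energy E = (x^2 + v^2)/2 should stay at 0.5.
        x, v = 1.0, 0.0
        for _ in range(steps):
            if method == "euler":            # explicit Euler: energy drifts upward
                x, v = x + dt * v, v - dt * x
            else:                            # semi-implicit (symplectic) Euler
                v = v - dt * x
                x = x + dt * v
        return x, v

    x1, v1 = simulate_oscillator(1e-3, 10_000, "euler")
    x2, v2 = simulate_oscillator(1e-3, 10_000, "symplectic")

    energy_drift = abs((x1**2 + v1**2) / 2 - 0.5)    # conservation-law check
    method_gap = abs(x1 - x2)                        # cross-method check

    if energy_drift > 1e-3 or method_gap > 1e-3:     # tolerances are illustrative
        print(f"WARNING: energy drift {energy_drift:.2e}, method gap {method_gap:.2e}")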


> I am fairly sure there are multiple published results in computational fluid dynamics that are subtly wrong.

I wonder what that means for the accuracy of the climate models...


All manner of tests are done in fluid-dynamical modelling. Test suites are developed along with the models. Simulations are run with a range of alternative models, or an individual model is run with different conditions, and the differences between the simulations are studied in detail. Models must rely on a fairly large set of parameterizations of processes that cannot be resolved directly, and tests are done to see which parameterizations make the most sense.

Models stem from the academic environment, not the business environment. This means that the models are open-source, so everyone can see what everyone else is doing. And 'everyone' is a pretty large set.

Note that there is no commercial element to any of this, which means there is no incentive to hide problems. It's the reverse: reputation is earned by finding problems, not by hiding them.

A large part of the challenge of climate modelling is the leveraging of increasing computer power to resolve progressively smaller scales of motion, and this uncovers the need to understand those scales in isolation. Modelling is often used at this level also.

None of this is to say that climate models are perfect. They obviously are not. But the system of academic science is very good at improving models and, importantly, exposing their limits. An indication of the latter is the pairing of uncertainties with predictions: a hallmark of this scientific community.


They are all wrong. Historically they've overpredicted warming, and then the models are updated to fit the (new) data better. They perform worse in areas with less data, like polar regions.

To forestall knee-jerk downvotes: I'm not saying climate change isn't real or that anthropogenic global warming doesn't exist. I'm saying the models are not yet developed enough to predict very accurately. Early heliocentric models made poor predictions too because they assumed circular rather than elliptical orbits, but they were still more "right" than geocentric models.


In fact, climate models going back over fifty years have performed quite well; surprisingly so:

https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/201...

https://www.theguardian.com/environment/climate-consensus-97...


It depends on how you define "quite well". They are definitely directionally correct and in the right ballpark. This is a politically sensitive research area so the researchers are incentivized to make strong claims that you wouldn't make about, say, predicting sports outcomes or the stock market.

The first link you posted kind of punts on the hard part by saying that the reason the models overpredicted warming is that CO2 didn't rise as much as they expected, so if you put the actual observed CO2 concentration in, the temperature prediction comes out closer to what was observed. But the CO2 concentration is a parameter of the model, so they didn't capture its dynamics properly and then had to retroactively change it to match the observed data.

Again, I want to reiterate that I'm not disputing the process of climate change or saying it's not a problem. I'm saying that modeling it is hard and historically the models have overestimated warming.


That doesn’t seem like a fair criticism. Most climate models are physics models. They can’t know how much CO2 people are going to add to the atmosphere. But if you tell them how we altered the composition of the atmosphere, they predict the temperature change correctly. That’s what I mean by good performance.


Part of the job of the modeler is to make good decisions about the parameters and their uncertainties. If they are systematically overestimating CO2 and getting high predictions as a result then they aren't modeling properly, unless there are exogenous reasons why CO2 is lower than expected. Which might be possible for all I know (Global financial crisis maybe? China growth lower than expected?) but the dynamics of CO2 are part of the model they are using for prediction so they can't just punt on it by saying they didn't know what CO2 would be. Prediction's hard, especially about the future, as the saying goes.


Can you suggest good places to look for more info on this?

I have tried a few times to find info on the accuracy of these models and couldn't find much. And most models seem to be closed source.


It's probably impossible to accurately simulate the climate of the planet decades into the future.

That does not meaningfully detract from the evidence for human-caused global warming, however.


I have to applaud the subtle elegance of the sarcasm here


No sarcasm intended. The science of climate change rests on many legs. It was already established long before massive models on the scale we have today were possible. Computer simulations attempt to understand how it will affect us, but do not contribute much evidence.


All models are wrong, some are useful.


A lot of computer models are deliberately wrong but useful as well, which is another ball game in terms of analysing results.


> I know it's not a simple answer, but how would you embed checks to flag or highlight potentially incorrect results to the user?

For starters, some problems do have analytical solutions. You can compare a FEM model of these problems with the known analytical solution and see whether it's close enough or not. One of them is the elasto-plastic plate with a hole.

You can also run unit tests at the element level.
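As a sketch of the element-level idea (my own toy example in Python, not production code): for a 2-node linear bar element the stiffness matrix is known in closed form, so a quadrature-based assembly routine can be checked against it, and a rigid-body translation must produce zero internal force.

    import numpy as np

    def bar_stiffness_quadrature(E, A, L, n_gauss=2):
        # k_ij = integral of B_i * E * A * B_j over the element, by Gauss quadrature.
        # For linear shape functions B = [-1/L, 1/L] is constant, so this should be exact.
        points, weights = np.polynomial.legendre.leggauss(n_gauss)
        B = np.array([-1.0 / L, 1.0 / L])
        k = np.zeros((2, 2))
        for _, w in zip(points, weights):
            k += w * (L / 2) * E * A * np.outer(B, B)
        return k

    k = bar_stiffness_quadrature(E=210e9, A=1e-4, L=0.5)
    k_exact = (210e9 * 1e-4 / 0.5) * np.array([[1.0, -1.0], [-1.0, 1.0]])

    assert np.allclose(k, k_exact)                      # matches the analytical element
    assert np.allclose(k @ np.array([1.0, 1.0]), 0.0)   # rigid-body motion -> zero force
    print("element-level checks passed")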


My favourite resources for FEM were the book and courses that Hans Petter Langtangen (RIP) wrote at Simula. FEniCS made FEM so easy; it's truly an excellent software project that is unfortunately not as well known as it should be within the community.

http://hplgit.github.io/num-methods-for-PDEs/doc/web/index.h...

http://hplgit.github.io/num-methods-for-PDEs/doc/pub/index.h...


> FEniCS made FEM so easy

https://fenicsproject.org/

Indeed, I was blown away when I saw it for the first time over a decade ago, compared to the convoluted C++ FEM libraries I had seen before that.
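For anyone curious what "easy" means here, a Poisson solve looks roughly like this in the legacy fenics/dolfin Python API (a sketch in the spirit of the tutorials linked above; the newer FEniCSx API is different, so treat it as illustrative rather than copy-paste for current releases):

    from fenics import *

    mesh = UnitSquareMesh(16, 16)
    V = FunctionSpace(mesh, "P", 1)

    # Manufactured solution 1 + x^2 + 2y^2, used as boundary data and for verification
    u_exact = Expression("1 + x[0]*x[0] + 2*x[1]*x[1]", degree=2)
    bc = DirichletBC(V, u_exact, "on_boundary")

    u = TrialFunction(V)
    v = TestFunction(V)
    f = Constant(-6.0)                      # -laplace(u_exact) = -6
    a = dot(grad(u), grad(v)) * dx          # bilinear (stiffness) form
    L = f * v * dx                          # linear (load) form

    u_h = Function(V)
    solve(a == L, u_h, bc)
    print("L2 error:", errornorm(u_exact, u_h, "L2"))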


Used it in its infancy for my PhD work - absolutely loved it, despite its teething problems. Also +1 for Langtangen!


My numerical analysis professor once told the class a story about how he used the finite element method to solve stresses on pieces of metal for the soviet space program. Computers were too expensive, so they did it by hand.

They cleared a university classroom of all the chairs and desks, and rolled out pieces of paper to cover the floor. The team took their shoes off and proceeded from one corner of the room to the other. If someone made a mistake, you had a record available to find it, and you could simply rip up the paper at the point where the mistake was made and roll out fresh paper to take its place.

Apparently a triangle took a few days.

SolidWorks does the same math today.


I once read or heard a similar anecdote about Ludwig Prandtl, who set up a computation method using a room full of people each doing one step of a calculation. I recall it being described as a finite element method, but that would predate the origin of FEM in this article by a few decades. Maybe it was just some other numerical method.


Huh? There's a long history of doing this — 'computers' were people long before they were machines.


Yes, indeed. My father's first job was as a "computer" at an astronomical observatory (in the fifties of the 20th century, when vacuum tube computers had just started to appear, but they had astronomical prices, so they were out of reach for an astronomical observatory).


I'm aware of human "computers", but my recollection was that in the case of Prandtl it was an application of FEM. I did not realize FEM hadn't been developed at that time, as this article shows, so it must have been something else.


As a poster above has already said, it might have been a finite difference method, because they solve the same problems.

Unlike FEM, finite difference methods have been used ever since the origins of differential and integral calculus, with Newton and Leibniz.

There are also precursors of the modern FEM, like the Ritz or Galerkin methods, which could have been used by Prandtl.


I assume it would be a finite differences calculation, which is also very easy to parallelize/vectorize. They solve the same class of problems, but calculations with finite differences don't always converge the way that finite elements do.


That's correct, it was finite difference methods. There are many variations with different properties such as whether they'll definitely converge or not.


Very nice summary paper on FEM! FEM has now become a key part of any physical product development process. I used to work in the FEM area and a few years back switched to unrelated software domains, so I felt a bit of nostalgia reading the paper. Thanks for posting!


I just skimmed this and I never worked in FEM but this really brought me back. I had the pleasure of meeting several of the people in the article (Hughes, Oden, Babuska, Demkowicz) and couldn’t help feeling some nostalgia as well.


Great! Hughes, Oden and Babuška are three of my heroes. My work is in FEM, specifically the theory of mixed finite elements and eigenvalue problems, to both of which Babuška made important contributions. Oden I know from elasticity for the most part.


I was lucky enough to have taken two FEA classes from Hughes in the 80s. I worked with his DLEARN [1] code many times back then. He was a great teacher.

I never had the pleasure of taking classes from Juan Carlos Simo, but he was known for outstanding classes. His was a brilliant light and life, cut short by cancer at the young age of 42.

[1]. DLEARN is a linear static and dynamic finite element code written in Fortran. https://github.com/fit087/fem_hughes


I still work on FEA and it's as technically interesting as ever


Thanks for posting! Reminds me of long nights spent fussing over ANSYS models.


I wonder if there's a simple implementation of FEM (maybe < 50 lines of code) that would help in understanding the essence of the algorithm.


You can check this paper. It's the one my advisor recommended when I started my PhD. It's a FEM implementation in 50 lines of MATLAB.

https://www.math.hu-berlin.de/~cc/cc_homepage/download/1999-...


What I found amazing about FEM is not the detail of how to implement it in code, but all the PDE theory and approximation theory: how you can express the original continuous-domain, infinite-dimensional problem in a weak form using an infinite-dimensional space of test functions, then approximate that weak form with a finite-dimensional Galerkin approximation using a finite-dimensional space of test functions, and use that to define a finite-dimensional system of equations to solve. Then there is the theory for the conditions under which you can guarantee that the approximate solutions converge toward the true solution as you increase the mesh resolution, and how fast the convergence will be.

Some of this is summarised in this paper in section 2 (model problem) and section 3 (Galerkin discretisation of the problem), but not in a way that will communicate the mathematical ideas to anyone who hasn't already taken a course on the theory -- you probably need a couple of courses on real analysis and a course on PDEs as prerequisites.
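For anyone who wants the one-paragraph version, the chain for the standard Poisson model problem looks roughly like this (my own summary in standard notation):

    Strong form:  -\nabla^2 u = f  in \Omega,   u = 0  on \partial\Omega

    Weak form:    find u \in H^1_0(\Omega) such that
                  \int_\Omega \nabla u \cdot \nabla v \, dx = \int_\Omega f\, v \, dx
                  for all v \in H^1_0(\Omega)

    Galerkin:     replace H^1_0(\Omega) by V_h = span\{\phi_1, ..., \phi_N\} (e.g. hat
                  functions on a mesh), write u_h = \sum_j U_j \phi_j, and test against
                  each \phi_i, which gives the linear system K U = F with
                  K_{ij} = \int_\Omega \nabla\phi_j \cdot \nabla\phi_i \, dx  and
                  F_i    = \int_\Omega f\, \phi_i \, dx

The convergence theory then bounds the error of u_h by how well V_h can approximate u (Céa's lemma in the simplest coercive case), which is where mesh resolution and polynomial degree come in.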


Yup. I'm trying to learn computational fluid dynamics for fun, and realised it's basically all numerical methods for PDEs, with some fluid dynamics-specific shortcuts. I know I've previously seen this stuff in relation to pricing derivatives in finance.

It's clear the underlying techniques are very powerful any time you have a thing whose rate of change varies as other things change. Once I understand everything better, I will try it on e.g. capacity planning for cloud resources and such.


The algorithm for linear PDEs usually boils down to some kind of meshing/discretization (often the hard part) to produce a (usually sparse) linear system, which is then solved using standard numerical methods. In the basic 1D, first-order case, it winds up being exactly the same thing as the equivalent finite difference method.
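A quick illustration of that 1D claim (my own toy example): assembling linear elements for -u'' = f on a uniform mesh gives the familiar tridiagonal (2, -1) matrix, identical to the second-order finite difference stencil up to a factor of h (and up to how f is handled on the right-hand side).

    import numpy as np

    n = 6                       # interior nodes
    h = 1.0 / (n + 1)

    # FEM assembly: add each element's 2x2 stiffness matrix into the global matrix
    K = np.zeros((n + 2, n + 2))
    k_el = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(n + 1):
        K[e:e + 2, e:e + 2] += k_el
    K_int = K[1:-1, 1:-1]       # homogeneous Dirichlet BCs: keep interior nodes only

    # Finite difference operator for -u'': (-u_{i-1} + 2 u_i - u_{i+1}) / h^2
    D = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

    print(np.allclose(K_int, h * D))    # True: same matrix up to the factor h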


You can do it in Excel. I don't have a particularly good example.


A Springer publication that's free, instead of priced at a hyper-inflated level to extort academia for just past what it can afford? What year is it?


This is great. I studied FEM in school and got my first job in industry as a structural analyst. I started my own firm when I was 25 and employed 22 analysts that I contracted to Pontiac Motors.


Former structural analyst here. :waves: If you worked on consumer products there (aka cars and their components), do you happen to remember what the design-to fatigue life was? I talked with another analyst several years ago who had worked at Chrysler, and he told me they had used 125,000 miles. (I was in aerospace so not sure how many miles comprised a fatigue cycle, etc.)


Not exactly what you asked, but:

I worked as a structural analyst on passenger train cars (metros, LRVs, etc) for a while, as my first job after grad school actually.

Depending on the project (client requirements), we designed for 25-35 year lifetimes, with 12-24h operation typical. That usually amounted to millions of kilometers.

We had load cases with varying numbers of cycles. E.g. curves with light loading might have been millions of cycles, but max (or even over-max) loading might have been tens or hundreds of thousands of cycles.

All load cases were determined based on usage stats from the operator and on testing conducted to measure accelerations on the operator's infrastructure.


I learned FEM at school. It really drove home the power of computers. The basics of FEM are pretty simple; you just need to do many of these simple calculations.



