My first programming role built on the work of one of the engineers in this article. The core of the solver was a FORTRAN implementation of a paper on p-convergence. It was really amazing seeing our software predict how a small crack in an aircraft part would propagate. The 3D model it produced matched the photograph shared later.
The lead developer (at the time) once said that the biggest software failure we can have is not incorrect results, but incorrect results without the user knowing. This is probably why I am so bothered by silent failures in my big company role now.
For physical processes, you can sometimes lean on conservation laws; for financial ones, no-arbitrage "laws". Sometimes it can help to run two different models, or two different numerical methods, and compare their results. More generally, it's very, very hard. I am fairly sure there are multiple published results in computational fluid dynamics that are subtly wrong.
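To make the "two methods" idea concrete, here is a toy sketch (the ODE, methods, and tolerance are purely illustrative, nothing from a real code base): integrate the same problem two independent ways and flag the run if they disagree beyond a threshold.

```python
import numpy as np

# Toy cross-check: integrate dy/dt = -2*y with forward Euler and with RK4,
# then flag the point where the two solutions start to disagree.

def euler(f, y0, t):
    y = np.empty_like(t)
    y[0] = y0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        y[i + 1] = y[i] + h * f(t[i], y[i])
    return y

def rk4(f, y0, t):
    y = np.empty_like(t)
    y[0] = y0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        k1 = f(t[i], y[i])
        k2 = f(t[i] + h / 2, y[i] + h * k1 / 2)
        k3 = f(t[i] + h / 2, y[i] + h * k2 / 2)
        k4 = f(t[i] + h, y[i] + h * k3)
        y[i + 1] = y[i] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

f = lambda t, y: -2.0 * y
t = np.linspace(0.0, 5.0, 51)
y_euler, y_rk4 = euler(f, 1.0, t), rk4(f, 1.0, t)

# Arbitrary 1% tolerance -- in practice this would come from error estimates.
rel_diff = np.abs(y_euler - y_rk4) / np.maximum(np.abs(y_rk4), 1e-12)
if rel_diff.max() > 1e-2:
    print(f"methods disagree by {rel_diff.max():.2%} at t={t[rel_diff.argmax()]:.2f}")
```

The same pattern scales up: run the cheap method and the expensive one, and treat any divergence as a signal to investigate rather than a result to publish.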
All manner of tests are done in fluid-dynamical modelling. Test suites are developed along with models. Simulations are run with a range of alternative models, or a single model is run under different conditions, and the differences between the simulations are studied in detail. Models must rely on a fairly large set of parameterizations of processes that cannot be resolved directly, and tests are done to see which parameterizations make the most sense.
Models stem from the academic environment, not the business environment. This means that the models are open-source, so everyone can see what everyone else is doing. And 'everyone' is a pretty large set.
Note that there is no commercial element to any of this, which means there is no incentive to hide problems. It's the reverse: reputation is earned by finding problems, not by hiding them.
A large part of the challenge of climate modelling is leveraging increasing computer power to resolve progressively smaller scales of motion, and this uncovers the need to understand those scales in isolation. Modelling is often applied at that level as well.
None of this is to say that climate models are perfect. They obviously are not. But the system of academic science is very good at improving models and, importantly, exposing their limits. An indication of the latter is the pairing of uncertainties with predictions: a hallmark of this scientific community.
They are all wrong. Historically they've overpredicted warming, and then been updated to fit the (new) data better. They perform worse in areas with less data, like polar regions.
To forestall knee-jerk downvotes: I'm not saying climate change isn't real or that anthropogenic global warming doesn't exist. I'm saying the models are not yet developed enough to predict very accurately. Early heliocentric models made poor predictions too, because they assumed circular rather than elliptical orbits, but they were still more "right" than geocentric models.
It depends on how you define "quite well". They are definitely directionally correct and in the right ballpark. This is a politically sensitive research area so the researchers are incentivized to make strong claims that you wouldn't make about, say, predicting sports outcomes or the stock market.
The first link you posted is kind of punting on the hard part: it says the models overpredicted warming because CO2 didn't rise as much as they expected, so if you put the actual observed CO2 concentration in, the temperature prediction comes out closer to what was observed. But the CO2 concentration is a parameter of the model, so they didn't capture its dynamics properly and then had to retroactively change it to match the observed data.
Again, I want to reiterate that I'm not disputing the process of climate change or saying it's not a problem. I'm saying that modeling it is hard and historically the models have overestimated warming.
That doesn’t seem like a fair criticism. Most climate models are physics models. They can’t know how much CO2 people are going to add to the atmosphere. But if you tell them how we altered the composition of the atmosphere, they predict the temperature change correctly. That’s what I mean by good performance.
Part of the job of the modeler is to make good decisions about the parameters and their uncertainties. If they are systematically overestimating CO2 and getting high predictions as a result then they aren't modeling properly, unless there are exogenous reasons why CO2 is lower than expected. Which might be possible for all I know (Global financial crisis maybe? China growth lower than expected?) but the dynamics of CO2 are part of the model they are using for prediction so they can't just punt on it by saying they didn't know what CO2 would be. Prediction's hard, especially about the future, as the saying goes.
No sarcasm intended. The science of climate change rests on many legs. It was already established long before massive models on the scale we have today were possible. Computer simulations attempt to understand how it will affect us, but they do not contribute much evidence.
> I know it's not a simple answer, but how would you embed checks to flag or highlight potentially incorrect results to the user?
For starters, some problems do have analytical solutions. You can compare a FEM model of these problems with the known analytical solution and see if it's close enough or not. One of them is the elasto-plastic plate with a hole.
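As a minimal sketch of that kind of verification, assuming nothing fancier than a 1D Poisson problem with a known closed-form solution (not the plate-with-a-hole case, which needs a real mesh):

```python
import numpy as np

# Verification case: -u'' = sin(pi*x) on (0, 1) with u(0) = u(1) = 0,
# whose exact solution is u(x) = sin(pi*x) / pi**2.
# Assemble piecewise-linear FEM on a uniform mesh and check that the
# nodal error shrinks roughly like h^2 under refinement.

def fem_poisson_1d(n):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Stiffness matrix for linear elements (interior nodes only).
    K = (np.diag(np.full(n - 1, 2.0)) +
         np.diag(np.full(n - 2, -1.0), 1) +
         np.diag(np.full(n - 2, -1.0), -1)) / h
    # Load vector; lumped (nodal) quadrature is good enough here.
    F = h * np.sin(np.pi * x[1:-1])
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K, F)
    return x, u

for n in (8, 16, 32):
    x, u = fem_poisson_1d(n)
    exact = np.sin(np.pi * x) / np.pi**2
    print(n, np.abs(u - exact).max())  # error drops by roughly 4x per refinement
```

If the error doesn't shrink at the expected rate when you refine the mesh, something is wrong in the formulation or the code.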
My favourite resources for FEM were the book and courses that Hans Petter Langtangen (RIP) wrote at Simula. FEniCS made FEM so easy; it's truly an excellent software project that is unfortunately not as well known within the community as it should be.
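For anyone curious, the classic Poisson demo from that tutorial looks roughly like this in the legacy FEniCS (DOLFIN) API; the newer FEniCSx API is different, so treat this as a sketch rather than copy-paste:

```python
from fenics import *

# Poisson problem -Laplace(u) = -6 on the unit square, with the exact solution
# u = 1 + x^2 + 2*y^2 imposed on the boundary (the tutorial's manufactured case).
mesh = UnitSquareMesh(8, 8)
V = FunctionSpace(mesh, 'P', 1)

u_D = Expression('1 + x[0]*x[0] + 2*x[1]*x[1]', degree=2)
bc = DirichletBC(V, u_D, 'on_boundary')

u, v = TrialFunction(V), TestFunction(V)
f = Constant(-6.0)
a = dot(grad(u), grad(v)) * dx
L = f * v * dx

u = Function(V)
solve(a == L, u, bc)
print('L2 error:', errornorm(u_D, u, 'L2'))
```

The whole weak form fits in two lines, which is exactly why the project deserves to be better known.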
My numerical analysis professor once told the class a story about how he used the finite element method to solve stresses on pieces of metal for the soviet space program. Computers were too expensive, so they did it by hand.
They cleared a university classroom of all the chairs and desks, and rolled out pieces of paper to cover the floor. The team took their shoes off and proceeded from one corner of the room to the other. If you found that someone had made a mistake, you had a record available to locate it, and you could simply rip up the paper at the point where the mistake was made and roll out fresh paper to take its place.
I once read or heard a similar anecdote about Ludwig Prandtl, who set up a computation method using a room full of people, each doing one step of a calculation. I recall it being described as a finite element method, but that would predate the origin of FEM in this article by a few decades. Maybe it was just some other numerical method.
Yes, indeed. My father's first job was as a "computer" at an astronomical observatory (in the fifties of the 20th century, when vacuum tube computers had just started to appear, but they had astronomical prices, so they were out of reach for an astronomical observatory).
I'm aware of human "computers", but my recollection was that in Prandtl's case it was an application of FEM. I didn't realize FEM hadn't been developed yet at that time, as this article shows, so it must have been something else.
I assume it would be a finite differences calculation, which is also very easy to parallelize/vectorize. They solve the same class of problems, but calculations with finite differences don't always converge the way that finite elements do.
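The "easy to vectorize" point is easy to see in code; for instance, a Jacobi sweep for the 2D Laplace equation is just an average of shifted array slices (a toy example with made-up boundary values):

```python
import numpy as np

# Jacobi relaxation for Laplace's equation on a square grid:
# each interior point is repeatedly replaced by the average of its neighbours.
def jacobi_laplace(u, n_sweeps=500):
    u = u.copy()
    for _ in range(n_sweeps):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
    return u

u = np.zeros((50, 50))
u[0, :] = 1.0            # hold one edge at a fixed value
u = jacobi_laplace(u)
print(u[25, 25])         # interior value after relaxation
```

Every interior point is updated from its neighbours independently, which is also why a room full of people (or a GPU) can split the work.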
That's correct, it was finite difference methods. There are many variations with different properties such as whether they'll definitely converge or not.
Very nice summary paper on FEM! FEM has now become a key part of any physical product development process. I used to work in the FEM area and switched a few years back to unrelated software domains, so I felt a bit of nostalgia reading the paper. Thanks for posting!
I just skimmed this and I never worked in FEM but this really brought me back. I had the pleasure of meeting several of the people in the article (Hughes, Oden, Babuska, Demkowicz) and couldn’t help feeling some nostalgia as well.
Great! Hughes, Oden and Babuška are three of my heroes. My work is in FEM, specifically the theory of Mixed Finite Elements and Eigenvalue problems, to both of which Babuška made important contributions. Oden I know from elasticity for the most part.
I was lucky enough to have taken two FEA classes from Hughes in the 80s. I worked with his DLEARN [1] code many times back then. He was a great teacher.
I never had the pleasure of taking classes from Juan Carlos Simo, but his classes were known to be outstanding. His was a brilliant light and a life cut too short by cancer, at the young age of 42.
What I found amazing about FEM is not the detail of how to implement it in code, but all the PDE theory and approximation theory: how you can express the original continuous-domain, infinite-dimensional problem in a weak form using an infinite-dimensional space of test functions, then approximate the weak form with a finite-dimensional Galerkin approximation using a finite-dimensional space of test functions, and use that to define a finite-dimensional system of equations to solve. Then there's the theory of the conditions under which the approximate solutions from your finite-dimensional approximation are guaranteed to converge toward the true solution as you increase the mesh resolution, and how fast the convergence will be.
Some of this is summarised in this paper in section 2 (model problem) and section 3 (Galerkin discretisation), but not in a way that will communicate the mathematical ideas to anyone who hasn't already taken a course on the theory -- you probably need a couple of courses on real analysis and a course on PDEs as prerequisites.
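For the model Poisson problem, the whole chain fits in a few lines (standard notation, not specific to this paper):

```latex
\begin{align*}
  &\text{Model problem: } -\Delta u = f \ \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega.\\
  &\text{Weak form: find } u \in V = H^1_0(\Omega) \text{ such that }
    \int_\Omega \nabla u \cdot \nabla v \, dx = \int_\Omega f\, v \, dx
    \quad \forall v \in V.\\
  &\text{Galerkin: take } V_h = \operatorname{span}\{\varphi_1,\dots,\varphi_n\} \subset V
    \text{ and } u_h = \textstyle\sum_j c_j \varphi_j, \text{ giving the linear system}\\
  &\qquad K c = F, \qquad
    K_{ij} = \int_\Omega \nabla\varphi_j \cdot \nabla\varphi_i \, dx, \qquad
    F_i = \int_\Omega f\, \varphi_i \, dx.
\end{align*}
```

The convergence theory (Céa's lemma and friends) is what connects the error of u_h to how well V_h can approximate u, and that's the part that genuinely needs the analysis background.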
Yup. I'm trying to learn computational fluid dynamics for fun, and realised it's basically all numerical methods for PDEs, with some fluid dynamics-specific shortcuts. I know I've previously seen this stuff in relation to pricing derivatives in finance.
It's clear the underlying techniques are very powerful any time you have a quantity whose rate of change varies as other things change. Once I understand everything better, I'll try it on e.g. capacity planning for cloud resources and the like.
The algorithm for linear PDEs usually boils down to some kind of meshing/discretization (often the hard part) that produces a (usually sparse) linear system to solve with standard numerical methods. In the basic 1D, first-order case, it winds up being exactly the same thing as the equivalent finite difference method.
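A quick way to see the 1D claim, assuming a uniform mesh and a lumped load vector on the FEM side (a toy check, not a general statement):

```python
import numpy as np

# For -u'' = f on a uniform 1D mesh with homogeneous Dirichlet BCs, the
# linear-element FEM system (with lumped load) is the central finite-difference
# system scaled by h, so the two solutions coincide.

n, h = 8, 1.0 / 8
main, off = np.full(n - 1, 2.0), np.full(n - 2, -1.0)
tridiag = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

K = tridiag / h       # FEM stiffness matrix (piecewise-linear elements)
A = tridiag / h**2    # finite-difference matrix for -u''

x = np.linspace(0.0, 1.0, n + 1)[1:-1]
f = np.sin(np.pi * x)
u_fem = np.linalg.solve(K, h * f)   # lumped FEM load: roughly h * f at each node
u_fd = np.linalg.solve(A, f)
print(np.allclose(u_fem, u_fd))     # True: same system up to the factor h
```

With higher-order elements, consistent (non-lumped) load vectors, or non-uniform meshes, the two methods stop being identical, which is where FEM starts to earn its keep.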
This is great. I studied FEM in school and got my first job in industry as a structural analyst. I started my own firm when I was 25 and employed 22 analysts that I contracted to Pontiac Motors.
Former structural analyst here. :waves: If you worked on consumer products there (aka cars and their components), do you happen to remember what the design-to fatigue life was? I talked with another analyst several years ago who had worked at Chrysler, and he told me they had used 125,000 miles. (I was in aerospace so not sure how many miles comprised a fatigue cycle, etc.)
I worked as a structural analyst on passenger train cars (metros, LRVs, etc) for a while, as my first job after grad school actually.
Depending on the project (client requirements), we designed for 25-35 year lifetimes, with 12-24h operation typical. That usually amounted to millions of kilometers.
We had load cases with varying numbers of cycles. E.g. curves with light loading might have been millions of cycles, but max (or even over-max) loading might have been 10s or 100s of thousands of cycles.
All load cases were determined based on usage stats from the operator and testing conducted to measure accelerations on the operator's infrastructure.
I learned FEM at school. It really drove home the power of computers. The basics of FEM are pretty simple; you just need to do many of these simple calculations.