The crash of Air France flight 447 (2021) (admiralcloudberg.medium.com)
174 points by gus_leonel on Aug 11, 2023 | 221 comments



If you liked this article, definitely consider checking out other articles written by Kyra (Admiral_Cloudberg). She has done a ton of articles on almost all notable (near)crashes, including their root causes, investigations, and subsequent effects on the airplane industry.

For some crashes that have interesting causes (at least from an engineering perspective) beyond "maintenance failed to identify a problem before takeoff" or "pilots failed to identify a problem or took the wrong actions", I strongly recommend the articles on TWA 800 (1996) [1] and the near-crash of SmartLynx Estonia 9001 (2018) [2]. The first goes into great detail on exactly _how_ the NTSB discovered the root cause, whereas the second involves a logic oversight in the flight computers of a modern Airbus plane.

[1]: https://admiralcloudberg.medium.com/memories-of-flame-the-cr... [2]: https://admiralcloudberg.medium.com/the-dark-side-of-logic-t...


The article on TWA 800 was excellent, especially for those that don't know much about what happened.

I've read the accident report, a few books, and even some of the conspiracy theories and this is the best "factual" summary.


What a great article on TWA 800. My gramps was a retired manager from TWA and when that happened he was all over it calling his buddies still working there. Thanks for the article!


I fly a desk herding cats, I mean devs, for a living now. But as a Navy-trained aviator, this and the Colgan Air mishap in Buffalo always make me cringe. I can't entirely judge the crew, because the human mind is a strange thing that has ways of rationalizing truly abnormal situations and behavior. That said, an obvious Crew Resource Management failure compounded the issues here.

That said, I believe aerobatics and out-of-control flight training need to be a lot more common in the civilian sector for professional pilots than they are. Awareness of your angle of attack, and knowledge of how an aircraft behaves at high AOA close to (and if possible beyond) stall, don't seem to be taught well in the civilian sector, or at least outside the tactical jet community. The idea that a professional pilot can't understand that a deeply stalled aircraft can be nose-up and still have a heinous sink rate is profoundly bothersome to me, just like the Colgan Air pilots who couldn't recognize severe wing rock as a sign of a deeply-departed state. Both of these required (if even possible at that point) aggressive nose-down pitch inputs to break the AOA. And even then, the aircraft is going to lose a ridiculous amount of altitude before it gains enough airspeed back to give you enough pitch authority to scoop out of the ensuing dive. Horrible.

It also boggles my mind that Airbus didn't think at first to design an aircraft which avoided the split control problem they had. One person and one person only needs to be in control of the aircraft at all times.


Time and time again we learn that people do stupid things under stress. Consider Transair flight 810 [1], in which a simple engine failure during takeoff results in a complete loss of the aircraft when both pilots mistakenly identify the _wrong_ engine as malfunctioning, despite identifying the failure correctly only minutes earlier. OP has an article on that crash too, actually [2].

[1]: https://en.wikipedia.org/wiki/Transair_Flight_810 [2]: https://admiralcloudberg.medium.com/dark-waters-of-self-delu...


Another very common failure mode, and really all you can do is mandate an SOP that the crew member vocalizes which engine they're turning off, and the other crew member verifies that it's the correct engine.

The other answer to "people do stupid things under stress" is the common maxim that every emergency procedure in every aircraft starts the same way. Somewhere in the cockpit is probably a standard-issue mechanical 8-day clock, and the first thing you do in any emergency is stop and wind the clock. Not because it does anything to the aircraft, precisely the opposite. It makes you stop for a second or two for the "oh shit" moment to pass. Then you analyze what's actually going on. The other saying that's always briefed is "no fast hands in the cockpit." You may need to be expeditious, but you still vocalize what you're doing with your crew or wingman and methodically go through the emergency procedure. You get in a lot more trouble flipping switches willy-nilly based on what you think you have than if you just stop, breathe, and look at what the aircraft is telling you. I still can recite the steps from memory:

- Maintain aircraft control

- Analyze the situation

- Apply the appropriate emergency procedure(s)

- Land when conditions permit


> The other answer to "people do stupid things under stress" is the common maxim that every emergency procedure in every aircraft starts the same way

A good friend of mine [UK-based] used to fly fast jets for the RAF a long time ago (we're talking Lightning/Jaguar/Tornado).

He recently commented to me that "there were operational deployments where it went from boring to bedlam in the blink of an eye much of the time"...

Teaching humans how better to manage going from boring to bedlam is the key, I think.


I see a lot of Navy pilots say "fingers and toes" to ground themselves, prevent white-knuckling, and as a "reality check". I started doing it in moments in life where I recognize that I am getting intense or lost in the moment. I've even used the phrase to calm other people down.

Random example:

https://www.linkedin.com/posts/kim-kc-campbell_a-10-pilot-re...


*Another very common failure mode, and really all you can do is mandate an SOP that the crew member vocalizes which engine they're turning off, and the other crew member verifies that it's the correct engine.*

They did do that on the flight in question.


Maybe not even aerobatic training but glider training. Pilots with glider experience are some of the best performers in emergencies.


I used to fly hang gliders. For almost every issue (except for flying too fast), when you get into trouble in a glider you lower your AoA and increase speed.


Sully's 'Miracle on the Hudson' is (largely?) attributed to his skill as a glider pilot.[0]

[0]https://en.wikipedia.org/wiki/US_Airways_Flight_1549


Also, Air Canada Flight 143, the Gimli Glider.

https://en.wikipedia.org/wiki/Gimli_Glider



I wish glider training was mandatory for every professional pilot. Nobody really learns to fly until they can fly without an engine.


You certainly learn to pay attention to the aerodynamics. Lots and lots and lots of high-bank turns. Many of them relatively low and close to stalling speed.

Some occasionally below stalling speed, if you're high enough to safely reduce margins so much that a gust of tailwind puts you below, which requires you to just calmly follow through with constant angle of attack, descending, rather than force the nose up.

Short-field landings in unfamiliar places, requiring judgements of both approach angle and terrain. Moderate to heavy turbulence all day, speed varying from stalling to redline. Retractable landing gear. Flaps if you want that source of mental load too, in all speed regimes. Constant consideration of wind. One shot landings. Mountain flying with constant consideration of where the safe exit is. Consideration of deteriorating weather. Always aware of nearest landing site.

It's really non-stop training in all the fundamental parts of flying, minus engine operation, airspace and ATC communication.


To be fair, it sounds like this crash caused airlines and regulators to demand much more adverse event training, including high-altitude stall recovery. But according to the article, Airbus hasn't addressed the split control issue…


> this crash caused airlines and regulators to demand much more adverse event training, including high-altitude stall recovery

Q: How many pilots would attempt a stall recovery if the aircraft's instruments were not indicating they were stalled?


In the case of 447, they were warned about a stall, repeatedly.


> In the case of 447, they were warned about a stall, repeatedly

It's not that simple.

When the aircraft's computer considered the inputs invalid, the stall warning was muted. When they decreased the pitch (which would be essential to recover from the stall!), the stall warning sounded.

"The angle of attack had then reached 40°, and the aircraft had descended to 35,000 feet (10,668 m) with the engines running at almost 100% N1 (the rotational speed of the front intake fan, which delivers most of a turbofan engine's thrust). The stall warnings stopped, as all airspeed indications were now considered invalid by the aircraft's computer because of the high angle of attack. The aircraft had its nose above the horizon, but was descending steeply.

Roughly 20 seconds later, at 02:12 UTC, Bonin decreased the aircraft's pitch slightly. Airspeed indications became valid, and the stall warning sounded again; it then sounded intermittently for the remaining duration of the flight, stopping only when the pilots increased the aircraft's nose-up pitch. From there until the end of the flight, the angle of attack never dropped below 35°."[0]

[0] https://en.wikipedia.org/wiki/Air_France_Flight_447


Jesus Christ what a terrifying situation. I read about this over and over again and it's still mind boggling how multiple systems and "exceptions" collided resulting in the worst scenario possible. I know the pilots are also to "blame" in a way, but man... The safety systems sure do have a lot of obscure modes and different ways to handle error and failures...


Yes, all of that is true, but the stall warning did sound almost immediately once autopilot disengaged—the crew was just too freaked out to know what to do with the warning.


There isn't a "split control issue," the split control setup is fundamental to the way the Airbus control schema works. Changing it would be a non-starter.


You say this as if the split control setup is a feature and not a bug. Can you go into why this is the case?

For me at least, it seems like the condition where both pilots are inputting control on the stick or rudder is always an error condition. Even if they're both inputting the same inputs.

The scheme directly contributed to the crash for two reasons: obviously, if the co-pilot hadn't been pulling back on the stick, they would have been able to get out of the stall. But additionally, the two crew members were spending crew resources on redundant tasks: both of them thought that they were the one aviating and that the other was in charge of figuring out why the flight computer seemed to be behaving badly. Possibly if one of them had dedicated their attention to the flight computer, they'd have figured out what the actual flight parameters were.


According to the author, the split control design hindered communication between pilots, since one cannot feel what the other is doing.

That seems like a crucial functional omission.


Why? The split control never made any fucking sense. It resulted in several crashes where inputs were averaged.


It seems like an avionics warning—“conflicting control input” or the like—would go a long way here, and not change anything fundamental. Maybe Airbus implemented that since?


There is an audible “dual input” warning, which was triggered on this flight.


Interesting, thanks! I’ve read a few accounts of this crash and never picked up on that.


Note that hearing is one of the first things that stop working when a human is under stress [1].

[1]: https://youtu.be/Dl-Fl66Jfao?t=1674


Isn't "Upset" training now mandatory for civilian pilots, partly because of AF447? https://www.easa.europa.eu/en/faq/44870


Upset Prevention and Recovery Training is mandatory for FAA Part 121 (Air Carriers) flight crew.

It is not required for Part 91 (General Aviation/"Private") or Part 135 (Charter) operators.

https://www.faa.gov/documentLibrary/media/Advisory_Circular/...


Upset and departure are different things. Some aircraft can seem straight and level, but still be fully departed and falling like a rock. This won't happen in a Cessna trainer; it is normally associated with faster aircraft. The F-16 famously has a deep stall mode that sees it fall like a pancake for many thousands of feet, with little hope of recovery at low altitudes.


In the US, spin recovery training is mandatory for flight instructors. That’s the only requirement I’m aware of, although all pilots do get some relatively basic training in unusual attitude recovery.


I have zero flying experience, but man, just by playing Pilotwings on the SNES and Ace Combat on the PSX, it's ingrained in me that when you stall any aircraft, be it a hang glider or a fighter jet, you put your nose down, gain speed, and recover your aircraft. I am Brazilian, and since the first time I read the accident report I couldn't believe how a professional pilot could keep pitching the nose up for minutes on end in a stall situation. It made me A LOT less trusting of the aviation industry's capacity to take me safely from point A to point B.


My lack of credentials are identical to yours (fuck Lance) but this is way too harsh. When you witness a car veer off the side of the road, "what a moron, why didn't they just turn the wheel" is a logical question, but in reality the driver may have thought to do so and been physically unable to correct a power steering failure in time.

The SNES controller had 8 buttons and was operated from the safety of your couch. I'd imagine trying to correct this sort of situation is more like playing Microsoft Flight Simulator, while you're on a roller coaster, with random features of the plane not working, hypoxia clouding your judgment, and knowing 100+ lives are at stake if you fuck anything up.


All of your arguments are pretty valid, but it still DOES NOT excuse Bonin for keeping the damn joystick pointing the nose UP for MINUTES on end. At some point in time you must reflect upon what you are doing. If my hypothetical car needed to turn right, and I kept pointing the steering wheel left for many minutes and still did not realize that what I was doing was antithetical to what I wanted to happen, the blame is on me. I place the blame for this accident 75% on the crew, and 25% on the stupid design decisions made by Airbus. Averaging control inputs is an inexcusable design decision, and not mechanically linking the controls is the cherry on top. If those 2 design decisions hadn't been made, hundreds of lives wouldn't have been lost that day, even with the amazing level of incompetence shown by Bonin et al. And every Airbus flying nowadays still has this fatal flaw.


Never been anything especially serious, fortunately, but I lose track of the number of times something I've seen just didn't square with my current mental model of the physical world (what day it was, to give a trivial example) and, even though I consciously noted the observation, I basically deep-sixed it into the "that's weird" can.


Holy crap, this. I've been trying to force myself to pay attention to my “that's weird”s but really struggle to do so. Anyone have any tips?


It is profoundly disturbing, but we do live in an age where everything is simulated, including expertise. How much time did Captain Dubois have to save the aircraft once he was on deck? Seems like Bonin had mostly doomed them by then.


Although they were not just hand flying. They were hand flying at a moment's notice in IMC with faulty instrument data in a highly complex system. That combination of factors does seem to be a huge risk factor for accidents.

I wonder if reverting back to pilot control is really the best approach in situations with faulty instruments. Perhaps it would be safer to use algorithms that can deal with bad data in a sane way. A middle ground between blindly trusting bad data (as in the 737 MAX) and reverting to the pilot. Use other data like GPS, accelerometers, strain sensors, fuel flow rate sensors, to sanity-check the primary instruments.


In post-mortems involving pilot error, what strikes me is that airliner cockpit design sounds so convoluted, unintuitive, and just plain bad. The burden of disentangling this bad design is always placed on the pilots - through extensive training and rote memorization - which inevitably fails under stress.

In this particular example, consider:

- the opposing pilot inputs being signaled only by a pair of little green lights

- the cacophony of warning lights and alarms which, together, say little more than "something is wrong"

- instruments that direct a pilot to pull up during a full stall

- sensor failures with no clear indicator

- computer safeguards suddenly removed without a stated reason

Etc. And the expectation towards the crew is to quickly and correctly reason about this stream of conflicting signals, while embroiled in a sudden emergency.

It smacks of pure engineer-driven design, assembled with serious attention to the technical issues, but with near-zero empathy for the humans who will be operating the contraption.

Reminds me of internal web tools at so many companies. They present giant messy forms, with checkboxes and dropdowns for every conceivable edge case, which have to be manipulated just so or the system explodes. And when something breaks, of course it's the user's fault every time.


The ECAM is where one needs to look and start debugging. The pilots here didn't really do that. Audible warnings are only really used if they're extremely time sensitive (GPWS, Stalling, Dual Input).

There is no "cacophony of warning lights". The overhead panel is designed to have all buttons unilluminated, so if you're checking what's wrong, you can immediately tell by there being a light indicator on it. Here this was limited to two ADIRs indicating FAULT. Nothing more.

I think this might help: https://www.youtube.com/watch?v=0a06A78iXnQ


Not relevant in this example, but another terrible user interface in the cockpit is the FMC. That thing with the '70s era green screen, where pilots program routes with cryptic codes, seemingly inherited from the Apollo guidance computer.

So many accidents seem to happen when pilots receive a route change and have to hastily re-program the FMC.


There should be a calm AI voice stating what they should be doing based on some heuristics. Based on angle of attack, engine power and the last recorded reliable speed, I feel a simple system should be able to make projection of the current speed and throw some warning when pilot input are becoming real stupid.
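A dead-reckoning cross-check along these lines could be sketched as follows (purely hypothetical: the function names, thresholds, and figures are invented for illustration, and real upsets change airspeed far faster than such a projection stays trustworthy):

```python
def project_airspeed(last_reliable_kts, accel_kts_per_s, seconds):
    """Naively integrate longitudinal acceleration from the last
    trusted airspeed reading (dead reckoning)."""
    return last_reliable_kts + accel_kts_per_s * seconds

def sanity_warning(projected_kts, aoa_deg, stall_aoa_deg=15.0):
    """Hypothetical cross-check: high angle of attack combined with a
    low projected airspeed suggests a stall, not an overspeed."""
    return aoa_deg > stall_aoa_deg and projected_kts < 200

# 275 kt at the last good reading, decelerating 3 kt/s for 25 s:
print(project_airspeed(275, -3.0, 25))  # 200.0
print(sanity_warning(180, 40))          # True -> "pilot input looks stupid"
```

The hard part the comment glosses over is that the projection itself degrades quickly, which is presumably why systems like the ECAM lean on direct sensor cross-comparison instead.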


It exists, and it's called the ECAM. Last recorded reliable airspeed is usually never used, because an upset condition can change it very drastically, fast. These systems help resolve upsets way more often than they miss, and these engineers did their homework.

Hearing is also the first sense to go when people panic; this flight had the stall warning blaring for over a minute straight and it didn't occur to the pilots that they may not be in overspeed, but stall.

(It is of course not perfect, sometimes conditions become dependent on one another and their order is not always great - see this simulated simultaneous engine fire and engine failure right after takeoff, where flying on your burning engine might be better than turning into a 1000ft glider: https://www.youtube.com/watch?v=ZRbLLO385_c)


I also wonder why airspeed is derived by some formula applied to the measurement of flow along a tube that can be blocked by ice and become useless. Why not use GPS primarily and fall back to that if needed? Or the other way around even.

If you expect an instrument to routinely fail, it just seems logical to at least have a backup.


Jetstreams are hundreds of kph. Meaning groundspeed can be hundreds of kph off of airspeed. Meaning GPS is a very poor backup airspeed indicator.
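To put numbers on that, here is a flat-earth wind-triangle sketch (the function and figures are illustrative, not from any avionics spec):

```python
import math

def ground_speed(tas_kts, heading_deg, wind_from_deg, wind_kts):
    """Ground speed from true airspeed plus a wind vector."""
    # Aircraft velocity components (north, east)
    ax = tas_kts * math.cos(math.radians(heading_deg))
    ay = tas_kts * math.sin(math.radians(heading_deg))
    # Wind blows FROM wind_from_deg, i.e. pushes toward the reciprocal
    wx = -wind_kts * math.cos(math.radians(wind_from_deg))
    wy = -wind_kts * math.sin(math.radians(wind_from_deg))
    return math.hypot(ax + wx, ay + wy)

# Flying east at 480 kt TAS straight into a 150 kt jetstream (wind from 090):
print(round(ground_speed(480, 90, 90, 150)))   # 330 -> GPS reads 150 kt low
# Same TAS with the jetstream on the tail (wind from 270):
print(round(ground_speed(480, 90, 270, 150)))  # 630 -> GPS reads 150 kt high
```

Same true airspeed, a 300 kt spread in GPS-derived speed: the wings only care about the 480.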


It's trivial to calculate true speed from altitude and ground speed. Is the altitude not reliable?


Air speed is relative to the air around the plane, not the ground.


Ground speed and airspeed are not the same thing. You need to measure airspeed directly. And I think there are generally 3 pitot tubes on planes.


> Behind the scenes, the loss of valid airspeed data had triggered a shift in the Airbus’s complex flight control laws. In “normal law,” computers interpret pilots’ side stick inputs and move the control surfaces in accordance with what is reasonable at that altitude, speed, and configuration. This improves the handling of the airplane to such an extent that no particular skill is required to fly it gracefully. Normal law also comes with full flight envelope protections in roll, pitch, speed, and load factor.

> If sensor failures occur, the controls drop down a level to “alternate law.” This law contains several sub-laws with slightly different configurations, but in general, alternate law means that some or all computer moderation of control inputs remains, but flight envelope protections are removed. The autopilot and auto thrust cease to function.

> In the event of further failures, the controls can enter direct law, in which there are no flight envelope protections and side stick inputs correspond directly to the position of the control surfaces, with no adjustment by the computer. This makes the airplane fly rather like a classic airliner, similar to most older Boeing models.

When you think of all of the Tesla accidents, this still seems to be the failure mode for autonomous systems. It's safer 95% of the time. But when it fails, it's because the users are so dependent on the systems that they don't even have the simplest of skills to prevent catastrophe.
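The degradation ladder in the quote can be caricatured as a tiny state machine. This is a toy sketch for intuition only: the law names come from the quote, but the logic is invented here, not Airbus's:

```python
from enum import Enum

class Law(Enum):
    NORMAL = 1     # full flight envelope protections
    ALTERNATE = 2  # some computer moderation, protections removed
    DIRECT = 3     # stick maps straight to the control surfaces

def degrade(law, sensor_failures):
    """Toy ladder: each qualifying sensor failure drops one level,
    bottoming out at DIRECT."""
    order = [Law.NORMAL, Law.ALTERNATE, Law.DIRECT]
    idx = min(order.index(law) + sensor_failures, len(order) - 1)
    return order[idx]

law = degrade(Law.NORMAL, 1)  # airspeed disagreement -> Law.ALTERNATE
print(law)
print(degrade(law, 1))        # further failures -> Law.DIRECT
```

The point the Tesla comparison raises is exactly the handoff at that first transition: the automation is most likely to quit at the moment the human is least prepared to take over.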


> it's because the users are so dependent on the systems that they don't even have the simplest of skills to prevent catastrophe.

One pilot was pushing full nose down. The other full nose up. The system _did_ tell them that they were doing this, but in the sea of other alarms, it didn't register with them.

The lack of alarm prioritization and the lack of crew resource management training in the face of an emergency seem like the major human factors here. Half of that is in the design of the plane, the other half in how the company trains their pilots to handle severe emergencies.


I'd be hesitant to compare these. The aircraft still operated as designed. It detected that the input data was bad and turned the automation off. The pilot flying just didn't seem to grasp what that means.

Most Tesla Autopilot failures aren't drivers that don't know how to drive, it's the automation making bad inputs.


No, some warnings were also thrown away and never shown to the pilots because the automated systems thought they were erroneous (since they were so outside the flight envelope IIRC).


If by warning you mean the audible stall warning - that's because below 60 knots, the airspeed is considered unreliable/erroneous: to get to that speed without flaps you'd have to have been stalling the aircraft for over a minute (getting that exact warning), and surely nobody would manage to do that...

As I said elsewhere, the airplane was quite literally falling as fast as it was moving forward. I don't really fault the plane for considering that condition erroneous. And it didn't throw away a warning, it instead marks the speed tape with a bright red "SPD" instead of showing any airspeed. This is likely the better alternative over getting persistent stall warnings if one of your airspeed indicators fails.

Here's how that condition would've looked: https://docs.flybywiresim.com/pilots-corner/a32nx-briefing/p...


Actually, you are right. I think I either got confused with another crash or thought it was more than just the audible stall warning.

And I totally agree, there's no good solution to helping the pilots in such a catastrophic situation. They were probably going to ignore audible warnings and most visual cues anyways, considering how tunnel visioned they were from the moment the auto pilot gave back the controls to the pilots.


It's tempting to blame the inexperienced first officers, their training, or the CRM failures on this flight, but the blame for this crash has to land squarely on the atrociously bad UX of the Airbus cockpit.

Sensory overload and no clear readout of what is actually broken (pitot tube icing in this case) is bad but the fly-by-wire joystick configuration is what really doomed this flight. In Boeing airliners and many other types of aircraft the control sticks are mechanically linked together: you can physically feel if the other pilot is fighting your inputs:

> “Controls to the left,” Robert said, still worried about their bank angle. Pressing the priority button on his side stick, he took control and locked out Bonin, but Bonin immediately pressed his own priority button and assumed control again.

If the controls were mechanically linked Robert would have recognized his inputs were being overridden and would have been able to save the plane.


Someone in my extended family works at Airbus as an engineer (hence the throwaway account)

When I asked him about this accident, I brought up this specific point - the mechanical non-linkage.

Sure, the Airbus is fly by wire (there are no "mechanics"), but you can still program one joystick to mimic the other joystick, right? As far as I know, the Airbus actually averages the inputs? [0]

Anyway, he sorta-angrily gave me the same explanation as I just saw posted here as well: it was a crew management issue. (which of course may have played a huge role).

I am not a pilot so I am probably missing something. But he (the Airbus family member) did seem quite defensive about this. What portion was internalized corporate-comm "we are not at fault" reasoning? What portion was engineering hubris of "fly by wire is unquestionably superior to mechanical linkage"?

I don't know. But I do find it strange... and indefensible. When does the average of inputs make sense? I'm open to an explanation. Is there a good one?

---

[0] See https://news.ycombinator.com/item?id=4224707 from 2012.

"This input was averaged (read: canceled) with the other pilot's nose up input."

(and further down in the same sub-thread)

"The AF447 inputs were averaged."


Averaged inputs on the sticks was the worst thing I read about back when this happened. I can't imagine a circumstance where this is desirable, and it can only lead to confusion.


> When I asked him about this accident, I brought up this specific point - the mechanical non-linkage. Sure, the Airbus is fly by wire (there are no "mechanics"), but you can still program one joystick to mimic the other joystick, right?

That's come up at least twice in NTSB ship accident reports. Some ships have more than one control station. This is usually to allow driving from a control station out on a bridge wing, where the pier can be seen clearly. A high speed ferry plowed into a dock in New York City in 2013 because of confusion over which station had control. "The NTSB concludes that the propulsion control system on the Seastreak Wall Street used poorly designed visual and audible cues to communicate critical information about mode and control transfer status."[1] Big throttle levers at each station, but only the ones at one station at a time did anything. The levers did not move together.

The U.S. Navy went all the way to touch-screen throttles. After a collision involving confusion over which of three control stations was driving, they're going back to big handles.[2]

The Airbus system has been criticized, but it's two people sitting side by side. The ship systems have control stations much further apart.

[1] https://www.ntsb.gov/investigations/AccidentReports/Reports/...

[2] https://www.theverge.com/2019/8/11/20800111/us-navy-uss-john...


The 777 and 787 are fly-by-wire aircraft with mechanically-linked yokes. In extreme situations (one yoke is actually stuck and won't move at all), there is a breakaway on that link, and if you exert enough force upon it (like, really hulk at the thing), you can snap it. I don't know whether this has been necessary in flight, but the facility is there. The control surfaces are split between the yokes, so for example the left yoke pitch control commands the left-side elevator, making the plane still controllable with only one working yoke.


> I don't know whether this has been necessary in flight, but the facility is there.

It happened in at least one accident, where the pilots were literally tugging the yokes in opposite directions, struggling against each other to fix the attitude.


What would you have the system do instead of averaging the inputs? If you have 2 inputs when you should have 1, you warn with a light and sound, and what input do you use in the meantime?


There are plenty of situations where averaging two safe-but-opposite inputs would cause a crash. For instance, if you're holding hands/handcuffed with someone and walking toward a pole, you must both either go left, or both go right. This is a very common scenario in life and we have many solutions for the problem.

In the Air France situation, there's no way for the plane to "know" which pilot's input is correct, so I don't have a good answer, but "just average two drastically different inputs" seems crazy.
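For what it's worth, the thread says "average", but the usual description of the A330 behavior is that simultaneous side stick deflections are summed and clamped; for full-opposite inputs the effect is the same, they cancel. A minimal sketch (illustrative only, not Airbus code):

```python
def combined_pitch(input_a, input_b, priority=None):
    """Sidestick combination sketch. Inputs are in [-1.0, +1.0]
    (-1 = full nose down, +1 = full nose up). Without a priority
    takeover, simultaneous inputs are summed and clamped, so
    opposite full deflections cancel to zero."""
    if priority == "A":
        return input_a
    if priority == "B":
        return input_b
    return max(-1.0, min(1.0, input_a + input_b))

print(combined_pitch(-1.0, +1.0))                # 0.0: inputs cancel
print(combined_pitch(-1.0, +1.0, priority="A"))  # -1.0: takeover wins
```

The priority button exists precisely to escape the cancellation case, but as the CVR shows, it only helps if the other pilot knows (and accepts) that priority was taken.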


> It's tempting to blame the inexperienced first officers or their training but the blame for this crash has to land squarely on the atrociously bad UX of the Airbus cockpit.

The real problem is that someone who doesn't look at his most critical instrument in instrument flying conditions, causes a stall, doesn't respond to stall warnings, doesn't communicate, and refuses to hand over controls even though he doesn't understand what is happening, is in the cockpit at all. Either the training was inadequate, or deficiencies in Bonin's flying were overlooked.


Also, the PNF's hesitancy to shout "my controls" and/or hit the priority button, especially as he moved to pitch the plane down when it was probably still recoverable.

That the off-duty captain standing behind them is the first to vocalize they're at 10000 ft and stalling is indeed not great.


He did hit the priority button and say "Controls to the left." Bonin hit his priority button to take back control but didn't say he was taking back control.


The reality is there is not a single thing you can legislate or train that will prevent a pilot from flying a perfectly working plane into the ground if they get stubborn.

Aircraft emergencies are all about keeping you out of panic so that your natural instincts don't surface, because pretty much every natural or intrinsic human instinct, if followed, will just result in you flying the plane into the ground. Every sense you have is overwhelmed with "things aren't working the way I know they are supposed to and my control inputs seemingly have random results", and in a panic, your brain REALLY wants to revert to "just keep going up, away from danger", which is the opposite of what you need to do 95% of the time.

The only fix in this case would have been literally evicting that pilot from the cockpit, and there's no button for that.


> The only fix in this case would have been literally evicting that pilot from the cockpit, and there's no button for that.

Military jets have such a button. Maybe commercial needs to take some inspiration?


what, an ejector to parachute the pilots, fuck the passengers? that would be pretty funny. It does save the airline pilot replacement training though. Or maybe an ejector to throw just the bad pilot out?


Just the bad pilot is what I had in mind


Reading the CVR again, he did in fact ask for the controls ("So give me the controls") several times, but still seemed to be under the impression that they were in overspeed, or at least had to climb.

I think my sibling comment is probably right, Bonin would've required much more severe shouting to take his hand off the stick.

Certainly worth reading: https://bea.aero/uploads/tx_elyextendttnews/annexe.01.en.pdf


I don't like to criticize someone with a different trade skill (much less when they're already dead), but to paraphrase two of my CFI friends, their summary view was "Bonin acted stupidly". He didn't communicate that he had the controls and kept pulling the stick to nose-up. The pilot's psyche had a lot to do with this accident. In his panic he stuck to a naive, counterfactual understanding of aerodynamics rather than falling back on his prior training (what we developers would call an antipattern). To a similar point, his training still wasn't enough for him to gather situational awareness and apply common sense instead of panicking.


Honestly, part of the mistake is also on Dubois. When flying as a team, it is essential to establish hierarchy and make sure who has authority. As the one with the lesser number of hours, Bonin should have been relegated in favor of Robert. That being said, hierarchy should not be as much of an obstacle that the one in authority disregards the suggestions of the copilot.


Bonin in his panic hit his priority button to take back control but didn't say he was taking back control. He should have announced he had the controls; it is common knowledge among pilots that flight behavior comes from the averaged stick inputs, but only if they know both of them are commanding the flight surfaces.

If a pilot stubbornly wants to keep control, perhaps out of desperation or panic, and performs actions that contradict the aerodynamics every pilot is taught, there is no instrument system that can salvage the catastrophe.

["Bonin acted like an idiot" - not my words, but two of my CFI friends, who discuss aviation topics on weekends often over beers.]
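The "averaged stick inputs" point above can be sketched as a toy model (illustrative only, not the real Airbus implementation; the system is usually described as algebraically summing the two commands and clipping the result, which pilots often call averaging):

```python
# Toy model of dual side-stick input combination (assumed behaviour,
# not Airbus's actual flight control law): with no priority taken, the
# two pitch commands are summed and clipped to the stick's range.

def combined_pitch_command(left: float, right: float) -> float:
    """Combine two side-stick pitch inputs, each in [-1.0, 1.0]."""
    return max(-1.0, min(1.0, left + right))

# One pilot full nose-down, the other full nose-up: the inputs cancel,
# which is exactly what happened when Robert pushed while Bonin pulled.
assert combined_pitch_command(-1.0, +1.0) == 0.0

# Both pulling back: the sum is clipped to full nose-up.
assert combined_pitch_command(+1.0, +0.5) == 1.0
```

The key failure mode this illustrates: neither stick moves in response to the other, so without an announcement (or the "DUAL INPUT" callout) neither pilot knows why the aircraft isn't responding to his input.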


Bonin was an idiot, I agree. But Dubois should have established a clear hierarchy to prevent the idiot getting control over Robert.


>It's tempting to blame the inexperienced first officers, their training, or the CRM failures on this flight, but the blame for this crash has to land squarely on the atrociously bad UX of the Airbus cockpit.

Hard disagree. No matter how bad the UX, there just isn't any conceivable situation where Bonin's holding the stick full nose-up for minutes on end wasn't suicidally dangerous, or made any sense at all.


Same sentiment heard elsewhere. Pulling the stick up in a mid-flight stall is a rookie mistake; pulling up and increasing thrust is a takeoff peculiarity. Even if AF447's altimeter had been faulty, they both missed something else in their panic: they fell 20,000 ft in a few minutes, going faster and faster. The g-forces are palpable to an experienced pilot (who relies on them at times when flying in complete darkness).

This incident was rookie flying behavior coupled with a complete loss of situational awareness. The flight UX of Airbus isn't that bad: the 330 and 340 have a much better panel layout and less clutter than their Boeing counterparts from the late 90s.


I have to agree with you. Even non-pilots who play video games have enough common sense not to do that: they notice that when they climb sharply they lose speed and eventually stall and that when they dive back down it gives them their speed back and allows them to recover.

His inexperience cannot be denied. Must have been terrifying for the pilot when the plane's safeties and automation suddenly disengaged. Uncertainty and panic must have seized him and never let go.


> Even non-pilots who play video games have enough common sense not to do that

But in video games you likely have good visibility and an external view. AF447 had neither. The human ear is actually really terrible at telling a pitch-up stall condition from a pitch-down overspeed one, lacking visual references.[1]

[1]: https://en.wikipedia.org/wiki/Sensory_illusions_in_aviation (somatogravic)


* The overhead panel would've shown FAULT on the ADIRs.

* The ECAM did show IF SPD DISAGREE: ADR CHECK PROC.

* The plane did shout DUAL INPUT several times, and a button to lock out the other pilot is right on the stick. You can hear it being used by Bonin.

I'd say this is a failure of crew resource management (not even of technical ability) more than anything else.


The thing is, people in high-stress situations tend to miss audio cues. That's why Boeing uses stick shakers for the stall warnings.

And there were _several_ incidents with Airbuses that were caused by dual inputs.


Armchair amateur here, but pitot tube failure is such a frequent occurrence that I'm angry we display any other warning than this one.

- It should have a needle to eject caps on it, so that pilots don’t forget the caps (Brisbane accident, among many),

- The needle should also sense, if not remove, the ice (Air France accident),

- It should wake up the pilots and reset controls to fixed values for the current altitude, since when Pitot tubes fail, autopilot and autothrottle are worse than worthless (they will automatically crash the plane).


Pitot tube failures are incredibly frustrating. In one case it's theorized a wasp nest blocked the inside of one after the tube was left uncovered on the ground for a long time.

https://en.wikipedia.org/wiki/Birgenair_Flight_301


They seem so close to the cockpit window that it's frustrating pilots can't open the window (which they can, but not in flight) and clean them off.


If a “needle” worked to remove the ice, that’s what they would use instead of a complex anti-ice system.


The article misses the key takeaway from this incident IMHO.

When the controls were pegged at full aft deflection, the stall warning would cease, because instrument readings in the deep stall were considered invalid by the computer. Whenever the pilot would start to push forward to recover, the computer stopped rejecting the readings, and started sounding the stall warning again!

So every time the pilot started to do the right thing, the airplane would start screaming "STALL STALL" at him, and he would pull the stick back again to make it stop.

I firmly believe that they would still be alive if the stall warning had either not been installed at all, or had functioned properly.


Nope the article goes deep into this exact issue:

> During flight 447’s plunge toward the sea, the flight directors disappeared every time the forward airspeed dropped below 60 knots. This was because an airspeed below 60 knots while in flight is so anomalous that the computers are programmed to reject such a reading as false. Furthermore, at an angle of attack threshold which corresponded quite closely to 60 knots, the stall warning would cease for exactly the same reason. This created an unfortunate correlation, wherein Bonin would pitch up, the angle of attack and airspeed would exceed the rejection thresholds, the flight director would stop telling him to fly up, and the stall warning would cease; then if he attempted to pitch down, the angle of attack data would become valid again, the flight director would tell him to pitch up, and the stall warning would return. This perverse Pavlovian relationship could have subconsciously conditioned Bonin to believe that pitching down was causing the plane to approach the stall envelope, and that by pitching up he was actually protecting the plane against stalling. This violated basic aeronautical common sense, but by this point Bonin and common sense might as well have been on different planets.
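The rejection threshold described in that paragraph can be illustrated with a minimal sketch (the thresholds and function name here are hypothetical stand-ins; the real air data validity logic is far more involved):

```python
# Minimal sketch (not Airbus's actual logic) of the data-rejection
# behaviour described above: below a ~60 kt threshold, the angle of
# attack data is treated as invalid, which silences the stall warning.

STALL_AOA_DEG = 10.0       # hypothetical stall-warning AoA threshold
VALIDITY_SPEED_KT = 60.0   # readings below this are rejected as false

def stall_warning(airspeed_kt: float, aoa_deg: float) -> bool:
    """Return True if the stall warning should sound."""
    if airspeed_kt < VALIDITY_SPEED_KT:
        # Data considered anomalous -> rejected -> warning goes silent,
        # even though the aircraft is more deeply stalled than ever.
        return False
    return aoa_deg > STALL_AOA_DEG

# Deeply stalled, falling almost straight down: warning is silent.
assert stall_warning(airspeed_kt=45, aoa_deg=40) is False

# Pitch down, airspeed recovers past 60 kt: warning sounds again,
# "punishing" the correct recovery input.
assert stall_warning(airspeed_kt=80, aoa_deg=35) is True
```

This is the perverse conditioning loop the article describes: the correct action (pitching down) re-triggered the warning, and the wrong action (pitching up) silenced it.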


Nope. One paragraph in a very long article is not "deep" at all.


> During flight 447’s plunge toward the sea, the flight directors disappeared every time the forward airspeed dropped below 60 knots. This was because an airspeed below 60 knots while in flight is so anomalous that the computers are programmed to reject such a reading as false. Furthermore, at an angle of attack threshold which corresponded quite closely to 60 knots, the stall warning would cease for exactly the same reason. This created an unfortunate correlation, wherein Bonin would pitch up, the angle of attack and airspeed would exceed the rejection thresholds, the flight director would stop telling him to fly up, and the stall warning would cease; then if he attempted to pitch down, the angle of attack data would become valid again, the flight director would tell him to pitch up, and the stall warning would return. This perverse Pavlovian relationship could have subconsciously conditioned Bonin to believe that pitching down was causing the plane to approach the stall envelope, and that by pitching up he was actually protecting the plane against stalling.


Yes. That's the only place it's mentioned, IMHO it is central to the whole thing.


It's a really long and clear paragraph, what more do you expect?

The summary blames the accident on poor training and the typical EFIS "pilot out of the loop" problem, rather than on very specific issues like this one. Which makes a lot of sense IMO.


He spends a lot more time shitting on the pilot than he does talking about this; that's backwards IMHO.


He doesn't really shit on the pilot IMO. He clearly states that the pilot was a product of his generation and mindset and far from the only one. He clearly states it's more of a systemic failure than an individual one.

And this is only part of the many things that went wrong. Also it's important to realize the system worked as designed and in every other situation this behaviour would be beneficial. I don't think this was even changed after the accident.

As one of the other things that contributed, the pilot should never have gotten into this situation in the first place. Which is explained by the poor stall training focusing only on low-altitude stalls (which makes sense, as that is by far the most likely time for them to occur, but the difference at high altitude should of course be explained, as is being done now).

As with any major incident it's a matter of multiple errors compounding in just the worst way possible.


I agree that they buried the lede. You might have stopped reading before it. I'm usually a completionist; I'd rather skim than miss a big chunk of text.


Not a pilot but I believe that pulling the stick back is exactly the wrong thing to do in a stall. The pilot should have instinctively known that pulling back on the stick would not recover a stall. On the other hand, pilots are taught to "trust the instruments" but IIRC the aircraft had no angle of attack indicator, and the airspeed tubes were frozen over. So lots of bad or missing information, contributing to stress and panic, which does strange things to otherwise normal people.


I'm a pilot. Yes, you're correct, but there was so much else going on here... unless you're certain it's wrong, you're going to be inclined to listen to the airplane when it screams at you not to do something in an already terrifying unfamiliar situation.


Not a pilot either. And that’s what I learnt playing hours of simulator games. Isn’t gaining airspeed the primary goal?


This is mentioned in the article.


Only in passing. The dramatized retelling through the first part of the article completely glosses over it.


It's important to note the stall indication itself wasn't disregarded, but the indicated airspeed (that is a prerequisite). This plane was in the exceptional situation of falling out of the sky faster than it was moving forward.


Perhaps these pilots were "Children of the Magenta"[1] ?

These AA instructional videos by Captain Vanderburgh are fascinating - even to a non-pilot like myself.

They are especially relevant as we dip our toes into automobile auto-pilot and the "automation dependency" that comes along with it.

Of particular note in this video:

@ 12:30: "... tactilely connected to the airplane ..."

@ 14:15: "... we see automation dependent crews, lacking confidence in their own ability to fly an airplane, are turning to the autopilot ..."

@ 17:35 - 18:15: (just listen)

[1] https://www.youtube.com/watch?v=5ESJH1NLMLs


Autopilot, automation, algorithms, safeties, guarantees, envelopes... Illusions, gone at the first sign of trouble. I easily get used to such technological comforts, and it is deeply traumatizing when they're taken away.

The badassery men are capable of when they accept this and drive instead of allowing themselves to be driven is the stuff of legends.

https://en.wikipedia.org/wiki/Mercury-Atlas_9

https://en.wikipedia.org/wiki/Gordon_Cooper

> Cooper lost all attitude readings.

> [a short-circuit] left the automatic stabilization and control system without electric power.

> Cooper noted that the carbon dioxide level was rising in the cabin and in his spacesuit.

> "Things are beginning to stack up a little."

> Turning to his understanding of star patterns, Cooper took manual control of the tiny capsule and successfully estimated the correct pitch for re-entry into the atmosphere.

> Cooper drew lines on the capsule window to help him check his orientation before firing the re-entry rockets.

> "So I used my wrist watch for time," he later recalled, "my eyeballs out the window for attitude."

> "Then I fired my retrorockets at the right time and landed right by the carrier."


Just FYI, Admiral Cloudberg has an article titled "Children of the Magenta", but it is related to a specific AA 965 crash in 1995.

https://admiralcloudberg.medium.com/children-of-the-magenta-...


As I recall, it’s linked in the (excellent) article, but William Langewiesche’s article about this crash is also very much worth reading:

https://www.vanityfair.com/news/business/2014/10/air-france-...

All of his articles about transportation disasters (Columbia, M/V Estonia, many other plane crashes) are very good and highly recommended.


What an amazing article. I know comments here are meant to be more substantial, but this is truly some of the best-written content I've ever come across.

My most sincere admiration and thank you notes to the author for putting all of this together in such a great way!


She writes one of these a week, and they're all very high quality. I highly recommend subscribing. Seeing the new post is always one of the week's highlights, just like how getting the "Money Stuff" edition is a highlight of every weekday morning (er, early afternoon).


Yeah, and even the older articles hold up very well now that tons of them have been updated. It has been part of my weekly routine for 5 years now to read them, and I've got to say that the medium articles are a huge upgrade over the earlier Imgur galleries.


Yeah, I’ve been a follower for just about as long as you have, it sounds like. It’s been interesting seeing her growth and evolution as a writer. I think this is the only thing where I subscribe to someone’s Patreon, but it’s well worth it. I should probably consider bumping up my support amount.


Oh I agree, Money Stuff always puts a smile on my face even before reading it, ha.


It's kind of amazing how many of these accidents can come down to the pilots forgetting the most basic things.

In this case, ultimately, the pilot and the co-pilot were exerting opposite controls. One to pull up, one to push the nose down. In a stall you push the nose down to gain aerodynamic lift. That's like Flying 101. But I guess when you panic, the natural instinct of wanting to pull up kicks in when you're rapidly losing altitude. Still, training?

I saw another one the other day: Ethiopian Air Flight 961 [1] from 1996. This accident ultimately came down to a single switch being on: manual cabin pressurization. This was on because a pilot had reported an error on a previous flight and maintenance was trying to replicate it. The pilots (and passengers) passed out from hypoxia without putting on their oxygen masks. Again, not putting on your mask? Pretty basic. Also Flying 101: if you lose (or don't have) pressurization, drop your altitude. Get below 8,000 feet and you're fine.

There was an error but it wasn't ignored and it wasn't entirely obvious, causing revisions to be made. Also, hypoxia can impair judgement very quickly. But why keep climbing when something is wrong?

[1]: https://en.wikipedia.org/wiki/Ethiopian_Airlines_Flight_961


> Unlike most other airliners, the control sticks on Airbus models are not mechanically linked and the pilots cannot directly feel what the other pilot is doing.

This is a very bad design - at a bare minimum an incredibly loud and annoying alarm should go off if the pilots are fighting controls, because neither knows what the plane is receiving as an input.


It has been changed after this incident. There is now an alert for dual input. The resulting Airbus system is better than traditional connected yokes, because it doesn't have the unexpected opposite input anymore (with the alert) but it does also have an override switch for the captain's side. Which means you can overpower the first officer with just a button instead of physically having to wrestle for the controls.

You can see it used here: https://youtube.com/shorts/nBmPXrBpp3Y

You hear the normal 50.. 30.. etc. callouts and then when the captain goes full thrust you hear "priority left" which is the override button being pressed to pitch for go-around.

That's much nicer than what we have when teaching with traditional controls, because at that point overpowering a startled pilot is a bit hard.


The Dual Input alarm wasn't added after this incident; in fact, it sounded several times during the Air France flight.

FDR visualization: https://www.youtube.com/watch?v=0a06A78iXnQ


Ah thanks. I thought I remembered from ground school that this was added afterwards.

Interesting that the FO flying from the left seat did not recognise that callout as a big thing to react to.


Of course that introduces the problem of "what if the captain is the startled pilot": can the first officer override the captain?


Yes, but whoever hits the button last gets control, so you can take priority back.

(Unless you manage to hit the priority button for 40 seconds - but I assume that's for locking out a broken stick that's doing inputs at rest)


Yes it will announce "priority right", but it's not normal to take control from the captain (and also highly unusual to take control from the first officer outside of training). Unless you're avoiding an immediate safety threat you would normally call out the problem but not take control.

It's a bit different for a training captain like in the video. They know these are the first officer's first landings on the real aircraft, so when things go wrong they take over.


https://safetyfirst.airbus.com/app/themes/mh_newsdesk/docume...

Section 6

> When a dual input situation is detected, the two green priority lights located on the cockpit front panel flash simultaneously.

> After the visual indication has been triggered, a synthetic voice “DUAL INPUT” comes up every 5 sec, as long as the dual input condition persists.

According to Wikipedia, this warning system worked as intended:

> Confused, Bonin exclaimed, "I don't have control of the airplane any more now", and two seconds later, "I don't have control of the airplane at all!"[42] Robert responded to this by saying, "controls to the left", and took over control of the aircraft.[83][44]

> He pushed his side-stick forward to lower the nose and recover from the stall; however, Bonin was still pulling his side-stick back. The inputs cancelled each other out and triggered an audible "dual input" warning.

> ...

> Bonin heard this and replied, "But I've been at maximum nose-up for a while!" When Captain Dubois heard this, he realized Bonin was causing the stall, and shouted, "No no no, don't climb! No No No!"[85][44]

> When Robert heard this, he told Bonin to give him control of the airplane.[2] In response to this, Bonin temporarily gave the controls to Robert.[44][85][2] Robert pushed his side-stick forward to try to regain lift for the airplane to exit the stall. However, the aircraft was too low to recover from the stall. Shortly thereafter, the ground proximity warning system sounded an alarm, warning the crew about the aircraft's imminent crash with the ocean.


> how many of these accidents can come down to the pilots forgetting the most basic things.

This is factually incorrect.

If you read the final report, you will see that the PF (the one who made the abnormal side-stick inputs) did not have the appropriate training; in fact, the scenario that happened wasn't even covered in training (a nighttime mid-Atlantic flight with an airspeed disagreement, with one of the pilots missing that they were no longer in normal law).

At least for me, this accident was one of the top 3 most important in terms of safety measures and changes in aviation because the entire industry needed to think about the role of automation, training design, cockpit design, etc.


That makes me think of the guy who disappeared flying a Cessna test flight to Tasmania; in his last report he was talking about a large object descending on him. They reckon he had lost his sense of which way up he was, and it is common to end up flying upside down without realizing it. He was likely describing the ocean. I recall reading that bigger planes can also fly upside down for long distances without piloting; maybe something like that was going on. If you think you are the wrong way up and try to get down, you are going to go up.

here is the story - https://en.wikipedia.org/wiki/Disappearance_of_Frederick_Val...


Are you thinking of Helios Airways Flight 522? That Ethiopian Airlines flight was a hijacking.

https://en.wikipedia.org/wiki/Helios_Airways_Flight_522


The same author as the OP also has a great article on Helios 522: https://admiralcloudberg.medium.com/lost-souls-of-grammatiko...


Sorry, yes I am. Thank you. I guess I saw a few flight disaster things recently and got them mixed up.


Even as a flying novice, I was screaming to myself when they kept trying to pull up in a stall. I guess there had to have been some element of panic clouding their judgement, which is why pilots need to drill routine into their head so hard.


In normal law an Airbus doesn't stall with full back stick, that would just result in maximum AoA not a stall.

In this case it had degraded and lost some of those protections due to icing of the sensors.

That was confusing to the junior co-pilot, and they skipped basic procedures on who has control resulting in opposite inputs and a loss of control.


But if you are in a stall at 35k feet, there is never a situation in which the corrective action is to pull up, especially not to max AoA. You aren't anywhere near the water, you should be trying to gain speed.

Bonin just went to primal instincts, full fight or flight, and was "taking matters into his own hands" even despite the fact that he was aware he wasn't doing anything useful. The other pilot even told him to stop, but he didn't. He wasn't a copilot in this scenario, but rather a direct adversary to recovery of the plane.

He should have been treated as a rogue pilot the second he started panicking, but the other guy was kinda busy. He was not acting rationally in any way. Had he dropped dead from a heart attack, AF447 would be an interesting anecdote instead of a tragedy.

How do you train pilots for: "In very rare cases, especially during a crisis, you may need to physically harm your coworker to prevent them from causing the death of hundreds of people, but definitely don't do it if you're the one causing the problem, which you will never figure out if you ARE the one causing the problem"


I have an instructor rating, part of training is exactly that. If someone freezes up at the wrong moment, be prepared to use violence to get them off the controls.

One trick that often works is to put your hand in front of their eyes. Humans have a deep instinct to want to see at all times, so they'll let go of the controls and use their hands to remove yours. No need for physical harm.


Bonin was showing himself to be extremely resilient to coming back to reality and rationality. He needed to be physically restrained. He was far too stuck in his head to rely on anything less.

Alternatively, we just kinda have an understanding that if your pilot goes into panic, and the copilot can't figure out a way to get them out of it, everyone dies.

Of course, then what if the panicking pilot decides they have to restrain you so they can continue what they are doing?


Yeah, I guess as someone who's had minimal instruction it seemed obvious. But reading the article it's clear that the junior FO was trained basically exclusively on the fly-by-wire Airbuses and probably didn't have that intuitive understanding when that protection was stripped away.


The plane in this case doesn't matter. No training for stalls ever tells you to pull up. It's directly opposed to how stalls ARE trained. This pilot was a chaos monkey, and the other pilots were unable or unwilling to get the chaos monkey out of the situation. He had completely reverted to human instinct, which is terrible for flying, and also was letting himself override his copilot, who was handling the situation much better.

People act like this situation wouldn't have happened had they had Boeing-style connected controls, but it's pretty likely that in this state Bonin would have actively been wrestling with the controls to point the nose up too. The only way to keep Bonin from killing everyone would have been to restrain him. There is no reliable way to "knock" someone out of that kind of panic state.


Perhaps the physical effort of struggling against his co-pilot would have made him realize he was doing something wrong. It's too easy to pull back full on a fully non-mechanical control, and just lock up there.


No, this situation is the same as when lifeguards are warned about saving panicking, drowning people: if you don't handle them in specific ways, they will happily drown you in their panic. You cannot rely on "shaking someone out of it" in these circumstances because there is no reliable way to force the brain to reject its panic instincts and return to rational thought.

The only solution would have been physical restraint. Bonin was an active threat.


Locking him out of the controls would have been enough (if the technology allowed it). Even just relieving him of his duty would have been enough. If Captain Dubois had been there early enough and decided to relieve him, Bonin would have almost certainly obeyed.

If he doesn't, then I guess then you can resort to force.


Indeed. Other controls would have caused a wrestling match over the controls with probably the same outcome.

I believe the best solution is what Airbus implemented after this incident. An alert if there is dual input, and a button for the captain's side to override the other side. No need to wrestle, press the priority button and take over control.


Those existed before this accident. Bonin pressed the priority button to steal control from the other pilot, who was fixing things, at least once, and "dual input" alarms sounded pretty much the entire time the plane was falling out of the sky.

The ONLY way to have prevented Bonin from crashing the plane would have been to remove him from the controls. He was doing the same thing that lifeguards are warned about: when the human brain is in that kind of panic state, it will happily do things that it should know will inevitably lead to its own death, like scrambling in such a way that you drown the lifeguard trying to rescue you.


With linked controls the captain would have instantly been able to tell what inputs the copilot was making and could have taken appropriate actions.

Aircraft UX should be made in such a way that there's never the slightest of doubts about something as fundamental to its function as where the inputs that its using are coming from.


> But why keep climbing when something is wrong?

Two possible reasons in this case.

1) Bonin might not have understood he was in full manual control, and that the computer was no longer restricting his pitch up command to the maximum advisable pitch.

2) The stall warning was intermittent on the way down, because it turns off below a certain air speed. Of course, low air speed contributes to stalling. They were stalling the whole time, but ironically as they started to gain air speed (a good thing) the stall warning would kick in because it was no longer below the lower limit for a stall warning.

Also there's the ever present reason of panic, brain-lock, confusion, etc.


Often I don’t think it is as much “forgetting” but more getting behind the events to the extent they don’t properly assess / understand the situation and then make poor choices and it is too late.


Yeah the "insane pilot error" crashes always get me.

American 587 is another example where, in response to normal wake turbulence, the pilot mashed the rudder back and forth so hard it broke the stabilizer.


There was a widespread misinterpretation/incorrect belief of the certification rules at the time of that crash.

There is an airspeed, called maneuvering speed (Va), at or below which it should not be possible to use the flight controls to remove parts of the aircraft. (The aircraft should run out of control authority or stall prior to exceeding any load limits [which is well before the removal of parts].) Every student pilot is taught this prior to passing their first knowledge test and checkride.

The problem with that is the certification rules for Va are for a single input, not for a cyclically reversing sequence of inputs (which is what the first officer of AA 587 did).

As the details became clear, this was a big topic of learning for many pilots, including myself. AA 587 was caused by the flight crew, but I don't put it into the "insane pilot error" category.


The pilot in that case was essentially taught that doing so was an acceptable thing.


The alarm in question was the same one used for invalid takeoff configuration, as hypoxia was already setting in.


This was quite a big mess! Let’s see if there is a way to break it down in a Safety II fashion.

1. Airline human resources deviated from the traditional practice of hiring experienced rudder and stick pilots.

2. Overly permissive company attitudes toward crew rest and a lack of awareness about how it can affect vigilance and fitness for duty.

3. Breakdown of the ceremonies that transfer command and control in the cockpit.

4. Lack of training for high altitude stall recovery.

5. Activities performed in low altitude stall recovery training scenarios are orthogonal to a successful recovery in high altitude stall scenarios.

6. Automation surprise followed by alert saturation combined with incongruous perceptual signals and loss of situational awareness.

7. Aircraft control systems that failed to resolve the ambiguous delegation of authority (double PF) dilemma.

8. Highly compressed timeline of events in a physically disorienting environment.

9. What else?


I hate myself for watching the news about it at the time.

That's the flight where seeing all the details about it on television gave me the fear of flying that I have these days, especially when above the ocean.

I just can't overcome this.


I suppose you've heard it all, like the fact that there are more than 10000 planes currently in the air and we haven't heard about a plane crash for weeks now. If you go to https://globe.adsbexchange.com and zoom out you can see all the planes around the world, flying safely. I suppose the only way through the fear is to fly and realize at the end that you're fine (although obviously IANAP)


Also note that the last deadly airline crash in the US was all the way back in 2009. We've gone 14 years without a single one, which is a pretty remarkable safety record for the FAA and NTSB.


I don’t know if it’s any comfort to scared fliers but even in the incredibly rare circumstances you are involved in a crash?

It’s gonna be over quick.

It’s gonna be bang, people are gonna scream, you’re gonna be the most scared you’re ever gonna be for like 15-90 seconds and then it’s all gonna be over.

That’s if you even get to hear the bang in the first place.


If you are involved in a plane crash then statistically you survive.

So a microscopically small chance of something bad happening, times a not-so-small chance of dying in a plane crash, equals nothing to worry about; you'd have to be incredibly lucky to be so unlucky.


Maybe I’m just fucked up but I think you missed the humor in describing to a guy who is terrified of flying exactly how he would die but that it’s all gonna be okay.


haha. The thing is that when you have this kind of fear, you want to avoid feeling it at all costs. Even knowing that it is extremely rare, even knowing that it's easier to die sitting in my chair while I type, for whatever reason.


I can’t tell you how happy I am you got it.


> Also note that the last deadly airline crash

Do you mean a total loss of a passenger airline?

Because there was:

- "Prime Air" crash in 2019 ( https://en.wikipedia.org/wiki/Atlas_Air_Flight_3591 ), it was a total loss of a cargo airplane.

- A deadly crash in SF in 2013: https://en.wikipedia.org/wiki/Asiana_Airlines_Flight_214

And a couple more deadly incidents.


Sorry, should've said "of a US airline".

I wouldn't count cargo planes if I were evaluating risk as a passenger.


The real scare accident for me is the Helios flight 552: https://en.wikipedia.org/wiki/Helios_Airways_Flight_522

Due to a maintenance error, everyone on the plane, including the crew, blacked out from hypoxia, and the plane crashed into the ground. There are horrific pictures from the site.

What made me stop being afraid of flying was getting familiar with the technicalities of flight; it's less scary the more you understand it.


OP also has an article on Helios 522, for those interested: https://admiralcloudberg.medium.com/lost-souls-of-grammatiko...


> A brief interruption to their airspeed indications, lasting less than a minute, had thrown two trained Air France pilots into a state of paralyzed agitation.

I'm generally suspicious of technical solutions to human-factors problems. But I can't help wondering if some combination of voice stress analysis and measuring pilot coordination might be able to produce a warning that would have prompted the flight crew to see that they had stopped flying in an orderly way.

Easy to speculate, though, when you're not in that situation.


> But I can't help wondering if some combination of voice stress analysis and measuring pilot coordination might be able to produce a warning that would have prompted the flight crew to see that they had stopped flying in an orderly way.

I guess I don't know what you could do with that, flashing another alarm at an already overstressed flight crew seems like it's unlikely to help?


The most likely thing to help in something like this, is for the plane to realize it's going to die, and just take over.

And that has usually been the "airbus way", whereas Boeing has more had a notion of throw things to the pilots often enough so they know what they're doing.


The plane can't take over when it has no idea what its airspeed is. That's what caused this whole chain of problems in the first place.


We expect human pilots to be able to stop the plane from crashing when the airspeed is unknown. If that’s possible for humans, why not for computers?


If a computer could have done it, it would not have needed to have dropped out of normal law to begin with.

When in normal law, the plane pretty much flies itself. When the computer has uncertainty with its sensors, it drops out of normal law, and expects the pilots to fix the problem.

You're asking that the computer, at the very moment it is so uncertain of its sensors that it can't fly under normal law and expects the pilots to fix the problem, should actively stop the pilots from taking the 'wrong' actions.

What do you think would happen when the pilots are taking the right actions, but the computer's faulty sensors end up preventing them from doing so?

Who is ultimately in charge in a sensors-out situation? The human? Or the computer? Which of these do you believe should have the final say on flight decisions?

----

If you say 'The computer', I'll have to ask you: How well did that work out for the 346 people killed by Boeing's MCAS, which, thanks to a faulty sensor, was utterly convinced that the 737 was too nose-up, and happily flew the damn thing right into the ground, despite the best efforts of the human pilots.

Asking the computer to fly the plane when its sensors are broken is like asking the pilot to fly the plane with no eyes.


In this case - the plane was rapidly descending, and the pilots were completely failing to do anything to fix the situation.

Maybe at some point it became clear that the pilots were doing nothing to actually save the plane, and then the computer could have been justified in ignoring them, and trying to solve the problem itself?

> If you say 'The computer', I'll have to ask you: How well did that work out for the 346 people killed by Boeing's MCAS, which, thanks to a faulty sensor, was utterly convinced that the 737 was too nose-up, and happily flew the damn thing right into the ground.

Because the programming of that computer was criminally stupid. It continued making an assumption despite the fact that the other evidence available to it was sufficient to conclude the assumption was false, but it wasn't programmed to consider any of that other evidence. That a computer with crap software kills people is a problem with that crap software (and the failed regulatory system that allowed it), and not evidence against any proposal not involving such software.


And maybe in JT 610 and ET 302, the pilots should have been justified in ignoring the Boeing computer, which was making bad decisions based on a faulty sensor reading, and was trying its damn hardest to fly the plane into the ground (And succeeded, killing everyone on board, despite the best efforts of the pilots).

How about we let the system whose sensors are functioning ultimately be in charge?


At some point you have to have "disconnect everything, I'm going to fly this by hand" situations where the pilot can kill everyone (intentionally or no).

The goal is to reduce those as far as possible, and we may be at or close to the point where any further improvements in one area cause problems in others.

But I still think this particular situation could have been helped somehow. Maybe planes need emergency deployable pitot tubes like they have emergency backup ram air turbines for power.


A million different things would have helped in this particular situation. A stall warning that doesn't suddenly cut out when the plane exceeds a particular angle of attack [1] (That's an automation failure!). Emergency pitot tubes (That prevents the plane from dropping out of normal law!). Better pitot tubes (That prevents the plane from dropping out of normal law!). A pilot that actually received proper training (That's a human failure!). Not averaging control inputs between the pilot and the copilot (That's a design failure!).

I don't think the takeaway from this is 'In an emergency situation, where the plane knows that it can't make good decisions, it should lock the pilots out and try to fly itself.' By definition, if the plane knew how to fly itself, it wouldn't be an emergency situation!

----

[1] If the plane is so confused that it can't even detect that it's in a stall[2], I can't trust it to fly itself.

[2] The stall alarm would turn off when the plane was stalling, and would turn on when it was recovering from a stall!
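To make the dual-input point concrete, here's a toy model in Python (not the actual Airbus logic; strictly speaking the sidestick commands are summed and then clamped to a single stick's authority, which is what people loosely call "averaging"):

```python
def combined_pitch(left: float, right: float) -> float:
    """Toy model of dual sidestick pitch combination.

    Inputs are in [-1.0 (full nose down), +1.0 (full nose up)].
    The two commands are summed, then clamped to one stick's authority.
    """
    return max(-1.0, min(1.0, left + right))

# One pilot pushes full nose-down while the other pulls full nose-up:
# the combined command is zero, and neither feels the other's input.
print(combined_pitch(-1.0, +1.0))  # 0.0
```

The point being: with non-linked sticks, opposite corrective inputs can silently cancel out unless one pilot explicitly takes priority.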


Yeah, any crash has any number of things that would have prevented it, and Airbus has had issues with the computer knowing better before (the famous airshow crash, for example: https://en.wikipedia.org/wiki/Air_France_Flight_296Q - the pilot wasn't allowed to stall the plane and it crashed; however if he HAD been allowed to stall the plane it probably would have crashed in a slightly different way).


As I said, MCAS was badly designed software - the computer would have had information from other sensors from which it would have been able to infer that it was crashing the plane, but it wasn't programmed to consider them. Badly designed software is not an argument against computers, it is an argument against badly designed software.


> Who is ultimately in charge in a sensors-out situation? The human? Or the computer? Which of these do you believe should have the final say on flight decisions?

This is the root of the main Airbus/Boeing disagreement about design; Airbus leans towards the "pilot and dog" [37] arrangement, Boeing leans toward "let the pilot figure it out".

But the reality is that the more the computer can handle strange situations, the further the pilots get from being able to react to even stranger situations. That doesn't necessarily mean you shouldn't have additional laws to prevent going straight from normal law to "whelp, it's your plane now, bro".

[37] Old joke: https://jalopnik.com/the-thought-of-a-single-pilot-airliner-...


The plane can realize the airspeed must be incorrect, because the altimeter is dropping, the engines are at full throttle, and the nose is pointing up, ergo something must be wrong.

Even leveling the plane off would probably have recovered from the stall, if done early enough.

And the plane knew it was stalling, it kept yelling "STALL, STALL" - the stall warnings are triggered by another sensor, not the pitot tubes.


I agree with both your points. It's a difficult problem. Forced engagement of the autopilot might possibly have helped the pilots of f447 - but its unlikely to be safe in all situations or be acceptable to flight crews.


I strongly recommend Mentour Pilot[0] and his many stories about airplanes issues on YouTube.

[0]: https://m.youtube.com/watch?v=e5AGHEUxLME


He feels like tiktok videos to me, and is therefore unwatchable unfortunately.


I'm genuinely curious how an hour-long YouTube video, meticulously researched, scripted and animated feels like tiktok to you.

The production value on those videos is really high. They are information dense and the creator goes out of his way not to sensationalize.


Well, to answer your question, it's because I thought "Mentor Pilot" was an entirely different youtube channel! :/ I'll like this one


Just makes me glad how small the consequences are if I have a day of shitty performance at work


Yes. A few years after this event I broke production of a payment service while working at Air France: an avoidable error not caught during testing and validation. We had to roll back after some time, and nobody died or even talked about it.


As someone who follows aviation and flew the Flight Simulator quite a bit when young, I don't quite understand how it all happened.

The mechanics are clear, but aren't pilots trained _ad nauseam_ precisely against this kind of event? It reads like this was two panicking guys shouting at each other.

By contrast, listening to the recording of the Hudson River landing, it was stunning just how eerily calm the pilots were. You hear them opening the manual and reading from it, IIRC!

I don't mean it like, they're dumb cos I watched a YouTube video, I just don't understand. Especially since, as the article claims, pitot tubes all freezing at the same time is somewhat common.


I think it's just an "everybody has a plan until they get punched in the mouth" situation. In an actual emergency some people react well and go back to their training, others panic & freeze up.

And it's hard to predict, you do tons of emergency situations during all the flight training they would have gone through in sims and actual planes, but in the back of your mind you still know it's coming and not a real emergency when the instructor pulls the power out to simulate an engine failure or tapes up a piece of paper to block the airspeed indicator.


If you have an hour to spare, I found this video by an experienced pilot extremely helpful: https://www.youtube.com/watch?v=e5AGHEUxLME


You're right, a great video, thanks for the top tip.

If nothing else, it provides complexity to the situation, which I felt was somewhat missing from the article.


My understanding is that this incident is one of the reasons why pilots are now trained ad-nauseum for such events, but that training for things like high-altitude stalls was not the norm previously


Does anyone think lack of sleep was the major contributor to this event, or was it the invalidation of all three speed sensors?

I think we do test pilots for alcohol and drugs before a flight, but can we also test them for reaction time and run them through a couple of hazard scenarios? Do you think that is too much?

About the sensors, is there any inexpensive way to increase the heating, or anything else to prevent it from happening again?


The main takeaway is that somehow the pilot who kept holding the stick back was under the assumption that the correct response to a stall in cruise was to apply full TOGA thrust, rather than to recover from the aerodynamic stall. This is likely because in thicker air, close to the ground, that actually works. It also does not help that a mid-cruise stall is extremely unlikely in normal operation, since normal law[1] prevents keeping the angle of attack that high. They also hit an edge case so unforeseen you could consider it a design fault: their measured speed was too low for the stall warning to sound (the airplane assumed the angle-of-attack inputs must be invalid), so every time they pitched down to recover, the stall warning returned, and every time they pulled back into the stall, it disappeared.

I really recommend watching a breakdown of the instruments during the incident.[2] Things happen fast.

[1]: https://docs.flybywiresim.com/pilots-corner/advanced-guides/...

[2]: https://www.youtube.com/watch?v=0a06A78iXnQ
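The stall-warning edge case is easier to see as code. A minimal Python sketch, assuming the behavior described in the BEA report (the 60 kt validity threshold and the stall AoA below are illustrative, not the certified values):

```python
AOA_STALL_DEG = 10.0       # illustrative stall angle of attack
MIN_VALID_SPEED_KT = 60.0  # below this, AoA data is treated as invalid

def stall_warning(aoa_deg: float, airspeed_kt: float) -> bool:
    """Fire the stall warning only while the AoA data is considered valid."""
    if airspeed_kt < MIN_VALID_SPEED_KT:
        return False  # AoA rejected as invalid: warning suppressed
    return aoa_deg > AOA_STALL_DEG

# Deeply stalled with very low measured airspeed: no warning.
print(stall_warning(aoa_deg=40.0, airspeed_kt=50.0))  # False
# Pitching down raises measured airspeed, and the warning RETURNS.
print(stall_warning(aoa_deg=35.0, airspeed_kt=90.0))  # True
```

So the correct recovery input (nose down) made the cockpit louder, and the incorrect one (nose up) made it quieter, which is exactly backwards from what an overloaded crew needs.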


Thanks. Looks like a string of unfortunate events thrown together in a short period of time.

I hope I'm never in that position...


Every six months we go in the simulator and practice all kinds of failures and disasters.

But lack of sleep is a real issue. Even more on the US side since rest rules are worse than European. Somehow it doesn't get the attention it deserves.


> But lack of sleep is a real issue. Even more on the US side since rest rules are worse than European. Somehow it doesn't get the attention it deserves.

IMO this is almost a cultural issue. We don't prioritize sleep as much as we should and even hold lack of sleep as something to be idolized (i.e. sleep = laziness). How many times do we hear about those CEOs or Founders that wake up super early after only 5 hours of sleep. They're driven, they're hungry, they have something you don't. You're just lazy.

Everyone "knows" we need more sleep just like we know we should floss every day but we have better things to do.


On a side topic, I'm aghast to learn that preschool and primary schools start at 7:40.

God it's going to be tough for my son who refuses to sleep before 10pm.


I think its well established that young children need at least 11-12 hours of sleep a night (depending on age). That would mean bedtime (or asleep by) 6:30pm the night before. These natural requirements don't fit well with societal norms.

I've noticed huge changes in my mood and the mood of my children when we get enough sleep. In my experience their meltdowns/tantrums (and mine as an adult) are either related to lack of sleep or hunger.

Sleep deprivation is a real problem that many may not realize until they get into the rhythm of healthy sleep.


You can enforce a bedtime for any child. You have to consistently enforce it for at least 2 weeks, and you have to consistently wake them up earlier. Depending upon the child, it may be a battle for 2 weeks, but it's a battle you can win, if you want to.


Re your last question, the article states Air France was in the middle of upgrading the pitot tubes across its fleet when the accident happened.


Thanks, that's good to know.


It might just be the way it's written, but Dubois leaving the cockpit seems to have come at the worst possible time too, right before they were about to hit a storm, with a nervous junior officer at the controls. Then again, he may have known he really needed the sleep, which is itself a symptom of the underlying lack-of-rest problem.


Since I started studying airline accidents, this has been one of the most fascinating in terms of human aspects and Crew Resource Management.

The fact that the whole situation deteriorated less than 10 minutes after the Captain went to rest is one of the biggest contrafactuals about this case.


> contrafactuals

As in “if the captain didn’t go to sleep, they’d have been fine”?


Not OP, but more specifically, if the most experienced flight officer didn't leave the cockpit just as they were flying into the worst part of a storm and left the matter to the reluctant and nervous junior officers.


Fine, it's a strong word in that context; the likelihood of the Captain and Flying Pilot 2 making the same inputs as FP1 is much lower.

The Captain also missed the screen disagreement that happened.


I wasn’t questioning your use of the term. It was my first time encountering it and was just trying to gather the context of your use (and understand a new word). I think the usage there was fine


I'm amazed that the autopilot kicked off just because the pitot tubes froze. The A330 has an inertial reference unit as well as (almost certainly) GPS, both of which could have provided ground speed. Air speed could be extrapolated from previous air speed and inferred from ground speed, engine thrust, and known weather. IRU and GPS could also have provided altitude and thus climb rate. And we know the plane had a barometer, although it had to be corrected for airspeed according to TFA. I'm guessing the plane also had a radar altimeter, but maybe not.

With all these sources of information, the autopilot could have said "Hey the pitot tubes are giving me weird data, but no worries. Still flying the plane. Carry on."

Why didn't that happen?
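To illustrate the kind of cross-check I mean, here's a toy sketch with made-up thresholds (certifying anything like this is of course the hard part, and this is nothing like real avionics code):

```python
def airspeed_implausible(pitch_deg: float, thrust_pct: float,
                         climb_rate_fpm: float) -> bool:
    """Flag the measured airspeed as suspect when the other instruments
    disagree with it: nose-up attitude, near-full thrust, and a rapidly
    unwinding altimeter together suggest a stall, whatever the pitots say.
    All thresholds are illustrative."""
    nose_up = pitch_deg > 10.0
    high_thrust = thrust_pct > 90.0
    falling_fast = climb_rate_fpm < -5000.0
    return nose_up and high_thrust and falling_fast

# Roughly the AF447 state late in the descent:
# ~15 deg nose-up, TOGA thrust, around -10,000 fpm.
print(airspeed_implausible(15.0, 100.0, -10000.0))  # True
```

Even a crude consistency vote like this would at least let the system say "the pitots are suspect" instead of handing over a plane full of contradictory alarms.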


> Air speed could be extrapolated from previous air speed and inferred from ground speed, engine thrust, and known weather.

I think this is probably the main sticking point. I don't think you can do this accurately without some direct measurement. If the plane thinks its air speed is fine but it's actually approaching its stall speed because it guessed its airspeed incorrectly, that'd be really bad.


Stall is determined solely by angle-of-attack, which TFA points out. Airspeed doesn't factor in to whether the computer thinks it's about to stall.

Here's the relevant sentence: "Crews involved in similar incidents reported that they assumed the brief stall warnings were generated by the erroneous airspeed readings, a not unreasonable interpretation which breaks down only when one learns that the stall warning calculations are based on angle of attack and do not incorporate any airspeed data."


Except that AoA vanes are not perfect and are speed sensitive.

> From the A330/A340 FCTM (Flight Crew Training Manual) section 8.110.4:

> The ADRs provide a number of outputs to many systems and a blockage of the pitot and/or static systems may also lead to the following:

> …

> Alpha floor activation (because AOA outputs from the sensors are corrected by speed inputs)

From what I can tell, these systems are all pretty complicated and have undergone significant real world testing to ensure that the expected protections actually work, assuming the flight computer is receiving accurate data. Which is precisely why they have Alternate Laws - if they can’t guarantee that these protections can work, then they won’t.


On a slightly different note, this sort of thing is what makes me question whether we can have an orderly progression from the current Level 2 self-driving in cars to Level 5. When a plane does most of its flying by itself, if it can cause this much chaos by handing control back to a trained pilot without much warning, how much worse would it be when a car hands control back to an inexperienced, distracted driver in tricky driving conditions... especially considering that reaction times need to be much faster on the road?


This is a fabulously well written article in the means of a technical thriller.

"All of this happened near instantaneously, leaving the pilots completely in control of the airplane with little advance warning."

Complex systems leave such a small window for handling or preventing failure that it is impossible to deal with them manually unless we have very good control panels and copilots.

That's why we need a lot more work on observability and interpretability while operating complex systems.


HN seriously just needs to watch Air Disasters. Goes into the details of the investigation, financial consequences, and changes in regulations after disasters.


IMHO something is still missing. Bonin's father was a pilot (at Air France, AFAIK), and I can't help but think the likelihood that he was the best candidate was low, and that legacy played a part.

I also wonder if he had much passion for flying or was just following in his father's footsteps. I wish they would value passion more, easily demonstrated with military/glider (I must be missing some..) experience.


99% Invisible had a great podcast on this … it’s a particularly interesting listen with all the frothing over AI.

https://99percentinvisible.org/episode/children-of-the-magen...


One thing that stands out to me is the incorrect use of the word "law" when referring to flight modes. Is this just a bad translation? Are any native French speakers familiar with the Airbus documentation able to comment if the concepts make more sense in French?


The word loi (law) is indeed used in French controls engineering, not only for planes. See https://fr.m.wikipedia.org/wiki/Loi_de_commande_de_vol

I can't say whether it is a good or bad translation, to be honest; I believe Airbus themselves decided on the English term "law" to translate the French term.



I updated in the meantime, what isn't?


> One thing that stands out to me is the incorrect use of the word "law" when referring to flight modes. Is this just a bad translation? Are any native French speakers familiar with the Airbus documentation able to comment if the concepts make more sense in French?

The word "law" can mean several things depending on the context and it's not uncommon to hear of a system following some subset of rules as operating under a certain "law".

If it's more palatable, you can `s/law/mode/g` even though you may technically have a few different possible modes under some higher "law/regime" [0]

[0] https://en.wikipedia.org/wiki/Flight_control_modes
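To sketch the `s/law/mode/g` idea: the law names below come from public Airbus documentation, but the degradation trigger is a drastic simplification of the real air-data voting logic (illustrative only):

```python
from enum import Enum

class ControlLaw(Enum):
    NORMAL = "normal"        # full flight-envelope protections
    ALTERNATE = "alternate"  # some protections lost (e.g. AoA protection)
    DIRECT = "direct"        # stick deflection maps directly to surfaces

def select_law(valid_airspeed_sources: int) -> ControlLaw:
    """Step down through control laws as sensor confidence degrades.
    The voting thresholds here are a simplification, not the real logic."""
    if valid_airspeed_sources >= 2:
        return ControlLaw.NORMAL
    if valid_airspeed_sources == 1:
        return ControlLaw.ALTERNATE
    return ControlLaw.DIRECT

print(select_law(1).value)  # alternate
```

Viewed this way, a "law" really is just a mode in a state machine, each with a different contract about which protections the computer still guarantees.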


This crash was also covered on an episode[1] of the Black Box Down podcast.

[1]: https://roosterteeth.com/watch/black-box-down-2020-7-30


I love what the author is doing but I honestly don’t understand the gushing over their writing style. To me, it feels full of speculation and melodrama, exactly the language investigators don’t use when they write the accident reports:

> pilots into a state of paralyzed agitation..

> all the while trying desperately to understand..

> But there is a reason, written between the lines of the cockpit voice recorder transcript, hidden away within the mysterious code that governs human behavior, a key to the secrets of the profoundly irrational. Its lessons could not be more important, even for those who believe themselves above the doomed crew of flight 447, as the boundary between the responsibilities of man and machine grows ever dimmer.

> Overwhelmed by the noise of the warnings, the terrifying vibrations, and the wildly fluctuating instrument readings, his brain seemed to shut down, paralyzed by confusion and fear.

Citations needed.


After this crash the French word for idiot was changed to Bonin.


This brings up an interesting question: if control were fully automated, with the computer entirely in charge, would this have happened?


As shown in the article, when the pitot readings were incorrect and the initial 3-second stall warning appeared, a veteran pilot would simply have let the plane continue on in normal law, without any action. But Bonin over-relied on the autopilot's incorrect action (moving to alternate law), which was itself a result of the incorrect pitot tube readings.

Pitot tube icing is one of those issues where automation becomes entirely useless.


Anyone understand whether the readings fully came back? If so, why wouldn't the plane go back to normal law?


Wow wow. What a read!


I got this:

  while (true) {
     try {
        airspeed = getAirspeed();
     } catch (Exception e) {
       // swallow the failure and keep flying on a stale value
       log.warn("", e);
     }
  }



