Dude you’re describing Initech from Office Space. Kudos for making it sound legit and vague enough that it did take me until the end to fully identify it. But there’s no mistaking “Nina speaking. Just a moment…”
The alternative for most people is Windows, which Microsoft seems hellbent on making worse and worse (I didn’t think that was possible, but hey, here we are). macOS definitely sounds like the lesser of two evils anyway.
But what do I know - the year of the Linux desktop for me was 1996.
Ubuntu Pro is still free for personal use on up to 5 physical machines, which covers my small home network just fine. It is annoying that they withhold security updates unless you fork over your email address, but I don’t recall them trying to sell me anything since I made an account.
And that’s about it, I’d say! I find that everything else is really, really bad. It creaks, it wobbles, it warps, and it did so from day 1. The fan is loud and kicks in quite early. Well maybe the X200 isn’t as bad, but the X220 certainly is. And even after 14 years, it still smells when it gets hot.
Sorry for the rant. I really want to love it, but I just can’t.
Quality also went down with later models - back in 2014 I was laptop shopping, and based on the X2xx series' reputation I tested an X240. It was crap, even the keyboard was super bad. I ended up getting a Dell XPS 13 whose keyboard was miles better, and it still works today.
Well "use it" is a bit of a stretch. I’m a bit of a device hoarder. It's one of my experimentation platforms for Linux stuff, currently running Fedora Kinoite (with Universal Blue).
My daily driver (of sorts, don’t really need a laptop anymore) is a MacBook Pro Late 2013, with NixOS. It’s so much better in every regard, it’s not even funny. It also still has its original battery.
“In a study conducted in Milan, Italy, and published in November 2025, the sight of a person dressed as Batman led to a nearly doubled rate of people giving up their seat to a pregnant woman.”
This is awesome. Probably fun to be the person dressed as Batman.
Airbus is not immune to design & manufacturing issues with fatal consequences, they’re just not top-of-mind these days. A similar issue seems to have ‘cropped up’ on this flight: https://en.wikipedia.org/wiki/Qantas_Flight_72
> Temporary inconsistency between the measured speeds, likely as a result of the obstruction of the pitot tubes by ice crystals, caused autopilot disconnection and [flight control mode] reconfiguration to "alternate law (ALT)".
- The crew made inappropriate control inputs that destabilized the flight path.
- The crew failed to follow appropriate procedure for loss of displayed airspeed information.
- The crew were late in identifying and correcting the deviation from the flight path.
- The crew lacked understanding of the approach to stall.
- The crew failed to recognize the aircraft had stalled, and consequently did not make inputs that would have made recovering from the stall possible.
It's often easy to blame the humans in the loop, but if the UX is poor or the procedures too complicated, then it's a systems fault even if the humans technically didn't "follow procedure".
Both unsophisticated lay observers and capital/owners tend to fault operators ... for different reasons.
Accident studies and, in particular, books like _Normal Accidents_[1] push back on these assumptions:
"... It made the case for examining technological failures as the product of highly interacting systems, and highlighted organizational and management factors as the main causes of failures. Technological disasters could no longer be ascribed to isolated equipment malfunction, operator error, or acts of God."
It is well accepted - and I believe - that there were a multitude of operator errors during the Air France 447 flight, but none of them were unpredictable or exotic, and the system the crew were tasked with operating was poorly designed: it unhelpfully hid layers of complexity that suddenly re-emerged under tremendous "production pressure".
But don't take my word for it - I appeal to authority[2]:
"Automation dependent pilots allowed their airplanes to get much closer to the edge of the envelope than they should have ..."[3].
or:
@ 14:15: "... we see automation dependent crews, lacking confidence in their own ability to fly an airplane are turning to the autopilot ..."[4].
The relief second officer basically pulled up when the stall protection had been disabled, and by the time the other pilot and the captain realized what was happening, it was too late to save the plane.
There is a design flaw though: the sidesticks in modern Airbus planes are independent, so the other pilot didn’t get any tactile feedback when the second officer was pulling back.
You do get an audible "DUAL INPUT DUAL INPUT" warning and some lights though [1]. It is never allowable to make sidestick inputs unless you are the single designated "pilot flying", but people can sometimes break down under stress of course.
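For context, my understanding from public descriptions of the sidestick logic is that simultaneous inputs get algebraically summed (and clamped to full deflection). A toy sketch of that idea - the deadband and all numbers below are invented, not Airbus's actual values:

```python
# Rough sketch of summed sidestick inputs plus a dual-input alert.
# Based on public descriptions of Airbus sidestick logic; all
# numbers and the structure are invented for illustration.

MAX_DEFLECTION = 1.0  # normalized full deflection

def combined_input(left: float, right: float):
    """Sum both sticks, clamp to full deflection, flag dual input."""
    dual_input = abs(left) > 0.05 and abs(right) > 0.05  # invented deadband
    total = max(-MAX_DEFLECTION, min(MAX_DEFLECTION, left + right))
    return total, dual_input

# One pilot pushes forward (-0.5) while the other pulls full back (+1.0):
# the aircraft sees a net pull, and neither stick physically moves to show it.
print(combined_input(-0.5, 1.0))  # (0.5, True)
```

Which is exactly why the audible warning matters so much: the summed command is invisible at the other seat.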
The reality is that CRM is still the most important factor required to have a reasonable chance of turning what would otherwise be a catastrophic aviation incident into something that people walk away from. Systems do fail, when they do it's up to the crew to enact memory items as quickly as possible and communicate with each other like they are trained to.
Unfortunately, sometimes they also fail in ways that even a trained crew isn't able to recover from. That could be a failure that wasn't anticipated, training that was inadequate, design flaws, the human element, you name it. The actions of the crew being put in an accident report isn't an assignment of blame, it's a statement of facts - the recommendations that come from those facts are all that matters.
This is one of those situations where I think it'd be fun to be a flight simulator "operator", finding new ways to force pilots to figure out how to overcome whatever the plane is doing to them. Any pilot that ever comes out of a simulator thinking "like that would ever happen" instead of "that was an interesting situation to keep in mind as a possibility" should have their wings clipped.
Take it with a grain of salt since it's from a movie, but one of the things about Sully setting the plane down in the river was that he drew not just on his experience with the aircraft itself but also on the situational awareness to realize he was too low to safely divert to an airport. He instinctually "skipped" several steps in the procedures to engage the APU, which turned out to be pretty key. The implication being that the procedure was so long that they might not have gotten to the APU in time going step-by-step.
Faulting the crew is a common thing in almost all air incidents. In this case the crew absolutely could have saved the plane, but the plane did not help them at all.
Part of the sales pitch of the Airbus is that the computer does A LOT of handholding for the pilots. In many configurations, including the one that the plane was flying in at the start of the incident, the inputs that caused the crash would have been harmless.
In that incident the computer lost its airspeed feed, and it literally changed the flight controls and turned off the safety limits, and none of the three people in the cockpit noticed. When an Airbus changes flight control modes, the same stick input no longer produces the same response. Something harmless under one set of "laws" could crash the plane under another set of laws. In this case, what the pilot with the working control stick was doing would not have caused a crash, except that the computer had taken off the training wheels without anyone noticing.
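To make that concrete, here's a toy sketch of mode-dependent input handling. The law names are real Airbus terminology, but every number and line of logic below is invented for illustration - this is not the actual control law:

```python
# Toy illustration of mode-dependent control interpretation.
# NOT real Airbus logic: the law names are real, everything else
# (gains, limits, structure) is invented.

from dataclasses import dataclass

@dataclass
class FlightState:
    pitch_deg: float
    angle_of_attack_deg: float

ALPHA_MAX_DEG = 15.0  # invented protection limit

def apply_stick_input(stick_back: float, state: FlightState, law: str) -> float:
    """Return a pitch-up command for the same physical stick input."""
    command = stick_back * 10.0  # invented gain
    if law == "normal":
        # Normal law: envelope protection clamps the command so the
        # aircraft can't be flown past the stall AOA, however hard
        # the pilot pulls.
        allowed = ALPHA_MAX_DEG - state.angle_of_attack_deg
        return min(command, max(allowed, 0.0))
    elif law == "alternate":
        # Alternate law: protections are gone; the same pull is
        # passed straight through and can take the aircraft into a stall.
        return command
    raise ValueError(f"unknown control law: {law}")

state = FlightState(pitch_deg=5.0, angle_of_attack_deg=12.0)
print(apply_stick_input(1.0, state, "normal"))     # clamped: 3.0
print(apply_stick_input(1.0, state, "alternate"))  # unclamped: 10.0
```

Same pilot, same pull, very different aircraft - and the switch between the two branches happened silently mid-flight.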
As a result of the change to the primary controls, one pilot was able to unintentionally place the plane in an unrecoverable state without the other pilots even noticing that he was making control inputs.
Tack on that, past a certain point, the computer intentionally disregarded the AOA sensor reading as erroneous and stopped alerting the pilots that the plane was stalled. You are taught from day one of flight training that if you hear the stall alarm you push the power in and push the nose down until the alarm stops. In this case the stall warning came on, and then, as the stall got worse, it turned itself off, with the computer under the mistaken belief that the plane could not actually be that far stalled. So the one alarm that they are trained to respond to in a certain way to recover the plane from a stall was silenced. If I were flying and I heard the stall alarm, then heard it stop, I would assume that I was no longer stalled, not that the plane was so far stalled that the stall alarm was convinced it had broken itself.
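The perverse part is the validity logic. A toy sketch of the kind of inhibit being described - the ~60 kt validity cutoff matches what was publicly reported about AF447, but the thresholds and structure here are otherwise invented:

```python
# Toy sketch of a stall-warning inhibit tied to AOA data validity.
# The ~60 kt cutoff matches public reporting on AF447; everything
# else is invented for illustration.

STALL_AOA_DEG = 10.0         # invented stall threshold
AOA_VALID_MIN_SPEED_KT = 60  # below this, AOA readings are rejected

def stall_warning(aoa_deg: float, indicated_airspeed_kt: float) -> bool:
    if indicated_airspeed_kt < AOA_VALID_MIN_SPEED_KT:
        # AOA vanes are treated as unreliable at very low measured
        # airspeed, so the reading is discarded -- and with it the
        # warning, even though the aircraft may be deeply stalled.
        return False
    return aoa_deg > STALL_AOA_DEG

# Deeply stalled, but measured airspeed is so low the alarm goes quiet:
print(stall_warning(aoa_deg=40.0, indicated_airspeed_kt=55.0))  # False
# Push the nose down, airspeed rises, AOA is trusted again -- and the
# alarm comes BACK, punishing exactly the correct recovery input:
print(stall_warning(aoa_deg=40.0, indicated_airspeed_kt=80.0))  # True
```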
So yes, the pilots flew the aircraft into the ground, but the computer suffered a partial failure and then changed how the primary flight controls operated.
Imagine if the brake pedal, steering wheel, and accelerator all started responding to inputs differently when your car had a sensor issue that causes the cruise control to fail. Add in that the cruise control failure turns off ABS, auto-brakes, lane assist, and stability control for some reason. Oh yeah, there's a steering control on the other side of the car on the armrest, and the person sitting there can now make steering inputs, but it won't give feedback in your steering wheel; your steering wheel can still be manipulated while the other guy is steering, but it is completely disconnected from the tires when he is. All of the controls are also more sensitive now, and allow you to do things that wouldn't have been possible a few seconds ago. Also, it's a storm in the middle of the night, so you don't have a good visual reference for speed. So now your car is slipping, at night, in a storm, lights are flashing everywhere, and nothing makes sense since the instruments are not reading correctly. However, the car is working exactly as described in the manual. When the car ends up in a ditch, the investigation will find that the cause of the crash was driver error, since the car was operating exactly as it was designed.
Worth noting that Boeing (and just about every other aircraft maker on earth) links the flight controls between the two pilots' positions, and they always behave in the exact same way, so this type of failure could never have happened on a 737, for example.
At the end of the day, this was pilot error, but more in a "You're holding it wrong, I didn't design it wrong" kind of way. After all, there were three people with a combined 20k flying hours, including thousands of hours in that design.
If three extremely qualified pilots that have literal years of experience in that cockpit, who are rigorously trained and tested on a regular basis for emergencies in that cockpit, can fly the thing into the ground due to a cascade from a single human error... maybe the design of the user interface needs a look.
You also conveniently skipped over the parts of the Wikipedia article where they charged the manufacturer with manslaughter and documented dozens of similar incidents, and the entire section outlining the Human-Computer Interface concerns.
Early in my career, I worked for a subcontractor to Boeing Commercial Airplanes. I've worked in Silicon Valley ever since. As a SWAG, the % of budget spent on verification/validation for flight-critical software was 5x versus my later jobs.
Early in the job, we watched a video about some plane that navigated into a mountain in New Zealand. That got my attention.
On the other hand, the software development practices were slow to modernize in many cases, e.g. FORTRAN 66 (though eventually with a preprocessor).
As an aerospace software engineer, I would guess that, if this actually was triggered by some abnormal solar activity, it was probably an edge case that nobody thought of or thought was relevant for the subsystem in question.
Testing is (should be!) extremely robust, but only tested to the required parameters. If this incident put the subsystem in some atmospheric conditions nobody expected and nobody tested for, that does not suggest that the entire QA chain was garbage. It was a missed case -- and one that I expect would be covered going forward.
Aviation systems are not tested to work indefinitely or in every conceivable condition -- that's impossible to test, impossible to prove or disprove. You wouldn't claim that some subsystem works (say, for a quick easy example) in all temperature ranges; you would define a reasonable operational range, and then test to that range. What may have happened here is that something occurred outside of an expected range.
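As a concrete, entirely hypothetical example of "tested to the required parameters" - every name and number below is invented:

```python
# Hypothetical example: a sensor-correction routine verified only
# across its specified operational envelope. Names/numbers invented.

import math

def corrected_static_pressure(raw_hpa: float, temp_c: float) -> float:
    """Apply a (made-up) temperature correction to a pressure reading."""
    return raw_hpa * (1.0 + 0.0001 * (temp_c - 15.0))

# The (invented) requirement says the unit operates from -55C to +70C,
# so that is the range the tests sweep. Conditions outside it are
# simply never exercised -- not because QA was sloppy, but because
# nothing in the requirements asked for it.
def test_correction_within_envelope():
    for temp_c in range(-55, 71):
        out = corrected_static_pressure(1013.25, float(temp_c))
        assert math.isfinite(out)
        assert 900.0 < out < 1100.0  # invented plausibility bounds

test_correction_within_envelope()
```

An input outside that sweep is exactly the kind of "missed case" I mean: the code may still do something, but nothing ever checked what.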
3 dead, 133 survivors, started (undisputedly) with pilots intentionally and with approval trying to fly a plane full of passengers 30 meters off the ground, possibly with safety systems intentionally disabled for demonstration purposes.
Past that, there are disputes about what kept them from applying enough power and elevator to get away from the trees they were flying towards, including allegations of the black box data being swapped/faked.
In most capitalist organizations, QA begs for more time. "Getting to market" and "this year's annual reports" are what help cause situations like this - not the working class, who want to do a good job.
Not involved with this particular matter. What I would want to see is logs of the behavior of the failing subsystem and details of the failing environment. It may be possible to reproduce this in an environmental testing lab, a systems rig lab, or possibly even in a completely virtual avionics test environment. If stimulating the subsystem with the same environmental input results in the error as experienced on the plane, then a fix can be worked from there. And likewise, a rollback to a previous version could be tested against the same environment.
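In code terms, that replay idea might look something like this - purely illustrative, every name here is hypothetical:

```python
# Hypothetical sketch of replaying logged environmental inputs against
# a software model of the failing subsystem. All names are invented.

import json, os, tempfile

def replay_log(log_path: str, subsystem_step) -> list:
    """Feed recorded input frames (one JSON frame per line) to a
    subsystem model and collect outputs for comparison with the
    behavior seen in flight."""
    outputs = []
    with open(log_path) as f:
        for line in f:
            outputs.append(subsystem_step(json.loads(line)))
    return outputs

# Tiny demo with a fake two-frame log and a trivial model:
frames = [{"t": 0, "airspeed_kt": 250}, {"t": 1, "airspeed_kt": -1}]
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for frame in frames:
        f.write(json.dumps(frame) + "\n")
    path = f.name

def toy_model(frame):
    # A made-up model that faults on a physically impossible input.
    return "FAULT" if frame["airspeed_kt"] < 0 else "OK"

print(replay_log(path, toy_model))  # ['OK', 'FAULT']
os.remove(path)
```

If the replay reproduces the fault, both the fix and any rollback candidate can be regression-tested against the exact same recorded environment.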