From the article: "Operators of Embraer Phenom 300 business jets are being urged to avoid the area entirely. “Due to GPS Interference impacts potentially affecting Embraer 300 aircraft flight stability controls, FAA recommends EMB Phenom pilots avoid the … testing area and closely monitor flight control systems,” the Notam reads."
That is beyond scary; how anyone can defend having critical aircraft control systems rely on an input which may be turned off at will is beyond me.
Let us at least hope the system fails gracefully and notifies the pilot that something odd just happened and you will have to do your own flying from this point on, rather than just going titsup and be done with it.
I'm pretty sure Iran captured that US drone with GPS spoofing. I have no idea how you could provide that data in a secure way. But, uh, I can envision some scenarios where the bad guys would want to remotely take over planes.
Also, I'm not so sure about the graceful failure. Hypothetically, the human takes over. But if the human hasn't actually flown in months or years they'll likely be kind of rusty. Now you're throwing them into a complex situation: the autopilot can handle the simple stuff. Coupling weak skills with difficult situations seems like a bad idea.
I kinda think autopilots and self-driving cars should give you a limited clock. Every, say, hour you do it manually buys you a few hours of autopilot. Just to keep skills relatively fresh.
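To make the idea concrete, here's a toy sketch of that "skill clock" in Python. The 3:1 earn ratio and the class name are made up for illustration; any real scheme would need tuning and safety overrides.

```python
class SkillClock:
    """Toy model: each hour of manual flying banks a few hours of autopilot time."""

    def __init__(self, ratio=3.0):
        self.ratio = ratio   # autopilot hours earned per manual hour (made-up number)
        self.credit = 0.0    # banked autopilot hours

    def fly_manual(self, hours):
        self.credit += hours * self.ratio

    def fly_autopilot(self, hours):
        if hours > self.credit:
            raise RuntimeError("autopilot credit exhausted; hand-fly for a while")
        self.credit -= hours

clock = SkillClock(ratio=3.0)
clock.fly_manual(1.0)       # one manual hour banks three autopilot hours
clock.fly_autopilot(2.0)
print(clock.credit)         # 1.0 hour of credit remaining
```

In practice you'd want the lockout to degrade gracefully (warn, then restrict) rather than hard-fail mid-flight, but the accounting itself is this simple.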
There's a recent EconTalk episode that delves into transferring control from machine to human, specifically regarding the Air France disaster:
"The Air France story is a story about a failed handoff, where the automation onboard an airplane found a relatively minor fault and handed control of the plane back to the human pilots, too suddenly and ungracefully surprised them. And they had lost some of their skills flying too much with the automated systems and lost control of the airplane. Which actually was a perfectly good airplane about a minute into the crisis. And so they went from tens of thousands of feet flying through the sky and ended up spiraling into the ocean, tragically losing all aboard."
That tragedy was probably the seed of my concern. Lately I've been thinking a lot about how automated systems become brittle. The systems don't change; people forget the dependencies and requirements, and, in something real-time like an airplane, the feel of the system.
A configuration management system that incorporated spaced repetition would be cool. Every once in a while, go into an incremental mode where you actually type in the commands. It has the added bonus of getting new people aware of the system. Sure, you can always just go read the source and figure it out. Having the system force some human awareness from time to time would be handy.
Why hand control back to a human when you can do even better?
Have them both handling control at the same time, reconcile the inputs in a sane manner (or have a master/slave where one is providing phantom inputs). Added benefit of being able to error check each other.
This was exactly the problem in the Airbus crash shown above; reconciling the inputs is hard. One pilot kept holding one stick back, unbeknownst to the other. This plane averages them by default, and therefore the plane stalled. That couldn't happen on a Boeing because the yokes are yokes (inputs on one are easily visible to both pilots) and physically linked.
(Supposedly the plane is supposed to loudly complain if the inputs diverge. I'm not sure if this happened.)
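The averaging-plus-alert scheme described above can be sketched in a few lines. This is a toy model, not Airbus's actual control law; the divergence threshold and the -1.0..+1.0 input range are assumptions for illustration.

```python
def reconcile_sticks(left, right, divergence_limit=0.5):
    """Toy model of averaged sidesticks (inputs normalized to -1.0 .. +1.0).

    Returns the averaged pitch command plus a dual-input warning flag when
    the two pilots' inputs diverge beyond a threshold. The threshold is a
    made-up number, not the real aircraft's.
    """
    command = (left + right) / 2.0
    dual_input_warning = abs(left - right) > divergence_limit
    return command, dual_input_warning

# One pilot holds full back stick (+1.0) while the other pushes full
# forward (-1.0): the averaged command is zero, masking the conflict.
cmd, warn = reconcile_sticks(+1.0, -1.0)
print(cmd, warn)   # 0.0 True
```

The failure mode is visible right in the example: the warning flag fires, but the averaged command silently cancels out, which is exactly why a buried alert is a poor substitute for linked yokes.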
On the other hand, typical Airbus A330 operators probably have a lot better training about what diagnostic messages mean than typical HP LaserJet operators do.
On the other other hand, two pilots issuing opposite control inputs generally has a bit more impact than an empty printer tray. Boeing's relatively low-tech solution of making the two controls physically interlinked seems like a common-sense solution to a potentially immense issue.
But obviously the training did not suffice in the Air France case, because those were experienced Airbus pilots and they did not detect and resolve the contrary inputs.
As others point out, just because the warning is displayed as expected, does not make it a good warning.
In flight testing, whenever we encounter situations like this (the aircraft does not react as the pilot expects for a given input), we don't necessarily blame the pilot. Especially if multiple pilots get into the same mess. Then we know it's time to change how the aircraft behaves and bring it in line with what the pilots expect.
A little blinkenlight among the masses is easy to miss. Having your controls physically fight you because the other pilot is doing something is impossible to miss.
It was actually a voice alert. I'm not making any judgement about its appropriateness.
In fact by the time it arose, the aircraft may already have been in an unrecoverable dive (or whatever the word is for rapid descent while stalled), I don't remember. So it's not a primary cause of the accident.
A voice alert in a cockpit where people are probably frantically shouting at each other is just as ridiculous.
There's no substitute for physical feedback. It's instantaneous and impossible to miss. It's completely beyond my comprehension that Airbus eliminated it.
As I recall, the one pilot held full back stick all the way down, while the other pilot tried and failed to recover. Releasing the back stick and executing a normal stall recovery would have saved the airplane at just about any time.
I understand the Boeing approach of physical linkage to be the best idea, rather than any kind of indirect force feedback which is possible to misinterpret.
As for your last point, I'm not an expert, and I'm not as incredulous about this scenario as you seem to be. (As a teenager I did fly a lot in PC flight simulators, though.)
The non-technical Vanity Fair article I referred to claims that the accident investigators estimated the last point at which the aircraft could have recovered from the dive was around the time it passed 13,000 feet:
"Though precise modeling was never pursued, the investigators later estimated that this was the last moment, as the airplane dropped through 13,000 feet, when a recovery would theoretically have been possible. The maneuver would have required a perfect pilot to lower the nose at least 30 degrees below the horizon and dive into the descent, accepting a huge altitude loss in order to accelerate to a flying angle of attack, and then rounding out of the dive just above the waves, pulling up with sufficient vigor to keep from exceeding the airplane’s speed limit, yet not so violently as to cause a structural failure. There are perhaps a handful of pilots in the world who might have succeeded, but this Air France crew was not among them. There is an old truth in aviation that the reasons you get into trouble become the reasons you don’t get out of it."
If that's what they did then I believe it. I'm coming from a light aircraft background where stall recovery takes much less altitude. Still, they had about two thirds of their descent to initiate recovery before it became too late. If the controls had been linked, the other pilot would have discovered the problem instantly.
I don't understand why you'd want this averaging behavior at all.
When trying to get the plane onto the center of the runway, both pilots' actions are probably pretty correlated, and averaging could remove any noise (e.g., twitches).
But in many other circumstances, it seems like it would be downright dangerous. Something's in front of the plane. One pilot goes to the left, the other pulls to the right, and the system averages the commands and smashes straight into it.
The benefits of a marginally straighter landing don't seem like they would outweigh the potential of a massive catastrophe.
Perhaps all critical automated systems should have some level of Chaos Monkey testing built into them, though ideally the system would be robust enough not to fail even when the injected fault hit.
Chaos Monkey is great. I'm suggesting something slightly different. Imagine you have a bunch of CentOS 6 machines that have been running great for years and years. Upgrades go smoothly and deployment is a breeze. Now you need to upgrade to CentOS 7. Stuff that used to happen in init.d now happens in systemd, for example.
Reviewing changes by hand every few months keeps you aware of what needs to happen. Automation itself doesn't get brittle; people forget where systems are flexible and where they're inflexible. Spaced repetition would give people a chance to keep up with the current design, so when that crazy security vulnerability happens, you can jump in and change stuff knowing how it works, rather than having to figure it out on the fly.
I don't think it would take much. Configure a box today, tomorrow, next week, next month, 3 months, then perhaps every 6 months.
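That cadence is easy to generate mechanically. A minimal sketch, with the interval values taken straight from the schedule above (the function name and two-year horizon are my own):

```python
from datetime import date, timedelta

# Review intervals: today, tomorrow, next week, next month, 3 months,
# then every 6 months thereafter.
INTERVALS = [timedelta(days=d) for d in (0, 1, 7, 30, 90)]
STEADY_STATE = timedelta(days=180)

def review_dates(start, horizon_days=720):
    """Dates on which a given config should be rebuilt by hand."""
    dates = [start + delta for delta in INTERVALS]
    when = dates[-1]
    while (when - start).days + STEADY_STATE.days <= horizon_days:
        when += STEADY_STATE
        dates.append(when)
    return dates

sched = review_dates(date(2016, 1, 1))
print(len(sched))          # 8 hand-builds over the first two years
print(sched[1].isoformat())
```

Eight manual rebuilds over two years is a pretty modest tax for keeping the knowledge alive.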
There are very smart people that sort of intuitively keep up with those kinds of changes. But those people change jobs. How do you get a mere mortal up to speed on 20-30 machine configurations? Config management can configure hundreds of machines a day, no problem. But it's easy to forget what's really going into each of those boxes.
This is more about preserving organizational awareness, not so much robustness of a running system.
The US military has never relied on GPS as a primary guidance system; when GPS was designed, it was assumed the Soviets could disable it at will. Consequently, no amount of spoofing GPS will cause a serious navigation error. The notion of purely GPS-guided systems in the US military is a widely repeated myth that never seems to die, even though cursory research confirms it is a myth.
US military systems pervasively use inertial navigation (INS), many of which can accept micro-corrections from the GPS system. However, they only accept corrections within the intrinsic error bound of INS, which is quite small. Spoofing GPS can buy you a deflection of meters at most, not kilometers.
The US military is developing new types of inertial navigation systems so accurate that they obviate the use of GPS altogether. (And the media will probably still call those systems "GPS-guided".)
The US military has many use cases for GPS far beyond weapon guidance and navigation. Many of these use cases cannot be served by INS and are both high value and done primarily in peace time i.e. the loss of the GPS constellation in a war won't affect the military utility of having the GPS constellation generally. Spoofing these use cases is still a giant nuisance.
There are a couple things that people forget about US military INS:
US military INS actually operates as a giant swarm of INS computers with other sensor inputs that compare notes -- wisdom of crowds -- to correct accuracy. If one INS computer drifts too much, the other INS computers will notice and correct it. Even without the swarm, many larger military systems will have multiple INS computers distributed throughout the platform that can compare notes.
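The "compare notes" scheme can be illustrated with a simple median-consensus filter. This is my own toy sketch of the idea (1-D positions, made-up tolerance), not a description of any actual military algorithm:

```python
from statistics import median

def correct_drift(estimates, tolerance):
    """Toy 'wisdom of crowds' correction for a swarm of INS units.

    estimates: position estimates (1-D here for simplicity) from each unit.
    Any unit that has drifted more than `tolerance` from the swarm median
    is snapped back to the median. Numbers and scheme are illustrative only.
    """
    consensus = median(estimates)
    return [consensus if abs(e - consensus) > tolerance else e
            for e in estimates]

# Three units agree closely; one has drifted badly and gets pulled back
# to the swarm median (~100.05 here).
print(correct_drift([100.1, 99.9, 100.0, 112.7], tolerance=1.0))
```

Real systems would fuse the sensors with something like a Kalman filter rather than a hard snap, but the median makes the outlier-rejection intuition obvious.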
The US military has been at the forefront of inventing exotic INS technology since the 1960s, and the details of their INS capabilities are a closely guarded secret. It is used ubiquitously in almost every combat system they produce. What is known is that even in the 1990s the accuracy of INS significantly exceeded design requirements, and there are current military research programs that suggest they know how to build INS that exceeds current GPS accuracy over long temporal baselines.
Given that they already have a GPS constellation that they use for a wide variety of other cases, GPS-corrected INS guidance is a very cheap way to squeeze some additional precision out of weapons that were already pretty precise in the first place. It allows them to build a dirt cheap and reliable guidance package that is also highly precise in environments where GPS spoofing is not a credible problem. Remember, these particular guidance packages tend to be modular and swappable: use the cheap stuff that leans on GPS more for low tech enemies, use more sophisticated and expensive INS for the high tech enemies.
Certain systems do indeed operate as you say, with INS as the primary guidance/navigation mechanism. Submarines, for instance, cannot rely on a GPS fix. Strategic missiles as well.
But ground forces do indeed use GPS systems. INS systems are very costly to build and maintain, so GPS is a go-to for missions and requirements where it's a good fit. You simply cannot outfit every ground-pounder with an INS system and then keep it properly operational in their task envelope.
Also, parts of the GPS system are quite jam-resistant. Not entirely, but when it was first developed, GPS was pretty secure for its time, using a cryptographically derived spreading code for the carrier.
You are right. The military relies on INS and GPS. Most systems, such as the navigation in an F/A-18, are INS with GPS to maintain accuracy, like you described. I thought JDAM was entirely GPS, but apparently GPS only supplements the INS.[1] Cruise missiles use both, plus terrain mapping.[2] The military could probably handle total GPS denial, but it would be ugly, especially since it may mean no SATCOM either. I ponder what WW3 looks like, and total space denial seems like a strong possibility. Spoofing is probably worse than denial, since it isn't just no information but bad information that could throw off the INS.
> But if the human hasn't actually flown in months or years they'll likely be kind of rusty.
Along the same lines:
I work in manufacturing, and design systems to automate processes. "What is the current, manual process?" is the first thing I ask for when beginning a new project. I then try to design the automated system so that the manual process can be used as a fallback should any part of the automated system fail. What happens when we have a network outage or something similar? The people involved suddenly forget the manual processes that they had used for years, and sit around twiddling their thumbs and calling IT every 5 minutes asking when the system will be back up.
They respect me and I respect them, and I'm far from elegant, so I'm not really sure how you inferred any of that. I just find it funny how fast they forget processes that they did every day for years, after relying on the automation - just like the pilots in the story I was relating to. I'm the one that has to remind them what their manual processes were, which is why I ask the question and document the response first and foremost.
"... Iranian engineer's assertion that the drone was captured by jamming both satellite and land-originated control signals to the UAV, followed up by a GPS spoofing attack that fed the UAV false GPS data to make it land in Iran ..."
Kind of. The GPS signal portfolio is evolving. In the beginning, the encrypted code was broadcast on two frequencies, L1 and L2. The public code was only on L1. So you could in theory jam just L1 to block the un-encrypted portion, but why would you?
In recent years GPS modernization means satellites have been using new codes on L2 (L2C) which are un-encrypted. Only about 18 of the satellites carry that capability at this time.
I agree. At the time people were noting the drone the Iranians were showing off was the wrong color, which would imply they fixed some damage and repainted it.
> But if the human hasn't actually flown in months or years they'll likely be kind of rusty.
Cruising at altitude is not that hard. I'd be far more concerned if they were interfering with the various Instrument Landing Systems during inclement weather.
> Every, say, hour you do it manually buys you a few hours of autopilot.
Most pilots already hand-fly in clear weather during landing and takeoff by default; somewhat for the reasons you suggest, but it's also safer, as it actively maintains pilot situational awareness.
You're generally only going to use the A/P during cruise, departure and approach or during instrument flight conditions.
Not in RVSM airspace above FL290 (29,000 ft). In RVSM [0] airspace it's required to have a functioning autopilot coupled to an altimeter of a certified accuracy (±65 feet).
Most pilots couldn't fly the required accuracy, that high up, for any length of time, with manual hand-flying. Even in a FBW airliner, it would be hard.
The idea for FL290-FL410 is more to have a designated "highway" of sorts, where all craft are flying within the same system. There is no requirement for auto-pilot above FL410, for example... There are plenty of jets that can easily get that high, it's more about aerodynamics and engineering than pilot skill.
There is a separate encrypted GPS signal, which used to have better precision as the public signals had intentional noise added to them.
Of course, GPS is a rather old system, and I'm unclear how you would add encryption in a way that it's not compromised when someone gets access to a decoder, but also doesn't endanger "mission success" because an encryption key punch card is missing and the drone refuses to start. It's like the nuclear launch codes being all zeroes.
As I understand it, encrypted GPS (L2) does depend on key material being periodically loaded from a fill device for proper operation.
The keys are rotated periodically, so if somebody stole a decoder, it would only work until the key in memory expires. (Military GPS receivers also have a "zeroize" button to destroy the keys in case of imminent capture.)
I'm pretty sure Iran is full of crap. If they really had this technology, we'd either have drones raining from the sky, or they'd keep their mouths shut and save it for a special occasion. Also, this: http://www.popsci.com/technology/article/2013-04/6-most-absu...
For graceful failure, I imagine you would want something akin to radar and a topographic map so you can match your position, probably combined with lower-fidelity means of locating position (angle or location/angle of sun/moon/stars) to reduce the topographic map search space if starting from scratch. I would be surprised if the military didn't already have ways to mitigate lack of GPS, considering it's an obvious attack vector.
Terrain matching has been around for a long time, with the earliest systems dating from the 1950s, so that's definitely a possibility. This gets a lot of use on cruise missiles.
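The core of terrain matching (TERCOM-style) is just profile correlation: slide the measured radar-altimeter profile along a stored elevation map and pick the best-fitting offset. A minimal 1-D sketch with made-up terrain data:

```python
def locate(map_elevations, measured_profile):
    """Toy TERCOM-style fix: find the offset in the stored terrain map
    where the measured altitude profile fits best, by minimizing the
    sum of squared differences. All data here is illustrative."""
    m = len(measured_profile)
    best_offset, best_err = None, float("inf")
    for offset in range(len(map_elevations) - m + 1):
        err = sum((map_elevations[offset + i] - measured_profile[i]) ** 2
                  for i in range(m))
        if err < best_err:
            best_offset, best_err = offset, err
    return best_offset

terrain = [120, 125, 140, 180, 175, 150, 130, 128, 126, 160]
flown   = [180, 175, 150]          # what the radar altimeter saw
print(locate(terrain, flown))      # 3: the profile starts at map index 3
```

Real systems do this in 2-D with noisy measurements and distinctive terrain chosen in advance (it fails over flat water), but the matching principle is the same.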
Another fancy way to navigate without GPS is to use automated celestial navigation. The SR-71 had one of these in the 1960s, and it's also good for submarine-launched nuclear missiles. The hardware is able to sight stars even in the middle of the day (and not just the Sun, funny guy).
For commercial aviation, the typical backups are radio beacons such as VOR and NDB, inertial navigation systems, and good old dead reckoning plus pilotage (i.e. looking out the window).
Old 747s had a window on the ceiling of the cockpit for using celestial navigation.[0]
Also, the reliance on GPS and its inherent fragility has caused the US Navy to restart training in celestial navigation. In order to reboot the training regimen, the Navy is relying on Coast Guard instructors, since the USCG never stopped teaching it.[1]
Military aircraft mostly use GPS and INS for navigation. Cruise missiles do the same, as well as TERCOM [1], as you alluded to.
I recently spoke to old aircrew guys I worked with, and apparently they finally stopped teaching manual celestial nav in the late 90s. Automated celestial nav was, per my understanding, never too common, although SR-71 aircrews commonly used its system.
A variation on terrestrial celestial navigation was used to help orient the Apollo spacecraft en route to and from the Moon. To this day, space missions, such as the Mars Exploration Rover use star trackers to determine the attitude of the spacecraft.
I figured it had existed for quite a while, but didn't imagine it traced back to the 1950s.
> Another fancy way to navigate without GPS is to use automated celestial navigation.
I figured as much. I'm just under-informed in this area, and try not to state things as fact that I don't know as such. I did mention the stars, but it didn't occur to me they are visible during the day with the right equipment. :)
> inertial navigation systems
I'm aware these exist (due to some military fiction I've read), but that's the extent of my knowledge. I'm not aware of how accurate they are.
> good old dead reckoning plus pilotage (i.e. looking out the window).
I was thinking of systems that replace pilots, even if for short whiles, not supplement them, so discounted human correction while in flight.
Inertial navigation is interesting because the error starts out at zero and then builds up with time. Let it run for long enough without recalibration and you'll have no clue where you are. Other techniques tend to have steady error bounds. (Aside from dead reckoning, of course, which is basically just inertial navigation done by hand.)
As far as quantifying that growing error, Wikipedia says it's typically less than 0.6 nautical miles per hour. An airliner after a long oceanic flight could know where it was to within a few miles, good enough to reorient and find the destination airport.
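Using that 0.6 nmi/hour figure, the worst-case bound is a one-line calculation (function name and the 7-hour example flight are mine):

```python
def ins_error_bound(hours, drift_nm_per_hr=0.6):
    """Worst-case INS position error after `hours` without recalibration,
    using the ~0.6 nautical-mile-per-hour drift figure cited above."""
    return hours * drift_nm_per_hr

# A 7-hour oceanic crossing with no fix along the way:
print(ins_error_bound(7))   # about 4.2 nautical miles; close enough to
                            # reorient and find the destination airport
```

Contrast with GPS, whose error is roughly constant regardless of flight time, which is exactly why the two complement each other so well.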
Early efforts in autonomous navigation came out of a strong desire to blow up the Soviet Union, so cruise missiles and ICBMs and such are a good place to look if you're interested in early examples.
Inertial navigation systems are used a lot by submarines, since there's no GPS down there. I think it's pretty accurate, particularly if you combine it with other sensor data like gravitational field strength maps.
As part of a larger system yes, but on their own they're only accurate for a while, as they constantly accumulate error. They require input from other sensors (generally GPS) to provide accurate location.
They need recalibration after every 1-2 sorties for best results but they don't require input from GPS or other sensors. GPS synchronization largely eliminates the need for support staff to recalibrate. INS is incredibly accurate on its own, assuming pre-flight calibration.
Source: Worked on said GPS/INS systems in the military.
And what do you use to tell it where exactly it is during pre-flight calibration?
They "require" GPS input in the sense that if I handed you an INS that was initialized/calibrated with some precision (Oministar/RTK/P-code GPS/whatever) reference but had been running in the trunk of my car (or dangling from the collar of a feral cat) for N hours/days, there's no way you'd fly a plane with it without initializing it with some sort of position reference.
(If you came up with "sit it on some previously surveyed datum, and you don't need GPS" on your own, you get an A)
GPS already has an encrypted channel. It's a severe pain in the ass to work with computationally, so nobody does. Even the military.
It used to be that the clear channel was less accurate (even beyond the 'selective availability'), but signal theory got us to cm granularity without the computational expense of the military channel.
> I have no idea how you could provide that data in a secure way.
Just sign the signals with RSA. This depends on keeping the private key out of hostile hands, or allow key revocation and generation. But it's possible in theory.
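To illustrate the sign-then-verify idea, here's a textbook-RSA sketch with absurdly small toy parameters. This is not how you'd secure real GPS (no padding scheme, tiny modulus, and signatures add bandwidth the GPS signal doesn't have to spare); it only shows the asymmetry: anyone can verify, only the broadcaster can sign.

```python
import hashlib

# Toy textbook-RSA parameters (illustration only; never use sizes like this).
p, q = 61, 53
n = p * q                             # 3233
e = 17                                # public exponent
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent (requires Python 3.8+)

def sign(message: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)               # only the broadcaster knows d

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"sat=12 t=123456 pos=..."      # hypothetical navigation message
sig = sign(msg)
print(verify(msg, sig))               # True
print(verify(b"spoofed position", sig))  # False, unless the hashes collide mod n
```

Note that signing stops spoofed *data* but not replay of old genuine signals or plain jamming, so it's a partial fix at best.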
Somebody else posted a link to r/aviation where a Phenom 300 pilot gives additional details on this issue. It has happened a couple of times to this pilot apparently.
"When the Phenom loses both GPS 1 and GPS 2 it also gives you an AHRS FAULT 1 and AHRS FAULT 2. For whatever reason this situation can cause the autopilot and yaw damper to disengage apparently, thus at high altitudes may cause the aircraft to dutch roll. I have had this exact situation 3-4 times, generally near the white sands missile range though and my autopilot worked great the whole time."
AHRS = Attitude and Heading Reference System
So basically, it seems it's not a big issue, especially since his aircraft kept the autopilot on. But it might require some attention from the manufacturer.
If I was on an aircraft doing a Dutch Roll [0], I'd definitely feel discomfort!
(Note for readers: A "dutch roll" is when the plane rolls (when one wing dips down and the other rises up) and yaws (when the rear of the aircraft moves in one horizontal direction and the front of the aircraft moves in the opposite horizontal direction) at the same time.)
In principle, the system should be capable of dropping down to a stabilizing mode that relies on high-pass filtered rate and acceleration inputs. Even without GPS, the autopilot should+ provide dutch roll damping, short period damping, coordinated turning, and a few other stability control loops.
+ "should" means that it is technically possible, not that this particular flight control system does
The Phenom uses a GFC700 autopilot by Garmin. Interesting tidbit from its webpage:
>enabling it to fly fully coupled GPS-only LPV approaches into runways not served by ILS or other ground-based electronic approach aids.
A GPS-only approach is probably the issue here. If the runway it's using doesn't have ILS, then it only has the GPS fallback. If there's no ILS and no GPS, then I would expect issues.
This last comment in this forum points to GPS interference issues with the GFC700 that sound significant. It seems the GFC700 is susceptible to GPS issues, and because this is a corporate jet that may be landing on a privately owned airstrip, there may not be any ILS at those strips, or only unreliable ILS. Unreliable/non-existent ILS and a wonky GPS? Yeah, an autoland procedure could be fatal in those circumstances.
I think they also just exposed a vulnerability that others haven't thought of. Not sure how easy directed GPS jamming is, but pointing out a specific aircraft model is a bit scary.