No, autoland is pretty good. But there are two factors that cause pilots to land manually most of the time: 1) pilots think they are better at handling last-minute unexpected contingencies like wind shear or bird strikes (and they may well be right about that) and 2) pilots don't want to give the Powers that Be any additional data to suggest that they aren't actually needed to fly a plane at all. The fact of the matter is that there are no technological barriers to making aircraft completely autonomous, but pilots want the world to remain in denial about this for as long as possible for the sake of their job security. (FWIW, I'm an instrument-rated private pilot.)
> The fact of the matter is that there are no technological barriers to making aircraft completely autonomous, but pilots want the world to remain in denial about this for as long as possible for the sake of their job security.
Two words: anomalous conditions.
As we both know, pilots aren't needed in the vast majority of commercial flights. The plane is perfectly capable of taking off, navigating, avoiding collisions, and landing with minimal, mostly unskilled supervision.
We don't need pilots on planes... until something unexpected goes wrong, and we do.
That's true, but 1) anomalous conditions are very rare and 2) as often as not the pilot is the anomalous condition. Pilot error is the most common cause of aviation accidents.
EDIT: Notice that you had to go back 30 years to find your third example of a human pilot saving the day. And automation technology has improved a lot since then.
Most of the time when a pilot saves the day he just disconnects the autopilot and writes an incident report. There have been at least four in-flight upsets due to autopilot or sensor malfunction in the past few years that I can recall. Some of these caused serious passenger injury. Latest one I heard about here: http://www.aibn.no/Luftfart/Rapporter/13-18
Another one: http://en.m.wikipedia.org/wiki/Qantas_Flight_72
Devil's-advocate: fine, OK, we still need pilots, for the time being. But do they actually need to be on the planes?
In the case of two recent crashes (AF447 and this one, assuming pilot error turns out to be the cause as seems likely), it's safe to say that a computer would've easily done a better job than the pilots.
Would be interesting to hear Capt. Sullenberger's take on the question.
If you're going to use remote pilots to deal with anomalous conditions, you'd better have an emergency electrical supply that'll keep your remote control running for however long it'll take to glide at the plane's maximum range. You'll also need a communication system that'll work over the ocean, in poor weather conditions, while the plane is having difficulty maintaining a stable attitude (good luck maintaining the alignment of a satellite dish when the autopilot disconnects due to a sudden upset).
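To put rough numbers on "however long it'll take to glide" (a back-of-envelope sketch; the altitude, glide ratio, and speed below are generic assumptions, not figures for any particular type):

```python
# Ballpark endurance requirement for the backup supply. All figures are
# rough assumptions for illustration, not performance data for any aircraft.
altitude_ft = 38_000       # assumed cruise altitude when the engines quit
glide_ratio = 17           # roughly airliner-class lift-to-drag, engines out
glide_speed_kt = 230       # approximate ground speed on a best-glide descent

altitude_nm = altitude_ft / 6076.0            # feet -> nautical miles
glide_range_nm = altitude_nm * glide_ratio
glide_time_min = glide_range_nm / glide_speed_kt * 60

print(f"~{glide_range_nm:.0f} nm of glide, ~{glide_time_min:.0f} minutes minimum")
# -> roughly 100 nm and half an hour, before you add the approach and landing.
```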
On the other hand, if your onboard autopilot is in control of everything, you run the risk of autopilot bugs causing an accident - indeed, a software issue in the Airbus flight protections (which, unlike the autopilot, are always active unless a very severe system failure occurs, and can override the pilot's inputs) has caused an incident that resulted in passenger injuries: http://en.wikipedia.org/wiki/Qantas_Flight_72
While none of this is impossible, it's certain to be expensive, heavy, have its fair share of bugs at the start, and be quite difficult to get past the regulators. Not to mention passengers probably won't be very happy being on an unmanned plane, even if the system really was perfect. So it's easier in the end to just put pilots on the plane.
> If you're going to use remote pilots to deal with anomalous conditions, you'd better have an emergency electrical supply that'll keep your remote control running for however long it'll take to glide at the plane's maximum range. You'll also need a communication system that'll work over the ocean, in poor weather conditions, while the plane is having difficulty maintaining a stable attitude (good luck maintaining the alignment of a satellite dish when the autopilot disconnects due to a sudden upset).
(Shrug) Sorry, but given the complexity of the rest of the aircraft's systems, none of these engineering tasks sound impossible. 40 years ago we did this stuff on the freakin' Moon.
Although yes, I tend to agree that any move in this direction amounts to fixing something that isn't broken. (In the "freakin' Moon" example, the human pilot still had to handle the tricky part.)
> 40 years ago we did this stuff on the freakin' Moon.
...
> In the "freakin' Moon" example, the human pilot still had to handle the tricky part.
Since you're apparently well aware that the Apollo missions were conducted with manual docking and landing, why bring it up?
Especially given the relative simplicity of "final approach" on the Moon, with its lower gravity, zero wind, and slower pace, I don't understand what your point is.
What "stuff", though? The moon landings were conducted by a human brain in control of a (mostly[1]-)fully-functional machine. How does that translate to fully automated landing of an atmospheric aircraft already experiencing a malfunction?
[1] Actually, the Eagle did experience a computer malfunction that almost caused an abort to the landing before an engineer said they could ignore it. You want a computer to judge for itself whether it can still safely fulfill its mission while malfunctioning?
For further terror, consult the Apollo 12 lightning strike. When the computer is FUBAR, how do you propose it fix itself?
So, just to make sure I don't misrepresent your position, you're saying that the people who built and programmed this hardware ( http://www.space.com/16878-mars-rover-landing-sky-crane-guid... ) couldn't safely land a 777 in SFO (or, for that matter, an A320 in the Hudson) any day of the week. Is this the case?
What part of the skycrane experienced serious damage or malfunction that the software corrected for?
The whole point is we're not talking about when things go according to plan.
For that matter, how many times has the skycrane been used? What is its safety record? If you go ask the guys at JPL how many times out of 1000 they think it'll work perfectly, what will they say?
Assuming we could make such a system secure and unhackable as well as unjammable, why would we want to? To save some money? I would rather pay a little more to have the pilot sit in the same airplane and have his life on the line as well.
I just find it fascinating how obsessed people seem with getting rid of pilots. You don't often hear the same discussion about train or bus drivers, or doctors for that matter, even though we are just as close to having automated robotic surgery and expert systems have been around for a long time.
Might it be possible to have a "house pilot" at each airport that could be patched in during landings or takeoffs? Latency could be made pretty negligible with a radio link only a few kilometers long at most.
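Rough numbers on the latency claim (purely illustrative):

```python
# Propagation delay over an assumed 5 km link is negligible; any real latency
# would come from the radio gear and video encoding, not from distance.
C = 3.0e8                                # speed of light, m/s
link_km = 5                              # assumed worst-case link length
round_trip_us = 2 * link_km * 1000 / C * 1e6
print(f"propagation round trip: {round_trip_us:.0f} microseconds")
# ~33 us, versus tens of milliseconds for radio/processing and hundreds of
# milliseconds for human reaction time.
```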
That wouldn't do anything for patching in pilots for planes in the middle of the Pacific, but I wonder how often that would be necessary.
Another big reason for piloted planes is, seriously now: who here would get on a plane knowing that it was being flown without a pilot?
If it's hard for most of us, from varying technological backgrounds, to say yes, then it's going to be almost impossible to convince the more mature passengers.
This is one of the reasons why all planes look the same (they share the same configuration: low wing with engines below the wing) even though Airbus and Boeing keep churning out concepts. Would you get on a plane that looked even slightly funky?
I don't find it likely that pilots hand-fly approaches just to make a point. Isn't it more likely that this is codified in airline operating procedures?
As for completely autonomous, do you seriously think we have the technology available today to make an autopilot handle any conceivable situation without a human being present and ready to take over? (non-IR PP here)
It isn't "just to make a point". It's also to stay proficient, or to make the job less tedious. But whether to hand-fly an approach is always the pilot's discretion (as far as I know -- I'm a just a private pilot so I could be wrong about that).
And no, automation can't handle "every conceivable situation", but neither can humans. Furthermore, humans screw up more often than autopilots. Pilot error is currently the single biggest contributor to the overall accident rate.
Mind you, I'm not advocating fully autonomous aircraft. I like having a human in the loop, but that's in part because I am the human in the loop. It's far from clear that human pilots are a net win for safety.
Neither can humans, but humans can handle a lot more situations. We can take into account a lot of information and come up with creative solutions, while current computers need a set of humans to come up with and program potential scenarios beforehand. I think we'll eventually have AI that performs better in accidents (AI is improving and our brains are not), but it's a long way into the future, and not an imminent threat to pilot jobs as some would believe.
In the meantime, the solution to the human factor isn't to eliminate humans, but to improve training (which is already happening after AF447).
I like having human pilots primarily because I'm a programmer and I know how difficult it is to design robust computer systems. There's been one runway overrun and one serious in-flight incident with passenger injuries due to software design faults. Now try to design a system that makes sense of audio, video and smell in addition to the existing sensors, and not have it fail in some spectacular unforeseen way...
Actually, it is extremely rare for a human pilot to come up with a "creative solution" to an emergency. The vast majority of emergency responses are established procedures for which pilots train. Most of the time they're following a check list.
I can only think of a single example of a "creative" response to an in-flight emergency that actually helped, and that was UA232 in 1989.
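To be concrete about what "following a check list" means here, a toy sketch of an engine-failure-in-cruise response as plain data (items paraphrased and incomplete; no resemblance to any real aircraft's QRH is claimed):

```python
# The point of the sketch: the response is a fixed, ordered procedure that
# can be written down as data, not an act of improvisation. Steps are
# invented paraphrases for illustration only.
ENGINE_FAILURE_CRUISE = [
    ("thrust lever (affected engine)", "IDLE"),
    ("engine master (affected engine)", "OFF"),
    ("relight envelope", "CHECK, ATTEMPT RELIGHT IF APPROPRIATE"),
    ("single-engine drift-down altitude", "SET"),
    ("nearest suitable airport", "SELECT AND DIVERT"),
]

def run_checklist(steps):
    for item, action in steps:
        # A real system would command or verify aircraft state for each step;
        # here we only print, to show the structure.
        print(f"{item:40s} ... {action}")

run_checklist(ENGINE_FAILURE_CRUISE)
```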
Aircraft accidents are extremely rare in the first place, so that's a given. The question should rather be whether computers would do a better job than pilots in the same situation, and currently the answer is no. The actual flying of the airplane, which autopilots do today, is just a small part of the pilot's job, and automating the rest of the tasks a pilot performs is non-trivial.
Are you a pilot? Because I am, and I'm telling you from firsthand experience that you're wrong. Except for takeoff and landing, there's next to nothing I have to do. And the only reason I have to do the landing is because my plane is a small GA aircraft without autothrottle or autoland technology.
Yes, I am a pilot, albeit not an instrument rated one. Are you telling me you don't do anything? Do you bring a book to read instead of monitoring the instruments? You don't talk to ATC, you never have to make a decision regarding a route deviation? You never take into account weather information and make a decision underway on whether to press on or to find another place to land?
No, of course I'm not saying that. I'm saying all those decisions could be automated, not that they have been. Except for ATC communications, all the information I use to make in-flight decisions is already available in digital form. All the engine parameters are digital. I get en-route METARs via XM. I have a WAAS GPS coupled to the autopilot. The only thing standing in the way of making my aircraft completely autonomous is a throttle actuator and the right software. And no, writing that software would not be trivial, but neither would it be impossible.
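For example, the go/divert call off decoded weather is the kind of decision I mean (a toy sketch: the field names and minimums are made up for illustration, not any real product's interface):

```python
# Hypothetical sketch of an automated divert decision from already-digital
# inputs (decoded METAR values). Names, types and minimums are invented.
from dataclasses import dataclass

@dataclass
class DecodedMetar:
    station: str
    ceiling_ft: int        # lowest broken/overcast layer
    visibility_sm: float   # statute miles

MINIMUMS = {"ceiling_ft": 2000, "visibility_sm": 5.0}

def divert_needed(wx: DecodedMetar) -> bool:
    """True if destination weather is below the configured minimums."""
    return (wx.ceiling_ft < MINIMUMS["ceiling_ft"]
            or wx.visibility_sm < MINIMUMS["visibility_sm"])

wx = DecodedMetar(station="KXYZ", ceiling_ft=900, visibility_sm=2.0)
if divert_needed(wx):
    print(f"{wx.station} below minimums: pick an alternate from the GPS database")
```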
For normal operations, I think it's possible to make it _mostly_ autonomous today (TTS and voice recognition for ATC, and some heuristic METAR/weather radar analysis might work). I don't think we have the technology today to make such a system safe enough to not have a human ready as a backup. And for an accident scenario, I think it's completely impossible today, since we would need to integrate audio, video and smell sensors and AI software to rival humans in situational awareness. This would mean exceptionally complex software.
The way we have solved reliability in autopilots and FBW systems today is to make them as simple as possible and to give them sensible fallback modes (like the Airbus FBW removing stall protection when missing certain inputs), and even then we have had real-life accidents because of programming errors or design errors. So if you think pilot automation is mainly a question of politics, as you said earlier, I think you are being overly optimistic (which, of course, is not uncommon in the software industry).
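The pattern looks roughly like this (a toy sketch; the threshold, names and behaviour are invented and bear no resemblance to actual Airbus voting logic):

```python
# Illustration of "keep it simple, with sensible fallbacks": vote three
# redundant angle-of-attack readings, and drop envelope protection when the
# sources can't be reconciled. All numbers and names are made up.
from statistics import median

DISAGREE_LIMIT_DEG = 3.0   # hypothetical tolerance between redundant sensors

def select_aoa(readings):
    """Return (aoa_deg, protections_available) from three redundant readings."""
    mid = median(readings)
    agreeing = [r for r in readings if abs(r - mid) <= DISAGREE_LIMIT_DEG]
    if len(agreeing) >= 2:
        # Two or more sources agree: mask the outlier, keep full protections.
        return sum(agreeing) / len(agreeing), True
    # No agreement: pass the median through, but revert to the simpler mode.
    return mid, False

print(select_aoa([4.1, 4.3, 4.2]))    # all agree      -> (~4.2, True)
print(select_aoa([4.1, 4.3, 19.8]))   # one bad source -> (~4.2, True)
print(select_aoa([1.0, 9.0, 17.0]))   # no agreement   -> (9.0, False)
```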
I think a better approach to removing pilots is to see the strengths in both computers and humans and design systems where the advantages of both are maximized.
If you are interested in reading more on software safety, http://sunnyday.mit.edu/ is a good starting point.
Is it standard procedure to land A320s in the Hudson? Prior to 2009, how likely do you think it would have been for someone to have programmed an autopilot with such a capability?
Do pilots typically glide unpowered 767s to landings at abandoned military airfields? (A result, incidentally, that could not be reproduced by other crews in simulators; are you sure it's the crews, and not inadequate programming? Still trust the computer?)
Are you willing to bet your life that the computer on a 737 that looked like this[1] could find its way to a safe landing?
> Is it standard procedure to land A320s in the Hudson?
Yes, ditching an airplane that has lost all of its engines is a standard emergency procedure.
> Prior to 2009, how likely do you think it would have been for someone to have programmed an autopilot with such a capability?
What difference does that make? I'm not saying it's a good idea to take pilots out of the cockpit right now, I'm just saying it's a lot more plausible than most people think. The main limiting factor is politics, not technology.
> Are you willing to bet your life that the computer on a 737 that looked like this[1] could find its way to a safe landing?
Sure, why not? Losing the top of the fuselage looks dramatic, but it probably doesn't change the flight characteristics all that much. Also, very good adaptive control algorithms exist that could almost certainly handle this.
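For a flavour of what "adaptive control" means, here's a textbook-style toy: an MIT-rule model-reference adaptive controller for a first-order plant whose control effectiveness has dropped (say, from damage). It is a minimal sketch of the idea, nothing like a certified flight-control law:

```python
# MIT-rule gain adaptation for a first-order plant with unknown control
# effectiveness. All constants are arbitrary illustration values.
dt, T = 0.01, 40.0
a = 1.0           # assumed known plant pole
k_true = 0.4      # actual control effectiveness (degraded by "damage")
k_model = 1.0     # effectiveness the reference model expects
gamma = 2.0       # adaptation gain (a tuning choice)

y = ym = 0.0      # plant output and reference-model output
theta = 1.0       # adaptive feedforward gain, starts at the nominal value

for step in range(int(T / dt)):
    t = step * dt
    r = 1.0 if (t % 20) < 10 else -1.0     # square-wave reference command

    u = theta * r                          # control law: rescale the command
    y += dt * (-a * y + k_true * u)        # degraded plant (Euler step)
    ym += dt * (-a * ym + k_model * r)     # reference model (desired response)

    e = y - ym                             # tracking error
    theta += dt * (-gamma * e * ym)        # MIT rule: push theta to drive e -> 0

# theta converges toward k_model / k_true, restoring the intended response.
print(f"adapted gain: {theta:.2f} (ideal: {k_model / k_true:.2f})")
```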
Also, you're cherry-picking your anecdotes. There are plenty of examples of flights that would almost certainly have ended safely but for some stupid mistake the pilot made. Controlled flight into terrain accidents, for example, are much more common than heroic rescues, and they could be entirely eliminated if you took human pilots out of the loop.
> Yes, ditching an airplane that has lost all of its engines is a standard emergency procedure.
See my reply to krisoft. You're changing a specific situation into a general one. You can't just say "ditch the plane if you lose power", you have to anticipate every possible variable that may influence whether that is actually the correct course of action, and the manner in which it is carried out. And you have to do that before it ever happens.
> What difference does that make?
Unanticipated situations are unanticipated situations. We don't have AI. We haven't replicated the ability of a human being to adapt on-the-fly. There is no reason to believe we will in the near future.
> Controlled flight into terrain accidents [...] could be entirely eliminated if you took human pilots out of the loop.
The problem I have with this line of reasoning is that so much more could be done to prevent them even without taking the pilots out of the loop, and yet it's not. That does nothing to give me confidence that the right thing will be done when pilots ARE taken out of the loop.
Edit: What it comes down to is this. All you're ultimately doing is completely and irrevocably substituting the unalterable judgement of somebody in a completely different time, place, and circumstance for the adaptable judgement of the person on the plane. When you do that, what you're really saying is "I refuse to give people in a scenario I didn't think about the chance to survive". I can't accept that.
CFIT used to be common, but it's not anymore on commercial flights, thanks to GPWS. There has not yet been a CFIT accident on an airplane equipped with EGPWS.
Emergency landing on water is "standard procedure". You might have the impression that it's something which was creatively invented by the pilot at a moment's notice, but it's really not. The engineers designed the airplane with the capability, included advice and a checklist in the operating manual, and even put a button labeled "ditch" on the dash! Can't really see why they could not make such an autopilot program.
You completely misunderstand. I wasn't talking about a generic "emergency landing on water".
A plane suddenly loses power in a highly urbanized area. It's very near multiple airports. Prior to 2009, I have no expectation that an autopilot would have been equipped to judge whether it should attempt to land in a heavily-trafficked river. (I further have no expectation that it would have been equipped to handle the highly plausible scenario of one or more small boats being in the way.)
It's not a question of landing on water, it's a question of making the decision to do so and where.
Humans have the judgement gained by a lifetime of learning. Computers only have what it occurs to us to put into them while we're sitting safe and sound in our little offices pushing little buttons that don't threaten our lives.
I can't imagine anyone stepping onto an autonomous passenger aircraft for a long time, not until long after cars and smaller commercial aircraft are regularly driven or flown by computers only. And even that seems like a stretch. I don't see how pilots' use or non-use of some automated systems is going to influence popular opinion on that one way or another. But maybe there's a perception among pilots that it will.
The fact of the matter is that most commercial jetliners today are almost fully automated. It is only takeoff and landing that are still done by hand. No one ever hand-flies a jetliner during cruise except in dire emergencies. In fact, at high altitudes, where the plane is flying in the "coffin corner" of the flight envelope (http://en.wikipedia.org/wiki/Coffin_corner_(aviation)), hand flying can be extremely dangerous.
Getting in a Google self-driving car is one thing, where you still have the ability to take over control or apply the brakes yourself. But how many passengers would be willing to get on a fully self-piloting aircraft? I think not too many, at least at first.