A common phrase in aircraft cockpits nowadays is "What the heck is it doing now?" as pilots have migrated from actually flying the plane to simply being glorified systems managers.
While planes have become so, so, so much safer because of all this automation, pilots' uncertainty about what the autopilot is doing is a major concern nowadays, and has been a factor in several accidents.
There are very interesting HCI challenges around properly communicating to the pilot/driver "what the heck it is doing" and clearly communicating just how much control the human has or doesn't have at any given point.
This "announcement" certainly doesn't inspire any confidence that they have really thought this through deeply enough (I think they probably have, but it should be communicated that way). As a huge Tesla fan, I can't help but feel like I need to hold my breath now and hope something terrible doesn't happen because of this and end up leading to even more regs, setting us back on the path to full car automation.
I like 99% Invisible's content, but Mars' voice is my least favourite part of it. It sounds a bit smug and is not always clearly audible in the car. I much prefer Freakonomics.
More like a dog to gently nudge the pilot if he is relying on a system that isn't working. Most issues I've read about happened when the systems either weren't engaged properly or automatically disengaged because their operating parameters weren't met anymore, suddenly forcing the pilots to take control.
Example: Air France 447, where (presumably) the airspeed sensors were blocked by ice, which led the autopilot to disengage. There is also the theory that the pilots then made some of their mistakes based on the belief that the avionics would stop them from putting the plane into an unsafe state, not realizing that the system wasn't able to do so with missing information. (I hope I'm presenting this correctly, but that's what I remember reading afterwards.)
There is also the training aspect: if a system takes care of something 99.9% of the time, the pilot is less experienced for the 0.1% where it doesn't. There is a reason the safest airline pilots often fly far smaller aircraft as a hobby and build some instinct for manual flying there.
I think the main problem with aircraft is lack of redundancy. They still count on being able to hand control to the pilots (e.g. the 0.1% of the time you mentioned), when I think they should try to go all the way as much as possible. For example, if they had 4 redundant pitot tubes in different places and other redundant methods to measure altitude and airspeed, the AP would never have disengaged. For a large plane those sensors are basically free, and it's very easy to tell whether they're working correctly by cross-verification/calibration (i.e. maintenance/reliability is easy). They just don't go all the way because they're still on the paradigm of "well, if a couple of pitot tubes and/or other systems fail, we can just hand it to the pilot".
>> I think the main problem with aircraft is lack of redundancy. They still count on being able to hand control to the pilots (e.g. the 0.1% of the time you mentioned), when I think they should try to go all the way as much as possible.
I'm not an airliner pilot, but I have the impression almost everyone grossly underestimates how often a human pilot is still required to safely fly a commercial airliner. Yes, the autopilot may be able to handle 99% of flights without intervention, or even 99.9% of flights, but simple math shows that this still means hundreds if not thousands of flights every day where the pilot needs to take action, and that any individual pilot will get into a situation that the autopilot fails to handle probably once or twice a year. Likely a lot more often.
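For what it's worth, a back-of-the-envelope version of that math (the flight and leg counts here are assumptions for illustration, not sourced statistics):

```python
# Back-of-the-envelope only: both figures below are assumptions, not sourced stats.
flights_per_day = 100_000          # rough worldwide commercial flights per day
autopilot_success_rate = 0.999     # fraction of flights needing no intervention

interventions_per_day = round(flights_per_day * (1 - autopilot_success_rate))
print(interventions_per_day)       # ~100 flights per day still need a pilot's hands

# For one pilot flying an assumed ~500 legs a year:
legs_per_pilot_per_year = 500
interventions_per_pilot = legs_per_pilot_per_year * (1 - autopilot_success_rate)
print(round(interventions_per_pilot, 1))   # ~0.5 interventions a year at 99.9%
```

At a 99% rather than 99.9% success rate, both numbers go up tenfold, which is where the "once or twice a year, likely more" estimate comes from.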
Large parts of the world, by the way, have difficult weather conditions, frequent diversions, or airports with spotty or unreliable ground systems; in those areas the need for a human pilot will be several times higher than once every hundred flights. There are ample examples of plane crashes that were caused by automated systems rather than pilot error (I don't remember the exact location and date, but I know that e.g. not very long ago a plane crashed because the ILS beacon at the airport was malfunctioning). In other cases the level of automation enabled lousy pilots with bad training to fly the airplane (case in point: the Asiana crash at SFO), which IMO is not 'pilot error' because these people should not have been flying the plane in the first place.
All this makes me think the solution to improve airliner safety even further is not more automation, but better pilot training.
As for autonomous driving, beyond AI-assisted highway cruising I don't believe in the concept at all, and my bet is that in 25 years we'll have realized we wasted most of that time trying to build fully autonomous vehicles. Limiting the possibilities for driver error seems like a much better investment (e.g. automatic cruise control, automatic emergency braking, etc).
Modern aircraft have excellent redundancy. Commercial aircraft already typically have 3 Pitot* tubes and 3 static ports. They need unobstructed airflow and are placed accordingly. In this kind of aircraft they do cross-verify each other as well, but if they all are reading different values, there's not much to be done. Conditions were such on AF447 that they all experienced some amount of icing until the aircraft descended enough. I don't see how having one more pitot tube is the proper response.
Well, clearly having altitude/airspeed readings is critical to autopilot function, so some kind of redundancy should be put in place. Not necessarily more pitot tubes specifically, but some kind of solution: heating/deicing the tube inlets, placing them better, etc., guaranteeing in some way that the chance of all measurements being unavailable is astronomically low.
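As a rough illustration of the cross-verification idea: a classic triple-redundant setup takes the median of the three readings and flags disagreement. This is a toy sketch only; real avionics voting logic is far more involved.

```python
# Toy triple-redundant sensor voter (illustrative; not real avionics logic).
def vote(readings, tolerance):
    """Return (value, healthy): the median reading, plus whether all
    sensors agree within `tolerance` of each other."""
    ordered = sorted(readings)
    median = ordered[len(ordered) // 2]
    healthy = (ordered[-1] - ordered[0]) <= tolerance
    return median, healthy

# Three airspeed readings in knots; one sensor has iced over and reads low.
value, healthy = vote([271.0, 270.5, 140.0], tolerance=5.0)
# The median (270.5) masks the single failure, but `healthy` is False,
# so the autopilot/crew knows a sensor disagrees.
```

The catch, as the parent comment notes, is AF447-style common-mode failure: when all three sensors ice over at once, no amount of voting recovers a good value.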
"For example, if they had 4 redundant pitot tubes in different places and other redundant methods to measure altitude and airspeed, the AP would never have disengaged."
There's a lot to be said for that. The pilots didn't know their altitude and airspeed either. They thought they were overspeeding when they were stalling.
Some military aircraft, mostly the stealth ones with terrible aerodynamic stability, need that sensor data to stay in the air at all. If they lose the sensor data or the stability control system, the only option is to eject. So they tend to go in for more sensor redundancy. There's certainly no reason that large transport aircraft can't have more sources of basic attitude/altitude/airspeed info.
That sounds like a UI issue. Like, it shouldn't be too hard to put a blinkenlight on the control that indicates it's under autopilot control (or the absence indicates it isn't).
There was also a stall warning that shut off when the pilot pulled up, because it no longer had reliable info; the pilot assumed pulling up was somehow stopping the stall.
Wasn't there also the problem of the other pilot pushing down in panic, the sidestick being so tucked away that nobody noticed, and the fact that the plane averaged the two inputs?
There is no "basically" in an aircraft accident. If you wish to make such a statement, please explain what actually happened on this flight (icing of the pitot tubes, deactivation of the normal flight control law for the fly-by-wire system, flight at night over the ocean and above a storm, a stall outside the specifications of the stall warning system). It's a matter of respect for the crew.
There's an old joke about a pilot lost in a deep fog who shouts out the window "Where am I?", hears in response "You're in an airplane!", and immediately realizes that he is over the Microsoft tech support building, because no one else could be so accurate and so useless at the same time.
That's what you're doing here. Saying "the copilots crashed it" teaches us nothing; we need to know why they crashed it, what cues they misunderstood and what skills they lacked, so we can keep it from happening again.
There is more to being a pilot than the stick-and-rudder of flying the plane. Judgment calls such as whether to attempt a landing under less-than-ideal conditions are far more important.
Direct control vs. abstract control vs. intelligent interaction has been a long-running topic in HCI. For the most part it has evolved in funny ways (e.g. skeuomorphism used metaphors from direct control to execute abstract control).
I think the design language will evolve slowly with the users at a speed roughly related to the adoption curve.
It may tell users everything that the car is deciding to do now, and as confidence in the system increases with adoption, it will tell them less and less.
That is why the more advanced self-driving-car researchers are working on the harder problems of getting abstract and intelligent interaction with a car working. The real market problem is being able to just tell your car "take me to the store", rather than just getting it to drive down a straight lane.
Cue the James Bond movie subplot where an evil villain takes control of his Tesla/whatever autonomous car, and he has to use a combination of hacker shell commands and sheer brute strength to save both himself and the Bond girl he was seducing while the car chauffeured him back to his luxury hotel suite...
In the newer Ghost in the Shell series, an antagonist writes a virus that takes control of all the cars in the city, causing a massive physical/real-world denial of service.
There are videos on YouTube of Bosch fitting their automated driving technology into a Model S [1] and performing demonstration runs on a private road. The software shown in the video displays quite a lot of information about what the sensors consider relevant from the car's surroundings.
She seems barely 49% confident. I understand she doesn't want to risk a demo effect in a moving vehicle with 2 guests, but it's a bit disturbing to see her hands constantly ready to grab the wheel.
This is so clearly a fake demonstration. Watch the way the "automation" follows the probe car. It's so clearly leading it on and the woman is just going through an act.
In another video a different pair of accompanying passengers record a similar sequence of events, so the demonstration itself is probably scripted and rehearsed. But I wouldn't rule out that there is an actual piece of software working under these (controlled) conditions.
>While planes have become so, so, so much safer because of all this automation, pilots' uncertainty about what the autopilot is doing is a major concern nowadays, and has been a factor in several accidents.
If the % of accidents caused by pilot interference with a working system is greater than the % of accidents caused by the system malfunctioning and the pilot ignoring it, people will still be against not allowing pilots to interfere. Even when it causes more accidents...
There's something about humans trusting humans more than machines that I don't fully understand. Systems can make mistakes, but the number of human mistakes is often so much greater that it's absurd humans are trusted at all, and yet people will side with the human over the machine.
Humans will always want human oversight - even when that oversight does more harm than good once automation reaches a certain threshold...
Special note: I'm not aware of the avionics data on pilot interference with the system vs. failure of the pilot to intervene. So maybe this example doesn't hold very well for avionics...
A human can make judgement calls in unexpected situations. Objectively, a human is more likely to see falling debris and react to it than a self-driving car (of course depending on the programming and sensors).
Or when the road is slick and you are doing 5 under, and you see someone pass you doing 5 over with a car following it too closely... you have a better understanding that there's likely to be an accident ahead... (I've seen this happen several times.) A computer most likely wouldn't predict that.
Unless ALL cars are automated, it is a must that a driver be able to take over quickly.
> A human can make judgement calls in unexpected situations.
A properly programmed machine can behave smarter and faster, and it also knows its limitations, so it can account for them. "Judgement calls" are not needed because the machine always keeps good judgement.
> Objectively, a human is more likely to see falling debris and react to it than a self-driving car (of course depending on the programming and sensors).
Very much depending on programming and sensors, but from some point on, I'll always be betting on programming and sensors.
> Or when the road is slick and you are doing 5 under, and you see someone pass you doing 5 over with a car following it too closely... you have a better understanding that there's likely to be an accident ahead... (I've seen this happen several times.) A computer most likely wouldn't predict that.
The computer will track all vehicles, their velocities, and their past observed movement history, perceive road conditions that you can't due to the limitations of Mark I Eyeballs, and push all that data through an inference system capable of staying rational all the time, without being affected by "stress" or "surprise". Solutions will be computed literally faster than the blink of an eye. The machine will see the phase space of the road-cars system and be able to navigate it to safety.
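The simplest version of that kind of tracking is constant-velocity extrapolation plus a time-to-collision estimate. A toy 1-D sketch (the numbers and the single-lane model are simplifying assumptions, nothing like a production planner):

```python
# Toy time-to-collision estimate on a straight, single-lane road.
def time_to_collision(gap_m, own_speed_mps, lead_speed_mps):
    """Seconds until we reach the car ahead, assuming both hold speed.
    Returns None if we're not closing the gap."""
    closing_speed = own_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return None  # the gap is growing or stable; no collision predicted
    return gap_m / closing_speed

# Doing 30 m/s with a car 60 m ahead doing 25 m/s:
ttc = time_to_collision(60.0, 30.0, 25.0)  # 12 s to impact unless someone reacts
```

A real system would run something like this continuously for every tracked object, with uncertainty estimates instead of point values, which is exactly what lets it stay "rational" under surprise.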
> Unless ALL cars are automated, it is a must that a driver be able to take over quickly.
The driver needs to never take over. A dog should be placed, trained to bite the driver if he tries to touch the controls. Better yet, remove the controls.
The machine knows its limitations. The right way to avoid an accident is not to perform stunts based on intuition, it's to keep the entire system (of cars and the road) in a stable and known state, and navigate that state to safety.
People seem to get machines really wrong. Machines today are limited in their creativity. But they are orders of magnitude better than humans in getting things right. On the road, we need more of the latter than the former.
The thing that humans can do that machines cannot is react to something they never expected and haven't been programmed for. A human can generalise or invent a solution to a new situation in a way a computer simply cannot.
You are correct that in the vast majority of situations a computer will outperform a human. I've avoided accidents by luck more than judgement, guessing that the car would be able to do what I was asking (whereas the autopilot would know at all times and react in milliseconds).
All the world needs to accept is that at some point someone will die because of an autopilot error, and that's OK because it's a net lower number of deaths than the same number of humans driving.
I have yet to understand how a self-driving car deals with a cop directing traffic. Or even a construction worker holding traffic back from a reversing backhoe. But maybe I'm just not aware of the genius of the technology yet.
From my limited and slightly hopeful understanding, they don't try to understand much more than 'something is moving toward our planned path and that is no good'. As long as the car avoids running over the cop or the worker, it's OK. Every decision above that is optional.
They will (eventually have to) recognize hand signals from construction workers and police officers. With the current generation, they won't run over the person, but you should probably take over and follow the instructions yourself.
Our best image-recognition tech cannot tell the difference between a zebra and a sofa in zebra print. I think "reading the hand signals of a policeman/workman on the road" is far, far beyond what we can currently do. Or rather, I'm sure we can make a solution that will work right now, in perfect conditions. I'm sure it will fail in the dark/rain/snow, or if the worker is making small gestures near his hip rather than moving his hands high up in the air. There's just so much uncertainty in driving that I think for cars to be absolutely 100% automated, where you can genuinely go to sleep while the car drives, the roads would need to be 100% automated as well, with beacons everywhere. That will probably happen, but it's definitely not "3 years away".
Zebra vs. zebra-patterned sofa is the kind of problem designed to be hard for image-processing algorithms. From a practical point of view, however, you could just require that policemen/construction workers wear specific patterns on their uniforms, or even special IR-reflective buttons/threads. That would solve something like 95% of cases, and by the time self-driving cars become the norm, we'll have figured out at least some specific algorithms for that very purpose that work in the general case.
As a Tesla enthusiast, would you buy a Tesla that wasn't capable of breaking the speed limit? Perhaps one that couldn't ever under any circumstances move faster than 70mph? That doesn't require fancy tech. It could be done today in an instant.
That's where I see the real sticking point for automation. Driving isn't about getting from A to B, nor is safety the top priority. If it were, there wouldn't be such a thing as a V-8 (or whatever the electric equivalent is). I find it very ironic that a performance car like the Tesla might promote "the path to full car automation". You think the gun debate is tricky? Try telling people they cannot do 51 in a 50 anymore.
> Driving isn't about getting from A to B, nor is safety the top priority.
Yeah, this is a huge problem with drivers today. 99% of driving is absolutely about getting from A to B in as safe a way as possible; anything else is reckless endangerment of the lives of others. But people grow up dreaming about fast cars and freedom and adrenaline, then they get their licenses and can't confront the boring reality. That's the very reason we need self-driving cars on public roads. Driving for fun needs to be separated out and put somewhere else.
So my friends and I riding motorcycles through the mountains on a sunny afternoon are committing reckless endangerment?
How about sightseeing buses? Or bicyclists riding purely to stay in shape? Or people learning how to drive? Or cops patrolling neighborhoods? Or ice cream vans? There are lots of perfectly reasonable uses of roads that have little to do with A-to-B. The elimination of "fun" in the name of safety is a very uphill battle.
Many cyclists do ride on public roads for fitness reasons. Beginners might ride in local parks or smth, but I'm pretty sure most cycling hours are spent on the roads, since people who ride more put in significantly more hours and tend to ride roads more. There's offroad cycling, but in many cases it involves cycling on unpaved public roads or other public areas not reserved for bikes only.
I myself spend more hours on public roads on a bicycle than driving a car.
In my personal and not so humble opinion, you shouldn't ride on public roads for fitness reasons. If you need to get from A to B, like from home to work, and you want to do it on a bike, fine. If you want to add some fitness routine to it, it's fine too (your lungs would probably disagree) - but you have to focus primarily on staying safe and not endangering others.
But pure sports? There are parks for that.
Don't get me wrong, I'm all for reclaiming cities for pedestrian and bike travel and creating a maximally energy-efficient public transport system (ironically, the best idea would probably be self-driving, publicly-owned electric cars forming a PRT network). But in the current situation one has to stay pragmatic.
There are parks? Where exactly? Riding anywhere but on the road is dangerous. Most reasonably fit road cyclists can average 18-19 mph, and possibly much faster on a slight decline.
We should be promoting cycling, not making it even more difficult. Self-driving cars can make cycling significantly safer for everyone. The most dangerous thing on the roads right now is the human-controlled car.
If I did "pure sports" at parks, many parents with children would not be so happy about that.
I love to go on long bike rides, like 4 or 10 hours long. Parks are not an option for that, while backroads are awesome. Once out of town, I usually meet a car only once in a while. And while in town and suburbia, I guess I qualify as an A-to-B commuter. Even out of town it's sometimes sort of an A-to-B commute, because I want to get to some sightseeing spot or a fancy cafe in the woods or smth; I just happen to cycle instead of drive. Or sometimes I dispatch my family in the car and go by bicycle myself. It's still A-to-B, isn't it?
Saying what is and what is not a "proper road use" is a very slippery slope. Is it OK to drive to get to a grocery store? Is it OK to drive to some spot in the middle of nowhere for sightseeing? Is it OK to drive for few hours to bbq in an unseen place? Is it OK to drive to go fishing in a nicer lake instead of the one nearby? Is it OK to drive to see friends if you can video call them instead?
You want pragmatism, but what you're giving cyclists is the exact opposite of a pragmatic situation. Cyclists are being extremely pragmatic when they cycle on regular roads...
In most places a bicycle is given specific rights to use the road.
Generally such laws include things like:
1) May use the turn lanes when turning.
2) Must ride to a specific side of the road unless: you can keep up with the speed of traffic, there is not enough safe distance for a car to pass you, or it is not safe to ride on the side of the road. (These rules are generally true of any vehicle that drives slower than general traffic.)
3) Protections to the bicyclist. Either in the form of "vulnerable traffic" laws, or specific bike protection laws.
4) There are even places with laws preventing bicyclists from using the sidewalk.
5) About the only place a bicyclist isn't allowed to ride is on high speed traffic areas like a freeway/highway/etc.
6) Protections for the bicyclist against negligent drivers such as mandating safe driving distances, when to pass, and protecting against specific activities that may endanger the bicyclist.
I do agree that a bike path is certainly much safer (and perhaps more scenic) for fitness, but many people are interested in traveling to a specific location. (Ex, the beach or a park. Riding along the coastline. Etc.)
The main problem with biking on the road isn't that you are doing something wrong... but that drivers do not respect you. Tailing too close, not giving you the right of way when you have it, etc.
Breaking the law ≠ recklessness. You are probably thinking of negligence per se, which is something less than recklessness.
And I wonder why you assume that a bunch of motorcyclists riding through the mountains would be breaking any laws. I know a group of lesbian Harley riders who never go faster than 50 who would take issue with any assumption that riding a bike implies illegality. For them it is about being seen and showing pride. I did a ride for burn victims a few years back. The kid riding on my bike was all smiles even though I doubt we broke 30 the entire way. He just liked the wind, a bit like my dog when she hangs her head out the window. Riding for fun does not mean riding for speed.
I didn't mean to imply that driving a motorcycle or a bike == illegal, or == driving for speed. As for breaking the law - traffic laws exist to both protect people from deadly accidents and ensure some predictability on the road, the latter of which is needed because humans suck at dealing with surprises, especially over a longer period of time.
What I mean is - going on a motorcycle trip on a scenic route? Sure, it's fine (a self-driving motorbike would probably be even safer though :)). The problem really starts when some people put fun in front of safety and e.g. start speeding.
TL;DR: have as much fun as you want, but not at the expense of safety. Want to have additional fun coming from doing dangerous things? Go do it somewhere where you endanger only yourself (and those who consciously decided to participate in such activities). Racetrack, private roads, whatever.
Lol, a self-driving motorcycle isn't anywhere near possible atm. Driving a car is to riding a motorcycle as walking is to ballet. There are all manner of strange physics (Google "countersteering"). Add to this the vast differences in threat profiles, the regular need for evasive maneuvers, serious judgment calls re emergency braking in corners, the weight-shifting of the rider, the potential 'bail out' decision, the lowside v highside decision (laydowns) ... I haven't heard even a passing joke about an autodrive two-wheeler. Even automatic transmissions are near fantasy beyond small 2-gear scooters.
Any sudden, unexpected movement by an autodrive motorcycle would probably see the rider thrown free, or at least result in a weight shift great enough to bring down the entire package.
I meant it as a little joke, but since you bring it up: no, I actually think an autodrive motorcycle is not only feasible, it's not much harder than a self-driving car. Why?
Because all those "ballet-like" things are basic feedback control problems, the kind you learn about in Control Theory 101 in college. You can pretty much convert the problem of steering a self-driving motorcycle into the problem of a self-driving car by adding a module that accepts car-like inputs and translates them into dynamic balance control. We've solved the basics with Segways, which are smart-high-schooler-level electronics projects.
Also, generally, whatever you can do on your vehicle a machine can do the same using the same control inputs, only better. Manual gear shift included.
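To make the "Control Theory 101" point concrete: keeping something upright really is a textbook feedback problem. Here's a toy PD controller balancing an inverted pendulum, the simplest stand-in for a bike's lean dynamics (all gains and constants are invented for illustration; a real motorcycle controller would be vastly more complex):

```python
import math

# Toy PD loop stabilizing an inverted pendulum (unit inertia assumed).
# Gains kp/kd and the 1 m pendulum length are made-up illustration values.
def simulate(theta0, steps=2000, dt=0.005, kp=40.0, kd=8.0, g=9.81, l=1.0):
    theta, omega = theta0, 0.0          # lean angle (rad) and angular rate (rad/s)
    for _ in range(steps):
        torque = -kp * theta - kd * omega          # PD feedback control
        alpha = (g / l) * math.sin(theta) + torque  # gravity destabilizes, control restores
        omega += alpha * dt                        # simple Euler integration
        theta += omega * dt
    return theta

# Starting 0.2 rad off vertical, 10 simulated seconds drive the lean back to ~0.
final_lean = simulate(0.2)
```

The rider's weight shifts show up as disturbances on top of this loop, which is the genuinely hard part the replies below get into.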
I take it you haven't ridden a motorcycle. I can ride one, at speed and around corners, without touching the controls. Remember that the rider may represent 30+% of the vehicle mass. Moving your weight around on the bike has as much control over direction as the handlebars do. Unless you are going to strap the rider into a seat (i.e. make it a car), attach him to a hydraulic arm, or install a 200 lb gyro, no machine can take directional control.
This all comes to a head at the transition point of countersteering, where the controls reverse around 20ish mph. The rider's balance controls whether turning the handlebars right means right or left. If the rider isn't in sync with the autodrive, everyone is in the ditch.
I'm genuinely curious, as an occasional motorcycle enthusiast myself, whether it's physically possible for the machine to counter the movements of the rider, who represents a large portion of the vehicle mass. My gut feeling, based on the Segway, is yes, it's possible. But I'd love to hear any analyses to the negative.
It's probably possible to some extent. It's like having an anxious pillion rider on your bike. I remember when I first took my little sister along on my motorcycle, she'd lean the wrong way in the curves because she was scared the bike might fall over or something. It made controlling the bike harder, but not impossible.
(That experience taught me to always tell people to lean into the curve if they've never sat on a motorcycle before)
My guess is, you could put a 1 m vertical pole behind the rider with some mass attached to the top end and motors controlling it at the bottom. That should give you enough angular control to compensate for whatever the rider is doing.
Given the close interaction between the rider and the bike, I'm not sure how comfortable such a vehicle would be. It may turn out to be nausea-inducing.
I wonder if a "feet first" design would be one solution. The lower centre of gravity should be easier to stabilise. I don't know whether the riding style of an FF makes the same use of the rider shifting their weight as a regular bike does.
> > So my friends and I riding motorcycles through the mountains on a sunny afternoon are committing reckless endangerment?
> If you're breaking the law then yes, you're reckless, period. If not, then enjoy it all you want.
The speed limit (and various other laws) is very much all or nothing. Riddle me this: which of the following is safer to do on the same road with a speed limit of 50:
a) Driving a brand-new Porsche with brand-new tyres on a sunny day, doing 55
b) Driving a car from the 70s with tyres that are almost worn out (but still in the legal range), at night, in heavy rain, doing 48
Now consider that a) is illegal and b) is legal.
Breaking a law should not be a binary matter; making it a 0/1 choice makes the world worse for everyone, because to be completely safe the restriction has to account for the worst case, and that means the vast majority will be artificially limited and feel like the law is needlessly overreaching. This is not a good situation to be in.
We can barely handle a binary law. Humans don't have enough cognitive power to handle fuzzy laws. If the cars in both a) and b) were self-driving, we could consider flexible traffic laws, because the cars would be accurately aware of exactly how dangerous the situation is. Humans are not smart enough.
His point was that you can't just say "they're reckless if they're breaking the law", and you certainly can't say "they're only reckless if they're breaking the law". There is a lot of nuance to it, and in some situations breaking the law is even required to prevent a catastrophe, and it's reckless not to.
It's a temporary problem, though. Speed limits are set to try to keep people safe by enforcing a maximum speed. Once computers drive the cars, the speed limit can be cranked up, because reaction times drop drastically and distracted drivers are nonexistent. I expect to see lots of changes in driving laws once automation takes over, for exactly this reason.
"Once computers drive the cars, the speed limit can be cranked up, because reaction times drop drastically and distracted drivers are nonexistent."
That doesn't mesh with areas where robots have already taken over. The robots moving freight around shipping terminals don't move any faster than humans driving trucks. The robots I see delivering drugs in hospitals don't move faster than the nurses.
Speed limits aren't just about the cars. Noise and pedestrian safety are big factors. There is no technology yet that lets a robot see around blind corners. They may have a quarter-second faster reaction time, but that doesn't mean they could be trusted to move much faster towards a crosswalk that might see a bike cross at any second.
The hospital robots seem like the closest analogue for now, as it's automation being held back by needing to coexist with inferior-performing humans. If it were only robots in the hallway, it'd likely go a lot faster, no? Just like if it were only computers driving the cars, following distances could be greatly reduced because they can react faster.
If it were only robots in the hallways, it wouldn't be a hospital. They did try giving the machines their own pathways. That's what pneumatic tubes were: a fast lane for robotic delivery. Frankly, for a hospital, pneumatic tubes would seem the far more elegant solution.
I guess it boils down to perspective. The hospital robot is in no hurry. Its life is no better or worse if it is faster or slower. But a human inside a robot-driven car does care. He doesn't see the overall traffic efficiencies, and even if he did he wouldn't take much notice. All he cares about is getting to the destination ASAP. So it is a different speed decision than the robot's.
The hospital I'm employed at utilizes both pneumatic tubes and robots. Robots roam the basement and employee only areas shuttling around food carts, linen carts, and trash. The pneumatic tube system reaches everywhere in the hospital and sends drugs, lab specimens, and basically anything else small enough to fit in the container.
I knew of a hospital when I was a kid that had raceways mounted just below the ceilings. It was like little toy trains bringing paperwork and drugs to each room. But if you watched them long enough, four-legged critters also used the raceways.
While the robot can't see around blind corners the way a person could by visually processing the scene, we could require aircraft-style transponders on all road vehicles, which would then allow your robotic car approaching a blind corner to know if there's anyone on the other side. It would also allow unambiguous communication of how fast each vehicle is travelling and its intentions - think of the brake and indicator lights also being part of the transponder signal.
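To make the idea concrete, here is a rough sketch of what such a broadcast might carry and how a car could use it near a blind corner. All the names and the message layout are hypothetical - this is an illustration of the concept, not any real V2V standard - and the distance check is deliberately a crude flat-earth approximation.

```python
from dataclasses import dataclass
from enum import Enum

class Intent(Enum):
    # what the brake/indicator lights would otherwise signal visually
    NONE = "none"
    BRAKING = "braking"
    TURNING_LEFT = "turning_left"
    TURNING_RIGHT = "turning_right"

@dataclass
class TransponderBroadcast:
    vehicle_id: str      # unique code, like an aircraft transponder squawk
    lat: float           # position
    lon: float
    speed_mps: float     # current speed, metres per second
    heading_deg: float   # direction of travel
    intent: Intent       # declared intention

def is_hidden_conflict(own: TransponderBroadcast,
                       other: TransponderBroadcast,
                       corner_radius_m: float = 50.0) -> bool:
    """Flag another road user near a blind corner even without line of sight.

    Toy flat-earth distance check: ~111,000 m per degree of latitude,
    good enough for an illustration at city scale.
    """
    dx = (own.lon - other.lon) * 111_000
    dy = (own.lat - other.lat) * 111_000
    return (dx * dx + dy * dy) ** 0.5 < corner_radius_m
```

A car approaching a corner would simply check every broadcast it hears against its own position, rather than needing to see the other road user.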
It's more likely to be cranked down though, because that's more efficient and people might be less frustrated with slow driving if they can kill time on their phones during the drive.
Or full automation might increase the number of vehicles and exacerbate the problem. (On the assumption that the vastly reduced cost of fielding driverless vehicles could result in many more commercial vehicles on the road.)
But automation can also increase vehicle density: "Stop and go" traffic is way more efficient when it's not groggy humans doing the stopping and going.
I'm not sure about that. Keeping the cars closer together would require more abrupt acceleration and braking, so as to avoid a gap opening as the car ahead accelerates from a stop. Since these accelerations won't be initiated by the driver, they'll come as a surprise and thus feel more jarring. I imagine that might result in autodrives that accelerate and brake more slowly, leaving greater distances between cars in stop-and-go environs. And I cannot see autodrives leaving much less room between cars while in steady motion or stopped. Humans are pretty good at tailgating.
At least in theory with proper communication channels between vehicles, vehicles could alter their speed just enough to make space for merging traffic or to allow a vehicle to turn. In that way nobody ever actually stops or starts, they just become slightly slower or slightly quicker, which should be much less jarring.
I think the key is that computer controlled vehicles could synchronize their stopping and starting. You wouldn't have to wait for the few feet moved by the car in front to bubble down the line as drivers notice the extra space. Instead, all stopped cars could coordinate to advance simultaneously.
You wouldn't even have to explicitly sync them; it'll take just a few milliseconds for a self-driving car to notice another car beginning to accelerate. Machines can keep precise control to a level simply unavailable to and unperceivable by humans.
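The gain from the cascade argument above can be sketched with a back-of-the-envelope calculation: if each car only starts once it notices the car ahead moving, the start-up wave propagates down the queue at one reaction delay per car. The delay figures are illustrative assumptions, not measured values.

```python
def time_until_last_car_moves(n_cars: int, per_car_reaction_s: float) -> float:
    """Each car starts only after noticing the car ahead move,
    so start times cascade down the queue of stopped cars."""
    return (n_cars - 1) * per_car_reaction_s

# 20 stopped cars at a light: groggy humans vs. fast-sensing machines
human = time_until_last_car_moves(20, per_car_reaction_s=1.5)     # 28.5 s
machine = time_until_last_car_moves(20, per_car_reaction_s=0.05)  # 0.95 s
```

Even without explicit coordination, shaving the per-car delay from seconds to tens of milliseconds makes the whole queue pull away almost as one.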
s/kill time/do something useful/, like reading books, working, having fun, sleeping. Time spent driving where a "self-driving" alternative is available is absolutely wasted.
Kill time? Regardless of who is driving, time spent sitting in the car is time spent away from family/pets/work/food/bathrooms and everything else important.
That won't work for all the people who'll suffer car sickness when they spend time focusing on their phone while in a vehicle. Intermittent messaging is OK for most people but have you tried reading a book as a passenger? We might find that most people have to look out the windows so often that they can't get much productive work done.
I used to have car sickness as a kid. I recall some of it being caused by vibrations made by the engine. The smell of gasoline was also vomit-inducing for me. Both of those issues are solved by electric cars.
Some problems are less temporary. The energy required per distance goes up as well, limiting your range. Furthermore, the amount of road noise produced rises too, which is another important limiting factor in high-density areas.
Have you ever ridden a horse? They're mad fun to ride. That's why there's a whole industry for racing and riding them.
But they're inconvenient for getting from point A to point B. Which is why they've been replaced by cars.
I suspect the same will happen with cars. I love my fast car, but apart from that once-a-month weekend drive, I really don't get to flex its muscles.
I'd much rather relax in my car on my way to work every day. Then I could buy a more powerful car which I could take out to the race track on weekends.
I don't think inability to speed will be as much of a problem with automation. People speed regularly today because other than getting to a destination, driving is so unproductive. If you're able to do what you want while driving (read, watch videos, communicate, etc) people won't be in as much of a hurry.
Agreed. Speed, particularly on sub-1-hour drives, is mostly about competitiveness. If you're no longer competing with other drivers, because you have a robot that is, what will that extra 10mph do for you?
Particularly when you can work in your car. Or watch a movie in your car. Or have sex.
Why assume that the robots won't compete? Cars today are marketed based on their potential for speed. With that out of the picture, why shouldn't we expect manufacturers to compete to have the "fastest" autodrive systems?
As an inexperienced pilot who has quite recently had the "what the heck is it doing" experience with my (40 yr old, analog computer) autopilot system, I can assure you that it is.
> pilots uncertainty regarding autopilot functioning is [...] the reason for several accidents.
And compared to that, how many accidents were prevented by the use of autopilot? How many times did the autopilot make a better decision than a human would have? You can't compare to an absolutely perfect state (zero accidents). You have to compare to the current situation.
I think in my comment I mentioned that, taken as a whole, autopilot systems have made planes "so, so, so much safer."
I was only remarking on the possibility that some of the issues this automation has created in aircraft might also show up in cars, especially with requirements like keeping your hands on the steering wheel even though the car is in control.
And, unlike your rational analysis, I don't trust politicians or the media to think about it like that. Instead, I'm sure the first accident that could even be remotely attributed to an automated system would immediately ignite a firestorm of bullshit articles and possibly damaging regulation.
While the over-the-air update is novel, these features all exist on current luxury and even some middle class vehicles as part of driver assistance option packages.
They're typically called Lane Keeping Assistant, Adaptive Cruise Control, Blindspot Warning, Automated Parking, Traffic Sign Recognition, etc.
The emergency steering bit is interesting, though no further details are provided. It requires the car to ensure that there is a safe space to steer into, which is dicey for a forward collision emergency braking system, so I'd conjecture it is connected to the side collision warning and allows collision avoidance only if there is enough space in the current lane.
I see this strange contradiction where some people are saying "this is nothing new, other manufacturers had this years ago" and yet somehow other manufacturers don't get reaction videos on YouTube and don't get to the top of Hacker News and Reddit... It seems very clear to me that what Tesla has done is above and beyond anything in the past. If this already existed, nobody would really care.
You could make the whole brand argument, like with Apple releasing the iPad, but... I simply disagree. Tesla deserves the credit they're getting.
They may exist, but that's not the fundamental difference between Tesla and <enter most all other brands>. The first difference is long-term buyer delight, and it's clearly shown in the first sentence of the linked article: "Model S is designed to keep getting better over time." Everyone else's stance? That was last year's model - if you want the new UI and/or new features, buy the current model year car. There is, most often, zero expectation for car buyers today of upgrades or enhancements as drastic and as positively interesting as Tesla's approach.
The second, and likely more critical differentiator (IMO), is security. Mr. Musk would not accept public shaming of such magnitude like this:
And I only pick on GM here since, well, you work there. When it comes to software in cars, Tesla treats it as a true part of the vehicle engineering. Others seem to still treat it as an afterthought - and then the question starts to linger: how good is the software in all of the other vendors' "features"? How much QA have they done with regard to lane keep, blind spot, adaptive cruise, etc.?
I don't own a Tesla - I really wish I did - but if I had to place a wager on a car manufacturer's QA process and ability to build fault-tolerant vehicle systems, I would pick Tesla to oust the competition handily at this point. While I realize it's a subjective matter, and there's really no good way to compare, the company's direction seems pointed in a more apt approach than others'.
To be clear: I, personally, am really happy there are strong challengers in the car market. Also, I am speaking solely for myself.
The quote: "Model S is designed to keep getting better over time" reminds me of kaizen (1). It is a really awesome concept that not only can the car manufacturing process and technology keep getting better, but the cars can improve as well.
That being said: OTA update is REALLY FUCKING SCARY for cars. What if someone puts the wrong update in the queue accidentally? (2)
The historic attitude to modules in cars is also important. Modules run "code", but it is treated as mechanically as possible. Could you imagine changing out a few pistons while driving? Probably not. This is a failure of imagination that is being addressed now!
It is a truism that Big, Old, Large companies are risk-averse. The downside to doing a bad OTA to a car is unlimited!
WRT Tesla's QA - I don't know of anything that has been published in this regard. I would hope they are "doing it right". I hope they are successful, and I hope everyone is inspired by their leadership and learns from them.
1: https://en.wikipedia.org/wiki/Kaizen - Article claims it was introduced by American business people, but Japanese companies continuously improved it =D
> That being said: OTA update is REALLY FUCKING SCARY for cars. What if someone puts the wrong update in the queue accidentally? (2)
OTA update is no more or less scary than any other form of software update, or in fact any other form of mechanical update.
Software engineers are generally used to the level of rigour that goes into their software. If you're a web developer shipping a commerce application there's an appropriate level of testing and process, because there's only a certain level of reliability you need to hit, and spending more money on that would slow down your development. The way you go about delivering software for medical devices, for instance (which we do), is a completely different process with a whole different level of rigour, testing and documentation. Because that's appropriate in that environment.
There's a whole lot more documentation and thought that goes into the beam that stops the top of your house from falling down, than goes into the beam that stops your garden shed from falling down. It's no different than software.
It's not as simple as that. NASA can update software on the Mars rover or interplanetary probes, but that's one device at a time, and the amount of effort put into it is staggering.
At the same time, consumer electronics are routinely broken by OTA updates.
Cars fall squarely in the middle, high volume and high price. Additionally, failures carry a high risk. Nobody will die if your webshop goes down, but if your car decides to steer into oncoming traffic, well, bummer.
The support beam analogy is flawed in the sense that the beams are simply made bigger to ensure they're strong enough even with considerable material defects, but this doesn't work for software, where a single little bug can lead to a catastrophic failure.
I am not aware of anything other than cars where such a high number of devices carries such a high risk factor. Certainly doing OTA car updates in a commercial environment is possible, but there is not yet a relatively foolproof way to do it.
Regarding 1: I am a huge proponent of Kaizen; what you're probably after is the influence Deming had on Japanese companies. The interesting part is that if you've studied Deming, you'll know he pitched his process to every American manufacturer first - and all of them wrote him off, which, as we all know, came back in spades from a quality-comparison perspective shortly after. Deming: Quality = Results of Work Efforts / Total Costs.
Regarding OTA: It is. But ignorance is even scarier. Look at any company that embraces CI (continuous improvement). Amazon - how many changes to production do they push a day? Now compare that to a legacy F50 with processes designed to be change-averse, because they view change as risk in and of itself. This seems to be the viewpoint you're working from through GM, and maybe (I'm speculating here) you're influenced by the process internally - maybe your view is that it's risky because of what you're exposed to? My guess is that, given the culture of Tesla, critical software OTA is not taken lightly. Speculation - but they likely have a far more rigorous deployment process than many others, since they've done this from the beginning. If I can suggest reading on this subject, I would point you to Gene Kim's work in this area: he has studied high-performing organizations and, in a nutshell, found that companies that embrace change and do it frequently have fewer operational problems than those that don't. Risk aversion seems to compound mistakes, and this could very well tie into the hypotheses around practice: the more you do, the better you are - the "10k hours" idea.
While I agree accidents can (and will) happen - again, I wouldn't put my money on it happening to Tesla first, or with any frequency. Keep in mind Tesla is riding on quality in software - full stop. If they have a problem there, it could be detrimental to the point of failure. This is a good thing for consumers, because they're most likely getting a superior product comparatively. And we already know that non-OTA software that has gone out the door in vehicles has caused death and harm. Is it scary that those bits are not able to benefit from a timely OTA update? There are definitely two sides to this coin.
And finally... While I know it's been said quite a bit that "we've done that", I'm not sure people are truly grasping the reality of what Tesla is doing. While I understand others have these features, the Jalopnik short sums up what they've accomplished that, in my opinion, others are definitely lagging behind on - if you haven't watched it, definitely do:
Again... Even if others are kind-of-sort-of doing this today, the iterations will be across model years. No vendor has the long-term upgradability that Tesla does at this point. Not sure I would trust an American car to change lanes on its own based on its situational awareness as shown in the video.
Wait, so you see that it is a fact that many of these features have existed in regular cars under various names. It's undeniable, ofc. But __then__ you somehow refuse to believe that they are actually the same, because it's at the top of HN and other media outlets. And then somehow you feel qualified to say Tesla deserves the credit they're getting? It isn't "making the whole brand argument." You __are__ just being a sucker!
The Tesla can change lanes on its own while the others can't, but the others have some interesting gadgets the Tesla doesn't - for example, Audi has the super trick night vision display which also picks out and highlights/alerts on pedestrians and animals.
My 2014 Mercedes E350 can do everything in that video, except lane changing, that's new and cool.
But I looked exactly like the guy in the video when I first test-drove my car two years ago, but now I'm so used to it I don't think about it any longer.
But you gotta give it to Tesla's marketing department, that they can get people excited by a feature that you could get in a damn nice car, two years ago, at half the price.
Most companies need a marketing department to convince people they're making good cars, while they're making good money making cars. Tesla simply makes good cars. No need for a marketing department if your stated goal is your terminal goal, and not just an instrumental one.
You can take your hands off of it for short periods of time, but it quickly starts whining to tell you to keep your hands on the wheel, and if you don't the system disengages.
It's usually ok to just nudge the wheel a bit to let it know you're still there.
Note that this is for obvious liability reasons, not technical reasons.
Nice, it appears to be much better than my car at high speeds!
I'm quite envious of the OTA update; there's no way Mercedes will ever upgrade the Distronic software in my car. If I want the improved version, I have to buy a newer model. :-/
Well, that's because they're Tesla. Benz has had very similar features on their higher-end cars, and it was well received (I think) in the automotive industry, but not the tech community, because Tesla is more well known in tech compared to Benz.
I think it's a mix of effects. Tesla is definitely pushing car engineering in some respects, but as far as I can tell the only thing new in this story is that the feature was added to the cars in a software update. Do you really think Audi would be #1 on Hacker News if they updated the 5 series over the air to stay in its lane automatically while on the highway?
I've had a bit of cognitive dissonance recently while reading Elon Musk's comments on Apple's electric car project, because I have to remind myself Tesla isn't owned by Apple. Tesla does do some great engineering, but I think they also have Apple levels of hype within technology circles.
> Tesla is definitely pushing car engineering in some respects, but as far as I can tell the only thing new in this story is that the feature was added to the cars in a software update. Do you really think Audi would be #1 on Hacker News if they updated the 5 series over the air to stay in its lane automatically while on the highway?
If they played it the way Tesla did, they probably would. Recall that nobody buying Model S knew about those capabilities up until the moment Tesla announced, "by the way, at some point we've started packing Model S with sensors; if you've bought recently then you have self-driving capabilities that we'll enable soon with a software update". This came as a surprise for everyone.
It's tiresome enough to read ignorant people alleging that Apple's success is due to "hype" rather than superior product when Apple is actually the topic at hand. Do we have to read it at other times, too, now? I hope not.
And before Dropbox there was rsync and HN laughed at why anyone would pay for it. And before the iPod there was the Nomad and Slashdot laughed at why its hard drive was so small and didn't have wifi. And before the iPhone there was the Blackberry and who would take that seriously, Blackberry already owns that market!
Lane Keeping Assistant is very different from Lane Centering/Steering.
I've driven many cars with Lane Keep Assist - it is really a warning/minor correction system to keep you from crossing over your lane. If you let go of the wheel, most systems will ping pong from lane to lane.
Tesla's system actually centers you in the lane, steers around corners, and handles changing lanes. That is the innovation.
I've driven a brand new Range Rover with the assist package, and that car was fully automatic in city and motorway driving for all intents and purposes, as long as there was a car in front of you. You would engage the "smart" cruise control, and it would just follow the car in front of you perfectly, making turns, stopping and starting. I've driven it on the motorway at 80mph; then a car in front of us was taking an exit, I put the blinker on, the car followed, and drove around the bends following the car in front, until we came to a stop at a set of traffic lights. Then when the traffic started moving again, all I had to do was tap the accelerator, and the car started following again. That was without me touching the steering wheel at all.
I am unable to find any info on the Range Rover site about capability to auto steer (just Auto Cruise Control). There are also no Youtube videos showing this functionality.
What sensors does the Model S have? I'm surprised that Tesla sold a car with enough sensors for semi-autonomous operation without the actual software until now.
For those with more knowledge about cars, how does the sensor array in the Model S compare with similar models from companies such as BMW, Audi, Mercedes-Benz? I'm interested in knowing if it's software or the already installed hardware holding back recent luxury cars from similar capabilities.
Also, does anyone know anything about the (digital) security features of the Tesla? This announcement from Tesla makes it clear that the actual control of the vehicle can be modified by an over the air software update. With the recent Jeep hack[0] in mind, does anyone know if something similar is possible on a Tesla, or if there are some safeguards such as signed updates? As one of the most computerized cars on the market, I tend to think that the Tesla cars might also be some of the most (maliciously) hackable cars on the market.
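For what "signed updates" would mean in practice, here is a deliberately simplified sketch of the verify-before-install idea. Everything here is hypothetical - Tesla's actual mechanism isn't public - and a real OTA system would use asymmetric signatures (e.g. Ed25519 verified against a public key baked into the car), not the shared-key HMAC used below for illustration.

```python
import hashlib
import hmac

# Hypothetical shared key. In a real vehicle the signing key would never
# leave the manufacturer; the car would hold only a public verification key.
SIGNING_KEY = b"factory-provisioned-secret"

def sign_update(firmware: bytes) -> bytes:
    """Manufacturer side: produce an authentication tag for the payload."""
    return hmac.new(SIGNING_KEY, firmware, hashlib.sha256).digest()

def install_update(firmware: bytes, signature: bytes) -> bool:
    """Vehicle side: refuse to flash anything whose signature doesn't verify."""
    if not hmac.compare_digest(sign_update(firmware), signature):
        return False  # tampered or corrupted payload: reject outright
    # ... proceed to flash the firmware ...
    return True

blob = b"autosteer v7.0"
sig = sign_update(blob)
assert install_update(blob, sig)              # genuine update installs
assert not install_update(b"malicious", sig)  # forged payload is rejected
```

The point is that even if an attacker can reach the update channel (as in the Jeep case), they still can't push code without the signing key.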
Long story short, it's both hardware and software holding things back. On the hardware side, there is a lot going on with solid state radar and lidar sensors improving by the minute, while prices are down a factor of 10-100 from even 5 years ago. There are now sensors available for research and product development that in 5 years time will allow a semi high-end production car to robustly drive all traffic conditions on a highway.
Car makers (and suppliers) have to learn how to make software to make optimal use of the existing hardware, but they've got a long way to go, still. Every major player nowadays has a research center in Palo Alto or whereabouts, seemingly trying to learn how to do this via osmosis, but it will take some time until they really understand how to keep pace with information technology, and how to bring it into their mammoth manufacturing and legal frameworks. Never mind the necessary mindset to pull it off.
It's no problem to autopilot on the highway today, 90-95% of the time. But that number needs to go even higher, still, before it can be unleashed on the public. And for that last edge, sensors and software have to get better. As a guess, going from 90% to 99% is the same effort as going from 0% to 90%, software- and hardware-wise. Luckily, these diminishing returns are (somewhat) offset by more resources being poured into development, and hardware costs coming down due to volume manufacturing.
Any estimate of how much the sensors as well as compute infrastructure cost in something like the Model S? $100, $1000, $10k, more?
I'd love to learn more about the increasing "computerization" of cars. Do you know of any good publications, blogs, etc for someone to learn about the computer systems powering cars?
The great point is, Tesla probably tested the sensors in shadow mode for years: what would the computer have done vs. what did the human driver do? They probably confirmed that the computer's driving was at least as accurate as the human's before enabling the feature.
They recently had a model update that shipped with a fairly extensive set of sensors (I have a family friend who recently updated his Model S for these features). The initial model didn't have these.
Releasing driving assistance features as a 'beta'? What on earth does that mean here? Are the features ready to use or not? Do Tesla warrant that they work and are safe?
Maybe they expect drivers to treat it like beta software - "Please don't use these features in production cars. Make sure you keep backups of all drivers and passengers in case of bugs."
Since it had to go through regulatory approvals, I'd expect it's not 'beta' as in 'potentially unsafe'. It's more of: it may require you taking over more often than we'd like to.
Response to both you and joosters: I think you're being too strict here. Who knows what the threshold is? Maybe it works 100% of the time on highways. Maybe has problems only when there are no lines on the street? It's not a completely unsupervised technology and is not advertised as such. You can't even legally treat it as such - you have to be prepared to take over at any point.
Compare it to the regular cruise control, already present in production for years. Would you say cruise control is not ready for production or release on public roads? It's a tiny subset of what Tesla's update does. (and it doesn't even tell you when you need to take over)
Driving a car is different from 'taking over'. If you drive a car you are alert, because you are controlling the vehicle. Being prepared to 'take over' means waiting for something out of the ordinary to happen, and people are notoriously bad at such tasks. Their attention will be sharp for a little while and then drop off to levels where, if something requiring intervention does happen, they will likely have very little situational awareness - whereas if the same situation arose while they were driving the car themselves, they would likely respond adequately, because they were already processing all the relevant information and had accurate situational awareness.
If you're required to be ready to intervene, that's about the worst possible way to introduce automated driving. And more to the point: the better the implementation, the longer between 'interventions', and the more likely that such an intervention will not be useful at all.
I'd be happier with just the adaptive speed and letting me steer... that may just be instinctive, as I don't like having to fiddle with the speed, but steering can be very instinctive.
> Tesla requires drivers to remain engaged and aware when Autosteer is enabled. Drivers must keep their hands on the steering wheel.
Which sense of "must" is used here? The car seems to play an unwinnable game with the driver: keep your hands on the wheel or I'll...what? Disengage autosteer and perhaps crash? With no enforcement mechanism, drivers are incentivized to "abuse" (aka "use") the system as much as it allows.
Videos on YouTube[1] show that the car drives itself even with hands off the wheel. However, I've also read that the car will prompt the driver to keep their hands on the wheel and even turn on emergency flashers/slow to a stop if the driver failed to do so [2]
Well, that sounds annoying - what's the point of autopilot if you still have to keep your hands on the steering wheel and pay attention to the traffic...
It's like having to hold a button on your hands-free headset in order to be able to talk :-)
The car should flash the lights and scream "inattentive driver" to all around. After a minute that scream should become "Call police, my driver has fallen asleep" and the car should autopark at the local copshop to give a statement against its owner.
In all seriousness, this system to monitor driver attentiveness will generate lots of very discoverable evidence.
You can defeat the system any number of ways. This is just to wash Tesla's hands of any liability: if you get in a crash because you intentionally subverted the engagement requirements, it's clearly your own fault (not attacking Tesla here, it's what all the other carmakers do too).
Not sure about US law, but pretty much all driving laws I've seen mandate that you keep both hands on the wheel unless you are operating another system in the car, in which case you still have to keep one hand on the wheel.
It's not even a matter of enforcement; even with a fully "alert" driver with both hands on the wheel, I suspect it'd take a considerable amount of time for a person to consciously make the switch and take control of the car, especially after Autosteer has been engaged for a while.
Personally, I'd never use something like Autosteer. As far as I'm concerned, either I'm driving the car (i.e., directing its movement, even if that movement is realized by computers/microcontrollers), or a computer is driving it - not something in between or both.
> I suspect it'd take a considerable amount of time for a person to consciously make the switch and take control of the car
How different is Autosteer from regular cruise control, in this regard? Or do you think that this level of automation might encourage people to distract themselves, without having quite enough technology to allow them to do that?
Cruise control may encourage similar behavior (and contribute to decreased reaction time), but the fact that cars naturally drift - even on straight roads - is a very effective reminder to drivers that their constant attention is required.
Take away the need to steer, and the only thing drifting will be drivers' attention.
With the regular cruise control you still have to stay engaged with the road. Look at the video a few comments up. That situation is a nightmare - empty seemingly absolutely safe road, plenty of glowing digital distractions, a smooth quiet ride, warm and comfy, and relaxing music. That car is a rolling relaxation pod. I don't know if I have enough willpower in that situation to not become distracted or fall asleep.
I've driven a Volvo with this feature, and it has two modes. One is below 50 km/h, where it will fully control the car and can make quite hard turns to follow the lanes and the cars in front of it. The other is above 50 km/h, which requires hands on the wheel, but the corrections are not as strong; it is more like driving on a very concave road which automatically directs your wheels towards the centre of the lane - if you don't resist the motion of the wheel, it will keep you in the lane. Not keeping your hands on the wheel in the second mode triggers a warning after a few seconds and later disengages the system.
Essentially you just need to hold your hands on the wheel and relax your arms. Before you've driven with this feature you might not realize it, but when you are driving you are constantly making minor adjustments to the steering wheel, which takes quite a lot of effort, both attention-wise and physically. This takes all that away but still keeps your brain in the "I am in control of the car" mode. If you were to keep your hands off, I think it's easy to zone out, and if something happens there might be too much of a context switch for you to handle the situation fast enough.
Is there any video of the Tesla doing this at highway speed? I can only find city driving.
> Research that Stanford has done shows that drivers resuming control from Level 3 vehicles functioning in autonomous mode take 10 seconds just to attain the level of ability that a drunk driver possesses. And to get back to full driving competence takes 60 seconds.
I was surprised to find that the Autopilot feature is a paid $2500 upgrade, according to one source.[0] I'm not surprised that Tesla is charging for the upgrade, but that in all the press and enthusiast coverage of Tesla, I don't recall it being mentioned before.
I may be wrong, but I think that they removed the "Tech Package" option (offered at initial purchase) and now instead offer the "Auto-Pilot Features" option.
I'm not sure if they install the hardware sensors regardless & only control access via software, though, but I'd be curious about this for sure. Anyone know?
IIRC, when the Auto-Pilot hardware/option was originally released the "Tech Package" was a pre-req to buying the "Auto-Pilot" option.
"Tech Package with Autopilot" used to be a $4,250 option including a bunch of small features such as LED cornering lights, fog lights, and a rear trunk that opens remotely. I don't know whether all those differences were actually just access-controlled in software.
For the autopilot features, the difference was only software. Even before the Autopilot option was first available to order in October 2014, cars were being silently shipped with Autopilot-ready hardware for several months. For those cars, it's possible to call up Tesla with a credit card and enable the feature over the phone.
All these controls sound very similar to those in my current-year Mercedes... although I would hope that the Autosteer on offer here is better than the Distronic Plus "lane assist" in the Merc, which, while OK, does not do a great job on less-than-gentle turns above 50km/h (but it's actually great below that speed, to the point I wonder why I'm even in the seat, particularly in stop-start traffic). It certainly sounds similar from the "hands must be on the wheel" requirement.
I look forward to the next step up from all the car makers, which is clearly the car driving on its own in a much more confident way, with the driver simply there to manage exceptions as opposed to being 'assisted' by technology as is with the current implementations.
The "hands must be on the wheel" part is a CYA statement. Many videos released today show that you really do not have to have your hands on the wheel. Mercedes' system requires driver input every 14 seconds.
The auto-park feature would be super handy, but I don't see an auto-unpark feature... I look forward to seeing Teslas stuck in amazingly small parking spots!
This seems insanely dangerous to me. They're introducing a feature which could, potentially, cause massive highway accidents, but providing documentation that amounts to little more than a glorified README file?:
> Auto Lane Change
> Changing lanes when Autosteer is engaged is simple: engage the turn signal and Model S will move itself to the adjacent lane when it’s safe to do so.
A single sentence! What's the point of having drivers license lessons and testing if the fundamental operation of the vehicle can change so drastically?
Am I being a luddite, or does anybody else feel this way?
> What's the point of having drivers license lessons and testing
Hah, implying that these are actually useful. The Washington state test and generic lesson plan don't even include highway driving. It's a 20-minute test that quite possibly anyone could pass; all I had to do was briefly drive around, and the hardest parts were probably parallel parking with a couple of feet of leeway and controlling my speed down a hill.
The point of a driver's license is to have something to hold over your head so you'll pay your tickets. Oh, you thought it was about safety and competence? My 50 year old wife got her "time to renew" letter in the mail: "check this box if your eyes are still 20/40". Cataracts so bad you can barely see the dashboard? Been putting off that eye exam because you know you need glasses? No problem, just check a box and you can still drive! I guess the token eye check at the DMV was too onerous for WA drivers. Small wonder why WA drivers suck so much.
I recently moved states and was in person at the DMV. I handed the clerk an existing (out-of-state) license that had a corrective lenses restriction on it, and I informed the clerk that I was currently wearing contact lenses before taking their eye test. They still didn't put the restriction on it.
See my parallel comment: my wife could be blind and still hold a WA state driver's license if she were willing to lie when filling out a form. In which case I'll ask: what's the argument for licensing if we don't even bother to check?
We do bother to check. If you get pulled over for a traffic violation and you don't have a license, you'll go to jail.
These are really not convincing arguments. If anything, it seems the conclusion of what you're saying is that we need to be more rigorous in our testing and validation.
> We do bother to check. If you get pulled over for a traffic violation and you don't have a license, you'll go to jail.
If it's ridiculously, over-the-top easy to receive and keep a license, even when it shouldn't be, then I'd argue the license, and the checks around it, are useless. All your check verifies is that the person can do the bare minimum of what would be considered "driving", and, worse, it's really verifying that the person was able to do it X years ago, where X can be as far back as 50 or so years. At that point, what's the use of checking? You are right that we should be more rigorous in testing and validation, but until that's the case, the whole system is in question.
The automated lane changing is probably safer than a human doing it.
Same concept as putting code inside a function. If you can make changing lanes a functional process that works safely the same way 100% of the time via computer, why not?
Right now the procedure is re-written and executed from scratch every single time a human performs it, which results in errors.
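To make the "function" analogy concrete, here's a toy sketch of a deterministic lane-change decision. Everything here is invented for illustration (the names, inputs, and thresholds); a real system fuses radar/camera data and runs far more checks. The point is just that the same inputs always produce the same decision:

```python
def safe_to_change(adjacent_gap_m, closing_speed_ms,
                   min_gap_m=20.0, max_closing_ms=2.0):
    """Hypothetical safety check: require a minimum gap in the target
    lane and no fast-approaching vehicle. Thresholds are illustrative."""
    return adjacent_gap_m >= min_gap_m and closing_speed_ms <= max_closing_ms

def change_lane(signal_on, adjacent_gap_m, closing_speed_ms):
    """Deterministic lane-change decision: identical inputs always yield
    the identical output, unlike a human re-deciding from scratch."""
    if not signal_on:
        return "hold"            # driver hasn't requested a change
    if safe_to_change(adjacent_gap_m, closing_speed_ms):
        return "move"            # execute the maneuver
    return "wait"                # keep checking until the gap is safe
```

Called with a 30m gap and a slow closing speed it returns "move"; with a 10m gap it returns "wait" every single time, with no attention lapse in between.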
Isn't a similar argument made about automatic transmissions? In general I feel requiring input from the driver via the turn signal is a good thing. After all, there is risk in checking your blind spot during a lane change, especially if the driver in front of you brakes hard.
Keep in mind all new cars come with a button that lets you peg the car's velocity at nearly any speed, and it will hurtle down the road with no awareness of the lanes, other cars, or any hazards. It won't brake, and it won't even detect whether you are touching the wheel or pedals. It's called cruise control.
I have no earthly rationale for why we ever legalized cruise control in the first place, but that's the status quo we are comparing against. Anything that makes cruise control safer is an improvement. Arguably without cruise control most of these innovations would need huge amounts of lobbying to pass, but they're actually pretty easy to sell in comparison to what people have today.
It's not universal that cars ignore the pedals. On my car (14 years old...) cruise control is deactivated when you press the clutch or brake, and the car responds to the accelerator as normal, slowing down once you stop accelerating.
Every car I've ever driven with cruise control has had basically that same behavior, with the only major variation being if you can set a new base speed without flicking cruise control off and then on again.
Not true. Many cars come with adaptive cruise control to prevent you from running into the car ahead. And cruise control seems quite safe; it's been around for decades with largely no issues.
I drive entirely using cruise control to ensure compliance with speed limits.
The fact that people who bought a car that didn't self-drive now have one is probably the most novel aspect. Lane keeping, adaptive cruise control, parking assist, collision avoidance, etc are all features in other vehicles as well, slowly trickling down to more models each year.
Right now, IMO, we just need more self-driving cars in general to move the whole concept in terms of acceptability and availability. There is certainly lots of room for innovation, but for now just more product selection alone is valuable.
I think the part that's constantly fascinating about the Tesla cars is that these features are being delivered as software updates. This would be like a software update that switched your phone from 3G to 4G.
Although I agree it's a cool way of rolling out features, it's more like buying a phone with 4G capabilities that aren't enabled until a software update: the capability was always in the cars, they just got federal approval now.
In one sense, yes—though I would also compare it to some of the cool hacks NASA does getting orbiting telescopes and Mars rovers doing new things they weren't doing before. All the sensors and motors and antennae are there, but a firmware update can put them together in new ways to enable entirely new high-level behaviors that weren't being considered or planned for at launch.
An example in this vein: imagine a firmware update to a wi-fi router giving it MIMO support. A MIMO antenna isn't any different from a regular antenna; the difference is the baseband firmware doing cleverer math to pull out overlapped signals, spatially model their sources, and modulate its own output so the signal will constructively interfere for best performance at the destination.
In a CliffsNotes sense, both vehicles seem to offer an autonomous driving mode in normal conditions. Like any other competitors, each has a few unique features that are neat but don't fundamentally change the overall experience.
To be clear, I own neither a Tesla nor a Mercedes currently and I'm basing this just on looking over the specs, but they appear to be fundamentally comparable.
Mercedes' version requires the driver's hand to be on the wheel during steering. If not detected within 14 seconds it is disabled. Tesla's version (despite their CYA language) requires no hands on the wheel, UNLESS the system loses sight of road lines OR a tracking car in front.
In addition, the Mercedes system can't change lanes automatically.
This is fantastic. I'm psyched, not just because of the cool technology, but also because it will finally spur the public to demand more frequent and accurate road striping.
Massachusetts has terrible road striping; it seems as though they get around to it about every four or five years, waiting until the lanes and ramp markings are beyond dangerous. This has been irritating me for years. And then they seem to use some kind of cheap paint that wears off quickly. Public works job security, I suppose.
But automated lane navigation will require clear markings. Hundreds of thousands of deaths later, we just might finally get a safer road system. Pathetic, but better late than never, I guess.
I doubt relying on lane markers alone will be enough given the variety of markers, and possible road construction that will be encountered by these cars.
Yo if anyone at tesla is reading this, can you implement a feature for the car to move over into the far side of the lane when people are lane-splitting? People already do it manually on the highway, but if this car also did it would be neat.
thx
and look twice for motorcycles.
It seems like it'll create an uncanny valley between real control (the car immediately reacts to every little movement) and soft control (the car kinda-sorta follows what it thinks your intent behind the movement was; sometimes ignoring small deviations). I'm really curious how it feels.
In my car, the self-driving forces are very weak, so it feels more like you're driving on rails. With one hand on the wheel, it's not jerking your hand around, it's softly nudging you to do the right thing, keeping the car in the lane.
This machine will keep pace with traffic. OK. Does that mean it will break speed limits? Unless it is scanning for each and every potential road sign, it simply cannot respond to arbitrary/temporary limits. Determining the legal limit on a piece of road is a complex task. Road construction, local conditions, sunrise/sunset, time of year (school zones), and even weather can be factors. And let us not forget "Speed limit X when children on road". You need some serious CPU time to work out whether that person walking along the road is a schoolgirl or a construction worker.
Imho any system not capable of determining the speed limit accurately is a legal liability. Have fun with the tickets.
>eliminating the need for drivers to worry about complex and difficult parking maneuvers.
No. Parallel parking is neither a complex nor a difficult maneuver. It is total beginner territory. No lives are at risk. With a decent bumper, even the risk of property damage is minimal. Anyone not capable of learning to parallel park probably shouldn't be behind the wheel of much of anything. Anyone buying this car to avoid such mundane tasks isn't someone with whom I want to share the road.
Actually, my dad has a Model S and one of the coolest features is that it actually does scan for speed limit signs and tell you the current speed limit on the dashboard (you can configure a limit over the speed limit for when this should show up).
Now, I have no idea whether or not this feature is utilized for autopilot (though I would kind of assume it would be), but it is there :)
Is it actually scanning for signs, or is it using map data to determine the speed for the current road segment? I suspect it's actually the latter; reading road signs is a somewhat difficult CV problem, whereas having a well-annotated map is largely a solved problem already. (My old Garmin GPS had this data!)
Reading road signs is exceedingly tricky. It's not just the OCR, there's plenty of other gotchas. e.g. what if you miss a road sign because you were overtaking a truck when the sign went past?
Road signs also have situational contexts. e.g. in the UK there are road signs indicating a max speed for an upcoming sharp bend (or series of bends) in the road. There isn't always a matching speed limit sign after the bend because drivers are assumed to work that out for themselves. But will the cars do so? Or will your self-driving car get stuck at the lower speed?
Speed limits almost always have repeater signs at regular intervals, particularly when it's not national speed limit. The recommended max speed signs on bends are different from speed limit signs too. If there's not a red circle, it's not mandatory. The exception is national speed limit signs, which would require the car to see if it's a dual carriageway or not so it can switch between 60 and 70.
In my jurisdiction the yellow "limit" signs before corners are not actually speed limits, just references. You are allowed to disobey them at your peril. Sometimes that means you can do 80kph around a "30" corner, a common situation on mountain roads.
That also avoids enforcement problems, such as where the new limit should begin and end, and the difficulty of measuring vehicle speed through a corner.
You (and the parent posters) vastly overestimate the difficulty. The signs are reflective, high contrast, use a known font and a very limited symbol set (0-9). Car CV systems can distinguish pedestrians from background; they can read a freaking sign.
You have to distinguish the sign from signs on trucks (indicating their maximum speed) and from reflections of signs in windows; you have to recognize the sub-signs which might limit the scope of the main sign (a speed limit might only apply to an exit or under certain weather conditions); and then there are the usual adverse conditions like rain, reflections, dirt, plants, shadows, etc. Just search for "german traffic sign recognition benchmark" to see what the state of the art is (ConvNet).
The MobileEye system in the Tesla (and other manufacturers) has some of the most sophisticated CV software on the road - not just scanning for road signs, but identifying road markings, lane markings, curbs, obstructions, traffic signals, etc.
That's a good question. He is under the impression that it's actually scanning them, but I honestly don't know (I am aware that the mapping data would be much easier to accomplish).
The map is only the start. No map can accommodate all the local changes and rules (small/temp construction zones) nor will local authorities always forward minor changes (school zones).
When I teach an intro law course I use speed limits to explain why the law is so complex. It's much more than signs. The system you describe appears only informational; the car isn't the one picking the speed. It's a big step for a manufacturer to sell a product that will set a speed based on its own determinations.
I used to have a silver motorcycle jacket (think Power Ranger). It was great for keeping the sun away, but with a couple lines of tape it would make a great mobile speed limit sign. Every Tesla I pass might suddenly read "Speed limit 15" and slam on the brakes.
The cameras can and do detect if a sign isn't stationary, etc.
For speed limits, the obvious technical problem is knowing when they end, if they apply to your road, or maybe a parallel road or an off-ramp, which is mostly easy for a human to tell, but much harder for a camera.
Even though the camera may be better on average processing all this information than the average human driver, it's an unanswered question, from a legal point of view, who's responsible when the camera is wrong.
Volvo has recently taken a stand, proclaiming legal responsibility, but it remains to be seen whether that's even a possibility in many nations.
> Even though the camera may be better on average processing all this information than the average human driver, it's an unanswered question, from a legal point of view, who's responsible when the camera is wrong.
There may be some jurisdictions where this is the case, but in most I'm familiar with it is fairly well settled that the human driver of a car is legally at fault if the car is driven in violation of the law, and the human driver is also legally responsible for ensuring that all mechanical features of the car are maintained so as not to interfere with the driver's ability to ensure the car is driven in accordance with the law.
(In many jurisdictions, the manufacturer may have liability for accidents and injury due to manufacturing defects, but that doesn't generally absolve the driver for being responsible for driving consistently with the law.)
But does the camera obey the underlying "speed safe for conditions" rule in effect in most jurisdictions? (They use this to avoid the possibility of there being no limit should a problem crop up with the signs.)
It is very possible to get a speeding ticket well below the legal limit. It isn't common, but where fog/snow/ice/rain are a factor I have seen cops hand them out to idiots. And 'conditions' can include the condition of your vehicle. Driving with bald/track tires in the rain can result in an 'unsafe speed' ticket should a cop see you slide.
Really? Have you looked at any 'new driver' manuals? I'm reminded of the time I came out of school to see two motorcycle cops hiding behind my jeep, using it as a blind to catch speeders.
"Hey, is it a school day?"
"No, we take a couple days off before exams."
"Oh. It's a good thing then that we haven't ticketed anyone."
"But doesn't the law say 'normal school day'? It is a friday."
I don't think it's the traffic law that is complex, but the liability involved. If my car picks its own speed and gets the speed limit wrong, who is responsible?
You could say that the ultimate responsibility lies with the driver, but taking this position effectively means that all self-driving features are pointless (since it implies that the driver should be concentrating on driving at all times).
If the responsibility lies (partially, at least) with the car manufacturer, they are going to get a lot of lawsuits thrown at them when it gets the speed limit wrong. Every speeding ticket, every accident where the car was going too fast, etc.
Many GPS units show the current speed limit (I live in France) and it's pretty accurate. They know when you're next to a school, when there's no explicit sign, when a particular city changed its default speed limit, etc. Who would have imagined 10 years ago that Google Maps would photograph every house? The same happened with speed limits.
Some sat-nav companies like TomTom have so many devices in use that they can report traffic in real time.[1] It wouldn't surprise me if similar averages over a longer period are used to identify speed limits (e.g. what is the mode speed, rounded to a multiple of 10, when speeds do not appear constrained by traffic).
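The heuristic described there is easy to sketch in Python. To be clear, this is purely illustrative — the bucket size and the function name are made up, not anything TomTom has documented:

```python
from collections import Counter

def inferred_limit(probe_speeds_kmh):
    """Guess a road's speed limit from crowd-sourced probe speeds:
    round each free-flowing observation to the nearest multiple of 10
    and take the most common bucket. Illustrative heuristic only."""
    buckets = [round(v / 10) * 10 for v in probe_speeds_kmh]
    return Counter(buckets).most_common(1)[0][0]
```

Fed a set of free-flow speeds like [48, 52, 51, 49, 47, 62], it would infer a 50 limit; a real implementation would first have to filter out congested periods, as the parent notes.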
> This machine will keep pace with traffic. OK. Does that mean it will break speed limits?
If the highway speed limit is 55 and everybody is going 70, you'll make the road a more dangerous place by going 55. I don't know how it works from a legal standpoint, but from a practicality and safety standpoint, I'd rather go with the flow.
>No. Parallel parking is not a complex nor difficult maneuver. It is total beginner territory... Anyone buying this car to avoid such mundane tasks isn't someone with whom I want to share the road.
Since you can't choose who you're on the road with, doesn't that make this feature more appealing?
I believe their response will be that the driver is still in control of the car, with the hands on the wheel and feet on the pedals. Any abnormal operating conditions will still be the responsibility of the driver.
You could look at it like crowd-sourcing the speed limit... if I am following at a good distance in my lane and the person in front of me sees a cop/schoolgirl/construction worker and slows down, I should slow down too.
"I wasn't going any faster than the guy ahead of me" isn't a defense. But, should an accident happen and things wind up in a courtroom, "The car picked the speed" is a very good argument for joint liability (ie "it's partly Tesla's fault").
FYI, most every manufacturer of motor vehicles sets its speedometers slightly high. On motorcycles it can be as much as 10%. This is to avoid any accusation that inaccuracies in the product (e.g. changing tire diameter) might result in someone going faster than indicated.
> This machine will keep pace with traffic. OK. Does that mean it will break speed limits?
IIRC, you set cruise control at the speed you'd like to go. It'll try and keep that speed where safe, but if traffic is moving slower it'll slow to match. It won't try and follow someone doing 90 if you put cruise to 65.
Parallel parking is not complex or difficult. But if you're out of practice it's slow and annoying to fidget your position. It won't be in your muscle memory, and it's nice to push a button and not deal with tight navigation.
This is similar to what other high-end cars have, lane-keeping and smart cruise control, usable only in freeway-type situations. "Drivers must keep their hands on the steering wheel." Mercedes calls this "Active Lane Keeping Assist", and has offered it for several years now. Here's someone using it with a can taped to the steering wheel to defeat the "hands on steering wheel" requirement.[1] All the major manufacturers have demoed this.
This is NHTSA Level 2 automation (Combined Function Automation).[2] ("An example ... is adaptive cruise control in combination with lane centering.") Google is at Level 3 (Limited Self-Driving Automation) and going for Level 4 (Full Self-Driving Automation).
The big problem at Level 2 is keeping drivers from using it when they shouldn't. Level 2 doesn't understand intersections at all, for example. Or pedestrians, bicycles, baby carriages, deer, snow, etc. That's why the major manufacturers are being so cautious about launching it into a world of driving idiots.
Volvo has now officially taken the position that if an autonomous car of theirs gets into a crash, it's Volvo's fault and they will accept liability.[3] Now that Volvo has said that, other car manufacturers will probably have to commit to that as well.
To me a big issue is the perception of the autonomous driving mode to other road users. How will other drivers and pedestrians know that this car is being driven automatically and treat it accordingly?
In the Bosch video[1] (also linked elsewhere on this thread), the system jumps out of autonomous mode at the first junction. The driver has to re-engage it. Drivers who are not closely monitoring the car situation might not realise what's going on and take 5-10 seconds to realise they need to re-engage autonomous mode or to take over fully.
Some following-drivers will get agitated by this, in the same way that some do with elderly or learner drivers and make silly impatient manoeuvres. If the state of the autonomous car is clearly communicated to other road users then they might be prepared to make allowances.
During this transition phase of mostly human drivers vs. autonomous drivers it will be these situations that frame people's perception of the merits of autonomous vehicles.
That and how the systems react to dangerous swerving or lane changing or conflict situations where collisions are impossible to avoid. The "Google system" or the "Tesla system" will be tested millions of times a day and face intense public scrutiny whereas human drivers are treated individually.
Any thoughts on the potential manufacturer liability for software bugs that lead to accidents?
Certainly there are a lot of precedents with anti-lock braking systems, cruise control, etc. But this stuff seems like such a massive expansion of complexity of software control I wonder what will go down in the courts when the inevitable happens.
That was already released in the previous version via traffic-aware cruise control and emergency braking.
Specifically, the Model S locks onto the car in front of you and matches its speed, maintaining a certain number of car lengths behind it (which you can set). When the car changes lanes, your Model S speeds up a bit to fill the gap, and when a car cuts into the lane, the Model S slows down (although this one is a bit scary). If the car in front comes to a complete stop, the Model S also smoothly slows down into a complete stop.
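That following behavior can be sketched as a simple proportional controller. The gains, names, and structure below are invented for illustration (Tesla's actual TACC controller is not public); the sketch just shows how "match the lead car's speed while correcting the gap" combines into one acceleration command:

```python
def tacc_accel(gap_m, desired_gap_m, own_speed_ms, lead_speed_ms,
               k_gap=0.1, k_speed=0.5):
    """Minimal proportional follow controller (illustrative gains).
    Returns a commanded acceleration in m/s^2."""
    gap_error = gap_m - desired_gap_m            # positive: too far back
    speed_error = lead_speed_ms - own_speed_ms   # positive: lead pulling away
    return k_gap * gap_error + k_speed * speed_error
```

With the gap too large and speeds matched, the command is positive (speed up to fill the gap, as described above); when a car cuts in and the gap error goes negative, the command goes negative and the car slows. A production controller would add limits, smoothing, and a stop-and-hold state on top of something like this.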
This is correct. My Model S will fully come to a complete stop on the highway and will alert me if I'm approaching a vehicle too quickly and it senses a risk of collision. I've had the car save me from an accident at least once by applying the brake for me.
My wife's 2015 Mazda 3 has this feature as well although for some reason, it doesn't come to a complete stop. When needing to completely stop due to a car in front, it drops down to 2-3 MPH then beeps to let you know you're on your own for the last bit.
I for one would hope Autosteer isn't engaged in heavy traffic... although the ability to autopilot during a SigAlert on I-10 in downtown LA on a Monday morning does sound worth the price of the car and then some.
TACC in heavy traffic is worth the price of admission alone, I was shocked at how much unconscious cognitive load there is associated with stop and go traffic.
You still need to pay attention but I definitely felt much less exhausted after they rolled out that feature. Really excited to see how autopilot performs.
Has anyone made a TACC add-on for normal cars? The self-driving stuff I'm reserving judgement on, but a cruise control that would let me automatically match the speed of the car in front of me would be awesome. I've wanted one for years...
Disagree. Adaptive cruise control has been liberating for me, knowing that if heavy traffic comes screeching to a halt at just the minute I glance away, the car will recognize it and brake accordingly.
Are they going to take responsibility for accidents like Volvo will? While none of the features revealed are new to the industry, he does a great job of marketing them. The ace Tesla has is over-the-air updates, something the other manufacturers will need to work out, hopefully with an industry-wide standard that can be regulated properly to ensure safety, security, and liability.
As someone who spends a fair amount of time traveling between countries that drive on different sides of the road... I am always getting the turn signal and windshield wipers mixed up. So I doubt I can use the auto lane change feature.
> I am always getting the turn signal and windshield wipers mixed up
That happens even between cars that drive on the same side. It seems different car manufacturers have different standards as to where the turn signal and windshield wiper should be.
My biggest problem with RHD is that my right arm smacks into the door when I want to change gears for the first 50 km or so. After that it usually gets better.
I know what you mean! Everyone told me that I'd have trouble shifting gears with my left hand, but that wasn't a problem at all. The wipers, however, went on every time I turned.
This is a cool technical achievement, but I don't see the practical use, nor does it seem like a big win for Tesla drivers. So it allows drivers to kind of tune out while driving on the freeway?