This shows much the same info as Chris Urmson's TED talk in 2015.[1] Simplified for a non-technical audience, cleaner graphics, and without Urmson's clips of the vehicle encountering such things as someone in a powered wheelchair chasing a turkey with a broom. The tech video is more informative. It shows more of how sensing and planning interact. "Fences" appear in the graphics showing the keep-away area for a moving obstacle such as a bicycle.
Waymo is going about this the right way. They sense as much as they can, build a detailed 3D model of the world, and use machine learning only to help classify objects. Classifier failures aren't too serious; the obstacle still gets avoided, but you might not get as smooth a ride, since the behavior prediction won't be as good. Compare Tesla's self-driving videos.
The bit with the person in a wheelchair chasing a duck with a broom is great. And it illustrates two things at once: there is no hope that an AI driver will ever understand everything that is going on around it. But fortunately, you don't have to fully understand a broom-wielding, duck-chasing wheelchair rider to drive carefully around it.
> the person in a wheelchair chasing a duck with a broom is great. And it illustrates simultaneously two things: there is no hope that an AI driver will ever understand everything that is going on around it
I don't see what grounds you can have for saying that. We don't know if we'll be able to develop human-level AI (or beyond).
Another point is that I'm sure there are lots of unusual circumstances that human drivers sometimes face and are poor at making a clear snap judgement of -- of what it is they're seeing and how it will move (and thus how they'll need to react to it).
There's still the issue that self-driving cars are too courteous compared to normal drivers. I wonder how/if they'll fix that. Imagine sitting in a self-driving car that can't "push" itself through traffic; or imagine sitting in a car behind that car ...
I actually live in the greater Phoenix area and go to school at ASU, so I see Waymo and Uber autonomous vehicles around everywhere. I frequently see these things handle aggressive rush hour traffic with university pedestrians jumping out into the middle of traffic. They stay safe, but they clearly have a sense of when to go for it.
I’m also thinking of how human drivers will quickly learn to exploit this and get a feel for how to push the algorithm to gain a slight advantage. Like advancing a bit and stopping, or edging a bit closer, in order to merge in front of a self-driving car.
Changing lanes is a normal and necessary part of driving. I'm continuously amazed by people on the internet who talk about "cutting in front" like it's something only other people do, or who are apparently proud of preventing others from getting into the lanes they need to be in.
Making the car more aggressive is not rocket science. The trouble is that this is where the safety benefit of self-driving cars will start to diminish. If you watch Chris Urmson's 2016 presentation at SXSW, he says the only accident Waymo caused on a public road was when their car tried to push in front of a bus.
Self-driving cars may win out there because they have full-circle sensing and can track many objects at once. They may be better at slotting into a gap than humans delayed by head-turning time.
Another advantage self-driving cars have is reliable range rate. Humans are terrible at judging the speed of approaching objects. But radars are great at that.
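As a rough illustration of that range-rate point (the 77 GHz carrier and the 5 kHz shift below are assumed example figures, not anything from the video), the Doppler relation lets a radar read closing speed directly from the frequency shift of the echo:

```python
# Doppler range-rate sketch. 77 GHz is a common automotive radar band
# (an assumption here, not a Waymo spec).
C = 3.0e8               # speed of light, m/s
f_carrier = 77e9        # carrier frequency, Hz
wavelength = C / f_carrier

def range_rate(doppler_shift_hz: float) -> float:
    """Closing speed in m/s; the factor 2 is because the wave travels out and back."""
    return doppler_shift_hz * wavelength / 2

# A 5 kHz shift at 77 GHz corresponds to roughly 9.7 m/s (~35 km/h) of closing speed.
print(round(range_rate(5e3), 2))  # -> 9.74
```

No estimation loop, no judgement call: the speed falls straight out of the physics, which is exactly the part humans are bad at.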
The reflexes and attention on the self-driving car are going to be dramatically better; if there is a safe opening, the self driving car will be much better at exploiting it than a human would.
I hope this will overcome the "advantage" that humans gain by exploiting unsafe openings.
I mean, as is, I'm not a particularly aggressive driver; I lose some car-lengths because of that, and personally? I think that's okay.
Clearly this is an orientation video that is going to play in the waiting area for people who are using a self-driving car for the first time. It's calm, everyone around it is calm, and nothing unexpected happens during the entire video.
I of course want to see the video where the human drivers do things that human drivers do. They illegally pull out of left turn lanes to go straight, they fail to stop when they turn right in the intersection you are about to enter, they make an illegal u-turn when they realize they turned the wrong way. What does the car do when someone tailgates it? What does it do when pedestrians step off the curb but then don't cross? Watching how the car navigates those situations would make me feel a lot better than watching it drive around where everyone was following the rules.
There's little upside for them to release videos from such testing. It's not sexy, it's not polished, and releasing them would tip off competitors about how advanced (or not) they are.
Exactly. Or a car stopped blocking your lane, so you need to break the rules and go into the oncoming lane to pass. So many odd situations are possible.
It's like "The Truman Show" for autonomous vehicles.
The car was programmed to be sentient and think he was giving rides to real people... but it was all fake so they could make a fancy tech demo. And when the car finds out how people act in the real world, that's the beginning of judgement day.
Them programming for “unexpected” situations is not quite what is being talked about though.
The above scenarios are great, but I like the one Mercedes got in a little trouble for: “Is the car's priority your safety, or other people's? You bought the expensive car; it should protect you first.” It gets into a trolley problem pretty quickly.
I don’t. Basically, someone high up in that division claimed in some venue that of course your $80,000 car is going to put your safety first, and it was a bit of an issue. I don’t remember if it was an internal Daimler event or a media snafu.
There is a logical answer though. He’s right. Each car is going to look out for itself primarily. They have to. No car is going to have a “sacrifice” mode.
Raises an interesting privacy question, since the car is scanning all of the pedestrians with lidar and radar as it goes along. At what point does it capture gait analysis and allow home base to tell the cars to be on the lookout for someone with a particular gait [1]?
Or the fleet could analyse your driving style and adjust its reactions to you and your car.
At some point they'll flag some other drivers as idiots and avoid them, speed past, or fall back to a safer distance. Other drivers will be aware of what the Waymo car is doing - oh look, it sped past that OAP, why? Are they jabbing the brakes unexpectedly? Drifting out of lane? Hm.
Then this data is going to get fed into other agencies and insurance dudes...
This whole thing is going to get messy and just push everyone to get rid of cars and just use taxi services.
I'm curious why the downvotes on this. SDCs will absolutely be an immense tool for insurance companies to price their product according to the risk they perceive from specific driving actions.
They already do this, however their source of data is just the DMV via traffic violations.
This can also be a source of (small) revenue for mapping providers and sensor aggregators (eventually).
Having the car's rendered view visible to passengers inside is a great idea for at least the first couple of generations of self-driving cars, especially as prominent as it is in this car. It helps people get comfortable with the idea and feel better knowing the car is picking up on everything in its surroundings.
I'm a bit skeptical about whether those graphics are real, though -- they look quite detailed, beyond what I feel the sensors can see, but maybe I just underestimate this technology.
The screens are also (unfortunately) a great place to blast ads at a quite literally "captive" audience, but I digress.
They drew a comparison to the early days of elevators, when everyone was terrified of them, especially after transitioning away from having a lift operator controlling it. The solution at the time was the big red emergency stop button.
The first elevators in Europe (mostly legacy ones in Germany now I think) that never stop and don't have doors still terrify me.
https://en.wikipedia.org/wiki/Paternoster
If you watch closely, the periodic "pulsing" on the view screens shows the actual lidar data. The rendered obstacles (cars/peds) are based on the vehicle's model of the world after processing that data. The buildings presumably come from its HD maps.
Disclaimer: I'm not involved in AV, but I have a telecommunications background that lets me make some (hopefully) not-too-stupid guesses about how a LIDAR can behave here. If some expert in the field can give more authoritative feedback, that'd be even better.
Looking at [1], a typical LIDAR uses 10 ns pulses at a 140 kHz pulse rate. So one LIDAR uses only 0.14% of air time. A LIDAR is also a rotating device, looking at reflections only from a narrow angle at any given time (how large this angle is, TBD). That also reduces the risk of looking at another reflection in the right direction at the right time. Raw collisions should be rare, even in a crowded environment (your 1000s of cars is extreme; 100s will already be very crowded).
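A quick sketch of that duty-cycle arithmetic, plus a deliberately crude collision model (the 100-lidar count and the 1° angular acceptance are invented assumptions, just to put a rough number on "rare"):

```python
# Figures quoted above: 10 ns pulses at a 140 kHz repetition rate.
pulse_width_s = 10e-9
pulse_rate_hz = 140e3

duty_cycle = pulse_width_s * pulse_rate_hz   # fraction of time "on air"
print(f"duty cycle: {duty_cycle:.2%}")       # 0.14%

# Crude model: chance that at least one of N independently firing lidars
# is transmitting at any given instant, then scaled by an assumed ~1 degree
# angular acceptance out of a 360 degree sweep.
n_others = 100
p_some_pulse = 1 - (1 - duty_cycle) ** n_others   # ~13% in time alone
angular_fraction = 1 / 360                        # invented acceptance angle
p_effective = p_some_pulse * angular_fraction     # ~0.04% per instant
print(f"effective collision chance: {p_effective:.3%}")
```

Even this pessimistic model leaves raw collisions well under a tenth of a percent once direction is taken into account, which matches the intuition above.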
Then, if it's using the usual techniques for signal detection, each pulse is encoded with a randomized pattern, so that a LIDAR can recognize its own pulses' reflections using correlation. A given LIDAR will only consider reflections carrying the right coded sequence. That will further reduce the risk of wrong interpretation, depending on how many bits are in the synchronization sequence.
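A toy illustration of that correlation idea (the code length, noise level, and pulse positions are all made up):

```python
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=32)    # this lidar's random chip sequence

# Received signal: background noise, an interfering pulse carrying a
# *different* code, and our own echo starting at sample 200.
signal = 0.5 * rng.standard_normal(400)
other_code = rng.choice([-1.0, 1.0], size=32)
signal[60:92] += other_code                # someone else's pulse
signal[200:232] += code                    # our echo

# Correlating against our own code makes our echo stand out: a full match
# sums to 32, while a foreign code only produces a small random-walk value.
corr = np.correlate(signal, code, mode="valid")
echo_at = int(np.argmax(corr))
print(echo_at)                             # peak at the true echo position
```

The interferer's pulse barely registers because its chips don't line up with our code, which is the whole point of the randomized pattern.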
Lastly, at 140 kHz you can do quite a lot of filtering and still have a fast update. If, by extraordinary bad luck, another reflection comes from the right direction in the right time window with the same sync pattern, it would create an odd input that's unlikely to fit one's own echo pattern, so it could be filtered out. And it's not likely to last long, considering all that has to align.
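A minimal sketch of the kind of temporal filtering meant here (the 2 m jump threshold and the 3-sample window are invented numbers):

```python
# Toy temporal-consistency filter: reject a single echo that disagrees
# wildly with recent readings for the same beam direction.
def plausible(new_range_m, history, max_jump_m=2.0):
    """Accept a reading only if it is near the average of recent readings."""
    if not history:
        return True
    recent = history[-3:]
    return abs(new_range_m - sum(recent) / len(recent)) <= max_jump_m

history = [40.2, 40.0, 39.8]          # obstacle slowly closing, ~40 m away
print(plausible(39.5, history))       # True: consistent with history
print(plausible(12.0, history))       # False: likely a spurious echo
```

At 140 kHz a single rejected return costs essentially nothing; the next genuine echo arrives microseconds later.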
That's my armchair analysis based on a quick search and a single set of slides ;) But it's enough to see that interference is not likely to be a problem. And I would expect this question to have been considered by AV LIDAR experts in much more detail.
I think the problem is more the analog side: If your sensor is saturated by too much light, it might not be able to pick up on your signal anymore. Direct LOS to another high-powered source is probably a lot brighter than what LIDAR normally detects. Then, all the reduction from sync pattern etc don't apply, and the sensor might be blinded considerably longer than the actual signal length if driven to saturation.
Possibly, yes. I was assuming RF-like behavior, where the A/D converter does saturate but the blinding doesn't last longer than the interfering signal (or only negligibly so). But those are timescales of microseconds, not nanoseconds. At the ns level it may be a bit different, but if it's not too different, the low air-time usage would avoid problems here too. A blinding pulse would mean the sensor goes blind for a patch of space: a narrow angle around the interfering direction, between the two distances corresponding to the start and stop of the blinding pulse relative to the lidar's own previous pulse transmission. With the vehicles moving, such a "lost patch" would not stay fixed either, and interpolation from past and surrounding data is possible.
I'm also wondering about the impact on pedestrians' health. If you add up all the waves we're being exposed to, at different wavelengths, I'm eager to read studies confirming it's absolutely harmless.
But I’ll take all of this a LOT more seriously if there were a serious effort in V2V / V2I (vehicle-to-vehicle/infrastructure) systems - AND - if semi trucks that take known, constant routes were being deployed.
It seems obvious to me that SDCs should be a Railroad v2 at first. A vehicle covers a long, known route back and forth. That seems far easier and immediately makes money.
"Halt and stop resisting, citizen 9874653792! Google has observed you engaging in the consumption of unauthorized plants. Prepare for re-education factory enrollment."
Since the video said the robocar could predict where other objects were going... any chance of a video where the robocar swears and calls another driver a f*cking idiot? I would watch that video...
I understand Google/Alphabet is doing the "absolute regulatory minimum" here, as it usually tends to do in everything else, but what's the point of having a self-driving car with a wheel you can't access in case of a self-driving system breakdown?
The manual control should be easily accessed by the passengers. Will Waymo even allow people to sit in the front seat at the wheel and allow them to control it whenever they want?
> what's the point of having a self-driving car with a wheel you can't access in case of a self-driving system breakdown?
Regulatory arbitrage.
Also if the car is really broken down, as in doesn't start, you can get into the front seat and steer + push like a normal car, or emergency services can do that after an accident and so on. Likewise if the sensor system fails this provides a cheaper method than calling a tow truck to move it back to their shop.
In normal situations where manual intervention is required, I believe the plan is to have a Waymo employee drive it remotely, not have the passengers drive it. Apart from the obvious business reasons for this (partially discussed below, also insurance), this should mean that you don't even need to know how to drive to use one of these cars, which is a clear advantage.
> Will Waymo even allow people to sit in the front seat at the wheel and allow them to control it whenever they want?
Why the hell would they do that? The whole purpose of this vehicle is self driving car research, they don't get anything out of letting you drive it, so they'd just be providing you a free car.
Hah, my thoughts exactly. Besides, when was the last time you jumped in a taxi and asked the driver to let you drive? Is that even a thing people do anywhere in the world?
> In normal situations where manual intervention is required, I believe the plan is to have a Waymo employee drive it remotely, not have the passengers drive it.
What if the car doesn't have a network connection for whatever reason?
Hopefully it doesn't need a network connection to drive...
But if it doesn't have a network connection and manual intervention is required, that's just like the "sensor failed" case, except you might have to rely on the passengers contacting Waymo to tell them where the car is. Which they'll be motivated to do, since they are somewhat stranded.
The steering wheel is there because these cars are modified Chrysler Pacifica minivans. It is easier all around just to leave the wheel as it is.
There is no need for a passenger to take manual control. If the car or the self-driving system has a problem, the car will pull over and another Waymo self-driving car will arrive to let you complete your journey.
This reminds me of Stephen J Gould's explanation for why men have nipples: In utero the male and female fetus initially develop the same features, and only after the nipples have been formed do males and females start to look different. Nipples aren't useful for men, but men have them because they are useful for women.
> If the car or the self-driving system has a problem, the car will pull over and another Waymo self-driving car will arrive to let you complete your journey.
So if a (perhaps malicious) environmental configuration consistently causes Waymo cars to fail, you’ll just collect an ever-increasing number of them at that point?
A lot like ants encountering the inside of a freezer. In this case, though, a human is going to be told "there is a broken car here someone has to come collect", and they will hopefully notice if they see two or three of them at the same location (seeing as how cars don't travel instantly so there is time to notice stuff in general).
I think the point is the car has to be able to drive completely independently if we are to allow cars to travel without anybody inside at all, which includes handling emergency situations. And of course as some others have pointed out the people inside may be old, handicapped, children etc - expecting them to take control may make things worse.
As far as front seats are concerned, why even have a front seat? Have people sit in a circle facing each other, with a table in the middle - so you can have a conversation. Not having a wheel allows us to reimagine the interiors of cars which are currently built around the driver.
I don't even think you have to appeal to people being old or children. The idea that a distracted human being is going to be able to take over for a machine with 360 degree vision in an emergency is absurd, no matter how able they are.
The case I was responding to was a car breakdown. Of course, the other case would be an emergency while the car is moving (which is what you are talking about), and here I think the evidence is compelling. Handing over control to a distracted driver at the worst possible moment is not likely to work well. It doesn't work well even for trained pilots, as the Air France crash some years ago showed.
"what's the point of having a self-driving car with a wheel you can't access in case of a self-driving system breakdown?"
If you sent your seven-year-old to baseball practice in your self-driving car, or you got wasted at a party, you're better off if the car stops and calls for help than letting someone incompetent try to take over.
That would be great, if not having the ability to drive were the only reason you wouldn't normally send a 7-year-old out on their own in a car to travel a number of miles without an adult present.
[1] https://www.ted.com/talks/chris_urmson_how_a_driverless_car_...