Hey Dan, could I ask you a few questions about what you learned at Starsky? I have a robotics background and what they tried to do is very very interesting.
Hey Stefan it’s very nice to meet you! I spent many years at the GRASP lab at UPenn and most of my friends from there are at Waymo/Nuro/Cruise now. I do not work at a self-driving car company, but I 100% agree with your sentiment regarding the importance of assessing safety; this has obvious implications for how these cars are insured, and therefore priced and financed. On a separate note, I met a long-haul trucker/Russian immigrant in a hacker hostel in Mountain View, and his story was eye-opening. He said it was the worst job he ever had, and he was “treated like an animal”. So alleviating his problem is very worthwhile in so many ways. As for questions:
1. Referencing page 5 of your VSSA (1), how were you able to quantify "when weather, traffic or other conditions made driving unsafe"? It seems like a tricky problem because this region of the ODD isn't discrete like {warehouse, ramp, highway}, but instead depends on the quality of your CV stack as well.
2. In hindsight and with the company behind you, do you think it makes sense for each car company to do self-reporting? Or should there be some sort of government oversight/technical validation process done by an external third party?
3. I would love to chat more! It must be a very emotional time for you, so I would understand if you do not wish to speak about it. But if you do, could we set up a time? You can reach me at lingxiao@seas.upenn.edu.
3) Sure! It isn't too hard to figure out how to email me, so send me a note or DM on twitter or LinkedIn or whatever.
2) AV has been a great test case for federalism, and I think we've seen different regimes work differently. For the stage the industry is at, I think insurance requirements are sufficient (insurance carriers have the sophistication to figure out if you're being unsafe). California's regime is overly prescriptive in a way that confuses things more than it improves safety.
To be clear - if we were closer to deployment I'd advocate for a way tighter regulatory regime.
1) We figured out a programmatic way to measure ODD. Trying to specify all versions of ODD is a fool's errand, and you're right that it would change a lot based on how good your stack is.
ODD = Operational Design Domain. The conditions your system can drive safely in - rain, snow, sunshine, glare, high traffic, no traffic, pedestrians, etc ad infinitum
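To make "measure ODD programmatically" concrete, here's a rough sketch of the general idea. Everything below is invented for illustration (the field names, thresholds, and the checks themselves) - it's not Starsky's actual method, just one way to turn "are we inside our ODD right now?" into something you can log and count over miles driven:

```python
# Hypothetical sketch: checking current conditions against a declared ODD.
# All thresholds and field names are made up for illustration.
from dataclasses import dataclass

@dataclass
class Conditions:
    rain_rate_mm_h: float         # from a weather feed or rain sensor
    visibility_m: float           # estimated visibility
    traffic_density: float        # vehicles per km of lane, from perception
    perception_confidence: float  # 0..1, e.g. mean detection confidence

@dataclass
class OddLimits:
    max_rain_rate_mm_h: float = 5.0
    min_visibility_m: float = 300.0
    max_traffic_density: float = 40.0
    min_perception_confidence: float = 0.8

def in_odd(c: Conditions, limits: OddLimits) -> tuple[bool, list[str]]:
    """Return (inside_odd, list of violated limits)."""
    violations = []
    if c.rain_rate_mm_h > limits.max_rain_rate_mm_h:
        violations.append("rain")
    if c.visibility_m < limits.min_visibility_m:
        violations.append("visibility")
    if c.traffic_density > limits.max_traffic_density:
        violations.append("traffic")
    if c.perception_confidence < limits.min_perception_confidence:
        violations.append("perception")
    return (not violations, violations)

# Logged continuously, the fraction of time spent out-of-ODD (and why)
# becomes a measurable quantity rather than a hand-written scenario list.
ok, why = in_odd(Conditions(8.0, 250.0, 20.0, 0.9), OddLimits())
print(ok, why)  # False ['rain', 'visibility']
```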
TBH - In the early days the "cool AV people" seemed to act as if teleop was an admission of bad engineering (your team isn't good enough to perfect L5). If everyone else is right on the cusp, and you're not trying, you must be some sort of loser.
There are still a number of folks who think that teleop can never be safe - surprisingly sophisticated people hold onto that dogma. We wrote a whitepaper about it for investors, and I'll share that at some point as I open up more and more of the Starsky vault.
Those sophisticated people were correct, at least for now (never say never). Teleoperation can work in some limited specific circumstances. But latency, reliability, and coverage of current wireless data networks are insufficient to allow for widespread teleoperation on public roads.
Lidars break. Radars fail. Cameras run into issues. Drive by wire systems malfunction. Computers vibrate to death.
All parts of an autonomous system break. Safety engineering is measuring those breakages, and designing a system that is safe when it inevitably breaks.
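The quantitative version of that argument is simple even with toy numbers. A minimal sketch (the per-hour failure probabilities below are invented, and real safety analyses go far beyond multiplying independent rates):

```python
# Illustrative only: made-up per-hour failure probabilities for a few
# subsystems, assumed independent, to show why the design question is
# "what happens when X fails" rather than "will X fail".
rates = {
    "lidar": 1e-4,
    "radar": 5e-5,
    "camera": 2e-4,
    "drive_by_wire": 1e-5,
    "compute": 1e-4,
}

p_all_ok = 1.0
for p_fail in rates.values():
    p_all_ok *= (1.0 - p_fail)

p_any_failure = 1.0 - p_all_ok
print(f"P(at least one subsystem fault per hour) ~ {p_any_failure:.2e}")
# Across a fleet driving many thousands of hours, faults are routine events,
# so the system has to reach a safe state whenever one occurs.
```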
If you're the operator, you can choose to only drive on routes that should have sufficient connectivity. And if your remote driver is only issuing high-level commands (i.e. not responsible for safety), latency stops mattering so much.
The problem here is availability bias - you've seen your phone fail, so you know that telecom links can fail. As a layperson you might not think about how the rest of the system suffers from the same kinds of failures - but it does. You engineer around them.
The situation becomes clearer if you look at traffic today, with a human operator in each car. In that model too, engineering and human failures happen. Engines break, tires get punctured, brakes fail, and humans make errors. We still accept the deaths and injuries caused by car traffic, and we don't prohibit cars.
So it is more than sufficient if you can show that AVs - on average - cause less harm than human-operated vehicles.
To your specific question: when the AV loses its connection, it would do the same thing a human driver does when a tire is punctured: turn on the warning lights, slow down, and stop at the roadside. As in other failure situations, that might cause an accident in some cases. However, that is fine, as long as it is rare enough.
(disclaimer: I'm not working in AV tech, I don't know if current AV technology handles this case as imagined)
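For what it's worth, the "treat a dropped link like a flat tire" behaviour can be sketched as a tiny supervisor. Purely illustrative - the timeout, deceleration rate, and toy timeline are made up, and no real stack is this simple:

```python
# A minimal sketch of the connection-loss fallback described above: hazards
# on, slow down, stop at the roadside. Assumed values, not any real AV's logic.
LINK_TIMEOUT_S = 2.0   # heartbeat silence before we react (assumed value)
DECEL_MPS2 = 1.5       # gentle deceleration per step (assumed value)

def fallback_action(seconds_since_heartbeat: float, speed_mps: float) -> str:
    if seconds_since_heartbeat <= LINK_TIMEOUT_S:
        return "normal_driving"
    if speed_mps > 0.5:
        return "hazards_on_and_decelerate"   # aim for the shoulder while slowing
    return "stopped_on_shoulder"

# Toy timeline: the link drops at t=3s while the truck is doing 25 m/s.
speed = 25.0
for t in range(0, 25):
    t_since_hb = max(0.0, t - 3.0)
    action = fallback_action(t_since_hb, speed)
    if action == "hazards_on_and_decelerate":
        speed = max(0.0, speed - DECEL_MPS2)
    print(t, f"{speed:4.1f} m/s", action)
```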
That doesn't make the situation any clearer. Coping with mechanical failures isn't the primary concern. The issue is how to handle edge cases where the AV software is unable to decide on any course of action.