> Moderate rain is not an acceptable condition for cars to refuse to drive. Especially in the middle of long trips.
It is absolutely acceptable for a self-driving system to refuse to drive in any conditions it can’t handle, but it must also deactivate in a safe way when such conditions arise during operation (e.g. pull over to the side of the road if the driver hasn’t positively acknowledged resuming control).
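For concreteness, here’s a minimal sketch of the kind of supervisory handover loop I have in mind (the names and the timeout value are hypothetical, not any vendor’s actual logic):

```python
# Hypothetical sketch of a safe-deactivation handover loop; all names
# and the timeout value are made up for illustration.
from enum import Enum, auto


class DrivingState(Enum):
    AUTONOMOUS = auto()          # system driving within its design envelope
    TAKEOVER_REQUESTED = auto()  # conditions exceeded; waiting on the driver
    DRIVER_CONTROL = auto()      # driver positively acknowledged control
    PULLING_OVER = auto()        # no acknowledgment; execute a minimal-risk stop


TAKEOVER_TIMEOUT_S = 10.0  # hypothetical grace period for acknowledgment


def step(state: DrivingState, conditions_ok: bool,
         driver_acknowledged: bool, seconds_waiting: float) -> DrivingState:
    """One tick of the loop: the system never just disengages in place."""
    if state is DrivingState.AUTONOMOUS and not conditions_ok:
        return DrivingState.TAKEOVER_REQUESTED
    if state is DrivingState.TAKEOVER_REQUESTED:
        if driver_acknowledged:
            return DrivingState.DRIVER_CONTROL
        if seconds_waiting > TAKEOVER_TIMEOUT_S:
            return DrivingState.PULLING_OVER  # safe-stop fallback
    return state
```

The point is only the shape of the state machine: there is no transition from autonomous operation straight to “off”, only to a driver who has acknowledged control or to a safe stop.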
Commercial viability of any given system is a separate issue, but it’s a pretty well-accepted moral responsibility not to accept control of a vehicle if you’re unable to operate it safely, regardless of the consequences of that refusal(1). I see no reason not to hold consumer-facing self-driving systems to the same standard. Otherwise, they require specialized training for the operator to recognize the situations in which they are safe to use.
(1) Actual life-and-death situations change this a little, but the future availability of rescue personnel and equipment generally weights the conclusion towards operating safely in those situations as well.
The pull-over-to-the-side-of-the-road solution doesn’t work at scale. What happens when it starts snowing on a 10-lane freeway during rush hour and 50% of the cars are self-driving with this limitation?
It’s unlikely that, in a region that gets such snowstorms, a self-driving system that can’t handle them reaches 50% market penetration. That seems more like a “we’ll cross that bridge when we come to it” scenario.
OK, then QED. The point some are making is that, with at least the current type of lidar, that bridge may not exist. It may make more sense to devote resources to better radars and radar processing.
And I’m not disputing that at all. None of my statements say anything about any particular self-driving technology, as I’m not an expert in the various technologies. My point is that what’s “acceptable” is fundamentally a moral stance, and I stated the minimum bar I believe all drivers (automated or not) need to meet.
This is simply a constraint that any system needs to work within, and it’s entirely possible that precludes the commercial viability of LIDAR-based systems. It’s also possible that there’s some niche market in which fair-weather-only systems can be successful long before the general problem is solved, and we shouldn’t artificially throw those out as “unacceptable” when there’s a reasonable framework for them to operate under.