Self-Driving Cars Can Handle Neither Rain nor Sleet nor Snow (bloomberg.com)
210 points by bookmtn on Sept 17, 2018 | 332 comments



The article is kind of vague. What they seem to be doing is operating a ground penetrating radar under the vehicle so that underground details can be used as positional references. This is useful only if you have a map which includes that data. At best, this tells you where you are. It tells you nothing about what's ahead.

LIDAR units that return "first and last" returns are helpful in dealing with rain and fog. The first return (nearest thing seen) will be noisy in rain, but last (most distant thing in that direction) should be stable if it's from a solid object. "First and last" is used with aerial LIDAR surveying; "first" gives you the treetops, "last" gives you the ground.
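For anyone curious what using the last return looks like in practice, here's a tiny illustrative sketch (plain Python, not any vendor's API; the data layout is invented): keep only the most distant echo per beam and treat nearer echoes as rain/fog clutter.

    # Assumes each beam yields a list of (range_m, intensity) tuples ordered by arrival time.
    def last_return_ranges(beams):
        stable = []
        for returns in beams:
            if not returns:
                continue  # no echo at all for this beam (e.g. open sky)
            last_range, _ = returns[-1]  # most distant echo, likely a solid object
            stable.append(last_range)
        return stable

    # Example: first echo at 3.1 m could be a raindrop, last echo at 42.7 m a wall.
    print(last_return_ranges([[(3.1, 0.2), (42.7, 0.9)]]))  # -> [42.7]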

Range gated imagers for seeing through fog and dust have been available for over a decade. These are active devices which allow filtering out anything that's outside a narrow image range gate. You can adjust the gate and get images layer by layer. This technology is used by the military and seems to be somewhat restricted.[1][2][3] In addition to fog, it can show what's behind a camouflage net, objects concealed by brush, and such.

The submillimeter radar people are slowly making progress, too.[4] Automotive radars are currently running at 77GHz, which is low resolution compared to LIDAR. The resolution gets better with frequency. There are now systems in the 300GHz range.

So the sensors needed are available now, in expensive forms, and can get cheaper.

[1] http://www.dvsmil.com/PDF/Imaging_Lidar_Seeing_Through_Cloud... [2] https://www.youtube.com/watch?v=hOTo-6jXOYo [3] http://www.sensorsinc.com/applications/military/laser-range-... [4] http://consortis.eu/


Small nitpick: the range resolution improves with increasing pulse/chirp/waveform bandwidth, not with increasing carrier frequency. It's true that at higher carrier frequencies there is more bandwidth available, and thus better range resolution is possible, but the relationship is slightly different from "delta_r ~ frequency".


The relationship is cT/2, where c is the speed of light and T is the pulse length, if the pulse is unmodulated (plain CW within the pulse). Note that in this case T can limit the minimum range you can detect. Or you can use c/(2B), where B is the bandwidth of the signal, as you stated. I believe the limit is wavelength/2 for resolution, though.
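To put rough numbers on c/(2B) (the bandwidths here are purely illustrative, not any particular product's spec):

    c = 3e8  # speed of light, m/s
    for B in (0.5e9, 1e9, 4e9):  # waveform bandwidth in Hz (illustrative values)
        print(f"B = {B/1e9:.1f} GHz -> delta_r = c/(2B) ~ {c/(2*B)*100:.0f} cm")
    # B = 0.5 GHz -> ~30 cm, B = 1.0 GHz -> ~15 cm, B = 4.0 GHz -> ~4 cm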


Yup. And in order to get a pulse of length T you need a bandwidth of at least roughly 1/T (a sinc-shaped spectrum being the best case). Resolution is intrinsically linked to bandwidth.


Range resolution isn't the problem. Angular resolution is.

Here's a video of what's claimed to be one of the higher resolution radars.[1] About one degree. It can see cars.

Many new startups in this area.[2] Most are operating in the 77GHz range, which is OK for sensing other cars but marginal for smaller targets. LIDAR-type resolution is still a ways off.

[1] https://www.youtube.com/watch?v=r0s5P1SQ34M [2] https://www.mwrf.com/systems/startups-trying-revamp-automoti...


It's important to note that scattering will get worse at the higher frequencies too. This is why radar can penetrate dust and visible light has more difficulty.


> The cluster of sensors, bolted underneath a chassis like a skid plate, can scan 10 feet beneath the ground to reveal soil, water, roots, and rocks. Once WaveSense has scanned an entire roadway, it creates a map of the subsurface strata that can determine the location of a vehicle within a few centimeters.

Nononono! Even assuming you can create sufficient map data, as soon as there is any kind of roadworks - or even just the winter cracking the asphalt - your data will be useless. They don't even need to change the layout of the road. I'll never understand companies that focus so much on maps as the primary source of data. Every day, massive changes - temporary (construction sites) or permanent - are occurring to the road network. Also, the centimeter-precise location of the vehicle doesn't matter that much. The actual situation on the road matters much more: you may need to drive a meter more to the left because of a parked truck, a vehicle may obscure signs, a tree branch hangs down into the road, some car is parked too close to an intersection, etc.


Any time I drive a new road, I have to drive much more slowly - lanes unexpectedly turn into turning lanes resulting in quick merges, or a manhole has subsided a half foot making for an unpleasantly bumpy ride if I'm not used to it. Or in a rural environment, I have to spend more focus keeping an eye on the road surface for any number of reasons.

If I then drive the road again and there have been some changes (traffic cones, lane closures, heavier traffic than normal, darkness so I need to watch out for animals more closely...), I adjust my speed for the circumstances.

I don't see why gathering surface information would be any different. Gather info, compare to previous fleet-gathered info, adjust driving to compensate. Seems reasonable to me (as ONE source of info for decision-making). And as fleet size increases, info gets gathered more quickly - look at how quickly google maps can inform you of traffic slowdowns in busy urban areas.
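Something like this is what I have in mind, as a toy sketch only (names and the 20% threshold are invented, nothing here comes from WaveSense): compare the fresh scan against the fleet map and drive conservatively when they diverge.

    def adjust_speed(posted_limit_kph, fresh_scan, fleet_map, mismatch_threshold=0.2):
        # Fraction of currently observed features the stored map doesn't know about.
        unmatched = sum(1 for f in fresh_scan if f not in fleet_map)
        mismatch = unmatched / max(len(fresh_scan), 1)
        if mismatch > mismatch_threshold:
            # Road has likely changed (construction, resurfacing): slow down.
            return posted_limit_kph * 0.7
        return posted_limit_kph

    print(adjust_speed(100, ["lane_A", "manhole_3", "new_cone"], {"lane_A", "manhole_3"}))  # -> 70.0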


> I'll never understand companies that focus so much on maps as the primary source of data.

Maybe they don't really care about the product. Maybe the execs only care about making money, so they'll find funding for anything - whether it will pan out or not.

Self-driving cars are hot and for now there's a lot of money available. I think things will cool down when the hard part starts - like when they really do have to figure out how to make them drive in less than ideal conditions.


Depending on how good that data is, this might be a data play where the data collected in a single repository is extremely valuable. Sell it to the military for operating when GPS is jammed. Sell it to mining companies and insurance companies for a detailed, very large map of what lies below.


That's what I was thinking. Worst case, the map data isn't all that critical for self-driving cars, but you have a huge amount of potentially very valuable data that can be used for a lot of other things.


That's an excellent point I had not considered.


It's very useful for navigation, not accident avoidance. Cars need to navigate without GPS in underground parking garages, for example. Reading signs and inertial navigation are part of it, but knowing you just crossed the same point is really important when trying to find your way out of a maze.

The highway value is when snow covers lane markers. The GPS will still work, but it's only so accurate without those lane markers etc.


> I'll never understand companies that focus so much on maps as the primary source of data.

While not sufficient, this does play to one of the fundamental advantages computers have over people: you can copy information from one to the other.


I’ve taken downvotes from HN optimists for years about self driving cars. The slightest bit of criticism would have comments calling me an idiot 20 different ways.

Then a single dead pedestrian changes all the news articles and the opinions change.

We are a very, very long way away from self driving taxi cabs. What a total PR scam that was - but it did help valuations.


In the current legal environment, self-driving taxi cabs everywhere at all times will never exist. Waymo seems to think they can work in the Arizona desert, though.

How many deaths per year would be acceptable? Like you mention, in the US, it seems that zero deaths per year, similar to commercial aviation, would be required. It is a bit of a shame that a tech that could save a million lives a year globally won't be deployed because it can't be made perfectly safe. On the other hand, I'm sure people in India or China will be happy to build and deploy them with much higher failure rates. Maybe we can import those to the US someday.


>> On the other hand, I'm sure people in India or China will be happy to build and deploy them with much higher failure rates.

You might have seen this internet classic:

https://www.youtube.com/watch?v=RjrEQaG5jPM [India Driving]

Driving in India is notoriously chaotic and if self-driving cars are ever deployed there in our lifetime, they will come with a safety guarantee about equal to a car driven by a blind dog with a missing paw. It would be impossible to safely navigate the streets there with anything less situationally aware than a fully developed adult human brain.

The article above makes it sound like weather is the big problem for self-driving cars and once that's solved - woohoo, we're on our way! It's far from that. Self-driving car AI is still incapable of reasoning about its environment, nor does it have any "understanding" of it in any way, shape or form. Consequently it only works in very limited environments, in very limited conditions - of traffic, visibility, road quality etc.



Don't forget Uber in the US! A fantastic display of man and machine working as one and how they deal with unexpected obstacles.


> It is a bit of a shame that a tech that could save a million lives a year globally won't be deployed because it can't be made perfectly safe

This is a bit of a generous assumption. It very well could turn out that self-driving cars are worse, or no better, than human drivers.


Right. So many arguments just start from the premise that self-driving cars must be safer and then ask why we aren't racing to replace human drivers with them. In reality the proposition that they are safer is far from proven.


Not to mention that the risk profile is going to be different. When you are driving your car, there is at least the illusion that you are in control of your risks. A careful driver at least believes that their risks are significantly lower than average. Getting into a self-driving car you have no control and are just playing Russian roulette.

Not to mention that the failure modes of self-driving cars are likely to be very different from failure modes of humans, making it more difficult for other road users to predict what they would do in any given situation.


We already know that the assumption is correct for a lot of cases. Self driving cars have logged millions of miles on the road.

The only question is can we make them work in situations that are hard.


Meanwhile human-driven cars have driven billions. The only question is can we make them (humans) work in situations that are hard?

In other words, "it works until it doesn't" is tautologically true, but useless precisely for that reason.


The answer to your question is basically "no". Humans aren't getting better at driving in the aggregate - new ones are born, learn to drive, get better, then get worse, then die off. On average humans are as good as they're going to get (with small variations due to culture and different training programs).

On the other hand the self-driving cars are just getting started. It's entirely reasonable to suppose that they may surpass the average human in the next X years. And they won't get old and confused and eventually replaced with new cars that have to learn from scratch -- they should get more or less monotonically better. I think the optimism is warranted even if the timeline and technology is uncertain.


> It very well could turn out that self-driving cars are worse, or no better, than human drivers.

This is trivially true. I mean, self driving cars that are worse than human drivers already exist. Isn’t the point asking when they will be made widely available? I assume that is only possible when they are at least as good as human drivers by some key metrics.


I agree, the sane assumption should be that self-driving cars would be made widely available if and when they are better than humans at driving.

However, I've noticed that many work from the premise that self-driving cars are already safer than human drivers. That, or they'll work from the premise that self-driving cars will definitely be safer than human drivers within a reasonable timespan.

Worse, some assume that self-driving cars will dramatically, or nearly eliminate automobile accidents/deaths, where any brief skim over workplace casualties involving autonomous machinery would put that fantasy to rest.

For a group of tech workers that often overlaps with self-described skeptics, it's an interesting blind spot to have.


How many years per death would be acceptable is highly culture dependent. Uber should ask the NRA for branding advice, though they might not be patient enough to put in the work to reach such an untouchable brand.


My gun isn't controlled by a computer and in fact that is fairly strictly banned by the ATF.


Ah, so you agree that avoidable deaths from computer controlled systems are viewed as much worse than avoidable deaths from human controlled systems?


Not sure if you have read the article, but it's actually about a company called WaveSense that claims to have developed ground-penetrating radar that can greatly improve all-weather autonomous driving. So yes, it's a daunting task, but a lot of people are working on it and progress is being made. I'd be interested to hear your counterargument.


I’m not an expert in self driving car engineering. I have some experience with ML but not in a significant professional way. What I have is a decade of professional experience building all kinds of software and a skeptical eye.

You have a problem that is unconstrained, with infinite variables, where even a simple mistake can have catastrophic outcomes. Society itself may object to self driving cars for a ton of reasons, from safety to simply driving like a grandma and slowing everything and everyone down. The cost to develop this technology, plus the added cost of hardware to each car, will be enormous and is not obviously a cost savings over a $15 per hour human. If the self driving car is doing anything other than getting from A to B, you still need a human (or a human-like robot) to handle the unloading / delivery / whatever at the end.

Now, I’ve worked at companies with extremely talented and intelligent engineers, and something as constrained and seemingly simple as making a login form can take a long time to perfect - and no lives are at risk! Just imagine the challenges and requirements for building self driving cars. New hardware, software, real-time processing and analysis of tons of data, all to drive split-second decisions that can kill people if done incorrectly.

Huge challenge - huge risks - huge money - uncertain payoff. This is not something that will appear suddenly. If there aren’t convoys of self driving trucks operating in desert highways overnight, where it’s dry and straight and flat and no one else is there, then we aren’t going to see city taxis for a very long time.


> You have a problem that is unconstrained, with infinite variables

I'm not sure this is correct. I come from the optimal control world. The problem is not unconstrained and definitely does not have infinite variables (if you think in state-space, consider the state-space equation x' = f(x, u, theta). The x-space (state) is large but finite, and the u-space is fairly small -- steering, gear, brake, etc.). The x-space is also stochastic, and there are many observability issues.

That said, we've been designing control systems against the real world for a number of years now, and the key is not to model all the unknowns (because there will always be something you can never anticipate), but to model the known and safe path that the system can fall back to.

The problem is a complicated one, and it will take more than several attacks to make it work, but it is not by any means an impossible one if you break it down into the fundamentals.
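To make the "known safe fallback" idea concrete, a minimal sketch (illustrative pseudologic only, not a real planner; all names here are invented): only commit to the nominal control input if a pre-verified fallback, e.g. a controlled stop, remains feasible afterwards.

    def pick_control(state, nominal_controller, fallback_controller, fallback_still_feasible):
        u = nominal_controller(state)
        # Accept the nominal input only if a pre-verified fallback (e.g. a
        # controlled stop) remains feasible from where it would take us.
        if fallback_still_feasible(state, u):
            return u
        # Otherwise execute the fallback immediately.
        return fallback_controller(state)

    # e.g. pick_control(x, mpc_step, emergency_stop, lambda s, u: rollout_stays_safe(s, u))
    # (the callables above are hypothetical stand-ins)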


I could not disagree more. The input to an AV’s sensors, not just the data but what it represents and must be determined, is infinite. If your solution is to stop the car, you must also determine if that’s safe to do. It’s extraordinarily complex. This isn’t a factory.


I strongly disagree. It is a large set but not infinite, otherwise humans would not be able to drive. It is constrained by physical laws.

Also, control systems have been used to control much more complex entities than just factories.

I agree it is an extraordinarily complex problem. But I also believe progress can be made to a point where it can be feasibly solved.


But it doesn't have to be perfect, just better than human drivers. That doesn't seem too difficult.


I think you’re underestimating the ability of even bad drivers to accurately process and react to new information while driving.


I think you're also overestimating the abilities and self-restraint of drivers to not use their phones while driving, to look both ways, to not go too fast, etc.


That may be true logically, but I don't think it is workable politically.

People are conditioned to accept that we will kill each other with cars every once in a while. I'm skeptical we will get to the point any time soon where the general public hears about a family killed by a self-driving car and just shrugs it off. There will be intense pressure to get them off the road.


My thesis is that “better than human drivers” is an extremely difficult problem that even software professionals have massively underestimated the difficulty of.

It may not even be possible to solve without something radical like banning human drivers or inserting electronic nodes directly into our roads and infrastructure to aid autonomous vehicles. The existence of human drivers might make the problem simply impossible to solve in a way acceptable to society.


> even software professionals have massively underestimated the difficulty of.

Software professionals are somewhat notorious for underestimating the difficulty of their projects. There's a massive amount of literature about that, proposing various techniques of mitigating this on a personal, team, or organization level.

That being said, I think in this particular case the problem was more of an overestimation of what the ability to work with highly dimensional data (as in ML algorithms) can give you (and an underestimation of the practical problems of deploying ML-based systems). Basically, people tried running 30+ year old algorithms on GPUs while feeding them ungodly amounts of data and realized that, with sufficient horsepower, they could make them (finally) work (as in: do something genuinely useful). This built up a lot of hype, of which self-driving cars are just one offshoot, I think.

Anyway (if it's not evident from the above ;)), I find your arguments convincing and share your doubts. Self-driving cars are probably not impossible to achieve, but to get there we either need decades of research and progress or a couple of very high-profile breakthroughs in tech and theory. I won't hold my breath for either :)


>> I'd be interested to hear your counterargument.

Not the OP, but like you say, the article is about a company who claims to have developed a technology that will solve self-driving cars' problems with rain.

People in industry make claims all the time. People in the sciences do, too. Just because someone makes a claim, doesn't mean it's true. It's only a claim.


I think it would be more succinct to describe WaveSense as ground-penetrating radar for positioning purposes. It's pretty orthogonal to autonomous driving, which obviously would benefit from any extra position sensor, but it does not actually contribute to autonomous driving.


The article supports his point. This is a press release for technology that may be many years from appearing in a production vehicle. It uses a sensor that isn’t included in the package of sensors used in other self-driving vehicles, a package that’s already too expensive for consumer vehicles. Yet the technology seems to be necessary for handling a fairly common road condition.


That helps with localization only, right? Not with any other perceptual tasks?


>Then a single dead pedestrian changes all the news articles and the opinions change.

Frankly, it didn't change my opinion at all. As far as I can tell, Uber is a train wreck of a company. When I first read the story I was kind of surprised, until I found out it was an Uber car.


Waymo’s Michigan testing facility is very close to my home. I’m really curious to see how they do on snowy pothole ridden roads in Michigan. I’ve seen very few of them on the road this summer but if there’s supposed to be heavy testing and validation going on perhaps we’ll see an army of them on the roads here soon.

I’m curious how self driving focused groups are solving lane detection and managing surrounding traffic on snow covered roads. It’s not uncommon for a multi lane highway to be completely covered with snow and multiple tracks for different ‘lanes’ that overlap. It’s also not uncommon to see cars just making their own lanes. This occurs even after the snow has stopped flying.


"I’m curious how self driving focused groups are solving lane detection and managing surrounding traffic on snow covered roads. It’s not uncommon for a multi lane highway to be completely covered with snow and multiple tracks for different ‘lanes’ that overlap. It’s also not uncommon to see cars just making their own lanes. This occurs even after the snow has stopped flying."

Yep. As a Michigander, I totally agree. You see 6 to 12 lane roads reduced to "random" 2 lane roads when there is a good pileup of snow. And the snow drifts make it even worse!


Humans are good at improvising. Not using all the lanes when the lane markings are covered by snow, and the road is slippery, sounds like a reasonable strategy. And nobody ever told us that's what we should do.

With autonomous vehicles, you need to plan every situation in advance. Our current AI technologies can't improvise. It seems to me that current AI is basically just a decision tree with some neural networks sprinkled on top.


We don't want machines to improvise because we want to be able to test their whole gamut of behavior as thoroughly as possible. We can build AI that can improvise, but that's not the kind of software you want to be liable for as a manufacturer.


I am not at all convinced that improvisation is an easy add-on. While AI has demonstrated some creative ways to satisfy goals that have left options open (sometimes inadvertently), real-world improvisation has to be appropriate for the situation and the unusual difficulties it poses, and, in the case of driving, it must not create additional problems or (of particular relevance here) present significant dangers.

In general, I think, improvisation requires causal reasoning. It also often needs a broader knowledge than that needed for the nominal task (for example, when driving on a potholed road, one needs to have some understanding of how it affects the car's response, and sometimes of issues such as ground clearance.) Constraining the vehicle to avoid such issues would constrain the option to improvise.


>We can build AI that can improvise

Do you have any sources demonstrating examples of AI with true improvisational capabilities?


More like decision tree/rules engine with pattern recognizing neural networks underneath. You detect features and then take largely pre-programmed actions based on those features.
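A toy illustration of that split, with the learned detector stubbed out (everything here is invented for illustration, not any real system's code):

    def detect_features(sensor_frame):
        # Stand-in for a learned detector; a real system would run a neural net here.
        return sensor_frame.get("detections", [])

    def plan(sensor_frame):
        # Largely pre-programmed actions keyed off the detected features.
        features = detect_features(sensor_frame)
        if "pedestrian_ahead" in features:
            return "brake"
        if "lane_drift" in features:
            return "steer_correct"
        return "continue"

    print(plan({"detections": ["lane_drift"]}))  # -> steer_correct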


The larger the "largely pre-programmed", the less it is like improvisation, until, at some point, it isn't. As I suggested above, effective improvisation probably requires causal reasoning.


> Waymo, widely seen as leading the self-driving vanguard, says it has made progress teaching its software to better filter out “noise” from precipitation.

The last Waymo leak claims that it is more than just 'progress', but that 'rain is solved', incidentally: https://thelastdriverlicenseholder.com/2018/09/14/waymo-plan...


Perhaps San Francisco rain is solved, but I doubt the downpours we get in the South East are solved. Zero visibility is just one thing, how about hydroplaning when there is over 1" of standing water on the road because it's coming down faster than gravity can drain it away? Or spotting that your lane is fine but the opposite one is flooded so that car coming the opposite direction is going to have to borrow your lane for a bit?

These sound like edge cases but there are places where they occur very frequently and local human drivers, for all their faults, do have a way of dealing with them.


> hydroplaning when there is over 1" of standing water on the road because it's coming down faster than gravity can drain it away?

I'd say that's pretty calculable, and has a solution: slow down to a speed at which the tires can shed water.
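For instance, a widely cited rule of thumb (Horne's equation) puts the onset of dynamic hydroplaning near 10.35 * sqrt(tire pressure in psi) mph. Treat it as a rough estimate only, since tread, water depth and road texture all matter:

    import math
    for psi in (30, 36, 44):  # tire pressures
        print(f"{psi} psi -> ~{10.35 * math.sqrt(psi):.0f} mph hydroplaning onset")
    # 30 psi -> ~57 mph, 36 psi -> ~62 mph, 44 psi -> ~69 mph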

Perhaps the most telling thing about your qualification is the misplaced risk; the light precipitation with a little oil slick is likely more dangerous, mostly because you don't slow down for it.

Anecdotally, an even harder problem is when water exceeds 4", negotiating with traffic so you won't float away or get water in your intake.


Yes, it is trivial to calculate how to respond to known hazards. What is hard to calculate is what the hazards actually are in any given situation. Humans don't merely look at the road and compute how much standing water is there, they have prior experiences and a sense of what other drivers are doing on the road and why they are doing those things.


Self driving cars also have prior experiences and a sense of what other drivers are doing and why. That's sort of the point of machine learning.

In fact, they will have a lot more prior experience than any human driver in the world.


Machine learning still mistakes elephants for cats.

While it's quite fun if you pin up a cat dressed in an elephant costume, and it surely amuses a few colleagues, a self-driving car is not something that should mistake an elephant for a cat.

ML is IMO not reliable enough for use in self-driving cars; in complicated situations the driver should take over.


It's not reliable enough now, sure.

Similarly, my 6 year old is also not reliable enough to pilot a car.

Both will change as they mature.


Sure, multiple decades sounds about right (as opposed to current confusing statements like "has full self driving capabilities").


Unfortunately, in the realm of machine learning, quantity does not equate to quality.


The parent of any teenage driver can also attest to this.


They have a lot more experience driving, but humans do not only use their driving experiences as priors.


Humans regularly get killed misjudging how much water is in underpasses. I guess an autonomous car based on detailed mapping could (eventually) measure water depth rather better by comparing sensor results with its map.


Those conditions sound like you wouldn't be driving. So, solved. Pull over and stop


This; arguing that a computer can't drive safely in conditions where a human can't either isn't an argument against computer drivers. Just because humans sometimes get away with it doesn't make it a good idea.


It's not always possible to pull over. If it's snowing so heavily that driving has become dangerous there probably is no shoulder to pull over to, much less a shoulder you can safely pull over to. Source: I've been in these conditions. You do the best you can.


You should pull over before then. There is, in my experience, plenty of warning to find a hotel first.


Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.


I’d recommend visiting Finland. In these parts, there’s basically 3 months of optimal visibility. The rest is fog, rain, snow, sleet and road conditions that would shut down most of the US. Yet people drive and do so safely. “What people can do” is a bit different from “what an average US driver can be expected to do”.


And as a taxi driver once told me in Helsinki there are four seasons in Finland - Autumn, Winter, Spring and Construction. Even when the visibility is optimal they'll likely be tearing up the road anyway.


It may be that regions with better climates receive these vehicles first. It's relatively straightforward to detect the prevalent conditions and refuse to drive if the weather is outside the scope for which the vehicle is rated. As the technology matures, that scope should broaden.


So road trips are unaccounted for? Not a comforting idea that taking your car to another area will render it functionally unsafe on the (differently designed) roadways.


Essentially the same as "smartphones are about 95% useless without mobile broadband". Except for the tiny detail that lack of mobile coverage is unlikely to kill you...


That's a price I would very very happily pay for a self-driving car that works in my city. If I want to take a road trip, i'll rent a traditional car.


We clearly have different lifestyles. Renting a car 3 times a week to venture into the mountains is beyond unaffordable, much less once a month. Do you stay that static in your life?


Doesn’t Finland have a higher vehicle fatality rate than other nations?



Ah, I was thinking of the UK. Finland has 50% more fatalities per mile than the UK.


You can drive in a torrential downpour. Short of actual flooding or the creation of large puddles, rain isn't all that bad of a driving condition. Be sure not to drive past the limits of visibility and increase follow distances from 2-3 to 5 seconds and you're good. Even motorcycle races are held in the rain: https://www.youtube.com/watch?v=tv4_425K8r4


I have been in a lot of monsoon level downpours where it is effectively impossible to drive as visibility goes to 0. There are different types of rain.


At some point "no one[0] should be driving" becomes a valid answer.

[0] I mean, there are going to be cases, but the number of people currently driving in horrific weather conditions because it would be inconvenient not to is too damn high.


It seems a lot of posters aren't willing to consider self-driving cars feasible until the cars are as foolhardy as the bottom-decile driver while still maintaining a perfect safety record. But most of accident prevention is in the planning more than the execution. It seems only reasonable to expect SDCs to avoid situations that some human drivers would brave if the risk is too high.


You must never have visited Hemet, CA, or driven anywhere it rains a lot sometimes, or has a real winter.


I grew up with that kind of summer rain in Illinois, too. After about five minutes of it, yes, a lot of (sane) drivers pull over. But it comes on very suddenly, and for the first two or three minutes, traffic flows continue.

If you're in the center lane and traffic is heavy on both sides of you, while your windshield (or camera) is overwhelmed with water, you need to keep moving with traffic for a bit -- or you get rear-ended.


While probably true, "then don't do that" almost never works as an excuse to convince new adopters of your technology.


I'm not so sure that's true. Or, maybe, "that's user error" is what works because it seems like it's far too easy to convince users to blame themselves.


Well, when all your friends can drive in the snow but your fancy-pants car can't, that's not a very compelling value proposition.


LOL. In Atlanta, highway traffic just slows down to 40 in torrential downpour conditions where you can’t see lane markings.


40 mph?

On I95 in the DE-NY area, it's more like 60 mph in heavy rain. Or even in moderate snow. Everyone knows that they could never stop. But they're counting on others' inability to stop ;) But occasionally, there are massive chain-reaction pileups :(


It sounds like you've never lived in the southeast US? These conditions can occur on a daily basis in the summer, especially in Florida. "Pull over and stop" isn't realistic advice here.

Humans can and do drive safely in these conditions, however, and it's a valid concern to point out that Waymo's West Coast-centric models might not yet be trained to handle these cases.

(To be clear, I don't doubt that machines will eventually surpass humans in whatever weather conditions, if they haven't already.)


"These conditions can occur on a daily basis in the summer, especially in Florida"

Can confirm, just spent 30 summer days in Florida. People don't pull over and stop, they just put on their "hazards" and slow down, sometimes to 25-30 MPH depending on the intensity of rain. This happens every single day.


> safely

I've seen people do all sorts of foolish things. When the rain is so hard that I can't see more than 15 feet, I pull over and stop.


Nobody said anything about 15 feet but you. If you're pulling over in a normal heavy downpour with 500 ft of visibility, you're less safe than the people who continue to drive. Being pulled over to the side of any high-speed road is an unsafe condition that should be avoided.

Self-driving cars would be putting lives at risk if they just pulled over and shutdown on every summer thunderstorm.


With 500 ft of visibility, the auto-steering/auto-propelled car shouldn't have any trouble either.


How does a car know it can't see well?


It uses its sensors. If the LIDAR shows only white noise it knows that it can't see well.
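Roughly like this, as a hand-wavy sketch (the cutoff and threshold are invented): if too large a fraction of returns are short-range scattered echoes (rain, fog, spray), flag degraded visibility.

    def visibility_degraded(ranges_m, near_cutoff_m=2.0, max_near_fraction=0.3):
        if not ranges_m:
            return True  # no returns at all is also a bad sign
        near = sum(1 for r in ranges_m if r < near_cutoff_m)
        return near / len(ranges_m) > max_near_fraction

    print(visibility_degraded([0.8, 1.1, 0.6, 35.0, 0.9]))  # mostly near clutter -> True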


White noise certainly isn't the only indicator that one can't see well.


How does a human know it can't see well?


Well, that's a great question. It's also basically equivalent to the one that I asked. My point would be that I don't have an answer to that question, and given your response, I bet you don't either. Which leaves either of us at a loss when we try to create a safe and reliable autonomous vehicle.

It's certainly not an unanswerable question. It's just quite hard to verify that the answer you have is a sufficiently correct one.


> It's just quite hard to verify that the answer you have is a sufficiently correct one.

Why? It seems to me that estimating the confidence of a prediction by the quality of measurements is not an infrequent machine learning task. It'd be more complex for a driving algorithm than calculating the confidence interval of a linear regression, but it's in the same theme.
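For what it's worth, the linear-regression version of that is only a few lines; a minimal sketch with toy numbers (just to make the idea concrete, not a driving algorithm):

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.1, 0.9, 2.2, 2.8, 4.1])
    A = np.vstack([x, np.ones_like(x)]).T
    coef, residuals, _, _ = np.linalg.lstsq(A, y, rcond=None)
    sigma2 = residuals[0] / (len(x) - 2)  # residual variance
    x_new = 5.0
    pred = coef[0] * x_new + coef[1]
    # Standard error of the predicted mean at x_new:
    se = np.sqrt(sigma2 * (1 / len(x) + (x_new - x.mean()) ** 2 / ((x - x.mean()) ** 2).sum()))
    print(f"prediction {pred:.2f} +/- {1.96 * se:.2f} (rough 95% band)")

The wider the band relative to the prediction, the less the downstream decision should trust it; that's the general shape of "confidence from measurement quality".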


A computer can predict with 100% confidence if you give it few enough measurements. That doesn't mean it will make the correct decisions. You have to already know you're measuring the right things.


If you don't want it to do that, don't program it that way. Code it to assume some mismeasurement and to estimate its degree.


Fair enough! You're absolutely right: I don't know the answer. I misinterpreted your post as suggesting/implying that there is no answer for a computer.


You have to pull over when it happens. Sometimes those southeast flash flood rains go from bad to opaque in an instant. Depending on the road you may not have enough of a shoulder to pull over, and any other driver will only be able to see you at 5-10 feet away maximum and (initially) going 45-60mph. Not ending up in a ditch can be difficult.


These are daily occurrences in parts of TX and FL in the summer, and people generally do drive through them.


And don't pick up your phone with your right hand.


From that article:

> Also snow seems to be solvable, as there will probably [be] similar patterns that the sensors get in their signals.

Real snow is not just a software problem. Fancy rear-facing sensors near the bumper? I hope they function through a couple inches of frozen muddy spray. Front windshield? Sometimes needs scraping. Lidar on the roof? Does it work through the big heap of snow on it?


I have a hunch that as cars get more sophisticated, they also need more sophisticated treatment. The reason I leave my car out to get a foot of snow on it is that it works anyway. If the car wouldn't actually work without the windscreen clear and the sensors ice-free, I would either use a garage every night or have a less sophisticated car. And I suspect that once autonomous vehicles get good enough, people will actually want to ride in them - so they will keep them indoors more than they do their normal cars. Effectively then, the rugged cars have become machines we treat like fragile fighter jets.

Still, for a wide range of weather conditions the autonomous car will just say a big NOPE and refuse to drive anywhere. Which is why it must always have manual controls. The autonomous car that could take me to the city through the worst conditions I experience every winter won't have to take me in it, because that car is probably clever enough to do my job as a software developer too.


There's real irony in this statement, when the people who develop the technology and commend home garaging are also the garage-free, commute-free type of people who vehemently tear down the idea of a suburban home/garage setup.

But I guess when it's your product, you change your mind.

*This is not an attack on alkonaut.


Can confirm proximity sensors don't work well on most cars when even a bit of road haze develops on them.

Though I'm sure SV's real solution to this is to have perfect weather (ie. "can't reproduce; closing bug").


As it is now, sometimes ice needs to be scraped off the windshield before you can drive. It may be the same with the car's external sensors.


Also mud spray and other road debris covering sensors can be solved with hardware (wipers, sprayers etc.) so it's probably not at the top of Waymo's long list of difficulties.


Until I see self-driving cars driving in a Montreal blizzard aftermath navigating the various streets, they can't handle snow.


I'm by no means an expert on self-driving cars or autopilots, so take this as just my personal impression.

It seems that developers of self-driving cars are dealing with a very different, and in some ways more difficult, environment than aircraft autopilots. While cars only have to deal with two dimensions instead of three, and cars are still less complex machines than planes, there are a lot more challenges: an unregulated and uncontrolled environment, traffic ranging from cars to trucks to motorbikes to bikes to pedestrians to playing children, less room for sensors in the car, changing road conditions, animals, you name it. Not a simple problem to solve.

And with a near perfect track record of similar systems in the aerospace sector, the bar for acceptance hangs pretty high. Add to that the fact that pilots are trained in the use of these systems while the average Joe / Jane in a car is not. And the car should be able to drive without a driver in the end.


obstacle density is really extremely sparse in the skies compared to the ground


Dealing with failures is far more complex for aircraft. When a car gets a flat tire or suffers an engine failure the driver can just pull over and stop. But an autonomous aircraft carrying passengers or flying over populated areas would have to cope with broken sensors, uncontained engine failures, loss of cabin pressure, stuck control surfaces, etc.


Both autonomous cars and autonomous aircraft (likely) have manual backups. So the goal is to be 99% autonomous, or something like that. That works for flying because failures are probably fewer and farther between: the planes are very well maintained, the route is clear of other planes, etc. For a car, reaching 90, 95 and 99% is really difficult because of other drivers, bad weather, ice-covered sensors, and a higher risk of mechanical failures. I agree that making a 100% autonomous plane that needs to land safely in an emergency is probably more difficult, for the reason you mention.


>the driver can just pull over and stop

Yes, sometimes you can pull over and sometimes you just have to put the blinkers on, stop in the traffic lane, and hope for the best--even though it can be very dangerous. One challenge is that, with full autonomy, the vehicle needs to be able to very quickly deal with a once-in-10-years scenario in the least bad way--both for its occupants and for any other traffic on the road.


Sure, the thing is these issues are extremely rare (and not just over populated areas). And that is why you still have pilots, and redundant systems, so yes, as a system an aircraft is a lot more complex. The traffic environment is not; the high level of control of air traffic and its standardization makes sure of that. And the latter makes it easier for aircraft because they have to deal with a lot less randomness.

So, I could imagine developers of self-driving cars could learn a thing or two from aerospace. But that is just my opinion.


https://www.telegraph.co.uk/technology/2018/08/29/waymos-sel...

Surprising that Waymo has figured out rain but not things like left turns. Different problem spaces I'd imagine, but still surprising.


Can it be that in US cities there is not a tremendous number of left turns? At least that was the reason I heard for why the Amazon Logistics routes in Europe were planned without any left turns, driving everybody crazy and adding a lot of wasted time.


US cities have plenty of left turns, except in New Jersey where they're frowned upon. However, there can be a long wait to turn left, depending on conditions, so it might not take more time to drive a bit farther and turn right more often.

The waymo cars have trouble with getting hit while making unprotected left turns because they drive very cautiously. Except for the time they hit a bus, because they somehow thought a bus would swerve to avoid them (although the then Google vehicle was being overly cautious and trying to avoid an inconsequential object in the road). All in all, I would rather the waymo style of overcaution and immobility than the Uber style of over confidence and ignoring of potential obstacles. However, I would currently still prefer mediocre to average human drivers on the road with me over a waymo.


> All in all, I would rather the waymo style of overcaution and immobility

From what I've read here and in the linked article, a Waymo car would be unable to pass a driving test here in Germany. Consequently, they shouldn't/couldn't be allowed unsupervised on German roads, just like a learner driver. (In Germany driving schools are mandatory for a license and their cars are fitted with dual controls for the instructor.)


The reason Amazon plans like that is because it is faster. Left turns with traffic lights have a higher average waiting time. Also, the route can have left turns, they just have a much higher penalty.


That sounds like an assumption that holds in the US much more than elsewhere, because in the US you don't have to wait for the traffic light if you turn right, but elsewhere you do.


The problem is usually more with unprotected left turns at stop signs [or pulling out of driveways]--not traffic lights. (Though traffic lights can be an even bigger issue if there's no left turn arrow--leading to people making turns when the light has either just turned red or it's just turned green and the oncoming traffic technically has priority.)


That Waymo has not solved left turning is not really a conclusion supported by that article. A supportable conclusion is that the impatient meatbags are angry that the Waymo van won't dangerously veer out into oncoming traffic.


LOL, yes, the meatbags. Waymo exists to serve Waymo's purpose, it is a higher entity beholden only to its funding source not to petty meatbag concerns.


Well, yes, companies are self replicators, just like living things.


Waymo's purpose is to not kill the meatbags. Unlike Uber, Waymo has been very successful at this. Also the claim was that the car cannot turn left on its own which is completely wrong. There is merely a very high safety margin so that the car will only drive when it is sure that no collision will occur.


HN is so full of Google FUD. How this comment is not voted down to oblivion is beyond me.


"robots sophisticated enough to [...] improve on lackluster human perception"

This underlying misapprehension is at the root of the self-driving car hubris. Humans have remarkable perception of physical environments. The fact that electronic systems can beat humans handily on a few metrics has led to an entire industry built on the misconception that those few metrics are all that matter. But those were just the trivial bits.

The more sophisticated perceptual tasks--even the basic ones involved in the simplest self-driving task of traveling down a restricted access freeway--like correctly interpreting the behavior of other cars; interpreting whether the debris in the road constitutes a dangerous obstacle or something easily driven over; or even knowing which objects speeding toward you are moving and which are stationary--require more than just raw sensory input. There's a mountain of implicit cultural awareness embedded into all of those perceptual tasks that we haven't begun to form an idea how to represent in a software system. And again, this is the easy stuff!


> This underlying misapprehension is at the root of the self-driving car hubris.

I will basically guarantee you that no one working on self-driving cars thinks human perception is lackluster or an easily matched capability. Journos interpreting their work might impute that belief to self-driving engineers, but nobody could work on the problem for even the shortest amount of time without realizing that it is an incredibly hard problem.


Driving in dry / warm conditions is simple mechanics. Driving in snow and ice is an art.

In snow and ice, variables that are typically constant or static become dynamic. Formerly dynamic variables now become multi-variable equations. For example: travelling down a hill normally requires dynamically factoring in the extra speed due to the incline. Under blizzard conditions you have to factor in ice/snow volume as well as the incline.
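To make "static becomes dynamic" concrete, stopping distance on a downgrade is roughly d = v^2 / (2 g (mu*cos(theta) - sin(theta))); the friction values below are rough textbook-style figures, not measurements:

    import math
    g = 9.81
    v = 50 / 3.6                      # 50 km/h in m/s
    theta = math.atan(0.06)           # 6% downhill grade
    for mu in (0.7, 0.3, 0.15):       # very roughly: dry asphalt, packed snow, ice
        decel = g * (mu * math.cos(theta) - math.sin(theta))
        print(f"mu={mu}: ~{v * v / (2 * decel):.0f} m to stop")
    # mu=0.7 -> ~15 m, mu=0.3 -> ~41 m, mu=0.15 -> ~109 m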

I don't want to share the road with self-driving cars in winter for a long time.


This sounds like exactly the kind of job for a computer.

I live in an area that gets a lot of snow. The only 'safe' things for a human to do in snowy, icy, wet conditions are: slow down (a lot), leave a lot more following distance, stay home. I talk to a lot of people who seem to think that they are able to drive at full speed in the snow and complain about all of the 'out of town' drivers who slow down the roads because they 'don't know how to drive in the snow.' It's the people who make those kinds of complaints that frighten me the most - they are deluding themselves. I'd rather share the road with a computer.


"people who seem to think that they are able to drive at full speed in the snow and complain about all of the 'out of town' drivers who slow down the roads because they 'don't know how to drive in the snow.'"

The most rage-filled responses I've ever gotten to a comment on the Internet (20+ years) were when I called out aggressive snow drivers on Reddit. Despite my being from, and living in, the Northeast, the rage from fellow Northeasterners whom I was accusing of driving like assholes was intense.

I recently drove several thousand miles this summer across many states (towing a trailer). It was only when I returned to the Northeast that I encountered aggressive, asshole drivers. It never occurred to me that maybe... we're the baddies?


They're mad at you because it's indeed frustrating to get stuck behind someone who has no business trying to drive in the snow. If you get stuck because you're terrified at the thought of exceeding 10 MPH, they get stuck too.


Aaaaand there it is. ;-)


"Slow down" is not, contrary to popular belief, always the best advice. Slowing down too much in the snow is a good way to get stuck.


Yep the key to climbing a hill in snow is to keep your speed up so momentum carries you over slick spots. I've seen a slick hill look like a Tetris game with stuck cars in many different orientations.

It's the same with a muddy and wet 4WD road.


> I'd rather share the road with a computer.

Not version 1.

Factoring in all these variables is going to be a lot of work for a non-universal feature. I have no doubt they'll be included some day, but I doubt it'll be day 1.

EDIT: To be clear, I believe that dealing with ice and snow is an order of magnitude more difficult than driving under normal conditions. This is an 80-20 feature that many people won't need. I can't imagine holding the product back for additional development when just locking the feature based on a thermometer would suffice.


I'm worried about the southern Europeans jumping into their self-driving cars thinking they can come visit us in northern Sweden in early spring (their spring; still full winter here).

How do you measure how slippery the road is? How much wind from the side? Will the wind blow me off the road because the surface has been polished by the wind just after those trees end, next to the lake? Will the car know not to set off at all because its winter tyres don't have good enough traction left?

A self-driving truck: does it know enough not to slow down on the uphill, because it will start to slide sideways or backwards where the road leans too much to the side?

I get gray hairs when I start thinking of all the problems that need to be solved before we have self-driving cars.

One of the developer woes: am I wise enough to be humble about the limits of my knowledge, or do I know just enough to be dangerous?


Modern cars can drive at their traction limit already, it's a solved problem.

https://en.wikipedia.org/wiki/Traction_control_system

It's been a safety boon to have a computer limiting power to the wheels when they slip, humans are really bad at estimating how slippery the road is.

Estimating for the road ahead is a different problem, but there is data to feed into that estimate.
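One piece of data the car already has is wheel slip, which traction/stability control watches; a back-of-the-envelope sketch of that signal (illustrative only, numbers invented):

    def slip_ratio(wheel_speed_mps, vehicle_speed_mps):
        # (wheel speed - vehicle speed) / vehicle speed, for a driven wheel under power.
        if vehicle_speed_mps <= 0:
            return 0.0
        return (wheel_speed_mps - vehicle_speed_mps) / vehicle_speed_mps

    # High slip at modest throttle is a strong hint that the surface is slippery,
    # which could feed an estimate of conditions on the road ahead.
    print(round(slip_ratio(16.0, 14.0), 2))  # -> 0.14, noticeable wheel spin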


If you read the manual for the traction control, it is full of big warnings on every page: 'Does not cancel out physical laws'. It is not a solved problem; it can only do its best uphill until the car does not have enough traction to continue upwards, at which point the car will slowly slide down or into the ditch.


So what's your point? Are you saying human drivers can safely navigate those same hills or something?


What he's saying is that a human will, in those conditions, build up speed in advance that is exactly enough to get up there before the speed has dropped too much (due to having to back off on the speed to keep traction). The computer can do the latter, but can it do the former? Probably when we get AI. But we don't have AI, just machine learning.


How well do the south Europeans jumping into their rental cars and coming up north to tool around on your weird roads fare now?


You presuppose individual ownership of self-driving cars. Ride hailing replacing those seems like a very likely outcome of self-driving tech, and it won't have your issue, thanks to "destination outside service area".

Actually, even with private ownership I'd not be surprised if we ended up with region-locked software, even if just for business reasons.


Well, eventually, when that computer has sophisticated software and all the appropriate sensors. An array of optical sensors that you ignore anyway won't really cut it.


Yet electronic stability control[1] works far better than human control on snow, wet roads, or ice. If you don't believe it, try an obstacle course on an ice track with and without it sometime (a required part of driver's ed in Norway). Example: https://youtu.be/hd5JMQr4qe0?t=56s

https://en.wikipedia.org/wiki/Electronic_stability_control


ESP works magic because it can do things that a human physically isn't able to do, namely "stop this one wheel from spinning faster than the others".

Incidentally, if you've ever tried driving on snow/ice/mud in a 4x4 car with fully locking differentials, you have a completely mechanical solution that beats ESP.

And FWIW, Swedish insurance agency research pointed out last year that cars with permanent 4x4 and ESP were up to 40% more likely to get into severe accidents in bad conditions, because the driver does not get the "hey, this road is slippery, better slow down" experience, so they go too fast for the prevailing conditions.

http://feed.ne.cision.com/wpyfs/00/00/00/00/00/3E/5A/A2/wkr0...


That made interesting reading. It would have been even more interesting if they had also divided the population of cars by the driver's sex and age. At least here in Norway it is very much the case that accidents are more frequent and more severe when a young man is driving. Also, I suspect that women on average are more likely to drive a two-wheel-drive car. My point is that we might find that four-wheel-drive cars driven by women are as safe as or safer than two-wheel-drive cars driven by women, and that the worsening of safety for four-wheel drive driven by young men is even more dramatic than the study showed.


Stability and traction control is not the same as a self driving car. Keeping wheels from locking, and keeping cars from sliding is largely a solved problem.

I think the previous poster was more about referencing recognition of patches of ice ahead and slowing down preemptively. This is a big issue in places with bridges, since ice can form on bridges but not the rest of the roads (since bridges aren't kept warm by the earth, the dirt under most roads acts like a heat battery). Also expecting when other drivers will slide. I've actually witnessed people getting, how would you put this, reverse-rear-ended by drivers trying to go up a hill and then sliding back down.


I'd argue that self-driving cars are better at recognizing ice patches. They don't just rely on visible light to see the ice, but have a more diverse sensor array.

The challenge there, of course, is having the car learn to interpret the input of these sensors. But that doesn't seem like an insurmountable hurdle. On the contrary, it would be pretty high on the list of things to tackle when aiming for driving in snowy weather.


Ignoring the real problem you mentioned, camera dynamic range (for example) is pathetic compared to the human eye.


Depends on the camera, but mostly yes. OTOH, it's (relatively) trivial to improve that for cars; it's way harder to improve it for humans.

(Also, as a data point, I think there was a guy who drove his Tesla with lights off, and it was able to stay within the lines)


Unscientific test. Give the human individual brake control (I can think of simple analog systems to do it).

I learned to drive on ice-covered dirt roads; many times I would have liked to have the ability to brake asymmetrically. A simple 2nd-order gimbal would be awesome. Having a single symmetric pedal dumbs it down way too much to make a pre-programmed car vs. human comparison.

Same thing goes for torque distribution.

It's generated dogma that pre-programmed devices can perform better than biological systems, so I bet my suggestion will be met with incredulity bordering on "y humans cant handle x".


> I would have liked to have the ability to asymmetrically brake

Yes, I have been in that situation too.

As for brakes: ABS is very bad at handling hard snow with a drizzle of fresh snow on top. It's nearly impossible to stop. I've had to actually turn off the engine in order to stop the car when driving (slowly) down a hill from a hotel in a certain snowy area, in order not to end up on the main road in front. That was with various rental cars (yes, I go to that place regularly in my job). The first time this happened I thought there was a technical problem with the car. So I asked around. Same experience from others. The cars vary, but they're all much worse than fully manual brakes, in certain conditions.

And then there was the time there was a reindeer (yes, a reindeer) running along the road, zig-zagging in front of the car. Snow conditions as described above. ABS refused to brake. Didn't want to switch off the engine (busy steering), so after about fifteen seconds of this I drove the car off the road instead, safely into a lot of snow.

And people want computers to drive cars, when computers can't even be used safely for brakes?

In any case, I've yet to hear about a self-driving car which can predict that another driver will get into problems in a few seconds unless I slow down or increase the speed, in order to allow a third car to adjust. This is something I see or do nearly daily, on my commute.


I have A/B'd my xar in the snow with and without ABS.

Its a trade off.

If I mudulate my brake lightly while threshold braking, I can usually stop 15% faster than similar application of ABS. However, when I mindlessly slam on the brakes, ABS wins by a margin of 20%.

This is not the full picture however, and better driving skills alone do not negate some abs advantage. The main one being control in Turns while braking on poor surface. Without a knack for sliding sideways around corners, ABS was clearly superior when braking during a change of heading.

That all said, I run without ABS, as it suits my personal driving abilities. This would be a mistake for most people. Fortunately for me, most performance cars afford me the option, while some others consider it a critical system and will not even activate the power steering unless it is detected as working. As if steering somehow became less critical without ABS.

Modern car engineering is still trying to turn a tool into an appliance, and it feels like a severe mismatch of values to me.
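To put the threshold-braking vs. ABS trade-off above in concrete terms, here's a toy sketch of the slip-ratio logic that ABS-style controllers are built around; the target and release thresholds are illustrative assumptions, not any manufacturer's values:

    # Toy slip-ratio brake modulation (illustration only, not a real ABS implementation).
    # Slip = (vehicle_speed - wheel_speed) / vehicle_speed; peak grip on most surfaces
    # sits somewhere around 10-20% slip, and a locked wheel (slip = 1.0) loses both
    # braking force and steering authority.
    def brake_command(vehicle_speed, wheel_speed, driver_demand,
                      target_slip=0.15, release_slip=0.25):
        """Return a brake pressure fraction in [0, 1]."""
        if vehicle_speed < 1.0:              # nearly stopped: pass the pedal through
            return driver_demand
        slip = (vehicle_speed - wheel_speed) / vehicle_speed
        if slip > release_slip:              # wheel is locking up: dump pressure
            return 0.0
        if slip > target_slip:               # approaching lock-up: hold pressure back
            return min(driver_demand, 0.5)
        return driver_demand                 # wheel still rolling: full driver demand

Threshold braking is effectively hand-tuning those thresholds for the surface under the tires, which is why either approach can win depending on the surface and the driver.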


A mind-machine interface would be probably appropriate there - I can't imagine a physical interface for this which could be operated by anyone except for an octopus (but I don't see many octopi driving anyway).


Variables I can think of that change:

- Lane lines cease to be a thing, even if you can see them, you're better off not following them

- Stoplights also mostly become a guideline, it's sometimes safer to run them than spin out

- Signage or signalling may or may not be visible at all

- Other cars or pedestrians may or may not be visible at all even in daylight

- Your stopping distance can effectively change every few feet

- A foot of snow accumulation may be transferred from the roof of the car in front of you to the windshield of your car with little to no warning


Of these, the following are attributable to "static variables become dynamic" (and highly nonlinear at that...):

- Your stopping distance can effectively change every few feet

- A foot of snow accumulation may be transferred from the roof of the car in front of you to the windshield of your car with little to no warning

The following are hard planning problems:

- Lane lines cease to be a thing, even if you can see them, you're better off not following them

- Stoplights also mostly become a guideline, it's sometimes safer to run them than spin out

But these are just damn hard and probably the true limiting factors:

- Signage or signalling may or may not be visible at all

- Other cars or pedestrians may or may not be visible at all even in daylight

But I'm not an expert on perception so... maybe that's my bias showing.


Infrared vision seems to handle a lot of these, perhaps especially the last one.


Frank: Well of course I know all the wonderful achievements of the 9000 series, but, uh, are you certain there has never been any case of even the most insignificant computer error?

HAL: None whatsoever, Frank. Quite honestly, I wouldn't worry myself about that.


E.g., going downhill in the winter commonly requires putting the transmission in neutral, since just the torque to the drive wheels from the engine idling or the engine braking can overwhelm the very tiny traction available.

There's a really short description for self-driving cars -- hyperhype.


What? I've been driving in the mountains in Canada for 15 years and have never heard of anybody doing this.


I do it every winter going down my driveway. Otherwise my tires would slide instead of roll; I'd lose directional control; just some bump in the driveway could turn my car sideways; and I could roll upside down, over and over, the rest of the way down the driveway.

And I'm in two wheel drive and not 4 wheel drive because the extra torque it takes to turn the machinery for 4 wheel drive is too much for the poor traction.

I stop at the top of the driveway, make sure I'm in 2 wheel drive, hold the transmission selector so that I can get into neutral quickly and easily, just slightly get the car rolling, go for neutral, mostly just let the car roll down at most just touching the brakes very lightly or not at all, let the car get to the level, bottom of the driveway and a start on the back yard, and then stop the car, go into reverse, backup a little, and then take a left turn into the garage.

Losing directional control and having the car turn sideways and roll over is a big risk. The driveway is not very long, but it's steep.


I have always heard the opposite: that it will be harder to brake and stop in neutral than in the appropriate gear.


There can be cases in two wheel drive where the engine is just idling with the transmission in Drive: you touch the brakes just a little, and the drive wheels keep turning and moving the car while the other two wheels, with so little traction, are not turning but sliding, with no directional control.


Or most people. If it is actually blizzard conditions, one should have a very serious reason to be on the road. If not, stay at home instead of driving. Snow days are called for schools. Maybe work in general also?


I think this is very regional. I have never had a snow day in my entire life because we get snow several months per year. I think the "snow day" phenomenon happens in cities where snow is rare enough that there is no infrastructure in place to take care of it (having that would be more expensive than just accepting everything stops a couple of days per year) AND not every car has winter tyres.

After 60cm snow on one night everyone is expected to be at school and work, on time, just like any other day. It should be noted that as a driver I also expect the roads to be bare at 7am even after that kind of night, and amazingly they almost always are. At least all major roads will be, but residential streets will not. https://static-cdn.sr.se/sida/images/109/b924f65c-8cc4-4332-...


Where there is a lot of snow the infrastructure handles it. Snow plows run all night long. There is 60cm of snow on the ground, but at most 3cm on the roads, and plenty of sand/salt underneath so that there are no ice spots.

Where there is little snow they don't have plows or salt/sand. 4cm of snow and you better stay home because the roads are covered in ice, and nobody can help you.


If roads are covered with ice then you should have studded tyres or at least M+S. If even 10% of the cars drove around with the same tyres all year, I'd stay at home too (even if MY car had studded tyres).


How cold/humid on average is it?

Where I live, temperatures between 2C to -5C make the worst driving conditions (you're typically dealing with a mix of freezing rain, snow, ice, and slush).

Even though it's colder, -20C and just snowing isn't as bad to drive in.


It varies a lot, but yes - near freezing temps is much more dangerous to drive in than very cold.


Here in the snowbelt we usually see a few snow days per year, especially in rural areas. Not because of snow accumulation, which is easy enough to deal with, but because of visibility while it is snowing, which can be reduced to zero. The roadways are closed during these events and it becomes against the law to be on the road.


Yeah I don't think most bosses in Boston are going to let you stay home every time it snows.


The last 2 jobs I've had we've always been told pretty explicitly if the roads are bad with snow DO NOT try to get to work.

Hell, I've been told not even to walk to work when it's snowing. Although, saying that, the last time it snowed here badly I tried to walk to the shop and fell over twice. Maybe my employer just knows me too well...


Yes, but the experience of a software developer is not representative of most people's experiences. A lot of jobs they're going to want you there if the roads are not literally closed, come hell or high water.


There's a difference between "every time it snows" and "actual blizzard conditions".


Self-driving cars don't deal with either scenario very well yet.


The temp doesn't rise above freezing in Minneapolis for 3 straight months of the year. There is snow on the roadways for at least 1/3 of the year. Your solution is that no one in Minneapolis should drive from November through March.


Not only are both your generalizations about winter untrue for most years, you know the difference between "snow on the road" and "blizzard conditions" if you live in Minnesota.


This is a very locational consideration. In areas that regularly have snow and ice, everyone will have winter tires on their cars, studded if icing conditions are expected, or even snow chains, and a big crew of snowplow operators will keep the roads relatively clear of snow at all times. Starting with salting the roads in preparation for cold weather, and then sending out the crews the moment there is a few millimeters of snow on the road.

I have never experienced a "snow day", although there have been days with problematic traffic conditions due to snow. (Usually due to an inexperienced or unprepared driver going off the road on the first day of snow.)


Bad trend when limitations of pre-programmed cars become apparent; treat the humans as the bugs.


Even worse: it works. Jaywalking 2.0, here we come!

https://www.vox.com/2015/1/15/7551873/jaywalking-history


In most places that have snow removal equipment “snow days” are actually called more frequently for cold. Leaving kids out in the cold is the issue.

Driving in basic snow is a solved problem for humans in much of the world.


> Leaving kids out in the cold is the issue.

I live in Norway and beg to differ. We see temperatures down to -40 C in winter, and the kids love it. We don't have "snow days", it's literally not a thing here.

All you need is proper clothing, some seal fat for rubbing on your face, and cold weather experience among the adults (checking the kids faces for warning signs of frostbite, etc.).

95% of Norwegian babies and toddlers sleep outside in their prams (in sleeping bags) unless the temperature goes below -20 C. Then they come inside for sleeping, but will still be outside playing.


I was baptised in -40 C weather. The temperature inside the church was just above freezing. My parents did take steps to ensure I was safely warm when outside, but there was no question of staying home.


“All you need is proper clothing, some seal fat for rubbing on your face”

These are not things widely available to many Chicago children.


Vaseline works fine. Affordable and available to even the poorest families. Let's not poke at cultural differences when it's not relevant.


And what about coats & shoes? I know this comes as a shock to people, but large swaths of your neighborhood's school children do not have access to those items.


> 95% of Norwegian babies and toddlers sleep outside in their prams

Can you elaborate? I don't understand this sentence at all.


Searching for the phrase on Google gives a good introduction: https://www.google.com/search?q=Norwegian+babies+and+toddler...

Summarizing, pram is short for perambulator, which Americans call strollers. In Nordic countries, it's believed to be healthy for infants to sleep in fresh cold air. To the horror of outsiders, babies and toddlers in prams are commonly left parked outside of establishments in freezing weather while the parent is shopping or dining inside.


Thanks. Definitely some culture shock there.


The one glimmer of hope that I have is that I am literally incapable of driving in the snow without substantial software assistance, and that's true for literally every other human I know.


I learned to drive on snow and ice in the midwest in a stick-shift car from the '70s.. of course, I would rather have traction control and ABS, but it can certainly be done (and it was for many years before the modern conveniences)


Sure, people could learn how to drive without software assistance.

Just like people could learn how to drive a stick shift.

But most people (at least in the US) don't know how to drive a stick shift or how to navigate snow without ABS and TCS.


I think everyone should learn how to drive in the snow without ABS and the like - just like I did, and I'm 30. The car was $400, front heavy, and had rear wheel drive. Learning when you should be applying gas, when you should simply take your foot off, and when you should brake (rarely!) in icy conditions is hard to learn otherwise.


You and the people you know may just not be suited for driving in those conditions. Mountain areas in California have plenty of people capable of driving in some serious snow with older cars; ABS is usually the only thing they have, no traction control or other software help. Anecdotal, of course, but so is your comment.


> You and the people you know may just not be suited for driving in those conditions

...northern Wisconsin gets its fair share of snow.

> ABS is usually the only thing they have

I think you've just agreed with me. ABS and TCS are exactly the systems I was referring to, and most of today's drivers have never encountered a car that doesn't at least have ABS.

I've driven a car without ABS or TCS in the snow. It's extremely difficult. Kind of similar to driving a stick shift for the first time. The idea is simple, but getting it into muscle memory enough that you're not constantly making mistakes is non-trivial.


Some basic knowledge of traction and momentum go a long way. Your car will generally go the way it is heading, as far as it can slide. Every time I see people lose control in the snow and ice, it is because they're driving with their brakes and not their momentum.


How did people survive before ABS and traction control?

It's not that bad. You just dial back your risk taking to reflect the situation.


Frankly, many of them didn't. Highway deaths have come down a lot since the invention of ABS.


People driving drunk less and wearing seat belts more has a heck of a lot more to do with it.


I don't disagree. Tons of factors contribute to gradually making highways safer over time.


Is that with proper winter tyres or not? With good tyres you can drive in pretty bad conditions.


Snow tires are a big help for getting going. They are much less help for stopping.


To the contrary, "winter tires" actually have comparable benefits for stopping. Here are the first three findings (of ten) from a recent University of Michigan report:

Finding #1: The main benefit of winter tires is improved tire adhesion, braking and cornering performance–not acceleration performance.

Finding #2: Winter tires provide improved traction on roads that are below 7 °C (45 °F) even when snow and ice are not present.

Finding #3: Stopping-distance performance of winter tires on packed snow is typically about 35% shorter than all-season tires and 50% shorter than summer tires.

http://umich.edu/~umtriswt/PDF/SWT-2016-10.pdf
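For a rough sense of what those percentages mean in metres, the usual flat-road approximation is d = v^2 / (2 * mu * g), ignoring reaction time; the friction coefficients below are illustrative guesses for packed snow, not values taken from the report:

    # Back-of-envelope stopping distance on packed snow: d = v^2 / (2 * mu * g).
    # The mu values are illustrative assumptions, not measurements from the UMich report.
    g = 9.81                 # m/s^2
    v = 50 / 3.6             # 50 km/h in m/s

    for label, mu in [("all-season", 0.20), ("winter", 0.30)]:
        d = v ** 2 / (2 * mu * g)
        print(f"{label}: ~{d:.0f} m")
    # all-season: ~49 m, winter: ~33 m -- roughly the one-third-shorter ballpark cited above.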


I agree, but "winter tires" are quite different from "snow tires".

Moreover, I was implicitly assuming that the stopping would be on snow -- not packed snow but, say, snow just as it fell, anywhere from a minute ago to several days ago.

For stopping on wet or slushy streets or just, say, 1-2 inches of snow, then, sure, "winter tires" not only help stopping but likely are better than snow tires.

But when my driveway has 8" of fresh snow, to get out I need snow tires, tire chains, or a tractor to pull me out. Then in stopping, say, from 30 MPH in such snow, nothing helps much except tire chains and there are questions about those.

Again, what is good about snow tires is that the big bumps on the tread let the tire, when it is spinning trying to drive the car, dig into the snow, kick it backwards out of the way, and let the rubber meet the road. For that, the aggressive tread, with the bumps, is essential. But the bumps work because the tire is spinning and digging, and that action won't work for stopping unless, say, going forward with, somehow, the drive wheels spinning in reverse.

The advantages of "winter tires" are how the tread cuts through water and thin layers of snow and has softer rubber with a better coefficient of friction when the rubber does meet the road.


I have driven in 8" of fresh snow with winter tyres, more than once; they work well. I do remember my grandparents having tyres with deep treads, but that kind of thing isn't common in Europe.


This is so incorrect it hurts.


My point is clear if you just consider a little of how snow tires work: The main point is the big bumps in the tread. These big bumps permit spinning drive wheels to dig down in the snow to find some traction, maybe the hard surface below. This digging down has the bumps grab a lot of snow, throw it back and out of the way, and let the tires get down through the snow to some traction. Ordinary tires just spin and throw far too little snow. Works great.

But in stopping, there is no such "spinning" or "digging". Instead, all the snow tires can do is roll or slide.

So, snow tires help a lot in going but not much in stopping.

Typically get into winter car wrecks from not stopping, not from not going.

So, once snow tires have you going, be darned careful about stopping -- the risk is not stopping, and snow tires aren't much help because the lugs don't get a chance to dig and throw snow.


>"My point is clear if just consider a little of how snow tires work: The main point is the big bumps in the tread. These big bumps permit spinning drive wheels to dig down in the snow to find some traction, maybe the hard surface below. "

That's not how snow tires work. Not at all. Independent of the design. And no, they don't 'stop' the way you describe either.


Modern snow tyres don't have big bumps in the tread.


The ones on my car do.


As a Canadian, it's the snow that's made me skeptical of self-driving cars. Given how much they depend on road markings, navigating properly in traffic is completely impossible in winter. Until that's solved, there are a lot of places that just can't use them for a lot of the year.


It seems plausible to me that LIDAR + radar could eventually outperform humans in white-out conditions.

Here's an interesting bit of research on filtering out snow noise from a LIDAR point cloud:

http://wavelab.uwaterloo.ca/?weblizar_portfolio=real-time-fi...
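The linked work exploits the fact that falling snow shows up as sparse, isolated lidar returns while real surfaces return dense clusters. A minimal sketch of that general idea, using a plain radius outlier filter (the paper's actual method adapts the radius with range, so treat this as the gist only):

    # Drop lidar points that have few neighbors within a small radius; isolated points
    # are likely snowflakes. Generic radius-outlier version of the idea, not the
    # authors' exact algorithm.
    import numpy as np
    from scipy.spatial import cKDTree

    def filter_snow(points, radius=0.3, min_neighbors=3):
        """points: (N, 3) array of x, y, z returns. Returns the filtered subset."""
        tree = cKDTree(points)
        counts = tree.query_ball_point(points, r=radius, return_length=True)
        return points[counts >= min_neighbors]   # counts include the point itself

    # Usage: a uniformly random cloud is all "isolated" points, so nearly all get dropped.
    cloud = np.random.rand(1000, 3) * 20
    print(filter_snow(cloud).shape)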


In those conditions, maybe; the problem is where the road is... or the lane. There are no markings of any kind when you have snow on the road. With no markings you need to estimate where the lanes are from what is near the road and what the other cars are doing. An exponentially harder problem.


But if snow occludes lane markings, which is what the parent was discussing, then no amount of sensing helps you see them, right?

(You can have a map with lane markings and localize on that map, though)


Except that many times when the lines are invisible, new lanes form that are in a different location. So your map will not match up with what the rest of the traffic is doing.


I agree. I was just anticipating the common response to that challenge, or any other "static" perception challenge, which is "but there is a map"


I mean... the map at least bounds the problem. You can be pretty certain in almost every case that the lanes won't be outside of the mapped roadway.


Waymo seems much less focused on road markings than Tesla and Uber, and instead more focused on the geometry problem of not hitting stuff. They are also in a great position to analyze summertime satellite images to get a better understanding of road markings. I'm optimistic that this is a very solvable problem for Waymo.


Solvable, indeed - in the Dartmouth-1956-workshop meaning of "solvable": there's as yet an unknown amount of research to be done, and it looks reasonably complex to be hacked at by a dozen people over the summer. Therefore, a few man-centuries of research and development will surely bear fruit.


I wonder how self-driving cars will handle highway lanes in active snow. Here in Minnesota four-lane highways regularly turn to three and two lane highways when the snow covers the roadway markers and people just start making their own general paths. Will the software just follow the crowd, or will the software car be the one weird vehicle ignoring the conditions and following the lane that no one can see.


I don’t see the fact that self driving cars can’t drive in all conditions as a reason not to still want one that can drive on the freeway in good weather. A lot of people spend a large portion of their driving time on freeways so if we could solve that to a standard of safety that is acceptable, that would be a huge win in my book. As soon as it starts raining, have the system disengage and I’ll take over.

It seems to me that a lot of people commenting here are a bit obsessed about the edge cases, and if it has edge cases then it’s not a “true” self driving car. I disagree. I think a car that can self drive in some situations would still be amazing.


The edge cases are a show stopper: you are talking about something like "driver assist" or upgraded cruise control. Thing is, you still need a human right there.

Much of the dream of self-driving is for no human driver there, for taxi cabs, school buses, 18 wheel trucks, local deliveries from pizza or Chinese carryout, USPS, UPS, or FedEx, etc. For that, for current technology, for current traffic, on current roads, there's no hope at all because the edge cases are way too common in practice (can't put up with mean time to destroying an 18 wheel truck of six months -- 5 million miles is more like it) and require full, wide awake, sober, mature human intelligence with full ability at reading, talking, understanding, natural language understanding, hand signals, flag signals, tough to read road signs, etc. Edge cases.


Safely disengaging is a pretty critical feature that we haven't heard much about. We know that Teslas don't do it, and that Ubers don't do it, and Waymo says that in their tests humans aren't capable of re-engaging in a short time frame. In a lot of the most vexing cases, I would guess the time between "everything is fine" and "the computer can't find any safe options" is quite short.

A self driving car that can't drive in all conditions should be positioned instead as a driver assistant that can't assist in all conditions. If the expectation is that the human driver is doing all the work, but sometimes the computer will help keep the lane, or with emergency braking, it's OK for that to not happen in rain or snow -- it was always the human driver's responsibility, the computer will help when it can.


Even full autonomy, but restricted to very limited intervals could make sense as a compromise. E.g. up to five minutes of hands-off (and eyes off) per hour or something like that. It looks weird at first, because hey, if the car can drive itself for five minutes, why can't it drive itself for five hours? But only at first glance. I believe that it makes a lot of difference which is the regular case and which is the exceptional. Humans would be terrible at idling most of the time to occasionally take over, but computers can be just as attentive waiting to take over as while actually driving.

Also, that time-limited "hold the wheel" mode would typically be engaged exactly for those times when the human driver is not at their best (tempted by communication devices, tempted by the food basket or just wrestling with navigation), so achieving better than human would be relatively easy.


The first cars ever couldn't drive in 100F+ heat, and wouldn't start below 0F. And even worse, there were virtually zero gas stations! Especially in rural areas!

Does that mean they were useless? Absolutely not!

Lately the mindset seems to be unless a product is all things to all people at all times then it's useless.

In reality a product just has to fit a category really well and it will sell like hotcakes. Then it will evolve in iterations.


This reminds me of Intel's Optane. Yes, they overhyped it and it doesn't perform 1000x faster than SSDs. But you have to consider that SSDs required decades of improvements to become viable and cheap. Even when Optane was a first generation product it was already better than SSDs in terms of latency and IOPS, but despite that people thought it was a massive failure.


Sure. This is just pushback to the recent hype "it has full autonomy capabilities [small print: except not at all]". If the first cars had been marketed as "better in every aspect and in any conditions", you would have seen similar reactions.


I think it is because I assume that highway driving in good weather is a solved problem. I'm a skeptic so I also belong to those that believe that "complex" driving in bad weather is a nearly impossible problem to solve. So the interesting part of this discussion isn't "will cars have useful autonomous features?", but "will we see completely autonomous cars?". The second part is key because that is what things like autonmous taxis require.


My old summary has been that current driving on current roads with current traffic occasionally, but too often in practice, requires actual, full human intelligence, generally that of a prudent person over 16.

So, the intelligence of a mouse, crow, parrot, dog, cat, monkey, elephant, dolphin, orca, etc. just is not enough.

So, self driving cars requires essentially full AI for driving and nearly everything else in life. So, the self driving car problem is no easier than the full AI problem.

Sure, there can be some special cases that are easier -- trucks in a huge, open pit copper mine, a tractor on a huge, flat farm in Missouri, some military battlefield situations, and the public roads if we heavily reengineer them with lots of essentially electronic tracks, etc., uh, right, on roads in perfect condition, no big objects falling off trucks, no drunks, no tire blowouts, dry weather, daylight, perfect visibility, no alarms from an extensive monitoring system, and no rain, sleet, or snow. Then, sure, a "self-driving car"!!!!


Honestly, it seems like a more workable solution would be to improve the roadways with built-in beacons to indicate where a road is, and where it isn't. The cars would work only on the improved infrastructure, but they would work. I suppose there is a danger of nefarious actors spoofing beacons to make you drive off a bridge, but it seems far-fetched.


To me that removes the "self" in self-driving car. A car that stays on the road because there are magic beacons is like bowling with those giant bumpers in the gutters. It'd be more like cars with uber cruise control.


Don't we want driving to be more like bowling with bumpers in the gutters?

Safer is better, no matter how we get there.


> Safer is better, no matter how we get there.

Not really true. It'd be safer to just get rid of high-speed vehicles in the first place, but that would be disastrous.


Yeah, I knew that was a bit hyperbolic when I wrote it.


That would be a train, essentially. Nothing wrong with that - but some of the stakeholders would need to eat crow. That could prove problematic, as some actors have doubled down on the "full autonomy real soon now™".


It could be more easily done for limited access roads and lanes, like HOV and express lanes. They are already heavily marked.

Still in the end it requires a combination of changes to how we mark roadways, indicate construction, detours and the like. Think of it like the ADA but for cars.


I would think of it less as an "accommodation" and more as an evolution. The way that roads are marked right now is a direct consequence of the fact that the markings need to be consumed by humans moving at high speed. When highways were for oxen, horses, and pedestrians, they didn't need guardrails, explicit breakdown lanes, and exit ramps with highly reflective signs. In the same way, when the majority of the piloting entities using the road have other input options besides visible light, to support them we can/should take advantage of those other channels.


A recent Waymo leak suggests rain is solved and snow looks likely to be solved next year.

https://thelastdriverlicenseholder.com/2018/09/14/waymo-plan...


>For the local breed of unflappable seagulls—which can stop autonomous cars by simply standing on the street, unbothered by NuTonomy’s quiet electric cars—engineers programmed the machines to creep forward slightly to startle the birds.

Seeing the fun my dog is having chasing the birds, I'd suggest that a Boston Dynamics dog would jump out of the car, scare the birds away, jump back (and continue to drive while watching electric sheep funny videos :)


Or construction, or detours, or cops directing traffic ...


I'm still curious about street signs.

Can any self-driving car read parking signs? It's not like there is some API for when/where you can park or drop passengers off.


Waymo for one definitely reads signs, but I'm not sure about those. Lots are written in ways that the average adult human can't decipher.


I kid you not - “No parking Mon Tues and Thurs. Every Month” is a real sign. I'm yet to figure out what that means. Then there are the slightly confusing ones with conflicting messages, which you learn to decipher with experience. Placement of signs (esp. one-way) is not consistent and does not take self-driving cars into account - how will a car exiting a garage or a highway know if it's one-way or not? Definitely not without mapping software.

As someone who had to relearn the rules of the road for a new country - there is a lot that's unsaid.


I wouldn't be surprised if something like that existed now or in the near future. There are APIs for speed limit, it would make sense to have them for other traffic rules as well.


My naive view is that a simple RFID standardization would solve that.

Together with some regulations, of course. Though this already has to be regulated. Not just anyone can put up an "only park during certain times" sign. (Right?)


Sure, put up a sign "do not park if moon full, no red cars at any time", on your own property. This only differs from the regulated road parking by attributes that are not visible on the ground (ownership).

But worse: RFID makes for duplicate signage. What if the visible and data signage differ?


For the last part: If the cars, for the most part, actually are able to read signs (which they should), the RFID would serve as additional security.

And in this case, any discrepancy would be reported back by any car passing by, so the error could be corrected quickly.
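As a very rough sketch of what that compare-and-report step could look like -- every field and name below is invented for illustration, no real standard is being described:

    # Hypothetical RFID parking-rule payload plus a check against what the camera/OCR
    # pipeline read off the physical sign. All names here are made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class TagPayload:
        rule: str     # e.g. "no parking"
        days: str     # e.g. "Mon, Tue, Thu"
        hours: str    # e.g. "00:00-24:00"

    def reconcile(tag: TagPayload, ocr_text: str) -> str:
        """If the machine-readable rule doesn't match the visible sign, flag it."""
        text = ocr_text.lower()
        if tag.rule.lower() in text and tag.days.split(",")[0].strip().lower() in text:
            return "consistent: apply the rule"
        return "mismatch: obey the visible sign, report the tag for correction"

    tag = TagPayload("no parking", "Mon, Tue, Thu", "00:00-24:00")
    print(reconcile(tag, "No Parking Mon Tues and Thurs. Every Month"))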


If they differ, then there will be regulation on which applies. No different than current practices. Usually happens if there are two languages.


Sensible 4 (http://sensible4.fi/) uses a bunch of different sensors, including LIDAR, IR and normal cameras. It seems to do OK in a typical winter environment: https://www.youtube.com/watch?v=nQWrngWvcnU


That's awesome, they also have some old videos from 1990-1993:

https://www.youtube.com/watch?v=t7jxensSdhE

https://www.youtube.com/watch?v=IMlz5tAKmUs

They've been working on this problem for a very long time. Which explains the progress they've made. I don't know how people can say this is an unsolvable problem when some high end research applications have made such progress.

Plus there's Microsoft's 2016 announcement with Ford about algorithms that can work in snow/rain by identifying raindrops and snowflakes and then 'ignoring' them:

https://qz.com/637509/driverless-cars-have-a-new-way-to-navi...


This is one of the times where I am glad to be working in maritime autonomy and not (any longer) in automotive -- we have the considerable benefit of being able to restrict or modify the operating procedures for our autonomous vessels in environmental conditions where normal operation is no longer safe.


Not to be cynical, but the article sounded like a sales pitch for WaveSense. They too want a piece of the pie in the self-driving race. As mentioned in the article, each sensor (radar, lidar, camera) has limitations on its own. But self-driving cars depend not on any one sensor; they depend on the fused output of several. They also use V2V, HD maps and GPS/DR for localisation. WaveSense is another sensor that can help solve the localisation problem, though it too might have difficulty where fallen metal objects reflect different signatures, causing trouble. The more sensors, the more confidence for self-driving.

Edit: Grammar & some terminology


All the extra sensors that can be added to cars in addition to the suite of cameras make self-driving cars seem inevitable, yet no manufacturers appear to be close to bringing one to market.


The problem isn’t sensing, it’s decision-making.


The trolley problem is a tough one.


I've never understood why. I can't imagine any programmer or self-driving car company having any interest whatsoever in going down that particular rabbit hole. The answer is simple -- we don't expect humans to do realtime analysis of who is worth killing when an accident is happening, so we will not expect that of computers either. It's an unsolvable problem. So we just tell the computer that in the event that an accident appears unavoidable it should do everything within its power to stop the car immediately. End of story.


It's a thought experiment; do not expect a literal trolley (or even literal corpses!) any more than Schrödinger's cat deals with actual felines.

Here's a restatement without all those corpses: In the event that an accident appears unavoidable, the computer should do everything within its power to stop the car immediately. (This is already a given: safety first.)

"If there are multiple ways of doing the above, what variables should the computer optimize for - stopping distance alone, or stopping so that it doesn't immediately get rammed from behind?" There's your trolley problem again, just restated so it doesn't appear so offensive: in both cases, the occupants of the vehicle are in danger, as are the occupants of nearby vehicles. Now is the interest in the rabbit-hole clearer? The problem doesn't go away just because it's inconvenient to solve...


Well, that's where crumple zones come into play. Trying to have a computer calculate whether or not it is feasible to stop slower to avoid or reduce an imminent rear collision gets very complex in a hurry. What if the guy behind you is much smaller, or much bigger, what if his car has better crumple zones, or none, what kind of energy is he going to impart to you when he hits and where will that send you, who is at that location, etc. These are the kind of questions that can't be answered with certainty even when you give a supercomputer a few days to work it all out. So I think the only reasonable solution is to not expect self-driving cars to make any kind of moral decision. As a bonus it is much easier to explain in court and saves us endless pontificating on whether or not the computer made the right choice and who is actually responsible if it did not.

We tell humans not to swerve for squirrels, because it's a great way to end up dead, so the computer should get the same instruction. Slow down as you can, the prime directive is to maintain control.


In that case, I present a SDV that never makes a wrong choice - by remaining stationary at any cost ;o)

In other words, there is always a balance between safety and usefulness, and the question of "is this the right choice" is always upon us, whether we want it or not. You can never have certainty anyway, what the software is doing is maximizing on some reward function. The TP also asks "is there even a moral component to this?" It seems there is, from the range of emotions this conjures up.

(btw "do not swerve to avoid unknown objects" has directly caused at least 1 dead person - Elaine Herzberg - so that's a really unfortunate maxim for illustrating your point: the car has also an obligation not to be a danger to others. "We just maintain control and everything else be damned" is easy to explain in court, true: IANAL, but sometimes you need more than a simple explanation to avoid being convicted.)


To be fair, I said don't swerve for a squirrel. History is filled with examples of people giving up their own life for a squirrel/cat/dog because they swerved and lost control. You should have planned in advance what your threshold is for swerving. E.g. just brakes for anything <= white-tailed deer, and active avoidance w/braking for anything bigger, or human.

Elaine is not a great example because not only did the car choose not to swerve, it also chose not to even try to brake. And on top of that, a human would have seen her from a lot farther away.


Because it was trying to decide if it's a shopping bag or a bike or a human or a squirrel...for six seconds, long enough to stop multiple times. The problem, from what we know so far, wasn't "car didn't see her," it was "car wasn't sure what it was, therefore squirrel."

And the larger point stands: "protect occupants, ignore outsiders" is a choice in the TP, always choosing the same strategy is not "TP is irrelevant."


And this kind of evaluation of one life against another is completely illegal in Germany. I.e. the German automotive industry legally cannot build such systems into their automated cars, and no cars operating in Germany can have these systems either. I fully expect other countries to enact similar laws, if it ever becomes necessary.


In my entire life, I've never encountered the Trolley Problem, and I've never met anyone who has.

Most safety problems in driving can be solved by slowing down.

The problem with AI is all those weird, little edge cases that humans can reason through -- for example: if there's a deer next to the road, then I'll slow down, even if it's not on the road yet. I've known many people who have hit a deer when it spontaneously jumps into traffic. Or something like: someone's not quite staying in their own lane, so I have to be careful when I pass them.


Perhaps most safety problems. But if you need to outrun an erupting volcano, a tsunami, or the police, slowing down won't really help you.


"... outrun ... the police"

I doubt that this is a use-case that legal, for-profit companies will be pursuing.

For the other use-cases, you can just say: Manual driving only.


I think that this is a more realistic/relevant example of "slowing down or stopping is not always the safest action":

https://www.youtube.com/watch?v=eWuK-fi-D_w


It has nothing to do with the trolley problem. The trolley problem is a headline-grabbing non-problem in the actual design of driverless cars.


The real trolley problem:

[ ] Let self driving cars kill people during development but try to make it up by saving thousands of lives after they are perfected. (Uber)

[ ] Make self driving cars extremely safe from the start but more people end up dying from manual driving because development takes longer. (Waymo)


The underlying assumption that "general autonomous driving can be made much safer in a short timeframe [years]" is not a given. It may be that the first option could have a death toll higher than what it would prevent - but the thing is, we don't know; it's an unknown, an assumption that "it will be much better once everybody drinks the koolaid."


Well boys, looks like it'll be another couple of decades


Or it could start taking off 2-3 years after the bubble bursts. Amazon lost 90% of its value between 2000 and 2002, and many of the big players today entered the market between 2002 and 2005, when confidence in the "new economy" was at a minimum.


That's because of LIDAR; rain and snow interfere with LIDAR. A simple Google search shows that: https://www.quora.com/Autonomous-driving-how-good-is-LiDAR-u...


As LIDAR's pièce de résistance to date has been its ability to "see through" forest canopies to the terrain below, I don't understand how rain or snow should present an unsurmountable obstacle to seeing the road ahead.


Trees don't refract and reflect light.


Of course they do. Airborne lidar usually works in the near infrared spectrum which is not absorbed by wood. It is however more absorbed by leaves which have a higher water content. My main point is: airborne lidar (which is what OP is referring to when talking about removing forests and keeping the ground) has multiple returns for one laser beam. Laser beams can have a radius of 1-2 meters or more when they hit the ground. By keeping only the lowest return in the z direction you most certainly have a ground point. It's not rocket science and interpreting ground lidar scans (velodyne and such) in bad weather is a much much more difficult problem.
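A bare-bones sketch of the "keep only the last return per pulse" step described above, assuming each point carries a pulse id and return number (roughly what LAS-format airborne data records as return numbers per pulse; the tuple layout here is just for illustration):

    # Keep only the last return of each outgoing pulse; over vegetation that is usually
    # the ground. Each point is assumed to be (pulse_id, return_number, x, y, z).
    def last_returns(points):
        best = {}  # pulse_id -> point with the highest return_number seen so far
        for p in points:
            pulse_id, return_number = p[0], p[1]
            if pulse_id not in best or return_number > best[pulse_id][1]:
                best[pulse_id] = p
        return list(best.values())

    pulses = [
        (1, 1, 0.0, 0.0, 22.0),   # canopy hit
        (1, 2, 0.0, 0.0, 3.1),    # ground hit -> kept
        (2, 1, 1.0, 0.0, 2.9),    # single return, already ground -> kept
    ]
    print(last_returns(pulses))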


What would prevent using the same technique of only retaining the maximum Z direction return to tell the difference between a refracted rain drop and car bumper?

Perhaps it is that the rain drop refracts the beam to all sorts of other objects which can make it difficult to tell which actually has the longest Z-distance return?


If I understand correctly, airborne LIDAR works in the time domain and is much more expensive, while the LIDAR on automobiles is much more affordable but works in the frequency domain...


To be fair, rain and snow interfere with millimetre wave radar too.


And visible spectrum light (human eyes).


And humans have the unique ability to instantly and reliably differentiate between visual noise and objects on the road they wish to avoid.

Lidars record much less information than two optical lenses (eyes). You get distance in 1, 2, or 2+1 dimensions, and intensity. Eyes can detect distance, wavelength, reflections, refraction, contrast, brightness, etc. All with the assistance of the brain, of course. If today's computers and software had the pattern matching capabilities of even a human child's brain (reasoning aside), a few moderate resolution video cameras would be all that is necessary for self driving.

But we're not there yet so we employ sensors with limited detection that is simpler to interpret programmatically.

I think even humans would have trouble interpreting the mess of lidar distance graphs in a rain/snow storm, though they'd still do better than current computers.


It sounds like a post created by the public relations department of the company creating the underground sensor.


An X-prize for autonomous racing on lubed up figure 8 tracks would solve this real quick.


I think the real game changer will be when self-driving cars also communicate with each other in some kind of mesh network sharing road surface, weather, and obstacle information.


It's pretty funny how we're running into AI-winter at the same time the autonomous vehicle VC boom is moving all their money to scooters.



A basic question: what would we achieve if we made successful self-driving cars? And can we achieve the same without building them?


Neither can most humans driving cars.


That is clearly not the case.


That clearly is the case; it was my experience growing up that even seasoned New England drivers wouldn't take the weather seriously and ended up in a ditch on the side of the road.


That's not really a good point. When New England/Canada drivers drive through a snowstorm, the vast majority of them get to where they were going just fine.

In comparison, basically no self driving car could safely drive 100 ft during a snowstorm.

Getting self driving cars to be as good as humans during harsh weather (which is not that good I'll give you that) will take 10 years in my opinion (I say this as a grad student in mobile robotics).


I'd say betting on where any tech will be 10 years down the line is a fool's game. In 2008, people were saying the same thing about object detection, or literally hundreds of other things that deep learning has since done that people said were impossible.

Autonomous cars are already safer on ice. Being able to individually apply ABS to each wheel within microseconds of detecting a change in terrain is far more accurate than any human.

As far as snowfall goes, for the vast majority of places, hi-def maps and semantic knowledge of signage already exist, so as long as the car can localize, it can obey snow-covered signs, and Google and others are already working on filtering out weather "noise".

The problem isn't as intractable as it seems


I think you're mostly right; the only thing I'm more skeptical about is the availability and reliability of high-def maps and semantic knowledge, as you say. Maintaining such a thing will be expensive, and I wonder how it will scale country-wide as opposed to a few select cities. I bet using the self-driving cars themselves to do the mapping and correct mistakes will be a big part of that.

Rough localization in a snowstorm (as in good enough to use prior knowledge of signs) or rain shouldn't be too hard. GPS still works in bad weather, and lidar localization (scan-to-map matching) should be hampered but still usable.
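As a sketch of why "hampered but still usable" helps: two noisy position estimates can be blended by their reported uncertainty. This is just the one-dimensional inverse-variance version; real localizers do the same with full covariances in a Kalman filter or factor graph, and the numbers below are made up:

    # 1-D inverse-variance fusion of two noisy position estimates (e.g. GPS and lidar
    # scan matching, both degraded by weather). Illustrative scalar version only.
    def fuse(x1, var1, x2, var2):
        w1, w2 = 1.0 / var1, 1.0 / var2
        x = (w1 * x1 + w2 * x2) / (w1 + w2)
        var = 1.0 / (w1 + w2)   # fused estimate is at least as certain as the best input
        return x, var

    # GPS says 105.0 m along the road (sigma ~3 m), scan matching says 102.0 m (sigma ~1 m):
    print(fuse(105.0, 3.0 ** 2, 102.0, 1.0 ** 2))   # -> (~102.3, variance 0.9)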


In 2015 there were AI experts claiming that a computer wouldn't beat a 9-dan pro at go for at least another decade. It happened a year later.


"seasoned New England drivers"

Seasoned New England drivers are idiots. Source: Am seasoned New England driver.


The self-driving car hype has failed to recognize that humans are, in fact, exceptionally good at driving cars. There are many car accidents, and many fatalities from car accidents, but a relatively small number considering the number of cars, the number of drivers, and the number of miles driven, every single day, across widely varying conditions worldwide. A large percentage of humans drive every single day, as routinely as getting up from bed, and for most of them this does not have a bad result.

The idea that we can quantify all of the subtle little ways the human brain instinctively has gotten so good at driving cars is supreme arrogance.


The Uber car failed in Dark of Night too.


It didn't fail because it was dark though.


Nor glom of nit


I wouldn't be surprised if these problems proved intractable enough that the solution will be adding electronic guides to the roads themselves.

Things like burying a wire that can be followed, embedding RFID devices in the pavement (along with an online database that gives their precise location and any updates on road conditions), etc.

It would be conceptually similar to all the ground based systems that guide aircraft.

Of course, these would be deployed only on high value roads like freeways first, and you could at least let the autodrive work on the freeways, and do the last mile yourself.


Oh really? Tesla tested in Michigan winters. I find it hard to believe this is not possible.


If you’ve used any sort of ADAS this is old news.


Humans can barely handle these conditions!


The problem here is that you are half-right. It's localized conditions. No country-wide, much less worldwide, system can account for local anomalies.

Where I learned to drive, most humans could 'see' black ice by predicting where water will pool, from the recent weather, and from subtle clues that abound but are not easily expressible. Any out-of-towner would surely be spun out or driving 10mph, white-knuckled, aghast at the "maniacs" flying by at normal speeds. While in my college city, I had to relearn everything, as it was rainy and the drivers hyper-aggressive and 15mph faster. The same model would struggle to encompass both modalities.

Until AEB stops accidentally triggering constantly on colleagues' cars, I have zero interest in trusting my life to an algo connected to a 300hp steel cage.


I've often wondered how much of this is hubris or fear vs. physics. What I mean is - are people in say, icy and snowy climates actually better drivers, able to adjust to the conditions, or are they fooling themselves? Likewise, the person going exceptionally slow and cautiously - are they misjudging the actual risk of inclement weather, relative to an actual scientific judgement?

It certainly feels like humans become improved drivers through various weather / road condition experiences, but I sometimes find myself wondering whether my gut instincts would align with a scientific approach.


I don’t think they are qualitatively “better drivers”. But they are more experienced at certain things. When I lived in Minnesota, I got lots of practice driving in snow and ice and rain. Now in NorCal, not so much :)

Its a skill. You get better with practice. The person going slow probably just hasn’t had enough practice yet with the conditions. Going slow is probably a good choice for them. Am I a better driver overall for knowing how to handle ice? Probably not, but I might be the best person to do the driving under those conditions if everyone else in the car is from warmer climes.

I remember standing in line at Minneapolis rental car counter one night when it was snowing an inch an hour with 4 inches on the ground already. Agent to the guys in front of me: “There are chains in the trunk if you need them.” Customer with heavy southern accent: “Chains? What the heck are chains?” I wonder to this day if those guys made it to their hotel.


It's also local, not just general snow/ice/rain. I can figure out where ice will probably be here. If I go north into the interior where it's colder, not quite so humid, and snow collects over longer periods? Not a chance - I don't know the local weather, what patterns there are, what the road maintenance is like, etc.

Same deal for where the rain will be bad, where I might need to worry about hydroplaning, etc.


I think auto racing has proven there are most certainly qualitatively 'better drivers', adjusted for training and experience.


And learning to drive in the ice and snow is definitely a leg up; in racing (I have heard this in rally and F1) "If you want to win, hire a Finn"


For some definition of 'better'. I know excellent race drivers who were in rather unfortunate and totally avoidable road accidents.


I've always been curious what "skills" need to be practiced to drive in the snow. It isn't overly technical. Just don't accelerate or brake too fast, and don't turn while you are doing either. If you feel the car start to wobble or fishtail a little, it means you are going too fast.

That's literally all I change, and I have done fine all my life.


One "skill" that very much needs practice is skid recovery (for those eventual times when the car, no matter how careful you are being, will want to skid).

It was described in the drivers ed manuals when I got my license, but it is a skill that has to become muscle memory habit (i.e., you don't have time to think "which way do I turn the wheel and how much", you have to "just do"). Without any practice (which was way easier in the 80's with rear wheel drive vehicles) you never got the "touch" down to just turn the wheel the correct direction and amount to recover when the car does start to skid.


Fishtailing doesn't necessarily mean you are going too fast. If you are cruising, it means let off the gas. Have you ever driven a rear wheel drive car on a packed-snow road? You will fishtail when accelerating from a light or stop sign. You need to build some speed and ride the fishtail out.


People in icy and snowy climates are not necessarily better drivers, but usually drive better in ice and snow than people who are rarely exposed to such conditions. It's a skill that can be learned through repetition.

They also tend to appreciate the importance of winter tires more.

This is anecdotal, but based on a rather large sample of people I know who moved to a nicer climate.


I am confident it is both. As someone who races rallycross for fun, I am sure I am an exception, but I am oft amazed by ordinary drivers every day when skiing.

One cannot discount the psychological factor of feeling 'in control' when that control decides whether you or others might die.


The people in snowy climates who are going slow are the better drivers. Moving isn't the problem, stopping is.

Personally, I'd say there's a pecking order from awesome to awful for the median driver in snow as follows: Buffalo/Vermont natives, NYC/Boston, DC Metro/South, Florida/Texas.

Likewise, if you're from the Northeast, driving in rain in the southwest is surprisingly hazardous.


I still cannot comprehend the current push for automated driving.

We, collectively, not just the US, have a problem with too many cars and instead of reducing said number of cars we want to make them self crashing?

Instead of investing heavily on public transit systems we keep feeding this fable that somehow, in the near future, cars driven by computers will do at least as good a job of driving as sentient, sober people.

I can't get my smartphone to understand my language and do simple things by voice commands and yet we seem to think that there's (nearly) available technology to make a car drive by itself...

These companies must not exist in the real world, that's the only possible explanation.


> I still cannot comprehend the current push for automated driving.

Google and Uber believe they can make an insane amount of money with the transition to automated driving. This is being sold to the public as the solution to a public safety crisis with the narrative that human controlled cars are death machines which need to be taken off the roads as soon as possible.


I wonder how long will it take them to realize that it won't happen before someone cracks the artificial general intelligence problem...

I don't know what ticks me off the most, people believing that these companies can easily solve the automated driving problem or these companies stubbornly pushing for it.


I'm sure plenty of engineers actually working on the problem are aware of how difficult it is... but they're not the ones doing the pushing or making the grandiose claims.


Fine by me, research is research.

I'm just utterly surprised by how so many people think this is a fixable problem in the near future, it's not.

How long until people realize that?

The next AI winter won't come soon enough.


Self-driving cars, when perfected, will drastically reduce the number of cars.

Once we're there, Uber-style self-driving cars will be cheaper, safer and more reliable than your own car. And you don't need to park them. Only those with special requirements (i.e. professionals who need to store equipment in their car) need to have their own; the rest of us can use a taxi service.

Furthermore, when human-driven cars are removed from the roads, road capacity could increase by 50-100% or more, as the distance between cars (in both dimensions) can be safely reduced (see the rough headway arithmetic at the end of this comment).

Also, you will not need to pick up your children/parents/others that cannot drive. For instance, if you drive your kids to school, you likely have to drive both ways, doubling traffic (and wasting your time).

For commuters, self-driving cars also lend themselves very well to ride sharing, having cars that function as full office spaces, entertainment sources, meditation rooms or whatever you can think of. If all cars are self-driving, speeds could be kept regular, and uncomfortable acceleration/deceleration avoided.

Finally, the tech going into this is likely to be generalizable to other kinds of machinery, such as mining machinery, construction machinery, automated chefs, as well as the functions in manufacturing that have not already been automated.

I think the potential of the end state should be quite clear. The total economic potential is huge. Anyone able to attain near monopoly for one of the technologies going into this (software, sensors, traffic control centers, etc) stand to be the next Google.

That said, I think it is unlikely that it will take off in full until between 2025 and 2035. Now, we are in a situation similar to the .com era around 1998. If history repeats itself, 2023 may be a great year to invest in robotics startups, provided we see a bubble bursting.
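On the capacity point above, the standard headway arithmetic gives a feel for where a 50-100% figure could come from; the speed and gap values below are illustrative assumptions, not measured following distances:

    # Rough lane capacity from headway: vehicles/hour = speed / (vehicle length + gap).
    # Gap values are illustrative assumptions, not measured following distances.
    speed = 100 * 1000 / 3600      # 100 km/h in m/s
    car_length = 4.5               # m

    for label, gap_seconds in [("human (~2 s following gap)", 2.0),
                               ("automated (~1 s gap)", 1.0)]:
        gap = gap_seconds * speed
        per_hour = 3600 * speed / (car_length + gap)
        print(f"{label}: ~{per_hour:.0f} vehicles/hour/lane")
    # ~1700 vs ~3100 vehicles/hour/lane: roughly an 85% increase with these assumed gaps.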


I don't deny that the bright future you've just described will happen (well, might happen) but putting it in a time frame of less than 20 years seems to me just wishful thinking.

If only I could see the same enthusiasm for say, improving public transit range, infrastructure and operational costs... nope, auto-cars will save us.

Take a look at Germany, they've just launched the world's first hydrogen-powered train.

As a side note:

The total economic potential is huge. Anyone able to attain near monopoly for one of the technologies [...]

Economic potential and monopolies don't mix very well, unless you're one of the shareholders of said monopoly.


Well, about 20 years seems about right, for the full potential. But it is possible that part of the potential can be achieved much sooner, my guess is that the transformation will start in full around 2025.

On monopolies: Look at the big companies coming out of the internet revolution. Google, Facebook, Amazon, eBay, Uber and the rest are all near-monopolies in their core segments. As are older giants, such as Microsoft and Apple.

I'm not saying it is good, I'm just saying that the lesson from the previous wave, is that the winner takes it all. And as investors have learned this, they don't want to be late for the next party, hence the hype.

Of course, the risk that the bubble will burst at least once before the actual party, is pretty high.



