Hacker News
Automakers Are Rethinking the Timetable for Fully Autonomous Cars (designnews.com)
155 points by AndrewBissell on May 19, 2019 | 311 comments



Colour me not surprised in the least.

Speaking as someone in tech talking to others in tech, I'm constantly surprised by how optimistic people are about the timetable for fully self-driving cars. Honestly, I think we're still 20+ years away.

Here's another thing to consider: people act differently when it's a machine rather than a person they're affecting. There are countless examples of this, e.g.:

- I live in a building with a doorman. The main purpose of a doorman is to be a person who prevents criminal action from starting. After all, a person can be fooled, distracted, or bribed, yet people still seem less inclined to mess with an unpredictable human than with, say, an unmanned security system on a similar building.

- Forget humans, this applies to animals. It's well-documented that just having a dog (or even saying you do) reduces the likelihood you'll get burgled. Thieves will generally pick easier, more predictable prey.

- ... and this brings us back to cars. Reading some of the coverage of Waymo, Uber, Tesla, etc., you see that other drivers act differently around a car they realize has no human driver than they would around a human-driven one: cutting it off, messing with it, etc.

How exactly does an autonomous car deal with that aspect of human nature?


> as someone in tech talking to others in tech I'm constantly surprised how optimistic people are

Me too. I can't explain it. Here's one point I try to make (in vain): when people imagine such tech, the expected level of "just works" is very high, almost sci-fi high. The car would perform at a superhuman level, making near-perfect split-second life-and-death decisions. There's very little room for software/hardware failures. Where do you, as someone in tech, see anything consumer-facing that is even close to that level of polish? It's <current year>, and I still have to restart my Firefox browser every few hours because it gets bloated [0]. We haven't figured out Bluetooth yet [1]. Heck, state-of-the-art CAPTCHAs are using street signs [2].

[0] https://www.reddit.com/r/firefox/comments/ak18uz/help_insane... [1] https://news.ycombinator.com/item?id=19956512 [2] https://meta.stackexchange.com/questions/296574/captcha-with...

Of course these are tongue-in-cheek anecdotes, but the point stands: it seems we just haven't figured out how to make complex code work very well yet.


Consumer software hasn't chosen bug-free as a path, because while you can build software like NASA does, you then have to pay money like NASA does and develop slowly like NASA does.


Sadly, NASA does not pay as well as you think it does, at least in my anecdotal experience (I've worked with NASA and DARPA folks who left the government sector for better pay). You'll get better pay at a tech company in a major US city.


I think the GP is commenting on the overall cost of software at NASA as opposed to individual salaries. At that level, software/hardware bugs will kill people, so testing/safety is paramount.

Building non-safety-critical software is cheap when compared to building software that has been verified to virtually never fail.


Add country to the equation, and you can easily add another 20-50 years into the already 20+ you mention.

Take India, for example.

-- The infrastructure is poor.

-- Drivers ignore rules to suit their convenience.

-- A lot of people depend on driving for their livelihood. Union minister Nitin Gadkari has already made it clear that they will not let tech that affects the livelihood of so many people come in and take jobs. Most governments that rely on the vote bank of the poorer sections of society will not dare to move against this mandate.

-- The preferred and most popular mode of transportation is still two-wheelers. I don't see anyone building self-driving two-wheelers. That rules out almost half of the Indian population's transportation.

I don't think India will be ready for another 50 years to have fully self-driving vehicles.


The poor infrastructure could work to their advantage, they could build it to suit new applications. In the West we have a huge legacy infrastructure that is hard to deal with. It’s somewhat similar to the way Africa skipped traditional telecommunications and went straight to mobile.


Might be possible in less dense cities.

Congested cities like Bangalore are reeling under bad city planning. Adding self-driving vehicles is not going to be of much help.

The govt does not have money to spend on road repairs. Corruption at the contract level is rampant. Putting one and one together, I doubt India will use it to their advantage.


China, on the other hand, has horrible traffic problems and nowhere to build new roads in its most dense cities. They also have an autocratic government that can say "no more manually driven cars on or inside the 4th Ring Road" to optimize road usage.


But would autonomous cars be that much more efficient? You may get some efficiencies, but I doubt they can solve the problem by themselves.


In my city in Europe, my average speed is 23 km/h but I rarely go less than 40, more often 60-70 - meaning that I wait a lot, and I'm alone in the car. Yes, shared autonomous cars would help my city a lot.


I didn't realise autonomous vehicles make the roads wider.


Maybe TomMarius means that autonomous vehicles could negotiate intersections without stopping. No traffic lights required, only rules of precedence. It's going to be a slow procession but maybe the average speed would increase.

I'm not sure how that would mix with pedestrian crossings. In some dense areas there would be a non-stop flow of people walking across the street, and cars wouldn't be able to move.


The cars can work together to reduce following distances, move off faster in tandem when a light turns green, and so on. They virtually make the road much wider by eliminating human inefficiency in driving.
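As a rough sketch of that "virtually wider" claim: a lane's throughput is governed mostly by the time gap each car keeps to the one ahead, so tighter machine-kept gaps translate directly into more vehicles per hour. The headway and car-length values below are illustrative assumptions, not figures from the thread:

```python
def lane_capacity_vph(speed_kmh: float, headway_s: float, car_len_m: float = 4.5) -> float:
    """Vehicles per hour one lane can carry at a steady speed,
    given the time gap each car keeps to the car ahead."""
    v = speed_kmh / 3.6                    # speed in m/s
    spacing_m = v * headway_s + car_len_m  # nose-to-nose spacing between cars
    return 3600 * v / spacing_m

# Human-like ~1.8 s headway vs. a hypothetical 0.6 s platooning headway, at 50 km/h:
print(f"human:   {lane_capacity_vph(50, 1.8):.0f} veh/h")
print(f"platoon: {lane_capacity_vph(50, 0.6):.0f} veh/h")
```

Under these assumptions one lane carries more than twice the traffic, which is the sense in which coordinated cars "widen" the road without moving the kerbs.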


Actually, they do. There are streets full of parked, empty cars; moving cars are at best half of them.

Plus a reduction in the number of cars on the road to roughly one third.


I don’t disagree, but I did want to point this out: https://www.cnet.com/google-amp/news/ces-2019-bmw-self-ridin...


In the article, they say that it's not a self-driving two-wheeler, but rather rider assistance.

Couple that with three people sitting on the bike (quite common in India), and I doubt this will work in my lifetime!


Driving is a social activity. You’re interacting with other humans, with only their behavior (and maybe the occasional gesture) from which to infer intent.

From that perspective, think of what a computer has to “know” in order to get along. It’s way more complex than just following the road and not hitting things.

Maybe the best chance for a fully autonomous driving experience is some city that takes the next step beyond congestion pricing and reserves their dense core for autonomous vehicles. If computers only have to deal with other computers, and pedestrians, that seems a much more tractable problem.


And cyclists. And animals. And malfunctioning autonomous vehicles. And road debris. And law enforcement activity. Etc.

Reserving city centers for autonomous vehicles doesn't really make it much easier to solve the collision avoidance problem.


City centers should not allow cars at all.


The only advantage inner cities have are lower speeds, otherwise they're full of unpredictable humans, just not in cars.

I'd say if we ever do get self-driving cars, they'll be on motorways first; there are just so many fewer edge cases.


> they'll be on motorways first; there are just so many fewer edge cases.

Not from what I've seen. Heavy construction, sudden gridlock with huge lane speed differentials, multi-vehicle collisions, road debris, heavy trucks, severe inclement weather, high speeds, gore points; these things combine to make for an extremely challenging and dangerous driving environment.

An autonomous car driving through a residential neighbourhood is moving slowly and can stop if a child chases a basketball onto the road. On the freeway, when you're going over 100 km/h, the car simply can't stop if a car from the next lane spins out in front of you.
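The speed/stopping-distance asymmetry is easy to quantify. A back-of-envelope sketch, assuming ~7 m/s² deceleration (roughly dry pavement) and a 0.2 s sensing/actuation delay (both illustrative numbers, not from the thread):

```python
def stopping_distance_m(speed_kmh: float, decel_mps2: float = 7.0,
                        latency_s: float = 0.2) -> float:
    """Distance travelled from hazard detection to a full stop,
    assuming constant deceleration after a fixed reaction latency."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * latency_s + v * v / (2 * decel_mps2)

print(f"30 km/h:  {stopping_distance_m(30):.1f} m")   # residential speed
print(f"100 km/h: {stopping_distance_m(100):.1f} m")  # freeway speed
```

Because the braking term grows with the square of speed, the freeway stop needs roughly nine times the distance of the residential one, which is why the basketball scenario and the spin-out scenario are such different problems.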


> On the freeway, when you're going over 100km/h, the car simply can't stop if one from the next lane spins out in front of you.

Neither can you.

In fact, an autonomous car is much more likely to be able to react fast and well to save lives and property in that situation than a human.

Self-driving cars don't need to be absolutely 100% perfect and accident-free to be worthwhile. They just need to be better than us, and frankly, we're pretty lousy at operating tons of metal flying along the road at over 100km/h. Hell, we're not even that great at it when going 50 km/h.


Humans have the advantage of understanding context well enough to (try to) avoid being in those situations, though. You may not be able to avoid a car suddenly spinning out right in front of you, but you may have realized seven minutes ago that the driver kept drifting outside his lane and backed off to give him a bigger gap; getting autonomous cars up to that level would require strong AI tech.


Not at all. That level of pattern recognition is much easier than the actual hard problems involved in getting driverless cars working.

The hard problems, right now, are things like recognizing where the road is when it (and everything beside it) is under an inch and a half of snow, or where you're supposed to drive when there's construction and the lanes are shifted (assuming no new standard means are developed to indicate this), or how to recognize, and what to do, when the road you want to take is under a foot of rushing water—or washed out altogether.

This is the thing that continues to baffle me. There are genuine, hard problems between where we are now and a nation of completely autonomous cars. But every time there's a discussion about it, even on a site like Hacker News where people should have the background to recognize the difference, most of the problems people bring up are the kind that self-driving cars either are now or can fairly easily become good at dealing with.


> just need to be better than us

People say this, but I suspect that "epsilon better" is not enough. Personally, to give up my (fallible) agency, I would require an order of magnitude better, perhaps two.


My dream - which I'm sure I won't live long enough to see come to fruition - is that manual driving is BANNED.

That's right. Some day I hope, and believe, that the average consumer actually won't be able to manually operate their vehicle (edge cases aside). I love to drive, but there are some true idiots we share the road with. I'd gladly give up my right to drive if it meant those idiots also lost theirs. Manual driving would require a specialized license, which would mean passing some stringent checks.

So that's how I can see it being dealt with in the future. But the early stages? That's a very interesting question and one which I don't have any idea for.


I think it would be just as easy for the law to turn the other way and say you must have a driver competent to take control of the autonomous car at all times for it to be allowed on the road. This negates the pipe dreams of empty cars appearing to scuttle people away, sleeping on your way to work, taking one home blackout drunk, etc., but it is perfectly in the context of safety-first auto regulations. Road laws are made to favor safety above all else; it's why highways are still capped at 70 mph across most of the U.S. despite cars being much safer at speed than when these highways were built in the 1950s.


While I fear you are correct, I enjoy driving. Taking away the hobby I enjoy, and that my family has enjoyed for generations, will not be done with my consent, nor the consent of many of the car clubs, often populated by those who own the companies that will enact such measures. I respect your vision, but you have a fight ahead. I think an alternate type of transportation than cars is the future. Most current emissions are heat and brake dust, and autonomous cars are only marginally better there.


You could apply the same argument against guns.


The corollary would be that you can only use guns on ranges and private property, which is basically true for much of the world.


That's too US-specific.

And the parallel works only if you change the primary purpose of cars to killing.


Have you looked at the ownership to death stats?

One of these items is killing people...others are not.

Also, you've conflated the use case of "killing people" with the use case of "defending oneself".


The CEO of Aurora (who previously led self-driving cars at Google) agrees with you...

https://www.theverge.com/2019/4/23/18512618/how-long-will-it...


> Honestly, I think we're still 20+ years away.

It is hard to be so pessimistic when you see cars driving autonomously daily.

> How exactly does an autonomous car deal with that aspect of human nature?

Aren’t we already collecting lots of data about this?


That must have been what people thought in 1994/1995 at the final presentation of PROMETHEUS:

"The first was the final presentation of the PROMETHEUS project in October 1994 on Autoroute 1 near the airport Charles-de-Gaulle in Paris. With guests on board, the twin vehicles of Daimler-Benz (VITA-2) and UniBwM (VaMP) drove more than 1,000 kilometres (620 mi) on the three-lane highway in standard heavy traffic at speeds up to 130 kilometres per hour (81 mph). Driving in free lanes, convoy driving with distance keeping depending on speed, and lane changes left and right with autonomous passing have been demonstrated; the latter required interpreting the road scene also in the rear hemisphere. Two cameras with different focal lengths for each hemisphere have been used in parallel for this purpose."[1]

How could you not expect all cars to drive autonomously in the year 2000 at the latest if we were already so far? Everything else was just a bit of doing, right? Unfortunately, sometimes we don't know what we don't know, so things may seem "just around the corner" when they are many years away.

[1] https://en.wikipedia.org/wiki/Ernst_Dickmanns



> How exactly does an autonomous car deal with that aspect of human nature?

Sure. The way they could handle it is to drive safely enough that it doesn't matter whether this happens or not.

Having the car in front of you hit the brakes really hard is of no consequence if you were maintaining a safe follow distance to begin with. Let the human slam on the brakes. The self driving car will have been driving the speed limit, at a safe distance in the first place.


The other week I drove a hire car with "intelligent cruise control" which, as you'd expect, tried to keep a safe following distance from the car in front.

The problem was, the cruise control following distance was not the ~2 seconds recommended by my country's highway code, but more like ~3.5 seconds. Other drivers would interpret that as an invitation to merge, and my car would slow down further to restore its ~3.5 second following distance.

I don't think you can drive in such a way that other drivers' behaviour is of no consequence.
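For a sense of the scale involved, a time-based following rule is simple to convert into metres; at motorway speed, the difference between a 2 s and a 3.5 s gap is a merge-sized hole. A quick sketch using the gap values from the comment above:

```python
def gap_metres(speed_kmh: float, gap_s: float) -> float:
    """Physical gap implied by a time-based following distance."""
    return speed_kmh / 3.6 * gap_s  # m/s times seconds of headway

for gap_s in (2.0, 3.5):
    print(f"{gap_s} s at 100 km/h -> {gap_metres(100, gap_s):.0f} m")
```

At 100 km/h the conservative setting leaves a ~97 m gap instead of ~56 m, exactly the kind of space other drivers read as an invitation to merge.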


There are plenty of cities in the U.S. where the local driving culture is so aggressive that if you are unwilling to cut someone off, you just aren't going to be able to merge at all or get through the intersection. That will undermine the brand if people are supposed to be commuting in these cars through rush hour and it takes twice as long because the cars are too passive.

It's common to merge onto a highway under speed, knowing you will be hammering the throttle and that the car in the right lane behind you will slow down. But could self-driving cars make that same judgement to merge, or would they stay in the on-ramp lane waiting for a nonexistent opening? And if they do make that judgement and other cars do let themselves be cut off, what happens the 1-in-1,000 time that this maneuver results in an accident and the self-driving company is brought to court? These companies are going to be held responsible for judgement calls that human drivers make every day. There's a highway near me where, to merge onto the next interchange, you have to cross four lanes of traffic in 0.25-0.5 miles. Could a self-driving car fight for that opening during rush hour? I don't think there would be a safe way to do it unless every single car on the road is self-driving, and that's just not possible when most drivers can barely afford $2000 cars from 30 years ago.


Ok, and why is that a problem?

Yes, people will merge ahead of you, and it will slightly slow down your journey.

That's fine, and falls under the definition of safe driving.

Let them merge ahead of you.


I originally typed out a reply saying that if I bought a sure-to-be-expensive self-driving car, I would be disappointed if it made my trip slower.

But then I thought that maybe if I wasn't the one fighting to get into lanes I wouldn't care that my trip ended up a couple of minutes longer, I would just relax and read a book and not even pay attention while all that was happening.

Not sure if that would work in reality though. In my city if a car that was that passive was recognized, people would just take total advantage of it and never let it in, cut it off, etc. I think that I would still get annoyed watching this as a passenger.

The car would have to make sudden stops to avoid accidents every time someone cut it off as well, which would be unpleasant for a passenger.


My firm belief is that we will have to design roads for self-driving cars before self-driving cars become the majority. Right now roads are approximately designed for human sensors; we need roads designed for machine sensors. If we started today, and during every repaving we built roads for machine sensors, we'd have the roadways where most miles are driven repaved in about a decade. I don't know exactly what it will look like. It might look like RFIDs embedded in roads, it might look like self-driving car lanes (similar to the carpool lanes of today), etc. Until this process of redesigning roads for machine sensors begins, I'm not really optimistic about self-driving cars. I think you're right about the 20+ mark.


Duplicating road networks around the world would be extremely expensive, if not physically impossible due to lack of room.


Not duplicating. Overlaying and eventually replacing. For a very, very long time the roads will need to be compatible with human and machine drivers.


I agree with you and I doubt we'll see self-driving cars before we see self-driving trains (like trams in cities).

The problem space is vastly simpler for self-driving trains: trains move on rails, rails do not move, trains/trams cannot move sideways at will, stations do not move, and so on.


Except the economies of scale aren't there to benefit much from replacing a train driver who is carrying multiple thousands of people per day.


Yeah, could be.

I thought it might be possible to make intracity rail traffic more efficient via an on-demand mini-tram service, given self-driving trams and computer-optimized routing. A kind of "packet routing", if packets were trams.


Reply like a human or animal. Bark/swear and chase them?


Let's give the autonomous cars crowbars or, for the Americans amongst us - guns, while we're at it :)))


>> “It’s really, really hard,” Krafcik said during a live-streamed tech conference. “You don’t know what you don’t know until you’re actually in there and trying to do things.”

Now, where was that Rodney Brooks interview? Ah, here it is:

https://cdn.arstechnica.net/wp-content/uploads/2018/06/2nd-P...

>> Rob Reid: (...) Why do you think these people and others are overestimating the rate of development in this field?

>> Rodney Brooks: I think they're making a bunch of mistakes. I asked them when did the first car drive down a freeway for 10 miles at 50 miles an hour. They know that the Google cars did that in 2004 or 2005. It was actually done in Germany in 1987.

>> Rob Reid: Wow.

>> Rodney Brooks: When are we going to get the first car, hands off the steering wheel, feet off the pedals, drive coast to coast in the US? Yeah, well, it actually happened in 1995 with the Navlab Project from Carnegie Mellon University. My point is, everyone thinks, oh this is just a [inaudible 00:44:50], this is going to happen quickly. It's been around a long time to get to where we are. I have now demonstrated to them that their scale is wrong, their start point is wrong. It has taken a lot longer to get to where we are now.

So, to anyone who had any understanding of the technology it must have been obvious for a long, long time that "it's really, really hard" and that it wouldn't just take a few years after the time that Google announced the Google car. I struggle to believe that Krafcik is not one of those people, or that major automotive company CEOs are not.

Basically, those people have been bullshitting everyone and they're still bullshitting everyone, and they were probably not even interested in developing self-driving cars in the first place, it was all just some stupid marketing game supported by a very excitable tech press and a gullible public.


Rodney Brooks is misleading. He says things like "It was actually done in Germany in 1987" but neglects to mention that the road had no other cars on it.[1] He also claims that the first autonomous coast-to-coast drive happened in 1995, but in fact the project was autonomously controlling only the steering, not the gas or brakes, and 150 miles of the trip were driven by humans.[2]

Modern autonomous vehicles are much more impressive. They were successfully navigating urban environments 12 years ago.[3] Waymo's cars have driven over 10 million miles and disengagement rate is once every 11,000 miles.[4] We're at the point where no major breakthroughs are needed, just incremental improvements. That's what's different from earlier eras of autonomous vehicles.

1. https://en.wikipedia.org/wiki/History_of_self-driving_cars#1... has this quote:

> In the 1980s, a vision-guided Mercedes-Benz robotic van, designed by Ernst Dickmanns and his team at the Bundeswehr University Munich in Munich, Germany, achieved a speed of 39 miles per hour (63 km/h) on streets without traffic.

2. http://www.cs.cmu.edu/afs/cs/usr/tjochem/www/nhaa/nhaa_home_...

3. https://en.wikipedia.org/wiki/DARPA_Grand_Challenge_(2007)

4. https://medium.com/waymo/an-update-on-waymo-disengagements-i...


Think about what that means. Waymo has been doing this for many years, drives in basically ideal conditions (Phoenix rather than Philadelphia), and still disengages at a rate that amounts to once a year for a typical personal vehicle. That’s not enough for “real self driving” because at that disengagement rate the human must be actively engaged the whole time. Human drivers go about 500,000 miles between crashes. (And that’s not 500,000 miles driving through Phoenix. That includes miles driven through places like DC where freeways have no acceleration lanes. That includes drunk drivers and teen drivers. You can’t control the other people on the road, but if you don’t yourself drive distracted, drunk, speed, etc., I’d bet you can expect to go at least a million miles between crashes.) Disengagement rates would have to improve by a factor of 50 to allow a human to not be paying attention at all times while achieving an acceptable level of safety.
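The factor-of-50 claim is essentially the ratio of the two intervals, worst-cased by treating every disengagement as a would-be crash. A quick check using the figures in the comment:

```python
human_crash_interval_mi = 500_000     # attentive-human crash interval, per the comment
waymo_disengage_interval_mi = 11_000  # reported Waymo disengagement interval

# If every disengagement were a crash, this is the gap left to close:
improvement_needed = human_crash_interval_mi / waymo_disengage_interval_mi
print(f"{improvement_needed:.0f}x improvement needed")
```

That works out to roughly 45x, in line with the comment's "factor of 50"; using the commenter's million-mile figure for careful drivers would double it.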


A disengagement doesn't necessarily mean that the car would have crashed. There are amusing disengagement stories such as a cyclist doing a track stand, causing the car to get stuck.[1] Many disengagements are false positives. Think of how often you've had to do the equivalent to human drivers by telling them to slow down or watch out. For me it's certainly more often than every 11,000 miles.

Given that humans aren't great drivers, you'd think that after 10 million miles Waymo would be at fault for some crashes. But only one of Waymo's crashes was even partially their fault. One of their cars was moving at 2mph to get around some sandbags in the road and hit a muni bus that was trying to squeeze by at 15mph.[2] Waymo has since tweaked their software to account for aggressive bus drivers. That's over 10 million miles and only one collision that was even partially their fault. That sounds pretty safe to me.

And regarding weather: Though their pilot program is in Phoenix, Waymo doesn't just drive in places with nice weather. They've been testing in Michigan for the past 2 winters.

1. https://forums.roadbikereview.com/general-cycling-discussion...

2. https://en.wikipedia.org/wiki/Waymo#Crashes


A disengagement generally measures when a human test driver had to take control. It’s not just telling a human driver to watch out—it’s taking the wheel from them. They wouldn’t necessarily lead to a crash, but there is a pretty good chance they would. If even 10% of disengagements would’ve led to a collision, you’re still not close to a good human driver.

One crash in 10 million miles isn’t as great as it sounds. First, it’s a meaningless number because a human is intervening every 11,000 miles. It’s not a true measurement of what a purely autonomous collision rate would be. Second, humans crash once in every 500,000 miles, and that’s under the full gamut of circumstances (drunk drivers, unfamiliar roads, teenagers, etc). Waymo is running with trained drivers on thoroughly mapped test areas in a place with easy traffic and weather. You’d expect humans doing nothing but running the same routes over and over in that carefully geofenced area to do better than one collision per 500,000 miles. (Especially with someone looking over their shoulder, like the self driving car is doing!)
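The 10% assumption can be turned into an implied crash interval directly. A sketch using the comment's numbers; the 10% figure itself is the commenter's guess, not data:

```python
disengage_interval_mi = 11_000
crash_fraction = 0.10   # assumed share of disengagements that would have crashed
human_crash_interval_mi = 500_000

implied_crash_interval_mi = disengage_interval_mi / crash_fraction
print(f"implied: one crash per {implied_crash_interval_mi:,.0f} miles")
print(f"shortfall vs. humans: {human_crash_interval_mi / implied_crash_interval_mi:.1f}x")
```

Under that assumption the cars would crash roughly every 110,000 miles, about 4.5x short of the human baseline quoted above.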

If you look at the improvement in disengagement rates it’s not particularly compelling: https://cdn-images-1.medium.com/max/1600/1*oX-ykZtxiMiuguSow....

They’ve improved by a factor of 6-7 since 2015, but most of that was 2015-2016. That suggests the pace of improvement is getting slower.


>> A disengagement doesn't necessarily mean that the car would have crashed.

Conversely, lack of disengagement doesn't mean that the car is driving safely. There is simply no way to know how close to an accident a car came, without the human driver having to take control.

Like I say above, disengagements don't really tell us anything about the car's real world driving ability. They're just a silly proxy mandated by bureaucracy, and only a very weak measure of real progress.


Who drives 11000 miles in a month?


You’re right, once a year.


A coach driver.


Yes, of course there has been progress since the '80s and '90s. Things would be really bad otherwise.

>> Modern autonomous vehicles are much more impressive.

It really depends on what you consider impressive. The DARPA Grand Challenge involved one road loop with stunt drivers instead of real traffic. These are still strictly controlled, laboratory conditions that tell us nothing about the ability of robot cars to operate in the real world.

Waymo's disengagement rates don't really say anything, either. Perhaps Waymo is now driving its cars in easier conditions after noticing that they tended to disengage too often. What we know for sure is that Waymo doesn't have autonomous cars - and no one else does. If they did, they'd be out on the streets without safety drivers and counting autonomous miles, not miles without disengagement.

According to the post you link, reporting rate of disengagement is required, but the fact that Waymo chooses to advertise theirs as a measure of improvement of their cars tells me that they have no real results to show and instead choose to tout a meaningless proxy just to make people believe that they are further ahead on the road to autonomy than they really are.


> the fact that Waymo chooses to advertise theirs as a measure of improvement of their cars tells me that they have no real results to show

It tells me that's the weakest number they can possibly report. If I were Google and I knew I had the strongest ML teams in the world by miles, the strongest internal results by miles, I would say as little as possible for as long as possible. Get as far ahead as possible.

I think the Alphabet board learned their lesson on announcing early. They still shut down products, of course, so does Intel, Facebook, and every other company. And I think they're learning their lessons about sales (Cloud's hiring 10k sales people) and customer support (I'm sure they're painfully aware of the issues).


Btw, this is from the same source as your link no 1. above ("Prof. Schmidhuber's highlights of robot car history"):

>> 1995: UniBW Munich's fast Mercedes robot does 1000 autonomous miles on the highway - in traffic - no GPS!

>> Dickmanns' famous S-class car autonomously drives 1678 km on public Autobahns from Munich to Denmark and back, up to 158 km without human intervention, at up to 180 km/h, automatically passing other cars

A most impressive result that is easily the equal of modern results - but in 1995. That puts your claim that "incremental improvements" are all that's needed in perspective. Major breakthroughs are needed.


The 1995 result was nowhere close to what we have today. 158km was the maximum distance between disengagements. Waymo's average distance between disengagements is over 100x that, and they're going on more than just highways.


>> 158km was the maximum distance between disengagements.

That was 158 km doing 180 km/h on an autobahn with no upper speed limit and with unrestricted traffic.

Anyway, like I say in my other comment, Waymo's disengagements mean nothing because, unlike Dickmanns's autobahn experiment, there is no one watching the performance of their cars other than Waymo employees. As far as anyone can tell, their impressive disengagement record is the result of their cars being driven in the mildest, friendliest conditions possible. It certainly seems that way, taking into account where they drive their cars - in sunny, peaceful Phoenix, AZ, and then again, only on roads they've actually mapped.

Put these two things together and it's obvious that Dickmanns's experiments were run in as close to real-world conditions as possible, whereas Waymo is consistently keeping its cars in closed, controlled, simple environments that tell us nothing about their capability in the real world.


But they're still using easy environments. Not sure a German motorway without speed limits is easier to navigate than what Waymo faces.

I doubt that maximizing the miles between disengagements should be our goal. The goal should be for the car to face the worst conditions imaginable (snow, ice, dirt roads, other drivers ignoring traffic) and somehow manage to survive in those situations.


A German motorway without traffic is very easy to navigate, but a typical day to day situation involves several kinds of cars driving at different speed limits and engaging in various maneuvers like overtaking, exiting, merging, switching lanes. There's also traffic jams or heavy traffic situations.

If one wants to drive dynamically, one has to overtake and switch lanes quite a lot, which makes this challenging. If one is content with driving like a snail they can stick to the first lane, which is pretty simple and could be managed even by a so-called self-driving car.


The usual roads are probably very easy; they have wide markings. The problem arises with repair works, which mess up the markings and lane widths in all possible ways. During summer they are very frequent, and I bet they were the cause of human interventions in those old experiments.


No problem, disengage there. The problem seems to be that there's no safe way to disengage.

Tesla crashes kill people because of this. (Ding ding, you have 1 second before impact.) Waymo probably wants to nail city driving first, because it's much lower speed, safer.


A limited access road is absolutely easier to navigate than a city. Once you can stay in one lane with sufficient following distance, you're done. The problem space is tiny.


If you have a lane. A gravel road with two way traffic (not that uncommon in rural areas) only works because drivers communicate their intentions (esp if one needs to wait at a point where the road is wider). They don't work by fixed rules.


https://en.wikipedia.org/wiki/Limited-access_road

I guess I should have said 'controlled access' but anything gravel is not in this category.


Or nothing is needed other than cranking up the safety factor. Let me read a book while driving and, if anything is out of the truly ordinary, start to slow down. Much better than what Tesla does (keep velocity and signal the human godspeed - what could go wrong!?)


I don't think his point is that autonomous vehicles haven't made any progress; rather, the rate of progress has been pretty slow if you adjust the start point to the 80s/90s rather than the last 10 years or so that the public usually thinks of. Of course, the rate of progress isn't necessarily linear, especially given the rapid advancement in machine learning, so there's no reason at all to assume any further progress from here on out will take just as long. Even so, I don't know if it has been adequately demonstrated that AVs perform as well as (if not better than) humans, given the vast, vast variety of unideal situations human drivers have been subjected to over the past century.


> given the rapid advancement in machine learning...

No trollin' - what do you see as the rapid advancements in machine learning? Like, say, in the last 2 years?


Your point reminds me a bit of Theranos, and how they were able to coast on a completely fraudulent but awe-inspiring idea.

It might not be related, but whenever I read stories like this, I'm reminded of the ways that tech is marketed in the world. It's impossible for the public to understand whether or not a certain technology is feasible, practical, or even possible, but marketing teams get an unbelievable amount of leeway when marketing products and services. And they get to do it without really telling you what data is collected and how it's used, etc., etc.

The power asymmetry between what tech companies are allowed to say versus the limited technical understanding of the public is brutal, and not getting any better.


Perhaps not the general public, but everyone in the clinical lab industry knew that there was no way that Theranos could achieve accurate results with a tiny blood sample. A single drop of blood extracted from just under the skin is not representative of the blood in the patient's circulatory system and contains too many contaminants. This is fairly straightforward biology and has been known for many years.


True. Still, Theranos happened and flew for quite a while. The problem here is that the smaller the population of people able to judge new tech (which is, generally speaking, getting smaller every day as tech becomes more specialized and advanced), the longer it takes for reality to catch up with the money.

Which also means that it is ny impossible to distinguish between a growth company with a solid product and one that is just trying to out run reality.


Absolutely agree.

Also, ny is actually ‘nigh’


That's why it looked wrong in the first place!


But I read on these very forums, by prominent posters, that the only reason we were skeptical of Theranos was that we didn't want to see a young woman succeed! What happened?


Having bold dreams is important too. Trying hard and failing is a perfectly useful data point. Pretty sure there would have been many naysayers 20 years ago, when presented with what is today's mobile technology.


I don't think they're the same thing though. Touchscreen phones with high speed internet were an unsurprising goal, even if the pace of smartphone adoption has been fantastic.

Driverless cars... unless there have been some fundamental breakthroughs in AI tech that allow for such a fully unsupervised system, it's hard to see how this can be a possibility.

Now, reducing the complexity far more... assisted driving tech is already pretty mainstream. Autonomous driving for predictable, long haul routes (inter-city trucks on US highways) is an exciting market too.


Everyone is wrong about long haul trucks, frankly. It will be easier to get fully autonomous cars than trucks. They're much longer and wider. They're at least twenty times heavier. They require much more room to maneuver, to start, and to stop. If they're involved in a crash, they cause a lot more damage.

Not to mention that there's nothing really predictable about a long route. Traffic, weather, construction, and every other variable is more likely to change over a longer route as well. It would be far better to focus on making autonomous cars first.


All those extra difficulties of long haul relative to cars are big challenges to humans because the scale is so far off from the baseline of our embodied intelligence, but not much of a difference to machines.

A human driver eyeballing a difficult curve will easily be half a meter or more off over the length of the trailer. But for a machine it does not really matter whether it plots a path for a Twizy through the LIDAR point cloud or for a semi-trailer, as long as the model is accurate. Humans have all their relevant sensors at a single point, awkwardly mapped to vehicle dimensions with mirrors and guesswork, therefore human driving gets worse with increasing vehicle size. A driving machine however can take input from all over its body; its skill does not deteriorate with an increase in vehicle size (arguably it might even improve, because a bigger vehicle can carry more computing and a wider range of sensors at the same fraction of total mass and cost). The bigger the rig, the easier it becomes for robots to compete with humans.


It's not that trucks will have it first, per se. It's that trucks spend a much larger fraction of their time on freeways, so they can get a much bigger benefit from a system that only works there.

Specifically on freeways, the difference in difficulty is pretty minor. The lanes are wide and you don't need to accelerate very rapidly.

> It would be far better to focus on making autonomous cars first.

Mu. The same system's going to be on both types of vehicle.


How about reusable rockets?


In 1999 we had all the basic ingredients for a modern smartphone. The components were too slow, expensive, heavy, power hungry, and unreliable to allow for building a viable consumer product but no one really claimed that improvements were impossible.


If my smartphone malfunctions I don’t die though.


My smartphone has issues sometimes: no mobile data, or no GPS, or a Bluetooth device fails to connect, or an outright crash. Not very often, perhaps once a month. Not too big a deal for a phone: I can simply reboot it.

If an autonomous car has issues even 1/10 as frequently, I won't ever trust it to transport me.


One of the things the article names is the challenges caused by snow, ice, etc. I realize I'm in the minority, but I actually think there's lots of conditions that we shouldn't be driving in -- conditions where the obscured lines of sight, slipperiness of the road, etc, actually mean that we should either be off the road entirely, or traveling at speeds far far below normal. I know this happens, but I honestly believe that it should be far more common -- so common that it would likely take a chunk out of the economy.

For instance you're driving on the freeway and it starts to rain -- can you really reduce your speed, and/or leave an extra "car length" between the car in front of you, so that you'll still have time to stop if needed? In my experience, no one else wants to slow down, so it's safer for me to keep up with the slowest traffic than to go what I would now deem a safe speed.


> One of the things the article names is the challenges caused by snow, ice, etc. I realize I'm in the minority, but I actually think there's lots of conditions that we shouldn't be driving in -- conditions where the obscured lines of sight, slipperiness of the road, etc, actually mean that we should either be off the road entirely, or traveling at speeds far far below normal.

I don't think snow, ice and rain will be as problematic as predicted for self-driving cars. Altogether I expect them to perform much better than humans - more so than in clear conditions.

Self-driving cars have more sensors than humans - lidar, radar, cameras, possibly at multiple wavelengths, which should let them see through conditions which are difficult for us. And they can control braking, acceleration and perhaps steering on each wheel separately and faster than us, allowing better control in slippery conditions.

---

But what I expect the fundamental problem to be is risk vs speed trade-offs.

Humans routinely drive beyond safe speeds in all conditions. People tailgate and drive close to the speed limit in snow storms, in a way where stopping in time is impossible if the car in front brakes hard or an obstruction appears from the blur. People drive the city speed limit next to parked cars, unable to stop in time if a child steps out between them. People drive the city speed limit across blind intersections if the crossing traffic has stop signs, assuming that the crossing traffic will give way (but emergency vehicles don't have to give way, there was a bad crash just like this a few months ago in my city).

All these are relying on the prior probability of a dangerous situation occurring being low enough to drive at "beyond safe" limits.

So self-driving cars will have to choose to match the human behaviour, and accept occasional crashes - or appear infuriatingly slow.


They can stop much faster even on crowded streets, and while tailgating. I wouldn't mind if it was slow when it knows it cannot stop in time.


Absolutely agree. People don't realize how long it takes to stop your car when the roads are wet, and they keep going at high speed. I think the adjustable digital speed limit signs should be used more often to greatly reduce speed limits and discourage the standard 55mph+, since the coefficient of friction can be half or less of its dry value (not to mention the loss of visibility). http://hyperphysics.phy-astr.gsu.edu/hbase/Mechanics/frictir...
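The halved-friction point is easy to make concrete with the standard braking-distance formula d = v^2 / (2 * mu * g), ignoring reaction time. A quick sketch - the mu values here are illustrative assumptions (roughly 0.7 for dry asphalt, 0.35 for wet), not measurements:

```python
G = 9.81  # gravitational acceleration, m/s^2

def braking_distance_m(speed_mph: float, mu: float) -> float:
    """Braking distance d = v^2 / (2 * mu * g), ignoring driver reaction time."""
    v = speed_mph * 0.44704  # convert mph to m/s
    return v ** 2 / (2 * mu * G)

# mu ~0.7 dry asphalt vs ~0.35 wet (illustrative values)
for mph in (35, 55, 75):
    dry = braking_distance_m(mph, 0.7)
    wet = braking_distance_m(mph, 0.35)
    print(f"{mph} mph: dry {dry:.0f} m, wet {wet:.0f} m")
```

Since mu sits in the denominator, halving it exactly doubles the stopping distance at any speed - which is why even a modest speed reduction in rain buys back so much margin.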


As a Norwegian who drives in snow at least a few times a year, it’s funny and tragic to see videos of pileups in the US when there’s 5cm of snow on the road. It’s easy to drive safely in snow, assuming you have snow tires, but you do need to slow down a lot. Even here, the first snow of the year results in multiple accidents every year, before people learn to adjust.


In reality, the single thing you need to focus on when driving in snow is drastically reduced traction. You know to take corners slower, you know to keep longer distance, and you know to simply always think of the next point where you need to be stationary (or not, if on an incline). But you are right, even in countries that get snow every year, there is a week or so of adaptation before people relearn it.


> As a Norwegian

I think Scandinavia is a different story. A closed snow cover at, say, -5 degrees Celsius is quite safe compared to muddling zero-degree conditions with continuous freezing and thawing, producing ice surfaces or slush. The US has this in some states, the same way we have it in Central Europe (I am from Germany).


We’ve got this on the west coast where I’m from though :) The coastal climates are normally just a few degrees below zero. Granted, our snow-clearing teams are very effective, so there is rarely deep snow.


Note that the US is huge, there are parts that get just as much snow as Norway and others that never see a single flake.


Many northern US locations get much more snow than most of the inhabited parts of Norway. Most of the places where substantial numbers of people live don't get vast amounts of snow and anyway the roads are typically cleared and easily drivable before it's time to go to work.

One really big difference is that we use tyres specifically designed for low temperatures and snow.


Many road signs through Europe have an alternate speed limit for rain, I see them very often.


Where? I've never seen any. I've heard that Italy has rules for it but I've never seen any signs in the UK, Scandinavia, northern Germany, Netherlands, Belgium, or France.

I don't even know what it would look like.

Looks like I should brush up before this summer's road trip.


Standard 55? Aren't folks pushing 75 (or 100) on highways?


I am sure there are plenty of studies out there about this, that would be more useful than speculating. There’s an increased risk, but is it that much greater than typical driving risk? I’m sure more accidents happen in bad weather but I don’t think it’s anywhere near “no one should drive in the rain at night” level of panic you suggest.

But regardless of the extreme cases, there are plenty of scenarios in which human drivers may be slightly impaired, while current state-of-the-art autonomous technology is not just impaired but entirely helpless: simple, common road construction, missing or obscured lane markings, precipitation of any sort, malfunctioning stoplights, delivery trucks or dumpsters in the lane, negotiating a busy parking lot, any atypical road usage of any sort, roundabouts, dirt roads, unmapped roads, stationary objects...

Humans can handle all of these situations without special training or instructions.


I was thinking basically the same thing.

> Krafcik went on to say that the auto industry might never produce a car capable of driving at any time of year, in any weather, under any conditions. “Autonomy will always have some constraints,” he added.

I've driven I-95 in heavy rain and/or snow, when stopping at all -- let alone in time to avoid hitting another vehicle -- would have been ~impossible. Plus the fact that visibility was extremely limited.

And yet there we were, cruising along at highway speed. I recall thinking "Hey, it's cool, because nobody could stop." But then, there's the occasional mass pileup, after a semi jackknifes.


>> Krafcik went on to say that the auto industry might never produce a car capable of driving at any time of year, in any weather, under any conditions. “Autonomy will always have some constraints,” he added.

But who cares? If it can drive in the easy 90% of the time that will be enough of an achievement.


It seems like a good intermediate solution to this problem would be for autonomous vehicles to simply know when it's not possible for them to drive safely, and to require manual control in those scenarios.


It's a great solution safety-wise, but it turns all of this engineering into a colossal waste if it's just for glorified cruise control. You wouldn't be able to hail an empty car. You would have to be sober and awake. You would have to be staring at the road at all times with your hands on your knees, ready to take the helm at a moment's notice, because if you are reading a book or something and the car beeps, you might not have time to refocus and safely take control of the situation. This is a nightmare scenario for autonomous car investors, and a very likely scenario to appear out of safety-first traffic legislation.


It'd be a good start nevertheless.

The staring part is the problem, the safe disengagement basically. I don't mind if it starts to slow down and beeping because of some imagined or real obstacle. I don't care if it makes others look funny at me (or even honk). But beeping and keeping speed (you have ~1 second to do something to avoid a fatal crash) is ridiculous.


Based on what I hear from those in the know, the perception is that it's decades away instead of years. And an even more complicated problem isn't whether or not they can make it work with considerable constraints (geofence, no highways, avoid several spots that are hard to handle, weather, assistance via remote operation when stuck), but how to do it profitably. The cars are expensive. The maintenance is expensive. The ongoing cleaning, charging, and prepping of vehicles is expensive. Constantly updating HD maps is expensive. Of course the engineers to keep everything running smoothly are expensive. And the small army of people you need to actually talk to and help customers - something Google has basically never done before - is expensive.

I love the idea of self-driving cars and I really can't wait until they're ready. But I'd say that's going to happen about as quickly as changing the infrastructure in this country. Personally I'd rather have better public transit anyway.


Yes,

The thing about self-driving is you can get to 90% or 99% or even 99.9% success and then find the cost of closing part of that remaining sliver of non-success is the same as all the work you've done so far. Especially since testing those corner cases is hard, since they are unusual. There's even some part of the sliver that is "AI complete". Very small, but you don't know where it is, etc.
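To put numbers on that sliver: under a toy model where each mile succeeds independently with probability p, per-mile reliability compounds brutally over a trip. The rates below are hypothetical, just to show the shape of the curve:

```python
# Toy independence model: if each mile succeeds with probability p,
# the chance a whole trip has no disengagement decays exponentially.
def trip_success(p_per_mile: float, miles: int) -> float:
    return p_per_mile ** miles

for p in (0.90, 0.99, 0.999, 0.99999):
    print(f"p = {p}: a 100-mile trip completes cleanly "
          f"{trip_success(p, 100):.2%} of the time")
```

A 99%-per-mile system, which sounds nearly done, still fails roughly two-thirds of 100-mile trips; each additional "nine" costs as much as the last while moving the user-visible needle less and less.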

But I'd say that's going to happen about as quickly as changing the infrastructure in this country.

Key point!

Automation has traditionally been much, much easier when you are engineering a situation where you control all the variables. Automated factories are common. Automated warehouses are coming. Automated dock systems exist. In these situations, there's a hard border between "inside" and "outside". Roads are generally regular, but the border between inside and outside is fuzzy. Google can run its self-driving taxis in a suburb in Arizona where roads are as regular as could be imagined, but expanding beyond that is much, much harder than one imagines.

Note: I recommend this blog which periodically tracks progress in self-driving cars as well as making intelligent, critical comments on current AI.

https://blog.piekniewski.info/


Good points. And I'd add that most things we've automated are things that have been routinized in a way where people conform to the system, rather than the other way around. E.g., a ton of software development has basically been, "take this paper form and make a computer do the paperwork". The hard part was already done for us when everybody got trained to be subservient to the bureaucratic process. So societally we have this limited sense of how quickly computers can take something over.

Real-world things like cars and domestic robots are much harder, because those domains are ones where people come first. As you say, key elements could well be AI-complete.


Even when you control all of the variables, there are plenty of situations where the automated system just runs out of runway, and kicks the can to a human. Can't read the label on this package? Push it to the exception lane, and have a human figure it out, and relabel it. Works great, but is useless for a car, because when the car hits the exception, it's 10 miles away on a residential street, and it's not going to make it to you anymore.

For me, the most important thing a car can be is reliable, and every form of unattended driving cannot hit that level.


Sometimes the taxi you called never arrives. It happens regularly.

Of course the real problem is economic. Competing with the taxi service that uses humans is hard, because human labor is cheap.


>people you need to actually talk to and help customers - something Google has basically never done before

As a google advertiser this made me laugh.

At the same time I don't think expense will be a setback for autonomous vehicles. The reason being that if vehicles are fully autonomous there would be no reason for any individuals to own them. It would make more sense to have them operate like a taxi service. This is because without the costs of having to pay for a driver, it would be cheaper to just order an autonomous vehicle whenever you need to go anywhere than it would be to own your own vehicle.

Just from an efficiency standpoint it is terribly inefficient for each person to have a car. We're not driving most of the time, and the only reason we have our car sitting in the driveway is we need it to be available should we need to drive. With autonomous cars, my car could on its own drive across town to pick up other people when I'm not using it. The reason we can't do that today is because of the additional logistics and costs involved in transporting my unused car from my driveway to the driveway of the person across town who needs it. This problem would not exist in a world with autonomous vehicles.

To me, it seems likely that a world with autonomous cars would be a world where the majority of vehicles in the world are owned by services like uber.


I doubt it. Relying on taxi services (driven by humans or computers) might work fine for childless urbanites. But I keep a lot of stuff in my car, and there's no way I'm going to waste time loading and unloading a taxi every day. Plus my family likes having a nice clean car. I'm never going to give up convenience and time savings just to save a little money.


I fully agree with you. Another argument against this ride sharing that most people conveniently forget is peak times. How many millions of vehicles are standing still during office hours, only to power up when everyone needs to go home? How is a robotaxi service going to solve that? The number of cars on the roads at peak times won't go down much imo.


Peak times are already constrained by road capacity.

That said, just having smart buses cruising around, picking up people when and where they are and dropping them off at their destination, would be great. (After all, usually a lot of people want to go from similar A to similar B places at the same time.)


>> At the same time I don't think expense will be a setback for autonomous vehicles. The reason being that if vehicles are fully autonomous there would be no reason for any individuals to own them. It would make more sense to have them operate like a taxi service. This is because without the costs of having to pay for a driver, it would be cheaper to just order an autonomous vehicle whenever you need to go anywhere than it would be own your own vehicle.

I have a better idea that we can implement right now without waiting for autonomous driving to become a real technological capability.

We could make big cars with many seats and have them driven around town by humans- just a single human could drive a single vehicle capable of transporting 50, maybe 100 others. We could have those big cars stopping at predetermined points around the city, so that people would know where to get on and off. And we could support that service by asking the users to pay a small fee upon boarding the big car. This would cover the costs of the driver, and the big car, and still leave some to pay for infrastructure, etc.

These big cars would probably have relatively restricted areas of operation, but, for the foreseeable future, so will autonomous cars (which have to be geofenced) and, like I say, we already have the technology for those big-cars-with-many-people-sitting-in-them-and-a-single-driver that I'm talking about.

And since a whole bunch of people would be carried by one or two of those big cars per day, instead of each riding a single (autonomous or not) car, we could drastically reduce the number of automobiles-per-person, and, therefore, the CO2 emissions, thereby reducing pollution and protecting the environment to boot.

So? Anyone want to invest on that?


Getting off topic here but actually I would love to invest in transit. And judging from the kickstarter in the link below so would many other people.

https://www.cbc.ca/news/canada/toronto/crowd-funded-bus-take...

Unfortunately in my city (Toronto) and I would assume in many others it's illegal to run a private bus service.

https://www.cbc.ca/news/canada/toronto/liberty-village-shutt...

So we're stuck with the government monopoly, burdened with the enormous costs of public service transit unions, with some transit union workers making six figures for things like being bus drivers or ticket collectors. Then you have bus routes that are decided not by the numbers or business needs but by politics. Finally you have rude drivers and buses and trains that smell like urine. And you have buses and trains provided by special favored contractors that are technologically and mechanically less than what is available. My aging mother says it hurts her body to ride the bus because it doesn't provide a smooth comfortable ride.

So if we want funding into buses and trains we can start by dealing with the political quagmire that has made buses and trains inefficient and unfriendly to users. But the familiar government cry of "just keep throwing more money at it until the problem is solved" is likely not the best solution.


> transit union workers making six figures for things like being bus drivers or ticket collectors

You say that like it's a bad thing. We also have "tech workers making six figures for things like increasing engagement on social networks by 0.1%." What do you think about that?

> Finally you have rude drivers and buses and trains that smell like urine.

This would change right-quick if congestion pricing pushed wealthier people onto public transit.


I have utmost respect for anyone who can safely and responsibly drive a huge vehicle full of people, day in and day out, on cramped city streets with precision down to centimeters on some complicated turns and crossroads.

If anything, good bus drivers aren't paid enough.


Do you know what the disadvantages of your vehicles are? They don't take riders to their house. They don't allow riders to bring along any oddity of luggage. They aren't available at a moment's notice. They aren't a personal escape room when one is needed. I'm sure you know what kind of vehicle provides such comfort.

So take the high moral ground all you want, be snarky all you want, but the real world is a strange and messy place. The car has its place, along with bikes, scooters, planes, trains. Until the teleporter arrives.


People who rely on public transit adapt to public transit.

For example, they become more savvy about how much stuff they own and what they carry around.


I adapt by taking Uber/Lyft instead. I got fed up with crazy people threatening and attacking me on public transportation. Thankfully, I'm fortunate enough that I can afford to be driven around. I hope that in the future such a pleasant experience can be made available to everyone in the form of autonomous vehicles.


>> I got fed up with crazy people threatening and attacking me on public transportation.

As usual, this seems to be very different between EU and US citizens. I've been using public transport of all possible types (trains, boats, busses, trams...) in more than half a dozen EU countries since I was a wee bairn and I've never been threatened or attacked on any of them. Like, not even close. The absolutely worst thing that has ever happened to me on public transport is some guy puking his guts out on a London bus on a Saturday night. And that was just once. And it was more like "what an idiot" than "oh god I'm scared".

Which tells me that this is nothing to do with public transportation, as a means of transport, but with - other issues, for example various social issues that are particular in the US.

Or are people just exaggerating the misery of taking public transport in the US? I struggle to believe it's really as bad as you say, for instance. Public transport descriptions by US citizens always sound a bit like The Warriors to me, and a bit unrealistic, to be honest.


Taking a rider to their house sounds like a luxury, not a real necessity. Same for having the car available at a moment's notice, except for emergency situations, for which there are emergency vehicles.

Carrying luggage on public transport is perfectly possible and for larger or more items it's always possible to hire a van.

I don't understand the "personal escape room" comment. Escape- from what? In a car? How?

And note that even if all of the above were true limitations of public transport, the benefits would still significantly outweigh them. Just the reduced emissions from multiple people using the same vehicle instead of each their personal vehicle, would be (are!) an enormous advantage.

The fact that everyone has their own car and drives it every day has caused serious problems, not least of which is environmental destruction on an unprecedented scale from the necessary infrastructure and emissions- and that's before we count the number of deaths from accidents and pollution. We need technologies and solutions that take more cars off the roads- not ones that put more on them.


I think you're right in that mass transit is the best solution to transportation from a social and environmental standpoint. But the reality is that most people prefer cars. It's unrealistic to force people to use the socially optimum solution rather than their preferred solution. Many governments have tried, and many have failed, to force people to use the socially beneficial solution over their preferred one. A good example of this is the failure of carpool lanes.

Any system that caters to humans has to account for what people want. The way to make people take transit is to make transit comparable to cars - in terms of speed, reliability etc. And to make it better now rather than holding out some promise that it will be better at some future date when enough people take transit. People will take transit now if its better now, not if it promises to be better in the future.

I mean at the end of the day we can't force or guilt people into taking our preferred solution. We have to create a solution that caters to them. People will do their part for society and the environment - recycling programs are proof of this. You just have to create a solution that's not too much of an inconvenience. People would take transit more if it was cleaner, faster and more reliable. Rather than pouring more money into it hoping it gets better, or goading more people to take it under the promise it will get better, I don't think it's unreasonable to ask to make current systems more user-friendly, and then increase funding as service improves and ridership increases.


> It's unrealistic to force people to use the socially optimum solution rather than their preferred solution.

It is perfectly realistic and has happened - like when schools were desegregated, much to the dismay of white people in the south, who abandoned the Democratic party as a result. The hard part is getting a law passed and not diluted along the way by people who refuse to accept inevitable change. Making transit comparable to and better than cars is the goal of every transit agency in the world; the difficulty comes from getting people to vote for funding and approve construction projects. Los Angeles is pointed to as a slow, meandering failure of a transit build-out, but the only reason it is a slow, meandering failure is that the people who would benefit from transit the most do not vote, and the people who do vote care about their view or have a preconceived and bigoted notion of who rides public transit, latching on to pseudoscience along the way to turn their racist worldview into political action (underground subway construction was banned in LA from the 90s until a couple of years ago due to an unfounded fear of blowing up the city, for example, but the real reason was that the concerned citizens of Beverly Hills felt they might see more black or brown people on their stretch of Sunset Boulevard).


>> It's unrealistic to force people to use the socially optimum solution rather than their preferred solution.

That sounds ...wrong? You might as well say that it's unrealistic to force people to pay taxes. If the benefits of using public transport outweigh the disadvantages, people will just have to get used to the idea.

This has certainly worked in many other cases- for example, with rules about smoking indoors etc. I'm sure that smokers prefer to smoke indoors. But there's good reasons not to, and a very strong push with fines and all to not do it, so they just have to suck it up and conform. Sad face.


For many people, driving is one of the few times one can be completely alone with one's thoughts. I think I'd lose my mind without being unplugged for an hour or so a day.

Hunting and fishing isn't bad either, but I've got to get up at 4:30 these days to get out there before the turkeys fly off their roosts...


> I don't understand the "personal escape room" comment. Escape- from what? In a car? How?

My car is my second home, one that I can bring with me anywhere. It's why I prefer to drive long distances rather than fly. It even saves on hotel costs in the summer.

It's a place to escape from "out there". When I worked in an office, I would take my car to get lunch, then spend the whole hour absorbed in my thoughts or reading a book, away from my co-workers.

I'm a bit of a loner, but I can say that I've observed similar behaviour in others. After a many-hour hike in the forest, getting back to the car in the evening is a relief. It's comfortable, familiar, and safe.

Sure, I could get used to public transit. But as long as I don't live in a dense city, I'll keep my car, and love it.


Door-to-door transport is an individual perk but a network-wide con. Imagine if American Airlines was tasked with bringing you to your door or hotel rather than just asking you to find your own way to the airport hub, or if there were no connecting flights at all. It's way simpler to just ask people to walk (or bike, or scooter, or skateboard) half a mile (10 minutes - I'd do it in a blizzard) than to serve all those random last-mile connections. Availability at a moment's notice is another con: better to have a ride every 10 minutes than to strain the network by serving minute-to-minute stops. Plus, I don't think I've ever had an Uber arrive within 10 minutes.

The last thing, with the luggage - well, try going to Portland, one of the only cities I've been to where you can ride a train from your downtown hotel to within ~100 yards of airport security in 20 minutes. People rarely check more than one bag, and you can strap your carry-on to the handle of your roller bag, and it isn't a strain on the system, as hard as this might be to believe. In Los Angeles, people find room to set up little potato chip shops on the trains, and there are people who manage to transport all their worldly possessions with them on the subways in NYC. And if you really needed to haul a lot of junk, you can rent a U-Haul box truck (or a tall van, or a pickup truck) for $20 and change plus gas for a local trip.


Buses are great but cars are 2x as quick to any destination more than a few km away (at least in Singapore). I don't own a car fwiw.


Buses really need to be grade-separated from traffic, but most cities are reluctant to take a lane out of a road to make it possible. I believe Rio de Janeiro has a huge BRT (bus rapid transit) network. Grade-separated, they are pretty much a subway with rubber wheels, and it's waaay cheaper to build even an elevated roadway than a tunnel with rails, which is more constrained in how sharply you can turn and what elevation changes your track needs to avoid. In Los Angeles, BRT buses are twice as long as well - all the more riders in one go.


You would need to split subway-like buses and less frequent suburban buses. Suburban buses need to navigate streets.


I wish they also did this for bicycles and e-mobility devices.


> but how to do it profitably. The cars are expensive. The maintenance is expensive. The ongoing cleaning, charging, and prepping of vehicles is expensive. Constantly updating HD maps is expensive. Of course the engineers to keep everything running smoothly are expensive. And the small army of people you need to actually talk to and help customer

I believe that only applies to the Waymo model, where you go with a full-time robo-taxi using top-of-the-line hardware like lidar, which pretty much requires level 5.

The Tesla model is to use cheaper hardware, which they have proven is cheap enough that consumers will pay for it. I believe they also only need it to be level 3 [0] for it to be a massive success (I know a ton of people who would pay $6,000 for the FSD upgrade if it allowed them to watch a movie/work/text during their commute most of the time).

Granted, Tesla's tech has a lot of catching up to do, but I wouldn't count their solution to the profitability problem out, since they have a lot more cars on the road collecting data and testing their algorithms. And I'm guessing they have a solution to the imperfect-tech-that-ends-up-killing-people problem: selling their own auto insurance (which they have already announced plans for, and which will probably have special conditions for Autopilot failures), plus lots of cars running their quasi-level-2 tech, which they can use to lobby congressmen to limit suability during Autopilot failures.

But, as with all big tech predictions, who knows.

[0]: Level 3 ("eyes off"): The driver can safely turn their attention away from the driving tasks, e.g. the driver can text or watch a movie. The vehicle will handle situations that call for an immediate response, like emergency braking. The driver must still be prepared to intervene within some limited time, specified by the manufacturer, when called upon by the vehicle to do so: https://en.wikipedia.org/wiki/Self-driving_car#Levels_of_dri...


Pretty much every manufacturer is already selling tech similar to Tesla, they just don't call it autopilot but driving assistance. If that's our goal we could just stop researching self driving cars and put the money into EVs instead.


It doesn't seem like Tesla knows how to make it profitable either, to be honest. Their current plan for profitability appears to be the million robo-taxi fleet by 2020 rather than the auto manufacturing business.


I think it would seem insane for them to present this opportunity to make a ton of money (robotaxis), and not fully commit to it.

I mean that they are positioning themselves as if they truly believe this is going to happen.


What other options do they have? It seems like they lost the bet that they were going to become profitable with the Model 3. Now they need a new big bet, or it all falls down.


They have to stop pretending they are a luxury brand and release a cheap tiny little box with knobs instead of an iPad, one that will compete with the Leaf or any other subcompact. The price is just too high right now, and if they can't get it lower, then it sucks that they entered this market too early to benefit from the economies of scale on EV tech that might come years later. Any other automaker has other revenue streams and could wait decades for this to happen, but Tesla has 10 months to live.


> They have to stop pretending they are a luxury brand and release a cheap tiny little box with knobs instead of an iPad that will compete with the Leaf or any other subcompact.

Believe it or not, the screens are cheaper. Other companies (most notably Volkswagen) are moving towards entirely digital interiors as well. They cite reasons for it being a better experience, despite decades of research pointing out the importance of tactile controls while driving to limit distraction, but most people believe it is because screens are cheaper to build and service.


A screen is cheaper than a knob? I can see it easier to remove a screen vs. a knob, but how often does a knob go bad?


But there is no way that they can get the from-scratch design, engineering and production line of yet another model up to capacity in less than a year. And everyone knows this. OTOH, making bets on AI are much harder for investors to evaluate the reasonableness of.


It does not seem like that to me.


One thing missing from this discussion is the different safety and threat model fully autonomous cars have compared to human drivers.

Mass incidents are one of them.

* If one autonomous car fails catastrophically on some road, all other cars with the same model and/or software are likely to do the same. A huge pileup in fast traffic could become a nightmare for the carmaker/operator.

* If there is a software error in an overnight update, or planned sabotage, hundreds of thousands of cars could get into accidents in a very short time before the failure is noticed.

* A simple loss of trust due to a security incident, the discovery of an exploit, a server configuration error, or a network breach could lead to security groundings of millions of cars at once. If all Toyotas are grounded for a week, the economic impact around the world is huge.

Even if the autonomous driving works, designing the security and safety infrastructure that keeps it working will be a real challenge. I'm sure it will be worked out, but it's not going to happen in a few years.


The amazing thing to me is that I don't know anybody who thought it was coming in the next couple of years who based their estimate on anything but hope and magic.

Of course, that's nothing new. We've all been on software projects where the launch date was the earliest day nobody could prove it was impossible to finish. But I'd like to think that investors dropping zillions on companies would have had better due diligence.


> The amazing thing to me is that I don't [know] anybody who thought it was coming in the next couple years who based their estimate on anything but hope and magic.

I'd agree that a bit of the proper sort of analysis would show that this wasn't easily happening. But I think the kind of thinking that imagines this as possible is extremely easy to fall into.

> We've all been on software projects where the launch date was the earliest day nobody could prove it was impossible to finish.

Sure and I'm pretty sure a lot of software engineers have given or agreed to completely unrealistic estimates after having had this experience. If anything, the "software crisis", the inability of software engineers to provide good estimates even after hard experience and warnings is an expression that over-optimistic thinking is natural for human beings in this situation.

My theory is that in situations where a human is asked about the difficulty of a mental task, they can essentially only fall back on a basic language-expression of that task and rationally try to calculate with it. The brilliance and the limitation of us language-generating, language-using creatures is the ability to correctly distill a complex and messy task into a fairly simple sequence of symbols. But that correctness actually exists in a complex and dynamic context, most of which we screen out. A command like "Drive to the store" doesn't take into account all the compensations needed to do the driving, just as "Create an inventory system" fails to specify all the client-specific "gotchas" involved in such a task.

So going from our normal language-distillation process to a complete software system that doesn't have the continual dynamic compensation of our intelligence is an inherently difficult problem - even though we can probably only do it at all because of our "language facility".


> the inability of software engineers to provide good estimates

I don't think it's inability. I think they're embedded in a system that doesn't encourage or reward honesty or accuracy.

I agree it's easy for people to say, "Gosh, how hard can that be?" Which is where we get the classic, "I could build that in a week" estimate. But I don't agree that there's something inevitable about people going from a finger-in-the-air SWAG to a "we'll have self-driving cars in 2021" business plan.

Normally I'd say that it's a straight up failure of both management techniques and the professional standards of the engineers. Which is true. But I think the problem is that a lot of fundraising in the "Uber for X" era is only slightly less of a con than Theranos. So I think it's more correctly seen as a failure of founders and VCs.


Sure,

At some point, when you're talking about a concrete and complete plan, you likely go from naive self-deception to the willful deception of others.

Of course, one should consider how people's tendency to naive self-deception makes things easier for willful deceivers.

As a scheme for selling an impossible dream takes shape, I suspect the actors have a strange and contradictory mindset. A rational conman is going to sell a smallish scam and vanish. The folks running Theranos certainly deceived many, but they also rode the train far past the point where they could avoid being caught in the wreck (so they wound up ruined, looking at jail time, etc.). Maybe they believed that if they kept the charade going long enough they could make the original scheme work, but they could also have a sort of thinking that simply doesn't look at the possibility of failure once they have decided to seek success.


I fear the unavoidable cost is going to ruin the model. The advantage of these things is that they are able to work as a unified network - keeping space, allowing space, preventing the pileups that come from simply driving too close, etc.

What about all the 30-year-old $2000 Corollas you see on the road? Are those people expected to shell out $50k+ to get a self-driving car? Absolutely not. There will always be people who can only afford a 10-, 20-, or 30-year-old car. What happens when they own a 30-year-old self-driving car, outdated for decades and, based on how tech companies operate today, probably unpatched and unsupported for 20 years?

Self-driving would really shine as a self-driving bus in a dedicated lane. Single-occupant vehicles are irresponsible in cities that are choked with too many of them, but there is no alternative if all you can afford is a $2000 Corolla and rail connections to work are nonexistent. But then again, a self-driving bus is much less sexy to investors than a self-driving car...


That sounds like one big company doesn't know how to do it faster by keeping doing the same things they're doing now.


>Constantly updating HD maps is expensive.

Do any of the players in this space use HD maps?


Waymo.


Seems like autonomous personal air travel would be a far, far simpler problem to solve than road-based driving? Far fewer edge cases: once you are in the air, the only obstacles you really need to worry about are other vehicles, birds, leaves and the odd stray plastic bag. With VTOL the whole thing could be an order of magnitude easier than driving. By the same principle, sea travel would also have fewer edge cases, at least in calm conditions. Is it the case that people aren't seeing markets for those applications of autonomy, or is there some other reason why there isn't the same hype in those areas?


Vehicle cost; fuel cost; takeoff and landing; handling of edge cases (there's a reason that commercial airlines still have pilots despite autopilot being very effective in 90% of conditions).


How expensive would it be to create areas of road that only allow autonomous vehicles, something like the carpool lanes we have now?

Wouldn't that cut the problem size massively?


I can see that happening on major freeways, the way we have carpool lanes now, but not on a general scale.


A. Air travel generally is a far simpler problem to solve, because there's national standardization of the air navigation system, the airport paint and signage, and very clear rules of separation with very few edge cases. And yet we do not have anything close to self-flying planes, no matter how much money you throw at it - and we're not even close to getting there. Whereas with cars, none of that is true: municipalities get wedged into violating their own equivalent of human-interface guidelines all the time. Paint and signs worn for years, busted crosswalk signals, confusing intersections, human-driven cars that consistently break the rules, but in inconsistent ways.

B. Personal air travel has a significant regulatory burden in the transition from ground to air. The ground is city + state regulated. And immediately once airborne it's FAA regulated, but not under any kind of Air Traffic Control, as almost all airspace below 1200' above ground is uncontrolled. So ATC has nothing to say about it, and no central mechanism for negotiating conflicts.

Where you are allowed to take off and land is easily figured out today: airports only. And that's because it takes all kinds of things into account, like obstruction clearance, noise abatement, and anticipating engine failures and crashes. We can't even automate commercial flights. It might seem more accurate to say that we don't automate commercial flights but could - that's not really true. The amount of change to the air traffic control system, and to on-board equipment for airplanes, is presently so cost-prohibitive that it is effectively a "cannot be done".


TBH this is the Pareto principle at work.

Sure, you can get a car roughly driving around a city autonomously (there were basic self-driving cars back in the 1950s that drove on a wire track), but getting it perfect is where the vast majority of effort is going to be expended.


> I love the idea of self-driving cars and I really can't wait until they're ready.

What if I were to tell you that you can buy a self-driving car right now, and that there are hundreds of thousands of them in consumer hands, right now?

Tesla sells them. And they aren't that expensive.

If you can press a button, and take your hands off the wheel, that is a self driving car. And you can buy them today.


Sorry, nope. If your car crashes after you press that button and take your hands off the wheel, Tesla will point to the line in their terms of service which says, "keep hands on the wheel, driver must pay attention at all times."

Tesla (and Elon Musk in particular) are of course happy to encourage the misconception that they already sell a self-driving car.


You can call it whatever you want. But at the end of the day, this definitely fits the vast majority of people's definition of "self driving".

You press a button, and it stays in lane, at the right speed. Most people are happy with that feature set.

That is the MVP, that provides most of the benefits to most people, even if it isn't perfect.


It kills people... it's not ready.


Most people think of "self-driving" as allowing them not to pay attention to what the car is doing. Autopilot is emphatically not that, even in the limited highway settings where you can actually use it.


That you can get from any number of other makes, except that they do not slam you into firetrucks, and don't call it self-driving when it obviously isn't.


[flagged]


Please don't get into old habits of breaking the guidelines. It's absolutely not OK to address other community members like this and we ban accounts that keep on.

https://news.ycombinator.com/newsguidelines.html


It doesn't do everything. But it does enough to be very useful.

Most people in the world are happy to call that self driving, even if it doesn't work in the snow, or park itself.

It does not solve everything, but it solves enough of the problem space that most people consider that to fulfill the MVP of "self driving car".


It's got extremely limited assist features that still require a human to be legally licensed and competent to drive, hands on the wheel, not drunk or on disqualifying drugs, etc. That is not at all self-driving. You can keep repeating yourself and attempting to wordsmith this as if it is self-driving - it's not a convincing argument.


> It's got extremely limited assist features that still require a human to be legally licensed and competent to drive, hands on the wheel, not drunk or on disqualifying drugs, etc.

Irrelevant. Most people would still say that this falls under the category of self driving for most situations.

For the vast majority of people in the world, it solves most of their MVP usecases.

It does not work for everything and that's fine. What matters is that it works for most situations.

Since it works for most situations, most people would be happy to call that self driving.

Or in other words, it is a self driving car that is only allowed to be used when you have a license and are not drunk.

Yes, that is a limitation, but it is a small limitation and still mostly fulfills most people's definition of self driving.


>Most people would still say that this falls under the category of self driving for most situations.

Saying things doesn't make them true. Can you provide a citation for a study that tells us what most people want and need when it comes to either autonomous driving or self-driving? I don't know a single person who considers these very narrow feature sets with many caveats to even approximate either what they want or need.

>For the vast majority of people in the world, it solves most of their MVP usecases.

This cannot be a true statement. A top requirement would be "keeping me safer than driving myself" and in numerous instances we find that's simply not true even when people are paying attention.

>Since it works for most situations, most people would be happy to call that self driving.

Most people routinely drive on surface streets with traffic lights, pedestrians and cyclists. It absolutely does not navigate that environment by itself.

You really have no compelling argument here.


> top requirement would be "keeping me safer than driving myself"

No, the use case is "able to take my hands off the wheel and not pay attention". Regardless of what the Tesla terms of service say about hands on the wheel or whatever, I am sure that many of their owners do this already.

Even IF this is more dangerous, there are still lots of people who would be happy to make this tradeoff.

We trade off our safety for convenience all the time. This technology does that and it does so right now.


They are still working on the "do not kill the driver by crashing into stationary objects" feature, though, it seems.


Can I get in the car, set destination, then sit in the back and watch a movie with my feet up? Can I call my car to me when I'm in LA and the car in NYC?[1]

If not, it's not self-driving as generally understood; it's just driver or lane assist. That perception was intentionally created by Tesla, promoting their driver assist as if it gave you a car that could effectively wander off on its own - maybe take a part-time job as an autonomous taxi. They have long offered "Full Self-Driving" in the paid options list.

[1] https://twitter.com/elonmusk/status/686279251293777920


You don't need those features that you describe in order to be a self driving car.

For most people, the MVP of a self driving car is merely "able to take your hands off the wheel".


That's such b.s. Tesla explicitly says you can't take your hand off the wheel. So what are you even talking about?

"I'm going to drive to the store" means a complete sequence of events, start>unpark>integrate merge with traffic on route>follow route>navigate parking lot>park.

Nothing other than a human does that today. That is what is meant by either "self-driving car" or "fully autonomous". All exceptions to what a human can do is just making excuses, and it's b.s. Driving is a broad term that encompasses all responsibilities for getting from A to B.

Does a Tesla lane change automatically to follow a route? Does it enter turn lanes and turn onto new streets? Does it turn into parking lots? Nope. Human prompted lane changing, and staying in a lane, is not "driving", it's a vertical feature that partially offloads some task from a human.


Einstein's annus mirabilis was 1905 and it took decades of determined, international promotion of the Einstein brand, World War 2, a letter from Einstein to Roosevelt, plus another 6 years and 33 billion 1945 dollars (473B USD today), and a staggering collective of the smartest minds to ever walk the Earth in an existential race against a mortal enemy to develop nuclear weapons.

NASA's shuttle program started in 1969, cost $200B, was shut down in 2011, and never got close to its planned 50 launches per year.

This stuff is hard.


> This stuff is hard.

Luckily we attract our best minds with the highest salaries to solve the pressing problem of how to get people to click more ads.

(Yes, sarcasm.)


But both of these were mostly science problems. Self driving cars are about making machines interact with humans. That has never been solved before.

Self driving in itself wouldn't be an issue, it's the other humans on the road that make it so hard to solve.

One of the main problems for manufacturers is that we have never before solved a similar problem, so there's nothing to compare it to.


>it's the other humans on the road that make it so hard to solve.

Humans, cats and dogs, deer, tumbling rocks, low-flying seagulls, tumbling trash bags or a toppled trash bin, palm fronds, drunk people jaywalking, a drunk guy in a Jeter jersey directing traffic after a Yankees game, edgy teenagers pulling the invisible-rope trick (or just yanking a sheet on a string), puddles reflecting humans on the sidewalk, blind turns and hidden driveways, competitors' self-driving tech, and the meta-wars that will result from trying to raise a competitor's accident rates (not doing that would be leaving money on the table).


I have a book from 1982 that predicts fully self-driving cars by 1995.


Geo-fencing counts, IMO. "Driver assist" does not.

Full autonomy represents an economic paradigm shift, geo-fenced or not. When private cars went mainstream, the economic cascade was so broad that no one could have predicted it. Supermarkets, for example, and the subsequent impact on the food industry, were a side effect of private cars. Suburbs. Disneyland...

Self driving is such a change. Starting from geo-fencing is just a constraint on how fast the change happens. Private cars were limited by affordability instead. There's always something.

Once these cars are on (geo-fenced) roads, infrastructure (e.g. signs) and vehicles can start adapting to each other. There will be real economic pressures to "open up" roads and highways to autonomous driving.

The tech problems in the space are so interesting, we forget about other aspects. Ultimately though, geo-fencing or any other hack could get us to the point where product problems get solved without a 1:1 relationship to solving tech problems.


Hmm. I was driving a rental Hyundai that has auto lane-steering. I constantly wondered if it would follow the wrong path at decision points (exits and forks) or side-swipe another vehicle while trying to follow a lane. It's quite aggressive, basically overriding the driver's input unless you apply significant force, and it doesn't notify you audibly when it disables itself (except for a visual indicator - a small icon in the cluster turns from green to white - which requires "polling" behavior of constantly looking down).


You can feel whether it's on or off from the resistance. It requires you to apply force to the wheel anyway; otherwise there is a no-hands-on-the-wheel alert.


We might have to accept that we need to change our roads if we want autonomous cars.

We have lanes for carpooling - why not lanes for autonomous vehicles? The road surface could have special markers to centre the vehicle, and to position it in the correct lane for a certain destination. This could be a single lane in a highway, with shorter distances between cars (relying on the quicker reaction speeds of computers and the modern, well-maintained brakes of the cars themselves), allowing humans to overtake and recognise them easily.
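
On the reaction-speed point, a back-of-envelope sketch of why shorter following distances might be defensible (the 1.5 s human and 0.1 s computer reaction times below are illustrative assumptions, not measured figures):

```python
# Reaction gap: distance covered between a hazard appearing and the
# brakes being applied. Braking distance itself is unchanged if the
# brake hardware is the same, so this gap is where the shorter
# following distances would come from.
def reaction_gap_m(speed_kmh: float, reaction_s: float) -> float:
    return speed_kmh / 3.6 * reaction_s  # km/h -> m/s, then distance = speed * time

human = reaction_gap_m(100, 1.5)     # assumed typical human reaction: ~41.7 m at 100 km/h
computer = reaction_gap_m(100, 0.1)  # assumed sensor-to-brake latency: ~2.8 m
```

That's an order-of-magnitude difference in the reaction component alone, though it says nothing about sensor failures or the mixed-traffic problem.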

Then we need a standard method of communication between vehicles - braking, movement etc. - rather than relying on cameras and radar.

Remove as many variables as possible and the task becomes much easier.

Heck, we'll need a new method of charging road users for road use, without gas taxes, so we might as well introduce a standardised road pricing scheme as well (that takes into account vehicle size, weight, time, and location to calculate the price/km - and can communicate that to the end-user to allow them to optimise their travel).
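
A pricing scheme like that could be a simple multiplicative formula. Here's a purely hypothetical sketch - every rate and factor below is made up for illustration, not drawn from any real scheme:

```python
# Hypothetical per-km road price from vehicle size, weight, time, and location.
def price_per_km(length_m: float, weight_kg: float,
                 peak_hour: bool, urban: bool) -> float:
    base = 0.05                                  # assumed base rate, $/km
    size = 1.0 + 0.1 * max(0.0, length_m - 4.0)  # longer vehicles pay more
    weight = (weight_kg / 1500.0) ** 2           # road wear scales steeply with load
    time = 1.5 if peak_hour else 1.0             # congestion surcharge
    place = 2.0 if urban else 1.0                # dense-area surcharge
    return base * size * weight * time * place

# A 4 m, 1500 kg sedan off-peak on a rural road pays the base rate of $0.05/km;
# the same car at peak hour in a city pays 3x that.
```

The interesting design question is the last clause of the comment above: streaming the computed rate to the driver in real time so they can shift their travel, which is a UX problem as much as a pricing one.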


Driving on highways is relatively trivial with current tech (with a bit of exaggeration for effect) compared to surface streets, which don't lend themselves to these neat solutions. This is the last-mile problem once again, and as always, that's where the majority of the difficulty and cost lie.


Does the last mile problem really need to be solved to make autonomous vehicles fantastically useful?

A vehicle with an autopilot that's as reliably safe a highway driver as I am would be awesome. It'd give me back hours of attention (or sleep) during commutes or interstate travel. Even if it would seek a parked or human-piloted state on encountering problematic weather.

All-weather door-to-door capability may be what's needed to replace a large chunk of individually owned vehicles with shared on-demand fleets, but highway-only autonomy would make individually owned vehicles much more convenient.


I agree, and I don't understand why this concept isn't given more consideration. If, as other commenters have said, autonomous freeway travel is so "trivial" why can't we reap the benefits now while the industry grinds away at edge cases for another 20 years?


This could work well for commercial transit - have standby drivers for last mile, and have them board at the off-ramp and disembark at the on-ramp. For personal transport this could be more difficult though.


You can't rely on what other cars are telling you because a bad actor could create a device that sends out false information that leads to crashes (send out "I'm now accelerating from 50mph to 60mph" then brake immediately, or even just not accelerate and continue to go 50mph). You could of course solve this by verifying all the information sent by cars, but then you have negated any benefits because you have to be able to detect anything they could send in a message.


We have a justice system that would prevent this. Same reason why I don't worry that a gunman is going to kill me every time I go out into public.


This argument doesn’t hold though because a single bad actor who figures out how to spoof the interface has tremendous leverage to hurt a lot of people at once. And it’s likely they could leave a device at night and trigger it days or weeks later. One instance of this could literally cause enough public fear to get AVs banned for years.

I agree with OP that each car managing its own cameras and radar makes the most sense. Cooperation by way of individual optimization is the cornerstone of modern society and something we should absolutely imbue our autonomous vehicles with.


>And it’s likely they could leave a device at night and trigger it days or weeks later.

The same could be said about leaving a bomb on a bus, and yet there aren't bombs going off on buses every day, and even when it does happen, it's not enough to scare people away from buses for good.

Society kinda works because the vast majority of people are not homicidal.


When you think about it, it's amazing there isn't more terrorism.


True, and that is certainly not due to impotent efforts to increase security in basically every facet of life.


This is true. Not the sentence I expected to give me hope on a Sunday night, but it did.


" and yet there aren't bombs going off on busses every day, "

I don't know if that's true (except maybe in the literal sense that it's not every day, just frequently). As far as I'm aware this is a pretty big problem.

"Society kinda works because the vast majority of people are not homicidal."

I agree with this, and that self driving cars will only work under this assumption, but I don't think having vehicles communicate for safety-related (braking) information is the solution at all. If you're relying on that and can't get by with observed data alone, you open yourself up to a huge accident caused by a tiny rock or bird or anything unexpected (way more so than we do now).

If it's just supplementary as an extra layer of safety (attach this pod to your bike to make double sure all cars see you, but they're also still looking) then sure.


Bus bombings are (relatively) common in some countries. One went off very recently, actually: https://www.theguardian.com/world/2019/may/19/egypt-roadside...


Yup, that justice system seems to stop all those gunmen shooting people.

Except for, you know. The occasional bad actors.


I think the potential for 'bad actors' on the roads is significantly reduced just by eliminating the human driver!


How come? As it stands, anyone can be a bad actor but their impact is limited, and it requires a pretty big sacrifice (their health). For example, nothing is stopping someone from driving the wrong way on the highway other than a couple of Do Not Enter signs, but by crashing their car into oncoming traffic they risk losing just as much as the people they crash into.


How would one do this? Wouldn't all cars on public roads have software signed with strong cryptography?


Cryptography just guarantees that the message hasn't been tampered with or read by a third party; there's no way to mathematically prove that you will actually accelerate.
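To illustrate (a minimal sketch using Python's stdlib `hmac`; the message format and shared key are made up for the example): a valid signature proves authenticity and integrity, not truthfulness.

```python
import hmac, hashlib, json

SHARED_KEY = b"fleet-wide-secret"  # hypothetical key provisioning scheme

def sign(message: dict) -> bytes:
    # Canonicalize the message, then MAC it with the shared key.
    payload = json.dumps(message, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify(message: dict, signature: bytes) -> bool:
    # Constant-time comparison; only detects tampering, not lies.
    return hmac.compare_digest(sign(message), signature)

# A malicious car signs a *false* claim; the signature still verifies.
lie = {"vehicle": "car-42", "claim": "accelerating 50->60 mph"}
sig = sign(lie)
print(verify(lie, sig))  # True: authentic and untampered, yet untrue
```

The verifier can only conclude the message came from a key holder and wasn't altered in transit; whether the car actually accelerates is outside what the math can promise.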


> why not lanes for autonomous vehicles?

That would be such a huge change to infrastructure that "on-demand-trains" probably would be the cheaper alternative. There are also a lot of cases where you just lack the space for additional lanes.


Roads are expensive enough as they are; you want to ask for even more spending on roads?


Roads aren’t actually that expensive. Resurfacing a highway costs about a million dollars a mile. Rebuilding the track on the CTA Redline recently cost $40 million per mile, and that was a miraculously cheap and fast rail project. A new freeway costs 5-10 million per mile. A new subway tunnel can cost a billion dollars per mile. Building the DC Silver Line through existing freeway medians cost $240 million per mile.


Maybe in the boonies. Adding a carpool lane on 10 miles of the 405 cost $1.1 billion and construction was a shit show. Then we need to add sensors and whatever else into the road surface so private companies can get their profit margins? We can't even maintain our existing road surfaces. We have the lanes already: remove a car lane on every road more than one lane wide and make it bus-only; you get the same grade-separation benefits as a subway while requiring zero investment beyond retiming lights, buying plastic bollards, and maybe a few more buses to pick up demand. The insurmountable difficulty is in not letting emotional NIMBYs kill the bill.

https://xtown.la/2019/05/02/the-sepulveda-passs-failing-grad...


Some UK costs for adding a lane for comparison - https://publications.parliament.uk/pa/cm201617/cmselect/cmtr...

(roughly £4.5-6.5M a mile, probably $7-10.5M in 2015 money?)


It's awesome watching the race play out. At this point it very much looks like Tesla is sprinting to the finish line while everyone else took a wrong turn near the start by following Google.


What on earth makes you say that? Tesla’s whole model seems like a bust—unlikely to evolve beyond cruise control+.


If you watch the presentation at the Tesla Autonomy Day, the dude in charge of their deep neural network explained that they think basing the whole system upon cameras is necessary since HD maps and LIDAR are too limiting. Basically, the network needs to be able to see what people are doing to drive safely.

They've devised a method of sourcing training data for their network and are basically iterating through the problems, finding training data and counter-training data, as far as I understood the presentation.

Whether this is going to work or not, we shall see.


Have they solved the world model problem, or mentioned anything about that? If not, they are lost in noise.

Video streams are so full of maybe signals, and ML loves that. As long as they don't have a fundamental (sub)system that knows how much space it needs to safely proceed at the current speed, how the 99.99% of the roads of the world work geometrically, and how to infer this from visual data, they are in trouble.

(That said, I guess Karpathy is perfectly aware of this.)


I highly recommend watching the presentation. It's quite a deep dive into their entire plan including solving the world model problem.


It depends a bit whether you believe their CEO or not.


I do. Media circus aside Musk has a reputation of delivering working solutions to hard problems.

If they still had Chris Lattner heading Autopilot I'd be skeptical, but they have Andrej Karpathy, who is a god when it comes to machine learning and computer vision.


I would be 99% happy with autonomous driving if I could get a platooning hardware kit installed in my car, so that on the highway I could connect behind another car/truck going in the same direction as me and take a nap or watch a movie. This should be an easier problem than a full self-driving car by orders of magnitude, and I don't quite understand why platooning gets so small a share of the self-driving hype.


The first driver ("platoon commander") would still need to be human. I'm not sure people would sign up for this, as your error could result in deaths of people in cars following you.


Then we should have problems finding bus drivers?

Anyway, I'm more thinking the platoon commanders would be truck drivers, who already have legal rest requirements and no capability for speeding on the highway.


After reading this, I'm thinking Tesla's claims of being close to achieving level 5 autonomy aren't pure hot air. Against Toyota's requirement of "8.8 billion test miles for safe deployment of self-driving vehicles", Tesla is well on the path. I found articles online stating Autopilot has driven 1-1.2 billion miles. Assuming they've gathered many more miles of test data of human driving, then at least they have the raw data on which to iterate their platform. While I'm still skeptical, I've gained a new appreciation of how pushing the Autopilot package allowed them to gain a big advantage in terms of raw data.


I personally think Tesla simply doesn't have the sensors for self-driving.

I imagine the huge amount of data they've collected is largely useless because they aren't collecting the right data (e.g. LIDAR, radar, higher definition video)


They probably have the right data for safe disengagement. If they can solve that, the rest truly can be incrementally added.


Racking up a very high number of fair weather easy miles has zero effect on the machine's ability to tackle more difficult situations.


Very valid point, but Tesla's fleet will drive in a greater variety of weather than Waymo's, potentially gathering a better data set. However as another comment points out it could all be garbage data.


Let's have it try to commute in Boston during a nor'easter before we call it safe.


After riding in a Model X recently using Autopilot I am less skeptical than before.


"Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."

- Marvin Minsky, 1967, Computation


>Krafcik went on to say that the auto industry might never produce a car capable of driving at any time of year, in any weather, under any conditions. “Autonomy will always have some constraints,” he added.

This is an out-of-context quote. Right before that, the linked CNET article said "John Krafcik, head of the self-driving car unit of Google parent company Alphabet, said that though driverless cars are "truly here," they aren't ubiquitous yet. ".

There is a big jump between L4 & L5 autonomy which is what Krafcik was discussing. But even achieving L4 (which is what Waymo has in its Phoenix tests) at mass scale will revolutionize the industry since it means cheap robocabs for large portions of the US for much of the year.

The other companies are still struggling with L2 & L3, but Waymo's lead appears to be getting larger based on the public results they are showing. Tesla is the only one that's close since it has a public L2 system.


Yeah full autonomy that matches or exceeds human driving is a loooong way away for sure. Closing the gap is not just about sensing the environment, but also understanding it. That's (probably) a Hard AI problem.

Why is no one looking at retrofitting existing roads with markers that the car can use to follow? I've read suggestions of using magnetic markers which would have the added benefit of working through snow.

Roads get new lines marked all the time, seems like embedding a magnetic marker at the same time would be quite feasible. This would basically be a tram line on every street.

Who cares if the car needs to hand off to the human in the event of road works or missing markers? Those are fairly rare events on my commute at least.


Winter driving in cold regions is hard. Knowing where the middle of the road is and the car's position relative to it is only the very start of the problem.

In regions that have roads not cleared of snow for multiple days the lane positions change. The evolution of the position of the lanes is not strongly defined by where the previous lines were. The obvious spot to drive for a human would contradict the magnetic, ground penetrating radar database, or whatever marker is used.

Markers of any type would be useful for the car's awareness of its own position, but not sufficient for knowing what the lane has become. Even as a human you have to guess sometimes, because it's not possible to sense it; you just have to know what humans would do. Machines aren't great at heuristics and guessing what humans might do.


> In regions that have roads not cleared of snow for multiple days the lane positions change.

Or even in the same day. I've seen major freeways shrink from three to two lanes after heavy snowsqualls blanketed the road completely in less than an hour. Yet, traffic still flowed smoothly and quickly.


Lane following isn't the hardest problem to solve. Current autonomous vehicle technology can follow lanes pretty well. The problem is with avoiding collisions while doing so, like if a pedestrian steps into the lane.

And if the vehicle discovers a magnetic marker is missing, by that point it's too late to hand off to a human. The human might not be paying attention. The lag time is too long for safety.


Some are working on this. Check out https://theray.org.


I have made this prediction multiple times throughout the last 8 years based on my knowledge of robotics. I have argued with family members who are VCs as well as enthusiastic technical observers based on simple first principles reasoning.

It is highly unlikely that completely autonomous vehicles will be present on the streets - interacting with humans, cars, and random events in real-time - within the next decade. What is more feasible and likely to occur is that large chunks of trucking may become "automated" in the form of a hyper-advanced cruise control, but these vehicles will lack Level 5 autonomy for their whole route. Understanding why requires a deeper examination of where this technology came from: Stanley, Thrun's robotic car whose technology was acquired by Google and is the basis for Waymo's tech. Thrun notes:

> “In the last Grand Challenge, it didn't really matter whether an obstacle was a rock or a bush, because either way you'd just drive around it," says Sebastian Thrun, an associate professor of computer science and electrical engineering. "The current challenge is to move from just sensing the environment to understanding the environment."

http://news.stanford.edu/pr/2007/pr-junior-021407.html

More precisely, to interact and function in the wild, the robot needs to understand, distinguish, and model the properties of wildly different entities. And these entities aren't just other people - they are everything in its environment. This includes random events like people walking into traffic, objects falling onto the road, sudden rain and hail, or just potholes.

This is not a problem that can be trivially solved with better sensors or cheaper LIDAR. It is a conceptual problem that can not be trivially brute-forced nor simulated in advance, as each unique permutation is just different enough to confound existing models. Even if such different and confounding events are six sigma in nature, with a probability of 0.00034% of occurring in any given mile driven, then at 3.22 trillion miles/year (https://www.npr.org/sections/thetwo-way/2017/02/21/516512439...) Americans will experience 10,948,000 anomalies per year. Or, about 20.8 events per minute.

20.8 anomalies per minute is a pretty big number. That’s a potential accident every 3 seconds or so. And that’s a six sigma estimate. It is hard to emphasize how hard six sigma performance is for the current generation of autonomous vehicle hardware.
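The arithmetic is easy to check; a quick sketch (the 0.00034% rate and the 3.22 trillion miles/year are the figures from this comment, not measured values):

```python
p_anomaly_per_mile = 0.00034 / 100  # 0.00034% expressed as a fraction
miles_per_year = 3.22e12            # US vehicle miles traveled per year

anomalies_per_year = p_anomaly_per_mile * miles_per_year
minutes_per_year = 365 * 24 * 60

print(round(anomalies_per_year))                        # ~10,948,000 per year
print(round(anomalies_per_year / minutes_per_year, 1))  # ~20.8 per minute
```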

You might dispute the numbers, but they are meant to be an illustration and an intuitive explanation for why this problem is Hard with a capital H. Unlike some professionals, I don’t believe Strong AI is necessary to solve this problem, but Weak AI is still a form of AI. Technology that's just out of reach today.

At this point, people counter with Google’s cars. After all, seeing is believing, and haven’t they been seen to run successfully for millions of miles at this point?

Yes, but... Waymo/Google's success is hard to replicate. No one else can bend the rules far enough to achieve their headway.

Most of us forget that before starting the project, Google used LIDAR to create highly detailed maps of every environment their cars will ever be in. Engineers then captured standard human driving on that route over and over across different hours. These elements were then combined within their model to pre-compute routes before any real world driving happened.

Google was able to achieve its miraculous results because of its ownership of GMaps, Street View and the resources to deploy a fleet to map everything in advance. No one else has these resources. No one else can acquire them cheaply. And even Alphabet, a trillion-ish dollar corporation can't replicate this package for the entire world. Maybe someone will figure out a way to create precise sub-eighth-of-an-inch level 3D models of environments using drones and satellite images, but until that time this technology is too expensive to deploy.

It is hard to emphasize how important a distinction this is. The "only" thing a Google/Waymo robot does in real-time is that it checks the difference between the recorded dataset and whatever the real-time LIDAR + camera feed is saying. Or in other words, for every point of the route, Google has generated an expected set of values, and any deviations from these are used by the car to make real-time decisions. https://spectrum.ieee.org/automaton/robotics/artificial-inte...

However, even this partial solution is tempting, because it works. But, sadly, that's not the whole story. What gets lost in the re-telling of this story is that this technique means the cars can't work in unstructured environments. They are unsafe for most roads, especially uncontrollable, unpredictable, and narrow residential roads. It's why these cars exist inside strict geo-fences even when they're ferrying passengers around, as in Phoenix, where the car seems to mostly serve the downtown area with very little residential coverage (caveats apply, I couldn't find a map).

But, even getting this far is a huge technical achievement, even if it's not the fundamental breakthrough we've been looking for.

We should be prudent, but optimistic. There are a lot of smart people out there who are working on this problem right now. They are figuring out really clever ways to deal with these edge cases and they will solve this problem. Even if it's not on the hype cycle's schedule.

With that in mind, I'd like to suggest a better model for thinking about Driverless Vehicles is another Google service, Translate. It's quite good, but when was the last time someone translated an entire book with it (and it came out to be readable/sensible)? The system's performance - due to its statistical roots - seems to be fundamentally asymptotic w.r.t. the data presented to it. It can't be improved beyond a certain point even if you throw more data at it. I suspect that the current generation of driverless vehicles will be the same. http://people.csail.mit.edu/brooks/papers/elephants.pdf http://arxiv.org/pdf/1604.00289.pdf

Before the needed breakthrough happens, it is likely that the cutting edge won't advance beyond Level 3 in the next decade:

> I would guess Tesla’s position on this would be that most of the time, yes, you can rely on it, but because Tesla has no idea when you won’t be able to rely on it, you can’t really rely on it.

https://spectrum.ieee.org/cars-that-think/transportation/sel...

And that's okay. We might not have driverless cars by 2020, but we probably will by the mid 2030's and that sounds pretty darn fantastic to me.


> Americans will experience 10,948,000 anomalies per year. Or, about 20.8 events per minute.

One possibility is that driving is simply too challenging in the general case.

Note one of the problems discussed in the article, that self-driving algorithms can't cope with bad weather such as snow when road markings are invisible. Humans also can't cope with this situation. We don't drive with strict adherence to the rules of the road, and for example the tracks left by the car in front of you are far more important than any buried lane marking.

If we demand (for example) that autonomous vehicles cause less than one accident per 1 million km driven, then some situations that humans brave on a daily basis (thunderstorms, blizzards, icy roads, residential streets with children around) may never meet that standard even with a "perfect" autonomous system. Thus far, we've papered over the contradiction because we have human drivers to blame.


> Humans also can't cope with this situation.

What exactly does that mean? People will happily drive on I85/95 in Atlanta in torrential downpours where the lane markings are almost impossible to see. For the most part they don’t crash, because they know where the lanes should be. Humans are unmatched at filling in missing information like that. People drive in rain, sleet, snow. And they still manage to go 500,000 miles between crashes on average.


In Atlanta people have adapted to that situation, but in places like Los Angeles people come to a crawl in a drizzle with otherwise zero traffic. In places with snowstorms, people still regularly lose traction and spin off the roads or start sliding backwards down hills.

I remember a picture a couple years ago from a snowstorm that hit the south, could have been Georgia even, that looked like a scene from an apocalyptic movie complete with a flipped over car burning in the distance.

Some people crash; some people do fine with skill. Will a self-driving car be able to counter-steer an ice-induced drift into safely regaining traction like an experienced human driver could? That's all up in the air right now.


Subsurface mapping is worth keeping an eye on. [1] It offers a more-or-less instant solution to most of these concerns, if it can be made to work.

[1] http://news.mit.edu/2017/lincoln-laboratory-enters-licensing...


> Yes, but... The issue of using Waymo as an example is that Waymo/Google bent the rules to achieve these numbers.

Why does it matter whether or not Waymo uses maps, so long as they can solve the problem of cheaper, safer, reliable transportation between point A and point B? Yes, L5 autonomy is significantly harder and has huge advantages, but true L4 autonomy (which I do not believe currently exists) is still revolutionary. You're completely focused on the robotics challenge without understanding that the actual problem is just getting someone where they want to go. If they can solve the problem, it's not cheating - they just figured out an easier way to do it.

That said, I think L4 is still more than a decade away from where it can compete with Uber and Lyft.


Assume a 0.00034% chance of an anomaly for every mile driven; that gives 340 anomalies per 100 million miles driven.

According to AAA, drivers between the ages of 25 and 29 have 526 crashes per 100 million miles driven.

Humans are incompetent drivers to an extent we would never accept for robots. If we tried releasing autonomous vehicles that were even 100x safer than humans, we would end up banning them for a generation.


That’s not how the math works. At a 0.00034% chance of an anomaly every mile, you have roughly a 50% chance of an anomaly every 200,000 miles: (1-0.0000034)^200,000 ≈ 0.5. Humans go 500,000 miles between crashes. Of course an anomaly doesn’t necessarily mean a crash, but if you don’t require the human operator to be paying attention all the time, it could well lead to a crash with high probability. Indeed, because an anomaly would likely confuse a whole bunch of cars on the road at the same time, it could well lead to catastrophic and cascading failures, unless you posit the existence of inter-car communications technology which doesn’t yet exist.
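For reference, the per-mile rate compounds geometrically; a quick check using the comment's own figures:

```python
p = 0.0000034   # 0.00034% anomaly chance per mile, as a fraction
miles = 200_000

# Probability of completing all 200,000 miles with no anomaly at all.
p_no_anomaly = (1 - p) ** miles
print(round(p_no_anomaly, 3))  # ~0.507: roughly a coin flip over 200k miles
```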


Considering the average person drives 10,000 miles per year, that means the average person could live to 10,000 and not have a single crash.

> Humans are incompetent drivers

No, not really. Humans are actually such good drivers that computers have absolutely no chance of even coming close. Computers can't even stay running without crashing that long, never mind actually driving a car.


Considering the average person drives 10,000 miles per year, that means the average person could live to 10,000 and not have a single crash.

Your math is a little off. 100,000,000 / 526 is a crash every 190,000 miles. So every 19 years, not 10,000 years.
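A quick sanity check of that correction:

```python
crashes_per_100m_miles = 526   # AAA figure cited upthread for drivers aged 25-29
miles_per_year = 10_000        # assumed average annual mileage from the parent

miles_per_crash = 100_000_000 / crashes_per_100m_miles
years_per_crash = miles_per_crash / miles_per_year

print(round(miles_per_crash))  # ~190,114 miles between crashes
print(round(years_per_crash))  # ~19 years, not 10,000
```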

I guess I've been unlucky. In probably 500,000 miles of driving in my life, I've been rear-ended 3 times. Fortunately all at low speed, by distracted drivers, e.g. a mom with screaming small children.

That doesn't even count the time I tried to turn right from the left lane. It didn't end well for my car and was the only accident I was at fault for. And there were a few other accidents as well.

Perhaps "crash" is defined as including an injury? That makes the statistic more believable, because nobody ever got hurt in any of my misadventures. All my "crashes" just involved car body damage.


I didn't look into their methodology, but I would guess that crash was implicitly defined as reported crash. I've had a fair number of low speed collisions where we never bothered to report it.


From a marketing/business perspective for the automotive industry, my hypothesis is that it’s in their interest to favor publicizing the ‘best-case scenario’ earliest dates over realistic target dates for these kinds of milestones to hype the public up and make a company look like they are ahead of the pack in innovation.

This is the sentiment I gathered from also looking back at previous announcements in various industries (see r/retrofuturism on reddit) where a CEO will say that something is 5 years away when in practicality it’s at least 5 years away from being 5 years away.


Uber is screwed. They clearly stated in their SEC filing that the deployment of fully autonomous cars would significantly impact their future earnings.


That would be true even if they believed the chance that autonomous cars would succeed was 1%. If it happens, it will still significantly impact their earnings.


Related article:

> Tesla Will Have Full Autonomy in 2020, Musk Says


Reminds me of Uber's ex-CEO saying he'd buy every single autonomous car Tesla built by 2020.

The dream is slowly fading away. Tesla, or any car really, won't be fully autonomous for a long time, and Uber's economic model doesn't make sense anymore, because 90% of it was based on removing drivers entirely. It's like a dream built on top of a dream...

https://www.businessinsider.com/top-vc-claims-uber-ceo-said-...


Tesla has an advantage here in that they don't feel the need for their autonomy to be particularly safe.


I believe virtually everything I read, and I think that is what makes me more of a selective human than someone who doesn't believe anything.

David St. Hubbins


If safe, reliable, autonomous driving is a decade away that’s half a million lives lost. If it’s two decades away then a million.

The incentives are there to make this work. The funding is there to make this work. The technology is actually there to make this work, if you can change some of the rules to get there.

It’s worth over a trillion dollars annually to make this work. It’s the great infrastructure project of the 2020s, and we should get fucking started already.


Are self driving cars going to eliminate traffic deaths? Tesla already has a body count.


Eliminate totally? Of course not.

But we have a long way to go before 1 fatality per 1 billion miles would be a bad thing. That would be an order of magnitude improvement. I think we can get two orders of magnitude in 10 years.

Autonomy absolutely will have a body count. It cannot save every single driver. And so if we want to be able to enjoy the benefits of autonomy that saves almost every single driver, we better get comfortable with the notion that some people will die in self driving cars.

If we can’t accept that the technology will never be perfect we will just keep dooming 30,000 people (in the US alone) to die each year, not to mention countless maimings (2 million actually) and property damage.


It was always clear to me that providing more consistent and controlled roadways would be key to accelerating the arrival of autonomous driving. That is why freeway driving will come well before any autonomous city-street driving. Even the freeway is not consistent enough, but it could be adapted to some standard much more easily than, say, the roads in NYC.


If we are rethinking the timetable of the self-driving car, is it safe to rethink the whole future path of transportation? Do cars have to be it?


> Krafcik went on to say that the auto industry might never produce a car capable of driving at any time of year, in any weather, under any conditions.

What's the likely business model for Waymo if level 5 autonomy is unattainable?

Also, what did he mean by "never"? Did he mean "within our lifetimes" or something? A strangely confusing statement for a CEO of a self driving company.


There are areas of the country with relatively stable weather and mild conditions (no snow, no storms, etc). Could work reasonably well there.


I thought the conditions he is concerned about are not merely bad weather. Doesn't his statement hint at giving up on self driving in SF/NYC/London urban traffic?


I don’t understand the obsession or even the belief that fully autonomous driving is around the corner.

When has anyone, much less Google, released a product that worked at such advanced levels from day 1 and starting from no product in the market?

They should deploy autonomous cars at limited production scales in specific environments, and build it out from there—but we haven’t even seen that yet.


That's exactly what you're seeing Waymo do: they've been operating a paid taxi service at scale outside Phoenix for 6+ months.


Paid taxi service with humans at the wheel. It's not an autonomous taxi service until you can take the driver out.


Yes so the logical conclusion would be to scale that up, in whatever limited capacity it is currently serving.


How many places in the U.S. have the weather and traffic level of a region outside of Phoenix? Maybe a region outside of Scottsdale? They would scale if they could, but that adds more variables they haven't felt confident enough to tackle yet; otherwise you would see them scale this up around the U.S.


Great example of No True Scotsman: we went from them not existing to "well, they're not everywhere, so do they really exist?" in 3 comments.


I don’t understand how anybody can take seriously an approach that relies on statistics for self-driving cars. Even if it is 95% correct, who would risk getting killed 5% of the time while driving?


Of course not 95/5%, but we do rely on statistics that all kinds of bad cases will likely not happen every time we enter traffic. What the right statistic is, how to get good data on it, and how to mentally accept it and the loss of perceived control — that's going to be interesting.


Self driving probably won't really take off until it's clearly safer than human driving. Tesla's iffy autopilot statistics don't really do that yet.


I think there's another thing going on here; we've seen a similar pattern before, with electric cars about 10 years ago.

Ten years ago was when most of the car industry believed it had decades to adapt to full EV. It turns out that they had much less than a decade. By 2017 most manufacturers were scrambling to accelerate timelines for internal R&D when they realized competitors had other timelines and were about to grab some serious market share. This is now playing out in the market. Several ICE production lines have been shut down (GM, Ford, etc). Most manufacturers have launched or are about to launch full EVs. At this point several manufacturers are putting billions in making mass production happen in the next few years.

I predict the same will happen with autonomous driving. Most manufacturers are completely unprepared for it and behind with their R&D. They are actively looking to buy as much time as they can get because they know they won't be there. However, some manufacturers are ahead of the curve here and it's not just Tesla. When this stuff gets good enough, it means some manufacturers will have a huge competitive advantage over basically everything else. Everything you read about this needs to be put in this context.

In other words, some manufacturers are dragging their heels here and it's the same manufacturers that are also struggling to adapt to full EV. And they are using the same strategies to manipulate the timelines. Think futuristic looking concept cars that will never drive, lots of lobbying about how cool their R&D is, and a distinct lack of commitment to any timelines, combined with regular reports on serious safety issues that are obviously going to delay everything by decades. That's what this article says. What it translates to is: several manufacturers are getting to level 5 autonomy in the next few years and it is going to take decades for everybody else to catch up and mass produce this stuff.

Tesla is probably overconfident about the timelines but well on track to making lots of progress in the next few years nevertheless. Waymo looks like they are making progress as well. However, they are held back by the notion that they are more of a pure tech play and less of a manufacturing/logistics play. In other words, they are dependent on the aforementioned manufacturers dragging their heels.

IMHO several manufacturers are close enough to solving most of the issues here that there are going to be a few leaps forward in the next few years. These leaps are going to force the agenda for the rest of the industry. Mostly the timelines for autonomous and a switch to full EV are actually aligned.

By the time most cars are EVs, most of them will also be fully autonomous. That's roughly 15-20 years from now. Plenty of time to tackle any remaining issues with sensors, snow, and other edge cases. Yes, that's hard, but people are throwing billions at these problems and making rapid progress. The first level 5 cars might be on the market within as little as 2-5 years. The biggest hurdle here is non-technical: legislation. Tesla is being ambitious here (obviously), but even in 5 years they'd have the market largely to themselves.


AI Winter is coming.

(Sorry to riff on GoT, but I couldn’t resist)


Having 5+ companies each pouring billions into trying to make this technology a reality separately is ridiculous. All these companies should come together and work on a standardized single solution that can be shared between them. It will come much faster and be less expensive if they are pooling their resources.


Everyone here worships at the temple of competition


But what if they standardized on some compromise, as they tend to? That alone would set progress back, not forward. I think if there are multiple orgs with billions to throw at it, the more the merrier. If there were just a few with a few million each, your plan would make much better sense. At some point the bureaucracy of collectivism overshadows its usefulness.


I dare you to name a good technology that came about through a race of proprietary competition rather than close collaboration among the best talent in the field. It really is a waste to wall all this engineering talent off from each other just because their free backpacks have a different logo embroidered. You could have two engineers struggling with the exact same problem sitting in the same Uber Pool ride to work, and they wouldn't be able to speak a word of it to each other due to shortsightedness by shareholders. There will be a point where a clear winner emerges: 5 competing companies turns into 3, then into 1, and all those hours and hours of engineering effort by the failures becomes moot. It's not like the 1 company that emerges will hire another 5 companies' worth of engineers either; this is redundant work. You wouldn't even be able to learn from these failures until decades after the fact, in a blog post from a retired engineer no longer bound by an NDA.


Multiple companies are currently testing level 5 autonomous fleets in some European cities.

I think the article misses that you can't reach level 5 without efficient and secure V2X.


Could you point to some evidence that somebody has actually achieved level 5? Hopefully with a clear description of what the limitations are (e.g., weather, geography, time of day). Claims are easy. Especially claims of testing.


I don't know the details, these were from companies I never heard of. But these were driving around in fairly busy European streets with zero human interaction.

(I don't like the new HN atmosphere where people downvote any comment they don't like.)


It's not any comment; it's a bold claim with no evidence. Also, it's not clear to me that you understand the levels. A car "driving around in fairly busy European streets with zero human interaction" is evidence of level 3, nothing higher.


V2X?


Apparently vehicle to everything [1].

The car industry seems to love V2* terminology. V2H is connecting an electric vehicle to a home to provide power in an emergency.

[1] https://www.investopedia.com/terms/v/v2x-vehicletovehicle-or... [2] https://thedriven.io/2018/10/19/v2g-whats-the-state-of-play-...


After seeing some autonomous cars in action, I believe they are 20 years away. In that sense I mean a fully automated driverless car that you can command to do what you want.

It would be great if you could buy a car, have it bring you to work while you read or slept, and then, after dropping you off, have it go earn money Ubering people around until it was time to pick you up. But I highly doubt this will happen in less than 20 years at this point. There's just too much lacking. It needs real artificial intelligence, not just machine learning and pattern matching. The car needs to know "that's a paper bag flying in the wind I'm about to hit," or "that truck in front of me is stopped."


Considering that we have made ~0 progress toward true AGI, I'm skeptical that level 5 autonomous driving will arrive even within 20 years.


We certainly don't have true AGI, but there has been progress in that general direction, e.g. DeepMind's AlphaZero, its StarCraft agent, and this thing: https://news.ycombinator.com/item?id=17313937


No those aren't progress toward AGI at all. They are tremendously impressive technical achievements, but from an AGI perspective they're basically just parlor tricks.

The reality is that for AGI we don't even know which direction to go yet, so it's literally impossible to determine whether we're making forward progress toward the goal. Show me a computer as smart as a mouse and then I'll believe we're making progress toward AGI.


> "that truck in front of me is stopped."

Easy if you're not Tesla and have lidar.


How much of this delay is just perfectionism and excess risk aversion? In pretty much any pursuit, demanding perfection ensures that you'll never be finished. Based on what I've seen, autonomous vehicles already work. Waymo's cars have driven millions of miles. Other companies have working systems too. The technology is already at the point where it's useful.

I understand wanting to keep people safe, but the benefits of this technology are so enormous that I'd rather companies launch now and iterate than spend decades wringing every last disengagement out of the system while the carnage on our highways continues.

I feel like we're in one of those situations where everyone is just afraid to be first. Meanwhile, people die every day.


You'd rather have car companies release beta software to millions of cars than stick with the status quo? This isn't the Windows 10 Insider Preview. Cars can kill people, and production vehicles should never contain beta software, which is beta because it's buggy/untested/unreliable.


No software is perfect --- and that goes for the wetware in our heads too. Why would you want people to keep dying on the roads just so we can avoid shipping imperfect-but-still-superhuman autonomous driving vehicles? We need a more nuanced outlook than "Disengagement! NO RELEASE!".


Do you think beta software belongs in airplanes too? We sure could do with a lot more 737 MAX incidents.


[flagged]


Plane crashes are so rare because of the risk aversion. It's perfectly relevant.


In any case, that degree of risk aversion is unachievable, unaffordable, and unproductive when discussing personal automobiles. Cars are killing people now. It is not reasonable to wait for some Utopian future where every car has the ability to pilot itself with NASA-grade software.

Human drivers are already "moving fast and breaking things," and they're not getting any better at it.


But we're not waiting. Cars today are absolutely crammed with automated safety features. Far more than the airbags and ABS on my 20 year old VW. Think automatic collision avoidance, blind zone detectors, traction control, lane departure warnings, back-up cameras, adaptive cruise control, and more that I've probably never heard of yet.[0]

To think that we're waiting until perfect level 5 automated driving before we release it to the public is ridiculous. As the article clearly states,

> Most OEMs are now more forthright about the fact that autonomy will be a succession of small, graduated steps.

We didn't go straight to landing on the moon. Neither will we go straight to autonomous driving, or flying cars. Flying cars will never happen, because there are no gradual steps between flying and not flying, but software can be gradually improved. That doesn't mean running untested beta software in cars though.

[0] Also, advanced crumple zones, numerous airbags, proper safety restraints, crash guards on transport trailers, high-tech road markings, restrictive driving tests, and strict laws on driving while intoxicated or distracted.


> I feel like we're in one of those situations where everyone is just afraid to be first.

Tesla is not afraid. They've already killed a few people with Autopilot. The best part: they fixed the bug that led cars into concrete barriers, and then it regressed a few months later...


Waymo has already come out and stated that AVs as they've been pitching them are much further away than previously claimed, and they've now moved to try to become a Tier 1 component supplier, to monetize at least _something_ for all the money they've dumped into this endeavor.

Additionally, despite driving the exact same corridor for several years, there are still trivial and commonplace scenarios that their system can't handle reliably due to environmental ambiguity (i.e. multimodal human intent).

Though, not to gang up on Waymo here. Probably most programs are stuck/grappling with these problems. You just mentioned them directly, so I picked on them.


That you are willing to let other people die for the sake of increasing your convenience isn't very compelling.

And I think that's also unnecessary. From what I can tell about Waymo's approach, they are launching with something well below level 5, but where they cut over to a remote driver whenever the car is unsure. It's a smart approach, in that they can offer a level-5 service with level-3 technology. They've turned the technology cliff into a cost-optimization problem, which they can chip away at over time.


1.25 million people die in human-driven traffic per year, 37,000 of them in the US.

SDCs should only need to beat that safety level, not be perfect.


> SDCs should only need to beat that safety level

And they can't as they currently are, and won't be able to for quite some time.

Also, what they need to beat is not the aggregate safety level, but the average reliability of human drivers. Not all deaths are due to driver error. And they need to beat it by a large enough margin that people will accept that they are more reliable.

Finally, as another poster pointed out, there is the issue of liability. We know how to assign liability if a human driver makes a mistake. How do we assign liability if an autonomous self-driving system makes a mistake? I can tell you that, if I'm the human who owns the car and I'm still going to be held liable if the car's autonomous self-driving system makes a mistake, I'm going to be very, very hesitant about letting that system control the car, even if the statistics show it's more reliable than I am. At the very least I'm going to want to closely monitor what the system is doing--which of course defeats the whole purpose of having it. And if I'm an automaker and am going to be held liable if my self-driving autonomous system makes a mistake, even if a human driver is in the car and I can't control how they operate the system, I'm going to be very, very hesitant about selling cars with that system, even if the statistics show it's more reliable than human drivers.


Like I said in the other thread, it has to be the automaker that stands behind its product with a financial safety guarantee.


> it has to be the automaker that stands behind its product with a financial safety guarantee.

And, as I said, if I were the automaker, I would be extremely hesitant to give such a guarantee with human drivers in the car whose operation of the systems I am unable to control. So cars with such guarantees would basically have to be entirely self-driving, i.e., human intervention not even possible by design. That is a huge increase in required reliability, not to mention a huge change in how cars are currently used. Which is not to say it won't eventually happen, just that such a liability regime would, I think, significantly increase the time it will take to get to ubiquitous use of self-driving cars.


Yes, I am talking about fully self driving cars.

You could still have "dual mode" cars that could be switched to manual mode, perhaps only when parked.


More than 10k of those U.S. people die from alcohol related crashes. Would you be happy with your car being just marginally better than a drunk driver? It could save so many lives!

Okay maybe just drunk drivers are a bad comparison, let's look at all of those accidents. They aren't all drunk people. Turns out like 95% of them are due to similar human stupidity though. So unless you want your car to be better than a terrible driver, it doesn't have to just reduce accidents by a little, it has to reduce accidents by 95%. Otherwise you're advocating for the analogue of a lyft driver who is looking at their phone and only a little tipsy to replace you behind the wheel. Yeah maybe it'll decrease deaths, but I'm not buying one, nor am I advocating for legalizing the equivalent of a driver on their phone and kinda tipsy.


I understand the theory, but good luck selling a car on the basis of "probably as safe as the average driver!" And good luck convincing the parents of some kid run over by Google that it's just one of those things.

Note also that for many of those 37,000 deaths, a human is found culpable, often doing time. Who do you propose should do the time for autonomous cars? An engineer? Their manager? The CEO?

In practice, our corporate accountability is such that nobody will pay any penalty. Maybe that's fine when, as with the 2008 financial crisis, the losses are abstract and distributed. But when it's the life of somebody's kid, somebody's mom, somebody's sister? That at best will be an extremely volatile situation. Especially if Google or Uber captures 20% of the self-driving car market and is now responsible for 37,000 * 0.20 = 7400 deaths per year, or about 20 per day.
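To be clear about the numbers I'm using (the 37,000 US deaths/year figure and a hypothetical 20% market share), the back-of-the-envelope arithmetic is:

```python
# Hypothetical: one vendor captures 20% of a market with 37,000
# annual US traffic deaths. How many deaths per day is that share?
us_traffic_deaths_per_year = 37_000
market_share = 0.20

deaths_per_year = us_traffic_deaths_per_year * market_share
deaths_per_day = deaths_per_year / 365

print(round(deaths_per_year), round(deaths_per_day))  # prints: 7400 20
```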


I wonder how many of those deaths anyone is punished for? My guess is under 10%. Mostly it's just considered an "accidental death" and life goes on, as I understand it.

But I (fortunately) don't have much experience with this.

The only possible solution is that the SDC and/or software manufacturer is legally responsible for any accidents the SDC causes. I think that coverage has to be part of the legal package for these things to ever be sold in the US. But that would be money, not jail time, of course.

Fortunately the car(s) should have video and data logs from the event, so the evidence situation should typically be very clear.


> The only possible solution is that the SDC and/or software manufacturer is legally responsible for any accidents the SDC causes. [...] But that would be money, not jail time, of course.

I don't think there's any "of course" there. As we see with the 2008 crash and the financial industry generally, corporate financial liability for the high-risk decisions of individual managers and execs doesn't do much to reduce systemic risk. For the managers, it's a "heads I win, tails you lose" play.

The difference between banks and robot cars is that a banking failure just means slightly higher prices for customers and/or slightly lower dividends for investors. Whereas with robot cars, it comes in funerals. Especially given the endemic low quality in the software industry, there's little doubt in my mind that unless "disruptors" face actual jail time, we'll see what is in effect scaled-up negligent homicide.


I mean, we have a century of settled tradition/law for how to handle deaths and injuries from badly designed cars. It's far from a new issue.

I think that system can keep working.

I don't really see the analog with the banks.

Software has endemic low quality where it doesn't much matter. In mission-critical systems, it is usually as solid as it needs to be.


> I mean, we have a century of settled tradition/law for how to handle deaths and injuries from badly designed cars.

It's settled for cars with human drivers, because the drivers bear most of the responsibility. For cars without drivers, who's to blame for bad driving? "Somebody's bank account" is not a compelling answer.

> I don't really see the analog with the banks.

What part is unclear for you? I'm saying that with banks for the last 10+ years, we see that financial penalties don't solve problems of risk or accountability.

> Software has endemic low quality where it doesn't much matter. In mission critical systems, it is usually reliably as solid as it needs to be.

I don't think either of those statements is correct, unless you mean the first one tautologically. But the latter is definitely untrue. Look at the way Toyota was building their software. Or the recent Boeing fiasco. Or Volkswagen's emissions scandal, estimated to be responsible for dozens of deaths.


I would buy a car guaranteed to drive as safely as I do, except autonomously. I'd do it in a heartbeat, and millions of people would line up behind me. It's not lack of customer demand that's keeping autonomous vehicles out of the hands of the public. It's risk-averse bureaucracy.


What's being discussed is not driving as safely as you do. It's as safe as the average driver. Or, more accurately, a company claiming it's as safe as the average driver, which is why I said "probably".

Most people consider themselves above-average drivers: https://www.psychologicalscience.org/news/motr/when-it-comes...

So it seems pretty reasonable to expect that a car that kills the same number of people as the population average (which includes drunk drivers, tired drivers, distracted drivers, etc.) will be seen as a worse driver than the person purchasing the car.

And I think it's a given that regulators, and the general public, who are the ones who will be hit by the self-driving cars, will see an automated car that only kills 37k people/year as a non-starter.


> That you are willing to let other people die for the sake of increasing your convenience isn't very compelling.

That's how we got the original automobile, and that's been a great benefit to the human experience.

Everything is costs and benefits. In exchange for reducing the amount of very dangerous manual driving that people do, we plug in autonomous technology that's very good, but not perfect. It still saves lives overall.

This idea that autonomous must be perfect or you're a murderer --- it's simplistic and counterproductive, and following this kind of absolutist thinking guarantees that we'll never get autonomous technology and that people will continue to slaughter each other with manually-driven vehicles.

Taking an absolute stance against dangerous technology might seem good at first, but in the long run, it increases the sum of human misery.

It's perfectionism, not autonomy, that's killing people every day.


> The technology is already at the point where it's useful.

Useful for certain limited purposes, yes. But that's not where it needs to be to start taking the place of human drivers in everyday usage, in order to reduce harm in everyday usage (which I agree is a good thing to do). For that purpose it needs to be more reliable than the human drivers are, and it's not even close to that and won't be for quite some time.


>I'd rather companies launch now and iterate

Well, I think they have, in a way. Tesla Autopilot is an example, and probably not a bad example of where self-driving cars really are. Given Tesla's demonstrated in-service capabilities, it seems we can do a few things, but remove-the-steering-wheel self-driving is many, many moons away.



