Ford CEO says the company 'overestimated' self-driving cars (engadget.com)
335 points by paganel on April 11, 2019 | 505 comments



Ford's problem may be Argo. Ford spent $1 billion for Argo, and paid for a big building with the Argo name on top. But Argo didn't get their California license to test autonomous vehicles until early this year.[1]

Self-driving has had too many "fake it til you make it" startups. Cruise started that way, and suckered GM into buying them for $1 billion. Uber bought Otto, and we now know how fake that technology was. Tesla hyped their basic lane-keeper and car detector into an "autopilot" then repeatedly plowed into clearly visible obstacles and killed people.

(Despite all the blithering about edge cases, they haven't been big problems in practice. The serious Uber and Tesla accidents were not edge cases. They were blatantly obvious obstacles: a semitrailer, a fire truck, a fixed barrier, and an isolated pedestrian on an open road.)

Waymo, meanwhile, keeps plugging away, driving around, getting their level of disconnects down each year, improving their sensors, and doing large scale tests. Once in a while they get rear-ended. If they don't get killed by Google/Alphabet's attention deficit disorder problem, as happened to Google's robotics efforts, they're going to get this into production.

It's not easy, and it's not impossible. It's just hard, like television or xerography. Those took decades from first demo until they worked well. This doesn't fit well with the startup make-money-fast model. It does fit with the big-company R&D lab model.

[1] https://techcrunch.com/2019/01/29/argo-ai-acquires-permit-to...


I agree Waymo is doing the best here, but Waymo is nowhere near a real self-driving car. They can't make unguarded left turns. Or drive in the dark, or the rain. Much less the snow, or in construction, or on poorly marked roads, or roads that aren't marked at all. Or city roads that aren't laser-mapped to the centimeter. Or roads with many pedestrians. Or cyclists.

And even when they do make it work, how is money going to be made by someone like Waymo? I don't see a route to the supposed massive profitability that would justify the huge investments being made.


> I agree Waymo is doing the best here, but Waymo is nowhere near a real self-driving car. They can't make unguarded left turns. Or drive in the dark, or the rain. Much less the snow, or in construction, or on poorly marked roads, or roads that aren't marked at all. Or city roads that aren't laser-mapped to the centimeter. Or roads with many pedestrians. Or cyclists.

I work for Google, opinions are my own.

I don't know the specifics myself but I trust what you're saying is true and agree those are real problems.

Nevertheless, one of the best things to do when you have a really hard problem is to simplify the problem. It's not as though we have to have fully self driving cars before they're released.

I think what would make a lot more sense is some middle ground where we have certain sections of the road where self driving cars will be able to work well and only allow them there.
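As a toy illustration of the "only allow them there" idea: an operating area can be described as a polygon of coordinates, and the vehicle checks whether it is inside before engaging autonomy. Everything below (the point-in-polygon test as the mechanism, the coordinates, the names) is an illustrative assumption, not how any particular company does it:

    # Toy sketch of geofencing autonomy to a pre-approved operating area.
    # The polygon, coordinates, and names are illustrative assumptions.

    def point_in_polygon(lat, lon, polygon):
        """Ray-casting test: is (lat, lon) inside the polygon of (lat, lon) vertices?"""
        inside = False
        n = len(polygon)
        for i in range(n):
            lat1, lon1 = polygon[i]
            lat2, lon2 = polygon[(i + 1) % n]
            if (lon1 > lon) != (lon2 > lon):  # edge crosses the point's longitude
                crossing_lat = lat1 + (lon - lon1) / (lon2 - lon1) * (lat2 - lat1)
                if lat < crossing_lat:
                    inside = not inside
        return inside

    # Hypothetical approved zone (say, a retirement community), as polygon vertices.
    APPROVED_ZONE = [(33.30, -111.85), (33.30, -111.80), (33.35, -111.80), (33.35, -111.85)]

    def autonomy_allowed(lat, lon):
        return point_in_polygon(lat, lon, APPROVED_ZONE)

    print(autonomy_allowed(33.32, -111.82))  # True: inside the approved zone
    print(autonomy_allowed(33.40, -111.82))  # False: outside, hand control back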


That was the idea with Google's little bubble car. It was supposed to have a top speed around 25mph and cruise around retirement communities. That seemed like a feasible goal. But it cost too much to make as a product. The LIDAR units alone would have put it over $100K.[1]

Voyage.auto [2] claims to be deploying such cars now. Or rather, their web site contains announcements from late 2018 that they were doing so; later information seems to be lacking. It's a basically good idea, but they claim an on-site staff of 5 for three cars, so this is nowhere near making financial sense.

A real problem with self-driving cars is the false-alarm rate for emergency braking. If you're conservative about crash prevention, every once in a while the vehicle is going to detect something it doesn't identify as safe and will brake hard. That's why Uber turned off automatic braking in their cars - "to reduce potential for erratic behavior."[3] False-alarm braking, or even strong braking conservative by human standards, limits customer acceptance.

[1] https://arstechnica.com/cars/2015/05/googles-quirky-self-dri...

[2] https://voyage.auto/

[3] https://www.latimes.com/business/autos/la-fi-uber-arizona-nt...
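To put rough numbers on the false-alarm point above: a perception stack classifies obstacles many times per second, so even a tiny per-decision false-positive rate adds up. A back-of-the-envelope sketch; every figure below is an illustrative assumption, not a measurement:

    # Back-of-the-envelope: how often would a car hard-brake for phantom obstacles?
    # All numbers are illustrative assumptions, not measurements.
    decisions_per_second = 10    # obstacle classifications per second (assumed)
    false_positive_rate = 1e-6   # chance a single decision wrongly says "brake!" (assumed)
    trip_minutes = 30

    decisions_per_trip = decisions_per_second * 60 * trip_minutes
    phantom_brakes_per_trip = decisions_per_trip * false_positive_rate

    print(f"Decisions per trip: {decisions_per_trip:,}")
    print(f"Expected phantom hard-brakes per trip: {phantom_brakes_per_trip:.3f}")
    print(f"Roughly one phantom brake every {1 / phantom_brakes_per_trip:.0f} trips")

Even at a one-in-a-million false-positive rate, that works out to a hard brake every fifty-odd half-hour trips; tune the threshold the other way and you get Uber's "reduce potential for erratic behavior" failure mode instead.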


Or properly merge. Or recognize proper behavior in right-on-red situations. Or prevent their "safety drivers" from stopping in traffic to pick up friends for joyrides.


> And even when they do make it work, how is money going to be made by someone like Waymo?

SDC as advertising venue and usage info as advertising data source.


> (Despite all the blithering about edge cases, they haven't been big problems in practice. The serious Uber and Tesla accidents were not edge cases. They were blatantly obvious obstacles: a semitrailer, a fire truck, a fixed barrier, and an isolated pedestrian on an open road.)

> It's not easy, and it's not impossible. It's just hard, like television or xerography. Those took decades from first demo until they worked well. This doesn't fit well with the startup make-money-fast model. It does fit with the big-company R&D lab model.

Amen, and in my view, that's probably a good thing. As much as I think driverless technology will be a boon when it eventually arrives fully, the interim is damn scary. I would much rather have a decade or two of intervening half-solutions that actually provide well-understood levels of safety. If a company were willing to shoot for something less than the moon, we might have been on track for specialized carpool lanes that support driverless conveyance (e.g. buses or taxis) within major cities, and possibly within 30, 50 or even 100 miles of them, depending on the metro area and the funds to adopt them. The problem is, who is willing to fund research into, or consider implementing at a city scale, technologies that require infrastructure investment when all these players are acting like they'll have a perfect solution in a year or two?

I would much rather have a more modest but well-understood solution in place in 3-5 years, and a good beginning for a stable and trusted fully autonomous system in 10-15, than the broken promises and, IMO, incredibly optimistic PR we get now, which serve no purpose other than to capture money and public perception.


> Who is willing to fund research into, or consider implementing at a city scale, technologies that require infrastructure investment when all these players are acting like they'll have a perfect solution in a year or two?

Yes. Volvo demoed a system where they fired strongly magnetized nails into pavement to create markers they could detect through snow. Good idea, but doesn't seem to have been pursued. Volvo pointed out that it would be useful for snowplow guidance as well as self-driving vehicles. It wasn't intended to be the only reference, just a hint for use in bad weather.


> edge cases... haven't been big problems in practice

Waymo doesn't acknowledge any "problem" that doesn't result in a collision. They don't seek input regarding such cases. Heck, most reputable trucking companies at least have a "How's my driving?" referral number clearly visible.

I've seen three cases where a driver had to take evasive action to dodge a Waymo fail in the past month alone.


My personal experience with Waymo autonomous driving has been pretty good. I've never seen them do anything poorly, and I see them a lot. They're known in Mesa/Chandler, Arizona as decent enough drivers, though they can take a little long on a left turn and they follow the speed limit rather than the flow of traffic, which is the Arizona norm.


Our industry failed here in setting expectations. We had people like George Hotz showing how easy it was, and Elon Musk promising "autopilot" when in reality it's fancy lane following. These people knew exactly where these systems break down, but glossed over that fact.

I'm optimistic on self driving technology, and I work at a company which builds a part of that tech stack. Here, in the bay area, there are a number of companies that got large amounts of funding trying to do self driving with cameras and computer vision alone. They'll fail. Camera based perception is not a solved problem, and it won't be for a long time, which leaves the traditional robotics sensing approach of Waymo and Cruise. These guys will have something workable first, but it won't be safe enough to be considered L4 or L5 anytime soon. Everyone else is miles behind Waymo and Cruise when it comes to generalized self driving, though some players are quietly making good revenue on things like parking lot shuttles, mining equipment, and shipyard hauling.

I would love to have an autonomous mode in my car which requires no human intervention, so that I can sleep, or go out for a few too many drinks and have the car bring me home. Working directly on this stuff, I can see that I won't have such a car within ten years, maybe even twenty. I sure hope it's available in twenty, since I'll be too old to drive safely.

You're right, it's possible, but very hard. The barrier to cross from L3 to L4 is very high, and part of it is demonstrating to regulatory agencies that you have a safe system in all conditions, which will take a very long time.


"Here, in the bay area, there are a number of companies that got large amounts of funding trying to do self driving with cameras and computer vision alone. They'll fail."

Probably right.

..."which leaves the traditional robotics sensing approach of Waymo and Cruise. These guys will have something workable first, but it won't be safe enough to be considered L4 or L5 anytime soon."

L4, with slowdowns when the data is iffy, is probably achievable with available technology, if you're willing to accept occasional hard braking and situations where the vehicle stops safely and asks the driver to take over. There's a customer acceptance problem with that.


Ford was working on autonomous stuff before Argo though, from what I understand. Argo was bought, I think, in hopes that they could significantly advance their tech with startup ingenuity. It's a bit disappointing, because Detroit would be a good testing platform for developing these vehicles, given our variety of weather and road conditions that would be great for proving all of this in non-ideal conditions, but all I ever see in this city are May Mobility shuttles.

I don't think Waymo is really all that much ahead. Their testing is largely still in very ideal areas. Arizona has been attracting firms not just because it passed some permissive laws, but also because it's a place with lots of sunlight and relatively mild weather compared to places like the Great Lakes or the Northeast. The same goes for California. Waymo's progress looks much better because they're aggressively testing this stuff in ideal real-life conditions, but I think even they are a long way off from deploying those vehicles to places like Michigan year-round (a state that also has allowances for autonomous driving).


Waymo can be found on the roads in NW suburbs of Detroit (Novi). Admittedly I did not see them much over the winter. Point is simply that they seem to be testing in the more ‘real-world’ conditions found in the Midwest. I have no idea what the progress is or number of autonomous miles driven but they do have equipment in the field in Michigan.

Hopefully as other companies advance they’ll also move to Michigan as an ideal test bed and bring the crossroads of tech and auto to Michigan.


Waymo has already launched and also fell short of being completely driverless.

https://arstechnica.com/cars/2018/12/waymos-lame-public-driv...


Waymo hasn't been in any serious accidents yet because they have human safety drivers and because they don't put their vehicles in situations where they can cause a serious accident even with a safety driver.

In that sense, expecting that Waymo is doing better than the rest and it's going to get self-driving cars into production is a bit like expecting that a person who learned how to walk on a line drawn on the floor is going to perform tight-rope walking because they're more careful than all those other idiots who actually tried to walk on a rope.

The thing to keep in mind is that self-driving is hard, much harder than tight-rope walking. It's so hard that it remains unsolved, currently, and no amount of careful application of non-solutions will result in a solution.


You are right. The main problem is that companies thought the full solution would come in a few years and not in a few decades.


I miss your comments on slashdot.


Funny how everyone on slashdot is still all in on driverless cars after years away.


Maybe SoftBank will eventually buy Waymo.


Why? Alphabet is more than happy to fund the venture.


They're reportedly seeking outside funding: https://techcrunch.com/2019/03/11/report-googles-waymo-seeks...


[flagged]


Tesla "autopilot" is only L2 that had multiple accidents including fatal[1]

[1] https://en.wikipedia.org/wiki/Self-driving_car#Tesla_Autopil...


> They were blatantly obvious obstacles

Each accident you cite happened under the supervision of a human driver. Without knowing how many accidents were prevented by those supervisors, it's difficult to tell how much worse the AI is than the average human.


The supervisor of the Uber that killed the pedestrian was watching Hulu on her phone at the time of the crash.

edit: source: https://www.businessinsider.com/uber-driver-rafaela-vasquez-...


The problem with self-driving cars is you can get most of the way to a working solution but that’s not good enough.

If the solution is flawed even a little bit, people will die. That will give self-driving cars a bad reputation and people will be unwilling to ride in them.

If you are going to geofence self-driving cars it is questionable whether they are better than trains and buses.

I think the best we can hope for in the next 10 years is better driver assistance systems, like GM Super Cruise or Tesla’s Autopilot (if they can make it stop crashing into things).


I recently used a rental that had GM's "adaptive cruise", and it was mostly awesome even in dense traffic.

There were two main issues that had to do with following distance.

1: I had it set to 80 mph, and the other cars in the HOV lanes on I-80 were going 75-80; I was impressed by how the car maintained a relatively safe following distance. This seemed to irritate the other drivers: I would pick up tailgaters, and eventually people would pass me on the right to occupy the space in front of me, causing the car to slow down to restore a safe following distance, which in turn gave the tailgaters an autonomous brake check.

2: In slower traffic around 30 mph (still set to 80), that same safe following distance allowed other drivers to cut me off, and several times the brakes engaged hard when the collision detector was triggered. The first time it happened I was not prepared and barely got my foot to the brake pedal in time. After the second time, I decided to turn it off: the time it took to move my foot to the brake was too long, whereas when I was actively working the gas and brakes I could react much more quickly to traffic.


> There were two main issues that had to do with following distance.

The trouble seems to be that they're being too conservative. Following distance is what it is because of human reaction times. You have to process the input with your brain, decide what to do, move your foot, then the car starts to slow down. In principle a computer can do it faster and so doesn't need as much space.

Moreover, everybody would like following distance to be dictated by safety, but in practice it's dictated by traffic volume. If you have traffic averaging three car lengths between each car and then twice as many cars merge into the highway, either you end up with one car length between each car, or you end up with stopped traffic. There is no third option.

The problem right now is that they're still using the human driver as a backup, so using the safe-for-computers following distance doesn't mesh with that, but using the safe-for-humans distance is a violation of standard behavior. So they're stuck choosing between the unsafe thing everybody does, or the "safe" thing that then becomes unsafe anyway because of how everyone else reacts to it.
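A quick sketch of the reaction-time part of that argument (the reaction times below are assumed round numbers, not measurements):

    # How much distance is eaten purely by reacting, before the brakes even engage?
    # Reaction times are assumed round numbers, not measurements.
    SPEED_MPH = 70
    speed_mps = SPEED_MPH * 0.44704    # ~31.3 m/s

    for label, reaction_s in [("attentive human (~1.5 s)", 1.5),
                              ("computer (~0.1 s)", 0.1)]:
        reaction_distance = speed_mps * reaction_s
        print(f"{label:25s} travels {reaction_distance:5.1f} m before braking starts")

At 70 mph that is roughly 47 m versus 3 m of pure reaction distance, which is the slack a computer could, in principle, give back, and also why mixing computer-scale gaps with a human backup driver is awkward.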


I had the same issues in a Tesla. I adjusted the following distance down to dissuade people from jumping in front of me, but that felt even more unsafe. Seemed like there was no good solution.


The solution is to get people to stop being assholes on the road. Fine tailgaters. I care about the safety issues caused by them a lot more than I do about people going 6 over the limit.

Following any closer than 2 seconds of traveling distance on a clear, dry day, 3 seconds at night or in the rain, and 4 seconds on a rainy night should be considered reckless driving.
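For a sense of what those time gaps mean in distance, here is a trivial conversion (the 2/3/4-second thresholds are the ones proposed above; the speeds are arbitrary examples):

    # Convert the proposed time-gap rule into following distance in meters.
    # The 2/3/4-second thresholds come from the comment above; speeds are examples.
    MPH_TO_MPS = 0.44704
    for speed_mph in (30, 55, 70):
        mps = speed_mph * MPH_TO_MPS
        gaps = ", ".join(f"{s} s = {s * mps:.0f} m" for s in (2, 3, 4))
        print(f"{speed_mph} mph: {gaps}")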


I feel like driving cars turns most people into idiots, myself included at times.


I don't have the citation, but I remember this quote:

"The easiest way to make people stupid is to limit their degrees of freedom"

Cars really only have 2 degrees of freedom: linear acceleration and radial acceleration. In traffic, these DoFs go down by a lot. Your ability to accelerate or decelerate is very limited by the cars behind you and in front of you. Your turning also goes down to nearly a binary choice of one lane left/right.

Get people 4-wheeling and the stupid is very much still there, but just to get back to a main road, you have to use your noggin a fair bit more. Folks tend to think things through and weigh choices more.


Are the horn and middle finger(s) not also degrees of freedom?


Actually, you make a good point.

The social dynamics are themselves a dimension. Social network graphs can behave as fractals. In a gymnasium filled with people, there is a significant 'fractallyness' to the social network. In traffic, the fractallyness is limited to just the people on the road near you. The social graph, though more fluid in a traffic jam than at a pep rally, is still fairly small.

3Blue1Brown has a great video on the dimensionality of fractals [0] that is useful here. I'm just going to spitball, but I'd bet that the dimensionality of the social graph of a traffic jam is really close to 1.

So in the end, the DoFs of a traffic jam are probably very close to 3, but just a bit over. Say, 3.14-ish. ;)

[0] https://www.youtube.com/watch?v=gB9n2gHsHN4


The easiest way to solve this problem is to have serious standards from driving license tests. Have would be drivers perform reliably in a variety of conditions and in longer trips and mercilessly fail people who do not meet the high bar.


Rather than apply the brakes, barring the person jumping in front of you and braking, it could just stop applying the accelerator. Nice easy slowdown to create the gap without brake-jobbing somebody behind you.


Haven't driven a Tesla but I'm quite sure the safe braking distance is a lot shorter than it _feels_ like for a human and this is actually one of the tasks I'd rather leave to a computer than a human.

I'd assume the Tesla to have a lot better (quicker) reaction skills than you and you should probably just trust it (in this particular scenario) and relax, even though the distance might _feel_ too short.


I've been driving a Tesla for almost a year now, and when I'd had mine for 2.5 weeks it crashed into the car in front of me on AutoPilot in slow-moving, stop-and-go traffic. After 15 stops it moved again, but this time it didn't stop. I just sat there expecting it would brake. It didn't. There was no major damage, just about $1500 due to a small dent in the hood. AutoPilot is beta. Don't trust it to be quicker than you. It will fail in ways humans do not ever expect. Always pay attention and take control if it feels off.


This is terrible, because that's the exact reason I would want to buy a Tesla: just for the AutoPilot to drive the car in bumper-to-bumper freeway traffic. If it does not work reliably in that situation, then the AutoPilot is essentially worthless to me.

On my regular commute, the freeway interchanges that I must take are too frequent for the AutoPilot to be worth the trouble of switching on, especially when the freeway is empty, since I would have to take control of the car every few minutes.

The only scenario where I think AutoPilot is worth the effort to switch on, is if I must drive long distances on the same freeway, where you don't have to switch freeways. Like the drive from LA to SF. Or LA to Vegas. Or even just staying on the same freeway for 20 to 30 minutes.

I hope they can fix this deficiency. This AutoPilot technology is definitely beta, if they can't perfect simple stop-and-go freeway traffic.


I stand corrected.

It's weird, because this really feels like one of the areas where technology could really excel.


This is true, the car can panic-stop very quickly. The problem is I want to leave enough space to dodge potholes that the car in front of me just drove over, since Autopilot can't really do anything about that (nor do I expect it to).


I used a Honda Accord with a similar system and even had a similar experience to scenario 1 on the interstate (though I was set at 69). After I realized that the following distance control was the reason the car was going slower, I was initially fine with it, but then someone cut me off, which forced me to drive slower and resulted in me turning off the system.

I have a feeling these systems won't really be great until all cars are using them and considering I don't see that happening, I don't see them gaining a ton of traction, unfortunately.


Don't cruise in the passing lane, that's bad form.


I wasn't? They passed on my left side?


My mistake. Yeah, people easing over from the left lane going slower than you or just in front of you is annoying. The only real way to combat this is to make them commit to going faster than you by speeding up, or going slower than you by keeping them boxed in. All in all, it sounds like possibly not great cruise conditions, which is pretty much all the time in most metro areas now.


We have a Honda with a similar system. It can be annoying in heavy traffic, but wow is it amazing on long drives between cities. There's no more fighting the cruise control when you and the vehicle ahead are a little out of sync speed-wise - either due to 75 mph being a little different in each car or due to performance differences on hills.

On the whole, I find a 3+ hour drive a lot less stressful with the adaptive cruise control, and I arrive feeling much less fatigued.


The solution would be to detect attempts at cutting you off and automatically close the gap momentarily to discourage this behavior.

Ideally your autonomous car would also take part in the community of tailgaters and communicate forward that the current speed is unacceptable if this is the consensus of several tailgating vehicles, and gently reduce distance in a controlled way until the car in front gives in and speeds up or yields.


What you're describing is a car that eschews defensive driving in favor of offensive driving. Speeding up to keep people from cutting in front of you only works until the other driver doesn't see (or doesn't care) and now you're in a crash. Or traffic suddenly slows as you're speeding up to close the gap, and now you can't respond quickly enough and now you're in a crash. Letting someone cut you off and slowing to return to a safe following distance is by far the safer option.

There's a reason it's called a safe following distance. Anything closer and you cannot stop in time. Even temporarily. Especially when autonomous cars in front of you may be suddenly braking as well.


Safer from the minimizing personal liability perspective. If cutting people off is a valid tactic because they bend over and take it then more people will do it more of the time and that makes everyone less safe.

We don't negotiate with terrorists for a reason.


This is basically the opposite of the ideal solution.

The solution is to remove dangerous drivers from having control of 2-ton deathtraps, not contributing to the same behavior that is causing the problem.

Opportunistic, unsafe behavior is exactly the reason we need autonomous vehicles, rather than human-controlled vehicles.


Just give the user an up/down button and let them dial in the following distance themselves.

Or detect their preferred following distance for any given speed when the system isn't running and copy that.
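A minimal sketch of the "detect their preferred following distance and copy that" idea, assuming the car can log speed and gap while the human drives manually; all names, bucket sizes and numbers are illustrative:

    # Sketch: learn the driver's preferred following gap per speed bucket while they
    # drive manually, then replay the typical gap when the assist system is engaged.
    # Names, bucket sizes, and numbers are illustrative.
    from collections import defaultdict
    from statistics import median

    class FollowingDistanceModel:
        def __init__(self, bucket_mph=10):
            self.bucket_mph = bucket_mph
            self.samples = defaultdict(list)    # speed bucket -> observed gaps (m)

        def _bucket(self, speed_mph):
            return int(speed_mph // self.bucket_mph)

        def observe(self, speed_mph, gap_m):
            """Record a sample while the human is driving manually."""
            self.samples[self._bucket(speed_mph)].append(gap_m)

        def preferred_gap(self, speed_mph, default_gap_s=2.0):
            """Gap to hold when the system is active; falls back to a 2-second rule."""
            bucket = self.samples.get(self._bucket(speed_mph))
            if bucket:
                return median(bucket)
            return speed_mph * 0.44704 * default_gap_s

    model = FollowingDistanceModel()
    for gap in (28, 30, 33, 26):           # gaps (m) observed while cruising near 65 mph
        model.observe(65, gap)
    print(model.preferred_gap(67))         # 29.0 m, learned from the driver
    print(round(model.preferred_gap(30)))  # 27 m, fallback 2-second rule (no data yet)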


Then you'll have AI engaging in unsafe driving habits because of their drivers and people will blame AI for every single accident rather than take responsibility.

Good way to get self-driving cars banned before they take off.


If the engineers allow the system to follow closer than is safe they should be personally liable for any accidents.


I wonder if we can't take all these advanced sensors for self-driving and use them to help LEO fairly detect unsafe driving in order to increase mean driver safety.

Ironically, the parent was probably technically going over the speed limit, though everyone goes over the speed limit there. Whoever builds automated traffic violation software would need to be very even-handed in order to not upset the status quo.


There is an Israeli startup that does this (using machine vision and freely issued iPhones to rideshare drivers; the name escapes me at the moment, I apologize) to provide data to insurance companies for better pricing of auto insurance rates.

I'm not sure I want to live in a future where my Tesla is a mobile sensor platform for someone else's benefit besides myself and Tesla, although I can see the allure of negating the need for LEO patrols on road infrastructure once a critical mass of sensor platform vehicles is reached in an area. This would require a herculean amount of governance and transparency to be okay with me, and I would expect more sanity in road legislation (higher reasonable top speeds, for example) as well.



Yes, thanks.


I'm in Hawaii right now and the speed limit on the interstate here is 45-55! It's a bizarro world compared to Bay Area interstate driving.


The speed limit is irrelevant. What's the community accepted reasonable speed? I would wager that it's not much different than the same highway conditions elsewhere.


People drive the speed limit here. It's a much different driving culture than the mainland, and it helps that speeding is aggressively ticketed.


There's also the fact that there are only 4 Interstate highways in Hawaii totaling 54.9 miles, while in California, there are 25 Interstates totaling 2,455.7 miles. So, we've got more space to get up to speed ;) and, of course, longer distances to travel...


Why does it let you speed?


Going 75-80 on I-80 isn't speeding. That's the pace of traffic even in the slow lanes. People do slow to like 70-75 if there is a highway patrol car nearby, but since speed limits aren't really enforced, 80 is the de facto limit. And since cars are no longer the greatest cause of death in the US, they gotta let it go to cull the herd a bit and get that death rate up. (/s)


It seems to me that self-driving cars can't be just as safe as the status quo, but have to be far far better. It's an unreasonable need, but human nature.

I'll bet that to really make it work well, you also need to redesign roads to suit the new cars.

In any case, I'll believe it when I see widespread/universal adoption of self-driving in constrained environments (mining vehicles, warehouses, storage yards). Step two will be things like garbage trucks (slow, expensive, otherwise automated, phone home when it gets in trouble).


Humans have a tolerance for human-style accidents, but not robotic-style accidents. A dipshit that pulls in front of you to t-bone is a human-style accident that is acceptable. A tesla that sees a river and thinks it's a road is a robotic-style accident. It's going to take centuries for us to become acclimated to how machines mess things up.


Is it unreasonable?

One can learn to drive defensively, and be able to spot the said dipshit in advance, decreasing the likelihood of collision.

Yes, not in all cases, but there's a bell curve. As humans, we can read other humans.

Robots aren't inherently bad either. I love my Roomba precisely because if it fails, it fails predictably. I can and do make my apartment Roomba-friendly.

But the self-driving, deep-learning-driven cars don't have this feature. They fail unpredictably, as in your example. We don't have an understanding of what really drives the decision-making there. And we keep discovering failure modes as they manifest.

One day, the car might think the river is a road. Another, it will not recognize a road block as such. But only if it's a Tuesday. Or something. We really don't know.

So I don't think we'll get acclimated to that. Unpredictable failure is a reasonable thing to be afraid of.


I'd argue it is unreasonable given that (afaik) no one is marketing a self-driving car that doesn't also require a driver be attentive at all times.

You can learn to drive a self-driving car defensively, in the same way you learn to drive a normal car defensively. Pay attention to your surroundings, blind spots, and other cars.

Maybe self-driving is a poor term to use.


>You can learn to drive a self-driving car defensively

My point was that you can't learn to drive a normal car defensively if there are self-driving cars on the road (in their current form).

When a self-driving car decides to do something stupid, you won't see it coming because the decision tree is literally beyond human comprehension.

See: http://www.bbc.com/future/story/20181204-why-we-should-worry...

And Uber's and Tesla's accidents have shown us that you can't rely on the human in the car to catch it before it does something stupid. After all, teaching someone how to drive and watching them attentively is difficult work.


> (afaik) no one is marketing a self-driving car that doesn't also require a driver be attentive at all times.

Tesla's marketing videos for Autopilot have a caption along the lines of "driver is only present for legal reasons".


Wow, that's downright irresponsible.


>You can learn to drive a self-driving car defensively,

Then it is not a self-driving car; maybe we should name them "human-supervised self-running cars".


Exactly, that was my point.

There are no self-driving cars on the market, or in development by major car manufacturers.

They all explicitly require human attention.


It's especially bad because us humans supervising the cars are going to be worried about preventing human-style accidents. Maybe you're worried about it following too close or not noticing the red light - so you don't notice the warning signs in time when it veers into the concrete barrier.


Humans fail in unpredictable ways all the time. Look at the cases of people using vehicles as a weapon of terrorism. How are we supposed to predict that?

Yeah, we can go back and investigate the person's life and say "see, they had a history of browsing extremist websites, making bigoted comments, suicidal thoughts, etc." The problem is that this description fits a very large group of people, compared to the subgroup which actually carry out the attacks.

So while it is true that we may have difficulty (or it may even be impossible) auditing a self driving car after the fact, it should not be any more comforting to know that human failure can be explained after the fact, given that the human failure was not predicted beforehand.


The US has 5 million car crashes a year. Vehicular terrorism is not even visible on the chart.


You can get 99% of the way there with extremely detailed maps. Where the road is, where the lanes are, what the speed limit is, where to find stoplights and rail signals, where to be extra-cautious, how to navigate through a construction zone, all can be pre-programmed.

A Waymo is basically just following a virtual track on an extremely detailed LIDAR map of the area, obsessively watching for pedestrians and following the rules of the road the best it can. It will never think a river is a road.

Not to say that there aren't a million little things to be concerned about with this approach, or that there aren't major things to overcome (like heavy snow on the ground making your LIDAR map useless). But I think we'll get there, and in many places soon. There's nothing that says this tech has to exist everywhere out the gate.
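A toy illustration of the "extremely detailed map" approach described above: the vehicle localizes itself on a pre-surveyed map and looks up facts (lane geometry, speed limits, stoplight positions, caution zones) instead of inferring them from scratch. All the structures, field names, and values here are invented to show the shape of the idea, not anyone's actual format:

    # Toy illustration of driving against a pre-surveyed HD map: the hard perception
    # problem shrinks to "where am I on the map, and is anything unexpected nearby?"
    # Structures, field names, and values are invented for illustration.
    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class MapSegment:
        segment_id: str
        centerline: List[Tuple[float, float]]        # waypoints to follow, in meters
        speed_limit_mph: float
        stoplight_at: Optional[Tuple[float, float]]  # pre-surveyed signal position, if any
        caution_zone: bool                           # e.g., pre-marked construction zone

    HD_MAP = {
        "seg-001": MapSegment("seg-001", [(0, 0), (50, 0), (100, 0)], 35.0, None, False),
        "seg-002": MapSegment("seg-002", [(100, 0), (150, 5)], 25.0, (120.0, 2.0), True),
    }

    def plan_for_segment(segment_id, pedestrians_detected):
        seg = HD_MAP[segment_id]
        target_speed = seg.speed_limit_mph
        if seg.caution_zone or pedestrians_detected:
            target_speed = min(target_speed, 15.0)   # slow when the map or sensors say so
        return {"follow": seg.centerline,
                "target_speed_mph": target_speed,
                "watch_stoplight_at": seg.stoplight_at}

    print(plan_for_segment("seg-002", pedestrians_detected=False))

The flip side, as the parent notes, is that the system is only as good as the survey; snow on the ground or a freshly moved construction barrier breaks the assumption that the map is the ground truth.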


Tesla's approach thinks white trailers are open sky. The river comparison is not unreasonable at all.

Almost all of the startups in the space are following the Tesla approach of just throwing machine learning at the problem, which means almost all work in the space suffers from novel input producing unpredictable results. This is one of the things that has killed trained and evolved systems in the past, and the fact that few of the companies in this space are even trying to manage it (either by building interpretable models, or by using models and letting ML do the parameter fitting) is a good indicator that the whole business is either a fraud or is built on the premise that, with enough data (or, for the incautious and unaware, enough simulation), the problem solves itself.
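As a rough illustration of the "use a model and let ML do the parameter fitting" alternative mentioned above, here is a sketch where the structure of the model stays fixed and human-readable, and data only tunes two named parameters. The logged numbers and the crude grid search are invented for illustration; this is not any company's actual pipeline:

    # Sketch of "build a model, use ML only for parameter fitting": the braking
    # model's structure stays fixed and interpretable, and data tunes just two
    # named parameters (reaction time, deceleration). Illustrative values only.
    import numpy as np

    def braking_model(speed_mps, reaction_time_s, decel_mps2):
        # Physics-style model: distance covered while reacting, plus distance to brake.
        return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * decel_mps2)

    # Hypothetical logged stops: (speed in m/s, observed stopping distance in m).
    speeds = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
    observed = np.array([15.7, 29.3, 46.8, 68.1, 93.2])

    # Crude but transparent: grid-search the two parameters for least squared error.
    best = None
    for rt in np.arange(0.1, 1.5, 0.05):
        for decel in np.arange(3.0, 10.0, 0.25):
            err = np.sum((braking_model(speeds, rt, decel) - observed) ** 2)
            if best is None or err < best[0]:
                best = (err, rt, decel)

    err, rt, decel = best
    print(f"Fitted reaction time ~ {rt:.2f} s, deceleration ~ {decel:.2f} m/s^2 (SSE {err:.2f})")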

I think what we are actually seeing is that Waymo (where, in ten years, they might have a solution) and Tesla (which is mostly worst-in-class but as a company is the master of hype) drove hype around self-driving. Then Uber and Lyft latched on (because they needed a story to paper over their terrible economics) and pushed it even higher. The Otto thing and Cruise's acquisition made VCs pay attention.

So these companies are acquisition bait for the assumed-clueless big auto companies. They will not deliver self-driving cars. At best they will deliver next-generation enhancements to emergency braking, etc.


> Tesla's approach thinks white trailers are open sky. The river comparison is not unreasonable at all.

A Tesla is not a self-driving car. Autopilot is an assistive technology.


They can change the construction zones on any given day - adjust the lane changes, move the barriers a foot over, or add a new fence where there wasn't one yesterday.


You would think a cooperative system between human and robot would therefore be the best option. The human prevents the robot from driving into a river, and the robot prevents the human from t-boning someone else. Except most people on HN seem to be completely against this type of cooperative system and would only support self-driving tech when it gets to level 4 or even level 5 autonomy.


I'm not sure how such a system could possibly work; human attention doesn't work that way. If the robot drives for 2 hours and 35 minutes just fine and one minute later decides to drive off the road, how is a human going to react to that? It's not really possible.

If a human has to pay attention all the time, then they might as well just be driving.

And one of the advantages of self driving cars will be freedom for the elderly and disabled who already have difficulty driving cars. This is the group, in my opinion, that will benefit the most from self driving cars.


>I'm not sure how such a system could possibly work; human attention doesn't work that way. If the robot drives for 2 hours and 35 minutes just fine and one minute later decides to drive off the road, how is a human going to react to that? It's not really possible.

Right now the self driving tech doesn't do anything to communicate with the driver besides just a general warning. However, that doesn't have to be the case. Maybe some type of confidence indicator should be added. A simple green/yellow/red warning system could help a driver know when a self-driving car might not be as certain about its surroundings.
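A trivial sketch of that green/yellow/red idea, assuming the driving stack could expose a scalar confidence score at all (a big assumption, as the reply below points out); the thresholds are arbitrary illustrative values:

    # Trivial sketch: map an assumed 0..1 perception-confidence score to a dashboard
    # color. Whether such a score is even meaningful is the open question.
    def indicator(confidence):
        if confidence >= 0.9:
            return "GREEN"    # system comfortable, normal supervision
        if confidence >= 0.6:
            return "YELLOW"   # degraded confidence, hands near the wheel
        return "RED"          # low confidence, prepare to take over

    for score in (0.97, 0.72, 0.41):
        print(score, "->", indicator(score))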

>If a human has to pay attention all the time, then they might as well just be driving.

I just don't get this mindset. There are different levels of "paying attention" that require different levels of mental energy. This argument also isn't made about any other driver-assist technology. No one suggests that automatic transmissions or cruise control are worthless because the driver still has to pay attention.

>And one of the advantages of self driving cars will be freedom for the elderly and disabled who already have difficulty driving cars. This is the group, in my opinion, that will benefit the most from self driving cars.

I agree with that, but it doesn't mean that other drivers won't benefit as well.


> Right now the self driving tech doesn't do anything to communicate with the driver besides just a general warning.

I think this probably fails to understand how quickly it's making decisions, how quickly the situation could change, or how it could even judge its own confidence. It could quite confidently drive you into a river; it's not likely to do that unless it's pretty sure it's a road.

As self-driving technology gets better, the robot is going to be much better and the failure situations much more rare.

> I just don't get this mindset. There are different levels of "paying attention" that require different levels of mental energy.

If one doesn't have to provide direction or speed, there is literally nothing to keep a human attentive enough to make a difference. You expect someone to passively look at a green/red/yellow indicator for hours at a time while not otherwise engaged with driving? And then when that indicator turns red you expect them to immediately be able to make a useful judgement? I think humans are more than likely to make that situation worse!

> This argument also isn't made about any other driver-assist technology. No one suggests that automatic transmissions or cruise control are worthless because the driver still has to pay attention.

There is a huge difference. If you have cruise control or an automatic transmission, the vehicle still doesn't go where it's supposed to without you paying attention. The driver still has to pay attention, because if they don't, they don't go anywhere. With self-driving cars, the driver is now a passenger and doesn't have to pay attention for the vehicle to work.


I haven't seen any evidence of a self-driving car accident in which the car had no potential to warn the driver. All the examples I have seen are ones where the car misidentifies an object in its path as a non-threat. The problem is that braking and/or swerving are very drastic actions, so there is a deservedly high barrier before a system is willing to take them. At worst you could have two systems running in parallel, with one tuned to be much more aggressive in making those decisions, and that information communicated to the driver.

No self driving car that doesn't have at least level 4 autonomy allows the driver to not pay attention like you are suggesting. The benefit of a cooperative tech is that it takes mental load off the driver. Like I said previously, there are different levels of mental energy that a person can spend on a task. It isn't like the choices are either 100% or 0% of your attention.


Such systems work fine. The human does have to be constantly driving, but the computer intervenes when it detects the human doing something dangerous. Look at automated GCAS in aircraft.


That's not self-driving though. There are plenty of driver-assist technologies (automatic braking, for example) but that's not the subject at hand.


Full self driving in the general case will probably not exist in our lifetimes. It's like chasing an imaginary pot of gold at the end of a rainbow. So the industry should aim for goals that are actually technically feasible and will deliver tangible safety improvements.


I disagree. The progress of camera and laser technology, in addition to AI, is improving at an incredible rate. The market for self-driving vehicles is huge -- any company that figures it out will make billions (maybe trillions).

Computers can have a perfect 360 degree view and track and categorize every object around them. Humans will be both physically and intellectually outmatched eventually. And humans just aren't that good at driving on average.


You're welcome to disagree, but so far there's no hard evidence to support your viewpoint. Just a lot of hype.

Past rates of progress are not indicative of the future. The easy problems have already been solved. Now progress is already slowing down.

In particular the notion that computer vision can reliably track and categorize every object is just laughable. The state of computer vision research is nowhere close to that capability. Errors are frequent, especially under adverse conditions.


In November 2017, Waymo announced that it had begun testing driverless cars without a safety driver in the driver position. In October 2018, Waymo announced that its test vehicles had traveled in automated mode for over 10,000,000 miles (16,000,000 km), increasing by about 1,000,000 miles (1,600,000 kilometres) per month.

In Arizona, Waymo has fully autonomous taxi service you can use right now. https://waymo.com/apply/

Is that a lot of hype? Maybe.


> Humans have a tolerance for human-style accidents, but not robotic-style accidents.

I think it's more because there's someone to yell at - someone to blame.

Think of it like a taxi driver with a passenger; people ride in these all the time, or Uber, or Lyft - and have no problems doing so. Even though they aren't in control, and they have no idea about the driver's driving capability. They assume the driver is an average to better driver, when that may not be the case at all. So what happens when an accident occurs?

Well - if the passenger is still alive, and believes it to be their driver's fault, they can yell at their driver. If they believe it's the other driver's fault, they can yell at them...and so forth.

But you put a robot in place of any driver - then who can you yell at when an accident occurs? Nobody. You can yell all you want, you can even blame the corporation that made the car/robot - but you are still yelling at a machine. It won't change anything. It won't feel bad. It probably won't even make you feel better, because it won't restore your agency in the situation.

That's my guess anyhow - pulled straight from my nether regions, with nothing to back it up. I wonder if there have been any studies on this? Are there any data on what happens in other "self-driving" incidents; like perhaps completely automated trains with no humans other than passengers on-board? Do such things even exist?

Or do companies already know this - and put a human "in charge" to take the blame and heat for when an accident happens (maybe they also provide an e-stop button for the human to press - whether it's actually connected to anything or not - but it's logged as to whether it was pressed, just to help "assign blame" later)...?

While I have no proof or anything to back this assertion or theory up, it seems like it would match human nature. When things go wrong, humans want to blame somebody - some actual, sentient being. I'm sure there's a psychological term for this need and the action; I don't know what that term is, but I'm almost certain it's been studied.

Essentially one of two things would have to be done to overcome this: Either the machine would have to become perfect at driving and avoiding accidents (even to the point of avoiding things of natural random consequence, such as rocks falling on the road, or something of that nature) - or it will have to become more sentient, and ideally emotive; that is a GAI with human traits.

The former is likely impossible - randomness will always get ya; even trains have what should be "avoidable" accidents.

The latter - well, if it's achievable; that is, if we can make such a GAI, etc. - it will likely also suffer the same problems humans do, most notably that we are fallible and we make mistakes.

...but at least then we can blame them on something, right?


We all accept risk in our lives; in most cases the qualitative characteristics of the risk are the more important ones. I don't really have a good enough answer to your point, but I find it morally unsettling that many people would agree with this.

One example I can think of is Russian Roulette. It would be quite a different game if each player had to shoot the neighbour instead of himself. It would be literally the same expected outcome, but an entirely different game.

Each human I encounter every day has a small chance of murdering me (let's say one in a million); would I like a robot wandering around with a one in a billion chance of murdering others?

We accept that we cannot endlessly manipulate and control other humans, that we cannot forcibly "fix" arseholes, we have no such limitations for robots.

We already have an example of strong automation-driven transports: aeroplanes. Honestly I see no reason to have lower standards for self-driving cars.


>When things go wrong, humans want to blame somebody - some actual, sentient being.

Exactly! When (not if) a fully autonomous self-driving car causes a terrible accident resulting in death, how are the loved ones going to feel when they've only got a company to blame? In such an emotional situation, the response needs to be better than Uber or whoever saying "we're very sorry our car did that, but hey, we're trying our best, and these things are going to happen". Are people really going to accept that? That's simply not going to fly.


>It seems to me that self-driving cars can't be just as safe as the status quo, but have to be far far better. It's an unreasonable need, but human nature.

What would be the point of self-driving cars which were no safer than the status quo? That's just removing freedom from drivers and adding more complexity to infrastructure for no added benefit.

It's also exactly the narrative that proponents of self-driving cars have been driving (pun intended): that self-driving cars would eliminate all, or at least nearly all, accidents and fatalities.


What they're talking about, I think, is the usual statement that self-driving cars only need to be slightly better than the status quo (in deaths per mile driven) to make sense, because then anyone moving from driving to using a self-driving car decreases the number of deaths. From a raw statistics POV this makes sense: if a self-driving car is even just a little bit better than the average driver, then on average anyone switching to an autocar will decrease the number of injuries.

What they're saying (and I agree) is that human nature means people won't be willing to get into a car that's 'just a little better than average.' There are a couple of reasons I think that will be true: 1) people generally think they're better at driving than they are; 2) moving from people to autopilot moves the responsibility from generic 'bad drivers' causing accidents to a single system. Going from a diffuse responsibility to a more concentrated liability on the part of the manufacturer will probably mean they'll be blamed much more for the failures of their cars.
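For what it's worth, the raw-statistics side of that argument is easy to make concrete. The human fatality rate below is the commonly cited US figure of roughly 1.1 deaths per 100 million vehicle miles; the "slightly better" self-driving rate and the miles switched are assumptions for illustration only:

    # The "only slightly better on average" arithmetic, spelled out.
    # Human rate: roughly the commonly cited US figure of ~1.1 deaths per 100M miles.
    # The AV rate and miles-switched figure below are illustrative assumptions.
    human_deaths_per_mile = 1.1 / 100_000_000
    av_deaths_per_mile = 1.0 / 100_000_000     # an assumed "slightly better" system
    miles_switched = 1_000_000_000_000          # 1 trillion miles moved to AVs (assumed)

    deaths_avoided = (human_deaths_per_mile - av_deaths_per_mile) * miles_switched
    print(f"Expected deaths avoided: {deaths_avoided:,.0f}")   # ~1,000

A real reduction in aggregate, but invisible to any individual rider, which is exactly the acceptance problem described above.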


I think insurance companies can affect this behavior. If self driving cars are safer, then the insurance companies could lower rates for those cars, creating some economic incentive to override the irrational human brain.


The main attraction is that automation will make car ownership no longer sensible. Even right now, taking a cab every single time is economically at break-even or better for many urban dwellers. If you take out the driver, then cabs can be much cheaper.

Second, what you call "freedom", is called a "chore" by a good chunk of the population. I can't wait to not touch another steering wheel ever.


To each his own, but I don't want to have to get permission from Google, Tesla or anyone else before driving somewhere, then be escorted under surveillance like a prisoner in the back of a cop car with every possible metric being mined and correlated for the benefit of corporations, insurance companies and the state, leaving my safety to the whim of algorithms and systems likely built to the cheapest standard possible.

I'll gladly deal with the "chore" of being able to turn my own key and drive my own car as an alternative.


Yeah, as someone from a rural background, it always strikes me as far-fetched that people are going to give up ownership of vehicles because of driverless tech. People routinely spend 10x what they need to on a car, living well outside their means for essentially status (buying a new BMW instead of a used Honda, for example). Moreover, there is more to owning your own car than privacy. It's really damned convenient to have your car loaded up with your stuff: your snow gear, your bike gear, your tools, your emergency electronic tools and first aid. I like to keep a folding chair in my car; it's surprisingly convenient. I don't have a ton of experience living in cold climates, but I did a spur-of-the-moment cross-country road trip in February a few years ago. We left from the Bay Area headed to Indiana, and when we stopped outside of Chicago for gas, I realized how naive our plans were. I had to get back in the car while the gas was pumping, it was so cold. If we had had car problems or an accident, dying of exposure could have been a real thing. We didn't have cold-weather gear with us, I was wearing jeans and a hoodie, and I really felt like an idiot for it. Such fears can be easily solved by having blankets and food and chemical heat sources in the car, which I'm pretty sure is normal for people in those climates. You can't really have gear with you like that if the driverless car goes and gets another fare once you reach your destination.


I wonder at what point they'll just solve the safety issue by removing the windshield and adding shitloads of impact absorption and a few steel plates, which isn't possible on vehicles where the driver needs to be able to see outside.


The limiting factor right now is mostly that nobody wants to wear 4/5/6pt harnesses or have a properly fitting bucket seat (these things are a big pain in the butt for a daily driver) so we have to keep shoving explosively deployed cushions into places so humans don't bounce off of harder things.

The fact that we can't replace the windshield with something else will likely never be a practical limitation for the foreseeable future.


> What would be the point of self-driving cars which were no safer than the status quo? That's just removing freedom from drivers and adding more complexity to infrastructure for no added benefit.

What?? Safety is a great side dish, not the main course. The point is to free up hundreds of millions of man-hours spent daily focusing on roads.


Then simply a better public transit system would be more efficient and effective.


No added benefits? People wouldn't have to own cars, and commuting drivers would regain hours of their lives.


Why are you assuming no one would own self-driving cars? It's likely they'll simply be sold the way any other kind of car is nowadays, and be far more expensive than typical cars.

Also, commuters will still have the same commutes whether they or their car is doing the driving. The deployment of self-driving cars is not going to magically transform a multi-hour commute into ten minutes.


Why would I want to own a "far more expensive" self driving car? I would love to get rid of my car and rely on fully autonomous Uber/Lyft. Even factoring in car rental for travel or moving or whatever I think it would be far cheaper that way. My car is a never ending money pit with insurance, gas, maintenance, registration, tires, it never stops.


You're going to pay for all those things even if it's indirectly through someone else who owns the vehicle. Most vehicle costs are related to mileage, not time--especially outside of areas where rust is a major factor.

I'm sure there will be a difference at the margins in cities where many people don't need a car on a daily basis. I assume that's already the case with Uber/Lyft today. And self-driving when it eventually arrives in those areas presumably will decrease costs some. But for someone who lives outside of dense city centers it seems likely they'll continue to want to own a car that is a model they like, equipped to their specifications, and storing all the various things they keep in a car.


>Why would I want to own a "far more expensive" self driving car?

It doesn't matter what you want, it matters what car companies do. They're not going to spend millions of dollars on R&D and marketing hype for self-driving cars only to make far less money on them than they do now on conventional cars.

>My car is a never ending money pit with insurance, gas, maintenance, registration, tires, it never stops.

Yes. That's the entire point, cars are micro-economies supporting multiple businesses and profit centers. And self driving cars will be the same, for the same reasons.


I'm not sure exactly what you're saying. Are you talking about some kind of monopoly or trust that prevents Uber/Lyft et al from getting access to self driving cars? That seems unlikely to me. As far as the various industries centered around car ownership I don't see why they'd go anywhere. Self driving cars will still need maintenance, oil, tires, etc. The only thing changing here is who owns the car. I don't want that to be me. As soon as it is economical for me to pay for transportation as a service and get rid of my money pit that is parked outside my apartment 24/7 I will do so.


I think you're reversing supply and demand...


This completely overlooks that in over 90% of cases a bus is all you would need.


You're right I didn't mention it, but that fact is actually crucial to my point. I'm lucky enough to live in a place where walking, taking the bus, riding a bike (the one I own or a bike share), or taking a Lyft get me almost everything I want at a very reasonable cost. And as new transportation services emerge I need my car less and less. So I'm hoping that in the near future there is some service that can take over the few things I do need a car for. Something like Car2Go or ZipCar is actually almost there.


It's in all likelihood already safer than the status quo by an order of magnitude. Not that they are that good, but the status quo is really that bad.

But it's the devil we know; apparently it's fine that we average 3,287 fatalities a day.[0]

We also have more empty homes than homeless people, more food than starving people, crippling homelessness, and 1 in 4 children malnourished in the US alone. I expect nothing from humanity, yet I am still disappointed.

[0]https://www.asirt.org/safe-travel/road-safety-facts/


They're a lot better already safety-wise (and likely in other ways too) than drunk drivers I'd assume.


> self-driving cars can't be just as safe as the status quo, but have to be far far better. It's an unreasonable need, but human nature

Human nature doesn't end up meaning much as soon as insurers start seeing the statistical difference.

As the difference gets more pronounced with the lane assist and auto stopping features, rates on any new cars without these features will go up relatively.


Insurers don't care all that much if you rear-end someone or get in some other "normal" accident. That happens enough that they have figured out how to cover the cost at scale. They don't take that much of a hit anyway because of your increased premium over time. They'd obviously prefer you do it less, but so long as their slice of the population doesn't do it any more than their competitor's slice, it's not a big deal, because they come out even. That's why they offer $10 and $20 discounts for safety and security features. It's not a big deal to the insurer, since they know exactly what the "value" of each of those features is at scale, but of course they want the cheaper-to-insure customers, so they'll toss them a small perk: a fraction of the money that feature statistically saves them.

What insurers are really scared of is when your 16-year-old kid and four of his/her friends drunkenly go off a cliff on their way to prom. That's hard to assess the risk of, and it costs them big bucks. These kinds of oddball, high-dollar edge-case accidents are exactly the kind that self-driving cars seem to be getting into. A Tesla killing someone by driving into a barrier or semi truck sends chills down the insurer's spine, because it's only a little bit of bad luck (from the insurer's point of view) away from a not-so-dead customer that maxes out their collision and medical coverage.

If I were an insurer I would not be offering any automation discounts until we have more data.


> until we have more data.

Well, yes, obviously. The commenter's point is that there will be a tipping point where insurance agencies look at those "crazy prom death" accident stats and see they happen 2x or 10x more often with human drivers. At that point, the cost of insuring a human driver is going to be 2x or 10x the cost of insuring an AI driver.


> It's an unreasonable need, but human nature.

Not at all. Computers are supposed to be orders of magnitudes better than humans at certain tasks. If you're going to put computers in charge of our lives on the road at high speeds, why wouldn't we expect them to also be orders of magnitudes better at driving?

Also, if I'm a lot better than the "average driver" (where I assume such statistics also include people texting and driving or driving drunk or having not slept for 40h, etc), then I certainly wouldn't be satisfied with an automatic system that is only a little better than the average driver.

And I didn't even get into the whole thing about car companies, including (especially?) Tesla, comparing highway-only Autopilot accident rates to accident rates across all road conditions to show how their Autopilot is "better," or the fact that such systems make errors that humans would never make, just like language translation systems can make totally different translation errors compared to humans, even if they "score the same."

To keep things short, yes, I think it's completely reasonable that self-driving systems should be far FAR better than your "average human driver." I don't even know why this is controversial.

Also, I agree with the Ford CEO that all carmakers, even Tesla, overestimated self-driving capability and prioritized it over making an electric vehicle. The ability to make a great EV will matter far more for these companies' survivability in the next 10 years than making a good self-driving car. But it's almost like most of them focused more and invested more in self-driving capability than in switching to EVs. Big BIG mistake!

Even Tesla, the only major pure EV company, mistakenly almost "bet the farm" on self-driving to the detriment of making a high-value EV. How? Well, by putting very expensive "full self-driving hardware" (which, it turns out, is not actually full self-driving) into every Model 3 car out there. Terrible decision. They would've done a lot better if every Model 3 was $5,000-$10,000 cheaper by default without all of that gimmick.


> Computers are supposed to be orders of magnitudes better than humans at certain tasks.

Who claimed that driving is among those "certain tasks"? No, in the near future computers will at best be marginally better than human drivers. Computers are better at decision latency. But that merely compensates for some of their current shortcomings.

And that will be sufficient to save thousands of lives every year. Do you not like preventing deaths?

Delaying "marginally better" for "orders of magnitude better" means delaying the deployment of life-saving measures.


> It seems to me that self-driving cars can't be just as safe as the status quo, but have to be far far better. It's an unreasonable need, but human nature.

They need to be safer than me, not the average driver. I suspect the lower 50% of drivers cause > 50% of the collisions.


> They need to be safer than me, not the average driver

But if they are safer than the average driver, you gain from their widespread adoption even if they aren't safer than you. Because you don't have the choice of replacing other drivers with you...


Defensive driving allows me to counter most bad drivers.

If we're talking about removing dangerous drivers from the road, I'm all for it. It should be next to impossible for most people to get a license, if what we really care about is safety.


> It seems to me that self-driving cars can't be just as safe as the status quo, but have to be far far better.

They probably, for acceptance, have to be consistently as safe in like circumstances; they can't just be as safe on average. Which, if there are circumstances in which they are much safer, also means they end up needing to be much safer on balance, but it's not really the on-balance comparison that drives that.


I am perfectly happy to accept autonomous cars killing even slightly more people than human drivers do, in exchange for the utility that they offer. You can use your time in the car to do other things. That's actual life-hours saved.

And the promise is that they're only going to get more reliable with time.

Conversely, keeping back self-driving cars just because they upset your sensibilities about who specifically should die (i.e. random group A instead of random group B, even though A is larger) means you're effectively advocating that more people should die. That's grossly negligent.


"Survey: Self-Driving Cars Need to Reduce Traffic Deaths by at Least 75% to Stay on the Roads"

https://www.claimsjournal.com/news/international/2018/06/01/...


Interesting... I would actually think garbage trucks would be one of the last vehicles to be automated. They have to do some insane navigating in small spaces. It seems like the absolute hardest thing to automate that I can think of.


Do they? Most I see just cruise down pretty open residential streets grabbing convenient cans. Like most things it'll probably be a mix, most service would be covered by automated trucks with the more troublesome routes still driven by people until the kinks get worked out.


I would imagine actually picking up the cans is the hardest part. They often look different from each other, people place them haphazardly, and you might even have to distinguish trash from recycling.


> If you are going to geofence self-driving cars it's questionable whether they are better than trains and buses

They run on demand 24/7, they run point to point, you never have to stand, you don't have to share the space with people who behave antisocially, you have a range of choice of comfort options, there are unlimited competitors for the same route driving prices down and service standards up.

So, quite a few game changers for me at least.

If you live/work outside the geofence, this obviously doesn't apply, but if you're inside it you get the advantages 80% of the time, and for day-to-day purposes it absolutely is fundamentally better than trains and buses.


And they pollute much more and take more space.


I'm first in line to agree with you, I think cars in cities are awful because of these two problems, and honestly I support all sensible efforts to reduce the number of (current generation) cars on city streets.

I'm also a daily user of public transport in a major world city, however, and whilst buses and trains work they also have the fundamental problems highlighted above that are in my view insurmountable, and which (self driving) cars solve. So I'm in a dilemma over whether I think they'd be an improvement, but I ultimately believe they would be.

But yes, absolutely pollution and space saving would have to be addressed. I'm optimistic they could be:

Pollution: hopefully solvable with electric. Yes, of course, there are efficiency gains that trains/buses would perhaps always have (a fleet of cars has to move many small things rather than one big thing carrying the same number of people), but I can see electric reducing the absolute level of pollution from cars to a point where it basically doesn't matter. Maybe overly optimistic, but I hope that'll happen.

Space: Driverless cars, particularly electric driverless cars, can be much smaller and can drive much closer together. Different cities have different challenges, but I don't see why we couldn't get to a situation where roads might effectively have capacity for 2x vehicles across and 2-3x vehicles along, if the vehicles can be coordinated to drive closer together and are physically smaller (remember that on-demand relaxes certain design constraints: you don't have to provision for peak, so if the average number of passengers is 1.5, then most cars on the road could be ultra-small 2-seaters). Again, lots of conjecture and optimism here, and I am probably assuming that 'manual' cars would eventually disappear or be banned at least from cities too, but nonetheless I think there is hope here.


Public transport will always be able to pack more people into the same space.

There is a minimum amount of space occupied by the vehicle's own matter. The ratio of that material to the occupants is a scaling law: the larger the vehicle, the higher the volume-to-surface ratio, and the denser the passenger-to-vehicle ratio can be.

A standard R160 subway car carries up to about 250 people in an area that would be about the same as 5 Corollas positioned nose to tail, a starting ratio of about 10:1. If cars are made twice as dense, it's still 5:1.

This is ordinary batching. The only thing that can carry more people more densely than a bus or train is the footpath.


> Public transport will always be able to pack more people into the same space.

While that's true, that's also just an argument for something we already understand. Are there arguments to be made about the number of buses and routes, given that they don't take people point to point? What about the lull times of day, where an on-demand model is more efficient than driving empty buses around? Agencies reduce schedules then, obviously, but now you're punishing passengers that start a shift at noon. There are more things to solve than just "the least amount of pollution and space."

Totally unrelated but similar argument. The FAA introduced a new system and process for creating standard flight paths a few years ago. Flights around the nation started doing new things. TONS of people are now inundated by jet noise. They're now discussing just going back in certain areas to the old routes. They ONLY optimized for efficiency of route and didn't consider anything else.


> There are more things to solve than just "the least amount of pollution and space."

The argument I was responding to was that self-driving cars would be sufficiently more efficient users of space that it would obviate the need for public transport. This was demonstrably false.

> But now you're punishing passengers that start a shift at noon.

Sure, but (a) that's a financial question, not a capacity question and (b) rush hour is when both finance and capacity collide.

Public transport is a more efficient option per passenger-kilometre and per square metre of road or cubic metre of tunnel than self-driving cars. It is always going to be. Self-driving cars will have other virtues and will reshape transportation, but ascribing magical powers to them does nobody any good, especially if it leads to defunding of subways and buses.


Yes you're absolutely correct - I guess my main question is could the space savings, like the pollution savings, be 'good enough' for it not to matter in practice, assuming that driverless cars become the 80% usecase for road traffic and today's 'massive 5 seater car for an average of 2 people' model goes away.

It's also important to point out that of course, buses and trains shouldn't go away - they could indeed also have a bunch of problems solved by being self driving too, most notably the 24/7 running thing - they'd just be another option in the mix.


>A standard R160 subway car carries up to about 250 people in an area that would be about the same as 5 Corollas positioned nose to tail, a starting ratio of about 10:1. If cars are made twice as dense, it's still 5:1.

You're not accounting for the space between trains. They will only take up 5-10% of the available rail space because you need gaps between the trains.


You need gaps between cars, too. Two seconds of following distance in good weather conditions, four seconds of following distance in bad weather conditions.

At highway speeds, 4 seconds of following distance is 120 meters.
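A quick back-of-the-envelope, if anyone wants to check the numbers (Python; the ~108 km/h "highway speed" is my own assumption):

  # Following-distance gap and the per-lane throughput it implies.
  def gap_m(speed_kmh, headway_s):
      return speed_kmh / 3.6 * headway_s

  def vehicles_per_hour(headway_s):
      # one vehicle passes a fixed point every headway_s seconds, regardless of speed
      return 3600 / headway_s

  print(gap_m(108, 4))         # 120 m at a 4 s following distance
  print(gap_m(108, 2))         # 60 m at 2 s
  print(vehicles_per_hour(2))  # 1800 vehicles per lane per hour at a 2 s headway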


> You're not accounting for the space between trains.

NYC's least efficient system is block train control, requiring 300 m separation between trains.

In 300 m you can fit about 66 Corollas, again packed bumper to bumper. That's 66 * 5 people = 330 people, or about 1.3 subway cars.

Subway trains are 6 or 8 subway cars long, depending on track and time of day.

With CBTC some platforms can run at 60 trains per hour (30 local, 30 express). That's 8 subway cars every 2 minutes, or approximately 1,000 people per minute.

To move 1,000 people at all with the Corollas you will need 200 of them (at 5 people each), or roughly 0.9 kilometres of cars at 4.55 m apiece, bumper to bumper.
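Spelling the arithmetic out as a quick script (Python; the figures are the ones already quoted above):

  corolla_len_m  = 4.55
  people_per_car = 5
  r160_capacity  = 250    # people per subway car
  cars_per_train = 8
  headway_s      = 120    # one train every 2 minutes

  block_sep_m = 300
  corollas = block_sep_m / corolla_len_m
  print(round(corollas), round(corollas) * people_per_car)  # ~66 cars, ~330 people

  print(r160_capacity * cars_per_train * 60 / headway_s)    # ~1000 people per minute

  # Corollas needed to hold 1,000 people, bumper to bumper
  # (closer to 1.1 km if you assume 4 people per car instead of 5):
  print(1000 / people_per_car * corolla_len_m, "m")         # ~910 m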


>With CBTC some platforms can run at 60 trains per hour (30 local, 30 express). That's 8 subway cars every 2 minutes, or approximately 1,000 people per minute.

Unless your trains are going at 10 kph you will have more of a gap than 300 meters if you see a train every 2 minutes.


Or the trains can move much faster and much closer because of improved control.

You should recognise this argument: it's the same logic given for why self-driving will solve everything.


But you're always limited by loading time. If you take X seconds at each stop, then you end up with X * speed of distance between trains. Even at 30 s of loading and 50 kph you're looking at roughly 400 meters between each train.
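In rough numbers (Python; this is just the multiplication above):

  def dwell_gap_m(dwell_s, speed_kmh):
      # the following train ends up roughly dwell_s * speed behind the one ahead
      return dwell_s * speed_kmh / 3.6

  print(dwell_gap_m(30, 50))  # ~417 m at a 30 s stop and 50 km/h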


Packing more self-driving cars onto roads is an appealing notion, but I don't think it holds up to scrutiny. Even if a computer detects hazards instantly and continuously negotiates with and anticipates the movement of other vehicles, the physics of moving cars remain the same. Safe stopping distances won't be dramatically different. Road surfaces, stray animals, mechanical failures, and any number of other hazards don't care who or what is driving a car.


Say you have 3 lanes of traffic. Each lane is occupied by 100% driverless cars.

Let's say, there is an object completely blocking the middle lane, and partially blocking the two outer lanes. What will the driverless cars decide to do?

My guess is they wait there until the road is completely unblocked; there's no way for the cars to understand the proper course of action. It will be great fun to cause absolute gridlock with an empty cardboard box in downtown NYC once this all kicks off. Naturally, by then, cardboard will be illegal there.


Why would driverless cars be smaller? They still need to be able to carry the same number of passengers; the only difference there is that one of those passengers is no longer also a driver.


With autonomous cars, robotaxis become feasible. If you're buying a car for yourself, you're going to want a certain amount of space for passengers or cargo in case you need to do something like drive your family to the airport. You'll also want long range in case you decide to go on a road trip. Even if 90% of the time you'll be commuting alone 20 minutes to work, you'll want this extra range and capacity for the times you need it.

If you're deploying a fleet of robotaxis, most of your fleet can be vehicles that only carry one or two passengers at a time on short trips. If a family wants a ride to the airport, then they'll specify that they need a larger vehicle for more passengers and luggage, etc. So you can buy a fleet of mostly smaller cars with a shorter range, and complement them with a few minivans for long range trips. Overall your fleet will be quite efficient.


One thing that is often not accounted for is car seats. Do you have to bring your own?


Yep, and given that some big percentage of car seats are not properly installed, do you then have to take the requisite minutes to install and uninstall each seat properly? There are a lot of challenging edge cases.


> the vehicles can be coordinated to drive closer together

How do you make this work? There's no standard in self-driving systems, no standard in sensors, and no standard for vehicle to vehicle communication.


Not necessarily, it's not that simple.

I've been on double-length public buses where I was the only passenger for most of the journey. And many times where there were only 1 or 2 other people. This is necessary e.g. at night where a bus only runs every 30 minutes, there has to be a bus, but there just aren't that many people.

Obviously in this case individual cars would pollute much less, and even take up less space.

Any realistic solution for a city is going to combine self-driving cars for the long tail of origins/destinations and times, with public transit for the busiest routes at the busiest times, including links between the two.

I'd love to be able to take a self-driving car from a house in Queens directly to the subway, the subway to Brooklyn, and then another self-driving car to my final destination that's 20 blocks away from the subway.


On average public transit produces 2% of pollution but carries about 28% of riders. (I don't remember where exactly I got the numbers but I've seen them in a few places.)

The buses need to run at night so that people can rely on them during the day -- it's just the price of public transit; the public uses transit way more if buses run at 10-minute intervals during off-peak rather than 30-minute intervals. (Another study I can't find right now.)

I think as programmers we tend to try and optimize the problem for the least pollution, but the problem here lies in the domains of sociology and urban planning; we might not be the most qualified people to solve it.

For example, if buses pollute 3% because of the extra buses on off-peak routes but more people now use them, this would have a net effect of reducing pollution.


Was going to downvote but this probably merits a reply.

All your points are correct, but you're not responding to the point that SDCs can be superior to buses for those low-capacity times. If it were one or the other, then yes, buses could be less pollution on average, but that doesn't address the case the parent was talking about.


Prior to low-cost ride sharing, a good bus system that people will use as an alternative to cars pollutes more, because it has to run a lot of mostly empty buses to satisfy long-tail demand.

To the extent low-cost ride sharing is a short-term unicorn phenomenon, that will again become true, although self-contained electric buses might change that a bit. Maybe someday self-driving cars will change this, but that's not in the predictable, foreseeable future.


Yeah, this is the really interesting part: where does the 'supply provisioned precisely to match demand' model for self-driving cars meet the 'buses/trains are more efficient assuming some % of capacity is filled' model, and how does that relate to demand changing with the hours of the day, where people live, etc.?

It's just a fascinating optimisation problem. I think the answer is far from clear, and certainly couldn't be reliably approximated without lots of real world data.


Also, if part of your trip is inside the geofence, you can give the AI full autonomy inside the geofence and take over outside. This would be especially useful when you have to travel long distances on the highway.


I totally agree, and they can take care of the last mile problem for trains at both ends, making trains more practical.

And GP said, "If the solution is flawed even a little bit, people will die." Well, yes, and if humans drive, people will die.

If the flawed machines reach the point where they are provably statistically better than the flawed people who currently drive, lawsuits claiming that people who choose to drive and get in accidents are negligent will cause insurance companies to jack up the rates on "self-drivers".

That's where the state-change sigmoid curve will suddenly turn upward.


Yeah, this is something that really bugs me about public transport.

It's simply not acceptable to have to stand on a bus. Ever. It should be treated as some sort of breakdown/failure condition. If it happens on more than a few % of journeys, the public transport authority should be putting on more buses.

Otherwise people are just going to take their car because they actually have control over their own comfort.

With trains it's a little harder due to physical constraints on the network, but with buses it's inexcusable to not just have more capacity, particularly at peak time (London buses get silly in the morning peak, for example).

edit: Replies here are completely missing the point. A car has comfortable seats whenever you want them. It's honestly a comical joke to have to explain "why I don't want to stand up". The obvious answer is because I don't have to in my car.

Public transport can be, and should be as comfortable as that. It can't be as private, sure, but there's no need for it to feel like cattle class.

I'd gladly pay twice as much for it to be better. It easily could be; in London for example if everyone gave up their cars and put the money towards public transport we could double or triple the number of buses.


Why is it not acceptable to stand on a bus? It increases capacity hugely and if you're only going a few stops it makes perfect sense.

Most buses are rated for "X Standing, Y Seated" passengers.


Also, wishing for a blanket increase in total number of busses by some huge percentage to avoid anyone standing probably sounds like a real nice plan until you have to pay for it... Capacity management isn't as easy as throwing drivers and busses at the problem.


People don't mind in say, SF Chinatown. They pack those buses like sardine cans.


Because people will take a taxi or self driving car instead.

You're making a comparison here that's based on having no other choice.

There is obviously a difference between a 5 min journey down the road and a 30+ minute commute.


Will they? When I drive I have to worry about parking and, you know, driving. If I take a taxi or ride share I'm paying on the order of 6x what the bus costs.

I think you're overstating the annoyance of standing on a bus.


> I'd gladly pay twice as much for it to be better.

Some other comments mentioned classism, and anti-social passengers, but a less emotionally charged way of framing it is just in terms of price discrimination.

Busses and subways have no "first class" or "business class" section. The passengers who would pay double or triple are doing so in their car payments, insurance, and gasoline, and riding in relative comfort, and the people left riding the bus are mostly those who aren't willing to pay more.

It's going to be difficult to get people to put their car payments toward better bussing, while still keeping it accessible to those who want to pay less.


> Busses and subways have no "first class" or "business class" section.

It depends. New York City has "express buses" that are basically long-distance buses repurposed for commuting. You pay an additional fare and are more comfortable. Commuter trains are of course more comfortable than subway trains. (This is why I usually fly out of JFK. Taking LIRR to the airport is much nicer than taking the bus to LGA.)

Japan's rail systems have higher-tier options on some routes. JR has "home liners" that are long-distance passenger trains basically repurposed for commuters (actually very similar to the default commuter train in the US, whereas the normal commuter trains are very similar to metros). Tobu has the "TJ Liner" which is similar. The demand does exist.

I think the reality is that public transportation is too popular to reduce capacity by having more seating. Everyone in NYC would pay for a more comfortable subway, but there wouldn't be enough trains to carry them all. The completely-packed-at-rush-hour experience is a byproduct of the fact that cars just aren't faster in NYC, so you might as well pay $2.50 to get home faster but in less comfort. (I've looked into it. There were many times when I was running late and thought, "maybe I'll just take an Uber to work today". But the delayed subway would take 20 minutes and the private car would take 30 because of traffic. It only made a difference in the middle of the night when the subway was running on 30 minute headways and there was no traffic. Even then, still close to 20 minutes to drive. The subway is just that efficient, despite everyone's complaints.)


> Busses and subways have no "first class" or "business class" section

There are. For example: the Shenzhen Metro Line 11 https://en.wikipedia.org/wiki/Line_11_(Shenzhen_Metro)


Yeah, I think this is the main challenge - you can have private buses that are comfortable and provide a net positive over driving yourself. These exist in the bay area (FB, Google, Apple all do this), but then they're not public transport.


Right, which is why public transport should be funded via taxation.

The way to achieve a reasonable service is actually to tax driving at a higher rate (to account for the externalities; pollution, congestion etc) and put the funds into infrastructure like buses/trains.

I often drive because it's more efficient and comfortable than using the bus. It's more efficient and comfortable than using the bus because the externalities are not priced in.

We can address that balance by taxing cars higher and putting on more buses / paying for more cleaners, etc.


Really depends on the distance and the ride's harshness; that's why we have buses and carriages filled with people who didn't take a car anyway.

Definitely not ideal, but we can't optimize just for one thing. Putting more buses on the road on demand has really hard scaling problems, because buses don't appear and disappear like a virtual machine; they need to be stored somewhere and delivered to the location where that bus is needed. Then you have load-unload problems. This is also why cars that are just like the cars of today, but electric and autonomous, won't scale either.

You can't really divorce the design of the inhabited areas and the transport systems. This is also probably the reason why the Americans seem so hostile towards public transport. In Europe and even in Asia public transport works fine and the most annoying thing about cars is not that you have to drive them but you have to put them somewhere when you reach your destination.


> If it happens on more than a few % of journeys, the public transport authority should be putting on more buses.

That's unrealistic, because there are such huge peaks in traffic. You'd need a large number of extra bus drivers only from 7:30 to 8:30 and from 17:00 to 18:00 each day.

Who would want such a job?


"such a job" would not be made available as bus drivers are typically unionized employees and that is one reason part-time bus driver positions do not exist


Whether it is realistic or not is beside the point.

A bus cannot beat or even match a fleet of self driving cars if the suggestion is that a bus has to be a cramped standing area.

Given a service much cheaper than Uber is today, the only people using the buses would be the poor and environmentally focused.


I take the bus and I'm neither poor, nor a particularly good environmentalist. They're convenient and cheap and I sorta prefer it to putting my life in the hands of a sleep deprived wage slave. Uber rides feel _consistently unsafe_ to the point I don't even consider it an option anymore.

IME most of the resistance people have to taking the bus is classism. They don't want to share space with "those people".


Presumably then, in your area, buses are superior.

This is exactly what I am talking about. They are not everywhere, and they should be.

I prefer buses too when they actually work.

I tend to take public transport a lot more off peak. At peak time I vastly prefer cycling or driving depending on the weather because I find being crammed next to other people disgusting (a failure mode unique to public transport).


It’s not about class at all. It’s that people don’t want to share space every day for an hour each way with loud people, smelly people, crazy people, violent people, etc. And if they do, they sure as hell don’t want to do so standing and packed up against them. Not all cities and routes have these problems, but many do.

Nobody objects to having to share space with a broke working class single mother taking the bus to go clean hotels downtown.

Aside from that, the weather can be a pain in the ass (rain, snow, wind, scorching heat), transferring buses/trains is a hassle, you cannot travel on your own scheduling terms, you can’t swing by the grocery store or run some errand on the way home, travel time is generally much longer unless you happen to live/work right beside an express stop that does not require a transfer, and the list goes on.


I grew up on a council estate in a post industrial Northern town.

It's not classism.

I don't want to be rammed into a bus. I like buses that aren't at capacity; it's relaxing.


Or... cities see self-driving cars, draw the correct conclusion that they can put in some smaller, cheaper and more efficient self-driving buses, and blanket-ban cars in places where there are too many of them. You either walk or take the bus.


Ok, but check this out: self-driving buses! It's easier to make a self-driving vehicle that follows a fixed route. Buy enough buses to cover the peak demand, then send most of them back to garage during non-peak times.


Buses are a half-million dollars or more, without fancy self-driving tech. Doubling your fleet (to double peak capacity) is a big investment.

Garage and maintenance costs also increase with the number of buses.


It's solvable, especially since it's the government that is responsible for public transportation. For example, you can arrange another job for the non-needed hours, such as driving a taxi.


I imagine Uber or Lyft drivers would be good candidates.


Would they be willing to get a bus driving license?


> It's simply not acceptable to have to stand on a bus.

You need bus fares to cost less than car travel or people would just drive everywhere, and car ownership/congestion comes with its own set of issues.

It's annoying, but having enough capacity to handle peak hours would also mean a lot more buses parked off hours and a lot more technicians doing maintenance.

Given the seated:standing capacity on current models, you'd end up with two to three times the buses to seat everyone.

Now this wouldn't immediately mean triple the fares, but it'd be a significant factor and would limit/reduce total bus usage, and there's no indication the consequences for society would actually be net positive compared to standing.


> Otherwise people are just going to take their car because they actually have control over their own comfort.

It depends on your definition of comfort.

Ownership of time resources for mental cognitive activity > sitting.

So in this case, being inside a bus with a driver is a better option than consuming my mental cognitive activity on driving. From my POV, of course.


This is a thread about taxis and self driving cars.


> Otherwise people are just going to take their car because they actually have control over their own comfort.

The fact that many people don't indicates there are other factors at play.

I bet there's a diminishing return to this. The more buses you add at rush hour, the more traffic you cause, which slows things down and reduces capacity. In heavy traffic buses tend to clump together as they get stuck behind one another due to the nature of smaller vehicles around them.

Also, what would be an acceptable cost increase to pay for this additional capacity? You'd have to balance the theoretical increase in ridership against the decrease from raising prices. Keep in mind that public transport is usually heavily subsidized; in the US it's on average 50%.


I almost never sit on a bus or a tram even if there are empty seats.


Because of poop and gum on the seats, right?


Nope. In my location trams are new and clean. I just prefer to lean against the windows and people who need it more can use it. I don't travel more than 20mins with public transport, though.


reply to the edit:

Well, when I was working in the City and living in south London I used to take the DLR (which is a self driving train btw) and walk for 10min.

I can confirm that there were drivers on the streets as well as cyclists, pedestrians, bus riders, tube riders and so on.

All of those were at their limits at rush hour. Later the city banned private vehicles from certain locations at rush hour, so you can make such changes to favour one transportation method over the other, but it's far from solving it. It's a very hard problem, very unlikely to be solved without redesigning the city.


The cost floor of driving, even with automation, is above the cost of public transportation in most cases. Because of traffic, the service standard will never approach public transportation in dense areas.


It seems more likely by the year that Tesla is one of the biggest overestimators when it comes to self-driving cars. Musk has repeatedly promised "autonomous vehicles", and even promised to have a fully autonomous vehicle by the end of this year. He is notorious for his rather bold and exciting predictions, but fewer and fewer seem to come to fruition with each passing year. It seems especially unlikely considering Tesla doesn't even test any autonomous vehicles on public roads, unlike many other companies in the space: https://www.dmv.ca.gov/portal/wcm/connect/96c89ec9-aca6-4910...

It's disappointing to hear that Ford overestimated; however, it's reassuring that they publicly recognize the challenges and also put safety above being first out the door.


Disclaimer: I own a TM3 with full Autopilot and purchased Full Self Driving.

I have never believed they will deliver much more than level 3. I did buy the FSD package recently when it was discounted to $2k, mostly for resale value and not out of faith in the technology.

Too many companies want to label it a safety feature, yet for the most part it doesn't operate like most safety features. Almost all demos we see are fair-weather events on simple roads, or in some cases systems which can only operate on specific roads. So I think they focus too much on the navigation part.

I am more interested in full self-driving technology if it leads to more passive safety features like forward emergency braking, as in an always-on system which watches "over your back". That to me is the real value of all this research.

(personal usage stuff is below)

Now don't get me wrong, Tesla's system is amazing for what it does do. It certainly isn't full self-driving, but it offers many of the features people could use to augment their driving. I used it for the majority of a six-hundred-mile drive in both directions. It was useful in rush hour traffic, the stop-and-go variety, as well as on the long haul.

Two reasons I found it valuable. The first is that on long drives I engage it but still maintain the presence of mind to take over when I need to; in this mode I am using it to handle situations when I am distracted, the old issue of looking in your rear-view mirror and seeing a bridge you went under but don't remember. I also used it on a typical midwestern two-lane road at night in the pouring rain. It helped alleviate fears caused by the constant issue of people with poorly aimed lights or undimmed high beams. The car never lost track of the lanes, whereas a few times I was having to focus hard on the white line to avoid being blinded by one of those high-beam drivers.


I agree 100% on why I purchased FSD at $2k. But I'll add that I almost never use Autopilot, as I've had too many negative experiences with it. I do frequently use (and generally love) the smart cruise control feature, though - steering the car in traffic just isn't that much work for me and I don't have to worry about the car lunging to the side when an exit ramp splits off.


Yup, have a Subaru and adaptive cruise control is great, but I doubt I'd want auto-steer. Steering just isn't much work but keeps you attentive without causing the fatigue that speed management does.


Tesla's big bet on autonomous driving is very strange to me. They already have a unique selling point with fully electric cars with good performance and design, so why are they overextending themselves to try to be first to market with such an advanced and difficult feature? They're a small company with little money; they should focus on the fundamentals of selling cars first.

It's like the falcon doors on the X: unnecessary, expensive, and a cause of delays when regular doors would have been just fine. But no, Tesla had to be different, for what? Elon's ego wanted cool doors?


The problem is that to get where they are they've needed huge investment and issued huge amounts of debt. They are only able to do that because of the huge valuation justified by self-driving technology. If they end up as a company churning out great electric vehicles that is not going to justify being valued at 50% more than Ford.

What costs Tesla money is building great big battery factories and car factories; meanwhile the software engineers working on self-driving are a relative drop in the bucket but justify probably 80% of the market cap.


Their market cap was already over $30 billion in August 2014, before the first version of Autopilot was announced.


It may not be enough to validate the choice, but the falcon doors make the X a minivan without the stigma of a minivan.


I'd say that the X is more of a crossover than a minivan. A true minivan would be the FCA Town & Country, Toyota Sienna, or Honda Odyssey.


Other SUVs accomplish that with regular doors.


He'll be lucky if we have self-driving cars by the end of the next decade. I am sorry, but false-positive and false-negative situations are not OK when it comes to self-driving cars, and most if not all solutions that exist today are based on technologies that produce false positives and negatives.

https://arstechnica.com/cars/2019/03/feds-investigating-dead...

https://www.youtube.com/watch?v=-2ml6sjk_8c


> Tesla is one of the biggest over estimators

Tesla promised that the current hardware the cars come with is all that's needed to implement autonomous driving at some point in the future, which is currently an unknown unknown. If that doesn't scream snake oil, I don't know what does.


Reason Tesla is having trouble with autonomous vehicles: Elon Musk != Nikola Tesla.


People will die with self-driving cars. The only thing that matters is how many people are dying because of self-driving cars vs. dying without them. That's it. The moment it's an order of magnitude less, self-driving cars will become the default.


That's not how people will see it.

Nuclear energy is the safest form of energy; it has far fewer casualties than gas, wind or solar. The difference is that the risk of nuclear is indiscriminate, whereas the people dying in a solar accident are the workers (rooftop falls).

If the driver is in control of their own car, they have themselves to blame (mostly). With self-driving, they are trusting their lives into a single entity they understand nothing about.


Nuclear energy is a good example of the situation that self-driving cars find themselves in. A common saying in my industry is that with safety-critical systems you do not get to decide whether the systems kill people or not; the best you can do is decide how many people get killed. Unfortunately for self-driving cars, like nuclear energy, even if the number is far less than with existing technologies, the public has little appetite for indiscriminate death.


^This.

The census don’t, generally speaking, tend to believe statistics and facts. And the media will always spin a bad situation to boost their click through rates.

So, even though self driving cars may be 100% less prone to accidents (this is just an example), media headlines like “driver killed instantly when self driving car automatically steered into a wall without warning,” will cause the majority to push back and say they’re unsafe.


> “driver killed instantly when self driving car automatically steered into a wall without warning,”

Interesting example. A justifiable fear people have of self-driving cars is them crashing in scenarios where no (minimally competent, not ass-blasted drunk) human would reasonably crash, like randomly steering into a wall.

I think if self-driving cars were seen to crash in scenarios where a human could have made a similar mistake, their reputation would be better.

Not to say the media reporting of these incidents would be in any way accurate or fair.


This is why self-driving cars will feel more dangerous, even if statistically they are safer. I wouldn't feel much better if, in the moments before my car drives into a concrete barrier, after having determined impact is inevitable, it informs me that while it might have failed me and I will die, statistically it has saved 10 other theoretical people.


If I had to guess, the decision will always be to protect the occupants of the car over everything else. No one is going to buy a car that will kill you to save others.


That's not what I was saying. That a car is statistically more safe is not comforting when it kills you in a way you would have easily avoided had you been driving.


There are plenty of sober people who randomly steer into walls/trees/you name it, every day.


Nuclear power generation is also being used, you just don't hear about it much. You can see California's power supply by fuel here: http://www.caiso.com/TodaysOutlook/Pages/supply.aspx

Right now it's around 10%, but it's different for different regions.


I think the problem with nuclear energy is the overall number of people who die per incident. Because of this, the deaths are overreported.

We seem to be able to handle people dying one at a time, but once a whole town is in danger, even if this is something that happens once a generation (this is the hard part to understand), it crosses that threshold of "this isn't worth it".

That said, if self driving car deaths get reported as much as traditional car deaths (that is to say, not very often), perhaps people will think they are just as safe as driving themselves.


>I think the problem with nuclear energy is the overall number of people who die per incident.

IIRC those numbers are very, very low. Your village should worry about the dam upstream, not the nuclear power station upwind.


> The moment it's an order of magnitude less, then self driving car will become default.

I have no surveys to back this, just intuition. But I don't think one order of magnitude is enough to counteract the fact that people don't feel as if they have control over their own mortality.

i.e. it'll have to be a lot better than just one order of magnitude before people can be swayed towards putting their lives in the hands of a robot.


That is not the only thing that matters. There is some interesting psychology at play here.

One thing I see rarely discussed is the fact that you are getting killed by a programming error, not a human. Of course, humans made said programming error, but they have no face; there is nobody (no individual) to blame, no one to apologise, no one to mourn for their actions.

At least if someone is hit by another driver, they will be prosecuted in the case of negligence, or if it's due to a mistake, be able to apologise. It's human. It's different.


I don't know. People still fly despite the occasional mechanical error, programming error, or other non-individual error leading to fatalities.

I think if it is an order of magnitude safer that would override many of those emotional concerns. Insurance companies are decent at thinking rationally about problems. People might have emotional concerns, but I think a halved auto insurance bill would be enough to sway a critical majority of people.


But when you fly there is still a human face ‘in control’ (even if not to a sufficient degree) you can blame: the pilot.


That's a good point. Pilots are still an important psychological factor that makes it easier for people to fly.


> Of course, the humans made the said programming error

Are you sure?

You do understand that self-driving vehicles inherently rely upon machine learning systems (and I am not talking simple regression here), which are trained and ultimately make decisions by means that we still don't fully understand?

What I'm trying to get at here is that, barring faulty sensors or mechanical systems, any decision that the control system of a self-driving vehicle makes is most likely arrived at via the inner workings of one or more machine learning systems (likely deep learning neural networks of some nature, but there are other systems being experimented with as well).

They are not a huge series of if-then-else statements or anything of that nature; such algorithmic approaches have been tried in the past and dismissed as unworkable due to the sheer complexity that is inherent in driving.

Honestly, what drove the field forward was mainly work done by CMU, particularly ALVINN; but there was interplay between CMU, Hans Moravec, Stanford (the Cart, among others), Thrun, and others. It has a very long, convoluted, but detailed and rich history (I encourage anyone who's interested in the fields of robotics, AI/ML, and/or self-driving vehicles to read up on it - it's very fascinating). ALVINN ultimately gave the hint toward using neural networks and deep learning, but the tech couldn't make the leap forward until the convergence of a number of technologies.


I know this. You know this. But we are in a technology bubble. Machine learning also has no face to the regular public. Telling to the public that AI killed their relative is even worse than saying the team at Google did it.


You've articulated the idea underlying a nascent field of research: algorithmic accountability.


> The moment it's an order of magnitude less, then self driving car will become default.

This is rational but I don't think it's true. ~37,000 people die from car crashes every year in the US, from about ~35,000 accidents. Each individual Tesla crash that results in any serious injury or death is still a national news story. Humans are bad at estimating probability; it's hard to imagine popular support for self-driving cars if they kill thousands of people each year. Even if the alternative is tens of thousands of deaths, we have already collectively priced that in as a risk of living in a developed, automobile-driven society, such that fatal crashes never rise beyond local news.


A third of fatal accidents involve drunk drivers.

I find it interesting that the apparent objective is to make driving safer, but apparently the powers that be have decided that it is easier to make cars that drive themselves than to convince people not to drive while drunk.


Quibbling... 28% in 2016 (https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/...), but this is after decades of awareness campaigns and enforcement. Then there's the ~70% of fatal accidents that happen with sober drivers. There might not be more to be had by addressing human factors (though I personally believe graduated licenses with more rigorous testing and much higher investments in public transit would help) so the focus on self-driving vehicles may not be misguided.


You're off by some number of orders of magnitude on the accidents, unless you mean, "There are ~35,000 accidents that involve one or more fatalities (and many more that do not)."


> ~37,000 people die from car crashes every year in the US, from about ~35,000 accident

OP reads correctly to me


Even if overall self-driving cars are safer, they won't feel safer to the individual if the situations in which it will kill them are situations they could avoid if they were driving themselves.

The self-driving car might be able to avoid that t-bone situation more often than the driver can, but if the death scenario for the self-driving car is that it gets confused and drives into a river, that will feel unacceptable to a human because it seems so simple for them to avoid.

Most people aren't concerned how many other people survive, but are generally primarily interested in their own survival over population-level statistics.


> The only thing that matters is how many people are dying because of self-driving cars vs dying without.

This is not how human emotion or layperson risk assessment works.


I bet it'll be more about money than lives. As soon as car insurance is an order of magnitude more expensive when "driving yourself", then we'll have self-driving cars as default!


It's not like that's going to happen next year or ten years from now.

Insurers like boring, happens-at-a-consistent-rate, human-error accidents, so long as their slice of the population is not making them at an appreciably different rate than the competitors' slices.

Insurers really don't like hard-to-predict accidents that cause people to max out their collision and medical coverage (which is exactly what would have happened in the Tesla cases had the customer not died), which are exactly the kinds of accidents self-driving cars seem to be having.


It was very exciting when autonomous vehicles from a handful of companies first began demonstrating the ability to go a few miles in complex traffic and pull off a few fancy maneuvers along the way. I got really excited when I saw that, and for the engineers building these systems it clearly went to their heads: timelines to scaled deployment went from being 20 years away to being 5 years away in the span of 18 months.

When deep learning and simulation were first applied huge gains were made, and the limits of these tools were not fully understood. Now that these tools are better understood, we are back to old fashioned robotics development timelines. You gotta build features, and test them, over and over, down to a very granular level. Every time you roll out into a new geography, there is a new mountain of problems to be solved. To drive the error rate down another order of magnitude, the workload goes up exponentially relative to what was needed to achieve the previous order of magnitude. It's like inverse Moore's law.

Boston Dynamics had bipedal robots balancing dynamically and walking around in the early 1980s. 35 years later, now that we have a billion times more compute to throw at the problem, well, BD's robots are still mostly just walking around, and far from being good enough to walk around in the real world. That's the real rate of progress in robotics. The prospect of autonomous vehicles has drawn billions of dollars of investment towards solving these fundamental robotics problems, and certainly that has accelerated things, but how long does the money need to keep coming in for before it starts paying out? That's anybody's guess.


BD's controllers were/are all hand-tuned using good old convex optimization. Good luck getting fancy 'deep RL' to work on these without multiple man-years of tweaking every little knob in the cost function and the algorithm.


Trains involve massive capital outlays and 60% of bus operating costs are for the driver, resulting in buses that are larger than necessary and therefore increasing the non-driver costs.

If we had self-driving vans we could provide better and cheaper transit to everyone.


> Trains involve massive capital outlays

And self-driving cars don't? How many trains could be built with the billions invested so far in self-driving cars?


That's a one time cost. If we get the technology working building more self driving cars will be much cheaper than building new trains.


Self driving cars are simply a backwards-compatible transit solution. We have already invested so much money in car-based transportation to the detriment of trains. But, we will still need to continue paying billions and billions to repair and build new roads.


It costs NYC about 2.6 billion to construct a mile of subway.

In light of that, the money invested in self-driving cars is pretty small.


Either that, or NYC is just really inefficient in terms of cost.

Keep in mind you've cited probably the first or second most expensive mile of any subway ever built, and subways usually cost much more than other trains.

China regularly builds high speed rail for $30M per mile. Europe is usually closer to $50-60M. Tokyo has built subways for $400M per mile.

https://www.nytimes.com/2017/12/28/nyregion/new-york-subway-...


Beyond geofenced solutions and systems like Super Cruise, I doubt that we'll see truly self-driving cars for another 30-40 years.

What fascinates me is that companies like Uber believe they can use their current business model to finance the development of a self-driving car and then start to make a profit when those cars are ready. That would mean investors pumping money into the company for at least 25 more years.

It makes more sense to slowly grow self-driving cars out of technologies like Autopilot and Super Cruise, gradually improving the solution until one day it can safely navigate a leaf-covered dirt road.


Strategies like Autopilot are the worst of both worlds: they don't free up the driver and they are still susceptible to faults. Driving aids which engage to prevent an accident are a useful way to improve safety; driving aids which mean the driver is not in control and is just monitoring the system in case something goes wrong increase the likelihood of an accident significantly.


> just monitoring the system in case something goes wrong increases the likelihood of accident significantly.

That depends on the rate of system failure. I used an early form of autopilot and quickly learned to pay attention to where the system would fail. It was like cruise control, where you look for situations where you can turn it on but never stop paying attention.

At the other end with really safe systems they are going to default to safer than a human driver. So, there might be a significantly deadlier middle ground, but no company has deployed anything like that yet.


> I used an early form of autopilot and you quickly started to pay attention to where the system would fail. It was like cruse control where you look for situations where you can turn it on but never stop paying attention.

You may do this, but I bet the majority of people do not. I know that Google has data on its self-driving cars showing that their "safety drivers" simply don't pay attention. In Uber's case - here in the Phoenix area last year - this inattention helped to cause an arguably avoidable fatal accident. These were people supposedly trained not to do this - yet they did.

As far as cruise control is concerned - there have been more than a few accidents from people thinking it did more than it does and not paying attention to the road in front of them. Tesla's Autopilot system, while better than simple cruise control, has - because of this better system (and poor marketing) - caused people to think it does more than it can, and has led them to sleep in their vehicles while it "drove", to be completely inattentive, etc.

It is pretty well known at this point that those "middle levels" of self-driving technologies are things that shouldn't be implemented, because people either misunderstand them, or misuse them, leading to often fatal accidents.


When I say early, I am talking about a system that stops working every 60 seconds or so. The constant failure instilled a fair amount of paranoia and did not give enough time to cause boredom.

Google’s cars are in that middle ground, but Google has also not released their system. Presumably they are going to wait until it’s as safe as the average driver before releasing it.


I've been wondering: why don't the self-driving car companies build a system that requires a person to drive it but assures the human can't crash the car? In other words, if the self-driving models predict that the human inputs will result in a crash it forcefully overrides the throttle, brakes, or steering wheel.

The sensation would be like driving one of those tracked cars at an amusement park. Try to slam the wheel hard to the right to steer into a guard rail? The computer would physically block it from turning past the point where you would leave your own lane.

This system would:

  * force human drivers to drive the car 100% of the time
  * allow real world testing and enhancement of self driving capabilities (data gathering, model backtesting, etc.)
  * be strictly better than both fully automated and existing non-automated systems
We have the beginnings of this with the new auto-stop capabilities that are showing up in new cars. Why not build this hybrid system as a stopgap until full level 5 automation is working?
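To make the idea concrete, here's a minimal sketch of the control loop I have in mind (Python; every name here is hypothetical, and predict_collision / lane_keeping_bounds stand in for the perception and prediction stack, which is of course the genuinely hard part):

  def control_step(human_input, world_model):
      steer, throttle, brake = human_input

      # clamp steering so the resulting path stays inside the current lane
      lo, hi = world_model.lane_keeping_bounds()
      steer = max(lo, min(hi, steer))

      # if a forward simulation of these inputs still predicts a crash,
      # override the speed controls
      if world_model.predict_collision(steer, throttle, brake):
          throttle = 0.0
          brake = max(brake, world_model.required_braking())

      return steer, throttle, brake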


Because now you have the worst of both worlds. The computer still has the final say in what the car does, but now you have a fallible human in the loop at all times as well. If the computer really can predict crashes better than the human, then have the computer drive the car. If it can't, then letting it override the human isn't a good idea.

Example: you slam the wheel to the right to steer into a guard rail. The computer overrides you and you remain in your lane. You flatten the toddler you were trying to avoid, that the computer didn't see.


> requires a person to drive it but assures the human can't crash the car?

Because the 'not crashing' part is the hard part, the rest is just navigation.


Uber knows the routes ahead of time. They can decide if the route and time of day is safe enough for the limited self driving technology they employ or send a human driver if their SDCs can't handle it.


Unfortunately, too many people subscribe to the idea of startups bringing disruptive change to car transportation just because other startups managed it with entirely different problems. Just because Elon Musk created Paypal does not mean he will be able to jump through the hoops to get self-driving cars on the roads by the end of this year. Many people who are intimately familiar with self-driving car technologies predict that we need at least 10 years and several innovations before we can seriously consider this as an option.


Of course you're correct that they might not pull it off, but it's not like people are basing any confidence on Musk's Paypal background. Tesla already brought disruptive change to car transportation by making electric cars popular. And Musk brought a third big change by commercializing reusable rockets.


I wonder what the survival model is for Uber's regional competition like Didi and Ola. I can't speak for China, but driving in India is way, way more difficult than in the US.


I think self driving research has already benefited cars substantially via driver assistance, like you say. Recently I was in a base model Toyota rental (Camry or Corolla, I think) and was surprised when it automatically warned me that someone was cutting me off in traffic, before I even noticed them move a few car lengths ahead. Then I noticed there was automatic radar cruise control built in, which made the rest of the drive much nicer. I hadn't even looked too closely before since I assumed that tech wasn't pervasive enough yet to trickle down to a basic rental car.

I think that the technology is probably already saving lives - Volvo is trying for zero fatalities right now, and I think driver assistance will reduce the lethality of cars substantially before we're able to adopt self driving vehicles, which would likewise make it less of a disaster if we aren't able to adopt self driving vehicles in the short timelines originally predicted.


I think you're discounting exponential growth. We (humans) overestimate the short term and underestimate the long term.


On the contrary, humans are keen on extrapolating "linearly", ignoring that progress is discrete, and that the next discrete step isn't on any guaranteed timeline. See: humanity's idea of the next twenty years of space exploration in 1970.


Humanity's ideas of space travel for 1970-1990 were right on, assuming US government investment had remained the same (NASA got 4.4% of the federal budget in 1966, and <= 1.01% since 1975) https://www.theguardian.com/news/datablog/2010/feb/01/nasa-b...


Technological growth is an S curve. Exponential at first but hitting a point of diminishing returns after. The 747 is roughly equidistant from the Wright Flyer and the 787. Aviation hit the wall before we got, for example, routine supersonic flights. In computers, the 1.5 GHz Pentium 4 is roughly equidistant between the 8 MHz 286 and the i9-9900K (which clocks up to 5 GHz, but that’s not even a fair comparison because the 286 dissipated under 15 watts, the Pentium 4 was a 60W chip, and the i9 is nominally a 95W chip but in reality can dissipate much more than that during boost.)
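A quick check of those midpoints (approximate first-flight and launch years; exact dates vary a little by source):

  wright_flyer, b747, b787 = 1903, 1969, 2009
  i286, p4_1_5ghz, i9_9900k = 1982, 2000, 2018
  print((wright_flyer + b787) / 2)  # 1956.0 -- the 747 flew about 13 years after the midpoint
  print((i286 + i9_9900k) / 2)      # 2000.0 -- right on the 1.5 GHz Pentium 4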


> If the solution is flawed even a little bit, people will die.

Of course they will, but as long as it's more convenient and the death rate is lower than human drivers, they will take off.


Will regulators allow machines to kill people? Human drivers killing themselves and others are a part of human nature, but will politicians and the public accept that people get killed through no fault of their own by mistakes machines make?


Even ignoring that the answer to that question is yes for other industries: regulators are allowing SDVs even after the Uber/Phoenix incident, so we already know the answer to this question.


> Human drivers killing themselves and others are a part of human nature, but will politicians and the public accept that people get killed through no fault of their own by mistakes machines make?

This has already happened with manufacturer defects. It's just more of the same. As long as a clear liability boundary can be established in principle, there's no problem.


If autonomous driving can be delivered with accident-causing failure rates similar to those of general manufacturer defects (i.e., much lower odds than accidents caused by human error), then I think that would be sufficient. But if say we go from ~35k human-caused road deaths/year to ~10k autonomous-driving caused deaths/year, that may be seen as unacceptably high.


> But if say we go from ~35k human-caused road deaths/year to ~10k autonomous-driving caused deaths/year, that may be seen as unacceptably high.

Sure, but there wouldn't be an outright ban. They'd instead establish higher standards that need to be met.


The sobering thought comes if you work in machine learning or big data, or talk to someone who does.

What do you consider a good accuracy rate for your machine learned / hand-engineered heuristic? 95%? 99%? 90%?

We've built some real marvels of machine learning, but you can get away with the occasional not-very-rare horrible failures in speech recognition, photo labeling, or machine translation, and it just results in some giggles on Reddit.


> If the solution is flawed even a little bit, people will die. That will give self-driving cars a bad reputation and people will be unwilling to ride in them.

There is lots of debate on every autonomous car story, including this one, about "is as good as humans good enough", but that's a technical threshold. The real blocker (from my view) currently is and will be liability for "negligence."

If I'm driving and fiddling with the radio and as a result blow a light and kill someone, at most you can turn out my shallow pockets in civil court. The automaker has no liability.

But if the _car_ is driving and blows a light and kills someone, then the automaker can and will be sued because they've got deep pockets. I can see the ads now running during the Price is Right. "Hit by a runaway robot car? Jon from Orlando was awarded $3.2 million! Call Richard Cranium Attorney at Law and get yours!"


Both GM and Ford have been hiring a lot of software engineers in Detroit in the past few years. The problem is that they need NASA-style software engineers (crazy-anal nit-picky consider-every-possibility style engineers)--not engineers from the regular software industry. If your car crashes, you can't just reboot it.


It seems to me that there's less of that than there used to be.

In days of yore, I shipped quite a few ROM-based products. Bugs were frowned upon.


That, and they pay shit relative to the regular software industry.


IIRC, it was Elon Musk that said something along the lines of it's not too difficult to get self-driving cars to 99.9% safety - but that's still 1 accident in every 1000, which is terrible all around. You need to shoot for many, many 9's before it can be considered a practical alternative.
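A back-of-envelope version of that, assuming roughly one US traffic fatality per 100 million vehicle-miles and a 10-mile trip (both numbers are only illustrative):

  human_fatal_per_mile = 1 / 100_000_000
  trip_miles = 10
  human_fatal_per_trip = human_fatal_per_mile * trip_miles
  print(f"{human_fatal_per_trip:.0e}")  # 1e-07 fatal failures per trip to match humans

  # A system that is "99.9% safe per trip" fails 1e-3 of the time --
  # roughly four orders of magnitude short of that baseline.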


People die today. Self driving cars just need to improve on that by a significant amount. I predict that despite not being perfect (which might not even be a reachable bar), they will become mandatory for all cars in the near future because they are that much better than humans.


Humans are deeply emotional creatures and it's most evident when you observe group-level decision making.

Individually, you and I can both agree that the bar is quite low for self driving cars. It just needs to do better than current drivers to be safer. We should adopt it immediately, if we can just prove that it crashes less.

Society doesn't see it that way. 1 crash is horrific. 1 accident is proof the technology is flawed. 1 accident is 1 too many.

It's stupid, but that's how it is.


The metric shouldn't be #crashes but #innocent lives lost. I've never seen a study that accounts for all the influences to prove that autopilot and co. are better in that regard.


Who's going to be liable for those deaths? The manufacturer? I don't think they'll be so hot on that. So really, they need to be orders of magnitude safer than humans, to the point that they are close to flawless.


I don't think that is the blocker. You can pass a law that says "manufacturers that meet XYZ self-driving certification are exempt from liability".

The question of liability is always tough to resolve. Is Boeing responsible for the 737 MAX deaths because they wrote the faulty software? Is the airline responsible for poor training or buying the version without the "disagree" light? Or is the FAA liable for their poor oversight of the whole program? Arguments can be made for all three.


Self-driving... oh wait, this was in the air, where there's no traffic and none of these complicated situations. And it wasn't even an autopilot. Even that was too hard to implement.

Lane-assist on the highway? Sure.. autopilot? Not any time soon. Probably not within 20 years


Exactly, this is also a legal one. I'm curious what happens when we get human-driven cars crashing into autopiloted ones. Are we going to have an assumption of guilt on the human or on the auto, or will it actually be balanced?


I agree, but the optics are awful. It's not a technical problem, it's a psychological one.

I don't have any sources to back this up, but I'd wager that people, on average, would prefer to take a risk an order of magnitude greater if they believed they had some degree of control over the outcome.

I.e. let's say 10% of conventional cars experienced a fatal accident, and 1% of self-driving cars experienced a fatal accident. I think most people would choose to be in the 10% fatality group because they feel like they would be in control.

Anecdotally, I have a co-worker who hates flying because she feels like she isn't in control, even though aviation deaths are extremely rare.


> If you are going to geofence self-driving cars it is questionable whether they are better than trains and buses.

If they could manage to geo fence them to highways and get full functionality there it would be a big win for commuters and road trippers.

However the companies can't then make billions off of automated city taxi services.


Exactly. It's relatively easy to build a self-driving car that works well on good, well-marked highways in good weather in the daytime when every other driver on that road is acting responsibly. But that's just the base case, and the edge cases are wickedly hard.


> If the solution is flawed even a little bit, people will die. That will give self-driving cars a bad reputation and people will be unwilling to ride in them.

Some people have died while using elevators, but I don't see the rest of us not using them because of that.


> Some people have died while using elevators

For some reason I forgot about this in an earlier comment I made about people wanting someone to blame.

But here we have what could be argued as a "simplest version of self-driving vehicle, with no human operator at the controls, and no one to immediately blame when somebody is hurt or killed by the system"...

So the question is - has there been a study done on how people respond to such incidents, when they have no one to blame? At least, in the moment? They can't "yell at the elevator". Eventually, they may yell at the owner of the property or the manufacturer of the elevator, or whatever company last did maintenance (ie - yell -> lawsuit).

But does that stop them from using elevators ever again (I imagine that a percentage do become fearful; I know I read something of that nature regarding people who've been "trapped" for a long period of time in an elevator, especially those who are alone when it happens)?

Also - why do others continue to use elevators, and not take the stairs (this is more a rhetorical question - stairs are not an option in a skyscraper over more than a few stories for most people)?

I'm just interested in why a self-driving car is so different to people, beyond the relative "novelty" of it? Will we completely reject it, because it can't be made 100% perfect? Or will we get used to them in time, even though statistically someone somewhere will die occasionally from an accident in one?

I'm not looking for answers here; it's just something that I ponder when these kinds of discussions come up, as I've had a bit of minor experience in learning about (and how to create) these kinds of systems.


The parallels don't end there: elevators used to be much more complicated than they are now. Instead of having one button per floor they had motor controls. They literally had to be "driven" by human operators.

I don't remember if any studies were cited, but the "elevators vs self-driving cars" was studied in the Planet Money podcast: https://www.npr.org/2015/07/31/427990392/remembering-when-dr...


Agreed. Advanced driver assistance is where the market opportunity is for a considerable time. From there it can evolve closer to full self driving. However full self driving will never be “done”


So we want a car that we don't want to drive. It's called a taxi.


I used to think this too -- but Tesla has killed half-a-dozen or so of their customers with no apparent brand repercussions, let alone an impact on the broader AV space!


Self driving trucks on highways driving from hub to hub with manual drivers for the first/last miles seems doable today. That’s most of the way to a working solution and still a major potential cost/labor/crash savings.


I think a more fundamental problem here is that self-driving cars are as much an infrastructure problem as they are a technological one.

As an analogy, consider hybrid vs electric vehicles. In places like North America with large, open spaces, electric vehicles really only serve a specific type of urban driver. The culture, infrastructure and geography dictate 600km distances which really aren't practical at the moment with current battery tech. Whereas hybrid vehicles can (or could) quite easily reach that range with options to recharge once you get to your destination or have a longer stopover and still use existing infrastructure. The focus on purely electric is a lost opportunity for anyone who needs power or long distance.

Similarly, cars could be designed to be self-driving in the easy cases: highways, certain urban thoroughfares, particular times of day and the like, where existing vehicle and pedestrian flow patterns eliminate the edge cases. Coordinating systems along those types of roads could be installed, as was done for cellular service and GPS, and other protocols could be developed to ensure safety and reliability as well as fallback in case of emergencies.

Instead, we've decided on all-or-nothing bets which move things forward slowly--or not at all--and my worry now is that we'll lose the opportunity to pick the low-hanging fruit and solve the harder problems incrementally over time.


But chasing the higher hanging fruits might allow for breakthroughs that you would not see if you only went for the low hanging fruits.

Range anxiety is less and less of a problem with EVs. The Tesla Roadster 2 is already said to have a range of more than 1,000 km. Add current research in the fields of solid-state batteries and supercapacitors and you have the possibility of reaching those numbers even with less expensive EVs. German automakers already calculate that by 2026 electric engines will be cheaper and more capable than their ICE counterparts.

If you go for that easy middle ground like hybrid cars that you suggest, you limit yourself to the local maximum of that solution. Hybrid cars have the same maintenance cost as non hybrid cars and additionally the complexity of balancing both engines. The only saving in maintenance cost is by going full electric. In the same way you might only achieve certain breakthroughs by actually going for full autonomy even if it wont work perfectly for the next decades for all edge cases.


Bloomberg believes Tesla's numbers are a little optimistic and claims it's not possible with "current" battery technology: https://www.bnnbloomberg.ca/tesla-s-newest-promises-break-th...

It doesn't mean Tesla won't do it. But it will be a big deal if they do.


The choice is about releasing version 1.0 vs constantly adding features that are better suited for versions 2, 3, 4+. What can be done now, at reasonable cost and technology level? My position is that we could have a system right now that would alleviate traffic congestion and offer greater safety. Advancement on the harder problems comes with real-world experience in the field.

The doing is the learning.


Agreed. We could follow the autopilot pattern of commercial flights, such that cars activate autopilot once they have officially entered a freeway and disengage it again after exiting.

Doing so could also help spur further investment into the Interstate system, which, as an immigrant, I consider one of the best inventions America has made for creating a higher quality of life than other countries.

Hybrid drivetrains (tiny engines with forced induction and electric motors as a blueprint) plus driver assistance tech is a much brighter near future than trying to wrangle busy city streets and electric charging.


Could you do it without all or nothing? Mixing self-driving cars with humans means the intelligence has to understand irrational behaviour and has to respond appropriately (speed up rather than slow down, ignore the potential threat, account for traffic behind you), which is sometimes counter-intuitive. And it comes in many ways.

Will a self-driving car know that you don’t pump the brakes when sliding on snow? What about hydroplaning?

If that didn’t exist we wouldn’t have car fatalities or accidents.


I don't think any of those examples are edge cases. The first set are normal traffic conditions that in the context of self-driving cars are easy to solve, especially in narrowed conditions such as on a highway. Moreover, mid-range cars already have collision warning and automatic braking systems. As to your example of traction issues, pretty much every modern car that I'm aware of has had computer assisted traction control systems for a while now.

The edge cases that are difficult essentially boil down to entity recognition; unexpected and moving obstacles, road sign changes, traffic light outages or alternate signal pathways and the like. Some of those definitely would require government level coordination which is about a lot more than technology.


Well it's been a day or two since I've had a comment buried so I might as well not mince words.

It's an incredibly difficult problem and Ford is way behind. So it makes sense for him to try to reduce expectations.

On the other hand, there are a lot of other companies than Ford doing self driving cars, and some are very advanced.

It is really amazing to me that people are still in denial about the existence of self driving cars.

They have existed and worked with various limitations for decades. The latest from Tesla and Waymo are still limited in some ways but also extremely capable.

Teslas can now do all of the driving from one freeway to another up until you enter a normal street.

Waymo is way beyond that. From what the riders are saying at r/selfdrivingcars, the only time the employees actually need to take over are occasions where there are very risky maneuvers in heavy traffic. Now, I believe they could relax the safety parameters and the cars would execute the same as the Waymo employees in those situations. It's just that there is no margin for error sometimes with traffic and if there is an accident at this stage they want to be able to blame the employee.

So what's holding Waymo back from removing the employees from the car is mainly just an abundance of caution.


> From what the riders are saying at r/selfdrivingcars,

Seems like such a community would be heavily biased towards self driving cars. I wouldn't take much from a place like that, even discounting the shill factor.

Self driving isn't going to be a thing any time soon. Luckily for Ford, their CEO is seeing past the hype. And if such a large company sees this, what's a company like Lyft going to do?


Has Waymo expanded the list of routes significantly? Last I read about it, the number of disengagements was very low, but the cars were traveling a pretty low number of fixed routes in areas with good weather and visibility.


They are not fixed routes at all as far as I know. It takes people where they ask to go in the app, according to what I've heard. It is a geofenced area. Phoenix pretty much always has good weather and visibility; that's why they are starting there.


I've been saying this for a while. Truly autonomous cars are decades away.

My line has always been that I hope that by the time I can't, or am not allowed, to drive any more, autonomous cars will be ready... But that I am not even sure about that.

Many of my friends have ridiculed me. They think their elementary-school-kids will not need to learn how to drive. Perhaps I'll have the last laugh. (But actually I hope I'm wrong about this.)


Here in Seattle, we have single-lane roads that go both directions.

When two cars are facing each other, there is a lot of human interaction and understanding about who should go first, who should pull off into a parking spot, or perhaps who should reverse into the intersection to let the other pass.

This is one of thousands of edge-cases that are going to be extremely challenging to automate. I'm not sure it's even possible without some understanding of the counterparty, their intentions and body language.

One problem with developing these technologies in California is that California has very wide, very regular roads. High visibility and dry conditions are the norm. There are few uncontrolled intersections (in Seattle they are the rule, not an exception). I think California's history as a motorist culture has conversely made it one of the easiest places to automate.


> Here in Seattle, we have single-lane roads that go both directions.

We have those in California, too.

> One problem with developing these technologies in California is that California has very wide, very regular roads.

Except where it doesn't.

> High visibility and dry conditions are the norm.

San Francisco, for instance, has fog about 1 in 3 days of the year.


> San Francisco, for instance, has fog about 1 in 3 days of the year.

This is one of the things that self-driving cars WILL do better--and fairly quickly.

Having a sensor system that isn't blinded by aerosolized water will be a huge benefit to driving.


>Except where it doesn't.

It has few enough that you can route around them and still get where you're going.


> It has few enough that you can route around them and still get where you're going.

I think this is a misconception. California has many residents who live in areas that don't have the super wide roads you're thinking it has. It has a wide gamut of roads and many residents (such as myself) are surrounded by more narrow roads. I'm not even in San Francisco. I'm in a suburb in the bay area. (San Carlos)


Exactly where in San Carlos do you think a self-driving / autonomous vehicle would not be able to traverse without affecting the safety or enjoyability of a pedestrian / bicyclist / another vehicle?

The following seems to show a road next to a playground in San Carlos.[1] You think that's too narrow a road for an autonomous vehicle to navigate without endangering kids? Have you familiarized yourself with the latest in autonomous / self-driving tech?[2]

I think your fears are overblown or purposely exaggerated.

[1] Laureola Park San Carlos, CA Playground

https://www.youtube.com/watch?v=3g3ITuPd7Gw

[2] The $800M Robo Taxi That Could Beat Uber

https://www.youtube.com/watch?v=OjDLwnTyybo


2 blocks from that playground.

https://goo.gl/maps/vokEswX4S8z

I live on that street. The streets adjacent to it are also plagued by a plethora of cars on both sides. It's much worse than what Street View is showing, because Street View is done midday on weekdays when everyone is gone. (Notice how most of the driveways are empty and the recycling cans are out - this was taken in the middle of a Thursday.) It used to be that people would park with their cars on the curb to give more access, but I don't see people doing that anymore.

On certain weekends and at night these streets are very full of cars - much more than Google Street View shows. The street is narrower than it appears. It looks as if two cars might be able to slip by, but frequently that's not the case. I can't actually remember the last time I saw two cars go by each other side by side without one pulling over to wait for the other to pass, on my street in particular. The more main street to the south has the same problem even though it's wider.

The streets themselves are actually pretty wide, but because street parking is available on both sides, it's an issue.


Many places in the world are like that. I've travelled to most European countries by car, and I can't name a single city that doesn't have 2-way, 1-lane roads.

Actually I am curious what europeans honestly think of the self driving car craze, since getting the bus, the metro, and.. ehrm walking and biking is much more common and pleasant there.


My home town is close to Rome. I would love a car that could drive there autonomously. But in the city? Cobblestones, broken pavement, pedestrians crossing randomly, motorbikes overtaking you everywhere, no formal lanes, narrow streets, restricted areas, broken buses, tramways, city police directing traffic when the lights are off, people trying to sell you stuff or wash your windscreen when you have a red light..

I think we'll need general artificial intelligence to handle that.


> I think we'll need general artificial intelligence to handle that.

There are roads near me that people struggle to drive down. The amount of negotiation with other drivers, the ability to bump up a hedge or pavement, or reverse hundreds of meters ... or sometimes the need to assertively barge through. But we have nice roads, so geo-fencing to AI certified roads sounds fine.


Yeah, but that doesn't support the "autonomous taxi" use cases that have been pushed for revenue.


Perhaps you need a better government that takes care of all these things. Don't you think?

Isn't dealing with corruption and corrupt officials rife in the quotidian lives of Italians even in 2019? Don't you think it's high time Italians figured this dysfunction out rather than expect the modern world to bend to the moribundity of your culture and ways of doing things? [1] [2]

[1] The Impossibility of Italian Politics

https://www.the-american-interest.com/2018/03/07/impossibili...

[2] Italian Elections: More of the Same Political Dysfunction

https://americas.nikkoam.com/articles/2018/02/italian-electi...


Easy: don't put that road in the map. The cars don't have to choose the same routes people do, and when you're not the one driving, you're often less sensitive to timing.


Lots of residential places have roads exactly like those. In Seattle in particular, it's not just suburbs, but roads filled with apartment buildings on either side (see the neighborhood capitol hill for example).

I like Musk's proposal for mostly autonomous tunnel driving because it actually seems much more feasible than handling every edge case known to man.


> I like Musk's proposal for mostly autonomous tunnel driving because it actually seems much more feasible than handling every edge case known to man.

Maybe I'm just not familiar with the specifics of that proposal, but I don't think tunnels solve the problem you presented either. If I need to go down a one lane street to get to my house, I'll still need to do that whether we have tunnels or not, unless for some reason the tunnel exit is right in front of my house.

Personally, I think the solution is to make only certain large, predictable roads self-driving and force a user to take over on smaller, more complicated roads. As technology advances, you can slowly add more and more of the smaller roads to the self-driving map.
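A hedged sketch of how that gating might look, with made-up segment IDs and map format:

  # Whitelist of road segments certified for autonomy (grows over time).
  APPROVED_SEGMENTS = {"US-101:sf-sj", "I-280:exit20-exit25"}

  def plan(route_segments):
      # Tag each leg of the route as "auto" or "manual" based on the whitelist;
      # the car would prompt the driver to take over before a "manual" leg.
      return [("auto" if seg in APPROVED_SEGMENTS else "manual", seg)
              for seg in route_segments]

  print(plan(["US-101:sf-sj", "I-280:exit20-exit25", "ElmSt:narrow-block"]))
  # [('auto', 'US-101:sf-sj'), ('auto', 'I-280:exit20-exit25'), ('manual', 'ElmSt:narrow-block')]

Expanding coverage then just means adding segments to the whitelist as they're validated.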


The idea, as far as I understand, is that the tunnels present new underground routes that can use central control of vehicles to manage congestion.

According to Elon, his investment in cheaper drilling tech will mean we will have many tunnels with relatively cheap to build entrance and exit points that are significantly smaller than subway stations.

These entry points will allow you to drive to and from your house, enter the tunnel system where some sort of conveyor belt/computer controls your car. You exit the tunnel and drive normal.

So the idea is that the tunnel exits will be _very_ near to your home. Otherwise it's dumb. I think what you are saying about large predictable roads is basically what Elon Musk is proposing but underground.

https://www.boringcompany.com/faq


Okay, that is the proposal I've seen, and IMO it has a huge number of flaws.

- There are going to be certain places where you can't dig a tunnel for geological reasons. For example, half the population of Florida lives less than 4 feet above sea level.

- Even if you increase the speed you can dig a tunnel and lower the cost, it's still cheaper to pave a road above ground.

- There are plenty of places where NIMBY folks just won't let you dig a tunnel (looking at you, LA Purple Line extension).

Ignoring all that though, even if you can build an entry point every half mile, you still run into the issue of needing to drive the car that last stretch, which means you need to either solve the problem of self-driving on complicated roads, or force the driver to take over. The tunnel system shortens the amount of time you would spend on small roads, but it doesn't eliminate it.


The first quarter is easy, the 2nd more difficult. I don't think we'll ever get to the third.


> Easy: don't put that road in the map.

Not reasonable. You'd be parking the car at least half a mile from where I live because it's full of these streets. Usually they're one car at a time because cars are parked on both sides of the street causing the streets to become just big enough for one car to get through.

For reference, I live in San Carlos (Bay Area). I've encountered this same issue in many other parts of the USA too. It's not uncommon at all.


> Not reasonable

If your car could get to your location without needing to use those streets, it could park itself anywhere, half a mile or further. There is also the very reasonable possibility of banning parking conditions that lead to that kind of issue, made more palatable by my earlier point.

I've learned to always think in "options", not "absolutes".


Good luck getting that passed. The reason there are so many cars parked on the street where I live is because many of the homes were made in a time when people didn't even own a car. You'd essentially be banning ownership of more than one car.


> One problem with developing these technologies in California is that California has very wide, very regular roads. High visibility and dry conditions are the norm. There are few uncontrolled intersections (in Seattle they are the rule, not an exception). I think California's history as a motorist culture has conversely made it one of the easiest places to automate.

You're doing well until you say this. The state of California is just as diverse as Seattle when it comes to roads. I lived in both quite a bit...

California has some bad roads with poor visibility. Recently we've had a lot of forest fires - guess what, visibility gets pretty bad. It's like driving in fog (and we have that too). We also have snow! (The Bay Area got snow this winter - Skyline did its thing and fell off the face of the earth again from it too.) We had so much snow that most of Tahoe was closed for the winter. The roads aren't super well maintained, and if you do some trips to Tahoe (good luck getting there or back!), you'll find that the lane markings just disappear altogether, snow or not.

We have single-lane roads with 2-way traffic, either intentionally or because there are so many cars parked on the road that only one car can fit through. There are insane hills with nearly no visibility unless you creep forward into the intersection past the stop sign at the top. Same with going down them - no god damn idea what's below until you just jump that cliff. You have some of those in Seattle, but they're really steep in SF.

There are incredibly twisty and poorly paved roads with no road markings, or that suddenly change from having lines to not having lines, randomly or permanently (e.g. Alpine, Old La Honda Rd, Kings Mountain) - and here's the kicker: they're pretty popular roads with lots of people using them all the time, with a wide variety of people on them. It's common to see a cyclist on the road, or a damn horse, and to have to overtake them when there isn't a ton of room (and there's a very real risk of oncoming traffic), unless you want to be stuck behind them for 30 minutes during their little workout or trot up the hill.

So... don't write off the whole state. There's a lot of varied conditions.


I think everyone is underestimating how many people live in these conditions. The Los Angeles economy alone is bigger than that of almost every country in the world. Add in the rest of Southern California & the Bay, Phoenix, New Mexico, and the majority of Texas's population -- and that's probably a bigger economy than any other besides China.

Will self-driving cars be applicable EVERYWHERE in EVERY condition shortly? No. But even if they're only available in ideal conditions in some places -- that can still be almost unfathomably huge.


> This is one of thousands of edge-cases that are going to be extremely challenging to automate.

On the contrary, this is very easy to automate, it's a simple shared route planning communication between the cars.

The challenge remains interpretation and reaction to irregular environmental objects and structures.
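For example, a deliberately simple (and entirely made-up) V2V rule for the single-lane, two-way case: both cars broadcast their distance to the nearest pull-out spot, the one closer to a pull-out yields, and the lower vehicle id breaks ties.

  def who_yields(car_a, car_b):
      # car_* are dicts like {"id": str, "dist_to_pullout_m": float}
      return min(car_a, car_b,
                 key=lambda c: (c["dist_to_pullout_m"], c["id"]))["id"]

  a = {"id": "veh-17", "dist_to_pullout_m": 12.0}
  b = {"id": "veh-42", "dist_to_pullout_m": 55.0}
  print(who_yields(a, b))  # veh-17 pulls over and lets veh-42 pass

The negotiation itself is trivial once both parties follow the same protocol; the hard part is everything around it.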


Step 1: no human drivers.

Ah yes, so simple.


I appreciate you're trying to be glib. However, up to level 4 autonomy I'm not sure I understand your point? Even the most optimistic people agree level 5 vehicles are multiple 10s of years away.


Is Elon Musk included in your "the most optimistic people"?


I'm typically an autonomous skeptic, but I disagree. I think autonomous cars are decades away UNLESS we start designing roads specifically for vehicular autonomy. design lanes that are extremely easy for cameras to see. Traffic lights that are easy for computers to parse. Street signs like QR codes etc.

Obviously the cost of this is prohibitive. But at some point we have to get it through our heads that for a computer to excel at a task, the task needs to be redesigned.


This is a terrifying idea. Most of the US -- and its cities -- is already designed around the car over the people which make it up, and there's more than a few convincing arguments this car-above-all philosophy has been detrimental to the economy, sustainability, and overall health and happiness of cities, and, as Jane Jacobs would say, has all but destroyed the diversity of city life and replaced it with cookie-cutter "Great Nothingness".

I personally shudder at the thought of redesigning our cities any more around cars, and instead hope the next decade will continue the trend of adapting car usage to living in cities and not the other way around.

---

Self-driving cars can and should be an amazing complement to city life: owning a personal vehicle would be unnecessary if the city were dense enough to support a fleet of ever-roaming self driving cars, allowing more space for pedestrians, residents, and workers to enjoy the city and reclaim all the land dedicated to street-parking and parking lots (Los Angeles proper has more area dedicated to on street surface parking than the entire city of San Francisco is big). That's more housing, more parks, more schools, more activity, more events, you get the idea.

Most of that promise would dissipate if we had to further redesign the city around the car, and for what: so the few of us lucky enough to own a self-driving car could live far away from everyone else and putter around on their phone while their car is stuck in traffic on ever wider and wider highways?

American city planning underwent a fundamental shift in the 1950/60s which prioritized cars (I'd argue for the worse), and I think it's past time we undergo another shift away from ever-extending sprawl.


> UNLESS we start designing roads specifically for vehicular autonomy

At this point, why not just have tracks? Really, you can automate so much by just putting the damn thing on rails.


That's what I envision, digital rails.


Yep. Or they shouldn't even need to "see" the lanes. The roads could be enhanced with signal transmitters so that the car is "on rails", in a digital sense.

Of course that would require centralization and working with the government. The government has no incentive to do this because it would just create another problem for them (mass unemployment).


> The government has no incentive to do this because it would just create another problem for them

Actually the government DID try to do this. The U.S. DOT had an entire decades-long program researching the following:

1) Costs and plans for updating roads with sensors

2) Upgrading traffic boxes (the things that control traffic lights and such) with a module to communicate back and forth with surround cars

3) Forcing manufacturers to install a module in all cars that would allow them to communicate with surrounding cars + road sensors + traffic boxes.

The problem was that the amount of pushback the government got from car companies was really absurd. And of course Republican funding games - Republicans love their private companies, so they basically sapped the program's funds.

It was able to pick up momentum again from around 2012 onward though:

* https://www.its.dot.gov/about/federal_its_program.htm

* https://www.its.dot.gov/research_areas/automation.htm

* A practical example of its research: https://www.its.dot.gov/pilots/florida_agencies.htm


They only need to "see" well enough to match against their maps.


How about designing roads for people? All the problems you mention should be solved even if we don't have self-driving cars of any kind:

Start designing roads specifically for ease of use. Design lanes that are extremely easy to see. Traffic lights that are easy to parse. Street signs standardized and easily readable.


> design lanes that are extremely easy for cameras to see

Good luck when it snows. And it's 3 AM. And no one else is on the road, so there are no context clues.


> Good luck when it snows. And it's 3 AM. And no one else is on the road, so there are no context clues.

Um, a human sucks in this case, too.

There is a reason we put reflectors on the lane markers and rumble strips in the road.


> reflectors on the lane markers

I'm from the snow belt, and these are invisible until the plows come through.

We'll eventually embed sensors in the road, but that's not really general purpose self-driving imo.


> I think autonomous cars are decades away UNLESS we start designing roads specifically for vehicular autonomy.

If the tech industry wasn't so bubbled, this would have been the plan from the beginning.


Are there any companies currently trying though? I'd be interested in joining an effort to design self driving cars and the roads those cars would need. I've had the idea for a while but haven't heard of anyone doing it.


Scania presented this as an idea for long-distance hauling in Sweden and they had a trial of autonomous trucks somewhere north of Falun (?).


Uber is ready. It's autonomous for you as a passenger, and you get a free chat buddy if you feel like it.

Autonomous cars are interesting for the trucking industry - for everyday driving, reducing traffic via remote work and smarter city planning is what'll make people happy imho.

I don't mind a 10 minute drive. It's the 100 minute drive in bumper to bumper traffic that makes people dream of self-driving cars. The real answer is not being in that position to begin with.


Uh given that Uber was the most recent autonomous car to kill someone, let me be a little skeptical that Uber is ready.

And I suspect that, given a 100-minute drive route now, if self-driving cars lower the bar for tolerating traffic, the same commutes will actually get longer, with even more traffic.


I think the parent poster was suggesting that a human driven uber was here now - not an autonomous one (hence the "chat buddy"/driver)


Oh, it seems so on re-reading the comment, that was my mistake in interpretation. Thanks for the check.


> The real answer is not being in that position to begin with.

That to me is the crux of the issue. Self-driving cars are a poor solution to an entirely self-inflicted problem. The only viable long term solution is to reduce commuting.


You're arguing that in order to solve world hunger, we should have fewer people that need feeding, instead of automating farming for higher output.


It's not an either/or argument. It's an 'and' argument: autonomous cars are great, AND many believe they do not solve the problems that many proponents of autonomous cars think they will.

In regards to world hunger - many of this planet's woes, would be helped by advancements in science, agriculture AND not endlessly having children.


I actually think autonomous cars that can safely handle "highway" (for some definition of highway) driving in "good" weather (for some definition of good) would be a huge win in and of themselves. Both for safety and for freeing up human time.

What it doesn't do is satisfy the fantasy of those who live in dense cities that they'll never need to own a car or even learn to drive. But hitting the point where a car can reliably drive itself around a chaotic urban landscape in all sorts of weather conditions by itself is almost certainly a much harder problem.


Exactly. Hopefully, it can help reduce the cost of goods and highway road safety. There’s going to be a lot of edge cases in Manhattan traffic.


I agree with you, and am in the same boat with most of my friends. I have friends that would normally be shopping around for a new car, but who are deciding to hold off a couple years because they've decided that their next car will be fully autonomous and they insist that it's just around the corner. These aren't people I think of as credulous or not knowledgeable about the technology- these are engineers in SV with many years of experience. I don't get it.

I really want autonomous cars to be a thing in my lifetime, and feel like this sort of optimism is more damaging to their future than people being skeptical or even down on the idea. The more people push the narrative that it's a done deal and just around the corner, with no infrastructure changes required, the more backlash (fair or unfair, it doesn't matter) is created whenever there's an accident involving autonomous vehicles.


In the meantime we have Lyft and Uber. I don't think we need fully autonomous vehicles to want to eliminate the need to own a vehicle that needs to be housed and stored for 95% of its lifetime. Waiting for an L4 transport service is just an excuse.


One of the things that hurts autonomous vehicles is that we're basically solving the most difficult situation up front: maneuvering in a world full of vehicles that are operated by unreliable watery sacks of meat that give no indication of their intentions. If all cars were autonomous and interconnected the problem would be far easier to solve.

That said, I think "decades" is overly pessimistic. There have been some great strides in computer vision and decision making over the past few years, and the hardware is pressing ever onward. It will probably be decades (if ever) before we're willing to rip the controls out of the cabin entirely, but a level 4.5 system - where the people in the car can do whatever while it drives and only need to intervene when the computer alerts them to a situation it cannot handle (but can stop a safe distance away from if necessary) - seems reachable much sooner.


>operated by unreliable watery sacks of meat that give no indication of their intentions

We give plenty of indications. It's just that they're hard to define objectively.


> We give plenty of indications.

for some values of plenty < ε. As a cyclist I can tell you that human drivers often don't even use their turn signals at intersections where it is important for me to know where they're going. And it's not like I'm sneaking up on them - it's when they're facing me from the other side of the intersection.


Sometimes, not always though. Anybody who has been cut off by the driver crossing 4 lanes of traffic without looking 20 feet before the end of the exit lane can attest.


> we're basically solving the most difficult situation up front

I disagree. It's not up-front, there's been decades of work on autonomous systems such as drones, UAVs, trains, robots, etc.

It's just that it's a hard, if not impossible problem. Some companies thought they could successfully navigate this problem with enough data. I don't think it's a data problem, I think it's impossible to give real situational awareness to a machine, and that's the problem.


None of those were asked to navigate and interact on roads with non-autonomous vehicles. That's the hard part I'm referring to. Simply staying in the lanes and taking the right turns is relatively easy compared to figuring out what the people in the other vehicles are going to do. If you set the bar such that they never misread the situation, then yeah, that's going to make the job effectively impossible. If you set the bar at "as good as an attentive but not very smart human" then I think we can make it in a reasonable timeframe.

FWIW I know of at least one company that's developing their autonomous vehicles with a remote piloting feature. In the event that the vehicle runs into a situation it cannot handle control is passed to human operators in cubicles that take over control of the vehicle on the spot and release control only after clearing the obstacle and the on-board autonomy gives the green light. They also had ambitions to record what the human operator did and program that into other vehicles into the fleet on the fly as they approach the obstacle. It sounded very ambitious (and data hungry) to me, but is probably the only way to fully remove the controls from the vehicle cabin in the foreseeable future.


See Moravec's Paradox for why decades away is likely optimistic:

https://en.wikipedia.org/wiki/Moravec's_paradox


Great quote at the end of that page, quoting from a book:

> As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.

Indeed, much of the trading world has been replaced by machines.


My test case is: self driving lawyers. That seems to me to be quite an easy technical problem, and obviously financially viable, and mostly can't kill people. And yet..


I think you're both correct and incorrect.

There has been great progress made and I don't see any reason why we won't see them in less than a decade. However, this is only going to be in very specific areas. Really complex driving - roads with no lines, a single lane that allows two directions, etc. - I think that type of driving will take decades.

So in 8 years I can totally see getting off a plane, jumping into an autonomous car and having it drive me somewhere specific. But I can't see driving through all roads until much, much, much later.


I'm a skeptic as well. There are too many variables to make this kind of technology work perfectly. Reminds me of augmented reality measuring app on iPhones - it's just not reliable at all.


> to make this kind of technology work perfectly.

That is a flawed premise. Humans do not operate cars perfectly either. The highest requirement you could put on them is to be as good as the average human driver. And given the other benefits that self-driving fleets offer we can even go a little lower and still come out positive according to various utility metrics.


We might even have teleportation before we have self-driving cars.


I'd rather have frequent, reliable and safe public transport than self-driving cars. Can we pour billions of dollars into that instead, please?


Why not have both? Fast public transit for busy routes, and self-driving cars that bring you to/from public transit?

It's a false dichotomy.


Bicycle brings you to public transit. Problem solved


Perfect for a non-handicapped person responsible only for themselves!


And who lives in the city not far from the public transit.


I'm not sure if you were being sarcastic but this only works in specific cities. Lots of cities have issues with sprawl, weather, lack of bike lanes, or even a place to lock your bike. It's also extremely difficult to do with a young family.


Only in warm climates.

Also good luck carrying your children on your bike.



>Bicycle brings you to public transit. Problem solved

EVehicle brings you to public transit. Problem solved. Otherwise known as Park & Ride.


You can merge them also: Self driving public transit.


A minor, but still significant, area of research to great public transport adoption is the last mile problem. Bicycles, scooters, Segways, OneWheel... hoverboards?

If there was a sci-fi hoverboard that could be used in rain/snow to easily connect to the nearest public transit node, and easily carried until disembarkation, then car use would decline.

On a different tangent, I am curious about societies where motorcycles are a significant fraction of private transportation. What's their outlook on self-driving cars?


> A minor, but still significant, area of research to great public transport adoption is the last mile problem. Bicycles, scooters, Segways, OneWheel... hoverboards?

The big problem here at this point is not individual innovation into new devices. We already have folding scooters, folding bikes, bike share, scooter share, etc. I already combine a folding scooter with public transport in Munich and it works fairly well for me for many trips.

The real problem is that in many cities, the infrastructure to support these things is extraordinarily terrible. For example, there are literally zero cities in the US with decent bike infrastructure -- and it's bike lanes/paths that fit the kind of devices you're talking about -- let alone good infrastructure. The city I'm in, Munich, hits the decent mark, but still has a ways to go before being 'good'. And it only works well for me because I'm a relatively healthy/fit adult, an elderly person or a child would have more problems.


The last-mile problem is solved by reducing it to a last-quarter-mile problem: improving public transportation density.


the last-mile problem is made up to justify post-hoc why public transit sucks, when really it sucks because it is underfunded (as an intentional act of sabotage)


I wonder who exactly would vote to pay for that. Covering the entire US to a quarter-mile resolution would probably be the most expensive project ever undertaken.


Unfortunately the USA wasn't built or zoned with that in mind, so public transportation, while necessary, can't be a complete solution for too many people. I don't condone it; the majority of cities in this country are true horrors and make no sense, and we should aggressively rezone in a way that gets rid of the majority of Los Angeles, to make an example of one of the worst offenders. But we can't ignore that the problem exists.


These are completely orthogonal things.

"We" the people can already invest billions into public transport if we collectively decide to do so. What the car companies are investing in has nothing to do with that.

Your comment is disingenuous, because it implies that one is blocking the other, but it's obviously not.


My original comment is a criticism of how investment is allocated in society. Self-driving cars are (in my opinion) a much lower priority than public transport. If the current allocation mechanism prefers wasting billions (in mine, and the article's, opinion), over investing in a chronically underfunded public transport system then something is wrong at a high level.


But you're still conflating two different things. There's no 'instead' here. Companies are correctly investing into a potentially profitable thing that makes sense for them, and for whatever reason, society has democratically chosen to not invest very much into public transport.

Short of arguing for FULL COMMUNISM NOW, where society would also be democratically telling businesses exactly what to do, these two things are not meaningfully connected. If the people decided to take public transport more seriously, we could do so at any time.


I agree with you that there's no direct zero-sum connection between public transport investment and self-driving cars but I'd say that's missing the point.

This is a tangent, but there aren't two separate pots of currency labelled 'PRIVATE FUNDS' and 'PUBLIC FUNDS' in society. Nor does private industry have a monopoly on R&D. As you acknowledge, it's entirely feasible through political action to change society's declared priorities, and there's nothing 'full communism' about it: Private companies operate within the market defined and backed by the state and legal system.

Self-driving cars, even if ready to go tomorrow, present a worse value proposition than public transport unless we enter into the realms of a post-scarcity society. My opinion is that it's a shame that the current sociopolitical system has ended up offering better incentives by having half a dozen mega-corps pump endless amounts of money into their own proprietary research of something less useful to society than public transport, which provides similar functionality.

Is it a useful, actionable statement? No, of course not! We're wasting time on HN not debating in parliament ;-)


> This is a tangent, but there aren't two separate pots of currency labelled 'PRIVATE FUNDS' and 'PUBLIC FUNDS' in society.

This is true, because in reality there's more like millions of different 'pots' of money that are allocated for different purposes and are controlled by different entities.

> Private companies operate within the market defined and backed by the state and legal system.

Sure, but short of banning or at least actively disincentivizing development of self-driving cars, car companies are going to pursue that. And why wouldn't we want them to? We're always going to have millions of cars around, even if tomorrow everyone's public transit was as good as Japan (which would take decades at a minimum for the US, and probably other countries too), that's still a ton of cars.

Not to mention that self-driving car tech will also be used by public transit for buses, and by trucking companies for freight, both of those being very good things.

> Self-driving cars, even if ready to go tomorrow, present a worse value proposition than public transport

This is a false dilemma, since we can have both, and they're not even completely separate things: self-driving buses would be great for public transportation. That would let you run buses more safely, more cheaply, and with more routes more frequently.

I don't even own a car right now and I'm for it. If nothing else, it would've stopped me from getting hit by cars a couple times while biking.


We do, but in most urban markets billions isn't enough to do much. SF spent $2 billion just on building a new transit hub.


That isn't a whole lot. SF will be spending a billion dollars on the Presidio Parkway project for example.


Self-driving cars are the future of frequent, reliable and safe public transport.

Why own a car when you can have a self-driving car pick you up from anywhere, bring you anywhere, and drive away to the next customer when it's done.

Sure, some people will still want to own a car, or drive one themselves, but eventually I can imagine shared self-driving cars becoming cheaper while still being a reliable way of getting around. For many people, owning a car will no longer be desirable since shared self-driving cars suit their needs.


>>Why own a car when you can have a self-driving car pick you up from anywhere, bring you anywhere, and drive away to the next customer when it's done.

That's called a taxi and I don't see them replacing personal cars en-masse[0]. Even once autonomous cars are ubiquitous I'd still like to own one, not share it with other people.

[0] Taking an uber to work and back every day would already be cheaper for me than what I'm paying for my car + petrol + insurance + servicing, and yet I prefer to have a vehicle and drive it myself.


Because where I live it takes at least an hour for a taxi to decide to come get me (and that's being generous: last time it was about 4 hours). Uber simply says "no cars available" and Lyft frustratingly shows cars 10 miles away that simply disappear. Just because the car has no driver doesn't mean that the economics change enough to service a low population density area.

The reality is that there are huge swaths of the US where you need personal transportation.


> The reality is that there are huge swaths of the US where you need personal transportation.

Yes, but 80% of the population lives in urban areas, so would be better served by improved public transit. That doesn't mean anyone is taking your personal car away if that's the best way to get around in a low population density area.


Cars that carry less than 4 passengers aren't viable as a replacement to existing public transport in terms of capacity or price.


So how did you update your view after reading the article?


I've often wondered why consumer-level self-driving cars are being pushed as a thing when the logical step would be automating service vehicles first - low-risk vehicles that cover the most miles, which can go much slower if working 24/7. Instead of trying to replicate complex pedestrian routes at standard speed, why not handle established delivery routes instead, and optimise over time for new routes?

It's interesting to see how the numerous players in the field, including startups across the globe, are trying to handle this problem, but no one seems to be close. It's a slow race, where everyone is noisy in order not to seem like they've been overtaken.


Autonomous service vehicles for mining industries and ports are being developed by the big truck manufacturers and are far more likely to succeed in the near term (less for the AI to handle, private land so fewer regulatory hurdles), but they just don't get as much hype (positive and negative) as the idea of revolutionising our commute.


Rio Tinto, the world's second largest mining company, already has autonomous trucks and trains.

They've been at it for some time now[0]. Even with a massive budget and highly controlled environments (trains don't have to follow lanes, after all) it's proved to be quite difficult.

[0] http://www.riotinto.com/australia/pilbara/mine-of-the-future...


Trucks driving long-haul routes. That would be one of the easiest things to automate. Not even the pickup or delivery at terminals, just going over the highway.

Probably there is lots of work in this area and it will happen first. It just doesn’t get much press because it isn’t consumer facing and it’s boring.


The challenge with going big is that the risks and costs rise substantially, while the price of having a local expert to manage both the drive and any eventualities (i.e. a human driver) isn't changing much...

Long-haul routes are important, and shaving minutes off deliveries is a big deal. Improved driver logistics and routing are probably wildly more profitable than automation (and its risk of highly visible failures).


After all, some trucking companies can sometimes even get away with not paying their drivers, because the drivers didn't drive enough miles to cover the truck lease this week and therefore go home with a bill rather than a paycheck.


1. VCs prefer technical risk to market risk. Industrial is riskier.

2. The hype around ML has turned it into magic, so why not “disrupt” a huge market?

Imho a lot of this is the direct result of the deep learning hype cycle.


>Imho a lot of this is the direct result of the deep learning hype cycle.

I'm right there with you. The first round was just 'web connected'. Uber - we're going to do taxis, but with the internet. For example.

Now we've moved to stage 2 - machine learning. Auto-Uber - we're going to do taxis, and uber, but with machines!

It makes for nice headlines.


I can't wait for blockchain ride-sharing.

Never mind https://www.iride.io/


almost.. like trains?


Almost, but on a road and not on a rail. Also a little bit like cars, but these ones drive themselves. A bit like a tram, but with the ability to move freely and not be guided by a rail. Not at all like a bar of soap. Somewhat like a bicycle, but bigger and enclosed.


Like trains with Automatic Train Control. Unfortunately, it's hard enough to get our train operators to install Positive Train Control, never mind Automatic Train Control.


True, but there's also the logistics around getting their goods from a factory to a train. The assumption around automated deliveries would be that the entire chain is automated - from creation to lorry to destination.


Curious, have you ever been on one? Not sure how fast or slow they are for cargo, but I have taken a train from Orlando to Hollywood (near the Miami area), approx a 3 hour drive by car. On a good day it's a 6 hour train ride; I think I have been on there for 10 hours before. Some people for waaay longer from further up north.

I'd rather be on the road, but a train was cheaper than spending money on tolls plus gas. Saving money then was a necessity for me.


You speak about trains as if they are some incredibly exotic things? Maybe they are in the US, I don't know. But at least UK trains are almost always faster than driving. I could get to London by train in about 2:50h, versus 5 hours of driving (and that's just to the outskirts of London, not the city centre). Or to Edinburgh in 1:20h by train, versus 3 hours of driving. Not to mention proper European trains, where taking a TGV from Paris to, say, Brussels, Amsterdam or Berlin is in no way or form comparable to driving - it's just much, much faster.

Now, the cost is an issue - in the UK driving is almost invariably cheaper than taking the train, even when going solo. When taking any passengers with you, driving will win every single time; sometimes it's even worth renting a car for the journey and it will still be cheaper than the train.


I can't believe that "have you ever been on a train" is a real question.

Depending on the route, I could see those numbers being reversed in the UK and Europe; intercity trains are fast but pricey, even allowing for the much higher fossil fuel taxes.

Commuter trains are a similar story; while it's expensive and unpleasant to commute into central London by train, doing it by car will be even slower, incur the congestion charge, and there's nowhere to park.


The price varies a lot.

- In Belgium you can ride from anywhere in the country to anywhere else for 7.5€ (if you buy a prepaid ticket for 10 rides), which is hard to beat even with a car with great mileage.

- In Portugal, you can go from Lisbon to Porto for 15€ (buying a couple weeks in advance). By car, that won't even cover the tolls.


I commute into central Edinburgh from Fife - I wouldn't say it's expensive (£12 return) and it's a pleasant journey (maybe once a month I have to stand for 15 mins on the way home). The equivalent journey by car would be much more expensive due to parking costs, take much longer and, in my view, be rather unpleasant.

I like having a car - but not for commuting into the center of a city!


Why?

In the US, I guarantee you that the vast majority of people have never been on a train. In a few select cities the number will drop if you include subways and other commuter trains, but the general conclusion is the same.


I have taken a train from Berlin to Saarbrücken, approx an eight hour drive. On a good day it's a 6:30h train ride. For Berlin-Munich it's seven hours in a car or 4:30 in a train. For Tokyo-Kyoto it's six hours by car or 2:30 by train.


The US rail system privileges freight over passengers, which is a big part of what makes it (by world standards) a slow system to ride.


Money? Maybe people are more likely to pay.


It doesn't sound sexy, therefore not good for publicity thus making it harder to get funding.

A less cynical idea would be that the problems are less interesting and easier, so the best people may be reluctant to work in that field?


I am absolutely sure companies & large organisations are far more likely to pay AND are willing to pay larger amounts, since there are actual cost savings when you're able to replace a human with an AI driver.


It's worthwhile if it allows for the development of more systems to augment a driver's abilities or compensate for their limitations, rather than relying on some unfeasible HAL dream. Things such as: https://en.wikipedia.org/wiki/Collision_avoidance_system

BMW has been working on something similar, with the goal of developing new safety/comfort features for riders. I tried showing it to some older riders, who gave the whole 'they want to automate everything etc etc carburetors drum brakes kick starts points etc' response: https://www.youtube.com/watch?v=MfGmfV9em1A


That's what most people don't understand - self-driving cars are evolutionary, not revolutionary. You are not getting a car that can do everything - just a car that does progressively slightly more and more every day, like automated parking on 2019 Toyotas or adaptive cruise control, etc.


If my car could drop me off at the front door and go park on its own, that would be a significant improvement over the status quo. A personal automated valet is perhaps a little easier than an automated driver that does everything. Low speeds also mean stopping or starting on a dime, which would allow it to be more conservative about identifying obstacles than when driving at highway speeds.

I don't really understand why nobody aimed for that sort of incremental improvement. Maybe the sensor package is too expensive to use in such a limited fashion?


Game changing technology tends to be overestimated in the short term and underestimated in the long term.


It doesn't mean that any overestimated technology is game-changing.


Any examples?


Personal computers, mobile phones, genome sequencing, MOOCs, email, the telegraph, phones, the railway.

There are examples where it was just straight-up overestimated as well, of course, like air power in warfare, but air travel was definitely underestimated too. A plane is a flying bus, and people travel across a continent for a weekend trip.


Yeah the first web boom then bust... everyone thought it was over with no way to make money.


Perhaps bitcoin will follow this pattern. (Or well, Blockchain finance more generally).


I agree. Initial boom, then loss of faith, but in the long run it will probably all end up being some form of crypto.


That would require the gov't and big business to jump on board, and it's not clear why they would want to.


Not sure why this is downvoted? Is there any clear reason to think blockchain is exempt from this?


That's why I can't help but lmao when people say Ford and company will just "catch up to Tesla".

No one has anything close to self-driving vehicles other than Tesla.


Waymo can do inner city driving better than Tesla and GM can do highway driving better than Tesla.

I love the competition amongst the different platforms; as a consumer it's a win for us. I couldn't care less which brand does it better.


Waymo is vaporware.


Or more importantly -- no one has the battery-building ability that Tesla does.


If you think that self-driving cars are going to take over anytime soon, I have some bitcoin to sell you.


Well at least you can sell a bitcoin.


i mean, bitcoin is doing quite well..


I don't understand why all these car companies are trying to create it themselves (I've doubted it since it started to become a hype).

Self-driving is currently an unsolved problem (too complex). Instead of putting in billions of dollars to be the best, put in billions of dollars to work with the best when a decent solution comes out (e.g. currently that's Waymo) and delegate responsibility. Currently, I wouldn't partner up with any of them.


They don't want to be in the hands of whoever gets that solution, who will surely gouge them for all they've got. Investing in alternative solutions is a hedge.


I agree. Machine learning is not a core-competency of a typical car company and, when self-driving is ready, customers will soon stop differentiating on the safety levels just like with airbags, ABS, or tyre treads. Either it will be 'safe enough' (and safer than a human) or it won't be for sale. It will be a commodity.

Right now we have dozens of companies spending massive amounts of money on approaching the same problem in the same way.


There is a real danger that someone else comes up with a solution that is sufficiently better than human drivers that governments outright ban anything else. This risk has a huge downside.

Car companies need to hedge this risk. There is evidence that self-driving cars can work (they already do in some areas), so the amount of effort worth putting in is pretty high.


Car companies are extremely capable lobbyists. They'll probably be fine.


After pushing electric cars? I don't believe so.

It's too costly.


Because it will be huge, like the iPhone. No one wants to be left out.


So we're entering the trough of disillusionment, which is only part 3 of the hype cycle though.


It has been obvious to intelligent observers since the self-driving car hype started that this hype is hugely overblown. My own opinion is that fully self-driving cars capable of competing with human drivers will remain a dream for at least two more decades. What really interests me is why and how the hype became so powerful, even among people who one would think would know better. How much of the hype is explained by stupidity, and how much was created as a deliberate scam? I also wonder if maybe the military is more involved than people might think. After all, even if self-driving cars are decades away, the autonomous technology created as a product of self-driving car research can already be used, now, for military applications. A tank doesn't have to avoid killing people the way that a car on public roads does.


Of course the car companies are all in on an improvement to their devices that would sell more of them; what I really want, though, is far more buses on the road (big ones on the arterials, van-sized ones on the side streets) and a much larger and more intricate trolley system.

I don't want to see a world that enhances and entrenches private vehicle movement; that's not necessary. The goal is to get from place to place. I want to step outside, walk a block, wait 3 minutes, and get on a bus or a marshrutka and be on my way without ever thinking about gas, traffic, insurance, or any of that.

All this "when self-driving cars..." is missing the point: we can move people conveniently and easily and more earth-friendly with what we've got, we just don't use it well because car companies want to sell everyone a private vehicle.


I don't disagree with you, but...

Follow the money. If there was sufficient profit in what you describe, it would be happening. Since there clearly isn't, it has to be done by government. That means actual people lobbying their local and state governments for change.

And on top of that, it has to be marketed properly. Where I live there's essentially no taxi/Uber/Lyft service (see my comment above about waiting 4 hours for a taxi), but the County has an on-demand bus service that you can call and they'll pick you up at your home. However, since I've never had to take it, the few times I needed a taxi (e.g., to go pick up my car from the shop), I never even remembered that it exists!


I think Ford is correct here. Getting the problem 99% right means a lot of people get killed and injured.

By tackling small chunks such as geofencing, and things like autonomous buses on planned routes, the industry will move the ball forward in slow, steady steps.

But to me, the idea that a real self-driving car would be around the corner (i.e. 2020 or 2021) is just laughable. People underestimate the difficulty of that last 1%. My best guess is that the last 1% is greater than the first 99%.

Or, as Yogi Berra might say - the first 99% is the easy part; the second 99% is the hard part.


I'll believe in self-driving cars once the robots start sorting my trash and recyclables for me... And once my Tesla stops driving like it's drunk!


Driving on the road is about cooperating with the other people so that everyone benefits. Almost everyone underestimated the social aspect of it.


I doubt they underestimated the social aspect of it, given that I was at a talk about self-driving cars about 10 years ago and one of the developers went into detail about how the hardest part they had to work on was the social aspect of driving.

I remember at the time being surprised to hear that as it's always something I'd taken for granted but it's something that I've consciously noticed ever since.


Thank you for pointing it out. I am sure there were always a few who did focus on the social aspect. I just feel that the direction in which the car industry and self-driving tech have grown obviously suggests that they underestimated it. Everyone would agree that if every car on the road is autonomous, it will be possible to nearly eliminate accidents and casualties and increase commuting efficiency. Unless there is a bug in the system or a rogue actor takes over, that is - but that's a different type of risk. However, people are still buying cars with no self-driving capabilities and financing them for the next 5-6 years or even longer. Let alone the infrastructure, which is crumbling away instead of being readied for autonomous tech. That's why people like the Ford CEO are having to come out and recognize issues like this.


I wonder if we're using the same term to describe different things since I'm not talking about the social stigma of owning an autonomous car.

I'm talking about social in terms of reading the intentions of other drivers (eg joining a busy motorway or deciding who goes first on a spot island when all lanes have stopped and waited for the other) and other hazards (eg pedestrians crossing the road without looking properly because they're distracted with young kids, chatting to friends, or tapping on their phone)

As a driver you develop an intuition for how other drivers will react in different situations. Sometimes that's based on the speed of a vehicle, the angle of the car or the driving style (eg are they bumper to bumper with the car in front?). Sometimes it's based on profiling/prejudice about people who drive certain brands of cars (eg BMW owners do tend to be pushier drivers than perhaps someone in a Fiat 500 - obviously this isn't always the case, but you might still be more cautious if you see someone approach a busy junction in a powerful car).

Another thing experienced drivers might react to is a series of brief brake lights ahead at roughly the same point while cars aren't swerving around an obstruction; that might suggest there's a speed trap.

There have even been times when I've been on a motorway and a car has suddenly swerved in front of me, yet I was already hovering over the brake just in case, as I was expecting it from a series of subtle clues I had picked up from their driving style. I knew that they were about to change lanes without checking their mirror even before they committed to the manoeuvre themselves.

There are so many hints and cues that drivers pick up on from other drivers. So much non-verbal communication. And that is the hardest part to train an AI on. Sure, you can teach it to react to hazards as they happen, but having it pre-empt hazards before they happen, without the algorithm reacting to every false positive, is a whole other level of engineering. It's something that takes humans literally years of driving every day to get good at, and we already come pre-programmed with more sophisticated hazard detection, before we even sit our bum behind that steering wheel, than AIs currently have.

So this is the social aspect I was referring to. And I'm sure autonomous cars will learn this in time too - or at least get a close enough approximation where they're "good enough" for general purpose driving.

edit: I should have added that one good thing about AI is that at least its reaction times are lower. That reduces the significance of pre-emptively spotting likely hazards somewhat.


For self driving cars to actually work, you really need the road infrastructure to support them, and for a majority of cars to be self-driving so they can coordinate with other vehicles. A car run entirely by sensors without receiving outside information will always be doomed to fail.


We know how an autonomous car will drive in a system made up of mostly human drivers. But what we do not know is what that system will look like when autonomous cars take over. That's a huge leap and I don't think we have even begun to address that question.


Most countries don't even fix pot holes in their capitals, I don't see them rebuilding their entire road infrastructure for autonomous cars.


The idea of self-driving cars being practical somewhere like Bangalore is laughable.


Traffic-saturated cities like Bangalore and Beijing have the strongest reasons to make self-driving vehicles work, however. It is laughable that Americans would be the ones to do this, given their relatively light-traffic cities (even LA is nothing compared to BJ) and high rates of personal car ownership.


Don't the traffic-saturated cities have a better incentive to make public transportation as good as possible? Fewer people on the roads seems like it would benefit everyone in that case, whereas most American cities are too sparse to support that kind of infrastructure.


They do that as well. The problem persists in spite of Beijing's huge and growing subway system. Take myself as an example: I decided taking a taxi to work was worth it over a subway ride because of the crowded conditions... even though I lived and worked near stations on the same line.

American cities have plenty of space to build new roads; that isn't possible in many Chinese cities. Anyway, roads and private transportation will always be there, so why not optimize them along with a decent public transit system to boot?


The key is to start with a self-driving autorickshaw. Then you can pathfind as a straight line and just move forward whenever there is at least an inch of space.

The real trick will be automating the horn.


LOL. On second thought, that... might actually work.


Yeah. Machine vision is hard. Really hard.


It's that, but it's also that "driving" is a problem with a really large surface area. There are so many edge cases that it's insane. And every single major metro area basically has subtly different rules and baseline assumptions about how people will drive.


Most major players in the space are not relying solely on machine vision, but rather a combination of lidar, radar, mapping, etc.

Vision is just one small piece of the puzzle, and while it's extremely difficult—particularly when considering the complexities of navigating a busy city street—the other parts are just as challenging. This is where claims like that of Ford's CEO really stem from.

Looking at the problem and some of the existing approaches to solving it, it's become apparent the way forward is not at all clear. Realistically companies are making progress, but as an industry we may not have a straight path toward a feasible solution for some time.


Companies that do not figure out a long-term partnership model will definitely not find this sustainable. Spending billions of $$ without a clear revenue mechanism in the short term is definitely not sustainable, even though AV will be a trillion-$$ industry long term. And if you doubt my last statement about AV becoming a trillion-$$ opportunity, read this book on AVs by Spencer Burns (legit automobile guy) and you will see his reasoning.


Highway driving is the first step. Limited challenges, different problem space. Once that's completely nailed down, we'll start to see workable urban solutions.

The reality is that human drivers will be better at interacting with other human drivers in close quarters for the time being.


I think we all underestimate it. Driving a car on a road without assistance is solving the robot navigation problem forever. It's the answer to life, the universe and everything. There is still great progress, and it will probably work in some limited sense.


When you add up the amount of money that Americans spend on car payments, auto insurance, and gasoline every year, if that same amount of money were spent on public transit you’d have a really nice system which would also be self-driving from the rider’s point of view.


As a total layperson, I think the focus should be on long term car-to-car networking. Skip this visual feedback rut we are in and make every new car broadcast its position, velocity, acceleration, heading, etc.

Problem is how to get manufacturers on board en masse.
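
To make the broadcast idea concrete, here's a minimal sketch in Python. The field names, port, and plain-UDP transport are made up for illustration; real V2V systems (e.g. DSRC or C-V2X) define their own message formats and, crucially, add authentication, which this ignores.

    # Hypothetical periodic V2V state broadcast (illustrative only, no real standard implied).
    import json
    import socket
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class VehicleState:
        vehicle_id: str
        lat: float          # degrees
        lon: float          # degrees
        speed_mps: float    # metres per second
        accel_mps2: float   # metres per second squared
        heading_deg: float  # 0-360, clockwise from north
        timestamp: float    # seconds since epoch

    def broadcast_state(state: VehicleState, port: int = 37020) -> None:
        """Send one JSON-encoded state message to the local broadcast address."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(json.dumps(asdict(state)).encode(), ("255.255.255.255", port))
        sock.close()

    if __name__ == "__main__":
        # Broadcast a (fake) state update ten times a second.
        while True:
            broadcast_state(VehicleState("car-42", 51.5, -0.12, 13.4, 0.2, 90.0, time.time()))
            time.sleep(0.1)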


>Problem is how to get manufacturers on board en masse.

I think security is a much, much larger problem. Designing a safe, secure and reliable system for vehicles to communicate information with each other on an ad-hoc basis and designing a system to make use of that information in a way that doesn't allow bad actors to cause mayhem may be an equally hard problem to solve as the generalized computer vision that we need to make the current self-driving thrust work.


> As a total layperson, I think the focus should be on long term car-to-car networking. Skip this visual feedback rut we are in and make every new car broadcast its position, velocity, acceleration, heading, etc.

knock knock -- Who is it? -- Leafs on the road -- What is it now?! -- Raining :>


I'm also concerned about security.


That doesn't help when there is a pedestrian crossing the street.


Having worked in the "AI" field and delivered a couple of projects, I think automation/AI always needs humans.

What we see from all this hyped tech is just the MVP version. You discover the problems when it's in production.


Is there a way to short the self-driving cars business? I have found the whole idea of self-driving cars ludicrous since the beginning and it boggles my mind why people with experience in computers do not feel the same.


I don't see the point of every car manufacturer making their own self driving cars. They don't make their own air bags, ABS, transmissions, etc. Leave that to suppliers like Bosch instead.


Overestimated? How come anybody who has any history with computers knows how difficult it is going to be, yet such a vast number of people, including people with money, are so bullish about them?

After doing some projects in Kuala Lumpur and Singapore I knew that self-driving cars were going to be incredibly difficult. One minute you're driving along with no problems, the next minute a huge amount of rain gets dumped out of the sky, the minute after that the rain has stopped but some roads are impassable, some merely flooded but still open, and others undrivable on the sides but fine if you drive down the center.

How do you code for that?


The market for self-driving cars that run everywhere except in places with monsoons is still quite big.


Yeah, true. Though probably each location will have its own problems. Fallen branches, missing manhole covers, gangs of window washers.


I really, really like driving cars and I don't think self-driving cars are the future. They could be a small part of the future, though.


Didn’t they fire the last CEO for being too slow to adopt new technologies?


It would be a lot simpler if they just skipped to self-flying cars.


Just the dip in the Gartner hype cycle...


I don't actually believe this will prove to be correct; it just might take a decade longer and/or be on more limited routes, but it will happen.


Autonomous vehicles operating on limited routes (geofenced) at first is exactly what is described in the article, so it's not clear to me what you disagree with.


I suppose you could say that, but I'm saying it'll be late rather than not possible. I suspect roads will start being built with automation in mind, for example, and push bikes might be required to have a small transceiver to alert the car's AI to their whereabouts. I think Ford are saying here that it just won't ever be good; I'm saying it will be better, later and slower to be everywhere, but it will still exist.


Nowhere in the title or body of the article does it suggest it won't be possible. In fact it's simply saying it's harder than many people thought.


Well a non-owner CEO only cares about the company for the duration of his compensation package. He’s cutting costs now to boost the stock price. Why should he give a flying fuck what happens to the company when everyone but Ford has a self driving car in 15 years? He got his.


This assumes that investors look only at programme costs and perceive absolutely no possible upside to a self driving car programme when evaluating the long term value of a company. Which seems pretty unlikely...


In one browser tab, everyone's clamouring for autonomous self-driving cars.

In another browser tab, a gamer is streaming himself driving in Euro Truck Simulator 2 for over four hours.

If only we had a way to connect people who would willingly do a job, with companies which would pay to have a job done, minus the cutthroat capitalist exploiters/employers.



How come all of us people who drive cars and are not in the car production business knew the predictions Hackett and the rest made were loony?


SDCs are a slippery slope to AGI.


They're really not. Nothing on the horizon right now points to AGI.


That won't stop the latest institute from touting that their latest glorified BLAS-reducible ML will take over the world tomorrow if someone doesn't continue to fund their work.


Merely scaling up a GAN and optimizing its network structure and training procedure allowed GANs to create nearly realistic high-resolution faces. If you can generate realistic faces, then you can likely also generate realistic action and thought sequences simply with an even larger model.

An action/thought sequence is just a 4D tensor with some outputs controlling actuators. Thinking is just production of actions while actuator output neurons are inhibited, which can simply be implemented by a product with some sigmoid activated neurons.
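
To make that gating idea concrete, here's a toy sketch in numpy. All names, shapes and values are arbitrary and not any particular model's architecture; it only shows actuator outputs being multiplied by sigmoid "inhibition" neurons, so that a gate near zero keeps the action internal (a "thought") while a gate near one lets it drive the actuators.

    # Toy illustration of sigmoid-gated actuator outputs (hypothetical, not a real model).
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)

    n_act = 8                              # number of actuator outputs
    action = rng.normal(size=n_act)        # raw action proposal from the generator
    gate_logits = rng.normal(size=n_act)   # "inhibition" neurons (random here, learned in practice)

    gate = sigmoid(gate_logits)            # values in (0, 1)
    motor_output = action * gate           # gate ~1: action is executed
                                           # gate ~0: action stays internal ("thought")
    print(motor_output)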

Coherent combinations of such sequences can be produced by feeding both the current sensory inputs and the preceding internal state as conditioning vector to both the generator and discriminator.

You simply need to find a way to train the discriminator not only to tell real from fake, but to determine the value of the generator's outputs and make it backpropagate those values in time during training over several generated episodes by TD.

As the GAN is conditioned on its own previous state, it can learn by trial and error how to combine the short action and thought sequences it produces, and can thus learn to produce coherent ("real") language and logic.

Based on such intuitions, I'd say it is impossible to tell exactly when AGI will come, but current technology looks damn promising.


> If you can generate realistic faces, then you can likely also generate realistic action and thought sequences simply with an even larger model.

I'd really be willing to take bets that, for the next thirty years (paid out at the end of that period), nothing remotely like AGI will happen. Should fund my retirement pretty nicely.


What's your standard for AGI? Passing the Turing test?

I would also be willing to bet a lot that will happen within 30 years.


AGI being at least on par with humans when it comes to creativity and invention? I.e. writing great novels, coming up with compelling philosophy, coming up with new, good mathematics etc.


Current technology doesn't look very promising unless we somehow come up with a "computer" architecture that is as scalable and energy-efficient as a brain. Machine learning and deep learning aren't exactly new. The big change that made them possible is the availability of faster hardware. If transistor density increases stop before we can reach AGI, or even just a dumbed-down version of it, then we might never reach AGI at all.


Driving is fun. Being a passenger is not so much fun. Technical issues aside, for this simple reason I don't see self-driving cars becoming popular.


Driving is not fun. Being a passenger is much more fun. For this simple reason, I see self-driving cars becoming popular.

There are different opinions out there. I for one would rather learn and relax than drive.


Infrastructure, machine learning, sensors, and other components are there. We already have self-driving cars.

Now human adoption and acceptance is the problem.


We have self-driving cars; they just can't go where we want and need them to go, and that won't change for decades.


We have them in very limited settings, and fairly small numbers. Over time, they'll work in more and more environments, and that lessens their limitations. That will then increase adoption, and that will in turn drive acceptance. Resistance against self-driving cars is (in my opinion) based around either fear for loss of job, or fear of an image of self-driving cars that is not very representative.


No we don't. We'll "have" self driving cars when I can ride them in Seattle when it's raining. Right now this is not doable.



