
The big difference is that Google's cars are actually driving themselves. Tesla's cars will collect a lot of data, but about human driving. The actual automated-driving software will have to be tested separately.



Actually Tesla has access to two sources of large-volume real-world data:

1. When the driver has full control of the vehicle

2. When the 'autopilot' is engaged and the driver is ready to intervene if necessary

So if the AI passes the safety test on Type 1 data, Tesla can promote it to being tested on Type 2. And if it passes that safety test it can be promoted to full autonomous control.

The 'autopilot' mode effectively does for Tesla what Google's test drivers do, but for free and on a much larger scale. Seems to me Tesla has a very strong hand here.


There's a third type as well - 'Shadow Mode', where the software runs constantly but the driver is in full control.

So if there's an accident, Tesla can check whether the autopilot would or could have avoided it. If they can turn around to lawmakers and say "X% of accidents could have been avoided if hands-off autopilot were legal", it should help speed up the regulatory side of things.
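A minimal sketch of how that counterfactual check might look, assuming a shadow planner whose output is logged but never actuated. All names here are hypothetical; note that divergence from the driver alone doesn't prove the crash would have been avoided:

```python
# Hypothetical shadow-mode log: the planner runs on live sensor data,
# but its output is only recorded, never sent to the actuators.
from dataclasses import dataclass

@dataclass
class Frame:
    human_brake: float    # 0..1, what the driver actually did
    planner_brake: float  # 0..1, what the shadow planner would have done
    collision: bool       # did a collision occur at this frame?

def shadow_report(frames):
    """Count accident frames where the planner braked notably harder than the driver."""
    accidents = [f for f in frames if f.collision]
    diverged = [f for f in accidents if f.planner_brake > f.human_brake + 0.2]
    return len(diverged), len(accidents)

frames = [
    Frame(human_brake=0.0, planner_brake=0.9, collision=True),  # planner would have braked
    Frame(human_brake=0.8, planner_brake=0.8, collision=True),  # both braked, crash anyway
    Frame(human_brake=0.1, planner_brake=0.1, collision=False),
]
diverged, total = shadow_report(frames)
print(f"{diverged}/{total} accidents where the shadow planner acted differently")
```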


Which is utterly disingenuous too.

"It would have avoided this accident" (by braking or steering) says nothing about what happens next: "... but it would still have been in a collision 0.42 seconds later."

For all the collisions it would have avoided, there's another subset where it would only have delayed the collision. But that won't be mentioned, because it doesn't fit the narrative.


But this way they can't tell how many accidents a hands-off autopilot could have caused. The number of accidents avoided is not, on its own, enough to say the technology would decrease the total number of accidents.


That is only partially true. They can resimulate what the automation _would_ have done given all the telemetry and video data leading up to the event.

This is why "partial automation" for the initial data collection still produces valid results. You can replay data against updated models and do "what if" testing without actually sending the car back out on the road again.
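A rough sketch of that "what if" replay idea, assuming a telemetry log of (observation, human action) pairs; the model, log format, and action labels are all toy stand-ins, not anything Tesla has described:

```python
# Hypothetical replay harness: run a newer model over previously recorded
# telemetry without putting a car back on the road.
def replay(telemetry_log, model):
    """Yield (recorded_human_action, action_the_model_proposes) pairs."""
    for observation, human_action in telemetry_log:
        yield human_action, model(observation)

# Toy stand-ins: observations are speeds (km/h), actions are 'brake'/'coast'.
log = [(55, "coast"), (120, "brake"), (80, "coast")]
model_v2 = lambda speed: "brake" if speed > 70 else "coast"

# Each disagreement is a case worth investigating by hand or in simulation.
disagreements = [pair for pair in replay(log, model_v2) if pair[0] != pair[1]]
print(disagreements)
```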


They don't have to have that information if what they're angling for is a sound bite to help get regulatory approval.


No, it can run itself in the background and check how it would act differently from a human driver in exceptional cases. Like a dry run.


Yes, but it has to understand the difference between minor human errors (allowing more drift than the autopilot would) and actually important differences (braking ahead of a potential collision that the autopilot can't see).


So... like linear regression, removing outliers to get a better fit?
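In that spirit, roughly. A toy sketch of the fit, drop-the-worst-points, refit idea; purely illustrative (real pipelines use principled robust methods such as RANSAC), and all names are made up:

```python
# Fit a line by ordinary least squares, drop the points with the largest
# residuals, then refit on the rest.
def fit_line(pts):
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return slope, my - slope * mx

def fit_without_outliers(pts, keep=0.8):
    slope, b = fit_line(pts)
    # Sort points by how far they sit from the first fit, keep the closest 80%.
    by_resid = sorted(pts, key=lambda p: abs(p[1] - (slope * p[0] + b)))
    return fit_line(by_resid[: int(len(pts) * keep)])

pts = [(x, 2 * x + 1) for x in range(10)] + [(5, 40)]  # one gross outlier
slope, intercept = fit_without_outliers(pts)
print(round(slope, 2), round(intercept, 2))  # recovers y = 2x + 1
```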


Kind of. There are a lot of different machine learning approaches, but at least with neural networks you essentially have a black box (the network) that you feed inputs into, and it spits out which category they get classified into. When learning, you know what the right category should be, so you go through the network in reverse (backpropagation) and update the internal weights by an amount proportional to the difference between the desired output and what was actually produced.

Eventually (with a lot of handwaving) the difference converges towards 0 and the network gives the right answer.
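The error-driven update described above can be shown with a deliberately tiny example: a single weight and no hidden layers, so nothing like a real driving network, but the same "nudge the weight in proportion to the error" rule:

```python
# One weight, adjusted by a small step proportional to the output error.
def train(inputs, targets, lr=0.1, epochs=200):
    w = 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = w * x              # forward pass: the network's guess
            w += lr * (t - y) * x  # update: step proportional to the error
    return w

# The target relation is y = 3x; the learned weight converges towards 3.
w = train([1.0, 2.0, 3.0], [3.0, 6.0, 9.0])
print(round(w, 3))
```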


True, but my point was more about scale. Google can generate N amount of data with its small fleet; Tesla will soon be able to reach 10^5 N. If they're able to process it properly, they may pass Google's tests quickly.


I don't know if it's that easy. There are tons of images and text lying around, but research is focused on just a few datasets. Sometimes it's hard to make use of 100x more data.


Research is focused on just a few datasets in that case because the data needs labelling.

In this case, what you need is mostly "what would most humans do?"

There would be things to refine about that (e.g. preventing speeding; analysing how humans reacted right before crashes and improving on those responses), but as a starting point it is immensely useful.


Isn't this a machine learning problem? If so, then more data means more learning.



