Production capacity is the most important factor for data collection.
The Teslas sold in October 2018 don't need to have the same sensors as the ones sold in January 2018; the car is designed modularly. No other mass-marketed car on the planet has a comparable sensor suite integrated yet.
It's all about big numbers. Google has spent five years or more driving a handful of cars around California; that will never match billions of miles driven around the globe.
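A quick back-of-envelope calculation shows why the scale argument is hard to ignore. The fleet sizes and daily mileages below are hypothetical round numbers chosen for illustration, not figures from either company:

```python
# Rough comparison: a small dedicated test fleet vs. a large consumer
# fleet that logs sensor data passively during everyday driving.
# All figures are assumptions for the sake of the arithmetic.

TEST_FLEET_CARS = 50                 # assumption: dedicated test vehicles
TEST_MILES_PER_CAR_PER_DAY = 100     # assumption: driven full-time by operators

CONSUMER_FLEET_CARS = 200_000        # assumption: sensor-equipped consumer cars
CONSUMER_MILES_PER_CAR_PER_DAY = 30  # assumption: ordinary commuting mileage

def annual_miles(cars: int, miles_per_day: float) -> float:
    """Total fleet miles logged per year."""
    return cars * miles_per_day * 365

test_total = annual_miles(TEST_FLEET_CARS, TEST_MILES_PER_CAR_PER_DAY)
consumer_total = annual_miles(CONSUMER_FLEET_CARS, CONSUMER_MILES_PER_CAR_PER_DAY)

print(f"test fleet:     {test_total:,.0f} miles/year")      # ~1.8 million
print(f"consumer fleet: {consumer_total:,.0f} miles/year")  # ~2.2 billion
print(f"ratio:          {consumer_total / test_total:,.0f}x")
```

Even with generous assumptions for the test fleet, the consumer fleet's passive data collection outpaces it by three orders of magnitude per year.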
EDIT: Imagine the engineers working on Google's autonomous cars today. I bet many of them are polishing their resumes for Tesla. At Google, they can launch an algorithm or a new sensor type now, then wait months for feedback on how it worked; at Tesla, they'd get that feedback in days. In which lab would you rather work if you want to take part in shaping the future: the Google lab, where you have a Lexus or two assigned to you driving up to 25 miles per hour around California, or the Tesla lab, where you have 2,000 Teslas assigned to you driving in 50 different countries?
Once again: if the sensors don't work for the problem domain, the quantity of data gathered is irrelevant.
If you mounted a forward-facing video camera on every car in the world and gathered data for decades, you'd still be nowhere: you'd be missing the side and rear views. It's a thought exercise, but it demonstrates the point: if your robot car has a blind spot, all the data in the world won't fix it.
Nobody knows how good these cameras are, but every camera-based system so far has shared the same critical limitation: poor performance at night and in low visibility.
But we know that the sensor suite must be at least as good as a human's: it covers more of the visual field, plus it has other sensors. Therefore the sensor suite is sufficient to be as good as (and likely better than) a human.
It's not. For example, the eye has much higher dynamic range than any camera sensor available today. Try taking a picture at night that looks half as good as the scene does to your eyes.
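The dynamic-range gap can be put in concrete terms. Photographers measure dynamic range in "stops", the base-2 logarithm of the brightest-to-darkest contrast ratio a sensor can capture in one scene. The contrast ratios below are rough, commonly cited ballpark figures, not measurements of any specific camera or study:

```python
import math

def stops(contrast_ratio: float) -> float:
    """Dynamic range in photographic stops: log2 of the contrast ratio."""
    return math.log2(contrast_ratio)

# Ballpark assumptions, for illustration only:
camera_ratio = 2 ** 14       # ~14 stops: a good 2018-era camera sensor
eye_static_ratio = 10 ** 4   # ~10,000:1 instantaneous contrast for the eye
eye_adapted_ratio = 10 ** 9  # the eye's range once it adapts to the light level

print(f"camera:        {stops(camera_ratio):.1f} stops")       # 14.0
print(f"eye (static):  {stops(eye_static_ratio):.1f} stops")   # ~13.3
print(f"eye (adapted): {stops(eye_adapted_ratio):.1f} stops")  # ~29.9
```

The eye's instantaneous range is in the same ballpark as a good sensor, but its adaptation mechanism spans roughly twice as many stops, which is exactly what a single camera exposure at night cannot do.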