Decent thought experiment, but the whole "wolf in sheep's clothing" part is either a link-bait title or a really bad analogy. A more appropriate analogy would be "a head fake," as the late Randy Pausch so aptly put it.
Probably just a really bad analogy. They also used "vicious cycle of driving and learning" where "virtuous circle" (which I take to mean a system with positive feedback) would be more appropriate.
Well, both vicious and virtuous cycles involve positive feedback (i.e. in the direction of current change). The difference is that virtuous cycles tend towards positive outcomes.
The whole piece uses terms with negative connotations to convey what I think is meant to be a positive message of being impressed by ingenuity. So, I'm not sure which is the more charitable interpretation: he was extremely careless in writing the post, or was deliberately misleading to get views.
He qualified it when he wrote the piece, calling it "sort of a 'wolf in sheep’s clothing'" in the actual article, then took off the "sort of" when he made the title. For that reason, I think it's probably sensationalist link bait.
That's not to suggest that it's a good analogy, because it's also a totally improper use of that idiom.
> A sheep follows the herd, but the wolf drives itself
I doubt that the author is using 'sheep' and 'wolf' to refer to the herd behaviour of the former versus the self-directed decision-making nature of the latter.
I can't offer a well-analyzed comment on the seriousness or the _wolfiness_ of this Google initiative; I haven't given it deep thought yet. I am, however, unable to understand the argument for why what Google is doing is bad. The author elaborates on the possibility of Street View feeding the driverless project and the latter increasing the efficiency of the system, but the reasoning behind this being vicious is missing.
"A Wolf in Sheep's Clothing is an idiom of Biblical origin. It is used of those playing a role contrary to their real character, with whom contact is dangerous." - Wikipedia. Role contrary to real character, possibly. Dangerous, I can't see how.
If I squint my eyes and ignore all the facts that are already out there about Google's driverless cars, then I suppose this could be true.
I sometimes enjoy these sorts of articles. They're the fan-fiction of the tech world and it's no different to the sort of conjecture people like to come up with for Apple, which can be shameless fun to read.
But when there's already a ton of information out there about how Google's driverless cars work, it just seems cheap and hollow.
> all the facts that are already out there about Google's driverless cars
Are you referring here to the fact that the driverless cars follow pre-determined paths rather than reading traffic signals? Or do you just mean that the proposed motivation doesn't really hold water? Because it seems pretty plausible to me that Street View data could be very useful to the driverless car, even if other data is more important and even if Street View is better motivated on its own.
Google's driverless cars have had a lot of PR recently, with some pretty decent coverage of how they actually work.
The cars don't follow predetermined routes. At present they do learn routes, but not using Google Street View data. Actually, the opposite is true: the intention is for the driverless cars to generate 3D data for Street View.
According to this blog post, the map data is pretty important.
> Two things seem particularly interesting about Google's approach. First, [Google's driverless car] relies on very detailed maps of the roads and terrain, something that Urmson said is essential to determine accurately where the car is. Using GPS-based techniques alone, he said, the location could be off by several meters.
Everyone who took Sebastian Thrun's Udacity "Building a self-driving car" course (or anybody who has had training in robotics) will see that there's a whole lot more to building a self-driving car than mere training data.
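For anyone who hasn't taken it, a concrete example of those non-data components is control. Below is a minimal sketch of a PID steering controller in the spirit of what the course builds; the gains and the toy plant model are my own illustrative choices, not values from any real system:

```python
def pid_steering(cte_history, Kp=0.1, Kd=0.5, Ki=0.001):
    """Steering command from a history of cross-track errors (CTE)."""
    p = cte_history[-1]                                   # present error
    d = cte_history[-1] - cte_history[-2] if len(cte_history) > 1 else 0.0
    i = sum(cte_history)                                  # accumulated error
    return -Kp * p - Kd * d - Ki * i

# Toy closed loop: a car starting 1 m off the lane center steers back toward it.
cte, history = 1.0, []
for _ in range(200):
    history.append(cte)
    cte += pid_steering(history)  # crude plant: steering shifts the CTE directly
print(f"final cross-track error: {cte:.3f}")
```

No amount of logged camera frames replaces this kind of component; the data helps perception, not the whole stack.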
It could be helpful in certain scenarios, but it's surely not enough for the cars' data needs to be the main reason behind Street View. As the author himself notes, the map business is huge, and it's even bigger when coupled with always-connected, location-aware devices.
I took most of Thrun's course. There was a lot about robot localization based on pre-existing map data...exactly what StreetView is collecting.
If the StreetView cars are using LIDAR, they have a lot of high-quality 3D maps, perfect for robot localization. Even if they're just taking photos, various groups have demonstrated building 3D models from collections of photos.
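For the curious, here's what that looks like in miniature: a toy 1-D Monte Carlo localizer against a pre-built landmark map, in the spirit of the course material. The world, landmark positions, and noise values are all made up for illustration; a real car would match LIDAR scans against a 3D model rather than ranging to point landmarks:

```python
import math
import random

LANDMARKS = [20.0, 40.0, 90.0]   # illustrative mapped landmark positions
WORLD = 100.0                    # circular 1-D world, positions in [0, 100)

def ranges(pos):
    """Distances from pos to each mapped landmark."""
    return [abs(pos - l) for l in LANDMARKS]

def localize(steps=25, n=1000, motion=5.0, sensor_noise=1.0):
    true_pos = random.uniform(0, WORLD)
    particles = [random.uniform(0, WORLD) for _ in range(n)]  # uniform prior
    for _ in range(steps):
        # Motion update: move the car and every particle, with noise.
        true_pos = (true_pos + motion) % WORLD
        particles = [(p + motion + random.gauss(0, 0.5)) % WORLD
                     for p in particles]
        # Measurement update: weight particles by how well they explain
        # the car's (noisy) observed distances to the mapped landmarks.
        z = [r + random.gauss(0, sensor_noise) for r in ranges(true_pos)]
        weights = [math.exp(-sum((zi - ri) ** 2 for zi, ri in zip(z, ranges(p)))
                            / (2 * sensor_noise ** 2))
                   for p in particles]
        particles = random.choices(particles, weights=weights, k=n)  # resample
    estimate = sum(particles) / n  # crude mean; ignores wraparound
    print(f"true position: {true_pos:.1f}, estimate: {estimate:.1f}")

localize()
```

The key point: the quality of the estimate depends directly on the quality of the pre-existing map, which is exactly the asset StreetView builds.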
Yes, that's right. What I find hard to believe is that the main reason behind the project is the self driving car. I can easily believe it's one of a set of objectives.
> It could be helpful in certain scenarios, but it's surely not enough for the cars' data needs to be the main reason behind Street View.
Your logic doesn't go through. Driverless cars are potentially worth hundreds of billions of dollars annually. It could easily be worth hundreds of millions of dollars to make them 2% better.
How so? If that data alone won't bring significant improvements, how could it be the primary reason behind its collection? Much more likely they are collecting the data mainly for other applications and have the self-driving-car data as a nice add-on.
Buying paper clips won't bring a significant improvement to my office productivity--less than 1%, I would say--but it's still true that the primary reason for buying paper clips is to improve productivity. It's just that paper clips are so cheap compared to the value of my total office output that it's still worth buying them for a tiny improvement. This is all still true if I also occasionally use the paper clips to construct tiny toy ninjas for fun.
I worked for a company that made the first two Street View systems. They also contributed a lot to the robocar project and were eventually bought by Google to work on Chauffeur.
The sensors and math that provide the perception component of Chauffeur are, for conversational purposes, identical to those of Street View. But the two teams are not working together; the demands of each project are too different.
It also feels like in some cases the data would simply be useless, e.g. data collected in Rome and other historical cities by pedaling bikes through pedestrian areas.
I agree. The author mistakenly assumes that any kind of driving data is all that is needed to build a driverless-car system, when what's been revealed so far suggests the current level of technology requires the same route to be driven many times. I.e., it's not generalised yet, and it's not the Trojan horse it's purported to be. (That, and the driverless car relies on sensor technology with a resolution that wasn't available when the Street View program originally launched.)
Prof. Thrun was part of the team that used laser range-finders to assist the image stitching in StreetView. Since both sensors (rangefinders and cameras) were mounted on a mobile vehicle, wheel odometry and GPS "ground truth" must have been available at each waypoint as well.
The author of the post suggested that Google might have used the GPS data (and possibly rangefinders) to create a simulated world to teach the self-driving car (useful for testing new tweaks?). This is a good idea and probably did not take them a lot of effort given the available log of data from StreetView.
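If they did, the mechanics wouldn't be exotic. Something like the sketch below (entirely my speculation, with an invented log format and grid resolution) turns logged pose-plus-rangefinder hits into a static occupancy grid that a planner can be tested against offline:

```python
import numpy as np

# Speculative sketch: build a simple occupancy-grid "world" from logged
# (GPS pose, rangefinder hit) pairs, then reuse it for offline testing.
grid = np.zeros((100, 100), dtype=bool)   # 100 m x 100 m, 1 cell = 1 m

# Hypothetical log entries: ((car_x, car_y), (hit_x, hit_y))
log = [((10, 50), (10, 58)),
       ((20, 50), (20, 57)),
       ((30, 50), (30, 59))]

for (_, _), (hx, hy) in log:
    grid[hy, hx] = True   # mark the cell where the laser hit an obstacle

# The grid is now a frozen, replayable environment: new planner or
# controller tweaks can be tested against it without a real car.
print(grid.sum(), "occupied cells")
```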
Thanks for this timeline. This matches my impression.
As further evidence that the driverless car concept was getting attention well before Street View matured, remember that Sebastian Thrun led the Stanford team that won the 2005 DARPA Grand Challenge. See:
I thought Google's Street View cars couldn't drive very fast, in order to be able to take those pictures. When they're not driving at full speed, the captured data isn't really a good learning set.
Also the link says they get to "know every road by definition", but you could read that almost entirely from TomTom's maps as well. Roads change, weather conditions change, etc. I'm not so sure Google really uses their Street View car data for their self-driving cars. Though it is almost certainly a part of it, I don't think this is the crux to solving the self-driving car problem.
> I thought Google's Street View cars couldn't drive very fast, in order to be able to take those pictures. When they're not driving at full speed, the captured data isn't really a good learning set.
I'd argue that the vast majority of the roads where one drives faster than 50 km/h are very simple. This is especially true for highways; they have to be, because humans are bad at thinking at 120 km/h and even worse at surviving collisions at that speed. There's an overabundance of signs and road markings, and an incredible amount of effort has been put into making it relatively hard for drivers to behave irrationally. Compare that with, say, a multi-lane roundabout in the inner city.
Once you've got enough data to reliably survive that kind of situation, you're 90% of the way there.
> Also the link says they get to "know every road by definition", but you could read that almost entirely from TomTom's maps as well. Roads change, weather conditions change, etc.
'Know' here doesn't mean just knowing the road as on a map. The OP meant 'know' (I presume) in the sense that the data on how the Street View driver drove that road, and under what conditions (weather, traffic, et al.), is learned by the machine-learning algorithm.
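In other words, something like behaviour cloning. As a sketch of the idea (the features, labels, and choice of scikit-learn are purely mine for illustration, not anything Google has described), logged driving records become (conditions -> driver action) training pairs:

```python
from sklearn.ensemble import RandomForestRegressor

# Hypothetical features per logged frame: [road_curvature, is_raining,
# traffic_density]; label: the speed the human driver actually chose.
X = [
    [0.00, 0, 0.1],   # straight, dry, empty road
    [0.00, 1, 0.1],   # straight, raining
    [0.08, 0, 0.6],   # curve, dry, heavy traffic
    [0.08, 1, 0.6],   # curve, raining, heavy traffic
]
y = [110.0, 90.0, 55.0, 45.0]  # km/h chosen by the human driver

model = RandomForestRegressor(n_estimators=100).fit(X, y)
print(model.predict([[0.04, 1, 0.3]]))  # predicted speed for unseen conditions
```

The model generalises across situations rather than memorising individual streets, which is the distinction the parent comment is drawing.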
The author is going to be even more blown away when he finds out Google has been using reCAPTCHA to train image recognition on Street View data. Of course, seeing Street View as simply a long bet purely for self-driving cars seems like a narrative fallacy.
Google's self-driving car's main sensors are a 360 degree rotating laser and a radar, NOT a camera... unless Sebastian Thrun has been lying in his Udacity CS 373 course.
I believe (speaking from the Udacity material) it also uses a stereo camera setup for localization (e.g. where the car is within a lane). Sebastian mentioned something about how they could deal with rain, but not something like snow that covers the visible references.
Street view cars all work during the daytime. There is no nighttime street imagery on streetview, and anecdotally, I've never seen a streetview car at night.
Having self-driving cars during daylight hours only is still an ambitious goal, though.
I don't think that would matter. Google's driverless cars use 3D laser scanners, which don't require any ambient light. Plus, computers can theoretically see a lot more of the spectrum, and therefore a lot more than we mere mortals. They could look at the road ahead and know exactly what temperature it is, and possibly even see black ice. They could see a ninja on a black horse, at night, in the rain and fog, from miles away using thermal imaging.
Daylight driving, to a machine, would probably be more difficult than night time driving simply because there's more stuff happening on the roads during the day.
It sure fits together nicely, but I doubt this was the plan all along. Google's general strategy seems to be to collect as much data as possible, just in case it might become useful one day. It was a good decision to load up the Street View cars not just with cameras for pictures, but with all kinds of sensors to collect a bunch of potentially useful metrics. That data is becoming handy for quite a few things now.
Fact 1: Sebastian Thrun co-developed Google StreetView.
Fact 2: Sebastian Thrun is developing the driverless car.
A lot more goes into a self-driving car than data on how drivers drive. That's maybe 3% of the problem. Nevertheless, assuming Sebastian is too dumb to make the connection would be attributing a fairly high level of stupidity to him. That's a pretty bad assumption.
Fact 1: Elon Musk co-founded PayPal.
Fact 2: Elon Musk is developing rockets, with hopes of one day sending people to Mars.
It is clear that Elon Musk is only trying to corner the mobile payments market on Mars. Don't assume Elon is too dumb to realize the market potential of an entirely new planet!
Ok, let's be serious. Street View data may or may not be useful for the driverless cars. But the original, primary motivation for Street View was the ability to build their own map data and remove their dependency on existing map providers like Tele Atlas. Driverless cars were frankly too far out, and not close enough to their primary business model, to be worth it at the time. And being able to remove their map-licensing requirements gave them infinitely more freedom at a time when their mapping product was especially important to them.
If you have ever used street view to catch a glimpse of your destination at an unknown address or to look for key landmarks at important intersections, you'll notice right away that the Street View data is the last thing you would want powering an autonomous car. Many of the mapped roads only show the view from one lane, and if the road is more than one lane wide, you're missing data on the positions of the other lane(s). I can't count the number of times the view I wanted was on the other side of a median or divider, with no data for that side. Following a pre-mapped course on a multi-lane road using only Street View data would be a nightmare. Reminds me of the stories of people pulling U-turns in the middle of a freeway because their GPS told them to.
I think you misunderstood the point of machine learning. The cars don't drive by memorization/knowledge of each street but rather the knowledge of what has been done in similar situations thousands of times before.
So that cars can drive with the 80-90% success rate that voice recognition has achieved with similar methods? If you can model your entire surroundings several times a second to cm precision, you don't have to settle for machine learning and "just about perfect".
It only just occurred to me: are these self-driving cars expected to pump their own gas? Surely Google's not prepared to build a complete network of fuelling stations that are compatible with unmanned cars?
"You put a camera on the front of your car, and you set it to capture frequent images of the road ahead of you while you drive. At the same time, you capture all the data about your driving -- steering-wheel movement, acceleration and braking, speed"
I'm curious: how does one capture the car data? In the self-driving car, the ML and camera part seems to be easier than the interface with the car's mechanics, yet there's generally little mention of that part.
I know someone who used an Arduino to pull in sensor data from their vehicle. I don't think the project is online, but they got a very solid amount of data over a period of 3 months, including braking, acceleration, and steering. All of this without any distinct modifications to the body.
There are diagnostic devices for most modern automobiles, some of which can be bought by consumers. For example the VAG-COM Diagnostic System for Volkswagens: http://vag-com.de/
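For most modern cars the standard OBD-II port covers a lot of this without custom hardware. Here's a minimal sketch using the python-obd library with an ELM327-style adapter; note that the standard PIDs expose speed, RPM, and throttle position, while steering and brake data usually require direct CAN-bus access or add-on sensors, as in the Arduino setup above:

```python
import time
import obd

connection = obd.OBD()  # auto-detects the adapter's serial port

for _ in range(600):    # ~60 seconds of logging at ~10 Hz
    speed = connection.query(obd.commands.SPEED)            # vehicle speed
    rpm = connection.query(obd.commands.RPM)                # engine RPM
    throttle = connection.query(obd.commands.THROTTLE_POS)  # accelerator
    if not speed.is_null():
        print(speed.value, rpm.value, throttle.value)
    time.sleep(0.1)
```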
This article is flat out wrong. Google's autonomous vehicles do the data collection for Google's autonomous vehicles.
They drive ahead of time around every environment in which they want to operate, with a bunch more sensors and more accurate localisation than the streetview vehicles. The lidar on the streetview vehicles is intended to provide a 3D surface model of the buildings lining each street. I find it very doubtful that they'd attempt to do supervised learning of human driving behaviour from the streetview vehicles, rather than the actual automated ones.
Actually, Google Translate was an enormous leap forward in machine translation quality. It won a number of awards for its astonishingly good performance. And its design premise is that you can use really simple algorithms if you have crazy amounts of training data (a then-controversial approach called 'statistical machine translation', as opposed to rule/grammar-based).
Look at the paper "The Unreasonable Effectiveness of Data" by Norvig (Google's head of research, and AI demigod) et al.
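To make the premise concrete, here's a toy of my own (not Google's actual approach, which used far more sophisticated alignment and language models): with enough parallel text, even bare co-occurrence counting starts to produce a usable word-translation table, no grammar rules required:

```python
from collections import Counter, defaultdict

# A tiny "parallel corpus" of (English, French) sentence pairs.
parallel = [
    ("the house", "la maison"),
    ("the car", "la voiture"),
    ("the blue house", "la maison bleue"),
    ("the blue car", "la voiture bleue"),
]

cooc = defaultdict(Counter)   # co-occurrence counts per English word
freq = Counter()              # sentence pairs containing each French word
for en, fr in parallel:
    for f in set(fr.split()):
        freq[f] += 1
    for e in en.split():
        for f in fr.split():
            cooc[e][f] += 1

def translate_word(e):
    # Score by co-occurrence rate, which penalises words like "la"
    # that co-occur with everything.
    return max(cooc[e], key=lambda f: cooc[e][f] / freq[f])

print([translate_word(w) for w in "blue car".split()])  # ['bleue', 'voiture']
```

Scale the same counting up to billions of sentence pairs and add a statistical language model to fix word order, and you have the rough shape of the approach.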
It might have been an enormous leap, and it might have won a lot of awards.
But it still produces a great deal of nonsense.
My only semi-informed opinion (hunch) is that the Google/Norvig brand of statistical approach to AI is an 80% solution to a lot of things, but that last 20% is going to be killer to get.
Right now this approach to AI is a great boost for humans, who can finish off that last 20% themselves, but I have doubts about the autonomous versions...
I am curious to see what happens with the self-driving car in real world use. And if translate ever gets much better than it is today. Or if, for that matter, Google search gets much better than it is today.
The driver-less car seems like just a side-project of Google's primary business goal: to steal Social Network marketshare from Facebook in the form of shoving Google Plus down our throats. I mean, what's more important, innovating and giving people a technology which will improve their lives, or forcing them to use a website in which to show them cat pictures and click on context-aware advertisements?
You forgot to add the Project Glass thing. Ads would literally be in your face!
Sometime in the future, they'll also be able to control people's psyches through the whole Glass thing to mask their evildoing.
Is the author seriously implying that a car loaded down with lots of cameras and averaging 30 MPH is Google's ultimate vehicle for training driverless cars?