Hacker News
A Robocar Specialist Gives Tesla ‘Full Self-Drive’ an ‘F’ (forbes.com/sites/bradtempleton)
39 points by mhb on Jan 14, 2022 | 22 comments



The most optimistic take I can give is that even if you fixed all of the jerkiness and all-around mistakes in the driving situations the author points out, you're still left with the problem of being unable to reasonably guarantee what the car is going to do next. And so you either just trust the system implicitly, or... what?

So much of the training in dealing with automation in airplanes (achieved in a far different way), for instance, is about knowing you can reason effectively about what the automation is doing and why it is doing it, and knowing how to recover in case something happens that you don't like. Further, the use of those technologies hinges on the very important concept of making sure that at all times you can stay ahead of the automation; when you can't, it simply isn't a scenario where the automation is to be trusted.

That's been the case for many, many decades in aviation, and overall the results are very, very good. Yes, from the automation side the problem it is tackling is much simpler, but I think the easiest part of it is that the need to have the system make split-second decisions is REALLY, REALLY small. That's just not a thing that happens when you are flying. You can trust the automation, and have it be an ally, because the chance that at any given moment it'll decide to execute a plane-crashing maneuver is simply not there.

I don't see how that's so easy to avoid in the car scenario. The multiple drivers in intersections, pedestrians, etc., cause you to CONSTANTLY deviate from your planned course of action at a moment's notice. Anyone who has driven for any stretch of time has had their ability to quickly change course called upon very, very frequently. We seem to be focused on how we can get cars to do the same, but I sincerely wonder whether that would be sufficient. Even if your car could drive brilliantly, if you knew that at any given moment it might not, could you really relax and trust it? One could say it's no different from being driven by someone else, which is a fair point, but in that case I can reason very effectively about the driver making those decisions for me (i.e. I have folks I think are unsafe drivers, so I don't get in their car, etc.). But what about machines? I do think that communicating to drivers the safety of what the car is doing will prove to be the ultimate challenge, beyond navigating tricky road conditions.


Not to steer too far from your points, but one of the things I have noticed is that these self-driving systems are not failing at being good AIs; they are failing at basic modeling of their physical environment. There are lots of examples of this (not specific to Tesla), including cars not being able to tell there is something on the road in front of them and cars mistaking headlight reflections for lane lines.

It is this behavior that is the main cause for concern.


I mean, self-driving seems trivial if you have a good model of the physical environment. Look at how well cars in video games self-drive. That's why I always assumed it was perception, and physical modelling more generally, that was the main issue.


This would be the human equivalent of DUI, which doesn’t give much reason for optimism.


that's exactly the problem - when the model of the scene doesn't reflect reality, the whole thing breaks down.

Perception and modeling are everything; policy is nearly trivial in comparison, until you get to L4+.


The self-proclaimed "Robocar Specialist" is no specialist; he's an investor - see the bottom of the article: "I founded ClariNet, the world's first internet based business, am Chairman Emeritus of the Electronic Frontier Foundation, and a director of the Foresight Institute. My current passion in self-driving vehicles and robots. I worked on Google's car team in its early years and am an advisor and/or investor for car OEMs and many of the top startups in robocars, sensors, delivery robots and even some flying cars. Plus AR/VR and software. I am founding faculty and computing chair for Singularity University, and I write, consult and speak on robocar technology around the globe."

And, by the way, in 2020 he was writing .... New Tesla Autopilot Statistics Show It’s Almost As Safe Driving With It As Without - https://www.forbes.com/sites/bradtempleton/2020/10/28/new-te...


> And, by the way, in 2020 he was writing .... New Tesla Autopilot Statistics Show It’s Almost As Safe Driving With It As Without - https://www.forbes.com/sites/bradtempleton/2020/10/28/new-te...

This is misleading, though - to read the title, you'd wonder why the change in perspective. But if you read the article, it notes that these are Tesla's statistics, not anyone else's. Some choice quotes:

"The report they publish is highly misleading, and strongly suggests the answer is “greatly safer with it on.” That’s not true, and the most recent quarter numbers show it was probably slightly safer with Autopilot off."

"At first reading, the Autopilot number looks almost twice as good as the non-Autopilot number. The problem is, this is what you would expect, because according to research at MIT, 94% of Autopilot use is on limited access highways."


Your comment seems like a classic ad hominem: attacking the author rather than the point he is making. I don't see how it's relevant to the content of the video. I watched the whole thing and found it to be non-sensationalist to the point of bordering on milquetoast, with clear claims and backing examples.

(And, setting aside that your argument doesn't hold much merit anyway, I do think it's reasonable to call someone with the bio you provided a "robocar specialist.")


I rented an XC90 on a trip recently and realized it's miles ahead of Tesla in the basics, like lane assist on the narrow roads I was driving on (in Europe). It doesn't even claim to be a self-driving car, yet it's still a better self-driving car than a Tesla.

Basic things that I've seen happen in the Tesla back home didn't happen. It didn't veer off the road sharply when lane markings disappeared; it sensibly continued. The steering was smooth. It didn't disengage constantly; it was fine to stay on unless there genuinely were chaotic conditions ahead.


I think the author's criticisms are fair. I would not use the beta with passengers in the car unless they asked. It is too jerky and unpredictable. It's definitely fun to use and I'm enjoying testing it, but it has a ways to go.

That said, the fact that this system works anywhere, while competing systems are fenced to some degree (Waymo to Chandler/Phoenix, SuperCruise to highways it has mapped), has to be given some merit. Tesla is focused on the general solution instead of a super smooth but very geographically-restricted solution, which (I would imagine) doesn't scale.


Back in 2015 I remember a coworker gushing about the promise of monetizing your self-driving Tesla. I am cheering for full self-driving, but we don't seem to be close. Perhaps we shall experience a self-driving winter in the aftermath of this.


Tesla could be to self driving what Theranos is to blood testing.


Instead of defrauding investors, though, Tesla used its fantasy timelines to get those investors' money by defrauding customers, so it's OK.


The jerky/jittery control inputs are interesting to me; I've noticed them, or heard people mention them, in several of these videos.

Makes me wonder why there wouldn't be some sort of smoothing function applied (with a check that nothing is in the way, so genuine emergency maneuvers aren't damped out). It's just really weird seeing the steering wheel snapping side to side and the car getting on and off the accelerator so quickly. I had the same thought about the "streetscape around you" display on the dashboard when I test drove a Model 3 a few months ago: things it was detecting nearby kept bouncing and jittering around. Just kind of unsettling.
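To be concrete about the kind of smoothing I mean (purely a sketch of my assumption; the names, parameters, and the emergency flag are all made up, and this is obviously not how Tesla's controller actually works), something as simple as an exponential moving average on the steering command, bypassed when an evasive maneuver is flagged:

    class SteeringSmoother:
        """Exponential moving average on steering commands; bypassed for emergencies."""

        def __init__(self, alpha=0.2):
            self.alpha = alpha        # 0 < alpha <= 1; smaller = smoother but laggier
            self.smoothed = 0.0       # last smoothed steering angle (degrees)

        def filter(self, raw_command, emergency=False):
            if emergency:
                # Pass the raw command straight through when evasive action is needed.
                self.smoothed = raw_command
            else:
                self.smoothed += self.alpha * (raw_command - self.smoothed)
            return self.smoothed

You'd call filter() on every control tick, so a noisy planner output gets low-pass filtered before it reaches the steering rack, while a flagged emergency still gets the full raw command immediately.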


This is one of the things that unnerved me as well the one time I drove a Tesla. I don't know if the display accurately represents the car's model of the world around it, but it seems to me that you'd want to assume that real-world objects don't just disappear when you can't see them anymore; they have a size, shape, and velocity, and you can estimate their location based on relative movement (with, of course, ever-increasing inaccuracy) even if you can't clearly see or identify them at the moment.

Though, it seems like such a basic thing that there's probably some tradeoff that I'm not thinking of.


The reality is that all perception systems occasionally wink out, but they also have object-permanence algorithms that know objects are not actually vanishing. Humans are not so different: you see a car coming up behind you, you look away, you presume it didn't vanish. A good visualization would be to show where you predict the object is in a different colour.


This is called tracking and it’s a common problem in robotics. Basically you are trying to figure out 4 things:

- If I stop detecting an object, is it still there?

- If I do detect something, is it noise or real?

- If I have multiple detections, are they the same object or different ones?

- Do detections I'm seeing now correspond to an object I've seen in the past?

This is normally accomplished with a statistical model that takes the noisiness of detections, the physics of how objects move, etc. into account.
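As a toy illustration of the "is it still there?" part (a bare-bones 1D constant-velocity Kalman filter; all names and numbers here are invented, and this is not any particular vendor's tracker), you keep predicting where the object should be even when detections drop out:

    import numpy as np

    class ConstantVelocityTracker:
        """Toy 1D tracker: [position, velocity] state that coasts through missed detections."""

        def __init__(self, x0, dt=0.1):
            self.x = np.array([x0, 0.0])                # state: [position, velocity]
            self.P = np.eye(2)                          # state covariance
            self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
            self.Q = np.eye(2) * 0.01                   # process noise
            self.H = np.array([[1.0, 0.0]])             # we only measure position
            self.R = np.array([[0.5]])                  # measurement noise
            self.missed = 0                             # consecutive frames with no detection

        def predict(self):
            # "Object permanence": advance the estimate even with no new detection,
            # at the cost of ever-growing uncertainty in P.
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[0]                            # predicted position this frame

        def update(self, z):
            if z is None:                               # detection winked out this frame
                self.missed += 1
                return
            self.missed = 0
            y = np.array([z]) - self.H @ self.x         # innovation (measurement residual)
            S = self.H @ self.P @ self.H.T + self.R     # innovation covariance
            K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(2) - K @ self.H) @ self.P

Each frame you'd call predict(), then update() with the latest detection (or None when it winked out); once missed passes some threshold you delete the track. The association questions (noise vs. real, same object vs. different) sit on top of this, typically by gating detections against each track's predicted position and uncertainty.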

Getting tracking right can be very hard, but based on this report, Tesla's tracking is behind some of its competitors'.


Given the timelines this is happening on, it would not be surprising if we end up with a middle-of-the-road solution that involves some kind of road marker/beacon standard to provide points of reference for these systems. It only took a decade or so to retrofit LED streetlights.


Out of all the problems with autonomous driving, I'd argue that mapping is one of the easier ones. AFAIK most competitors have generally figured that problem out, and as the article points out, Tesla being bad at it comes down entirely to its own bad macro decisions. No high-detail maps as a basis, paired with relying mainly on vision: what are they thinking?


... a failed analogy to human driving, it seems. If we can drive successfully with just camera input, then surely the AI ought to be able to accomplish the same, right?!?! This "first principles" point seems to have governed everything that came afterwards, with (quite literally) deadly consequences.


The author of this piece has previously consulted for Waymo, FYI.


He mentions the companies he has worked for at the top of the article. "I’ve worked for and advised a wide variety of companies including Waymo, Zoox, Cruise, Starship, giant car OEMs and many others."



