I gotta say, I was driven on residential streets/stroads from home to a restaurant by a Tesla Model 3 (using vision alone, I think? it was fairly recent) in absolutely POURING rain, and I was blown away by how good it was. It's hard to get a handle on how good this stuff is because the field is so politically charged now, but I have a hard time believing the folks who say it's never gonna happen, or at least not soon, for technical reasons.
Don't get me wrong, I'm on team ban cars and replace every stroad with a light rail corridor and bike paths, but I think self driving cars will be fine in some number of years once the haters calm down. Hard for me to believe we can't achieve better than the average shitty driver level of safety.
It works most of the time, but the issue is that "most of the time" is not good enough for these systems. Even if the failure rate is under 1%, at scale that can still mean a lot of accidents and deaths.
People often argue "oh, it will still be fewer accidents than human drivers," which is true, but the problem is that human accuracy is a very poor benchmark for autonomous systems. Autonomous systems need to be held to a higher bar, and it's better if that accountability and expectation are set from the beginning.
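To make that concrete, here's a rough back-of-envelope sketch of what "rare failures at scale" means (every figure here is an illustrative assumption, not data from this thread):

```python
# Illustrative back-of-envelope math; every constant is an assumption.
FLEET_VEHICLES = 1_000_000           # hypothetical autonomous fleet size
MILES_PER_VEHICLE_YEAR = 12_000      # rough US average annual mileage
SERIOUS_FAILURES_PER_MILE = 1e-6     # assumed: one serious failure per million miles

total_miles = FLEET_VEHICLES * MILES_PER_VEHICLE_YEAR
failures_per_year = total_miles * SERIOUS_FAILURES_PER_MILE
print(f"{total_miles:,} fleet miles/year -> {failures_per_year:,.0f} serious failures/year")
# 12,000,000,000 fleet miles/year -> 12,000 serious failures/year
```

Even a one-in-a-million per-mile failure rate produces thousands of incidents a year once a fleet drives billions of miles, which is why per-mile rates, not "it works most of the time," are the number that matters.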
> Autonomous systems need to be held to a higher bar
Why? Won't this lead to a lot of needless deaths at the hands of human drivers while we wait for driverless cars to improve? Why not roll them out once they are safer than human drivers?
I'm a driver, but I'm a safer-than-average driver. So why would I want a system that's only better than "average," where average includes drunk people, speeders, new/bad drivers, people driving on ice, etc.?
I don't think it should be illegal to use a system that's actually better than average (which is 1 accident every 18 years), but many drivers might not want to.
Also, when a company's financial success hinges on reporting its safety as being above a certain accident threshold, I think any safety statistics provided by that company should be doubted unless an independent third party can verify them.
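For scale, here's that "1 accident every 18 years" figure converted into a per-mile rate (the annual mileage is an assumed round number, not a measured one):

```python
# Converting "1 accident every 18 years" into a per-mile rate.
MILES_PER_YEAR = 13_500       # assumed: rough US average annual mileage
YEARS_PER_ACCIDENT = 18       # average-driver figure cited above

miles_per_accident = MILES_PER_YEAR * YEARS_PER_ACCIDENT
print(f"~{miles_per_accident:,} miles per accident for the average driver")
# ~243,000 miles per accident -- the per-mile bar an autonomous system must beat
```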
Because they don't make laws just for you; they make laws for everybody, including the drunk, inexperienced 16-year-old texters and the new parents surviving on 3 hours of sleep.
And those people drive on the same roads as you, so you're still affected. Don't you want other drivers on the road to be less likely to t-bone you because they ran a red light?
I’ve experienced both learning to drive in the United Kingdom, and later, in the United States (you have to test again if you immigrate). In summarizing both experiences I’m trying hard to avoid being too biased.
In the UK, almost everyone learned (and learns) in a manual/stick-shift vehicle, and if you learn and test in a car with an automatic gearbox, your license is limited and you legally can't drive a manual. Hill starts and clutch control can be great fun! Lessons and the test involved difficult city situations ranging from extremely narrow streets, through 7-lane roundabouts, to country roads both single-track and unrestricted (so a 60mph speed limit, but not necessarily safe to drive that quickly; good judgement is required). You must pass a theory and hazard-perception exercise, and the testing is government-administered.
In the US, it seems almost everyone learns in an automatic vehicle, and your license then lets you drive stick with no restriction. At least near cities, the roads you learn and test on aren't a mere 2” wider than your vehicle (measured at the mirrors), and the situations are comparatively simple as well. There's no hazard-perception test. In my state, the test is administered by the instructor and not an impartial/neutral party.
I haven’t gone looking for large datasets to support this but it feels like the “I just passed my test” driver competence is going to be different.
I would be extremely happy to buy and trust a system like Ultra Cruise if it could navigate UK roads and city situations autonomously with fewer accidents/incidents than drivers at the 75th percentile in those environments; with widespread adoption, such a system would raise the bar and improve the median safety of being a driver/participant on the roads. However, I would guess that, had they not cancelled it, being acceptably good for US driving conditions / better than the average US driver really just means you've built a system which can work in the US but absolutely won't work in London, Paris, Berlin or anywhere else.
People are less fine with risk that they can't control themselves. Every mode of transport where people don't directly control their fate is held to a much higher safety standard, and is (partially as a result) much safer: aviation, trains, buses, and so on.
> People often make arguments that "oh it will still be less accidents than human drivers", which is true, but, the problem is that human accuracy is a very poor benchmark for autonomous systems.
Never let a good solution get in the way of a good problem.
“But this medicine, while it can cure cancer, has a 1% chance of death!” “Ah shit good point, fuck it then. As long as we can still get pissy with that musk fellow.”
The thing about machine learning is that it can fail very suddenly, in a very unhuman way.
So even if self-driving works most of the time, it takes a lot of work to address weird edge cases that even the most inebriated human would not mess up, and that other drivers/pedestrians would not anticipate.
And I was in a Tesla Model 3 in sunny weather about 2 months ago, and it nearly crashed into a tow truck and a cyclist that it didn't see. It's embarrassingly bad at handling basic use cases and is pretty much the poster child for why vision alone won't work for actual self-driving.
Tesla Vision gets confused by shadows when it's sunny. That's the new phantom braking when using the Tesla Vision stack on freeways; before, it was low-hanging freeway signs that the radar picked up.
Also, it can't figure out lanes at all and is always trying to change into the wrong lane. The Navigate on Autopilot stack seemed better than the current FSD stack.
I am on the side of cars (long term), even though I hate cars, the inefficiency, the space issues, the anti-pedestrian externalities, etc.
Ultimately, to design a transport system that benefits all of society, it needs to go from point A to point B.
Light rail / public transport will always need a +somethingelse, in order to do that. Or we end up expanding the rail infra so much that we have just reinvented roads, but a little more constrained. Or we end up with car shares. Either way is fine.
I just don't see how to cater for lots of different disabilities and needs without 'cars' being a (maybe small) part of it. Regardless of what path we take, I just see the evolution converging back on a car-like vehicle for a substantial portion of the freight/transport industry (albeit 'trains' of them, but not physically connected), even if it's mostly final mile. But people and things don't want to hop between transport modes. They want to step outside their door into a vehicle, then out again at the destination.
Anyway... self-driving vehicles may be worse than some drivers currently, but they're sure better than SOME drivers I have seen. For an industry that has only had 15 years of direct investment, it's already better than bad drivers from my view. So it's almost time to start making driving licenses slightly harder to get and keep, IMO (it's so easy to get a license, and hard to take one away unless it's after an incident; there are so many very unsafe drivers).
That's a long way of saying that I concur with your comment.
Just took a Waymo ride across San Francisco 3 nights ago in hard rain, at night. A hilly complex city with bike lanes, kinda oddball medians and bollards, many pedestrians, and homeless people wandering down the middle of streets. It did great.
I've taken 5 trips so far, and all have been great, and better than the average Uber driver.
I've had 3 sketchy Uber rides out of about 10 total in the last 3 months. One older woman was peering over her steering wheel, commenting that she really can't see that well at night; she'd kinda guess and head over to the next lane, and had to abort once when she almost merged into another car. One plunged across 3 lanes of traffic without signaling while looking at the phone in her hand, twice! Another did a no-look left turn while looking at the map and almost hit a pedestrian in the crosswalk. I said "stop!" and he did, and looked up shocked. Slow enough that it would have only been a broken leg, but still...
One angle is to look at when we expect there to be data showing a new autopilot vehicle is at least equivalently safe to a new non-autopilot vehicle with modern ADAS. I don't think the performance is there yet, hard to tell when it will be.
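As a rough illustration of why it's hard to tell, here's a minimal Poisson back-of-envelope sketch of how many miles the comparison needs (the fatality rate is an assumed approximation, and the event-count target is arbitrary):

```python
# Sketch: miles needed before a crash-rate comparison is statistically
# meaningful, under a simple Poisson approximation. Assumed figures only.
import math

HUMAN_FATALITY_RATE = 1 / 100_000_000  # assumed: ~1 fatality per 100M miles

def miles_for_confidence(rate, target_events=100):
    """Miles needed to expect `target_events` events; for Poisson counts the
    relative standard error of the observed rate is ~1/sqrt(events)."""
    return target_events / rate

miles = miles_for_confidence(HUMAN_FATALITY_RATE)
rel_err = 1 / math.sqrt(100)
print(f"~{miles:,.0f} miles for ~{rel_err:.0%} standard error on the rate estimate")
# ~10,000,000,000 miles for ~10% standard error
```

On fatalities alone, tens of billions of miles are needed before the error bars get tight, which is why rarer, milder proxy events (disengagements, near-misses) end up carrying most of the argument.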
No, this is Ultra Cruise. It was GM’s competitor to Tesla FSD on city streets. Cruise (GM’s L4 robotaxi unit) is different from Super Cruise (L2 hands free highway ADAS) is different from Ultra Cruise.
Self driving is going the way of fusion power. Always just a couple years away.
I still believe divided highway trucking between major cities with last mile handoff to humans has legs, but I wonder how much the Tesla claims have poisoned the proverbial well of a more constrained system for the foreseeable future.
1. Waymo works great. It's safer than humans within its operating regime, and that regime is more than enough to make a profitable taxi service. As they collect more data, the regime expands.
2. Cruise is in the corporate penalty box for being dishonest in withholding video data from the state of CA, but that was a stupid PR move. It doesn't tell us anything about the tech, which in fact is strong.
I use Tesla FSD every day. It is not a robotaxi yet, but it makes a big difference in day-to-day life. I can do most drives without a disengagement. To be clear, it needs to get 100x better for robotaxi use. But even with its beta flakiness, it is magical.
I paid for a month of FSD over the summer to try it out, and it was an extremely stressful experience.
It was very twitchy and jarring at stop signs; there are several uncontrolled T-intersections around my house that FSD just flew through (almost striking another car one time); and I didn't like how it always tried to get into the slowest lane on the highway during rush hour.
Maybe I'll try it again this summer to see how it's improved, but some of the close calls last time soured me on the experience. It felt more like I was babysitting a new driver and less like I was being chauffeured around.
So do your regular drives just not include any tree lined roads, or have they done some mapping to fix phantom braking in your area?
And that’s a serious question, because the stress of trying to anticipate the next phantom brake event completely defeats the purpose in my mind. I’d much rather just drive myself.
Cruise has teleoperators. Their snafu revealed an intervention every 4-5 miles, which is worse than Tesla's FSD. (Cruise vehicles were also subjectively bad at driving.)
Levels mean nothing; they're determined by the company itself. Actual operations and range of operational capability are what matter.
Tesla doesn't have self-driving, though. It's Level 2. Mercedes has Level 3 cars on the road where the driver can legally watch a movie or read a book. The level matters because it dictates what the driver can legally do. Also, Mercedes accepts 100% liability while their cars are self-driving; Tesla makes the driver 100% liable. There's a huge difference.
> Also Mercedes is accepting 100% liability while their cars are in self driving
That's what I thought too, but apparently it's not that clear cut. Their manual says that the driver must be ready to take over not only when prompted by the system but also "due to obvious circumstances". It's not clear what that means — cue the lawsuits. https://safeautonomy.blogspot.com/2023/09/no-mercedes-benz-w...
It revealed that a human operator engages with the car in some way every 4-5 miles, and the (ex) CEO explained that this metric was stupid because human operators were always fully assigned.
His suggested metric is cars/operator, which was ~20 IIRC.
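As a toy illustration of how those two metrics from the thread combine (the average speed is purely an assumption):

```python
# Toy arithmetic combining the two metrics discussed above; speed is assumed.
MILES_PER_INTERVENTION = 4.5   # "every 4-5 miles" from the reporting
CARS_PER_OPERATOR = 20         # ex-CEO's preferred metric, ~20 IIRC
AVG_SPEED_MPH = 15             # assumed average city driving speed

interventions_per_car_hour = AVG_SPEED_MPH / MILES_PER_INTERVENTION
per_operator_hour = interventions_per_car_hour * CARS_PER_OPERATOR
print(f"~{per_operator_hour:.0f} operator engagements per hour across the pod")
# ~67 engagements/hour, i.e. roughly one per minute if all 20 cars are moving
```

Under those assumptions the two framings are consistent: frequent per-mile engagements can still be a manageable per-operator workload, which is the point the ex-CEO was making.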
Cruise's teleoperators cannot perform safety-critical interventions like preventing a crash in a fraction of a second, which Tesla drivers can and do. If you removed the driver from a Tesla, its intervention rate would be even worse. No matter how you slice it, a driverless car is more capable than one that requires a driver.
Exceedingly few Tesla interventions are safety-related. The majority of mine are to fix awkwardness when negotiating with aggressive human drivers, or the legally mandated full stop (which no human driver does).
Cruise has still been involved in and caused accidents, at a rate likely no better than Tesla's. Also, Cruise operates only on a small number of mapped roads, whereas a Tesla can operate anywhere in the U.S. in most weather conditions, and still does a great job.
Anecdotes are not data. “Exceedingly few safety interventions” doesn’t matter when the number is non-zero. That number is exactly zero for driverless vehicles. So their interventions are not the same as Tesla’s interventions.
If Cruise had a fallback driver all the time, it would “operate everywhere” too. That’s not really saying much. The entire problem is about how to remove the driver, which Tesla is nowhere close to in any geographic area. A dead giveaway is how they’re reluctant to even let drivers take their hands off the wheel.
Do you have a source for this data? Is Tesla releasing all their interventions? Cruise interventions are all submitted to the state of California as part of their self-driving certification.
> and that regime is more than enough to make a profitable taxi service
Waymo isn’t profitable and has been a massive money loser over the past several years. See “Other Bets” in Alphabet’s earnings. Where does this claim come from?
It's based on public comments about the per-mile cost of Cruise and Waymo, and the fact that Waymo is currently operating in only a tiny number of areas. Expansion greatly dilutes the fixed costs, and iterating on the hardware will drive down the marginal/capital costs.
Amazon made no profit for many, many years, but that's because they were re-investing all their revenue. No one doubted that they could have turned a profit if they wanted. (To be clear, there is not yet a component of Waymo that is profitable because, as mentioned, they are operating in just a handful of cities.)
Where is the actual evidence of this, though? I'm not trying to be snide -- genuinely curious.
Waymo is expanding -- their service areas (and operating times) have expanded in SF and Phoenix. There's a waitlist in LA and Austin. Yet Waymo's financials are still buried in Alphabet's "Other Bets" line, which lost $1.2 billion in Q3 2023.
You think Google, which has been catching heat endlessly for falling behind in AI, has not only won the self-driving race from a technical standpoint (the claim by many in this thread), but has also found a way to make it both profitable and scalable to arbitrary locations, and they're hiding this reality in their earnings statements? Seems unlikely to me. Why bother with waitlists and slow rollouts in favorable climates? Why not blitzscale this thing to all markets and disclose the numbers to investors and send the stock to the moon?
Waymo generally seem to be extremely cautious: they have invested the most effort into the safety design of their systems and they don't shout very loudly for being the leader of the pack. I think this extends to their expansion as well: autonomous car companies are only one or two serious incidents away from severe backlash (see Cruise), and the more you operate the more likely it is that one occurs. Expanding slowly and dealing with each expansion of near-miss edge cases as you do so is a rational strategy in that case, even if you have already beaten human drivers.
(Also Google is bad at full commitment, and I don't know how much that extends to Alphabet, but they as a company seem pathologically incapable of putting even a majority of their weight behind anything, which is a significant source of failures for them)
> Where is the actual evidence of this, though? I'm not trying to be snide -- genuinely curious.
That claim was just based on how all businesses work. All the infrastructure, management, training, etc., that scales sublinearly with volume gets diluted at large volume.
> Yet Waymo's financials are still buried in Alphabet's "Other Bets" line, which lost $1.2 billion in Q3 2023...
As I said, I definitely think Waymo is losing money overall right now; the claim was about what's achievable within a particular regime.
> Why not blitzscale this thing to all markets and disclose the numbers to investors and send the stock to the moon?
They are in fact expanding rapidly. But this can't be scaled as fast as a software/internet company. They literally have to manufacture custom cars and, crucially, map the region in detail. Remember that Tesla's meteoric rise was ~70% growth per year. For years Waymo was only offering service to the general public in Phoenix. They opened in San Francisco in late 2022, and then in LA in mid 2023.
Also, Waymo (~$30B valuation in 2020) is still a tiny part of Alphabet (>$1T), which owns a controlling interest. They have no reason to pump the stock because they have no shortage of capital.
Even if Waymo didn't work in snow (and it's unlikely to do worse than humans), the entire West Coast, desert West, and South get basically zero snow all year long.
How else would you do it? First, get the technology to work at all before doing the harder things it can't handle yet. Crawl, walk, then run. Or in this case, drive in snow. Even if it never gets there in my lifetime (which I doubt), it's already working.
I don’t know where this idea that driving in SF is easy mode comes from. No one who has actually driven in SF would say that. Yes it doesn’t have snow, but it has rain, zero-visibility fog, lots of pedestrians and cyclists, heavy traffic, narrow roads, steep grades, bus-only lanes, lots of cars parked on the street and double parked. Self-driving companies don’t pick SF because it’s easy, they pick it because it’s the most challenging city that’s close to their engineers.
Is that "same direction travel divided between autonomous trucks and all other traffic"? Or "only on highways where traffic directions are separated by a median"?
The former sounds like a massive (probably understating it) infrastructure investment. Trains sound better (as another comment noted while I was typing).
Waymo appears to be the real deal. Last night I saw one navigate a situation with a hesitant pedestrian better than most human drivers. And before people chime in with "ideal conditions," it was at night in the rain.
I never trusted the Cruise cars, they would drive like a teenager that was afraid of the road. But Waymo seems a step up even from the Uber drivers.
Waymo and Tesla are the only real competition left in the U.S. The deciding factor is how fast Waymo can roll out to more locales before Tesla gets closer to level 5 over the next 3-5 years.
After that, vehicle production and operations cost will be the main factor in the race to $0
There's an entirely different way of looking at this that is perfectly in line with why self-driving cars have failed several times before…
Who is liable?
That's what largely killed prior attempts, especially those using custom-built roads. If the car crashes, who is liable: the manufacturer, the road builder, or the driver? I think it's telling that the pullback we're seeing is correlated with early cases on this question becoming more salient.
Why does this have to be only a technical limitation/success (I'm not saying it is or isn't)?
Vision-only is very very very far behind other techniques. Cruise (the self driving company, not OP) was killed by an overzealous CEO, not by their chosen technique. Waymo drives great in SF.
>Cruise (the self driving company, not OP) was killed by an overzealous CEO, not by their chosen technique. Waymo drives great in SF.
Cruise is dead regardless of how you spin it. And Waymo will continue to piddle around SF with their $500,000 Frankenstein cars until eternity.
Meanwhile Tesla and Comma (both pure vision) are the only operational L3 systems being used every day by regular people to drive millions of miles. The legacy manufacturers will end up licensing one or the other, a la Android/iOS.
Vision-only is stupid. I can't see through fog and have a tough time with glare and can't see in the dark. Other technologies aren't subject to the same limitations. Don't we want these things to be better at driving than humans?
You say vision-only is stupid, but Tesla FSD is the only full self-driving system deployed to millions of drivers and usable anywhere in the world, right now, at scale.
> Don't we want these things to be better at driving than humans?
Sure we do. Your options are, a as-good-as-a-human-driver for 1 unit of cost, or a better-than-human-driver at 1000 units of cost. Realistically, which do you pick?
Other tech like Waymo is good. They drive well. But it's not scalable--expensive mapping and hundreds of thousands of lines of code written to address the nuances of whatever city they specialize the solution to. Not to mention that they only operate in cities with sunny weather all year round!
The company decides the level. Tesla's approach is to get more real-world training data to reach a general solution, while Waymo goes slow with pre-mapping as a requirement.
Tesla's approach will likely be the winner long-term and can be flipped on globally far faster than Waymo's could.
Everyone’s working on a general solution, including Waymo. Everyone has large real world (and even larger simulated) training data. It’s not just Tesla despite what is widely claimed in Tesla circles. Others just choose to deploy in certain places because of market, operational and safety reasons. This notion of Tesla just ingesting training data which magically outputs a general self driving solution is nonsense.
The levels aren’t just slapped on by the companies for vanity. They indicate liability and therefore capability.
They're classifying it as level 2 because that's all the system is capable of. You can choose to call it an approach, but it's really an indicator of capability.
1. Waymo costs around $120,000, and the cost is rapidly dropping with new hardware generations and vehicle platforms. They're in LA, soon in Austin, doing airport rides in Phoenix, with highway driving imminent.
George Hotz's take was that vision alone could be trained to human or near-human driving ability, but that the sensor-fusion part was confusing the machine learning and much harder to get to that level.
Maybe he'll be proven wrong in the future, but for now even Tesla seems to have pulled the radar from their cars.
Cue the Mercedes apologists pointing out their "L3" system that can be used on a handful of stretches of Nevada freeways, below 30mph, only when there's a car leading directly ahead, and only when there's little road curvature. But hey, bragging rights.
Good luck finding non-press videos of it in action
George Hotz. Niche celebrity hackerman. Comma.ai self driving car stuff with cameras only.
In an interview some years ago (I think it was with Lex Fridman) he was asked about Tesla and their radars etc., and he said he thinks the future of self-driving cars is vision-only.