Mercedes-Benz says Level 3 self-driving option ready to roll (thedetroitbureau.com)
254 points by mxschumacher on May 6, 2022 | 385 comments



While the specs sound very underwhelming at first (geo-fenced on some roads, very low speeds, no bad weather), they seem to be so confident in their system that they will take responsibility for the car when in autopilot. AFAIK, no other manufacturer does this for their systems (whose capabilities they often make rather wild claims about).


Just from a management perspective, even if you're super-confident in your engineers (which you shouldn't be), doing anything other than a very limited, slow rollout would be negligent. This is a tectonic shift in liability. It's one thing if you sell 1,000,000 cars and 100,000 get in an accident. It's actually much worse to sell 10,000 cars and have 1,000 of them get in an accident for which the company is liable.


"This is a tectonic shift in liability."

It's the only form of liability I consider acceptable - I would never use autopilot where I would be to blame if I fail to react in 50 milliseconds once the car fucks up.


I think for this reason full self driving is going to be more feasible in taxi-style vehicles first. You hail the driverless taxi, like with another person driving the taxi the passengers are not liable. It’s a model we are already used to, at least. If the taxi company winds up being the car producer, then the liability is the same as if they didn’t produce but operated the car.


That's not really "full self driving" if you are referring to the SAE scale. Level 5, the top, is fully autonomous driving, everywhere. What you are referring to is level 4; fully autonomous driving in specific geofenced areas on predetermined routes. Think Delamain in Cyberpunk 2077.


Fitting username :)


You are already going to be blamed if you get in an accident for most hardware failures in a car. Why is this one so different?


Uh, no?

If the car's brakes don't work due to a defect, you are not liable at all.

And if you show reasonable due diligence, that is regular maintenance, the same applies doubly so.

Beyond that, I've seen a lot of accidents, and none of them were the car's fault.

All were literally human error or external factors, e.g. a deer leaping onto the road, too many frogs on the road, etc.


"If the car's brakes don't work due to a defect, you are not liable at all."

Morally, perhaps, but if you drive into the back of me because your brakes have a manufacturing fault, I would still expect you to pay for the damage. The manufacturer may then be liable to you, but that is your business.


You don't expect the driver to pay, you just expect to be paid. It doesn't matter to you if it is OP, his insurance company or his car manufacturer. It does matter to the driver though, which is the reason for this discussion on liability.


It goes beyond morals.

If you are driving a car and a crash occurs due to a manufacturing defect, you'll probably not be found criminally liable.

This becomes super important if someone gets injured or killed.


Yeah, and if the self driving functionality doesn't work at all, it would be the manufacturer's fault as well. The liability would work the same.


I guess because appropriate usage and maintenance make that kind of hardware failure extremely unlikely, no?

I can be "warned" to check for issues if the car is older, creaking, seems to be "driving" different etc

A mistake in a mostly completely uninterpretable bundle of software is definetely much sneakier. You can't prepare for it

So I guess that's one argument.

The other aspect of the same argument would be that if you're in an accident due to your car breaking in a way that is obviously the fault of the maker, they will absolutely eat the liability. I think this is how a few recall cases get kickstarted


I don't think the GP is 100% correct anyway, it depends a great deal on what happened to cause the accident. Brakes fail because you haven't changed the pads? Your fault. Steering shaft u-joint comes loose on your 2020 Ford? You were never expected to maintain that item in that time frame so is it your fault?


Because none of the other hardware features give the driver a false sense of being able to not do their job. It doesn't matter how many times the owner is told; we have already seen people are not smart about this. Drivers shooting video of themselves in the passenger seat, people shooting porn while the car is driving, and several other stupid things that people will do just because they think that's what self-driving is meant to do.


Because you can't control it. Drivers aren't liable for design flaws in their cars be they faulty self-driving, unintended acceleration, etc.


The overwhelming majority of crashes are because of human error, not hardware failure.


Because we have rigorous laws and liability managing hardware quality and maintenance.

If liability for software rests with the driver and there are no laws regulating its quality and maintenance, what is the basis to believe it will be anywhere near as reliable?


Legally, they’re all liable.

If your product is unreasonably dangerous and defective, and that defect is the proximate cause of injury to another, you’re responsible.

I know nothing of German law, but from a US legal perspective, it appears that MB is just promising what they already have to do - compensate those harmed by defects in their products.


No, the competition explicitly tells the driver to be ready to take over any second; they promise full self driving only in the ads, but not in the contract.

So when an accident happens, the "driver" can be blamed for not reacting.

Mercedes offers full self driving including legal responsibility. You are not obliged to watch the road or the car while driving (within the current tight limits).


The competition doesn't get to override laws. Tesla may put as many illegal clauses in their contract as they want, if they crash because of autopilot in Europe, they're liable. That's why Autopilot is barely enabled here at all, because they know their software is unreliable.


There are accidents which are not due to defects or unreasonable dangers in the product.

Shit happens and responsibility is not necessarily due to defects/malfeasance/unreasonable risks. There are honest-to-God accidents. And you are still liable.


I looked up the number of cars sold. Mercedes sells about half as many cars as Ford and 22% as many as Toyota.

Obviously the statistics for a new car are unknown.


Mercedes will obviously outsource the liability to an insurance company. That’s what insurance companies are for.


Insurance isn't a magic get-out-of-risk-free card; if self-driving cars have loads of accidents and payouts, the cost to insure will correct itself.


Not obvious at all. Companies the size of Mercedes usually self insure for routine risks.


Oh man, I didn't expect to read a comment saying "it's OK if 100 times more cars have accidents as long as a company isn't liable".


I read the parent as meaning to say that because Mercedes will be liable, even 1,000 accidents will be a real problem for them. This is a good thing because it shows that Mercedes expects a very low number of accidents while the system is enabled.

Mercedes accepting the risk like this is a massive step forward for these reasons. It sets a precedent that hopefully others will follow. They wouldn't transfer the risk if they didn't think it would profit them.


I guess, maybe I misunderstood. I was just surprised to read "it's better to have a ton of deaths, rather than a few deaths we're on the hook for".


From the company's point of view this is correct reasoning. The sooner people realize companies do not have any responsibility to be moral, the better.


They absolutely do have responsibility. There's no reason we should allow investors limited liability if they are going to be assholes about it.

Corporations should only exist because they are net beneficial to the public!

I do agree that enough companies are unethical that it is reasonable to expect it.


This is just naive. Every company that makes any kind of product has made some kind of trade like this.

Costco sells you knives cheaply because they will not be liable if you murder people with them. If the Costco investors were liable for murder every time one of their knives was used to kill someone, you can bet they would just not sell them entirely.

Just because a company thinks about liability doesn’t mean it’s immoral. Individuals avoid liability as much as possible too (see insurance).

The world is dangerous and “fault” is everywhere.


I'm not sure what your point is. My comment is a statement that corporations do have a responsibility to act in the interests of society, not an analysis of the particular ethics of selling knives or avoiding liability.


So do people, which are the ones that run companies. What’s your point?


It's actually a human thing. When bad things happen, we strongly prefer that they don't happen as a result of something that we did.


On the other hand, if incautiously switching from the former to the latter drives your company bankrupt, the end result doesn't benefit anyone.


I feel like any summary of the form "so you're saying it's ok that..." is almost without exception not something the other person would agree with.


That's kind of the whole point. If a decision or policy has predictable consequences that aren't being addressed, either the decision-maker is unaware of those consequences or is accepting of those consequences. Asking the question removes ignorance as a possibility, and lets the conversation continue.

Sometimes the answer is "No, I was unaware, and I will adjust my decision." Sometimes the answer is "Yes, here are the consequences of the alternatives, and here's why I believe this set of consequences to be best." Sometimes the answer is "Yes, I don't care about those people." By asking the question, these three types of answers respectively give the other person an opportunity to improve themselves, give you an opportunity to learn from the other person, or give the audience an opportunity to learn not to trust the other person.


You missed one option: "You're falling prey to the is-ought fallacy." That is, saying that something is true is not the same as saying that something should be true. The original claim was that, from the perspective of management at a company, 1,000 accidents the company is legally liable for are worse than 100,000 it isn't. Which is true! From that limited perspective! The reply "so you're saying it's ok that..." implies that the comment agreed with that perspective, which isn't necessarily the case. It could simply be pointing out a failure state of current management practices and corporate law. But further than that, that phrase is usually a particularly uncharitable one, and I find this usage of it to be more common than any other. I think "implying the speaker believes that the unfortunate condition they pointed out is right and just" is the normal use case for that phrase, rather than trying to bring attention to the consequences of a policy.


> You missed one option: "You're falling prey to the is-ought fallacy." That is, saying that something is true is not the same as saying that something should be true.

I think I'd put that as a subcategory of the second case, that the options were considered and this one was considered the best. That may mean that it is the least worst of several bad options, or that there are restricted options to choose from.

> Which is true! From that limited perspective! ... It could simply be pointing out a failure state of current management practices and corporate law.

I definitely agree, this is a fantastic example of options having been considered and rejected. In this case, the alternative would be "A self-driving car company accepts more liability than it can handle and goes bankrupt. This saves lives in the short term, but costs lives in the long term." It can then be the start of an entirely different conversation about how to avoid that failure state, and what would need to change in order to still get the benefits of that decision.

> The reply "so you're saying it's ok that..." implies that the comment agreed with that perspective, which isn't necessarily the case.

I'd make a distinction between a comment agreeing with a perspective and a commenter agreeing with a perspective. One is based solely on the text as it is written, and the other is a human's internal belief. It's not necessarily a statement that the person is wrong, but that the words they have spoken may have unintended consequences. The difference between "So you're saying $IDEA." and "So you believe $IDEA."

> I think "implying the speaker believes that the unfortunate condition they pointed out is right and just" is the normal use case for that phrase, rather than trying to bring attention to the consequences of a policy.

Good point. In situations where there are no long-term social relationships to be maintained, and where there isn't a good chance for a reply, the message given to the audience is the only one remaining. This is a major issue I have with any social group beyond a few dozen people, and one that I don't have any good solutions for.


> Sometimes the answer is "Yes, I don't care about those people."

Frequently true but rarely admitted


From a corporate perspective, of course it is. It's the Ford Pinto study all over again.

This is why corporate influence on legislation is bad, as their "best interests" often come at odds with morality-based ones.


Fight Club summarized the Pinto thing nicely:

https://www.quotes.net/mquote/31826

without providing the illusion that such cost benefit analyses are a thing of the past.

(Pintos had a problem with the gas tank, not the differential, but it's pretty clear what they were referring to.)


Most car accidents are 100% operator error, it'd be really far fetched to try to blame those on the manufacturer.

Autopilot not so much, the point stands.


The launch control option added by car manufacturers is, I'd say, 100% the manufacturer's fault: they thought of it, installed it, promoted it, but it's pointless and dangerous.


It clearly has a point, because they successfully market and sell the feature. And everything is dangerous to some degree.

Clearly it is not 100% their fault because the feature can certainly be used responsibly.

Is there a more nuanced and substantive form of your argument against developing and selling a launch control feature?


> launch control

How many accidents happen from a standstill? I'd love to see some stats, but I highly doubt it would be a high number.


That's not really what the OP is saying though, is it?


True, it was more "the company would rather have 100k accidents it's not liable for rather than 1k accidents it is". Doesn't make it much better for me.


Yes. The liability factor is a great proof point, but the real tech innovation in my mind is not the self driving code, but instead the attack on the fundamental problem of building a model of where and when their stuff is trustworthy. As you point out, everyone else in the industry has simply punted that very difficult problem to the poor users to figure out!


few things inspire confidence like companies putting their literal money where their mouth is


But haven't we all seen Fight Club? It isn't a question of confidence, it is a question of financial math.

This decision tells us nothing about the safety of the Mercedes system compared to its competitors. All it tells us is that adding these limitations to ensure the system is only used in the safest possible scenarios makes taking over liability more reasonable. That isn't surprising. Their competitors' systems are also very safe if used in this manner. The only difference is that the competitors are not satisfied with releasing a system with enough limitations that it only works in stop-and-go highway traffic in clear weather. It is that added functionality that is more dangerous and the reason other manufacturers don't take on liability.

Odds are the marketing and accounting wings of Mercedes had just as much influence on this decision as the tech team, if not more.


It’s both, right? The competitors may very well be just as safe in those conditions, but we wouldn’t know based on their liability stance; the Fight Club equation simply doesn’t apply.

With Mercedes, the Fight Club equation gives something like a mathematical guarantee of their estimated confidence of the safety of the system. There are no mathematical guarantees from the competitors.


>It’s both, right? The competitors may very well be just as safe in those conditions, but we wouldn’t know based on their liability stance; the Fight Club equation simply doesn’t apply.

I was referencing Fight Club as an example of an auto manufacturer making a life and death decision based off their financial incentives and not the best interest of the customer. The decision to take on liability is about money, not confidence in safety.

>With Mercedes, the Fight Club equation gives something like a mathematical guarantee of their estimated confidence of the safety of the system. There are no mathematical guarantees from the competitors.

You also have to factor in the marketing aspects. I'll reference another movie here in Tommy Boy[1]. Mercedes knows a move like this is attractive to consumers. People will look at it and think it means the system is safer. This decision will sell cars. But a guarantee doesn't tell you anything about the quality of the product. As Chris Farley's character says, you can "take a dump in a box and mark it guaranteed".

Maybe the system is truly dangerous and taking on this liability would be a losing proposition alone, yet adding in these additional sales from the marketing of this liability coverage yields a net positive for the decision. Or maybe the system truly is incredibly safe. There is no way for us to know. I am simply pointing out that this decision about liability is largely meaningless when judging safety because safety is only one of numerous criteria used to make the decision.

[1] - https://www.youtube.com/watch?v=mEB7WbTTlu4&t=49s


> I was referencing Fight Club as an example of an auto manufacturer making a life and death decision based off their financial incentives and not the best interest of the customer. The decision to take on liability is about money, not confidence in safety.

The Fight Club scene is about how these two things are integrated: their confidence in safety defines their ability to choose to take on liability.

Yes, its intent in the story is to horrify: there's a lack of humanity, a reliance on a simple function relating those two variables.

However, that doesn't imply the two variables are unrelated, in fact, it implies they are completely correlated.


This real life example is more complicated than the Fight Club version. It includes more variables like the added sales I mentioned and all these variables are unknown. How can you draw conclusions about one variable in a formula in which you don't know the value of any of the variables?


Not sure what you mean: the movie scene has the same property. It's not about the risk of individual component failures, it's the risk of a payout.

A strong indicator that I believe my anti-flood machine is good at preventing floods is that I'm willing to pay for any liabilities you incur from flooding.


You are only thinking about payouts and not the change in sales. Imagine you make $100m selling your anti-flood machines. Maybe your machine fails 10% of the time and a failure costs 2x the unit price. Taking on liability in that situation would bring you down to $80m. Bad deal for you. But what if someone in marketing comes and tells you that market research suggests taking on liability leads to an extra $30m in sales. You come out ahead because the $30m in new revenue exceeds the $26m in new liability. That isn’t confidence in your machine. It is marketing and accounting.
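A quick sketch of that arithmetic in Python (every number here is invented, as above):

    revenue = 100e6          # $100m in anti-flood machine sales
    failure_rate = 0.10      # hypothetical: 10% of units fail
    payout_multiple = 2.0    # each failure pays out 2x the unit price

    # Taking on liability with no sales bump: a bad deal.
    profit_plain = revenue - revenue * failure_rate * payout_multiple  # $80m

    # With the hypothetical $30m sales bump from marketing the coverage:
    new_revenue = revenue + 30e6
    profit_marketed = new_revenue - new_revenue * failure_rate * payout_multiple  # $104m

    print(profit_plain, profit_marketed)  # 80m vs 104m: marketing, not confidence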


Occam's razor: that would only work in the short run; crash data goes to regulators.


Many things can be marketing - the drain cleaner sold in a bottle in a plastic bag doesn’t need it for safety - it needs it because it makes the product look more “dangerous”.

The interesting part is the balance they have to strike - be too lax and everyone uses it and you get the Tesla “autopilot did a big bad” articles; make it too restricted and you get “the damn thing never lets you turn it on”.


I don't really see what it has to do with Fight Club.

Suppose car A and car B have autonomous driving that performs identically across a wide range of conditions. Manufacturer A enables FSD whenever the customer feels like it, but accepts no liability. Manufacturer B accepts full liability for FSD use, but restricts it to situations where that's a good bet. Car B is safer for the average customer, because it doesn't let them use FSD when it is especially risky. Unless I understood a lot more about ML, CV, etc., I would pick car B every time.


No, this reasoning is flawed.

The baseline safety of cars is an absolute shit show: 30,000 people dying every year and a trillion dollars of damages and lost productivity.

Car A can enable FSD in all cases, be safer in all cases, but still not be a good deal economically for the manufacturer to accept liability.

Car B could be making drivers less safe overall by preventing them from enabling FSD in most cases, but at least they aren’t liable.


Your comment is predicated on the assumption that FSD (the actual system installed in the car, not a future theoretical perfect system) is safer than the average driver in the situations where Mercedes currently disables it.

I'm not sure we have data to support that? We know Tesla's autopilot is safer on average, but most of those miles will have been driven in the situations where Mercedes allows it to be used.


> We know Tesla's autopilot is safer on average,

We don't even know this (even if you restrict it to highway miles), since it's not an apples to apples comparison. General safety statistics include old cars with fewer safety features independent of who's driving the car.


I am saying such a scenario may possibly exist, not necessarily that it does exist.

Mercedes could be increasing the number of overall deaths by limiting the availability of the feature and still be reducing their liability for when the system is in use.

Let's say that with FSD on all the time, instead of 30,000 people a year dying, only 20,000 people a year die. Would a company accept the liability?

What if the death rate was 10,000? 1,000? 100?

If FSD could prevent 29,900 deaths a year but still see deadly failures 100 times a year, would a company say “I accept the liability”?

So you see, perhaps it’s crucial that companies not be able to be sued out of existence even if a few hundred people a year are dying in exceptional cases under their software, in order to prevent over a quarter million deaths and untold number of maimings every decade.

Also consider in this ethical and legal liability dilemma that these populations are not necessarily subsets, but could be disjoint populations.


"We know Tesla's autopilot is safer on average"

Am I allowed to just use "LOL" on hackernews?


Well, unless you are going to rebut the statement, I don't see the point.

If you are just basing your point of view on the widely reported Tesla crashes, you might want to look up some actual safety stats. Crashes of human-driven cars happen every day, and they're often fatal.

But as I pointed out, most Tesla autopilot use is presumably in "easy" conditions, which complicates comparisons.


.


You were responding to someone saying that Tesla’s autopilot is safer (based on crash stats per million miles), not FSD. FSD and autopilot are two different features.


Fair enough. I find it hard to believe those two aren't in tight conversation, considering the AI element of FSD (or am I mistaken there?).

Either way, in summary, I cannot trust FSD until it is 100% reliable (impossible) and the temporary situation for some time to come (regulated/supervised FSD) drains all the life out of what I enjoy, actual engaged driving! ...

The bits we don't enjoy (stop-start traffic and some motorway driving) have already been taken care of more than a decade ago.

I'd love the option of FSD but ... either FSD will never fully be realised, or will be adopted widely and there'll be some hold outs like me who actually enjoy their driving.

We'll see


> The bits we don't enjoy (stop-start traffic and some motorway driving) have already been taken care of more than a decade ago.

Not really. You're referring to assistance features that require continuous driver attention. I think that highway driving, and perhaps even city driving in some parts of the world, could be completely automated to a level of safety that is far higher than humans can achieve.

I am deeply skeptical that we will ever see a system that can drive in all current road conditions though; I think it's more likely that road systems will eventually co-evolve with automated driving to a point that the automated systems simply never encounter the kind of emergent highly complex road situations that currently exist which they would be unable to handle.

I also enjoy driving, and my 40yo car doesn't even have a radio, let alone Autopilot, but I think it's likely that within our lifetimes, the kind of driving that you and I enjoy will be seen as a (probably expensive) hobby rather than something anyone does to get to work or the shops every day.


I think the majority of people enjoy driving. Driving is fun. Sure, traffic sucks, but the actual act of driving comes with lots of pleasures. Most people don’t seem eager to give up driving, nor are many people ready to hand over control to AI.

I'm a transportation planner, and for many years my specialty was bicyclist & pedestrian planning and safety. I would follow autonomous vehicle news, but always through that lens. In addition, I have sat through lectures, webinars, and sales pitches that tout our wonderful autonomous future. And lemme tell ya, there is little to no mention of all the road users who are not in vehicles. Countless renderings and animations that do not account for our most vulnerable users. It smells like a mid-20th-century transportation planning mentality that is completely engineer-driven. Very narrow-focused and regressive.

My coworkers and I enjoyed sitting around and coming up with countless difficult-to-solve scenarios (that my tech friends would look at and say “eh, sounds interesting and solvable”) for AV developers to contend with. And despite pressure from our “future forward” marketing coworkers to focus on this sector, it feels nowhere close to really being ready (20-30 years maybe?).

Anyway, I do think the focus on “allowed in some places” is interesting. I have some trouble seeing “road systems will eventually co-evolve with automated driving” coming to fruition given the glacial pace of road system evolution.


I guess by "road systems will eventually co-evolve with automated driving" I would also include relatively minor interventions like increasing the proportion of controlled intersections, which are much easier for autonomous systems to deal with.

I have spent a lot of time in parts of Asia where massive evolution of transportation infrastructure has happened on a scale of a few decades (or less), so it seems less crazy to me that large-scale road evolution could happen along with autonomous vehicle development than it might seem to someone working in the West.


> This decision tells us nothing about the safety of the Mercedes system compared to its competitors

Mercedes is just taking notice that grandstanding and PR worked for Tesla, so they are doing the same thing.

Everybody who is serious about this knows that unless you get Level 5 it's all just grandstanding.

Level 5 won't come from unleashing Level 3 into the world and throwing deer, cyclists and pedestrians at it (hopefully not literally).

It's just a weird hill that brainpower and capital decided to die on. Deaths on the road are tragic, but airbags, seatbelts and ultimately bigger cars and lower alcohol intake are practical measures, whereas FSD is a pie-in-the-sky thing.


I don't know how you came to this conclusion.

Level 4 is a serious goal that provides very useful benefits.


Bigger cars? Have you seen the new pickups? What do you want to drive, a tank?

Also bigger cars kill more pedestrians


People who say "bigger cars are safer" mean "the person driving the bigger car is safer". It's an arms race to them and they want the bigger car.


Not all crashes involve multiple cars. Bigger cars reduce the number of deaths because there is more metal mass between you and the tree/barrier that you hit.

I also said that bigger cars are just one element in the equation, along with seatbelt compliance, zero alcohol tolerance, airbags, etc.

All those things are much more feasible and practical solutions compared to pie-in-the-sky FSD


The US and Germany might be at different stages of this process, though. For example, Germany has close to half as many deaths per km driven, and the situation is even better in countries like Switzerland or Norway.

Also, we're talking about the perspective of the car manufacturer. New cars are already significantly safer than the average car on the road, so there's relatively little they can do to address many of these points, since there seems to be not much room left for non-marginal improvements in safety, and FSD is a much more attractive proposition to most car buyers than "3.5% safer than the competing brand".


> FSD is a much more attractive proposition to most car buyers

So is donating to the church hoping to secure a place in Heaven. We all fool ourselves for the sake of peace of mind or immediate gratification, the important thing is to be aware of it.

It's important to be honest with yourself and decide who are the people who can take you for a ride.

For me it's my younger siblings, the sport franchises I support, and attractive women. Because at least in such circumstances it would be a fun and worthy ride.

Being taken for a ride by Musk, Mary Barra, Herbert Diess, (insert automotive CEO), or even Elizabeth Holmes, Bernie Madoff...that's pathetic, not fun and will leave you with regrets...but maybe techno-utopianists have a different mindset and their calculus is the opposite of mine, meaning they have a contempt for the small things in life and only get excited about moonshots and pie-in-the-sky ideas even if it means getting scammed.


I disagree. If I often had to spend time in highway traffic jams (which is what Mercedes seems to be offering here), I'd rather pay an extra 10k to get 30 minutes of my life back every day instead of buying a car in which I'd be 1.5% less likely to die during a crash.


The “Fight Club” math is incomplete for the sake of a good story. Regulators can force a recall on manufacturers and insurance companies can make a car uneconomical to buyers by appropriately pricing in risk. You may argue that the former has been undone by regulatory capture (something I would dispute), but I think we all recognize that insurance companies aren’t particularly charitable.


The Fight Club math is complete. They specifically quote the cost of out of court settlements, implying hush money.

This has been done successfully before. One model of elevators used to turn kids into ground beef (under a dozen a year -- the gap between the inner and outer door was too big).

Eventually they were almost all replaced, the deaths tapered off and everyone involved retired.

One year, much later, one of the few left in service killed a kid. No one working at the elevator company knew what was going on, and all hell broke loose. The remaining surviving culprit was in an old folks home at that point. The company ended up recalling something like two or three antique elevators.


Wouldn’t this logic suggest that the economic value of taking on this liability is minimal considering insurers haven’t made cars with competing tech unaffordable for buyers to insure?


>But haven't we all seen Fight Club?

???


http://inaneexplained.blogspot.com/2011/03/fight-club-car-re...

TL;DR: If there's a defect that kills or maims people, car manufacturers compute whether it's cheaper to recall or just deal with the lawsuits.

And this happens regularly: https://en.wikipedia.org/wiki/Unsafe_at_Any_Speed


> And this happens regularly

Do you have an example to demonstrate regularity that's not nearly 60 years old?


These lawyers claim they got a multi-million dollar settlement out of a manufacturer of a defective automobile recently:

https://www.raphaelsonlaw.com/questions/how-much-to-expect-f...

Since it's out of court, there is probably a non-disclosure agreement. Anyway, it happens enough for lawyers to think it's worth targeting these cases, at least.


Honestly, I didn't read that link extensively. It went straight to a sleazy-looking "How Much to Expect From a Car Accident Settlement?" That didn't sound like it justified the usage of "regularly" in the statement I responded to.

Feel free to correct me on how it's related to the use of "regularly" though.


There's always a tradeoff between cost and safety, how else would you imagine this to work?


By treating any lies to the customer as criminal fraud.

For some reason, misleading investors in a way that causes them to lose money lands you in jail very quickly. You must provide an extensive 'investment risk' report with the shares you are selling.

But when selling a car, or tickets on a faulty and deadly airplane, you don't have to inform customers of all the flaws you've discovered.

The Theranos and Anna Sorokin cases make it clear that our society has one set of laws for owners of capital, and another one for the plebs.


The proposed alternative, I think, is that the car company values a customer's life at more than the expected cost of settling.

The cost of settling is supposed to approximate the value of a human life, but it sounds pretty bad to say "yeah, we knew this defect would cost $50MM to fix, but we estimated only 5 people would die, and each person's life is worth less than $10MM because the settlement is only like $400k per life".

The proposed alternative is that the car company has to say "We calculated this defect will probably kill 5 people, so we will spend any amount of money to fix it, up to and including us having no profit"


Maybe cars with known technical/mechanical defects that will result in death could be recalled even in cases where it eats a bit more into the profits of the auto manufacturer than it would if they just paid settlements to the families who lost loved ones. That seems like a pretty good way to imagine this to work.


I imagine that they'd admit that the car as designed is bad and fix or replace the bad cars. Leaving cars on the road that have a design flaw that randomly kills people, because it's cheaper to pay off dead people's families, is reprehensible.


There's a question about what a design flaw means though. What about a car that was built without a backup camera because it was older than when they were commonly included? Or built when they were commonly available, but not before they were mandated? Is that a design flaw?

What about something that's more accidental that causes fewer deaths than the lack of a backup camera, but also costs more to fix than retrofitting a backup camera?


Legally, it’s up to the jury.

Each side has the right to bring in experts to testify about what were reasonable design choices, what was greed, and what was just a bone headed mistake.

If the jury concludes that the product wasn't unreasonably dangerous or defective, the defendant wins. If they find that it was, the plaintiff wins.


Narrator: A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don't do one.

Business woman on plane: Are there a lot of these kinds of accidents?

Narrator: You wouldn't believe.

Business woman on plane: Which car company do you work for?

Narrator: A major one.
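The decision rule in that scene is simple enough to write down. A minimal sketch, with invented inputs:

    def should_recall(vehicles_in_field, failure_rate, avg_settlement, recall_cost):
        """The narrator's formula: recall only if settling costs more."""
        x = vehicles_in_field * failure_rate * avg_settlement  # A * B * C = X
        return recall_cost < x

    # Hypothetical numbers: 1M cars, 1-in-100,000 failure rate, $400k settlements.
    # Expected settlements are $4m, so a $50m recall doesn't happen.
    print(should_recall(1_000_000, 1e-5, 400_000, recall_cost=50e6))  # False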



Have you seen the movie? If I remember the scene correctly, Norton discusses the financial incentives leading to auto product recalls. I don't think this is accurate in real life, although Boeing hasn't inspired much confidence.


>I don’t think this is accurate in real life

Ehhhh I mean they often don’t say the quiet part out loud but many companies definitely run a variation of the equation he describes.


The "Pinto Memo" [1] being a notable example. Although, as the article says, the cost / benefit analysis was against "societal costs" of the safety issues, not just the cost of litigation.

[1] https://en.wikipedia.org/wiki/Ford_Pinto#Cost%E2%80%93benefi...


Yeah, very good example and explanation


I feel strangely assured to know that Mercedes-Benz will be paying through the nose when I'm dead.


I'm sorry Dave. There's a 60% chance of rain today. You're going to have to drive yourself.


What about legal? As in, if I'm being driven around my Mercedes-Benz while watching a movie on my phone and something happens, am I not gonna get fucked by the cops?


In Germany, the law was changed to account for this.

There are limits - while you can watch a movie, you have to be ready to take over within ten seconds.


10 seconds is a ridiculously long time when driving. What was the logic for that timeframe? It seems to me you either need to be ready to take over in an instant or basically not at all.


Things like approaching ambulances or construction sites that the system can predict, but knows it can't handle.
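So the ten seconds is lead time on a predicted event, not reaction time to a surprise. A toy sketch of that handoff logic (hypothetical names, not Mercedes' actual code):

    from dataclasses import dataclass

    @dataclass
    class UpcomingEvent:
        distance_m: float
        handleable: bool  # can the system cope with this on its own?

    TAKEOVER_LEAD_TIME_S = 10.0  # the regulated takeover window

    def plan_step(events, speed_mps):
        """Warn the driver 10s before anything predicted but unhandleable."""
        for e in events:
            eta_s = e.distance_m / max(speed_mps, 0.1)
            if not e.handleable and eta_s <= TAKEOVER_LEAD_TIME_S:
                return "REQUEST_TAKEOVER"
        return "CONTINUE_AUTONOMOUS"

    # A construction site 150m ahead at ~60 km/h (16.7 m/s) is ~9s out:
    print(plan_step([UpcomingEvent(150.0, False)], 16.7))  # REQUEST_TAKEOVER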


That would depend on the exact laws in particular jurisdictions where you might operate the vehicle. In some areas, law enforcement could cite you for using a phone even if the manufacturer had taken on the civil liability. Longer term some of those laws will probably change as Level 3 autonomous vehicles start to become more common.


Germany did already change the law (and is, I think, requiring exactly the kind of liability for the manufacturer that MB is offering)


Absent legal/regulatory changes/waivers, the fact that MB is offering to cover financial liabilities would seem to be pretty much irrelevant to legal liabilities. So basically "we'll cover your costs" is pretty much irrelevant if you want to drive in a manner that would normally be considered reckless.

But there's talk of certification so this may be taken into account. One would of course want to see specifics.


> AFAIK, no other manufacturer does this for their systems (which capabilities they often make rather wild claims about).

What's interesting is the level of nit-picking scrutiny this is attracting compared to the uncritical defence of those wild claims that we're seeing here. A whole lot of copium.


I'm more in favor of underwhelming but responsible. Musk's "approach" is despicable on this.


Renault does, they announced it a while back. THIS is the real breakthrough (I worked on self driving cars in the mid 2000s).


I wouldn't be surprised if the EU made this a law.


Confident but limited is probably how the German manufacturers describe most of their cars (although I think Mercedes does at least some of its driverless work in the USA).


I think other manufacturers will also be taking responsibility for liability when they come to market. It is just insurance. You can already pay an insurance company to accept liability. The difference with lv3 self drive and beyond is that insurance companies no longer have the data to correctly price their policies, because this is all new and they rely on historical data. The car manufacturers believe they have that data, and are now in a position to take that slice of the pie. I expect that once things settle down, the bulk of your insurance will be paid as a yearly fee to the manufacturer to cover self driving (and the manufacturer will likely offload much of the risk to Big Insurance).
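A minimal sketch of the pricing problem, assuming a plain expected-loss model; the key input is exactly the number nobody has history for yet:

    def annual_premium(claims_per_vehicle_year, avg_claim_cost, load=1.3):
        """Toy actuarial pricing: expected loss times a load for expenses,
        profit and uncertainty. For level 3 systems, claims_per_vehicle_year
        is the figure insurers lack historical data for -- manufacturers
        believe their fleet data estimates it better."""
        return claims_per_vehicle_year * avg_claim_cost * load

    # Hypothetical: 0.02 claims per vehicle-year at $15k average -> $390/year.
    print(annual_premium(0.02, 15_000))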


> You can already pay an insurance company to accept liability

If the car drives over a child in self-driving mode, will insurance company go to jail instead of you?


Nobody goes to jail, same as now, unless you can prove negligence or intent; there might be significant fines and/or settlement money depending on the jurisdiction. But I'm not sure how this is different from deaths caused by some design flaw in normal cars.


At this stage of development, I think those limitations are entirely appropriate and I would want other manufacturers to be this cautious. Self driving is a convenience feature and limitations like this are reasonable. Once they have enough miles/years of experience, they can begin to remove some limitations.


> they seem to be so confident in their system

Having spent a lifetime building networks and software, that actually makes me nervous.


There's nothing about the way that I've seen automotive related software built that inspires me with confidence. Quite the opposite.


If you've spent enough time building software you learn that the real world almost inevitably introduces edge cases that even the most careful testing couldn't foresee. Self driving to date has proven itself not to be an exception to this.

I just can't imagine how Mercedes could be confident in a highly complex (self driving) system, designed to manage an insanely complex and unpredictable (driving) environment with no real-world tests. Maybe that's coming from the marketing/PR department and not engineering.


No other manufacturer explicitly says they do it, but guess who is legally responsible for manufacture defects whether or not they explicitly say they are?

There's an extent to which what MB is doing here is turning a fact they have very little ability to evade into a marketing point.


> they will take responsibility for the car when in autopilot

This probably just means autopilot gets disengaged just before impact


This also includes a 10 second takeover window (in accordance with EU regulations)


Can a driver wake up, be calm, get spatial understanding, do Bayesian physics predictions and decide the corrective maneuver in 10 seconds?


That's plenty of time. Find a clock and watch the second hand move for ten seconds. It's a long time.


Especially if I am actually asleep. I can barely speak coherently on the telephone within 10 seconds, much less take care of a situation with a vehicle that a computer has thrown up its virtual hands about dealing with.


Level 3 autonomous driving isn't made for the use case of sleeping but for being able to do other things while still awake (for example, use your phone or watch a movie). For those situations 10s is more than enough time to react.


If you're not paying attention, you can easily end up dozing off. You're either more or less paying attention--and yes drivers' attentions can drift a bit--or you're going to take some time to reacquire some awareness of what exactly is going on.


If you can't stay awake then you need to not use this feature. If you're not exhausted it should be an easy task. If you are exhausted, arrange something else.

Attention drifting is supposed to be fine. That's the entire point of level 3. How long do you think you need to regain focus? Maybe a different brand will cater to that amount of time, or you can lobby regulators.


My point was simply that, if you're not driving and are absorbed in something else whether a book, a movie, a game, or just zoned out, 10 seconds is not a lot of time to figure out what's going on because the car's computer has encountered something that is sufficiently non-standard that it doesn't know what to do.

I do think there is some period of time that is reasonable for a handoff. Maybe a minute? I actually believe that full autonomous driving with geo-gating and condition-restricted limits is probably how things will play out.


We've gone from "unless you're actively engaged you can't possibly be ready to take over immediately" to "unless you're actively engaged you are easily going to be sleeping".

People fall asleep behind the wheel all the time. They just usually crash and maybe die when this happens, if FSD is not enabled at the time.


About 4 months ago I wrote in https://news.ycombinator.com/item?id=29520685

> I'm currently driving a pretty new Mercedes (company car), and its lane assistant regularly tries to kill me in construction areas because it wants me to follow the regular, currently inactive lane markings.

I've gotten an even newer Mercedes since then (A250e, build finished 2022-03), and it's really impressive.

The lane assistant no longer makes a mess in construction areas.

It has adaptive cruise control, which works REALLY well, and I'd be comfortable relying on that on the highway, for example. It sometimes brakes when I wouldn't, in dense traffic when I'm approaching an obstruction that I will drive around later.

It has an enhanced lane assistant which does small corrections much earlier than the regular lane assistant, which I found annoying and switched off.

It has sensors to the side and back to be aware of approaching cars when switching lanes.

Long trips have become much less stressful with that, and I can totally believe that this is well on the way towards level 3 self-driving. (This doesn't include the self-driving package; it's "just" what you get when you pay for the regular security/assistance tech.)


I really wish cars were removed from residential areas and inner cities, basically anywhere people are walking around.

It makes self driving much easier and also eliminates the need for parking. You would walk a bit more (like five minutes) to a bus stop and be picked up by a car. Much healthier and probably causes less congestion too.

Within cities and residential areas you can have nice bike and walking paths and more room for nature.


It needs to be convenient. I personally don’t want to bike or walk 5 minutes in the snow with my toddler and bags before driving every time.

I used to bike or walk to a carpool car quite a few times. But I was driving back home to pick up my family and bags.


Yes, luckily it seems cities are slowly starting to realize that allowing cars everywhere and infinitely widening roads is not scalable and, more importantly, does not create a pleasant environment for people to be outside in.

The future is definitely creating more walkable/cyclable cities and improving public transit. Self driving cars have the potential to make things even worse by generating new car trips that otherwise wouldn't have been made.


Car company combined with out of city center parking company combined with bus / electric rental bikes.

This is a much more modern city design pattern!


A point I commonly hear made from both Tesla and Comma.ai, for example, is that Lidar is far too expensive, with Waymo's vehicle costing a total of $200,000, and that cameras alone are sufficient for full self driving. I do think that cameras alone are probably sufficient for full self-driving, but every time I hear this point I think to myself that self driving progress is moving so slowly on the camera-only front that Lidar might become so affordable by the time camera-only makes substantial progress that it would have been more efficient to just develop with Lidar from the start. Am I missing something or just completely wrong? I would really appreciate any insight on this.


The conversation is complicated because LIDAR is used for many purposes in self driving. It's used for localization, it's used for object detection, it's used for classification, it's used for understanding occlusions, it's used to get accurate 3D positions and speeds of objects, etc. For each of these cases there is a way to use just cameras (or cameras augmented with radar), but often with some significant performance penalty.

One of the big challenges is that most self driving stacks have an interface between the perception and planning stages that is specified to be a 3D model of the world. LIDARs are particularly helpful at creating 3D models because that is essentially their native data product. So, for "traditional" AV stacks that use this interface, LIDARs are bound to improve performance a lot.

If you use a different approach, say pure imitation learning off of sensor data, you might find that LIDARs are not as important. (intuition: Humans drive well without understanding super accurate positions or velocities of objects.) Though Tesla isn't taking a pure imitation learning approach (yet), they are more in line with this strategy.

All this said, I don't think that having or not having LIDARs is a major factor in the progress of self driving. It's just a way to use money to improve perception performance (and reduce data labeling cost!) but neither is the major blocker for the industry. If we extrapolate from the last 10 years of progress, it seems like high-level self driving is going to take a while and I think that it's likely that LIDAR prices will have fallen dramatically by then and we'll see them as part of the overall sensor constellation on most vehicles.
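For what it's worth, here is a sketch of what that perception-to-planning interface might look like (illustrative types only; every real stack defines its own):

    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        position_m: tuple    # 3D position -- essentially LIDAR's native product
        velocity_mps: tuple
        classification: str  # "vehicle", "pedestrian", "cyclist", ...
        occluded: bool

    def plan(world_model: list):
        """Planning consumes a 3D world model and never sees raw sensors.
        Camera-only stacks must lift 2D images into this 3D representation,
        which is where the performance penalty mentioned above shows up."""
        ...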


I think you're probably right - the really difficult bits of self-driving cars are still really difficult even if you have perfect sensor data.

But still, we're a long way from vision based sensing being good enough for reliable self driving so why make it difficult for yourself?


It's a question that ultimately depends on the utilization economics pathway, which if you listen to folks like Tony Seba, is going to come from the side of robotaxi fleets, not personally owned vehicles.

If you own a taxi fleet, the cost structure will favor spending more on Lidar if it gets you to a level 4 or 5 result faster. Cars designed for personal use will tend towards seeing the feature as a consumer "add-on" and prefer cameras. Tesla's business has focused on this latter path, trying to capture high end users first and then mainstream the results.

But if Seba is correct and we take a logistic curve pathway with both EV and self-driving tech, the cost curve will price robotaxi fleets underneath all forms of ownership within just a few years; you get much better utility from your sensor investment if the same car is making a dozen trips every day, and the consumer pays "for what they use" instead of having an unused hunk of metal taking up parking space. At that point, adoption shoots up and the consumer add-on model doesn't have a leg to stand on. Camera tech might still get better and replace the sensor package, but the race, such as it is, would be won by whomever deploys a fleet at scale first.


I work in a parallel industry where capturing motion at high speeds is critical, and in the last two decades, the cost of LIDAR and RADAR has barely come down - and the technical specs haven't gotten much better - while the cost of optical continues to absolutely crash through the floor and technical specs skyrocket.

Mobile phones and laptops are the obvious main reason for this. Optical tracking (cameras) have economies of scale that LIDAR and RADAR can't even come close to. LIDAR and RADAR are niche features in a specific set of high priced luxury goods while optical tracking is dominating everything from government/consumer surveillance to camera phones to sports motion capture and many, many more applications.

Optical tracking has so much pressure on the field to drive costs down and to increase feature sets (innovate) that RADAR/LIDAR don't. It's going to be this way for quite a long time, so I don't really see the costs for RADAR/LIDAR getting under control anytime soon.

EDIT: Optical tracking will be the dominant method of self-driving cars, I'm almost sure of it. The trained datasets have a huge advantage in this regard. Still, it's obvious that LIDAR/RADAR will have additive value down the line. It is just very hard to see how it becomes the primary technology. This is echoed in my field as well as many others - where RADAR dominated, machine learning / software + good enough optical tracking took over at 1-3 magnitudes of cost savings.


> I work in a parallel industry where capturing motion at high speeds is critical, and in the last two decades, the cost of LIDAR and RADAR has barely come down

This doesn't track for me. What was the price of the cheapest Velodyne LiDAR unit 12 years ago, and what is the price of the cheapest today? My not-in-the-industry searching says a Velodyne unit cost $75,000[1][2] 12 years ago. IIRC, estimates of Google's inhouse LiDAR sensor (which is not for sale) was about $10-20,000 per unit - this was about 5 years ago. Currently, Velodyne is selling(?) the Velarray H800[3] solid-state unit that had a $500 price-point target[4][5] during development. How does this square with your assertion that the cost of LiDAR has barely come down?

Edit: I checked, it was precisely 5 years ago and Google claimed[6] that it cut the cost of LiDAR by 90 percent! I think you were wrong to say LiDAR costs have barely come down.

1. https://www.latimes.com/business/la-fi-hy-ouster-lidar-20171...

2. https://arstechnica.com/cars/2020/10/the-technology-behind-t...

3. https://velodynelidar.com/products/velarray-h800/

4. https://www.forbes.com/sites/samabuelsamid/2020/11/13/velody...

5. https://www.reuters.com/article/velodyne-lidar-tech/velodyne...

6. https://www.businessinsider.com/googles-waymo-reduces-lidar-...


Apple also puts lidar in the iPhone Pro models.


My robot vacuum cleaner from xiaomi has lidar. It's coming at the consumer level fast.


I would say the race isn’t over yet. One thing to consider is the training dataset. By using only vision, Tesla was able to start collecting data with their entire fleet starting many years ago. They probably have more edge case data than anyone else, in more diverse driving situations. They would not have been able to do this if they needed to use expensive lidar on company owned training vehicles.

What we will likely see is that lidar equipped vehicles gain a lot of ground at first, but have to be rolled out slowly both due to cost and how they are trained. Tesla’s vision based fleet could get an update tomorrow that theoretically was able to drive almost anywhere.

Of course, Tesla has clearly been finding out that while vision is in principle sufficient, the brains behind the cameras are not so simple. In my opinion it’s still too early to call who will win in the end.


This point about Tesla has been made frequently in this discussion. I still wonder: how does Tesla store any meaningful video data in customer vehicles and transfer it?

I know that other vendors have a hard time putting the necessary fast storage in the trunk and get the data off with 100G cables in the garage.


Their AI system has something called “shadow mode” where it can make observations without having any effect on the car. When Tesla needs to collect a dataset of something, like short videos of cars that put on their blinker but did not change lanes, or partially obscured stop signs, they can train a net to run in shadow mode on all their cars and collect more of this edge case. I presume that they also collect a lot of data around disengagements. And then they send the data presumably over Wifi when the customer gets home, or otherwise over the car’s 3G connection. But they collect targeted data, not mass dumps of drives.

Andrej Karpathy has described this system in various talks including on Tesla AI day.
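A sketch of that kind of trigger-based collection as described in those talks (hypothetical names and threshold, not Tesla's actual code):

    def shadow_mode_step(frame_buffer, trigger_net, upload_queue, threshold=0.9):
        """A small trigger net scores recent frames for the edge case being
        hunted (e.g. blinker on, no lane change) without affecting driving.
        Only matching short clips get queued for upload -- no mass dumps."""
        if trigger_net.score(frame_buffer) > threshold:
            upload_queue.append(frame_buffer.last_seconds(10))

    # Queued clips are then uploaded later over WiFi or the car's cell link.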


As I understand it, it's mainly collected when the car goes in for maintenance.


> Lidar might become so affordable by the time camera-only makes substantial progress that it would have been more efficient to just develop with Lidar from the start.

Tesla and Comma are already making profits with the great margins they badly needed when they started developing self-driving capabilities. Waymo is using Google's coffers, but so far they have lost billions of dollars that they have to make back.

Regarding Lidar I wouldn't be surprised to see it in the Robotaxi that comes out in 1-2 years from Tesla, but maybe it would be a PR nightmare at this point.


The problem with Lidar is that it doesn’t work well in bad weather. So it’s not the “endgame” technology - that’s vision. It’s a bridging technology. Whether that bridge is needed today is an open question.


> The problem with Lidar is that it doesn’t work well in bad weather

Cameras don't work well in bad weather either.


What most people here complaining about the "40 mph" limit do not realize is that below 40mph is where it's hardest to implement proper self driving, as the majority of below-40mph driving happens on city streets with pedestrians, frequent turns and more variables. Anything higher will generally be on highways, where lane-assist cruise control systems are already more than good enough.

I will gladly take a system that can drive itself confidently under 40mph in the city, and use basic lane assist (and maybe lane changes and exit ramps like in Teslas) for highways. Can't wait!


> as majority of below 40mph driving will be in city streets with pedestrians, frequent turns and more variables

What this article fails to mention, but other articles on this system have mentioned in the past, is that this system can be enabled only on designated highway stretches. So no city driving.

Here: https://group.mercedes-benz.com/documents/innovation/other/2..., page 18, Operational Design Domain.


Yeah, this seems barely beyond Tesla's Autopilot and not even close to the FSD Beta.


Neither autopilot nor FSD would let me take my attention off of the road. It sounds like this thing is fine if you're distracted, as long as you can take control within 10 seconds.


The ten second thing just means it will be disabling itself a lot. Unless it's just giving you false alarms constantly and then doing a little "my bad, nevermind" beep that tells you it has cancelled the heads up saying you have to take over 10 seconds from now. Which sounds not so great.

Ten seconds is an eternity, so if it sees the tiniest possibility ahead that things will get complicated, it will have to be conservative and give the ten second warning. And when it does get into heavy traffic, it will be disabled.

Because it will be disabling itself a lot (or super annoying if it's giving false alarms) it will either be not in use except on very sparse roadways, or it won't even matter because it will be completely turned off by the driver.

Bottom line, it's a marketing gimmick.

The alternative is a system that goes into the thick of traffic, still active, still helping, still adding an additional level of safety over and above what the driver maintains, but since the system is going into the thick of traffic in an enabled state, actively engaged, and things are dicey, the human still needs to be involved as an active participant in oversight. Ready to take over at a moment's notice. Not ten seconds from now. In this approach, the safety benefits are real, not just marketing. But the human needs to step up and stay responsible and own the shit that happens.


Autopilot is glorified cruise control and lane keeping. FSD is still occasionally trying to drive you into a concrete pillar and is not available unless you get selected to be a beta tester.

Unless Tesla assumes liability for their systems while you can legally play on your phone, they are far behind.


Autopilot can change lanes. This cannot.


I really don’t think they’re more than good enough on highways. Even teslas still veer over when they encounter on-ramp-merges.


Hardly a problem in Europe where on ramps are much better (one lane is primary and one is joining).


Europe is quite diverse in this respect. Italy for example has many highways that I would be terrified to let autopilot try to navigate merges on.


Italy for example has many highways that I am terrified to try to navigate merges on. Spain too!


> under 40mph in city,

from the video:

> but once we hit some traffic we go under 40 miles an hour we'll be able to engage it but there are some other caveats: you have to be on a highway, the weather has to be nice, it can't be raining, it can't be freezing, and you can't change lanes; you have to stay in your lane while drive pilot is enabled

Hence the need for geofencing to the easiest situations where assuming liability means assuming nearly 0 liability. I really hope it progresses but we'll see in 3-5 years.


In terms of Mercedes assuming liability (which is what many commenters are focused on here), it seems below 40mph has the lowest liability risk.


"it's geofenced to certain roads, and only work under 40mph", " you should not use this in the rain"

https://www.youtube.com/watch?v=sMNnOosjrBo&ab_channel=Engad...


Breadth of operating conditions is a trade off with reliability of operation. It’s possible to do a lot of things okay, or a couple things perfectly, but not yet possible to do a lot of things perfectly.

When you remove the driver from the equation, the requirement for reliability goes up.

There’s going to come a day when Tesla is forced to make difficult decisions about their “FSD” system. They’re either going to be requiring human supervision for a long time, or they’re going to have to start pulling features.

I’m glad to see other manufacturers taking a different approach and actually start working on features good enough to qualify for SAE level 3 operation.


The big risk that Waymo/Mercedes are taking with this approach is that they do not know what blocking issues they will run into when adding new functionality to these systems.

What if driving in rain or snow is incompatible with their architecture and they have to do a major rewrite to expand into that functionality?

With the Tesla Approach they already know where the system fails and where the system isn't making any progress when trying to fix those failures. So Tesla can do major rewrites and major architecture changes much earlier in the development process.


"We killed a few people but we got a bitchin' architecture going from the start."

You should be a construction site manager in Qatar.


"It is better to play it as safe as possible safe when rolling out a technology that could save tens of thousands of lives a year"

The antivax people agree with that approach. We don't know the long term effects of the vaccine therefore we shouldn't use it.


Vaccines still had to go through 3 stages of trials, and vaccine side effects are immediately obvious in the vast majority of cases.

And pharma companies are far less irresponsible and far more tightly regulated than Tesla, the darling of vested and biased commenters.



Yeah and they certainly have a lot of data because of their approach. I just wonder how (or if) they’ll make the shift to level 3 without angering customers. Or, if they’ll choose to stay at level 2 so they can have more features.


Maybe they don't design their roadmap around arbitrary level numbers but instead around marketable features.


SAE levels are based on the features of the system… and the conditions it operates in, and the responsibility of the driver.


I said marketable features. Outside of HN's demographic, the number of people who even know what an SAE level is would round off to zero.


Which criteria of J3016 wouldn’t be marketable?

Few may know what “level 3” means, but they certainly know what “you don’t have to pay attention” means.


Right, but earlier you said:

> I just wonder how (or if) they’ll make the shift to level 3 without angering customers. Or, if they’ll choose to stay at level 2 so they can have more features.

I don't know whether it would be a meaningful business goal to "make the shift to level 3". The business goals would be more nuanced than that, since approximately zero actual customers know or care what level 3 is. And the line between "you must watch what the car is doing and be prepared to take over" and "you must be prepared to take over if the car asks you to" is going to get blurred by manufacturers (and the distinction isn't that obvious to an average user in the first place).

Level 3 is absolutely not "you don't have to pay attention". SAE defines it as "when the feature requests, you must drive". Characterising that as "you don't have to pay attention" is what makes this potentially the riskiest level.


The full quote:

> once we hit some traffic we go under 40 miles an hour we'll be able to engage it but there are some other caveats: you have to be on a highway, the weather has to be nice, it can't be raining, it can't be freezing, and you can't change lanes; you have to stay in your lane while drive pilot is enabled

So it also needs to be on the highway.


Restrictions that very much make sense given the usage scenario: stop-and-go traffic on highways.


"Stop-and-go traffic on highways" is quite a narrow usage scenario, isn't it? At least in a lot of highway driving in UK, FR, DE, CH, I find this mostly only happens with major events or roadworks, and even then it's for a short time or distance. The rest of the time you'll be plodding along at 60 km/h, with all other traffic overtaking you, which itself adds risk.


I wonder if 40mph is a magic number that is a good tradeoff between speed and survivability (and injury-related out-of-court settlements)


That threshold is generally considered to be 20mph. Below that speed pedestrian injuries are unlikely to be fatal.

It's what urban speed limits are in Sweden, and they average about one pedestrian death a year in the entire country.


20mph is considered to be good for survivability, but is not really good from a speed/travel perspective.

I believe OP meant maybe 40mph is a good balance between the two factors of speed (higher is better) and survivability (higher is better) from a corporate risk / insurance perspective.

Raising one of these lowers the other - so the actual threshold depends on how you value speed / convenience vs accident survivability / risk, and what you want to market your car as able to do (few consumers will see a 20mph self-drive speed limit as a viable self driving system).


20mph is within populated areas and not on a main road. In the US, the norm is 25mph, which isn't that far off.


That comes with its own challenges - i.e. limiting speed to 20mph in the UK would realistically mean it can only be used in areas where there are schools and lots of pedestrians - i.e. less chance of an incident being fatal but possibly more chance of an incident.


25mph is >150% of the energy at 20mph. It's a big increase.


And not only that. In a situation where, at 20 mph, you could come to a complete stop just before the child, at 25 mph you'd still hit them at 15 mph.

  vr = (c^2 - 1)^0.5 v1
where v1 is the original slow speed with which you'd come to a standstill just before the obstacle, and v2 = c*v1 is the faster speed, and vr the speed with which you hit the obstacle coming in with the faster speed (assuming you hit the brakes at the same spot, and constant deceleration).
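
Sanity check with the 20/25 mph numbers:

  # v1 = 20 mph, c = 25/20
  v1, c = 20.0, 25.0 / 20.0
  print((c**2 - 1) ** 0.5 * v1)  # 15.0 -> you'd hit the obstacle at 15 mph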


Indeed, as an interesting aside it's a 3-4-5 triangle.

25^2 - 20^2 = 15^2


It's limited to highways as well (hence the geofence), so they probably don't expect VRU (vulnerable road user) incidents to happen much, if at all.


This is only on highways. Survivability here concerns only passengers, not pedestrians.


not gonna be a lot of pedestrians on the highway, are there?


The reduction from 30mph to 20mph in London is political CV padding and has had no real effect on outcomes. All based on Chinese whispers. Wouldn't be surprised if your comment ended up as an original source in Islington's justification for some thing or other. It's that flimsy


The effect is huge. Energy goes up quadratically, so here (4->9) you have more than doubling. Stopping distance goes up quadratically (well, due to reaction time there is a linear component in there as well, but still).

And if there is a sudden obstacle that you can just avoid hitting going 20mph initially, you'd hit it at 22 mph if you're going 30mph initially (assuming no reaction time and constant deceleration).

I submit that that's a massive difference: hitting a child not at all or with 22 mph.


Things can be quantifiably described as "doubled" without that having any real world impact.

In my experience the people supporting these measures are adopting them not because they care about safety at all, but have another agenda.


If it doesn't it's because it's not enforced. It's irrefutable that vehicles moving at 20mph are less dangerous than vehicles moving at 30mph.


> If it doesn't it's because it's not enforced. It's irrefutable that vehicles moving at 20mph are less dangerous than vehicles moving at 30mph.

Then make them drive at 1mph


Average speeds in London are below 20mph. Accelerating absurdly overweight, overpowered cars to anything over 20mph in between stops at lights is just a moral hazard.


> Accelerating absurdly overweight, overpowered cars to anything over 20mph in between stops at lights is just a moral hazard.

The level of absurdity in suggesting a moral element to driving a car at below 30mph just shows how fundamentally biased people have become about a normal part of everyday life. For whatever reason


I think you just don't know what that term means.


Absurd


This has to do with the limitations of radar. Radar will fail above 40mph for stopped vehicles, or detect them too late to stop. Honda and Tesla are working to remove radars and go vision-only for speed control.

If you have a car with AEB or CMBS (automatic emergency braking, crash mitigation braking system), the owner's manual has a ton of fine print about it only making crashes less severe, not preventing them outright.


The detection range for radar is more than 200m; you can stop from speeds above 100mph in that distance. And the radar sensor automatically includes relative speed information on the target (Doppler shift), which vision doesn't, so it's easier to figure out whether something is stationary or not. So I don't believe this "vision only will fix it all" story.
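
For reference, the relative speed falls straight out of the Doppler shift. A sketch, assuming a 77 GHz automotive radar (the usual band):

  # radial speed from the radar Doppler shift; 77 GHz carrier assumed
  C = 3.0e8      # speed of light, m/s
  F_TX = 77.0e9  # carrier frequency, Hz

  def radial_speed(doppler_shift_hz: float) -> float:
      """v = f_d * c / (2 * f_tx); the factor 2 is the round trip."""
      return doppler_shift_hz * C / (2 * F_TX)

  # a 30 m/s (~108 km/h) closing speed produces a ~15.4 kHz shift:
  print(radial_speed(15.4e3))  # ~30.0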


It's translated from 60 kph. Apparently, the current system is restricted to German highways. Those are limited to vehicles which can go at least as fast as 60 kph, and this might be where this number comes from. It's basically the slowest highway speed at which there is no controversy about legality. Targeting highways first makes sense from a complexity and data quality perspective.

Although driving at 60 kph on German highways seems quite risky; at that speed, most trucks will overtake you.


I doubt this reasoning. While it's true that 60 kph is the minimum for Autobahn, it's the minimum in the sense that the vehicle must be capable of that speed. There is no requirement to actually drive 60 kph or faster.

On the other side of the equation, nobody would feel comfortable cruising along at that speed, since you'd be continuously overtaken by trucks, which drive a little more than the allowed 80 kph unless it's physically impossible.


The vehicle is not capable of more than 60 km/h when used in self driving mode, so the reasoning seems quite logical. I wouldn't want to use this on the Autobahn though; everyone would be overtaking constantly which introduces additional risk, and the speed is so low that journey time would be considerably inflated.


I don't think it's magic, but:

1) Stopping distances go up as the square of speed, so 40mph has huge advantages over, say, 70mph (rough numbers sketched below).

2) Survivability of accidents at 30-40 is way higher than at 70 (unless pedestrians are involved, in which case 30-40 is already too high.)

3) Radars and Lidars really can't see/classify things very well at the distances required to do comfortably stop for obstacles at highway speeds.
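
A back-of-envelope check of point 1, assuming roughly 7 m/s^2 of braking on dry pavement (real numbers vary):

  # braking distance d = v^2 / (2a), ignoring reaction time;
  # the ~7 m/s^2 deceleration is an assumption (dry pavement)
  def braking_distance_m(speed_mph: float, decel: float = 7.0) -> float:
      v = speed_mph * 0.447  # mph -> m/s
      return v * v / (2 * decel)

  print(braking_distance_m(40))  # ~23 m
  print(braking_distance_m(70))  # ~70 m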


I imagine there are a couple reasons that boil down at some point to kinetic energy. Stopping distance goes up pretty quickly, there's a huge difference between, say, 40mph and 60mph.


Yeah, look up some videos of Tesla's Autopilot in the rain... it's not pretty.


If you think Teslas driven by fleeing fugitives, OJ Simpson style, can't be remotely commandeered or disabled, then I have a bridge to sell you.

Geofencing is a big part of the dystopia that the Valley crowd is getting paid a small fortune to shove down our throats.


> If you think Teslas driven [...] can't be remotely commandeered or disabled, then I have a bridge to sell you.

I wonder how this is even remotely possible, given that Teslas can easily operate fully offline (which they frequently do in areas with poor cell reception). You could just build a small Faraday cage encasing the modem so that it cannot receive a signal, and you are good to go.

Also, I don't think that the drivetrain subsystem and the subsystem that is connected to the internet can even interop. I know for a fact that even if the main computer (the one that is connected to the big screen) on a tesla is rebooting or shutting off, you can still operate the car like usual.


> I wonder how this is even remotely possible, given that Teslas can easily operate fully offline

It's been possible for over a decade with regular cars. OnStar can remotely disable cars. Many cars have devices that remotely disable the engine if the driver fails to make a payment.


The point they're making is that this is coming and the groundwork is already being laid, both legally and technologically.


> The point they're making is that this is coming and the groundwork is already being laid, both legally and technologically.

So they are basically claiming that something is a problem, even though it is something that's straight up not currently physically possible (and someone who reads that statement without knowing the actual details wouldn't know that it isn't currently possible)?

Disingenuous at best. Especially given that the parent comment said none of that. They essentially mocked people who believe that it isn't possible, and when it was pointed out that it isn't actually physically possible right now, the response is "well, not right now, but look at future possibilities!".


I wonder how they would prevent the Faraday cage solution? I guess the OEM could disable the car if it can't phone home, but surely that would lead to accidents, e.g. blocking the modem while driving.


Probably just say the car won't start if it hasn't phoned home in a certain amount of time. Attempt to phone on start-up, shutdown if it can't. You don't have to rug-pull while the car is active.

I could see this being a "feature" of leased vehicles for example since you don't own it.

Teslas should be fully functional currently; it's the self-driving that's a subscription.


> I could see this being a "feature" of leased vehicles for example since you don't own it.

that feature has existed for over a decade. Miss a payment and your engine can be remotely disabled.


The idea that this is the “valley crowd” is laughable. Most of this bs is not built in the valley, and most people in the Bay Area don’t like that kind of dystopia either.


>Most of this bs is not built in the valley, and most people in the Bay Area don’t like that kind of dystopia either.

Well, certainly it's Bay Area companies leading the charge towards said dystopia. Google and Facebook come immediately to mind as far as villains in this particular plot.


OJ Simpson, the man that the police forced to stop fleeing under threat of deadly violence?

Are you trying to scare people that their car may be disabled, when those same people already live in a society where police shoot and kill and pit maneuver drivers?

Wouldn't you rather have the car stop safely, instead of being pit maneuvered and flipped into a ditch?


OJ Simpson, the man that the police forced to stop fleeing under threat of deadly violence?

That didn't happen. Police followed OJ for 50 miles and ultimately let him drive safely to his house.

https://en.m.wikipedia.org/wiki/O._J._Simpson_murder_case#Br...


The LAPD had killed plenty of people who didn't comply with their orders at that point. It is completely unrealistic to think that the LAPD wouldn't have used violence against OJ, if he had tried to flee to Mexico.


>Are you trying to scare people that their car may be disabled, when those same people already live in a society where police shoot and kill and pit maneuver drivers?

Scare? No.

I'm just pointing out that, just as pervasive corporate surveillance is now the norm on the internet, pervasive tracking and always-on kill switches for cars will soon become the new norm. Law enforcement will paint it exactly as you have done here, with all the pearl-clutching and "think of the children"-type arguments that always accompany a new overstep/affront against civil liberties.


[flagged]


I think you know perfectly well what my point is and you're being deliberately obstinate.


> "Geofenced"

That's the word. That's the plan. We will be geofenced, there will be a kill switch. You will not be able to go where you want, except if the government allows it.

In fact, I think Uber is the best example of geofencing we have for the long run. In the medium term, self-driving cars you still own will fill the gap.


You're making a pretty big leap there from "our experimental self-driving mode only works inside the geofence" to "the government won't let you drive where you want".


Yes, a huge leap. I can't possibly imagine that government would pass legislation that all the automakers would comply with. /s

https://www.musclecarsandtrucks.com/biden-infrastructure-bil...

"Deep within the Infrastructure Investment and Jobs Act that was signed into law by President Joe Biden is a passage that will require automakers to begin including what can be best summarized as a “vehicle kill switch” within the operating software of new cars, which is described in the bill as “advanced drunk and impaired driving prevention technology”. The measure has been positioned as a safety tool to help prevent drunk driving, and by 2026 (three years after the enactment of the Act, per the text) the kill switch could be mandated on every new car sold in the United States. Then there’s the broader reaching RIDE Act, which we’ll touch on in a moment."


I'm being downvoted - but this is the reality of what will happen. Who wants that sort of monitoring when they are driving?

Is this a consumer-driven feature? Or is this more government management - leveraging technology to micro-manage every detail of our lives - only allowing good citizens travel privileges etc?

We are sleepwalking into a dystopia, and I get downvoted for stating the obvious! Seriously - your government is about managing you, saying what you are and aren't allowed to do, extracting taxes, fines and licensing fees from you - it's not there to help!

PS I was being downvoted, but now I'm back up... It was right when I wrote it!


Maybe some people just agree with this?

(a) Findings.--Congress finds that--

            (1) alcohol-impaired driving fatalities represent 
        approximately \1/3\ of all highway fatalities in the United 
        States each year;


            (2) in 2019, there were 10,142 alcohol-impaired driving 
        fatalities in the United States involving drivers with a blood 
        alcohol concentration level of .08 or higher, and 68 percent of 
        the crashes that resulted in those fatalities involved a driver 
        with a blood alcohol concentration level of .15 or higher;
          

  (3) the estimated economic cost for alcohol-impaired driving 
        in 2010 was $44,000,000,000;



            (4) according to the Insurance Institute for Highway Safety, 
        advanced drunk and impaired driving prevention technology can 
        prevent more than 9,400 alcohol-impaired driving fatalities 
        annually; and


            (5) to ensure the prevention of alcohol-impaired driving 
        fatalities, advanced drunk and impaired driving prevention 
        technology must be standard equipment in all new passenger motor 
        vehicles.



    (b) Definitions.--In this section:
            (1) Advanced drunk and impaired driving prevention 
        technology.--The term ``advanced drunk and impaired driving 
        prevention technology'' means a system that--
                    (A) can--
                          (i) passively monitor the performance of a 
                      driver of a motor vehicle to accurately identify 
                      whether that driver may be impaired; and
                          (ii) prevent or limit motor vehicle operation 
                      if an impairment is detected;
                    (B) can--
[[Page 135 STAT. 832]]

                          (i) passively and accurately detect whether 
                      the blood alcohol concentration of a driver of a 
                      motor vehicle is equal to or greater than the 
                      blood alcohol concentration described in section 
                      163(a) of title 23, United States Code; and
                          (ii) prevent or limit motor vehicle operation 
                      if a blood alcohol concentration above the legal 
                      limit is detected; or
                    (C) is a combination of systems described in 
                subparagraphs (A) and (B).


The "right" way to fix this is by reducing car-dependency - better public transit outside of metro areas, less subsidies to gasoline, taxing car makers to cover for externalities - and not by instituting policies that can be used against law-abiding people.


I think it's possible to make an argument for lots of things in the name of safety: that it should be impossible to have anonymity online, etc., because of child abuse, or terrorism, or some other threat.

The safety argument cannot balance the fact that we also need freedom of speech and thought. We are heading into a world where the power balance is going to become so wildly asymmetrical - the government will know everything about you and will have the power to act against you instantly, automatically - it will change us. It will be dehumanising - you will have to watch your back. If you don't already.

We will have to conform to have a job, be able to travel, receive our govcoin vouchers, etc.

We are literally installing an even worse citizen score system than China, that no one would ever want, but each step of the way we are convinced by some spurious and limited 'safety' argument. This misses the whole picture.

And if you think that the government won't act against you once it has the power to do so, you are dreaming. Everyone seems to think that government is a force for good, as opposed to being the operators of the slavery system we find ourselves in. Some of us try to kid ourselves that we are free, while we hand over 40% of our income for government to service the interest on the debt that they have accrued on our behalf. I don't.

Anyway - I say what I see, no one likes my message, everyone seems blind to the points I raise, or wants to ignore them - what can I do? I hope I'm wrong.


> I think its possible to make an argument for lots of things, in the name of safety.

I think the word you are looking for is `Nanny State` [1]. People's opinions on issues like this always tend to be based on political tribalism. If it's from my political party, then they are so amazing that they care about other human beings so much. If it's a political party I am opposed to, how dare they infringe on our freedom.

It's very hard to find good faith discussion.

[1] https://en.wikipedia.org/wiki/Nanny_state


you might be right you might be wrong - however I suspect you'll find there are many of us out here who are just not willing to let ourselves get so worked up about such a rigid view of a possible future - doubt you'll find many on HN who are ignorant of the signs - i'm certain many are concerned about the trends - that doesn't mean people necessarily disagree with your thought - we just may disagree with your outlook - myself - i'm inclined to believe I - my friends - and fine folks like you are thoughtful enough to deal with it in the moment - even if that took a civil war - i'm not inclined to believe at some point i am going to lose touch with reality - nor am i inclined to believe my government is going to turn against me without noticing - for what it's worth i don't think you need to take it so personally - lots of people are concerned - just not everyone is freaking out (yet) - maybe you can somehow make peace with that? :=)


Thank you for your thoughts and concern. Let me say I am at peace with myself, I've come to terms with life, mortality. I do fear that the future will be neo-feudal, techno-slavery very shortly though, if people don't start to push back against the governance system + the corporations that run it.

I differ from you on a few points. I don't think the government has turned, I think it has always been this bad! It has always been full of parasitic middlemen that look to make a fortune off the people doing the actual work - people in government are the worst of us, while pretending, nay demanding, that they are portrayed as 'the best of us'. Anyway, I think it's that we are at the end of ... something - technology + political immorality seem to be combining in a potentially unpleasant way.

Also, I think the governance approach that has been taken is very incremental - the plans are slow and stealthy - we are told a bit at a time, but never so much that it causes a rebellion. We became aware of 3 letter agencies, then that they were sometimes nefarious, then that they were spying on us, then we had travel constraints, cameras everywhere, passes to travel, etc, etc - the direction of travel is only one way - towards technocratic governance. None of those steps on their own tripped enough alarm bells to prompt anything more than hand wringing - but if you look back over the years - say 20 years - you'd never believe where we are now. Nor how we got here. That's the power of incrementalism.


> nor am i inclined to believe my government is going to turn against me without noticing -

Are you in the US? If so then out of curiosity, considering we have published research demonstrating that the average American has effectively zero influence on policy and the only groups with representation are extremely wealthy individuals and corporations, and we've already seen the US government engaging in the warrantless surveillance of every last American, the continued push for more extreme uses of gerrymandering and other forms of voter suppression, police executing citizens in the streets for minor infractions, the promotion of lies and misinformation encouraging citizens to get sick or seek ineffective treatments during a global pandemic, the mass incarceration of Americans at a rate far beyond any other nation on the planet, along with continuous efforts to weaken and ignore our constitutional rights, just how much longer do you think it'll be before the notice that they've turned against you arrives?


Police have had this ability for a long time now:

https://www.schneier.com/blog/archives/2010/03/disabling_car...

https://abcnews.go.com/Business/Autos/story?id=3706113&page=...

This degrades our freedoms and our privacy. Many of us are already being tracked every time we drive. All of the tracking, all of the loss of control and it's not serving us. Most people aren't even aware it's happening to them. These "features" are and will increasingly be abused and not just by the government, but by the corporations as well. Do we really want a permanent record of everywhere we drive and when? Of how fast we went? Of who we were with and what we were talking about while in the vehicle? We'd better start asking ourselves those questions now because that's where we're headed.

Right now, you can at least physically disconnect and disable the OnStar system. In the future it seems likely that kind of thing will be entirely built in and impossible or illegal to disable.


Is this seriously your biggest concern about the government controlling you, after the news that the Supreme Court banned abortion? You don't have any concern whatsoever about that, I bet.


The Supreme Court didn’t “ban” abortion.


You are attempting to discredit an opponent's position by charging hypocrisy without directly refuting or disproving the argument. This is known as "whataboutism", and there's an article on it on wikipedia.


The fact is that loads of people drive drunk, drive illegally, and drive dangerously--on public roads. Police enforcement clearly doesn't work. Automated enforcement would. Why should people get to endanger others? The leading cause of death for children and teens almost every year is 'motor vehicle crashes' and it wasn't the children and teens causing those.


>Why should people get to endanger others?

This is a dangerous frame for this argument. "Get to" implies a "right," or that the action is condoned somehow. It's not. Not everything bad in the world needs to be prevented by government action. We could stop all traffic deaths today by banning cars. We could stop all forms of obesity by making everyone drink Soylent. Personal liberty prevails because, in the end, the knock-on effects of removing liberty are a case of the cure being worse than the disease, in most cases.

Automatic enforcement.... That sounds like letting the algorithms put people in jail.


Obesity isn’t putting others at risk in the same way. Cars endanger others. People make a special exception to norms for cars because it is hard to look past the convenience of them.


It’s not a stretch to say that the obesity epidemic in the US is causing limited health care resources to be redirected to a vast number of people with self-imposed health issues, leading to significant deaths and under-investment in populations with diseases that were not self-imposed.

However this is in no way is a convincing argument for government to take over control of what people stuff in their mouths.

Things can pretty much always be made safer by eliminating personal freedom. The safest human protected by the best 5-laws-safe robot overlords probably doesn’t get outside all that much.



[flagged]


You're saving fake quotes? If that's your hobby go for it I guess. You don't need to tell us though.


> You will not be able to go where you want, except if the government allows it.

Sounds like in your model Elon will be the one deciding where you can and cannot go.


Yeah, so much unlike the present, where you can go wherever you want (if the government has built a road to do so).


That's actually quite funny to me. In a dark, ironic way...

The government designed/authorised the solution - roads + cars. They decide to spend lots of money on this. They got rid of earlier public solutions - eg trams, closed train lines, etc. They could have invested more on public infrastructure - but opted for facilitating private transport.

But now they are walking this back! Around here they are decreasing roads, especially recently. We have 2 lane roads, where one lane has been closed to create cycle lanes. We have roads that have been entirely pedestrianised. All this has made traffic worse - and this is the plan. We are meant to get off the roads and ... cycle? Take the bus? There are actually no alternative viable forms of transport.

So - the government implements, then de-implements whatever solutions it likes. The public investments, made with money paid by the previous generation of taxpayers, are actively being worked against by this generation of government!

So, no, I'm not cheering the government spend.


Don't worry, at least in a Tesla the geofencing is going to be disabled in datetime.now().year + 1


Meanwhile at Tesla: "We're the industry leaders, wait for us!"


Hey, they've had self driving ready to go next year since 2014(https://youtu.be/o7oZ-AQszEI).

Why so cynical? :)


https://news.ycombinator.com/item?id=30856861

The claims up until 2018 were met. 2018 is when Tesla got the hang of Model 3 production, so their value now had to come from promises like FSD beta.


> 2018

The Saudis should have come through.

In a rare moment of clarity Musk self-diagnosed and understood that he can't be trusted with a public company. He takes the stock price (and generally everything that impacts, or is perceived as impacting, the stock price) too personally.

His best work was when he was at the helm of a private company out of the public eye. Unfortunately I think he became addicted to attention and can't go back.

All the former members of the PayPal mafia became addicted to being high-ranking generals in the culture and political wars raging across the country.


This is less sophisticated than Tesla Autopilot (you can do lane changes with AP) which has been in use for years.

It's not anywhere close to FSD Beta.


The non-geofenced level 2 system that Mercedes also has, can of course do lane changes and is definitely on par with Tesla's autopilot. This is about their level 3 system, which is ahead of Tesla, both technically, and most importantly regulatory.


How is this system ahead of Tesla? What features does it have that Autopilot does not?

Obviously this is ahead in a regulatory sense, but that can very easily just be politics, not technical sophistication.


It's a level 3 system that allows you to relax and take your eyes off the road when it's active.

Tesla only has a level 2 system where you have to be ready to take over at a moment's notice, and you can never take your eyes off the road.


You can definitely take your eyes off the road with Tesla's system, as evidenced by the copious number of videos where people get out of their seat and leave their Tesla to keep driving.


Yes, there are plenty of reckless people driving illegally in their Teslas.

But if Tesla is ahead, why did they back down recently and add driver monitoring systems as required by level 2, instead of applying for a level 3 certification?


Maybe because the level system is stupid and doesn't reflect how people actually want to use their cars? i.e. They think they can skip L3 entirely and just go directly to L4, no point in having to get "certified" for one if you think you can get to the other soon.

I put certified in quotes because AFAIK there is no straightforward process to "apply for a level 3 certification" in the US, where Tesla is based. You don't just pay some application fee, have your vehicle / software vetted, and now your car is allowed to drive itself in all 50 states.

Whereas the OC is about Germany, where they have the Autonomous Driving Act that is actually explicit about certifying cars based on the standard levelling system (which again, is terrible).


This type of anti progress comment will soon be banned at free speech twitter


"Our leader of all things FSD is on sabbatical, please bare with us"


Karpathy doesn’t lead FSD.


"Director of AI at Tesla, leading the Autopilot Vision team"

Whichever way you cut it, that's a pretty significant situation for FSD at this time.

In my eyes, FSD is over, and has been for 2 years. They'll get to Lane Assist ^2 but that's it. None of the pretty youtube videos convince me, and there are just as many which are terrible.


All these other companies are just wasting their time because for 5+ years now, Teslas only require someone in the drivers seat for legal reasons (source: https://www.tesla.com/autopilot)


In Germany Mercedes assumes all liability in these situations (60kmh, highways, good visibility), though, so I don't see why Tesla couldn't do the same if they were just as confident in their system - but apparently they don't trust it enough for that. Instead they claim their system can do all these things on its own, while the driver doesn't really benefit: they are still liable for everything it does, and not even allowed to use their phone while driving, which is something you are allowed to do with this new system from Mercedes.


This is powered by NVIDIA, I think: https://blogs.nvidia.com/blog/2021/04/16/mercedes-benz-eqs-h...

This means that if Mercedes has it, so will many other automakers, since NVIDIA sells to everyone.


With these press releases they can mean anything from "It's our tech but rebadged" to "We sold them a CAD program", so it's not always clear.


It’s probably something more. A recent article said 40% of Mercedes-Benz’ revenue from autonomous driving options will be going to Nvidia, starting 2024. Source in German: https://archive.ph/Gs9UD


The linked blog post seems to show off hardware but the software is all Mercedes, no? In that case automakers wouldn't be able to buy the self-driving capability from NVIDIA.


NVIDIA has a large autonomous vehicle software team and releases public products (such as Driveworks: https://developer.nvidia.com/drive/driveworks) as well as custom products for specific clients.


I own an NVidia GPU, and after hours of deep-learning training it sometimes says "GPU not found", and I have to reboot the system to get it to recognize the GPU again (even the nvidia-smi tool stops working).

So ... I really hope they solved the issue for FSD car applications.


This is real progress.

Ad video.[1]

Still very limited. Won't work at night. Has a LIDAR, but it's in the front grille, which only gives it enough of a view for anti-collision.

We should be seeing review videos soon.

[1] https://www.youtube.com/watch?v=4Ue7ISC3eHQ


That video is completely animated.


no it's not


Recent discussion: "Mercedes to accept legal responsibility for a vehicle when Drive Pilot is active"

https://news.ycombinator.com/item?id=30763522


For me the corollary of this ("you just invented a more expensive railway") is the business model assumptions that go away. Long distance trucking is still a likely winner, but there is still the "last mile / last off ramp".

None of the business models touted today survive the new railway problem. That's not to say it's not a valuable technology - it's just we cannot whistle up a human shaped robot that just does what we want.

In the same way a machine is shaped by its function, our cities and green infrastructure will be shaped by theirs.

Disneyland is built one level up, so that all the service tunnels (with, I am sure, some self-driving robots) can get around without getting in the way of people. Whatever we rebuild our (climate-neutral) cities to be, I think giving less surface space to transport will be a high priority.


Boston's Big Dig is an example of this. Surface space is now a series of green parks with limited car access, and the cars moving through the city are underground. It's quite lovely, though its detractors will ask if it was worth the cost.


Last I heard it was horribly mismanaged, insanely over budget, went on for years longer than it was supposed to and the tunnels still had leaks. Admittedly I didn’t look closely but from what I heard it was a disaster project.


there should be a qualification level for vehicles that can fully self drive at low speeds in heavy traffic on the highway so you can nap through traffic jams and then later enjoy your performance automobile by driving it yourself when it's clear.

half the point of fancy cars is that they're fun to drive.


I think that may be level 4, but I’m not sure.


it appears you're correct... but it is a much easier problem and one that i argue has massive bang for the buck _today_.

robotaxis will probably happen at scale some day. but for now, "nap through bad traffic mode" would be pretty excellent both in terms of safety and utility.


Phantom traffic jams are so common: things aren't moving as fast as they should because people keep slamming on their brakes and scaring the people behind them (who in turn do the same). Even if the original reason for braking was good, I imagine this makes a lot of real-life traffic jams worse.

But if the cars are all driving themselves, wouldn’t they avoid that? I wonder if traffic jams would get smaller just from the lack of that one issue.


yeah, i've daydreamed about that. if enough cars opt in you could implement cooperative flow control algorithms.

i think this has already been shown to help with advisory lane specific speed limits that are common in some parts of europe and the us. (the lit adaptive per lane speed limit signs that appear every mile or two.)

even a few vehicles that drive intelligently in traffic could potentially have huge effects.
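
a toy version of the idea (every parameter invented; real cooperative systems are far more involved):

  # toy "jam-absorbing" follower: track a smoothed average traffic speed
  # instead of mirroring the lead car's stop-and-go, using the gap as a buffer
  def jam_absorbing_accel(gap_m, own_speed, lead_speed, avg_speed,
                          min_gap=10.0, headway_s=2.0,
                          k_gap=0.1, k_speed=0.5):
      desired_gap = min_gap + headway_s * own_speed
      # nudge toward the average flow speed, corrected by the gap error
      target = min(avg_speed, lead_speed + k_gap * (gap_m - desired_gap))
      return max(-3.0, min(1.5, k_speed * (target - own_speed)))  # m/s^2 limits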

and i'll say it again: why buy a fun vehicle like a tesla or a benz if you're just going to have the computer drive it in a way that the manufacturer's legal team is comfortable with?

unless, of course, you're stuck in traffic. then let the lawyers drive and take a nap.


Some years ago, I read an article claiming the opposite. I could not find it again, but the main arguments, as I remember them, were that a) if self-driving cars drive according to the law, they drive slower and with greater distance from the car in front than human drivers, and b) for people to feel comfortable inside a self-driving car, the car must accelerate and brake more smoothly than a human driver does, who actively participates in what is going on around him or her. The maximum throughput of a road would therefore become smaller and thus, ceteris paribus, traffic jams more frequent.
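
For what it's worth, the throughput argument is easy to put rough numbers on (headway and car length here are assumed values, not from the article):

  # lane throughput q = v / (car_length + v * headway), converted to veh/h;
  # headways and car length are assumptions for illustration
  def lane_capacity_veh_per_hour(speed_ms: float, headway_s: float,
                                 car_length_m: float = 4.5) -> float:
      spacing_m = car_length_m + speed_ms * headway_s
      return 3600 * speed_ms / spacing_m

  print(lane_capacity_veh_per_hour(27.8, 1.0))  # human-ish 1 s headway: ~3100 veh/h
  print(lane_capacity_veh_per_hour(27.8, 2.0))  # cautious 2 s headway: ~1660 veh/h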


I could easily see you being correct too. Unfortunately I don’t think there is any way to truly know until real life hits that point.


Seems like we need to re-run the DARPA Grand Challenge.

Local car clubs should be happy to work together to define a route and run the race.

https://www.youtube.com/watch?v=M2AcMnfzpNg


Is there a map somewhere that discloses which roads it will be permitted on?


I'll consider using a self-driving system when it can safely drive the length of the Tail of the Dragon without using preloaded data for it (that would be cheating).

The dragon has good pavement, clear markings, the occasional driveway, and 318 curves in 11 miles (18km). In this video from December 2021, the Model 3 crosses the yellow line repeatedly into the oncoming lane.

https://www.youtube.com/watch?v=Oz4yMGbRa4Q (long video)


One of the youtube channels I watch has been testing driver assistance features, and apparently Mercedes didn't do so well. I have not watched the video yet, and I'm just going by the clickbait title: https://www.youtube.com/watch?v=irFzESPCegA

They tested a "brand new Mercedes GLS 450 4MATIC with the maximum level of driver assistance available in this model."

I wonder how much tech that shares with their L3 system..


I don't have actual knowledge of the codebase, but given the different sensor inputs and application, they're unlikely to share much in common beyond shared libraries, drivers, etc.

Localization w/HD maps, actor forecasting, LiDAR based detectors, etc. are generally not present in L2 systems and are much more involved.


Meanwhile in Jerusalem

https://youtu.be/pDyMzz8HMIc


Kind of skeptical about how all the shots are only from night-drives, probably when traffic is the lowest.

Didn't Tesla release a somewhat similar video, showing a Tesla driving across the US?


Agreed. They plan to roll out robotaxis in Germany and Israel this year. We will have to wait for third parties to verify the performance. There is another video from NY last year that had a bit more traffic.

https://youtu.be/50NPqEla0CQ


The car pulls out and immediately fails to yield to two pedestrians at a crosswalk at 0:54. One of the pedestrians is in the crosswalk.


It also has a traffic light so I guess this isn't a crosswalk where the car has to yield. The pedestrian isn't really in the crosswalk, there is a yellow dashed line.

Very confusing but I guess: Different countries, different rules.

1:44 is also confusing, no signs at all so the car should yield to the car coming from the right.

3:35 looks dangerous, pulling left into the scooter.

10:00 he says that it is very challenging to drive in Jerusalem and that you can drive everywhere if you can do it there. I haven't seen anything challenging up to this point in the video, all roads are clearly marked, no potholes, almost no traffic, lots of signs. This looks very easy to me.


1:44 might have a yield sign behind the no entry sign. Looking at Google Maps, it might be 16 HaMatmid Alley, and in the 2011 Street View photos there appears to be a yield sign behind the no entry sign.


The car did have the right of way though, as the crosswalk had a red light on. As to why one of the pedestrians decided to stand in the middle of the road during a red light, I have no idea.


That might be the way it works in Israel, but it's not a safe way to drive even if the law permits it. In Washington state, every intersection is a crosswalk whether or not it's marked, and you have to stop for any pedestrian crossing an intersection whether or not the light says the pedestrian can be there.

I hope these car companies get local laws right. But it would be best if they did the safe thing even if local laws didn't require it.


It's not a safe way to walk either, and I don't know where walking against a red light is legal.


Walking through a red-light is illegal and so is driving through a green light when somebody is improperly walking through the intersection. There's no contradiction here.


In England it is legal as long as the walker determines it is safe to do so.


Just like a real person :)


It's nice to see a self driving car in European conditions. All the Tesla FSD videos feel like they lack some complexity and depth. I do wonder how Tesla would perform in these conditions.


Wow. That was pretty convincing. Thanks for sharing.


Agreed. It should be kept in mind that the Mobileye solution has not really seen a wide rollout and scrutiny yet. One can be optimistic considering that the CEO, Amnon Shashua, is known for very careful and responsible conduct compared to e.g. Tesla. This apparently led to a falling out: Tesla overpromised on the old stack, selling it as "Autopilot" when it was L2 at best, which resulted in them doing FSD from scratch after Mobileye cancelled the collaboration following a fatal accident. At least this seems to be the semi-official account of things.


The levelling system in self-driving really needs to die.

If Autopilot is L2 at best how is the OC L3? What features does the Mercedes Drive Pilot solution have that AP does not?


LiDAR for one: current SOTA monocular depth estimation is pretty poor compared to even poor LiDAR


That's a sensor not a feature.


אני משוכנע


Google translate: "I'm convinced"


This is the only way to roll out a product unless you're a die-hard technologist who believes in 'death by marketing'.


It needs to be going under 40mph and on a highway, which sounds like this only works in stop-and-go traffic on the freeway.

Pretty cool.


I wonder why that speed limit would exist. Sensor range? Computational power? It seems like the difference between being able to self-drive at 40 and at 65 on the same road would not be all that significant.


I first thought it might have something to do with the range of LIDAR sensors and braking distance, but that doesn't seem to be the case: LIDAR sensors can detect objects around 200m away, and a car at 60kmh needs only around 50m to brake comfortably. Separately, 60kmh is the minimum speed a vehicle must be capable of to be allowed on German highways, so this might be where the limit comes from, but I am not an expert so who knows.


It's important to distinguish the sensor range vs. the range at which your model can accurately perform detection.

There's time involved in solidifying the prediction, performing computations, and you also want a safety margin for a backup system to intervene, etc.

Lower speeds means you have more timing tolerance for all your systems.


Perhaps because 65 has exponentially more energy in collisions and more likely to cause death? If it is limited to 40, death is less likely and therefore chance of negative news. So I would venture just a risk calculation for initial release.


Literally 164% more energy - not exponentially.


You used 1/2mv^2? It is 2.64 times the energy. Squared means it is increasing exponentially


No, quadratically. Exponentially would be if it was some term raised to the exponent of the velocity.


Ok, fair enough -- looks like I cannot update the original post anymore though


I wonder if this is in-house development or are they just riding on the NVidia stack? Traditional car companies don't have the culture in place to create the necessary AI infrastructure and attract the talent pool.


Looking forward to it. Hopefully it'll be better than or equal to Tesla's FSD. I assume it won't be, but idk how good FSD really is.


What exactly is this "internationally valid certification" they're banging on about and how is it the first?


> internationally valid certification

I think they mean "certified for level 3 in >1 countries".


European Exceptionalism, ladies and gentlemen. The real engineering, no fluff.


> Mercedes-Benz Cars says it plans to release “Drive Pilot,” a conditionally automated Level 3 system for automated driving

I think it's pretty irresponsible to publish a picture of a driver taking his eyes off the road to play Tetris on a system designed for Level 3.


“Level 3 autonomous driving, as defined by SAE International, means that the driver can hand over control to the vehicle, but must be ready to take over when prompted” (from an EU car site)

At level 3 the driver no longer has to constantly monitor the road, but they can’t exactly go to sleep because they need to be ready to take over in a few seconds.

Given that, the image seems reasonable.


In the specific case of the Mercedes-Benz system, if the driver does not take over in a timely manner, the car will find a safe place to come to a stop, activate the emergency lights, and unlock the doors, so you'd probably live even if you did fall asleep: https://group.mercedes-benz.com/documents/innovation/other/2...


That's pretty much the definition of level 3;

> In certain situations, the driver can turn attention away from the road, but must always be ready to take full control again.

https://www.blickfeld.com/blog/levels-of-autonomous-driving/


Not if the system is designed to achieve a minimal risk condition by itself or give the driver sufficient warning to take over. Level 3 includes systems that can handle an emergency situation but require the driver to respond within some limited time.


Germany has defined what Level 3 means to them. The driver needs to be able to take control within 10 seconds. If the car can't wait 10 seconds for the driver to stop playing and pay attention to the alert, it isn't Level 3 (in Germany). It can't be cowboy Level 3 where you are still driving with your hands on the wheel and foot over the brake pedal waiting for the software to stuff up.
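
Sketched out, that takeover logic looks roughly like this (based on the publicly described behavior, not Mercedes' actual code):

  # L3 takeover sketch: 10 s driver window, then a minimal-risk maneuver
  def takeover_state(seconds_since_request: float, driver_took_over: bool) -> str:
      if driver_took_over:
          return "driver_in_control"
      if seconds_since_request < 10.0:
          return "warning_active"  # keep alerting the driver
      # driver unresponsive: controlled stop, hazard lights, doors unlocked
      return "minimal_risk_maneuver"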


What I don't get: if deep learning can recognize a crosswalk with 99.99% accuracy, that's very impressive. Yet it is not enough.


You get it correctly. Deep learning isn't inherently good for cases where you need to be right 100% of the time. But we don't have a better tool so the industry is trying to see how far they can push it.

One big open question in self driving is: Do you have to be able to see/classify everything in the scene all the time to be a good driver? I think the answer is no (see: humans.) But, because most self driving stacks in the industry have chosen "3d classified representation of all of the objects in the scene" as a fundamental interface between perception and planning stages, there is a big challenge. (e.g. Representing uncertainty usefully is exceptionally difficult with this interface, among other things.)
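
Rough fleet math makes the point (all counts are assumptions for illustration):

  # why 99.99% per-event accuracy still isn't enough at fleet scale
  miss_rate = 1e-4                 # 1 missed crosswalk in 10,000
  crosswalks_per_car_per_day = 50
  fleet_size = 1_000_000
  print(miss_rate * crosswalks_per_car_per_day * fleet_size)  # 5000 misses, every day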


I can imagine this being at least two orders of magnitude higher than most human drivers (one in ten thousand misses versus one in one hundred). How many orders of magnitude better does it need to be before it's good enough?


Missing one in a hundred intersections? That's insane. Most human drivers are much better than that and wouldn't keep their license (or their life) for long if they weren't.


I dunno man, I live next to a school... And based on my observations, I think the actual occurrence is probably closer to 1/100 than 1/10000.


My 99.99% was just a number for the sake of argument. The real accuracy of classification networks is usually in the range of 90-99%. It's not bad for a lot of tasks, but for self-driving I'd say it's far from sufficient.


Do human drivers spot crosswalks in time with higher accuracy? That seems very unlikely to me.

Still, as long as the driver (human or otherwise) sees the pedestrian and vice versa, an accident can usually be avoided, with perhaps some angry shouting.


[flagged]


I own a Model 3 and love it. But you're 100% right: the "full self driving" claim Musk/Tesla has been making for years is bizarre. I live in a major city and have never had a trip using FSD where I didn't have to take control multiple times. At times, updates seem to make it get worse. Simple example, on a highway, it can't even keep the car in the middle of the lane through moderate curves. It almost always drifts to the outside, sometimes onto or over the lane marker. And I know it's doing this because I can watch it on the display, so the car clearly knows it's not centered. Surely this should be a trivial task (as self-driving tasks go).


> using FSD

Unless you mean FSD Beta, Autopilot is just lane keep and adaptive cruise control, and doesn't take turns for you. Even if you bought the $12k option you don't have the beta until you reach that 98% or better safety score and opt in.


We’ve paid for something several times now. They always make it sound like the car drives itself.

But the real point of my comment was that if they can’t even keep the car in the middle of the lane, I seriously doubt their ability to handle situations an order of magnitude more difficult.


You are quoting someone who is egregiously misquoting and misunderstanding what Elon said in that recent earnings call. What he actually said:

Of any technology development I’ve ever been involved in, I’ve never really seen more false dawns, or where it seems like we’re going to break through but we don’t as I’ve seen in full self-driving. And ultimately what it comes down to is that to solve full self-driving, you actually have to solve real world artificial intelligence. Which nobody has solved. The whole road system is made for biological neural nets and eyes. And so actually, when you think about it, in order to solve full self-driving, we have to solve neural nets and cameras to a degree of a capability that is on par with, and will really exceeds humans. And I think we will achieve that this year. The best way to reach your own assessment is to join the Tesla full self-driving beta program. We have over 100,000 people right now enrolled in that program. And we expect to broaden that significantly this year. So that’s my recommendation is join the full self-driving beta program, and experience it for yourself. And take note of the rate of improvement with every release. And we put out a new release roughly every two weeks. And you’ll see a little bit of two steps forward, one step back. But overall, the rate of improvement is incredibly quick. So that’s my recommendation for reaching your own assessment is, literally try it.


I don't see the misunderstanding. Most of what you quoted is irrelevant, he said two key sentences:

"And so actually, when you think about it, in order to solve full self-driving, we have to solve neural nets and cameras to a degree of a capability that is on par with, and will really exceeds humans. And I think we will achieve that this year."

The meaning seems pretty clear. He says they will achieve "that", "that" being "solve full self-driving" and "solve ... on par with, and will really exceed humans" from the directly preceding sentence, which can only be either Level 4 or 5. Where is the misunderstanding?


The real way to achieve Level 4, in the Tesla software, is to improve its decision-making AI. You'll pretty much never spot the neural network failing to detect something in all of the public FSD videos, at least when it's important to the operational safety of the car (the objects on the screen don't always match up with the NN since they likely have a renderable-object budget). Where it always, always fails is in deciding "is it safe to turn right now" or "can I pull into the center of the divided highway for my unprotected left yet".
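A toy version of that decision, with made-up names and an illustrative threshold, just to show where the difficulty lives: the perception numbers feeding this check are noisy, and the threshold itself is a judgment call.

    def safe_to_turn(oncoming, gap_needed_s=6.0):
        """Gap acceptance for an unprotected turn. oncoming is a list of
        (distance_m, speed_mps) tuples for approaching vehicles; the 6 s
        gap is invented for illustration, not anyone's real tuning."""
        for distance_m, speed_mps in oncoming:
            if speed_mps <= 0:
                continue  # not actually approaching
            if distance_m / speed_mps < gap_needed_s:
                return False  # arrives before we'd clear the junction
        return True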


I disagree that most of it is irrelevant. I think they are critical to understanding his point.

Why are you so desperate to cherry pick? This isn't Twitter, there's no 280 character limit on replies.


It's hard not to notice that you still haven't explained where the misunderstanding is or even what you think is his point. I'm still curious how you think it's a misunderstanding, I already know you disagree. I'd be more impressed with your Twitter comeback if your own reply wasn't a <280 character non-explanation.


He isn't making any point. The only thing to understand is that Musk is merely continuing his track record of lying about where Tesla is at with its supposed "full self-driving".

He's been lying about it for nine years: https://jalopnik.com/elon-musk-promises-full-self-driving-ne...

Five years ago Tesla claimed all its cars "have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver". That was a lie too: https://www.tesla.com/blog/all-tesla-cars-being-produced-now...

Whatever happened to those 1 million robotaxis that were supposed to be on the road by 2020? Yet another lie: https://www.thedrive.com/news/38129/elon-musk-promised-1-mil...


People love to follow people.


Here's another joke from the same clown: where are the robotaxis that were due for release in 2020, or even Level 5 FSD? [0] Just ask any Musk fan and they will either give no concrete explanation or dodge with a hand-wavy response that amounts to an admission.

> How are people still listening to this clown?

No wonder: so many people love getting openly scammed in style by him, over and over, even defending the atrocious contraption FSD (Fools Self Driving) still is.

[0] https://www.motortrend.com/news/tesla-autonomous-driving-lev...


Is it still designed to mow down pedestrians?

https://www.caranddriver.com/news/a15344706/self-driving-mer...

Mercedes ethics: Save the people who can afford lawyers.


That's such a ridiculous problem to worry about. Humans don't try to solve the trolley problem, either, I don't know why we suddenly expect computers to be able to do it.

The real answer is actually simple, and it's basically the same advice for human drivers: stop the car, however you can. Don't try to swerve, don't get clever, just dynamite the brakes and hope for the best. Or as they say on the race track "both pedals in!"

Anything else is foolishness. I think magazines just like to talk about it because it makes people click.


Your argument works both ways. Let's say a truck is driving on the wrong side of the road and is going to flatten the Mercedes (not too uncommon, as truck drivers seem to be crazy and will overtake each other). Let's say it's possible to swerve onto the footpath and squash some pedestrians in order to avoid the truck. A human driver won't be solving the trolley problem here and will probably just die. It sounds like Mercedes' goal is for the car to save the driver and kill the pedestrians in this situation.


Read what the Mercedes guy actually said, though. Swerving is bad, because it in no way guarantees a better result, and increases the risk substantially of making things worse. Any tire grip you use to swerve isn't being used to stop, and the best way to minimize damage to the car, and by extension risk to the occupant, is to get kinetic energy as low as possible and rely on the safety cage for the rest.

The magazine writer frames it as "Mercedes is making a choice in the trolley problem, and that choice is the driver" as if that's actually a decision being made, instead of the entirely expected result of trying to slow the car enough that regular safety systems are likely to save the life of the driver. Mercedes is doing the opposite of trying to solve the trolley problem, they're declaring it pointless to try.

It's much more plausible to get the car into the "driver is very unlikely to die" kinetic range than it is to get it down to "pedestrian is very unlikely to die."

So, back to your hypothetical -- the Mercedes won't be swerving for the pedestrians, because that'll statistically increase the risk to the driver. The car may very well still get hit by that truck, but now it'll be on the driver side instead of the front crumple zone. And who knows what else happens after the car becomes ballistic because it collided with something.
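The kinetic-energy point is worth making concrete, since energy scales with the square of speed (illustrative numbers, assuming a ~2,000 kg car):

    def kinetic_energy_kj(mass_kg, speed_kmh):
        v = speed_kmh / 3.6  # km/h -> m/s
        return 0.5 * mass_kg * v**2 / 1000

    for speed in (60, 40, 20):
        print(f"{speed} km/h: {kinetic_energy_kj(2000, speed):.0f} kJ")

    # 60 km/h: 278 kJ
    # 40 km/h: 123 kJ   <- braking from 60 to 40 sheds over half the energy
    # 20 km/h: 31 kJ

So every bit of braking before impact buys a disproportionate reduction in what the safety cage has to absorb.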


I think that as car perception systems start to have an understanding of the 3D scene (i.e. not just adaptive cruise longitudinal controllers), joint control of the brakes and steering will become standard for accident avoidance and will be a big step forward. It probably is the case that braking ends up being the "more useful" degree of freedom (for one, braking latency is lower than swerving latency), but swerving out of the way of a car can be incredibly useful in certain cases.
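The tradeoff the parent comments describe is the classic friction circle: lateral and longitudinal tire forces share one grip budget. A minimal sketch, assuming a 0.9 friction coefficient on dry asphalt:

    import math

    MU_G = 0.9 * 9.81  # assumed peak tire acceleration budget, m/s^2

    def max_braking_while_swerving(lateral_accel):
        """Grip spent on lateral (swerving) acceleration is unavailable
        for longitudinal (braking) acceleration."""
        if lateral_accel >= MU_G:
            return 0.0  # the swerve consumes the whole budget
        return math.sqrt(MU_G**2 - lateral_accel**2)

    print(max_braking_while_swerving(0.0))  # ~8.8 m/s^2: braking only
    print(max_braking_while_swerving(6.0))  # ~6.5 m/s^2: braking in a hard swerve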


Usually there are no pedestrians running around on the autobahn, for which this system is intended.


Their plan (as it was in 2016) is to eventually have full self driving everywhere. Presumably the fundamental decisions they make now will carry forward.


Although it's pretty clear you don't care about an answer to your question - it's only designed to operate on highways.


> Drive Pilot can only be engaged on certain highways, and the car must not exceed 40 miles per hour.

> The system will only operate during the daytime, in reasonably clear weather, and without overhead obstructions. Inclement weather, construction zones, tunnels, and emergency vehicles will all trigger a handover warning [1].

There's a very limited subset of areas that one would be able to use this.

[1] https://cleantechnica.com/2022/03/23/mercedes-will-be-legall...
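Condensed into a sketch (the function and parameter names are made up; the real checks and thresholds are Mercedes' and not public), the operating envelope looks roughly like:

    def drive_pilot_available(speed_mph, is_daytime, weather_clear,
                              on_approved_highway, in_construction_zone,
                              in_tunnel, emergency_vehicle_nearby):
        # Every condition below comes from the restrictions quoted above;
        # violating any of them triggers a handover warning instead.
        return (on_approved_highway
                and speed_mph <= 40
                and is_daytime
                and weather_clear
                and not in_construction_zone
                and not in_tunnel
                and not emergency_vehicle_nearby)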


Mercedes' system seems as capable as my 2002 Nissan's cruise control.


You, however, aren't allowed to use your phone in these situations with the car manufacturer assuming all liability.


If Tesla still has issues with self-driving cars, and they are like a decade if not more ahead of everyone else, then I don't believe this PR stunt.


Wow. Or Tesla are the real masters of the PR stunt, and are not a decade ahead of everyone.

EDIT I watched this comment drop from 15 pts to 5 pts. The Elon apologists are out in full force.


I believe that Tesla is ahead in the self-driving industry from an overall technology perspective. What they have failed to do, however, is to tackle the problem of when and how much the user can actually trust their system. This leaves users to struggle to build their own mental models of the same ("Oh, it likes this road", "I don't trust it at high speeds", etc.) This is terrible UX.

Mercedes seems to be stepping up to the plate to actually take a stab at this problem. Inevitably the answers to these questions are not super impressive compared to Tesla's 'hey, it might work in any situation, you never know' approach. But I suspect users will really like it, even though the underlying tech is likely far behind Tesla's.


> I believe that Tesla is ahead in the self-driving industry from an overall technology perspective.

What info could you possibly have access to that would give you remotely enough confidence to assess this? I’m not saying you’re wrong, I’m just saying I doubt you have the info to claim this.


Reasonable. It's an 'I believe' claim, not a 'based on these facts' claim. But I am more informed than most, as I spent five years in the industry leading all algorithm/ML development for autonomous systems at Apple. (Who I am leaving out of this analysis, BTW :)


So what are you believing if it’s not based on facts?

Also you surely can understand that your work on AS at Apple has no bearing on what Tesla is doing, correct?

Am I missing something here?


A self-driving car is not only radar and a good model. Tesla has its own battery factories, its own mines, its own supply chain, and years of real-world traffic data from all around the world.

You have to look at volume: Tesla will soon be producing millions of electric cars per year.

They already won the market. It will take years for the competition to catch up.

If people want to gamble with their money and buy a car that has never hit real-world traffic -> let them do it, but every other manufacturer is going to have to go through the same painful process as Tesla already did.

Not even mentioning that no other company besides maybe Toyota has any real experience with electric cars.


What on earth does level 5 autonomy have to do with EVs?

Literally nothing above has any relation to autonomy.


It means that the competition has a LOT of things to focus on to even be on par.

Tesla, on the other hand, only has to focus on one thing now.

And none of those companies has reached Level 5.

So my bet would be on the company with the best focus.


Working as an eng leader for five years in the industry gave me a considerable base of factual knowledge about the state of the industry even beyond my own company.

I was just summing up my knowledge with an (I thought innocuous) 'I believe' statement. Sounds like you believe otherwise. That's cool with me, and I don't doubt you have good reasons why :)


Tesla owner here; I have friends that work at Cruise. Tesla is waaayy behind Cruise and Waymo on the self-driving front. Those other companies actually have fully driverless cars driving around San Francisco. Tesla is not even close.

I think GM will have the best self driving system if they manage to commercialize the tech cruise built.


"If", as the Spartans famously said to Big Alex's dad.

Ah, well, let's hope GM manages to commercialize the stuff.

(The Spartans were undefeated, but became irrelevant and dwindled, just a past legend for others to wave around as political advertising. Sic transit ...)


> they are like a decade if not more ahead of everyone else.

Mercedes has been investing in radar since the Tesla Roadster. They are ahead of Tesla when it comes to European regulation.


Maybe LIDAR is more useful than Tesla thought.


What mistakes is FSD beta currently making that would be solved with more accurate localisation of the vehicle? Are you aware of any situations where mistakes are being made as a result of errors in depth estimation?

Seriously, look at any FSD beta video and you’ll quickly recognise that its ability to sense the road is mature and robust; most errors are in the planner.


Tesla seems way behind companies like Waymo

It's all marketing hype, I say this as a Model S owner with FSD


Or did Tesla make a bad engineering decision that stalled progress?


Tesla and Elon Musk are lying through their teeth every chance they get.

They can't even detect stationary emergency vehicles, ffs.

Mercedes is accepting liability whenever their self driving system is engaged, meaning their insurance is on the hook. So I think they're pretty damn certain that it will work as intended.


In my uninformed estimation, I think Elon, like a lot of us 7-8 years ago, saw the recent advances in deep learning and figured they'd be capable of anything in a few years. You just need to scale them up and train the shit out of them and out pops self driving. Some of us have since repented but Tesla made too many promises so they've got to keep pushing that narrative or they become just another car company.


> I dont believe this PR stunt.

PR stunts usually don't come with official regulatory approval, though they commonly come with grand promises [0] and the actual product not living up to them [1].

[0] https://www.bbc.com/news/technology-53418069

[1] https://www.vice.com/en/article/3aqevk/the-government-is-fin...



