Audi claims the A8 is the first production car to reach Level 3 autonomy (theverge.com)
223 points by wallst07 on July 13, 2017 | 205 comments



As someone from Europe it is good to see the EU manufacturers finally showing some serious progress.

I always chuckled a bit when (tech) press would say 'Google has beaten auto manufacturers' after Google released some PR videoclip of them showing a car driving.

European car companies just don't see the need to enlighten the world about what they're up to, how they are progressing, but simply work until it's finished for production and then unveil it.

Still haven't seen that 'vastly superior' super-car from Google and Apple everyone claimed was coming to take over.

Tesla does seem to successfully run their company as a tech start-up: hyping new products that are 1 or 2 years from being in production, personification of the brand via the CEO's god-status, continuous delivery of software updates to cars, public beta testing, etc.

I am rooting for Tesla; I hope they find the capability to increase production levels to something that comes near that of the existing car giants.


"Google shows video of real-world performance and extensively details real-world issues, logs millions of autonomous miles"

Lame! Vaporware!

"Audi releases a press-release, no video, no demonstration, no real-world evidence"

European manufacturers sure are capable and know what they're doing! What a refreshing change!


No kidding. As a developer and infosec person, having seen countless non-tech company forays into software (take Toyota for example), I'm extraordinarily skeptical of automotive companies rolling out something of this difficulty.

I just don't see it as plausible that a company with little to no roots in software managed to bring on a large enough team from scratch, that understands software engineering well enough to build something like this to a high level of reliability.

Does anyone here really think that the same or similar teams responsible for the god-awful, un-updateable, UI disaster that is the center console nav in any car ever have somehow elegantly conquered autonomous driving?


It is common for people who write UIs and APIs to call themselves software engineers, but real engineering is what Audi and other automakers do for a living.

Demonstrating a "god-awful" UI as evidence that they can't do autonomous driving is silly imo. We're talking about people who design cars that can safely travel at 150+ kph. The tiniest bug or design failure can kill their clients in a spectacular way. We trust these guys with our lives.

Autonomous driving isn't about the UI, it is about math, systems, modeling, safety valves, rigorous testing. This is engineering, and it isn't taught in CS but in ECE classes. It is easier for an engineer to learn git than for a developer to learn differential equations. Airplanes, cars, etc. have been controlled by microprocessors for the last couple of decades. Sure, writing a polished website in Angular is cool, but engineers write the software that runs pacemakers. How can they not understand reliability?


The god-awful UI was not central to my point.

Your talk about their hardware engineering chops completely ignores the very real evidence we have that their software engineering efforts are utterly horrifying. Have you seen the reports about Toyota's software development practices?

https://news.ycombinator.com/item?id=9643551

By all accounts, while Toyota is a particularly bad offender, these kinds of problems are not unique to them. Software engineering is not something that historically-hardware manufacturing companies tend to have the culture to perform well.


The problem is that most hardware companies (including car makers) simply don't understand software. The other way around is also true: software companies do not understand hardware.


If a car company tried to fly to the moon tomorrow, because they are experts when it comes to motion, everyone would roll their eyes.

This is basically a similar feat in autonomous robotics, whose transport constraints happen to be a person-carrying car.

It's only if you formulate it as such that the gap becomes more obvious.


> There are a couple caveats, obviously. The traffic jam pilot only works on highways with a physical barrier separating oncoming traffic, and the use of the system is subject to the laws of whichever jurisdiction you’re driving through. So bone up on the rules of the road before pressing that button.

Being able to self-drive on highways with physical barriers between oncoming traffic is, like, the easiest use case for self-driving cars. This doesn't seem very advanced to me at all. Urban driving is the actual hard part.


True, but it's still the first time real-world customers will be able to let the car take complete control/responsibility in real-world traffic. For me as an automotive SW engineer this is a truly massive step forward, even though what you see from the outside doesn't seem that impressive at a glance.

Also, being stuck in a traffic jam during the daily commute is one of the most annoying and most stressful parts of driving. Not having to deal with this is a significant improvement in the driver's quality of life.


Yes, I think this was an ingenious move: find a situation that is somewhat easier to deal with yet solves a real-world pain-point.

Almost an Apple move.


An Apple move would be building a car without windows and telling me that it's progress.


Is there any reason to believe a present day Model S can't pass all the tests the A8 will go through to be approved for "eyes off" autonomous driving? (Other than changing the parameters for the "hands on wheel" warning)


Considering people in the past months have died doing exactly that, yes, there is reason to believe that.

And just look at http://autoweek.com/article/autonomous-cars/tesla-model-s-au...


The A8 autonomy is only for freeway traffic jams. What you linked does not seem to be an example of that.


The A8’s lane assistant can also handle this stuff, while the Tesla’s lane assistant can not (obviously).



True. But it's nice to have that usable today :) Driving regularly on highways in Europe, it's a great feature and I'm glad my car supports it (~2yo Volvo).

Looking forward to more in my next car.

Edit: reading below, my car is ~L2, but I trust it more as L1.


Highway is really the only setting where I have ever been at risk of falling asleep behind the wheel. Also, for a human being, those jams are quite stressful: cars pick up a lot of speed before stopping abruptly a few tens of meters later.

As a pure car safety feature, this gets double thumbs up. That's the culmination of all the tools they have added recently: lane departure warning, adaptive cruise control, blind spot warning, attention detection, ...

That said, it does indeed do very little for the end goal of having a fleet of manufacturer/rental/Uber/Google cars that drive you from your house to your destination.


As a driver with an admittedly short attention span I think this is great! Following the little yellow and white lines for hours on end is something machines are better at than humans.

Urban driving requires a skill set machines don't have. You have to be aggressive but not hostile. Else, you'll never get anywhere in rush hour traffic.


I would have thought the sheer speed of highway driving makes it harder to recover from simple errors like not spotting a pothole or a deer crossing the road.

In both highway and urban driving I think the "hard" bit is dealing with the unusual, like a line of ducks crossing the road, a civilian directing traffic because an old person has collapsed in the middle of the street, or any situation in which a human being is providing important context via voice, e.g. don't drive down there, the road will be blocked until this truck reverses out to avoid a low bridge.


That's interesting. How will these cars deal with cases where someone is directing traffic? It's not like these folks use some sort of universal gesture to allow traffic to proceed or to tell them to stop.

Sometimes you have police directing traffic, sometimes it's construction workers, sometimes it might be utility workers. Sometimes they have the stop-sign-on-a-stick, sometimes they don't.

In order to avoid excessive delays I wonder if they're going to have to consult the human passenger for advice on whether to proceed?

I believe a significant amount of work has been put into gesture recognition in computer vision research, so maybe this is all moot, but training a NN on gestures seems kind of dangerous in cases where someone uses gestures for stopping/proceeding that are misclassified.

I wonder whether there will have to be more machine-readable devices installed along the roadway to help autonomous vehicles? In this case, perhaps a portable traffic control device which provides visual signals and perhaps even RF signals which talk to cars to direct traffic? These devices would be used at accident scenes or construction zones, etc.


Until we have both smart cars AND smart highways, true autonomy will be incredibly challenging. Once the highway grid is made smart and the cars can talk with it, construction will just require an engineer to add the updated routing info into the highway for the cars to heed.


Urban driving in places that are not sunny California


> I always chuckled a bit when (tech) press would say 'Google has beaten auto manufacturers' after Google released some PR videoclip of them showing a car driving.

While I also want to see European car makers tackling these challenges, how is the posted report different from a fluff piece about Google's experiments? Other than the lack of a video demonstration.

At this point Audi's announcement sounds like a bunch of over-promised, ambitious claims with a high potential of turning into vaporware. I might be wrong and all of it is very real and working, but you can't come to that conclusion by reading that article alone.


It's different because Audi is actually building and selling a product.


I agree with you to a point, but Google's target is much more moon-shot. Door-to-door complete self-driving in all circumstances, not just highway under careful conditions.

And by all indications they are succeeding - the roadblock is regulations and turning it into a consumer product. The tech is there.


The tech is not there. If it were they'd be making more noise about it. Most of the big players (besides Apple) have demonstrated self driving cars in familiar environments; downtown SF or whatever. The problem is most of the country is not downtown SF. There's a lot of dirt roads out here. There's a lot of snow and heavy rain. Nobody has demonstrated self driving under all conditions. Or even under all non-extreme weather conditions.


We'll eventually get there but it might be another 20-30 years before we live in a world where a robot car will come ferry you off to wherever you might want to go.

I suspect the near term ~10-20 years will have some form of autonomous public-ish transportation running on certain roads that are routinely re-mapped. Highways and busy city streets are likely candidates for autonomous operation.


You make a really good point. Given the economic impact to companies, I think it will be much faster.

For instance, the Germans flew their first jet in 1941. Only 17 years later, the Boeing 707 was flying, and transportation around the world changed immensely.

Given that we had just come out of a war (2 if you count Korea), that's pretty fast. I'd think a comparable timeframe would apply rather than your 30 year estimate. Heck, 30 years ago you didn't even have a cell phone!


I'd like to see these cars' behavior when they encounter a road work crew with a closed lane and flaggers directing traffic.


Daimler drove an autonomous S-Class around Paris in 1994: https://en.wikipedia.org/wiki/Eureka_Prometheus_Project


FWIW, comparison testing showed that Tesla's autopilot was far superior to BMW's and Mercedes', at least as of last year: http://www.caranddriver.com/features/semi-autonomous-cars-co...

I agree that there's some genuinely impressive and underrated stuff hitting the market from many manufacturers (even in sub-luxury brands), but Tesla has a real and formidable advantage in the amount of data they're collecting.


That "test" has two massive issues:

1) They only pitted older models against the latest update of Tesla's system. E.g. the 3 year old S-class was tested instead of the new E-class, which had just started to be delivered at the time. (And had been tested by car journalists for months.) At the time that S-class entered production, Tesla had not had any "autopilot" at all.

2) They tested only lane keeping. Unsurprisingly the Tesla was better at that. They did not however test automatic emergency braking, autoparking etc. - features that Tesla either simply doesn't have or isn't as good in.

tl;dr: Cherry-picked test.


I'm all for more comparisons with better methods, but isn't demonstrably superior lane-keeping enough to seriously question the GP's assertion that European car makers are quietly far ahead in AV capability? By all appearances, the A8 can't even change lanes like last year's Model S - I can imagine scenarios where Audi could do better at other tests, but I'd sure bet on Tesla to do braking/parking/etc better.


Well, there is a lot of behind-the-scenes work with major suppliers. Auto companies tend to be very conservative when introducing new technologies because liability is a major concern and, possibly even more important, they cannot afford to tarnish the brand.

Because of their built-in audience they don't need to make spectacular announcements to generate interest; companies without large established bases in the field do. Tesla is special in that they have a CEO who doesn't seem to have an off switch. This is not always a positive, but they still manage to handle it well.

The 3 will be an interesting car, but still highly limited with regard to what many expect cars to do; EVs are still a very young technology. Autonomous driving is where the real money is: it can be applied easily to many types of vehicles. Don't overlook the use of similar tech on the mundane, like mobility scooters.


Google hasn't even released a complete PR demo of their cars, just a few sporadic shots of one of their Koala cars driving around with a blind guy in the driver's seat. The only tangible data that's been released from them is last year's California DMV disengagement reports, in which Waymo reported 1 unplanned disengagement for every 5000 miles of testing. But they were driving around downtown SF back in 2011; most of what they've been up to in the meantime is validating their software to the safety and reliability standards they need to meet before this technology can be foisted on the public. They were ready to start moving folks around Mountain View, totally driverless, back in 2015 with their Koala cars, but DMV regs put the kibosh on that.


> European car companies just don't see the need to enlighten the world about what they're up to

I certainly appreciate Volvo letting the world know what they're doing to test autonomous cars. I'm glad European carmakers don't work the way you're saying they do.


This press release, when you read the details, is not very impressive.

The headline sounds good, but the details are that it only works on highways, up to 38MPH, and won't be enabled right away; they are going to roll it out later. The one notable thing seems to be that you can be stuck in traffic and take your hands off the wheel, unlike Tesla where you have to have your hands on the wheel.

I'm not sure how they can call this "first", since it won't even work on the new cars released next year until some future software update. How is this different from Tesla, which is already shipping cars that can do this and more with a software update?

Full disclosure: in Dec I sold my Audi A8 to get a Model S.

Currently, the Model S can basically drive itself in heavy traffic, but it will periodically want your hand(s) on the wheel. I was recently in stop-and-go traffic on a 2-lane road for an hour and the Tesla did great. The A8 won't be able to do that there; it won't work on anything but a divided highway, which this was not.

Don't get me wrong, I've had 5 Audis over the years and have loved them. But this really isn't "news", Audi is still playing catch up.

Oh, and from the testing I've done this winter, the Tesla AWD system is at least as good as quattro. Probably better.


Yup, pretty on the mark.

Also WRT snow performance Tesla has a pretty large advantage over any ICE since they can apply different torque amounts to each wheel in a near sub-1ms timeframe due to two factors:

1. Lower rotating mass (you only have the rotor, axle and wheels in the physical drivetrain).

2. FETs respond much faster than an internal combustion engine.

We got a fair bit of snow a while back and even without dedicated winter tires I was incredibly impressed with the snow/ice handling. Much better than our Subaru by a long shot. You can still get yourself into trouble but it was much harder to overcorrect than on our ICE.


This very much represents what I've experienced with it. The traction control in the Audis I've had would REALLY retard the engine, and then take a really long time to recover from it. So it definitely does respond faster.

I also think that having two truly independent motors is vastly superior to one motor and a Torsen center diff sending it front and back.

The quattro would use braking of one wheel to send power to the other side on one axle, with a traditional diff. I haven't really figured out what the Tesla does from side to side. But it seems to work great!

We didn't get a lot of snow after Dec when I got it, but what I did get to play in it was very impressive.


Yeah, they use an open diff + electronic braking to send power from wheel to wheel; seems to work pretty well.


"This press release, when you read the details, is not very impressive."

This press release, like all high end / performance car press releases of the last 2-3 years, can be summed up as:

"blah blah, not electric, blah blah"

Like you, I am an ex-A8 owner who loved (loved!) those cars and have no interest in buying another one - no matter how nice or feature filled or well-designed they are. There's no way they don't know this and it becomes more and more stupefying that they cede this highest-of-margins segment to Tesla.

"Oh, and from the testing I've done this winter, the Tesla AWD system is at least as good as quattro. Probably better."

Exactly - it has two motors.

My disappointment is wide and deep as the major manufacturers continue to refuse to make a modern car. However, it is Audi and Volvo in particular whose inaction is truly mind-blowing. Audi because they could continue to own AWD/4WD, and Volvo because their target market is ripe for the aesthetic and social benefits of electric cars.

Instead, Audi has spent ten years going down the "in 2-3 years we'll have an electric platform" road, and Volvo has committed to a shitty-hybrids-with-lawnmower-engines-inside platform for the next 8-10 years.


You are right that there's no way they don't know this. My Tesla sales guy said "I get a lot of people trading their Audi for a Model S". Then when I picked mine up, the couple who had the same time slot to pick theirs up was also trading in an Audi.

Right after I got it, I took a friend to pick up his Audi from the service center. I walked in with him and the service guy said to me "You're dead to me!" He is a really nice guy named Ryan, but I'll be honest that I'm kind of tired of being so tight with the Audi service guy. I'm hoping the Tesla doesn't cost so much down the road. I've already avoided 3 oil changes, so it's off to a good start. :-)


The reason they have to cede to Tesla is because of battery supply. There's a reason why Tesla invested in a huge battery factory and established contracts with a lot of providers so they could churn out the batteries cheaply.


Daimler has built and is building large battery factories. Posts about those don't get upvoted on HN. In the least it's best to see how this forum is a very partial / biased view of the industry.


"The reason they have to cede to Tesla is because of battery supply."

Then they don't have a product.

There is no longer any such thing as a luxury vehicle with an ICE engine in it. I am not spending >100k on a nice new car with some high vibration, overly-complex[1], put-put-puttering 3 or 4 cylinder motor. I don't care how nice it looks or how fancy the software is.

[1] Fuel economy regs make it necessary to turbo and super charge ICE engines to get 300+ horsepower out of these little 4 and 6 cylinder engines. These are not simple devices...


I just want to say congratulations to all of the engineers behind it who are reading this thread! I'm impressed!

Often when I see a big project come to an end, I cannot stop thinking about all the hours of work put into things that didn't make it to the end: features they had to abandon or hold back until the next revision.

I'm very curious how well it will work and will try to get my hands on it as quickly as possible. :)


Level 3: the "you need to stay awake even though you have absolutely nothing to do" mode.

Probably going to turn out to be the most dangerous.

I am planning on dying in my sleep, but not because a Level 3 autonomous car decided to throw a "Can't handle reality" exception.

(This somewhat tongue in cheek response should in no way be read as denigrating the awesome technical achievement Level 3 represents. All speed to Level 4 and Level 5)


Not sure about the Audi, but Volvo's level 3 is being built to pull over and park if you are asleep when it wants to stop driving, so not that dangerous.


My Audi A3 has traffic jam assist and can drive itself in traffic jams as well. Above 20km/h it'll require steering input every ~12 seconds, but not below. When it does not receive steering input it'll turn on the hazard lights and slowly come to a stop without leaving the lane it's in.

Sensors: all-around ultrasonic, 1 monochrome camera, 1 front radar.



Well, to be fair, you could still die in your sleep in an autonomous car...


In case anyone else was wondering what these "levels of autonomy" are, which the article's author so haphazardly assumed were general knowledge, see SAE J3016A[1], which is free to download with a registered SAE account...or your favorite unofficial method.

Table 2 on p. 19 of the 30-page recommendation is a nice TL;DR summary.

Which begs the question...what part of Tesla Autopilot doesn't satisfy the SAE definition?

[1] http://standards.sae.org/j3016_201609/


I found the Consumer Reports version easier to read, if at the cost of precision.

http://www.consumerreports.org/autonomous-driving/levels-of-...


Summary:

- level 2: "hands off"

- level 3: "eyes off"

- level 4: "mind off"


I remember reading recently that we should probably be avoiding level 3 in production altogether (and possibly even level 2), since the general populace doesn't have the capacity for responding quickly to failures in automatic guidance systems; unlike perhaps aircraft pilots who are intimately aware of the limitations of the systems.


https://www.theverge.com/2016/4/27/11518826/volvo-tesla-auto...

"Victor says that Volvo believes that Level 3 autonomy, where the driver needs to be ready to take over at a moment's notice, is an unsafe solution. Because the driver is theoretically freed up to work on email or watch a video while the car drives itself, the company believes it is unrealistic to expect the driver to be ready to take over at a moment's notice and still have the car operate itself safely.


Absolutely agree, and not just because of the reaction time, but the lack of context in the human that has been reading their emails/whatever. The first thing the human will try and do is instinctively look up and around to assess the nature of the threat. That takes time. For mine, I can't see the insurance industry being too happy with anything between L1 and L4 being on the roads in serious numbers.


But that argument ignores that these systems help stop accidents from happening 99% of the time and focuses only on the 1% when they fail (numbers are made up). Automation has led to accidents in aviation as well. But it's still being pushed forward, because when looking at total miles traveled, more automation tends to be significantly safer. The same principle holds true for ground vehicles.


Driving is infinitely harder than flying, from the perspective of a truly autonomous system. Once at altitude, there isn't much to run into. There are also very few possible emergencies that demand the pilot's attention within the next few seconds. Midair collisions are about the only exception I can think of, and there are already automated systems in place to prevent those. Bird strikes might be another, but those don't tend to coincide with periods of autopilot usage.

For a motor vehicle, though, the idea that "eyes off" and "mind off" should exist as separate levels of autonomy is just nuts. "Eyes off" will absolutely be interpreted as "mind off" by the majority of real-world drivers. It doesn't make sense to offer an "eyes off" solution at all, IMHO. Wait until we can genuinely call it a "mind off" solution.


"Eyes off" means a driver is available. It's perfectly reasonable to cope with a problem by slowing down, pulling over and asking the driver for help. This is a choice that aircraft autopilots don't get to make. I agree that eyes off should not involve sounding an alarm and making the distracted driver take over in an instant at highway speed; any solution that did that would be really dangerous.

"Mind off" would need to cope with there being no capable driver, or even no occupants at all. And even those systems will need to phone for roadside assist sometimes.


I think that is level 4 you are describing as "eyes off", not level 3, at least according to the Consumer Reports summary of the standard:

"The car can drive itself, but the human driver must still pay attention and take over at any time. The car is supposed to notify its driver if intervention is needed." [my emphasis.]

Pulling over or shutting down appears under level 4.


A system where the failure case is to slow down, pull over, and ask for help already is a system that can drive without a human inside - if the "need for a capable driver" can wait until you're stopped, then it can stay stopped and wait for a few hours until a driver (or mechanic) arrives or it gets towed.


There is a huge range of possible failures while flying, which is why small aircraft are so dangerous. If nothing breaks and the weather is perfect then sure, it can seem safe, but in practice you can't make those assumptions.

Saying commercial aviation is safe is like saying school buses are safe; it ignores people going out in bad weather, etc. Commercial aviation has a great safety record because they are cautious, not because it's an easy problem.


I don't follow your argument - are you aware of how much flying is conducted under autopilot? Commercial aviation is very safe.


Autopilots on commercial aircraft work well, because of the infrastructure surrounding them.

Small aircraft are not commercial aviation. One of the most common failures in small aircraft is running out of fuel and then having the sudden need to find somewhere to land. That's a very rare problem in commercial aviation.

Basically, commercial aviation succeeds because it avoids the vast number of failure modes. Fully automated car autopilots are not a simpler problem; they simply need to deal with the mess which aircraft autopilots sidestep. Ex: a 747's autopilot does not have a "look for an open field I can land in, and can then take off from again after I refuel" mode.


I am still not sure I get it - are you saying that autopilots are not a good analogy for autonomous cars because the car problem is more like that of using an autopilot in a small airplane? If so, be aware that autopilots for small single-engine airplanes are available [1]. Automobile automation is actually a much harder problem than that, and the "mess which aircraft autopilots sidestep" cannot be "simply" dealt with.

BTW, if you do have an engine failure in a single-engine airplane (or double failure on an A320), being able to take off again should be low on your priorities in picking a place to put down on.

[1] http://wiki.flightgear.org/Bendix/King_KAP140_Autopilot


The trick about autopilots (especially those in smaller aircraft) is that they operate almost exclusively based on information about their own state. Wing levelers just operate to keep the wings parallel to the ground, using information from a gyro. Direction keepers keep you on a particular heading, as determined by a gyro (which is corrected by a magnetometer). Even landing systems rely on radio signals being beamed at them.

The big exception to this is collision avoidance systems, which use radar combined with transponders, and are limited to fairly expensive aircraft as a result.

Autonomous cars have to start with radar-based collision avoidance simply to provide "smart" cruise control, and work outwards with lane detection for lane keeping, lidar and prediction for external body collision avoidance (deer, children, bikers), and so forth.

Aircraft autopilot is dead simple in comparison, and still has its share of failures. And speaking of failures, the other advantage of aircraft is that in the case of failure (in any phase of the flight other than takeoff/landing), pilots have tens of seconds to take over and hours each year of deliberate practice at flying compromised aircraft. Drivers, on the other hand, will be lucky to have a single second to realize their car isn't going to stop when they expect it to.
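To make the gap concrete: a basic wing leveler really is just a feedback law on the aircraft's own attitude. A rough sketch (gains and names made up by me, not any real autopilot's code):

    # Hypothetical PD wing leveler: drive roll toward zero using only gyro data,
    # no model of the outside world required.
    def wing_leveler_step(roll_deg, roll_rate_dps, kp=0.8, kd=0.3, limit_deg=20.0):
        aileron_cmd = -(kp * roll_deg + kd * roll_rate_dps)
        return max(-limit_deg, min(limit_deg, aileron_cmd))  # clamp to aileron travel

Lane keeping in a car has no such shortcut: the reference it has to track (lane markings, other vehicles, pedestrians) lives entirely outside the vehicle.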


I think you're still missing my point. A small aircraft running out of fuel is a common problem. If you turn on an autopilot when there is not enough fuel to get to an airport, it doesn't deal with this situation.

So, no, we don't have fully autonomous small-aircraft autopilots. We have the equivalent of lane following + collision avoidance, not the full range of piloting skills, because they are designed as assistants for pilots, not replacements for pilots. Worse, a pilot in the aircraft or on the ground also needs to deal with air traffic control, which is a vastly larger jump than reading road signs etc.

PS: Yes, aircraft autopilots can deal with ~99% of flying time just fine. But that last 1% has dragons. An autonomous car that could drive on highways in most weather conditions but would drive into road construction is a good analogy.


Sure, the question is whether building the interface for the systems as level 2 or level 3 is the safer choice.

A level 2 system can still do all the same interventions as a level 3 system, it just doesn't have the further relaxed state for the driver.


As 100% reliability is unattainable in practice, I would assume this has already been taken into account in defining the levels - which leaves the question hanging: while, in principle, there may be a distinction between "hands off", "eyes off" and "mind off", will there be a distinction in practice? Volvo (and Ford?) seem to think not.

And is there any "hands off" system that tests if the driver is "eyes on"? With eye-tracking, it is feasible, but I have not read of it being used.


The reality is that planes can be automated completely. The only reason they aren't is because of public perception, and the fact that if there's a bug and it crashes, 300 people die all at once.

If one or two automated cars crash due to glitches, however, the risk is perceived as being a lot smaller because those individual incidents would theoretically affect fewer people. Also, car automation would end up saving more lives overall in the grand scheme, and would also solve a lot of traffic problems in the long run.


It's looking more and more like I'm going to get what I want. I'm almost 50, so I could get a fully autonomous car before I can't drive. Won't that be awesome though?


I hope it is much sooner than that!


I hope and expect it to be within the next 5 to 10 years, tbh.


It probably depends on your budget. For a $100k car? Could be. For a $20k car? Now that's quite a challenge!


5 or 10 bucks. Pay per trip.


I hope we get this sooner than the privately-owned self-driving cars. So long as folks can buy monthly passes (or give poor folks the passes at free and/or reduced rates), I think this has the potential to greatly improve lives.


Municipalities that run demand response bus services will be quick adopters.

One of their big costs is drivers, so they tend to use a few medium-sized vehicles. With no drivers, they can move to smaller vehicles and deploy more of them (they probably even shift their level of service up, because why not).


That is of course exactly what I'm talking about. If you're old, having easy access to transport like that will be a revolutionary change. Old people will be able to safely live much further from cities.


What I find even more exciting is that they're finally deploying their predictive suspension technology in a production model.

What this system does is utilize the front cameras to scan and map the road surface ahead. It can then apply up to 20 kN of force via electric motors to raise or lower the wheels individually. This mitigates bumps from potholes or uneven roads, which, apart from being more comfortable, will also ensure the suspension lasts longer. This system also decreases body roll during turns, and squat / dive under accel / decel. All of it is powered by the 48V semi-hybrid electric system. Expect this to become much more commonplace with further electrification.

Mercedes has a similar technology called Magic Body Control, but it can only tighten or loosen the dampers, rather than directly move the wheels. What the Audi system can do, which the Merc cannot, is to raise the side of the car during an impending t-bone collision, which allows the main body structure to take the impact instead of the doors.


The impending t-bone collision thing sounds fascinating. Do you by any chance have any links with more details on that?


At this point this is vaporware. The car is still nearly a year away from being on sale, and the software will come even later. Talking up something like this so far ahead is just an attempt to depress sales for its competition.

Calling it "level 3 autonomy" is misleading as well, given how highly constrained the situations are in which it can operate at level 3. From what I can tell from the press release, it only promises that the car can handle a freeway traffic-jam situation, ie, it knows how to stay in a lane and not hit anyone when traffic is crawling on a freeway.

Admittedly such a system would be nice for millions of people who commute through such conditions daily, but that is probably one of the easiest tasks for an autonomous system to perform because huge amounts of complexity can be ignored: pedestrians, cyclists, intersections, road hazards, trip routing. This is a baby step taken at the edge of a huge chasm. Good luck!


My 2015 Hyundai Genesis can already do exactly that. It has automatic cruise control with lane keep assist. Sure, it beeps at me if I leave my hands off the wheel for more than 10 seconds, but if you disabled that in the firmware, it would technically work.


In an advert I saw yesterday, this car also features wireless charging, through a charging plate installed on the ground.

I know the HN crowd is very pro-Tesla but do not underestimate traditional car-makers.


Wireless charging for what? It's not an EV, it's a hybrid. And not a plug-in hybrid, from what I can tell.


Charging is optional but supported.

Random video of A8 wireless charging: https://www.youtube.com/watch?v=ME8nFtbTH44


I must have missed something in the text but doesn't the current production VW Golf (part of the same company as Audi) also feature traffic jam assist and lane assist?

I think it uses RADAR instead of LIDAR but I don't know for sure.

Could anyone explain what the new A8 is offering in addition to the Golf?


In a "level 3" system, how does it know the destination? I suppose you have to put it in first, as with a normal GPS? Then, what does it do at the point the GPS would say, "your destination is ahead on the right, the route guidance is now finished"?


My VW's onboard GPS (same group as Audi) actually says "you have reached your destination" when passing in front of the destination.

But I suppose it will probably start scanning the road for parking spots, like the auto-park feature that has existed for years, as in this demo from a few years ago: https://www.youtube.com/watch?v=vt20UnkmkLI


As described the feature seems only to be for highways, so I guess it can only go straight / no turns. Most likely route guidance will tell the system that you need to take control before the required exit.


There seems to be a fairly big caveat in that "the A8 is capable of driving all by itself at speeds of up to 37 mph."

And also:

"The traffic jam pilot only works on highways with a physical barrier separating oncoming traffic."


So, not in London where it would be useful...


I don't know. I've spent a load of time stuck in traffic on the North Circular where there are barriers a lot of the way. But then there are junctions with traffic lights quite regularly I suppose.


Actually yeah, the north circular qualifies. The south circular? Nope.


I find comparison tests, like this one from last year, far more useful than marketing or discussion of "levels": http://www.caranddriver.com/features/semi-autonomous-cars-co.... tl;dr: Tesla's autopilot was substantially better than BMW, Infiniti, and Mercedes.


I did not expect Audi to be at the forefront of autonomous cars! But what do these levels exactly mean? 'Level 3' and 'level 4 by 2020'



Sounds like a feature my Volvo XC-90 already has. Up to 35 MPH it can follow traffic in a traffic jam. If it gets confused it beeps and tells me I need to steer. I use it all the time, I love it! And it's a 1 year old car. I can only imagine what Volvo will have once the new Audi comes out.

And I'm saying that as an Audi fan. My other car is an A4.


I know the article mentioned NVIDIA GPUs but who are their partners? Is this developed by Mobileye?


According to an Audi video on the new A8, this is the computing hardware inside: https://i.imgur.com/euwi7eZ.png

In short:

- MobilEye EyeQ3

- Nvidia K1

- an Altera Cyclone FPGA

- an Infineon Aurix microcontroller

Each is used for a few different functions.


Thanks.

Impressive specs. And it looks like Mobileye is inside every self-driving car I've seen.


It's likely to be Sony; VAG (owner of Audi, VW, Skoda, Seat, etc.) has been working with Sony for many years, and had a Level 4 autonomous vehicle prototype in 2008.


The new A8 looks quite a bit like the new Lincoln Continental. If you integrated the door handles on the A8 with the window trim, it'd be hard to distinguish between the two cars from an appreciable distance - at least the rear, profile, and 3/4 views. The rear light arrangement, the integrated dual exhaust ports, the silver trim character lines, even the A/B/C pillars.

A8: https://cdn.vox-cdn.com/thumbor/1--FKfJ3y71OTTYIz91rno4mYNs=...

Continental: https://www.cstatic-images.com/stock/1170x1170/99/img-122263...



But a Model S does all of this already. What makes this "far ahead" of a Tesla? Is it that I don't have to keep my hands on the steering wheel?


That's right - requiring you to keep your hands on the wheel makes the Tesla not level 3.


A Tesla S does not do this at all. This article is about a level 3 traffic jam assist.


The system as described is considerably weaker than what my previous generation Model S actually does. On a highway with a barrier and traffic I can basically leave autopilot engaged indefinitely without having to correct it. Perhaps Audi handles a wider variety of weather conditions?


That's the Tesla "self-crashing car" feature. Watch these videos.[1] Teslas will run into a clearly visible obstacle partially obstructing the lane. Tesla is a level 2 system with inadequate safety interlocks and crappy obstacle detection.

[1] http://autoweek.com/article/autonomous-cars/tesla-model-s-au...


>Tesla is a level 2 system

Does autopilot actually meet the qualifications for a level 2 system? AFAIK, since autopilot is basically lane following plus adaptive cruise control and braking, wouldn't that be considered level 1?


Yes, the Tesla would do that, at one point in Autopilot's history. Perhaps it still will, but for the past many months I have not had to disengage it in those types of scenarios, to the point that the car has reacted before I have, and this is with the previous generation of sensors. From a technical perspective Tesla has come quite close in practice in a three year old production vehicle to what Audi claims to be ready to ship. With the current generation of sensor package, I would not be surprised to see Tesla's vehicles achieve L3 in practice (not at a regulatory level) before the new A8 actually starts deliveries. My point being that Tesla's technology has taken quite a leap from where it was originally, even with the original sensor package. Treating what Tesla is shipping in software as a fixed point and saying "this would put Audi far ahead of competitors" (as the article does) misses what has allowed Tesla to make such rapid progress in the space: continuous iteration on the live platform based on data collected from their existing fleet across all conditions.


Do you have statistics showing that, per kilometre, it makes more stupid mistakes than a human driver?

Lots of human-caused accidents involve driving into obvious things too. I don't drive drunk or sleepy, so maybe I'm above average, but my only standard of safety for a self-driving car is that it's safer than me driving.


Sample size is too small to tell. However, if you ignore that, Tesla's Autopilot is at least 2-3 times worse than the average human in a passenger vehicle. And it only gets worse if you restrict further to middle aged people in expensive vehicles.

Source: http://www.greencarreports.com/news/1106613_how-safe-is-tesl...

Stats are near the bottom of page 2 and the top of page 3.

TLDR: Tesla compared their numbers to a statistic that includes much more dangerous forms of transport (bikes, 18-wheelers, whatever). The Insurance Institute for Highway Safety did a study on just driver fatalities in passenger vehicles (cars and light trucks) and came up with 1 fatality per 438 million miles driven, versus the Tesla figure of 1 per 130 million (roughly 3.4 times worse).

Whole article is worth reading as it goes into more detail on statistical issues with Tesla's safety statistic claims.

I'll add that human driving ability is not uniformly distributed. Most accidents are due to particular demographic groups: people that drink and drive, teenagers. It's entirely possible for a self-driving car to be worse at driving than most humans, while still better than the average. In that scenario getting really bad drivers into self-driving cars would improve average accident numbers, but getting non-bad drivers (aka the majority of drivers) into self-driving cars would make average accident numbers worse.
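To put made-up numbers on that point (purely illustrative, nothing here is a real crash statistic):

    # Hypothetical figures: assume 10% of drivers cause most of the crashes.
    good_drivers, bad_drivers = 90, 10      # per 100 drivers
    good_rate, bad_rate = 1.0, 20.0         # crashes per million miles
    av_rate = 2.5                           # assumed self-driving crash rate

    human_avg = (good_drivers * good_rate + bad_drivers * bad_rate) / 100
    print(human_avg)             # 2.9 -> the AV beats the *average* human
    print(av_rate > good_rate)   # True -> but it's worse than the typical good driver

So whether the fleet-wide numbers improve depends entirely on which drivers end up in the self-driving cars.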


It's absolutely irrelevant if it crashes "less" than a human driver. It just cannot fail to detect an obstacle in front of it to be allowed on the road, period. If there is any edge case where that can happen, then the car shouldn't be certified as safe to use. Like, I don't understand how it's not obvious. A heart surgery machine that kills the patient once in every 1000 operations would be taken off the market faster than you can say "litigation", even if a human surgeon has a worse survival rate (say 1 in 100 operations, so let's say the machine is an order of magnitude safer). That has already happened with radiotherapy machines: there was a model that on average definitely saved lives, but it had a failure mode where it would kill a person. Would you continue using it? Of course not.

Or airplanes - autopilot in planes has contributed tremendously to improving flying safety, but every time there is a crash under autopilot both Boeing and Airbus will ground every single plane of the same type to figure out the cause, they don't just go "oh well, it's still safer than flying manually so it's good for us".

I really don't understand this notion that an autonomous car merely needs to be "better than average driver" to be allowed on the road. Absolutely disagree here.


Here's a justification for you: it's impossible to get a failure rate of 0, so there has to be a point where we accept a satisfactory (for now) level of safety.

Traffic kills so many people annually that even a slight increase in safety is going to save more lives than are lost to aircraft accidents, radiotherapy machine failures and such.


You can't get to a failure rate of 0, but I can understand people wanting the car to drive at least as well as a human who was in, say, the top 5% of human drivers.

Who'd want an autonomous car that's merely better than the _average_ driver, when every driver knows they're above average :)


Of course, and I acknowledge that accidents will happen anyway. But we cannot have a car on the road with "autonomous" driving functionality when it is known that the sensors can't detect an obstacle above half-pillar height (like in that famous Tesla crash), or that the lane detection can fail in intense sunlight (as dozens of videos on YouTube show). It should have no known failure states upon release, and then ones which are found should be fixed later. I'm just worried that we're in such a rush to release autonomous cars on the road that we are ignoring known issues for the sake of improving average safety on the road - literally no other industry works like this.


Isn't your argument why we have the saying that "the best is the enemy of the good"? You seem to be dismissing any automated product that is not perfect even if it is demonstrably already much safer overall than the manual human equivalent. Surely this can't be the right policy if the goal is to maximise safety.

This is not to say that current autonomous vehicles actually are much safer than the human equivalent. Indeed, everything I've seen so far suggests that they have a long way to go before crossing that threshold, so personally I'm a sceptic about claims that we will see viable fully autonomous vehicles any time in the next few years. But this doesn't seem to be your argument here.


> Isn't your argument why we have the saying that "the best is the enemy of the good"?

Regardless of the OP's position on this, I do think the reality is that in practice it's difficult to sell or legalize anything that isn't significantly better than 'the average driver'. Whether it makes sense or not, we seem to demand any 'AI-powered' system to be better than the average human. I personally wish this wasn't the case, but it appears to be so.

For example, I personally marvel at the abilities of Siri and Google Now, at the very least when it comes to understanding what I'm saying. It's quite possible, in fact, that both are often objectively better at understanding me than most people around me: more than once I said something to Siri where I immediately realized that it was mumbly enough for any real human in my vicinity to ask 'what?' (often only to right away act on what I said, indicating that they did manage to interpret what I'd said). And yet Siri or Google Now actually transcribed what I'd said.

Being as good as the people around me would not be good enough, because every time Siri fails it makes me shake my fist at the whole concept of 'personal assistants' on my iDevices, rather than sigh at the umpteenth person around me saying 'what?' and then obviously acting on what I'd said, which is much more common.

I've not studied this properly, so I might be wrong. But I get the impression that I hold my AI to a much higher standard than I would my human interactions.


How many other industries have such a high potential for saving lives by improving average safety? Traffic kills more than 80000 people every year in the EU.


Would you rather have a large chance of your car driving into something it couldn't detect and a slightly smaller chance of being seriously injured or the status quo? Keep in mind that the odds of being seriously injured in a car crash in one's lifetime are already fairly low. How many people do you know that were seriously injured in a vehicle crash?

Most of the safety of AI comes from it being implemented in vehicles that are already far safer than average.


I'd rather have the solution with fewer serious accidents, even if it introduces types of accidents that don't happen with human drivers. I know a handful of people who were seriously injured in traffic.


I think accidents in traffic will be completely 100% solved by brake assist and collision prevention systems - I think they are meant to be mandatory in all new cars sold in US by 2020 if I remember correctly? So not long now. AI solves a different category of problems and isn't really necessary to solve crashes when there is a lot of cars driving close to each other.


I'm looking forward to those things as well. But bicycles don't have brake assist. It takes a very sophisticated collision detection system to prevent right-hooking a cyclist.


Electric-assist bikes/mopeds will let most cyclists safely keep up with traffic in the kinds of places where you have to worry about those kinds of collisions; bike lanes will go the way of the dodo, and car traffic will no longer have to take right turns across a lane of (bike) traffic.

Having a traffic pattern that requires a right turn (or left for certain countries) across a different lane of traffic is shitty and when a better solution shows up it will probably be implemented.


What you describe sounds like an irrational fear of machines, or perhaps fear of change?

Let's take airbags as an example. Do you understand why airbags are allowed, or do you also find it obvious that airbags shouldn't be certified as safe to use? Airbags provide increased passenger safety on a statistical basis, as they mostly provide a cushion for impact. However there are plenty of edge cases where if you sit too close to an airbag, or let your small child sit next to an airbag, then that airbag will actually cause significant damage that wouldn't happen without the airbag.

Autonomous driving can be viewed from the same safety perspective as airbags. Sure there are new edge cases that can cause harm, but on a statistical basis if the AI is even just slightly safer than a human, then at the end of the day there will be fewer accidents and fewer deaths.

--

Your airplane example is a straw man argument. Nobody has claimed that we should stop advancing AI driving tech once we slightly surpass human skills. That's just the point where it makes sense to allow it on public roads.


It's not about the number of edge cases, it's about the overall result. Airbags reduce variance in outcome. They hit a lot of people, who would otherwise be unharmed, in the face with a small explosion in order to save a few lives in serious accidents.

The worst case is better and the average case is better because you've eliminated some outliers. However, the median is far worse. The group is effectively being punished to reduce the effect of the worst-performing members.

An AI that drives into a partial lane obstruction is only better than no AI for a very small number of cases.


My argument with your radiotherapy machine is that we're currently using the 'model that on average saves lives', and therefore anything we can replace it with that 'on average saves slightly _more_ lives' is an improvement.


Once in a while the hot water tap will blow scalding steam on your hands, but the average temperature is nice.

(Do you feel lucky?)


My (limited) understanding is that a Level-3 system will sound an alarm, and tell you take control, if it sees something unexpected, such as a truck stopped in the middle of the highway ahead. Whereas the Tesla expects you, rather than the system, to monitor roadway conditions ahead and decide whether to take control.


You are comparing apples to oranges. Tesla has traffic aware adaptive cruise control and active lane keeping. Audi has those too, but this article is about their new traffic jam assist that allows the driver to completely hand over the responsibility for driving and monitoring the environment to the car. Which is not a function Tesla has. At all.


Listening to Tesla's PR you would think it's Level 3 but in reality it is not.


I think their point is the Tesla tech requires you to pay attention at least in theory while theirs does not.


In all seriousness, on the highway, most of the time the requirement is more of a legal thing than something actually needed. Yes, my Model S asks me to touch the wheel every ~5 minutes, but it doesn't really need me to. Off highways it's a different story, but highway autopilot on a Tesla, when markings are clear, is pretty spot on these days.


I'm glad there are people like you to test the system and all... but trusting your life to these early driving AIs seems like Darwin Award territory.


He's trusting the lives of the drivers and passengers of all the other cars on the roads he drives on with his system test as well.


"Pretty Spot on" is not good enough when your and other users lives are at stake.


Okay, let me rephrase a little... I've had a Tesla since day 1 of Autopilot (so the gen 1 hardware). Back when it first came out it was good but not great. They've made a ton of tweaks since then (almost two years ago now), including the gen 2 hardware with a full suite of cameras, which will eventually bypass my gen 1 system. Anyway, when it first came out a lot of people were posting videos doing stupid things like riding in the back, when the system wasn't mature enough yet to really support that.

So now, almost two years later, if you're on a highway with clear markings and good weather, though the car is programmed to nag you every ~5 minutes no matter what to basically just nudge the wheel, the car is better than a human at driving. Theoretically, if Audi does get ready to release this, Tesla could simply remove the nags and have a much better system for the use case Audi is describing (Tesla supports up to 90 mph). Again, for highway driving the nags are basically there for Tesla to cover their you-know-what. Off highway you can use it, but it's more of a gimmick still, as you have to very closely babysit it on most roads.

Hope that makes things more clear.

[Quick edit: One other thing the nags are good for is that, in case there is a scenario they haven't accounted for, they can say that they told the driver to put their hands on the wheel and the driver ignored the nag. Tesla has two years of an entire fleet of cars using and proving the system. Audi will theoretically be releasing this without that in place and telling drivers no hands are needed right away. I'd be skeptical of that vs what Tesla has already been proving.]


There's no question that Teslas can drive themselves under favorable conditions. But did you see the videos above where a Tesla crashes into a highway barrier, and the one where it crashes into a stationary maintenance vehicle?

You can only take your eyes off the road when you can be sure that the car can detect unexpected circumstances. Teslas are apparently bad at detecting stationary obstacles, so you can't take your eyes off the road (and calling the system "Autopilot" is really not a good idea).


So far those videos are all with the v1 autopilot hardware (I believe). It will be interesting to see as the new hardware rolls out, and its software gets better, how much better it is at avoiding those obstacles.


I thought Autopilot 2 was still not on feature parity with Autopilot 1, after Tesla switched off from MobilEye?


37mph? Sounds like they're way behind what I've seen a Model S do.


Beyond the up-to-37-mph "self-driving", I'm very sure the car also supports all the already available driver-assistance features (adaptive cruise control, lane assist, etc.) up to much higher speeds. Compared to the self-driving feature, the lane assist will bug you once you take your hands off the wheel for more than 10-30 seconds, and those features require the driver to always be able to immediately take over control, because the manufacturer doesn't deem them safe for real self-driving without human intervention. This is more comparable to what Tesla currently offers, from my understanding, even though Tesla markets this more aggressively ("Autopilot").


This article is about a traffic jam assist that is robust enough that the driver doesn't need to pay attention at all. The Tesla S can't do this. Audi have lane keep and ACC as well, just like Tesla, but that's not what we are discussing here.


Below 37MPH, I actually trust Tesla's AP (v1) a ton. At slow speeds, it doesn't nag you about putting your hands on the wheel very often either. I still pay attention (no phone reading, etc), but I've never had to take over control at low speeds.


Yes, if we ignore the fatal accidents.


The fatal accidents where the driver didn't respond to the warnings his car gave him?


No, he's probably talking about the fatal accident where the car didn't give any warnings at all. When Joshua Brown died his car was in autopilot and it didn't brake and it didn't give any warnings at all before it hit the semi in front of it.


There should be no role for a "driver" if the car is considered in self-driving mode.

The "driver" would then be just a person sitting in the driver seat that could be snoring or playing games on their mobile phone, and generally be distracted out of their mind...


It's also behind Mercedes-Benz, even on way cheaper cars; their system works under the same conditions and up to 130 km/h.


Should people still learn to drive today?


I have never learned to drive and I don't plan to learn it anytime soon, unless I move to somewhere with abysmal public transport for some reason.

Of course it all depends on your personal mobility habits and where you live, but in many big cities, you can absolutely get by without ever sitting in a car.


If you're in the US, you're in luck! Thanks to sprawl, most of the US has abysmal public transit.


I have never been to the US, actually. Grew up in Berlin and living in Shanghai right now.


Driving is fun. You should give it a go. The sense of freedom is amazing.


Yeah, my car doesn't charge me surge pricing. I never have to stand in the rain waiting for it to show up. I can set the HVAC controls the way I want. The mechanic doesn't take my money up front and bleed it away for boondoggle politically motivated projects then ask me to pay more to do the work I already paid him for.

For day to day commuting public transit can be less stressful but for going where you want when you want it's hard to beat a car.


This is a perspective from a very specific environment. If you live in a dense city, chances are you won't get to keep your car near your house or the other places you're going, so you'll have to walk in the rain to and from it. Plus, European gas prices feel like surge pricing every day :|


I live and work in a city in the Northeast US. The number of easily accessible parking spaces to me is less than the number of vehicles and people who need to use them in my household. It's no different than college where parking is a "long enough to be annoying in the winter" walk away.

I generally walk anywhere under two miles and avoid driving between 7am and 7pm. Having a means of transportation under my personal control readily available is well worth it. Tonight I'm going to meet someone selling something on Craigslist. It's about the size of a milk crate and weighs >50lb. I could carry it on the subway for free (monthly pass), but I'd rather not.

Next week I have to get a coffee table. Sure, I could buy one online, but I think buying a used high-quality one for cheap/free on CL or from a thrift store, and having the ability to get precise measurements in advance, is the better option.

I could take an Uber or taxi but it only takes a few Ubers a week to be more expensive than owning a car (if you own it outright).

If your life involves doing anything more than being a worker bee who lives by a schedule and pays someone else for assistance with transportation outside that scope, then having your own means of transportation is invaluable in terms of convenience, and possibly cheaper.


I agree about owning one's own means of transportation, that's why I have a bike :)


I did try it, and there's just so much you have to think about; it's horrible for getting somewhere.

If I wanna get somewhere, public transit is a lot nicer, because I can sleep on the way, or use my phone.

And if I just wanna enjoy getting around in nature, I take my bike.


> it's just so much you have to think about

Like everything else (including riding a bike), this is true of the beginner, and relegated to your subconscious after a few dozen hours. It all just becomes automatic.

I drove past half a dozen stop signs, hundreds of cars, several traffic lights, four different speed zones, and made a cross-traffic turn, and was thinking about testing code this morning.

You can pay attention without having to actively think about what you're doing; it just requires making it a habit.


Once you get past that point, though, you're faced with your personality type.

In my 20s I loved driving. I had my radio, complete privacy, and my 1.5-hour commute was tolerable.

Now I'm in my 30s, and I hate the hour I spend in the car every day. Not because I can't find a good song or a good podcast; it's just that it feels like such an enormous waste of time. At least if I used public transit I'd be getting exercise. In my car, I'm forced to just sit and pay attention to the road; I can't read HackerNews, watch a show, or read a book that isn't available in audio.


Which goes away once you're locked into traffic, and unfortunately driving a motorbike is not always practical.


People will have to drive for the foreseeable future (think a few decades).

These are just better traction control systems, with some accident detection and avoidance thrown in. Nothing much.

You will still do bulk of the work.


Bet you $100 that, in 2035, >10% of the cars on the road are level-4 autonomous or higher, and >90% of new cars sold are level-4 autonomous or higher.


In the country where I stay (India), this doesn't look possible. It might be possible in the US, though.

But you don't exactly have a self-driving car if it can only drive itself in specific conditions.


I think this is something that gets brushed off too often in these conversations. To US posters (in many forums I've seen), it seems very likely that autonomous cars will take over within 20 or so years. There's already a big change happening there in consumer driving habits, and I guess that because it's where a lot of the technology is being developed, the roll-out is a lot more obvious.

Here in places like Australia, I don't foresee it happening on the same scale in the same timeframe. There are different factors here, such as the immense distances, inconsistent road surfaces, and those bloody kangaroos, which are going to pose real challenges for automakers, as they already have.

I don't think it's going to be that long before learning to drive is purely an optional thing in the same way learning to ride a horse is, but I think it'll be a fair bit longer than people are mentioning here. At least globally.


There is a large portion of land in the US (area, not population) that is in the same boat as you - minus the kangaroos.

Challenges in my area and others:

Cell reception is not just poor, it often doesn't exist.

GPS on my phone and car drops now and then. Maybe too many trees?

Roads are of varying condition and can change anytime. Some you know not to go on in certain weather, etc.

Road maps are still wrong. I've fixed the ones I care about on Google. Some roads don't exist, and others have addressing issues so bad they'll put you at the wrong end of the county.

I'd enjoy a self-driving car, but they are a long way off in my area and many others. I'm sure it will happen, but it's a whole different world out here than in a city.

I don't know that making it work in the middle of nowhere is worse than a city, but it is a different set of problems and manufacturers will of course target the cities first. So to you in the outback (not the Subaru), you aren't alone. Most discussion about what life is like in the US doesn't apply to me either.


The US actually has a considerable competitive advantage in that its roads (especially in cities) are so much easier to navigate than Europe's.

"Autonomous" vehicles are SO far away from navigating European cities. Picked a few random locations (all two-way roads by the way, often with pedestrian and bicycle traffic). It's probably 30 years out when a car can drive these:

* Lausanne, Switzerland: https://www.google.no/maps/@46.4767732,6.8289361,3a,75y,254....

* Polignano, Italy https://www.google.no/maps/@40.9957993,17.22086,3a,75y,347.9... (driven this one myself, and remember thinking that it would be close to impossible for an autonomous vehicle)

* Paris, France https://www.google.no/maps/@48.8660537,2.4074756,3a,75y,48.6... (one way)

* Zagreb, Croatia https://www.google.no/maps/@45.8222709,15.9280256,3a,75y,320...


To be fair, Audis don't have the best reputation among (UK) cyclists for navigating considerately even when there's a human behind the wheel.


HN is an SV echo chamber.

It will be a very long time before self-driving cars are practical in parts of the US that don't have comparatively easy conditions, for all the reasons you mentioned. Some cities in the Southeast with good automotive infrastructure will see adoption soon after the Southwest. Self-driving cars won't be typical in Boston or NYC for a while longer. Those sorts of cities don't have the stupid-proof traffic-flow markings that cities built out post-WWII do, so day-to-day driving there involves a lot more subtle things that are hard problems by themselves: odd intersections, potholes, situations where typical behavior depends on traffic volume, snow, etc. Rural areas add another set of unique edge cases.

Nobody is gonna pay extra for a car that can't deal with conditions they encounter frequently. Have any Indian companies poured big money into driverless cars lately?


Kangaroos are really no different than deer in this context.


They're actually proving to be pretty difficult, and different from other fauna obstacles:

https://www.gizmodo.com.au/2017/06/volvos-driverless-cars-ca...


Different, yes, but they only started looking into them in 2015. To claim that they would be a blocking factor for getting L3 cars operational in Oz is a stretch.


In my parts even the deer aren't the same. White-tailed and mule deer don't behave alike, and I know I react differently to each. I can see why a kangaroo might cause issues too.


Until the AI is good enough to drive without a working GPS and internet connection, "self-driving" is a bit of a misnomer.


I would guess one decade at most in many countries. The last person who will ever need a driving license for private transportation has already been born, imo.*

*in countries with reasonably advanced infrastructure and somewhat stable economies


I come from a first-world country where the average car is 12 years old. Unless you can buy a Level 4 car today (tip: you can't), 90% saturation in a decade is just impossible. I think 40-50 years is the likely timeframe.


I'll believe that when I see it. Try using autonomous driving in the snow and get back to me. Nothing they've presented or have even talked about being on the roadmap solves the issue.


Autonomous ride services will come to sunny cities first, but the amount of money to be made in transportation guarantees that snowy cities like NYC and Boston won't have to wait many more years after.

https://www.google.com/amp/s/www.wired.com/2016/01/the-cleve...

https://www.google.com/amp/www.cbc.ca/amp/1.4013366


Ford [1] has been testing automated driving in snow. Waymo and Nvidia have said they are too, but don't have videos out.

[1] https://www.youtube.com/watch?v=6GdV7em34Ro


So, 5mph on roads that already have tracks for the car to follow and barriers for it to see on either side. Curbs that aren't buried. Street signs and lights lining the road.

I live 5 miles outside the nearest town and frequently have to drive long before a plow hits the road and long before it has warmed up enough for blacktop to be visible. That video doesn't prove they can handle ACTUAL snowy conditions in anything but the most ideal situation, and at a speed that would mean spending 4 hours getting to work.

I mean... it's better than nothing, but it definitely doesn't change my mind: it's still ridiculous to think this is replacing a human driver anytime soon.


It heartens me to see this. Still very early in the process though - that's pretty close to best-case snow (it is awfully wet, which can be a pain), perfectly clean sensors, great visibility, and moving at about 5 mph (as a point of comparison, Montana drivers would be going at or near the speed limit in similar conditions - >45 for the gentle curves, 25 leading into the right-angle turns).

I also saw the car starting to break free (i.e., start a sideways slide) just as they cut to another scene; that's what I want to see more of. It's unavoidable in winter driving conditions, and proper recovery from a slide is the most important skill for driving on snow.


> perfectly clean sensors

I saw a thin layer of ice appear on the parking sensors after a few minutes of driving in snowy conditions, triggering "close object" warnings. And cars get dirty all the time; I wonder if cars will end up with a separate wiper for each sensor.


Google has a wiper on one of their car-top sensors. Another trick is to alternately spray wiper fluid and air to clean sensors. We had that on our DARPA Grand Challenge vehicle, and the upcoming Continental LIDAR seems to have a spray nozzle. The SICK LMS is available with a scanner glass with heating elements, so ice can be cleared. All those problems have been solved by good mechanical engineering.


> All those problems have been solved by good mechanical engineering.

And yet we're still limited to scraping blades made from rubber over tempered glass with the help of a bit of liquid cleaner...

Given how much manual scrubbing it takes to clear bug guts off a windshield, plus the cumulative damage from rock impacts, I'm not convinced they'll stay clear for any meaningful amount of time - at least not on a commuter-vehicle timescale.

Heated glass will help, but it's not a guaranteed thing; ask anyone with a rear window defroster how long it takes to clear ice and snow off.


My adult daughter still can't drive.


> first production car to reach Level 3 autonomy

> when it goes on sale next year

> The traffic jam pilot only works on highways with a physical barrier separating oncoming traffic

>Audi says it is rolling out this feature “gradually,”

I applaud the development and effort, though I have had enough of people claiming to be first at things that are not yet possible to buy and use.


Tesla fanboy here. It's a little silly to say "Our car has Level 3 autonomy, but only when it comes to market sometime next year." By then Tesla could be at Level 3 autonomy, as all that's required is for Elon to push some software updates and voila, the car is smarter than ever. Being capable of something in theory doesn't make it a practical claim, imo.


Teslas do not have the required hardware to reach Level 3 autonomy unless they've been secretly including LIDAR in all of their vehicles.


That remains to be seen; Google has demonstrated that Level 3 can be achieved with lidar and cameras, but no one has proved that it cannot be achieved with the non-lidar sensor suite that Tesla uses.


Exactly; without lidar, no one knows whether the software will take 1 year to create or 10.


There are no specific hardware requirements for any of the levels of autonomy, that I'm aware of.


Humans can drive perfectly fine with just two cameras and mirrors; I'm sure it's possible with Tesla's standard 8 cameras and neural net software.


Humans can drive perfectly well with monocular vision. Plenty of people either only have one eye, or don't integrate 3D for some reason.

The variance in human driving ability is higher than we would accept from a machine, but I have two friends who don't have stereoscopic vision, and I trust both of them to drive me.


Human eyes are a lot more advanced than whatever cameras Tesla is mounting in their vehicles (much better dynamic range, higher resolution, better low-light sensitivity, extremely precise motion detection for judging relative speeds, extremely fast and precise targeting, etc.).


Tesla's in-house Autopilot 2 has a lot of catching up to do with Autopilot 1, which was developed by MobilEye.

I like their startup-ish approach, which enables them to push software updates to all their cars and collect valuable data, but I feel most people fundamentally do not understand that their cars offer at best Level 2 autonomy.

On a side note, if and when LIDAR becomes cheap enough, Tesla may change their mind and look at integrating it in their cars.


Deliveries in Europe start in 3 months, which means that the software was finished many months ago.


Got a ton of flak for this post. Probably should've avoided the "Tesla fanboy" part. Lesson learned.

But look, if you reference other articles, Audi can only do self-driving "in slow-moving traffic on divided highways at speeds of 37.3 mph and under only" (TechCrunch).

Tesla can already do that. It simply requires the driver to keep their hands on the steering wheel, because it also allows all the other options, and Tesla doesn't want you to let go of the wheel and run over a human.

Note that Audi is not saying their car will do self-driving in the city, with pedestrians throwing themselves onto the road. NO, it will ONLY do it on highways, in slow traffic. So Audi can only do self-driving when the car is barely in motion and the possibility of interference is close to zero.

That's like calling yourself a University Professor because you wear a checkered jacket with elbow patches.


You should check out the difference between Level 2 and Level 3 -- they're different.


> for Elon to push some software updates

Not everything regarding Tesla is about Elon.


Do you imagine Tesla will push out level 3 autonomy without Elon being involved in that decision?


I did not say that, but the comment is a good display of the personal cult following behind Elon Musk jumping the shark (for lack of a better term).

Sentences such as

> all that's required to achieve that is for Elon to push some software updates and voila

make the task seem (A) trivial and (B) like the decision is completely up to Elon. This hugely discounts the fact that if and when Tesla rolls out Level 3 autonomy, it will be the result of a huge amount of work put in by a big team inside the company.

You also wouldn't hear "all that's required to achieve that is for Rupert (Stadler) to push some software updates and voila" about Audi, when both were probably equally responsible for putting the issue on the roadmap and equally not involved in the implementation.


Tesla still needs to develop the software updates (if it's even possible with their hardware), while Audi already has the software.



