From the regulations: "Fall back strategies should take into account that—despite laws and regulations to the contrary—human drivers may be inattentive, under the influence of alcohol or other substances, drowsy, or physically impaired in some other manner."
NHTSA, which, after all, studies crashes, is being very realistic.
Here's the "we're looking at you, Tesla" moment:
"Guidance for Lower Levels of Automated Vehicle Systems"
"Furthermore, manufacturers and other entities should place significant emphasis on
assessing the risk of driver complacency and misuse of Level 2 systems, and develop
effective countermeasures to assist drivers in properly using the system as the
manufacturer expects. Complacency has been defined as, “... [when an operator] over-
relies on and excessively trusts the automation, and subsequently fails to exercise his or
her vigilance and/or supervisory duties” (Parasuraman, 1997). SAE Level 2 systems differ
from HAV systems in that the driver is expected to remain continuously involved in the
driving task, primarily to monitor appropriate operation of the system and to take over
immediate control when necessary, with or without warning from the system. However,
like HAV systems, SAE Level 2 systems perform sustained longitudinal and lateral
control simultaneously within their intended design domain. Manufacturers and other
entities should assume that the technical distinction between the levels of automation
(e.g., between Level 2 and Level 3) may not be clear to all users or to the general public.
And, systems’ expectations of drivers and those drivers’ actual understanding of the
critical importance of their “supervisory” role may be materially different."
There's more clarity here on levels of automation. For NHTSA Level 1 (typically auto-brake only) and 2 (auto-brake and lane keeping) vehicles, the driver is responsible, and the vehicle manufacturer is responsible for keeping the driver actively involved. For NHTSA Level 3 (Google's current state), 4 (auto driving under almost all conditions) and 5 (no manual controls at all), the vehicle manufacturer is responsible and the driver is not required to pay constant attention. NHTSA is making a big distinction between 1-2 and 3-5.
This is a major policy decision. Automatic driving will not be reached incrementally. Either the vehicle enforces hands-on-wheel and paying attention, or the automation has to be good enough that the driver doesn't have to pay attention at all. There's a bright line now between manual and automatic. NHTSA gets it.
I don't understand this anti-autonomy cheerleading. It's like people on HN live in a parallel universe where there have been a bunch of deaths from cars running Autopilot, whereas in the world I live in, it seems to be somewhat safer than a human alone. Like, people can mess up either way, but they seem to be less likely to do so when the car is also looking out for them. What am I missing?
You have to compare the one death using autopilot to one death of people driving Teslas without autopilot. Musk tried to compare it against the universe of drivers (Teslas, kids driving crappy cars, etc), which was a complete false comparison.
So the reason it was a big deal is that it was a fatality. Tesla drivers are generally a pretty safe bunch. Statistically, if Autopilot hadn't been engaged, that death would not have occurred. Autopilot makes Tesla drivers less safe, not more safe.
Also, the government is doing self driving industry a huge favor. These fatalities could screw over the whole industry if they get out of hand. Musk is giving self driving a bad name.
The dash cam video was only released last week, in conjunction with a lawsuit. Now it's on all the mainstream news outlets, from the Wall Street Journal to the New York Times to Fox News.
This is yet another "Tesla hit slow/stopped vehicle on left of expressway" accident. There are now three of those known, two with video, one fatal. Watch the video. The vehicle is tracking the lane very accurately. Either the driver is very attentive or the lane following system has control. Then, with no slowdown whatsoever, the vehicle plows into a stopped or slow-moving street sweeper.
Here's one of the other crashes in that situation.[1] This was slower, so it wasn't lethal. There's another one where a Tesla on autopilot sideswiped a vehicle stopped at the left side of an expressway.
So don't brake suddenly unless it's an emergency. In other cases, just slow down to a full stop and alert the human driver in the process, giving them more time to react.
IMHO, emergency braking should be mandatory for every new car with a top speed greater than 60 km/h.
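The graded strategy described above (brake hard only when there is no time, otherwise stop gently while alerting the driver) can be sketched in a few lines. This is purely illustrative; the threshold values and the returned tuple are assumptions, not any real vehicle's logic.

```python
def fallback_braking(time_to_collision_s, emergency_ttc_s=1.5):
    """Return (deceleration in m/s^2, alert_driver flag) for a given
    time-to-collision. Thresholds here are hypothetical.

    Below the emergency threshold there is no time to hand control
    back, so brake hard; otherwise stop gently and warn the driver.
    """
    if time_to_collision_s < emergency_ttc_s:
        return 9.0, False   # full emergency braking, no handover
    return 2.5, True        # gentle stop while alerting the driver
```

The point of the split is that the alarming, hard-braking path is reserved for genuine emergencies, so the system isn't constantly startling the driver or traffic behind it.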
Last I heard, nobody has been able to verify whether the car was actually in Autopilot mode. However, if the police report that no attempt to stop was made is accurate, then the emergency braking also clearly failed.
It's the offence that an engineer feels about something being marketed as something it's not.
Tesla is fooling the public. The opinion of the general public who don't drive Tesla's cars is that automated driving is already here and Tesla is leading the way.
In your reply to a well-argued post, you offer no similarly well-argued refutation of its points, just an emotional appeal based on generalities (I say 'emotional' because you take an explicit 'it is us against them, you are either for us or against us' position.) What you are missing is that opposition to simplistic arguments and false dichotomies where safety is an issue is not opposition to autonomy. What you are missing is that one can look forward to autonomy while advocating reasonable caution.
Autonomy will be great. We aren't there yet. Tesla is/was deceptively marketing their capabilities so they can risk their customers' safety with the opposite of informed consent (misled consent? [0]). They are doing it in order to collect data that will get them to real autonomy first. That's fucked up and greedy. The other comment demonstrated that it is in fact risking their customers' lives. It's safer than some random human alone, but not compared to a comparable human alone.
[0] You can say they tell you to keep your hands on the wheel and all that, but they themselves manufactured/fanned a ton of hype to the contrary. It's like arguing that you should have paid more attention to the EULA.
I'm not sure they understand just how many intermediate steps requiring non-obvious technical progress are going to be required between 2, 3, 4, and 5. On a premapped track, 3 is just fine at present, but Google is nowhere near 3 for the sort of adverse conditions and unmapped road alterations that are common in much of the road network. This is going to be an iterative design process if it's moving forward at all.
A likely future is one where automation is only enabled for consumers as an option on a minority of roads (starting with the Interstate Highway System) that have been heavily mapped and managed, and we work from there, developing the algorithms at high sample size, then slowly extending out into the state highways and arterials. The roads and maintenance actions will likely also, as the tech progresses, have some modifications made to increase reliability.
These cars are going to need a large number of sensors; the Uber self-driving car has "something like 20 cameras, a 360 degree radar, and a bunch of laser [rangefinders]", and that is a decent start. A Tesla, and even a Google car, is simply not equipped for enough edge cases to let a consumer near one without making them hands-on-wheel liable to take over.
99.99...9% of my driving takes place on heavily mapped well-managed roads (occasional pothole notwithstanding) that are heavily trafficked by other cars. As far as I'm concerned, if Google can do this, their vehicle is fully autonomous.
Only like three towns are "heavily mapped" to the level required for a Google self-driving car to work. And the problem is that if the car relies on the map, the first car to see a new road, or a change in the road, won't know what to do.
"As far as you're concerned" doesn't mean a whole lot.
As for the Tesla comment, what more could you expect from them right now? If you have Autopilot on, your hands have to be on the wheel or it vibrates, and then eventually it starts slowing down since it knows it's not safe.
People can work around any system like this, but features like these force them to do it consciously. Seems pretty reasonable on Tesla's part.
You could expect them to not call it Autopilot, and instead call it lane keeping, which is what it really is. The name does a really good job of setting a terrible expectation, almost to the point of undoing any attention-grabbing mechanism meant to keep users engaged.
The NHTSA is hinting in that direction, indicating that manufacturers must clearly distinguish between driver assistance systems (levels 1 and 2) and real self driving (levels 3 - 5). There will probably be some standard on this. There has to be; consider what happens when car rental fleets start having some of these features.
There was an accident with a Volvo where someone was showing off the pedestrian safety system, and hit a pedestrian. It turned out they hadn't purchased the pedestrian safety system.[1] Something is needed to prevent problems like that.
Then there's mode switching trouble. Classic problem with aircraft control systems. Tesla disengages the "autopilot" if the driver touches the brake. The trouble is that this also disables automatic braking, as the driver is assumed to now be in control. So tapping the brake without applying it fully in a hazardous situation causes a crash.
All these driver behavior problems with shared control authority are hard. Maybe harder than going for level 3 and letting the automation do it.
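The brake-tap mode-confusion hazard mentioned above can be made concrete with a toy state machine. This models the claim in the comment (that touching the brake drops both the lane-keeping and the automatic braking), not Tesla's actual control logic; the class and its behavior are assumptions for illustration.

```python
class ControlState:
    """Toy model of the mode-switch hazard: once the driver touches
    the brake, the system assumes the driver is in control and, in
    this model, stops braking automatically as well."""

    def __init__(self):
        self.autopilot = True
        self.aeb = True  # automatic emergency braking

    def driver_taps_brake(self):
        # Handing control back to the driver also drops AEB here,
        # which is exactly the hazard being described.
        self.autopilot = False
        self.aeb = False

    def will_auto_brake(self, obstacle_ahead):
        return self.aeb and obstacle_ahead
```

A light tap without full brake application leaves no system braking at all, which is why shared control authority at Level 2 is such a messy design space.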
It reminds me a little of the way in which airbags are always described as "SRS" (for "Supplementary Restraint System"), to try and make it clear that they're supposed to be used with seatbelts, not instead of them.
I think the name isn't to blame; it's the misinterpretation of it (and possibly Tesla overselling it).
"Autopilots do not replace a human operator, but assist them in controlling the vehicle, allowing them to focus on broader aspects of operation, such as monitoring the trajectory, weather and systems."
I've used an autopilot on a yacht and didn't expect it to dock or avoid ships. A plane autopilot doesn't freak out when the pilot takes their hands off the controls. So there seems to be room to allow the name but tighten how it's used.
> You could expect them to not call it autopilot, and instead call it lane keeping, like it really is.
That is all that most autopilots in planes do, however. I don't get how "autopilot" somehow came to mean full autonomy in cars but not in planes (and other vehicles like boats) where the term was used previously.
Yes, but with planes and ships (on ships it's almost always called track pilot and speed pilot, by the way) you can let go of the controls for extended periods of time. In cars, right now, that's a recipe for disaster. Functionally they may be the same, but practically speaking it's very different.
Autopilots in planes can take you across the world. The car analogy would be only having to pull into and out of your driveway. And while they don't use it so frequently, aren't most of the planes most people fly in (big commercial aviation) certified to autoland themselves? The connotation of autopilot is absolutely that it can fly itself. Same with tesla's autopilot.
An autopilot in a Cessna can most definitely not take you around the world. Also, autopilots even in a commercial jet keep the plane steady along a route, they don't deal with collisions at all...you could call it "lane keeping" if you want to sue the airline industry for misleading us over a term they created.
Autolanding is not autopilot. In fact, autolanding is not even certified if multiple autopilots are not available to provide redundancy.
Most people who have flown don't even know what a Cessna is, much less its capabilities. It's totally irrelevant. Same with dealing with collisions, which aren't analogous challenges between flying and driving. To a non-engineer, the point is that autopilot gets you from point A to point B, not the details of how it has to achieve that.
Ah, the awkward old dance where two people try to walk through the doorway and end up mirroring each other's movements trying to get out of each other's way...
The aviation world actually solved this problem a long time ago. Everyone turns to their right. FAR 91.113:
> (e) Approaching head-on. When aircraft are approaching each other head-on, or nearly so, each pilot of each aircraft shall alter course to the right
(among other collision-avoidance regs in that section)
The planes talk to each other and run a common protocol that tells the pilots what to do to resolve the conflict. The problem is when one of the pilots doesn't listen to the nice robot voice yelling at him to DESCEND NOW!...
The name is not the problem. Regardless of what you call it, once people find out empirically that their attention is not needed most of the time, even the most well-intentioned minds will wander.
They don't have to implement SAE Level 2, and if their implementation isn't safe, doing the best they can might not be good enough.
I think there may be a fundamental flaw with lane keeping. It removes the driver from doing anything, but still requires constant vigilance. That might be asking too much. My ADD is too strong to watch the road without having to do any part of the driving. I suspect a lot of people are the same way.
If most drivers are just keeping a hand on the wheel while daydreaming, Tesla should be forced to disable the feature until the tech is ready for Level 3, or use the Level 2 tech as a backup only.
Heck, I refuse to use cruise control at all because it makes me bored. My personal solution to avoid boredom while driving is to drive faster. Clearly I am not a safe driver, but I'm going to need full automation to help me out.
There's a good chance that's what happens. Auto-braking (level 1) is likely to become standard, like anti-skid braking. Full automatic driving (levels 3-5) will be options. Semi-automatic steering (level 2) may disappear as the higher levels start to work. The shared responsibility between driver and control system is too messy at level 2.
Not totally accurate. You actually get two warnings before autopilot disengages. You have to put a hand (not hands) on the wheel at certain intervals, but you certainly can be hands-off a majority of the time.
They'd never do it but I imagine a system that was loud, flashy, and embarrassing, that activated when autopilot is misused (i.e. auto-pilot is demanding user intervention and intervention is not given) would be most effective in incentivizing drivers to change their behavior.
I.e. a loud beeping noise that annoyed pedestrians and other drivers until you took the wheel. Kind of like how accidentally triggering your car alarm in the parking lot will lead to a very hasty correction on your part.
The car already cuts the stereo for a moment and beeps at you. If you ignore the first warning, it mutes the stereo and beeps until you resolve the situation.
It sounds like they are suggesting something more like 'cuts the stereo and begins announcing on internal and external speakers that the driver is not paying attention and the car will slow to a stop in 5,4,3,2,...'.
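The escalation ladder being described (mute, beep, then announce and slow to a stop) is essentially a mapping from time-without-driver-input to an increasingly intrusive response. A minimal sketch, with stage boundaries that are invented for illustration rather than taken from any real system:

```python
def alert_stage(seconds_hands_off):
    """Map time without driver input to an escalating response.
    The stage thresholds here are hypothetical."""
    if seconds_hands_off < 10:
        return "none"
    if seconds_hands_off < 20:
        return "visual warning + mute stereo"
    if seconds_hands_off < 30:
        return "continuous beeping"
    return "announce on speakers and slow to a stop"
```

The design question is where the final, publicly embarrassing stage sits: too early and drivers disable the feature, too late and it never deters misuse.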
They would be better off making SAE 1-2 features 'game' like.
You get positive points for avoiding situations that are noticed (but not within the audible warning threshold) or correctly reacting to input (warned but not yet in automated 'fail safe').
Now if only they could automatically make cars exiting a rolling slowdown on the freeway actually get back up to the indicated speed of travel in an expedient manner.
"Automatic driving will not be reached incrementally."
Not true. There are other ways of doing this incrementally: for example, slow speeds, closed roads (no pedestrians or other cars), only in good weather, ideal conditions, etc.
What he's saying is that it's not good enough to be able to control the car 90% of the time. Either it needs to be robust enough to operate safely without human intervention 100% of the time or it needs to somehow enforce that the driver is alert and capable of taking over 100% of the time.
We can't have an autonomous car that expects a driver to take over in a dangerous situation if that driver hasn't had to maintain control the entire time. For instance, there are youtube videos of drivers moving to the passenger seat in a Tesla with autopilot on.
They are not undesirable at all. There are plenty of circumstances where these would be of great use: airports, for example, downtown cores, smart cities developed from scratch, resorts. The list goes on.
The idea that Google is anywhere near level 3 is merely some incredibly good marketing, and a fat pile of deception. Google has an impressive tech demo, not a product nor a reliable technology.
Nobody is above level 2.
Google's self-driving system basically only works with the route preplanned and premapped ahead of time, specifically for that car. Even small changes in the environment are potentially devastating, and it isn't prepared to handle even mundane weather changes.
It should be well understood that if the only people who can safely handle the vehicle are professional test drivers on a preplanned route, the car isn't ready to say it's at the level it claims.
"These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway."
"But the maps necessary for the Google car are an order of magnitude more complicated. In fact, when I first wrote about the car for MIT Technology Review, Google admitted to me that the process it currently uses to make the maps are too inefficient to work in the country as a whole."
"To create them, a dedicated vehicle outfitted with a bank of sensors first makes repeated passes scanning the roadway to be mapped. The data is then downloaded, with every square foot of the landscape pored over by both humans and computers to make sure that all-important real-world objects have been captured. This complete map gets loaded into the car's memory before a journey"
"The company frequently says that its car has driven more than 700,000 miles safely, but those are the same few thousand mapped miles, driven over and over again."
"Chris Urmson, director of the Google car team, told me that if the car came across a traffic signal not on its map, it could potentially run a red light, simply because it wouldn't know to look for the signal."
Google's entire business advantage is based on the cloud. I welcome anyone to prove that this has changed.
Nobody's saying their cool little 3D renders aren't awesome looking, but that doesn't really mean much. Google has PR down to an art form, but if you asked Google to drive one of their cars to Chicago, they couldn't do it, no matter what the weather. They only work in a small nearly closed course environment.
> if you asked Google to drive one of their cars to Chicago, they couldn't do it
Nobody here is making that claim.
If Google operated a taxi service within Austin, we would say the car operates at level 3. SAE levels say nothing about where the car is operating:
"At SAE Level 3, an automated system can both actually conduct some parts of the driving task and monitor the driving environment in some instances, but the human driver must be ready to take back control when the automated system requests"
It wasn't mentioned in the Bloomberg article, but the 15 areas covered are:
• Data Recording and Sharing
• Privacy
• System Safety
• Vehicle Cybersecurity
• Human Machine Interface
• Crashworthiness
• Consumer Education and Training
• Registration and Certification
• Post-Crash Behavior
• Federal, State and Local Laws
• Ethical Considerations
• Operational Design Domain (operating in rain, etc)
• Object and Event Detection and Response
• Fall Back (Minimal Risk Condition)
• Validation Methods
Not sure if they're specifically ordered, but it seems positive that Data Recording and Privacy are up at the top.
Can you think of a concrete example where the government got it wrong?[1] For the sake of argument, the federal government in the last 50 years? Maybe the encryption export ban. Or the CDA, but that was quickly reversed and the part that's left (Section 230) was really instrumental in the rise of the modern web.
[1] And I don't mean wrong as in "NSA spying" because you disagree with the policy. I mean like, "regulations mandated everyone use Betamax tapes and LaserDiscs even though they quickly became obsolete."
ITAR comes to mind. It basically bans export of dual-use (mil/civ) technologies. It has caused incalculable harm to our aerospace industry and other industries that produce things classed as dual-use (encryption used to be one of them). This is the same law that banned encryption export.
You have companies in Aviation Week (a big aerospace industry mag/site) running full-page ads for sensors and other aerospace items, proudly claiming they're ITAR-free (meaning not made/designed in the US). A company I worked for bought a high-power (2.5 kW) laser from Germany. It failed and could not be sent back to Germany for repair due to ITAR (the tooling needed to fix it cannot be easily moved and would probably fall under ITAR). High-end CNC machine tools will brick themselves if they are moved without the manufacturer specifically blessing the move, due to ITAR regulations (earthquakes can trigger the "I've been moved without permission" response).
There is a countless list of other harms it has caused that I have no direct experience with. ITAR is fairly easy for the "bad guys" to get around, because they can simply not buy US goods.
e-cigs. We don't know their exact relative harmfulness/safety, but there is fairly broad consensus that 'vaping' is better for your health than cigarettes.
Clearly more work needs to be done to rigorously investigate and evolve the space, but recent Federal regulations have essentially put small innovators out of business.
In sum: a harm-reducing item is being regulated into the ground.
Some HIPAA regulations that pre-date the rise of shared virtual servers in "the cloud" are quite outdated and cause quite a bit of trouble for no real benefit.
> Some HIPAA regulations that pre-date the rise of shared virtual servers in "the cloud" are quite outdated and cause quite a bit of trouble for no real benefit.
What HIPAA regulations are you talking about? Other than HITECH guidance (which can sort-of be seen as a "HIPAA regulation"), HIPAA regulations don't generally specify technologies at all, and I can't think of any that I would describe as outdated or troublesome due to the rise of shared virtual servers and "the cloud", whether they predate it or not.
The biggest thing is that we can't run software with unencrypted PHI on physical hardware that is simultaneously running other people's code. In practical terms this means that we have to pay AWS some $ to get dedicated instances and also we can't use ELBs in the standard (easy) way. There are some other things as well.
> In practical terms this means that we have to pay AWS some $ to get dedicated instances
This is a feature, not a bug. It also is neither HITECH nor HIPAA; it is instead AWS's requirement in order to sign your BAA.
> we can't use ELBs in the standard (easy) way
Also neither HITECH nor HIPAA. ELBs are used in a PHI-related scenario identically to any other scenario. Unless you are referring to using it as an SSL terminator, in which case I would say "the standard (easy) way is always wrong".
> The biggest thing is that we can't run software with unencrypted PHI on physical hardware that is simultaneously running other people's code.
There is no, AFAICT, no regulation under HIPAA or related law that requires this. Certain service providers may have determined that they cannot provide guarantees of privacy/security without this technical restriction.
That seems like a fairly reasonable thing given you're talking about encrypted PHI... it's some extra $ for a considerable reduction to overall attack surface when processing the most sensitive type of personal data.
I don't think this meets OP's definition of "wrong".
In practical terms I don't agree that the threat of someone doing all of the following things is worth worrying about (in comparison to many other more likely failures):
1) determine what physical hardware in aws the target is running code on
2) somehow get the aws virtual machine manager to let the attacker run their malicious code on the same hardware
3) somehow pierce the protections of the virtual machine to read memory being used by the target application
4) figure out how the data is stored in memory in order to make sense of anything that was read
> In practical terms I don't agree that the threat of someone doing all of the following things is worth worrying about
In AWS case, this is an AWS rule about when they will sign a HIPAA BAA, even though there is no HIPAA regulation that specifically prohibits the arrangement at issue. AWS clearly thinks it is worth worrying about.
When you run your own public cloud, you can determine what risks are worth accepting potential liability for.
Yes, I agree that Amazon is behaving perfectly rationally given the legal environment. My point is that the legal environment has been designed in a suboptimal way from a technical perspective. Identifying such a situation was rayiner's request.
> Yes, I agree that Amazon is behaving perfectly rationally given the legal environment.
I'm not commenting on Amazon's rationality (I haven't actually evaluated the security concerns that would determine that.)
> My point is that the legal environment has been designed in an un-optimal way from a technical perspective.
And you haven't pointed to anything in the legal environment that is suboptimal from a technical perspective. You haven't even pointed to anything in the legal environment at all.
Amazon (as a business associate under a BAA) has certain responsibilities for putting administrative and technical safeguards in place to prevent breaches, and certain obligations and liabilities in the case of breaches. HIPAA and related laws and regulations do not specify the particular administrative or technical safeguards, though they do specify areas that must be addressed.
Amazon has decided that the particular technical arrangement you prefer is too high of a risk, but you haven't pointed out anything that indicates that this is the result of an outdated regulation that results in poor technical choices rather than technology-neutral regulation and a reasonable evaluation of the security concerns of the particular technical arrangement you would prefer.
People said the same thing about cold boot attacks against encryption keys. Yet today the police and others use that and other NAND attacks regularly.
HIPAA is a very easy compliance standard to meet. If it seems difficult to meet those requirements with your standard tool configurations, you should think about what that means for the integrity of your data.
I know what memory forensics is, and I use Volatility and Second Look and quite a few other things pretty often. I asked specifically about an instance of the cold boot attack, which you hyperbolically claimed is used often, or at all, by the police.
You know what, I don't need a case. Please find me a jurisdiction in which cold boot attacks have passed forensic certification; e.g., a link to the process from a body equivalent to the ASTM https://www.astm.org/Standards/forensic-science-standards.ht... would suffice.
It isn't illegal to email records under HIPAA. But your doctor probably doesn't have a system set up to securely email records (such things do exist), and their practice has probably adopted privacy policies that don't allow emailing for that reason. Doctors aren't generally compliance experts; they are much more likely to know what the policies of their workplace allow than the distinction between what HIPAA allows and what their employer has adopted as policy, based on the particular technology they've decided to adopt, their level of risk tolerance, and other factors.
I think 2bitencryption's point is more along the lines that when the government regulates a field, that regulation quickly becomes unworkable because it's a "quickly-evolving new technology market." For example, if the government had mandated that all computers be Windows 9x-compatible, that might have prevented the rise of the iPhone.
The embryonic stem-cell research ban didn't have anything to do with the underlying science and technology--it was based on a moral objection to the practice of destroying embryos for research. If the government had, for example, mandated the use of some testing methodology that soon became obsolete, that might be more on point.
The VA healthcare debacle. The government was unable to provide timely healthcare, and instead of trying to fix the problem they hid it and covered it up by falsifying records. The new system they built is completely incompetent, with doctors and patients forced to spend days on the phone with people who have no clue what they are doing.
Interesting: "Federal, State and Local Laws". So what happens when a self-driving car violates a law and a police officer pulls it over? Who gets the ticket? If there is no steering wheel, can it even get pulled over?
If I were to guess, once self-driving cars are widespread, cars won't be pulled over any more for driving-related issues (ie, cops won't radar any more). However, they will probably be required to have a kill switch that can pull it over for other reasons (ie, if the cops thought your car was the getaway vehicle from a bank robbery)
This is of course once almost all cars are self-driving so it'll be interesting to see what happens in the midterm.
Cops pull over cars for traffic infractions for three primary reasons. The first is for driving safety, the second is for revenue, and the third is because it leads them to arrests of idiot criminals who can't be bothered to fix their tail lights.
The revenue part is coming under a lot of scrutiny recently, since it's been shown to have very regressive effects. The federal government will make it more expensive than the revenue it generates.
I imagine it would be like radio tag toll lanes now. If you go through and do not have a tag, they mail the vehicle owner the ticket. Most violations would not need stops. I don't think private vehicle ownership will survive self driving cars.
That could actually be an interesting problem. The rule might sound like "in the event of an impact >Xg, stop, shut down, and wait for police/NTSB/etc. to come investigate". But if, say, you hit a deer on a wilderness road in the winter, that behavior could lead to the passengers all dying of exposure.
Do self-driving cars have a button labeled "fuck your rules and DRIVE"?
Probably any self-driving vehicle should have a button you can push that amounts to consenting to "The car will now record that you initiated a manual override. You are now in full control. Anything you do is your responsibility." Insurers will throw a fit if you push this button but it's better than being stuck in a bad situation because your car can't figure out a way out of it.
Likewise, in most discussions of self-driving cars, it is noted that they probably won't work well in the snow. Someone (presumably not from a snowy area) will then say that the car will pull over and wait, as you shouldn't be driving in a snowstorm anyway. They never say what's supposed to happen next, with the highways full of people whose cars have stranded them. Are they all supposed to call for cabs? But wait, cabs have been replaced by self-driving cars...
I'm pretty sure they'll eventually figure out how to get self driving cars working in snow, or rain, with protocols on when to stop that match when humans should.
That's a very short section, but it looks more like how the driving system itself responds if a sensor is damaged in a crash. Basically, that it should hand control back to the driver. And also if you crash and then repair the system, it must somehow be validated/tested before being put back in service.
The data collection "black box" side of it is in a different section.
> it must somehow be validated/tested before being put back in service
I pity the mechanics for that one. You just know the car manufacturers are not going to want some unwashed shade tree mechanic, or even a legitimate independent garage, to have access to do that.
By the way, into which area do the following scenarios fall:
- Yielding to an emergency vehicle with sirens on.
- Moving backwards to a safe and large enough spot when the route is too narrow to fit self-driving car and oncoming huge lorry (and there is no line marking the limit between road and ravine).
- Upon instructions from authority, recognize that the highway is closed due to an accident and, no matter what the driving code says, you actually have to make a U-turn on the highway and follow the crowd. Alternatively, just take that route (yes, the one with the large no-entry sign at the beginning) or that narrow path in the wood (yes, it exists, even if Google Maps isn't aware of it). At the bare minimum, park yourself off the road and let the others move on.
- Verify whether a queue is forming behind you. Listen to the honkers; they may be right. When you are an obstacle to most of the traffic, pulling over and letting others pass from time to time is sincerely appreciated.
Now if they could just come up with some "Data Recording and Privacy" regulations for all electronic devices, so Google and Facebook can stop creeping me out. They're like the creepy neighbors who are always looking out the window to see what I'm doing.
Do you need to use their services? I have been (mostly) Google- and Facebook-free for a long time, and I don't live under a rock. Maybe you should try the "FB&G"-free diet too...? (It's not for everyone :-)
I'm astounded that it seems like these regulations are going to be sensible and promote the technology. It's a good thing that these are going into place, since autonomous vehicles should definitely not be legislated on a state-by-state basis.
> I'm astounded that it seems like these regulations are going to be sensible...
Was that hyperbole? I would say the majority of regulations (at least in OECD countries) are sensible, and many that are not are intended to be, are outdated, or are politicized.
> I would say the majority of regulations (at least in
> OECD countries) are sensible
I think it can be shocking to non-Americans just how much Americans distrust their lawmakers and -- especially shockingly -- their civil servants, assuming they are both incompetent and have malicious intent.
American friends have found it incredible -- for example -- that something like NICE[0] can exist and people don't assume it's trying to kill them all; cf "death panels".
I also wonder in what other developed countries Jade Helm 15 would have been controversial[1]...
Regarding your Jade Helm 15 question: around this neck of the woods we host the world's largest cold-weather military exercise every year, with 15,000-20,000 soldiers from all across NATO. We literally have US Marines and other foreign forces playing invasion right where we live, using air force, army, and naval assets. There have never been any conspiracy theories or fears regarding this. Perhaps mainly because we appreciate you guys having our backs and knowing how to fight in snow, just in case Ivan comes over for a "visit".
On a related note, there is a truly hilarious story from a guy over on Reddit who served in the Marines. They were stationed in North Carolina and had never been in snow; they came to this exercise and, of course, got their asses truly handed to them in a snowball fight by a bunch of Norwegian schoolkids. Highly recommended reading; first comment after the OP here:
> I think it can be shocking to non-Americans just how much the Americans distrust and think their lawmakers and -- especially shockingly, their civil servants -- are both incompetent and have malicious intent.
The EU is often criticized (e.g., Brexit) as being something that promulgates useless regulations (e.g., curvature of a banana).
I quite like the human rights and environmental protections afforded me as an EU citizen. Britons are already seeing their workers' rights in the crosshairs since Brexit. Glad I live in the Netherlands now.
At first I thought you were making up a super funny and clever thing, but by god, you're only referencing a pre-existing funny thing [1]. Although funny in a different way.
Hey, never mind that the article you linked expressly notes that the entire policy is around a standard for the classification of produce, and that the EU is first and foremost an economic union with the goal of harmonized trade regulations.
You know, the type of thing where standard gradings and classification of produce and manufactured goods would be fairly important? (and you know, in no way different to any other modern nation or industrial group).
In fairness, Britons don't think NICE can exist -- see this case where an American went on a radio show and praised its CBA approach, while the British participants vehemently denied that it does exactly what it actually does:
>>...Britain had achieved cost-effective treatment for everyone, at the cost of some people missing very expensive treatments that might help them. I was rather congratulating myself on this answer, because NICE is beloved of health wonks everywhere; Obamacare’s Independent Payment Advisory Board (IPAB) is an attempt to sort of replicate it. Pointing out something the British health system can do that the American system can’t, and doing so in dryly factual tones, seemed like a good way to endear myself to the British audience.
>>The other guest, a British health official, interrupted to basically accuse me of lying; the British health system, he said, did no such thing.
>>Now I reiterate: I had not called NICE a death panel, or said that it was bad; I had simply described what NICE does, which is keep the NHS from blowing its budget on very expensive treatments that deliver relatively little value per pound spent. You can read NICE describing what NICE does on its website; the description is not significantly different from the one I gave. Being told that this was flat out wrong was surreal. Things got even more surreal when I began again to explain what NICE does, thinking that perhaps I had been unclear, and the host interrupted me and said something like “As you know, that’s false.”
Many people automatically assume a government can't make up sensible regulations. There are a lot of them in the US. It's a meme you hear all the time, especially in a POTUS election year.
One problem is that regulations tend to accumulate. I like what Canada does with its "one for one rule" which removes one piece of old outdated regulation for every new regulation made. In fact, at first when British Columbia implemented this law, they did 2 for 1 to clean out old laws, until later switching to 1 for 1.
Economists, based on empirical research, by and large agree that "Regulatory Capture" will normally make regulations work in the interest of the major companies in an industry, rather than the public interest.
Belief in the pervasiveness of regulatory capture is really less the product of empirical research and more a restatement of fundamental principles of liberal economic theory as old as Smith, buttressed by some good anecdotes from certain markets. When it comes to actual empirical research, disentangling the interests of the public and established corporations is pretty difficult: often they are shared, particularly when it comes to safety regulations.
It'd be easy enough to show that a future testing regime increases the market share of domestic self-driving car manufacturers and pushes the market price up; less easy to show that it wasn't also in the public interest to have that testing regime in place.
Regulatory Capture will by definition always result in Regulations working in the interest of the major companies(or special interest) in an industry, rather than the public interest.
Definition:
Regulatory capture is a form of government failure that occurs when a regulatory agency, created to act in the public interest, instead advances the commercial or political concerns of special interest groups that dominate the industry or sector it is charged with regulating.
a) Previous government experience is highly valuable to private sector employers
b) Government pay is less than this value
b) affects regulatory capture in two ways: it allows civil servants the opportunity to get massive raises by going private (and incentivises them to be nice to future employers) and cripples the recruitment of highly talented individuals who are less dependent on industry advice. I don't think attacking a) is feasible in a modern regulatory state, but b) is readily doable if a government is willing to significantly deviate from standard salary scales for high-value industries. For example, SEC salaries would have to be much, much higher than Department of the Interior salaries. AFAIK, Singapore already does this and has very high talent retention rates. Even within the US government, it isn't entirely unprecedented, since an E-3 Navy special forces operator probably makes 8x the salary of an E-3 Army public relations specialist.
That covers materialist capture, but there is also non-materialist capture:
>Materialist capture, also called financial capture, in which the captured regulator's motive is based on its material self-interest. This can result from bribery, revolving doors, political donations, or the regulator's desire to maintain its government funding. These forms of capture often amount to political corruption.
>Non-materialist capture, also called cognitive capture or cultural capture, in which the regulator begins to think like the regulated industry. This can result from interest-group lobbying by the industry.
>Another distinction can be made between capture retained by big firms and by small firms.[11] While Stigler mainly referred, in his work,[12] to large firms capturing regulators by bartering their vast resources (materialist capture), small firms are more prone to retain non-materialist capture via a special underdog rhetoric.[11]
I think it might be deeper than that. I don't feel that the US government, on its own, is incapable of drafting reasonable legislation. The problem is that the US government is 100% for sale to the highest bidder, and corruption runs deep (we just call it "campaign contributions" as if that makes it better). If sensible regulation is proposed, it'll last 30 seconds before the good senator from [some self-driving car company's home state] has turned it into a document crafted to drive business to his "contributor".
This isn't a political statement as it cuts across both parties, which renders it all the more insidious.
Surely this is based on 0 personal direct experience with the people that write these kinds of regulations.
I have worked with engineers that write technical regulations. They are generally focused on doing a good job at the task at hand. To think some mid level person that is hired into a normal job and never meets a politician in their career cares about campaign contributions is asinine.
What do you think the people at NASA and NAVSEA and NIST do all day?
This is not an informed opinion. This is an opinion carefully shaped by the same influences from different industries over the last 30 years who generally benefit from the removal of their regulatory environment (miners, oil industry).
The real tell is in your proposed commitment to improving government process: you don't have any. You think it's hopeless. You're apathetic. Which is what everyone, pushing any agenda, wants from you.
Or even if the politician isn't influenced by the campaign contributions, they'll just run one-sided ads against the other side that usually have a marginal-at-best factual basis.
My biggest gripe is overreach. You start with sensible building codes, and eventually the city council is telling you what color bricks you have to use before they'll approve your plan. Yes this happens.
Above, someone suggested they consider the color of the bricks on their house free speech, which is a constitutionally protected right. I don't necessarily agree (although, I don't think some shitty HOA should be able to dictate the color of my damn house), just clearing up your confusion.
I lived near a small historic town with many buildings standing since the 1850s. Tourism is a HUGE industry that brings in dollars to local businesses. It's in the town's interest to preserve that income, so they mandate color and style codes for new buildings as well as restoring older buildings. This is complemented with many folksy festivals and re-enactments as part of drawing in tourist dollars.
When people balk at these codes, and they do all the time, there are several other larger and modern cities nearby where they aren't restricted in any design sense.
As long as the codes are written by people who know their architecture and architectural history 100%, I am completely comfortable with that. It's just that I've had far too many experiences with historical preservation codes written by amateurs with limited knowledge of architectural history who effectively ban any attempts at making a building more airtight and efficient while letting aesthetic travesties like asphalt shingle roofing and poorly proportioned window trim stand.
This I understand. I can tell you that this little town is maintained by both historical and design professionals, and new buildings (and renovations to a practical extent) have all the modern conveniences and safety while preserving the historic aesthetic.
There is no clear sharp line between the two; form has always been part of the function of architecture. What your house looks like affects the rest of your community; therefore there is a democratic process that gives them input.
I used to live near an old historic town with buildings standing since the 1850s. New people and businesses are moving in all the time and stuff needs to be built. The town uses color codes and other design elements to preserve both the older buildings as well as making newer buildings match the historic tone. This is explicitly done to promote and keep tourism flowing into the town, which is a HUGE part of their income.
If an industry or contractor balks at these ideas (and they do every now and again) there are several other larger modern cities a few miles away with access to the Interstate and train yards. These don't share the "historic preservation" codes of this little town.
If the town allowed a free-for-all on design it would wreck its main source of income and likely cause decay over the years as tourism dropped off.
The town itself has fewer than 3,000 people. Tourism is its major industry, and without it the town would disappear.
> Why don't "tourism people" just pay people constructing buildings to use the colour "tourism people" want?
That implies they could ignore the rule at any time. Reimbursement programs are an increase in paperwork, which many would simply ignore for convenience. This would give the town a "Tragedy of the Commons" problem, erasing its historic character (and primary revenue source), and it would become another run-down town like many others in the region.
Is that fair to those who invested heavily in keeping their businesses and homes in that area? Their answer is a resounding "No".
If somebody balks, just like this, there are other more modern and relaxed cities within a few miles that can accommodate their building ideas. These cities even have better access to freeways and trains, so economically it makes sense to put their businesses there.
Instead, the primary draw of this town IS its historical authenticity, and thus the people living there maintain it through its building codes. There is no other reason a business or homeowner would locate in that area, so it makes sense to keep with its character. If that's too onerous, then perhaps your motivations for building should be reexamined.
Some people love teal and pink. Having to look at or try to sell a house next to a monstrosity can be pretty horrible (or the neighbors with junker cars all over the lawn). Having been in a couple of regulated areas, I'm not much of a fan of the micromanagement that happens. However, having had my value/quality of living majorly degraded by an industrial operation moving in next door in unregulated BFE, the risk of living somewhere without rules is higher than the cost of compliance, at least for me.
The regulation on brick color mentioned above is at the city level, not federal or state. You have districts that enforce aesthetic rules, typically to preserve the look of a neighborhood.
That's not accurate. They seem more concerned that every bit of power given to government to do something right will eventually be used to do something wrong.
To be fair, there's a long and repeated history in the US of
----
GOV: We know this law is overreaching, but we promise we'll only use it the "right" way.
... 2 years goes by ...
GOV: If you don't <plead guilty | accept this plea bargain>, we'll tack on a charge of breaking <this law that is overreaching>, even though you didn't violate what it's supposed to be about, and add 20 years to your sentence.
----
It's seen over and over. The US citizen's distrust of government getting more power than it absolutely needs isn't paranoid, it's based on the actions of the government.
Conversely, and more relevantly to regulations on self-driving cars, there's a long and repeated history of
--
GROUP: The regulation is anti-business, anti-freedom and massively outdated. We should sweep it all away and deregulate this sector as much as possible. The market will take care of the bad companies.
...2 years goes by...
GROUP: Do you know how important it is that this industry survives? Please give us some money to fix it. And some of the behaviour of some companies in our industry is unethical and dangerous and really should be stopped. Why didn't you step in earlier?
No that is simply not true. Just because someone doesn't think that the war on drugs is a good idea or that prostitution should not be criminalized, etc. doesn't mean they think "governments can do nothing right"
You're right, we probably should be celebrating our lead tainted water, fracking, telecommunications monopolies, $300 Epipens, unaccountable financial institutions, pipelines across native land, corrupt campaign contributions, infrastructure decay, lack of decent health care...
I just assume the government won't do anything without a benefit for themselves. In this case they are probably getting some nice benefits from the automakers.
I say this as someone who has failed to even get a non-form letter answer from any of my elected officials state level or higher. I'm convinced that money is the only way to affect policy.
A regulation that seems sensible to one cultural group might seem like ridiculous overreach to another cultural group. The US is one of the most polycultural countries in the world, so any regulation is bound to piss a lot of people off. It's better to be judicious with regulation in such situations.
Gun control is a great example that seems to confuse a lot of non-Americans. To your average San Franciscan, who has never used a gun and has no particular reason to use one, restrictions on e.g. magazine size probably seem quite reasonable. But go to an agrarian Texan rancher, and the situation is entirely different. Good luck thinning out a stampeding herd of wild hogs with a ten round fixed magazine. Similar situation with pot; the average SF resident is probably fairly familiar with it, whereas the rancher probably isn't. In either case, ignorance breeds irrational fear, which is a bad (but unfortunately likely) foundation for laws.
So yes, many regulations are not sensible, and it's harder to get away with in the US because the US isn't a monoculture. Even those regulations that are sensible (by whatever metric you like) are likely to anger some non-negligible group.
“Democracy must be something more than two wolves and a sheep voting on what to have for dinner.”
― James Bovard, Lost Rights: The Destruction of American Liberty
I think lately far too many people already have the answer before there is any discussion.
I think democracy only functions when people are open minded and willing to put themselves in others shoes.
The stampedes aren't, but hogs are a horrible invasive pest that causes many billions of dollars of destruction per year. They've tried trapping, poisoning, everything. The only thing that has any effect on the population is some serious firepower. The problem is so bad that you can hire people to fly around at night in helicopters with sniper rifles and thermal vision to shoot hogs. I've helped a few friends with hog control, and you'll often startle 10-20 at a time and then they'll go into hiding and you're done for the day. You need to put a lot of bullets downrange fast to have any effect.
I would say that anyone who thinks they have a handle on any significant percentage of the regulations in just one country is fooling themselves or uniquely talented. The sheer bulk neatly precludes it.
I'm not sure that anybody after the Code of Hammurabi, or the Twelve Tables, or the ten commandments could even pretend to understand all the laws and regulations that their brethren have enforced on each other. Law is almost like a gas, expanding in every direction to fill whatever space and attention it is allowed.
How can a website where the majority of people likely live in Silicon Valley actually believe that the majority of government regulation is good? Any regulation needs to be intensely scrutinized because the implications are not even completely understandable when they are created.
While I agree that regulation should be intensely scrutinized, and that the implications of a piece of regulation are usually not fully understood until after it has been implemented, the idea that the majority of regulation is "bad" is myopic.
Well-written regulation (and I would argue that the majority of regulations in the US are well-written) serves the public interest. Two immediate examples that come to mind are the Glass-Steagall Act, which separated commercial banking from speculative trading until it was repealed by GLB in 1999 (opening the door for the financial crisis), and the FDA. I would prefer to live in a country where Glass-Steagall was still in place and the FDA was even stronger than it is today.
I get where you're coming from, but I live in a state where, until very recently, brewers were not allowed to sell beer directly to customers. There are still large swaths of my state that do not allow the sale of alcohol on Sunday, period. And when brewers were allowed to sell beer on tours of their brewery, suddenly the government started messing with the regulations again to basically undo everything.
I think the biggest benefit for self-driving cars is that there is really no big corporation or lobbying group against this technology. Car manufacturers probably would have put up the biggest battle, but they are all pushing for adoption. Had car manufacturers felt this technology was a threat to their business model, you'd definitely see a lot of pushback and crazy stipulations.
Interesting point -- presumably this means that the existing manufacturers think they can compete in this market, or steer it someplace where they can compete. I wonder if they're right.
Cars move. Splitting the laws by state would mean also splitting the market, and allowing cars on the road which would be illegal to e.g. drive into Texas.
But this kind of fragmentation already exists today with different driving laws between states. A legal student driver in one state might not be allowed to drive in another.
Now that I'm thinking about it, it's strange that vehicles are regulated at the state not federal level. They're a big component in interstate commerce, and therefore ought to be within the jurisdiction of Congress to regulate, even under a relatively strict reading of the Constitution.
For example vehicle window tinting laws vary wildly from state to state (and arguably they're more liberal in states that get hotter, and more restrictive in states with gang issues) so you can own a vehicle that is legally tinted in your home state, but gets ticketed when it crosses a state border.
Daylight running lights are another example, some states require them, while others do not. So you can buy a brand new vehicle which could get ticketed since it lacks DRLs.
Are there any states where turning on the headlights wouldn't satisfy the law? Using headlights all the time probably won't kill the bulbs in a year, but none the less, is $40 a year really a massive problem?
Similarly, most people don't care about tint. Those that do but are agonized about being able to travel to other states can simply figure out the maximum allowed in the region they plan on traveling in. I guess that reaches the level of irritating, but what are the massive consequences for Joe Driver if he can't darken his windows?
To answer my own question, according to Wikipedia:
"Several states on the Eastern seaboard, the Southeast, and Gulf Coast (except Texas) have enforced vehicular laws since the early 1990s that require headlights to be switched on when windshield wipers are in use. This prompted the phasing in of DRLs in the affected states (from Maine to Florida including Louisiana, Mississippi, and Alabama)."
So it appears that DRL aren't required, but frequently standard equipment in states that require headlamps on if windshield wipers are on... Wikipedia does not list any states requiring use of headlamps all the time, though.
Vehicles are regulated at the federal level. That doesn't preclude states from applying additional regulation, such as the California emission standards.
Yup, California ruins everything for the rest of us. They are the reason we can't have nice things. One notable example are the CA regulations on gasoline cans, which have driven out the design that worked well and replaced it with "spill-proof" designs that are terrible and, ironically, much easier to spill with.
States rights is more important for progress than you might think. Each state is an incubator for ideas.
Single-choice monopolies impede progress, whether governmental or corporate. It's better to have states naturally group together than to force it with some top-down measure.
I was under the impression that most of the requirements only applied to registering a car in that state, not operating the vehicle?
So, for example, NY requires yearly safety inspections and you'll get a ticket if your inspection lapsed. But you don't have to get a safety inspection to drive in NY if your car is registered in a state that doesn't require safety inspections.
I could be mostly off base on this one.
Though some laws are so local that it's sometimes impossible for an out-of-towner to know them. Turning right on red, as far as I know, is illegal in NYC but legal... everywhere else? How is someone from Texas supposed to know that?
>Though some laws are so local sometimes it's impossible for an out of towner to know the local laws like going right on red is, as far as I know, illegal in NYC but legal... Everywhere else? How does someone from Texas supposed to know that?
You've pretty much picked an outlier. And I might be inclined to argue that someone from Texas trying to drive in Manhattan for the first time has other problems :-)
There are a few other things like whether you can pass on the right on an interstate and the aforementioned when headlights need to be on (though I often see this last point signed). But these are usually getting into corner cases and don't really affect how the average person has to approach driving.
It should also be mentioned that many of the divergent cases tend to be bad ideas for safety reasons anyways (passing on the right, using your cell phone when driving, even right-turn-on-red--don't do that in pedestrian-heavy areas).
Places with divergent laws make some effort to inform visitors of the divergence--you'll sometimes see electronic noticeboards saying that using your cell phone is illegal here, and sometimes permanent ones too (e.g, on entry into Virginia on interstates, you are immediately informed that radar detectors are illegal).
I'm not. Autonomous cars will face strong competition from regular cars, and the manufacturers have got to be worried about liability. Hence, the private sector in this case has its interests more or less aligned with the regulators, so you should expect fairly effective regulation, as with the FAA and commercial airlines.
Pages 14 - 30+ of the embedded report (pages 16 - of the PDF) are particularly interesting and promising, especially the portions about transparency around privacy and ethics issues.
The report recommends that "Manufacturers and other entities should develop tests and verification methods...". Does anyone know whether verification here means software verification, or does it mean something else in this context?
The report makes reference to "Assessment of Safety Standards for Automotive Electronic Control Systems" by NHTSA, which itself reviews ISO 26262, MIL-STD-882E, DO-178C, the FMVSS, AUTOSAR, and MISRA C.
In this context, they mean verification and validation in the systems engineering sense. Software would be included in that it is a part of the whole system.
I have a hard time understanding the current AV SW stack.
On one hand, at the low level (sensors, motor control, etc.) you likely have traditional hard real-time/MISRA C code, but on the higher layers you probably have things like DNNs and image recognition, which are much less deterministic.
So I am not sure how you reconcile these two worlds and prove it is safe and always works in a timely manner.
It seems the only sound approach would be to validate the whole system on a real road.
First, as etendue says, it is not easy. The problem of mixing “Boolean” verification with probabilistic, less-deterministic verification is especially hard. I discussed this a bit in [1], if you care to take a look.
Also, I think most current AVs are not driven by DNNs at the top level (comma.ai [2] is one exception). See [3] for some discussion of that, and of verifying machine-learning-based systems.
Finally, one possible way to check that AV manufacturers “do the right thing” in correctly verifying the combination of DNNs, Misra C, digital HW, sensors and so on is perhaps to create a big, extensible catalog of AV-related scenarios, which ideally should be shared between the manufacturers and the certifying bodies – see [4]. I think there is some hint of that in the DOT pdf – still working my way through it.
I think the simple answer is that it is not easy. To start, rigorous design processes with risk analysis upfront are certainly necessary, as are well-defined operational contexts for the autonomous functionality, and a very disciplined approach to clearly defining safety-critical subsystems and minimizing their surface area.
That sounds pretty much just like web application development, or any other front-end user-facing development. You can verify internal components through testing, but once you introduce non-deterministic variables like the browser software and the users themselves, all you can do is validate the entire system through real-world testing and hope you've covered the edge cases you need to handle and will fail gracefully for the ones you missed.
The point I was trying to make is that if you have actuators running MISRA C that are going to be driven by something written in TensorFlow, does it still make sense to require MISRA C for the low-level part in the first place?
I'd be very wary of using complex SOUP like TensorFlow, even if brought under my quality system. I think a good answer here is that once one goes under design control the subset of functionality needed should be implemented in-house under the organization's SDLC.
Of course these things are meant to be used (1) to train the system, (2) as a player in the prototype. Exactly like in the old school ML-based systems: you train in Matlab or CudaConvNet, and then you load the trained classifier into the custom-made player highly tuned to your hardware and problem domain.
Most certainly - safety should be guaranteed at the lowest level, even if TensorFlow gets borked.
Think of it as a failure cascade - if TensorFlow breaks, the car can safely stop. If the low-level stuff breaks, the car may not be able to stop (or go).
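That cascade can be sketched as a watchdog in the low-level controller: it trusts the high-level planner only while the planner keeps producing sane, timely commands, and otherwise degrades to a safe stop on its own. A toy illustration (all names and thresholds invented):

```python
import time

SAFE_STOP = {"throttle": 0.0, "brake": 1.0}   # low-level fallback command
PLANNER_TIMEOUT = 0.2  # seconds without a valid plan before we bail out

class LowLevelController:
    """Hard-real-time side: trusts the planner only while it stays alive."""
    def __init__(self):
        self.last_plan_time = time.monotonic()
        self.last_plan = SAFE_STOP

    def on_plan(self, plan):
        # Sanity-check the planner output before accepting it.
        if 0.0 <= plan.get("throttle", -1) <= 1.0 and 0.0 <= plan.get("brake", -1) <= 1.0:
            self.last_plan = plan
            self.last_plan_time = time.monotonic()

    def command(self):
        # If the planner is late or borked, degrade to a safe stop.
        if time.monotonic() - self.last_plan_time > PLANNER_TIMEOUT:
            return SAFE_STOP
        return self.last_plan

ctrl = LowLevelController()
ctrl.on_plan({"throttle": 0.3, "brake": 0.0})
ctrl.on_plan({"throttle": 5.0, "brake": -2.0})  # garbage from the planner: rejected
```

The sanity check means even an actively misbehaving planner can't command the actuators outside their envelope; silence and garbage both decay to the same safe state.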
This is excellent news! Guidelines to follow implies that if the manufacturers can meet these guidelines then they could plausibly have a somewhat legal basis for putting self driving cars on the roads.
N.B., this policy is mainly concerned with Highly Automated Vehicles (HAVs), which are defined as SAE Level 3 ("capable of monitoring the driving environment") and above.
edit: as to SAE Level 2, it has this (and more) to say:
> Furthermore, manufacturers and other entities should place significant emphasis on assessing the risk of driver complacency and misuse of Level 2 systems, and develop effective countermeasures to assist drivers in properly using the system as the manufacturer expects. Complacency has been defined as, “... [when an operator] over-relies on and excessively trusts the automation, and subsequently fails to exercise his or her vigilance and/or supervisory duties” (Parasuraman, 1997).
also,
> Manufacturers and other entities should assume that the technical distinction between the levels of automation (e.g., between Level 2 and Level 3) may not be clear to all users or to the general public.
I'm surprised that self-driving technology is focusing on replacing the driver as an autonomous actor, processing visual and radar/lidar signals in order to know about its surroundings. I've always thought we'd get further faster by having automobiles also talk to other vehicles nearby, and design roads to support the computer driven vehicles.
Two examples are:
1) If the vehicle is talking to the cars in front of it, it can know they are braking before it senses that visually. Also, the vehicles can speed up in a gridlock scenario more in unison, like a train.
2) On the interstate, markers in the pavement can be specifically designed for computer sensors rather than human eyeballs. Also, cars can draft together to save fuel.
While networked cars are interesting, there is also a massive security issue here.
Hackers will easily figure out a way to spoof the communication, and could play with traffic.
There are mitigations for most issues, but it's a complex topic.
Just imagine some scenarios:
-) Spoof an emergency brake advisory that causes tailing cars to also do an emergency brake. (could be mitigated by first observing that cars in front are actually slowing down before braking)
-) Spoof a command from a smart traffic light at an intersection to stop immediately for police / other emergency traffic. (need to check if traffic light is actually red)
-) Spoof speed restrictions issued by a smart highway traffic jam prevention system.
-) A system for police to force a car to stop immediately and pull over, eliminating car chases. Just spoof this signal and stop anyone you want. (mitigate by checking if there is a police car trailing you, and ignore otherwise).
And so on...
A way around would be to maintain a national database with public keys for each registered vehicle, and make cars only accept messages signed with those keys. But that would be hard to maintain, and hackers could still get hold of some key.
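Roughly, that registry scheme means every car-to-car message is signed and the receiver checks the signature against the registry before even considering the content. A minimal sketch (HMAC with shared secrets stands in for real asymmetric signatures here, purely to keep the example stdlib-only; the registry and VINs are made up):

```python
import hmac, hashlib

# Hypothetical registry of registered vehicles. In the scheme discussed
# above this would hold *public* keys; shared HMAC secrets are used only
# so the sketch needs nothing beyond the standard library.
registry = {"VIN-123": b"vehicle-123-secret"}

def sign(vin, message):
    """Tag a message so receivers can attribute it to a registered vehicle."""
    return hmac.new(registry[vin], message, hashlib.sha256).hexdigest()

def verify(vin, message, tag):
    """Drop messages from unknown senders or with bad signatures."""
    if vin not in registry:
        return False
    expected = hmac.new(registry[vin], message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg = b"EMERGENCY_BRAKE"
tag = sign("VIN-123", msg)
```

Even with perfect signatures, a stolen key still lets an attacker send "genuine" lies, which is why the thread's conclusion stands: the car must cross-check the content against its own sensors.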
In the end, the driving system will always have to correlate such car-to-car communication with observations it makes itself.
And an autonomous system can react almost immediately anyway.
So coordination doesn't give you all that much.
--
There are some useful ideas though, like:
-) Traffic lights can announce an ideal speed for a route, taking into account traffic and traffic light timings, so you can optimize throughput and minimize fuel consumption
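That announcement boils down to a small calculation: given the distance to the light and when its next green window opens and closes, broadcast the speed that makes cars arrive just as it turns green. A sketch with invented numbers:

```python
def advised_speed(distance_m, green_start_s, green_end_s, v_max=16.7):
    """Return a speed (m/s, capped at v_max ~ 60 km/h) that gets the car
    to the light while it is green, or None if no such speed exists."""
    earliest_arrival = distance_m / v_max
    if earliest_arrival > green_end_s:
        return None  # even at full speed we miss this green window
    # Aim for the start of the green phase, but never exceed v_max.
    target_time = max(green_start_s, earliest_arrival)
    return distance_m / target_time

# Light 500 m ahead turns green in 40 s and stays green until 55 s:
v = advised_speed(500, 40, 55)  # cruise at 12.5 m/s (45 km/h)
```

This is the classic "green wave" idea; broadcasting it per route just lets cars (or drivers) hold the coasting speed instead of sprinting to a red light.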
All good points. Seems like you could get around it by using these other-car communications as noisy signals and weighting evidence against the world as the car sees it, e.g., if you get a spoofed emergency brake advisory, and the car's own percepts suggest there's no reason to brake, the resultant action may be to not brake. The signal from the other car[s] becomes just another feature.
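That "just another feature" treatment can be as simple as a weighted vote in which the car's own perception dominates and the network signal only nudges the estimate (weights and threshold below are invented for illustration):

```python
def should_brake(own_obstacle_prob, v2v_brake_advisory,
                 w_own=0.8, w_v2v=0.2, threshold=0.5):
    """Fuse the car's own perception with an (untrusted) V2V advisory.
    Own sensors dominate; the network signal only nudges the estimate."""
    evidence = w_own * own_obstacle_prob + w_v2v * (1.0 if v2v_brake_advisory else 0.0)
    return evidence >= threshold
```

With these weights, a spoofed advisory against a clearly empty road (own probability 0.1) yields evidence 0.28 and no braking, while the same advisory corroborated by a 0.6 own-sensor estimate yields 0.68 and triggers the brake.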
Considering the millions of miles driven each day, even if the networked signal wasn't heavily weighted, a spoofed emergency brake advisory signal could still do significant damage.
I don't think that's really all that relevant. The two activities at hand (hacking networked cars vs. throwing a brick off a bridge) appeal to two different types of people. Also one can (theoretically) be done from the comfort of one's own house/office/bedroom which can be anywhere in the world, while the other requires going to the specific location of the traffic.
The reason is that there are and for the foreseeable future will be things that are not networked. Lots of legacy cars, bicycles, pedestrians, trains. In some areas probably even animals on the road. If cars don't work in these environments they are pretty much useless.
It's interesting that this NHTSA statement doesn't mention car-to-car communications at all. There are parties pushing for that, but Google's Urmson was against it. His point is that the troublesome roadway obstacles, from kids to road debris, won't be on the net. So you have to have good sensors, and they can see other cars just fine.
As others have mentioned, vehicle-to-vehicle communication only works if you trust the other vehicles. This sort of thing is almost certainly coming for fleets of trucks though (Google "truck platooning"), where a known set of vehicles can communicate with each other.
Car manufacturers were discussing this kind of thing about 15 years ago.
We were working on diagnostic and emissions checking standards but there was the expectation that we would be able to make use of secure network links to cars at some point in the future.
The question at the time was which would come first. Would a requirement to do emissions testing under real-world conditions push the introduction of radio networks that could also be used for cars to talk to each other or would the road-train type applications be the initial use case.
I think that concern over malicious communicators will at least slow this down. If not implemented correctly, it could give hackers a dangerous amount of control over traffic.
And when an old vehicle can't be driven on the road? I can't see laws banning old vehicles for at least 40 years... You can still go without a seatbelt if your car was manufactured without one, so I would think anything accommodating only autonomous vehicles would be met with public outcry.
The effort is focused where the money is. Car companies will sell these cars and make a profit. States have a hard time keeping dumb roads in good condition, they have no money to make roads smart or keep them repaired.
I'm really excited by US govt outlining what it would take to make a legal self driving car.
I'm also hoping that one of the options is to upgrade an old car to a self-driving car with an open source kit that you can buy and have installed by a certified mechanic.
I think that would be an interesting future I'd like to be part of.
Brad Templeton, who's been working on self-driving cars for a few years now, analyzed this in http://ideas.4brad.com/critique-nhtsas-newly-released-recomm.... He says, "Broadly, this is very much the wrong direction... the progress of robocar development in the USA may be seriously negatively affected."
> I have written that having 50 sets of rules may not be that bad an idea because jurisdictional competition can allow legal innovation and having software load new parameters as you drive over a border is not that hard.
I'm not sure I would put much weight behind what he has to say.
It's nice that these regulations sound sensible and not heavy-handed. I'm wondering whether they are needed at all though. They've been formulated in heavy collaboration with the market leaders Uber, Google etc. Is there a risk they will help shut out upstarts, similarly to how the FDA makes drug development astronomically expensive?
What is it about self-driving cars that has HN readers so incredibly excited? Sorry if it's a little off-topic.
The reason I ask is there are plenty of other countries in the world where cars just aren't that important; take the Netherlands for example. If you have autonomous cars, society here is just not going to be that excited, AFAIK. Public transport here is great and most people cycle everywhere, because it's fun, easy and good exercise. Not to mention a lot of people are employed as drivers.
Same for many Asian countries where population density is high, people just don't have the money/room for cars. Scooters are the way to go because of traffic congestion.
Besides, don't people enjoy driving? I don't own a car but when I get behind the wheel, it's a lot of fun. Will people really be able to handle the car doing the speed limit?
I understand technologically it's pretty interesting, but we've had commercial airliners that fly themselves (mostly) for a long time, same for ships and drones and we don't marvel over those things all the time, though I agree they are great innovations.
So apart from the tech what is the actual excitement about?
- Concern for those who will lose their jobs.
- Concern for others safety.
- Privacy concerns.
- Excitement about the safety benefits.
- Economic opportunity.
- Fundraising hype.
- All of the above?
As a Silicon Valley outsider sometimes I read HN and it feels like some context is missing. Sure it's going to change industries, but is this really good progress, necessary progress, or just the next thing we're told we need? I mean can a self-driving car really replace a delivery person yet, a person who can do things like leave packages with a neighbor and build relationships, trust etc?
Sorry if this is a little off-topic, but I'm genuinely curious because it's hard to understand, to me as an outsider, it really looks like some kind of ride-sharing turf war hype battle more than anything else?
I dare say it, but it's the same for machine learning: a lot of it is fascinating, interesting, exciting tech, but how many product recommendations does one need? How good do my friend recommendations have to be? How smart does Siri need to be? Will a patient really feel better without being treated by a human? Are we really going to trust these things handling nuclear warheads?
Maybe I live a strange life and have unusual views, but I just don't really see the need for most of these things when so many problems could be solved using other means. Using this stuff to help people is great, but how much of this effort is actually being put towards that end?
If I'm a little naive, apologies. I'm not having a go but these are just honest questions I often find myself asking when reading HN lately. Agreed this might not be the place to ask but I'm prepared to wear the down votes :)
In my opinion, you mentioned the core reason why you find the excitement baffling, i.e. that you don't drive often but you find it exciting when you do.
Now imagine the scenario for most of the US, a public-transport-hostile country for the most part, where millions upon millions of people burn their precious lives waiting in traffic and sucking in traffic fumes. In my mind, this is one of the most appalling wastes of human potential that has ever existed. Sure, some try to make lemonade out of lemons by educating/informing themselves as they see fit but by and large, it is a huge waste. Not to mention the many thousands of people who die every year in car accidents during the daily commute.
So from my point of view, the self driving car is a thrilling concept: the ability to disengage from a useless, pointless, and hopeless daily grind and engage in something that I want to do, whether it be work, reading, watching a movie, etc. is cool. The closest I have come to this dream in my transit-unfriendly Texas city is the one job where I had an opportunity to take the train/bus into downtown: while this made my daily commute very long, I loved it because it freed me up from the drudge of driving.
Some might ask that perhaps I just hate driving. That is not true. I love taking road trips or autocrossing when I can. But to equate the daily commute with enjoyment is a bridge too far, in my opinion. Banish it, I say, banish it.
Thanks for the response, and I totally agree with the sitting-in-traffic, wasted-time thing, but with all due respect it still seems inefficient; wouldn't public transport and telecommuting be better options in a connected world?
Will there really be no more traffic jams, or will the car become like an office? In that case why not just work from home and come to work for meetings here and there? Might flexible / shorter work hours help?
I mean people will still be riding around in a vehicle, which often makes people motion sick if they're not paying attention to their surroundings; cars require a lot of energy, take up space, etc.
I used to travel to work via train, it was 1.5 hours one way, it was highly productive time for me, but for some reason trains don't seem to make people as motion sick?
I guess one other thing to note is that in Australia, where I'm from originally, some people think of others using public transport or biking as kind of peasants, or feel it's inferior; that might be part of it too. They're also the kind of people who often like to drive fast and own expensive cars as a status thing, so I'm still not sure it's going to take off.
It's incredibly frustrating that people are getting excited for autonomous cars' ability to free them from the drudgery of driving when this has been a solved problem for a long time with public transit.
Unfortunately anti-public transit special interest groups have discredited public transit initiatives all over, and fighting this has been incredibly difficult.
On your last point, people definitely do see expensive cars as a status item, and for this reason I think it's valid to question to what degree, and how quickly, autonomous car-sharing networks will replace individually owned cars.
I fully expect self driving cars to arrive before BART goes all the way down the peninsula. It's total crap, but I'd be surprised if it wasn't true. Same for more frequent/faster/more reliable caltrain with room to sit or bring your bike.
On one side, you have a solution that requires a whole bunch of groups to align. On the other, you have an individual decision (buying a car). That's why I am excited. If it looked like it was on the horizon, I'd be just as excited for great public transit.
I fully agree: most rational nations have invested in public transportation to solve this problem. The US has a myriad of problems that has made public transportation an afterthought and not viable for most of the country.
I also agree that telecommuting or flexible hours would work well. But again, for most office workers in the country and world, there is significant inflexibility: they must arrive at their appointed time, leave at their appointed time, and take breaks at appointed times. My statement was intended to be broad and to apply to all workers instead of just for the high-tech industry.
So I see self-driving cars as a backdoor solution to the above problems: replace regular cars with self-driving cars, maybe with car-sharing, and now a country that has so much invested in the car suddenly is able to significantly reduce the latter's influence on society without realizing it. Well, one can but hope anyways.
The more human drivers that are replaced by software, the less congestion there will be on the roads. It's delays in human reaction times which cause cars to back up on roads and take forever to get through intersections.
I still don't see how the cars are going to entirely solve the problem of everyone needing to be in the same place at the same time; they still have to cope with each other's presence, unless they can fly.
In my opinion a good public transport system does. I've spent time in Tokyo, the rail network is second to none, it's congested sure, but it works.
The problem is that people still need to get in and out of the cars, for example, which means there is still wait time. Breakdowns will happen. All human drivers would have to be banned, etc.
It sounds to me like the auto-industry is still pushing a selfish agenda and the valley is buying it, because it's an opportunity to take more money.
For me, I'm happy because I'll still ride my bike and it will be a much safer place for me to exist because self-driving cars won't try and kill me, hopefully :)
But if I enjoy riding my motorcycle to work every day, that is my waste of time. I am all for autonomous cars, as long as I am allowed to have my vehicles and drive them when I want.
Also, we all know the security on cars is weak; who needs a bomb in a terrorist attack when you can just hack something and order 50,000 cars to crash (that might sound silly, but hackers find ways)? Do you trust auto manufacturers enough to secure it? Yeah, we can mail firmware updates on USB sticks, what could go wrong.
Sorry I am pretty sure I am one of those hostile drivers. They should market it as like a designated driver for the drunk. It would sell faster then.
It's the intersection of dramatic potential effects on society with skills that HN readers have and understand.
A sibling comment pointed out the loss of life issue. I recall (correctly, I hope) Sebastian Thrun mentioning that a traffic accident was part of his initial motivation. Reducing loss of life due to human folly is a strong motivator, but there are certainly ample opportunities for that beyond just this one.
Self-driving cars have widespread potential effects across society, from shipping to taxis to car ownership to the human angle in hours saved and lives saved. This is big. Think of all the lives lost and hours wasted in traffic in the US every year. (No functional public transit in most of the country, etc.)
It's an area where the challenges are largely technical--once the technology is safer than human drivers, we assume the regulatory issues will go away quickly. (And we probably underestimate the technical issues in getting there.)
The huge potential combined with massive and primarily technical challenges makes this probably the biggest thing since the Internet where a bunch of engineers feel like we can change society in a profound way with a bit of software.
It's partly because the tech is actually so new that we can project our expectations onto it rather than focusing on what is actually here today (which is impressive but far from the goal). Weigh this potential against the reality of what most of us actually work on today, and you may see the appeal.
Of course, reality will take time to catch up to the dream, but it's the dream that generates the excitement.
Good questions and it's great to hear this take on things.
Cars play a large role in America. I don't know all the history about how it came to be this way, but I can make some guesses...
* America is very big, and a lot of the settlements are spread out by a ways. Cars make those communities less insular because they provide a way to get from one town to another, where biking would take a very long time and significant effort.
* I forget when (was it the Great Depression?), but there was a big government initiative to build interstate highways connecting places together by roads. Again, these were distant places, but by being able to travel by car, they now feel quite a bit closer. Most everyone on HN was born after this gigantic network of roads was already in place, and car culture was firmly cemented in the US.
* Due to the distances involved, getting a drivers license around the age of 16 or 17 is a huge amount of freedom bestowed on children just as they really desire such freedom. I spent a lot of time in a car as a teenager not just because of where we were going, but also since it was a mostly-private space for me and my friends.
* For the above reasons and many many more, America in general has a culture that is very centered on cars, so given that a lot of HN is both American and interested in technology, it makes sense there'd be a lot of autonomous car talk here.
I wish I had grown up somewhere less car-centric. I moved to NYC specifically so that I wouldn't have to rely on cars, and I quite like the public transportation here. When I've gone back to the rural New Hampshire town I grew up in, public transportation doesn't exist and getting anywhere except my immediate neighbor's houses takes a long time via bike, and I remember why I grew up with cars.
Edit to add: America is also heavily invested in cars and the culture that follows. The train system throughout the country has been in a terrible state for a looooooong time, and there's not much hope of it ever getting better due to the fact that we're so invested in cars. Some cities and towns have better public transportation support, but most don't. Some cities and towns have better support for bicycles and pedestrians, but most don't. There are occasional pushes to change things, but there's always heavy resistance due to just how deep into cars we are, and just how many people and local governments would truly need to get on the same page to make meaningful change.
What happened was: an auto industry was born and lobbied the government to tear down a lot of the tram, bike and rail infrastructure you guys had. Especially in LA.
A similar thing happened in Sydney, Australia. The city had one of the largest tram systems in the world; the director of the motor trade authority was elected into government and made sure that all the tracks were literally tarred over (they are still under the roads). The excuse used was that the network was over-congested / too popular, and cars would solve the problem. Guess what? Sydney is in the middle of putting the tram network back in to solve the car problem :) Melbourne, Australia was spared because it built its network in the '40s and '50s and it was difficult to persuade the working class that ripping out such a newly built system was a good idea. People literally move to Melbourne because of the convenience the trams provide.
If you're interested Bikes vs Cars (http://www.bikes-vs-cars.com/) is an awesome documentary, it shows that LA even had things called "Bike Highways" at one stage. You will see how people were so negatively influenced by big business lobbying, and from memory outlines why New York was somewhat spared from the fate.
Sounds like you're being told the same stories again either way.
I don't get it... Are you telling me that what I wrote was wrong? You're the one who asked why people on HN are so into self-driving cars, and my answers are why.
The only regulation that really matters is making car manufacturers liable for accidents and they would have to pay a fine from $100,000 for the smallest accident (per car) up to $10 million per accident.
When the manufacturers "can't explain" how the accident happened (after an audit was performed), they should be fined the maximum $10 million amount.
Why? Because, for one, if it's just a glitch and "they don't know" about it, then they should pay for incompetence. And two, if the car was hacked by a nation state, then their security sucks, and they should again pay the maximum amount so they have the maximum incentive to ensure the digital security of self-driving cars.
Where third-party self-driving systems are involved (MobilEye, etc), the liability/fine should be split 50-50 between the car maker and the system vendor.
Give car makers these "incentives" and the other regulations are more or less pointless (other than establishing common V2V and V2I standards and whatnot). Then you'll see just how hard they scramble to make their systems safe.
EDIT: And here we go. Remote hack of Tesla Model S.
We're only at the very beginning of self-driving cars. What happens when there are 100 million self-driving cars on the road? Will their security be as terrible as it is on our PCs?
People should get scared about this stuff a lot faster, before all the car makers write their software badly and then, rather than rewriting it from scratch, just tack new security features onto the poorly written systems in response to such hacking.
There’s a lot of stupid to unwind here, but I'll just leave it at this: your plan would leave a tremendous amount of ambiguity that skilled lawyers would wriggle through. That is, “what is an accident?”
Is it an accident if the driver takes control of the automated system and drives straight into a wall? Is it an accident if a non-automated truck with a huge branch sticking out the back obscures the automated car’s cameras, leading to a crash?
Maybe you're a cowboy coder like me who bristles at detailed project specs and prefers some sort of goal or vision to work for. I totally get that. But when it comes to accountability, nothing beats a checklist.
Bad law can be either broad and vague to the point of endless litigation and uselessness (what you're proposing), or hopelessly detailed and self-contradictory (the kind of law you're likely reacting against). This article is celebrating a good, readable middle ground. That's exactly what we want out of law.
These are common questions of fault that are trivially resolved in any motor accident. Someone or something is in control of the vehicle and can be assigned fault. Or in some situations, everyone is at fault and the shares of fault are determined.
If the system is operating autonomously, then the fault lies with the system. Failing to leave sufficient following distance is a common cause of accident and almost always results in fault being found with that driver. This would not change if the driver was a computer system.
However - side note - objects that project from the rear of a vehicle must be flagged using a red cloth. So if the truck were operating with an unflagged load then they could be the party at fault!
If the occupant takes the vehicle out of autonomous mode, then fault would lie with the occupant, unless it was to avoid some kind of impending accident, in which case the situation would have to be examined in detail.
Those who opt out are those we don't want setting the bar in this domain.
If it becomes acceptable for security and safety to be secondary to "getting the cars on the road ASAP and capturing as much of the market as possible" we -- as in the consumers -- will pay the price.
I understand that some of the innovations and progress will only come when we get the cars on the road at scale, but we should still build a giant -- exactly as the commenter suggested -- to loom over the shoulders of these car companies.
Erm, apart from the punitive damages this is pretty much any product liability suit. If you make life-critical systems and your product fails you are getting sued out the wazoo - which is why many products will specifically state that they are not to be used in life-critical systems.
Most mechanical equipment is not reviewed on a case-by-case basis by a regulatory body; however, aircraft incidents are. Aircraft products - meaning any product that is used on an aircraft, right down to the bolts attaching the overhead bins - are expected to be serviceable in "expected conditions of flight". If they are not, the manufacturer is subject to compensatory and even punitive damages [1].
Manufacturers can even be liable for their design decisions, unless the design decisions are specifically constrained by regulations. Obtaining product certification is a strong indicator that a product is compliant, but it may nevertheless expose the manufacturer to liability [2]. These are obviously difficult standards to meet, but they are appropriate when life-critical systems are in question.
> In response to the third and final question, the FAA explains that because an aircraft type certificate embodies the FAA's determination that an aircraft, engine, or propeller design complies with federal standards, it can play an important role in determining whether a manufacturer breached a duty owed to the plaintiff. The type certificate does not create a per se bar to suit, but ordinary conflict preemption principles apply to the particular design-defect claim. According to the FAA, the type certificate will preempt a state tort suit only where compliance with both the certificate and the claims made in the tort suit "is a physical impossibility" or where the claims "stand as an obstacle to the accomplishment of the full purposes and objectives of Congress."7
> Where the FAA has expressly approved the specific design aspect that a plaintiff challenges, that claim would be preempted. On the other hand, where the FAA has left a particular design choice to a manufacturer's discretion, and no other conflict exists, the type certificate does not preempt a design-defect claim. In other words, where the FAA has not made an affirmative determination with respect to the challenged design, and has left that design aspect to the manufacturer's discretion, the claim would proceed by reference to the federal standards of care found in the Act and its implementing regulations.
...
> The difficulty in applying the FAA's views on preemption to product claims lies in the fact that aircraft design specifications rarely require a specific design, but are instead couched in terms of performance or safety outcomes. For example, the certification standards for a stall warning system in a Part 23 aircraft requires "a clear and distinctive stall warning, with the flaps and landing gear in any normal position, in straight and turning flight" by a system "that will give clearly distinguishable indications under expected conditions of flight."9 A type certificate issued for a Part 23 aircraft would presumptively mean the FAA determined that the aircraft complied with these standards at the time the design was certified. However, would the type certificate preclude all product liability claims based on a defective stall warning system? What if the certification was actually wrong and the system did not comply with the standard when the FAA already said that it did? Can this type of claim actually be litigated or is it preempted?
> Additionally, what if the claimed defect was that the stall warning system did not provide a warning when operated outside certification limits such as weight, speed, or center of gravity? Are these conditions outside the "expected conditions of flight" and therefore no federal standard exists? The FAA's Letter Brief to the Third Circuit in Sikkelee does not provide clear answers in the context of product liability litigation. Courts will continue to struggle with deciding these difficult issues in the future.
>What happens when there are 100 million
Bad shit is going to happen way before that. My expectation is that by the time there are even 10 million, all the major technical and regulatory issues will have been worked out. But of course the bad shit could totally derail this industry.
Disagree with almost all of this. Liability should be with the owner of the car, like it mostly is today. They are the ones buying it and operating it on the public roads. They should have insurance to cover this liability.
I'm worried about the following: Hapless Joe buys a nice, well-reviewed, good-looking, performant self-driving car. He enjoys it very much for a year and goes in for all the required maintenance checks. Then one day, as he is taking a nap on his usual daily commute, the car swerves onto the sidewalk and runs over my wife. It turns out there is a very rare race condition in the path-finding algorithm, which makes it do that with astronomically low probability.

What I don't want: I don't want Hapless Joe's life to be ruined; he did nothing wrong. I also don't want my wife to become just a statistic.

What I want: a crack team of industry veterans gets to work, both engineers from the self-driving car company and independent experts. Since accidents of this kind are so rare, they have ample resources to overturn every rock, develop new investigation techniques if they must, and nail down every node in the causality chain. Then they disseminate their findings and make all cars on the road, as well as the industry's best practices, that much safer.
Is this how you expect the owner's liability to work out? Because then I agree with you. If you want to punish Joe, take away his savings and possibly his home, but I honestly don't see the point in that.
The NTSB is doing that right now.[1] They're looking at Tesla's first fatal crash with the level of detail they apply to air crash investigations. This takes months.
Incidentally, in aviation and aerospace, everybody reads crash reports. Pilots study them in training. Engineers read them at work. Flying magazine publishes monthly summaries in their "Aftermath" column. The safety culture is very different from software. In commercial aviation, just about all the single points of failure have already been dealt with. It's a design criterion of aviation that there must be a way to recover from single failures. When you're one single failure away from real trouble, it's time to land. When you read a crash report today, there are two or three things that went wrong, not just one. That's what a safety culture looks like. Self-driving cars may have to get there.
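The "no single point of failure" criterion described above is commonly met in avionics with redundancy plus voting. As an illustrative sketch only (not any manufacturer's actual code), a minimal triple-modular-redundancy (TMR) voter shows the idea: one wildly failing sensor cannot take the system down, and a loss of quorum is itself a detected condition the system must still handle.

```python
def tmr_vote(readings, tolerance=0.5):
    """Return the value agreed on by at least two of three redundant
    sensors, or None if no two readings agree within `tolerance`.

    A None result means no quorum: the system must escalate, e.g.
    hand off to a fallback, rather than trust a single sensor.
    """
    a, b, c = readings
    if abs(a - b) <= tolerance:
        return (a + b) / 2
    if abs(a - c) <= tolerance:
        return (a + c) / 2
    if abs(b - c) <= tolerance:
        return (b + c) / 2
    return None  # no two sensors agree: a detected, recoverable fault

# One sensor failing wildly does not take the system down:
print(tmr_vote([100.1, 99.9, -3.0]))  # agreed value near 100
print(tmr_vote([100.1, 12.0, -3.0]))  # None: multiple failures, no quorum
```

This is the same pattern behind the "two or three things went wrong" observation in crash reports: a well-designed system only fails when redundancy itself is exhausted.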
Exactly. Consumers have no insight into the design process, they have no ability to affect the operation of the autonomous mode, it's insane to assign them liability.
Once the car comes out of autonomous mode we can assign them some liability, depending on the situation.
In your hypothetical, I have a hard time giving an example from current consumer products where it isn't the manufacturer's fault. An approved device, used as directed and properly maintained, causes a fatality: with what consumer device today wouldn't that be the fault of the device maker/seller?
At the same time, I'm hard put to think of products/services used by consumers--outside of medicine perhaps--where it's the norm that sometimes "shit happens" because statistics and it's not considered reasonable to sue the manufacturer (though we often still do).
Interesting times, especially given that the facts (and the degree of automation) typically won't be nearly this clear-cut over the next few decades.
Given that for autonomous vehicles the human will be a passenger and not participate in vehicle control, it makes sense that it is the manufacturer of the autonomous vehicle who is liable for faults caused by its own operation. Take an example where you signal your car to come pick you up at a location. If the car were to get into an accident on its way to you, and a software bug in the vehicle implementation is the cause of the accident, it would make sense that the manufacturer is liable for this accident.
Volvo is one manufacturer that has stated clearly that it will assume liability for the operation of its autonomous cars:
Does it really matter in practice though? If someone else is driving your car without you in the vehicle you can still be liable. Car insurance for autonomous vehicles will be dramatically cheaper and will still be required (someone can ram into your autonomous car as Google has proven many times!) which can surely cover the liability for edge cases.
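The level/responsibility split described above can be summarized in a few lines. This is a sketch of the comment's own reading of the NHTSA guidance (not official NHTSA data), with the bright line between levels 1-2 and 3-5 made explicit:

```python
# Who bears primary responsibility at each level of automation, per the
# reading above (illustrative summary, not an official NHTSA mapping).
RESPONSIBILITY = {
    1: "driver",        # typically auto-brake only
    2: "driver",        # auto-brake + lane keeping; driver must stay engaged
    3: "manufacturer",  # conditional automation (Google's state at the time)
    4: "manufacturer",  # automated driving under almost all conditions
    5: "manufacturer",  # no manual controls at all
}

def must_pay_constant_attention(level):
    """The bright line: constant driver attention is required at 1-2,
    not at 3-5."""
    return RESPONSIBILITY[level] == "driver"

print(must_pay_constant_attention(2))  # -> True
print(must_pay_constant_attention(4))  # -> False
```

Note there is no level where responsibility is genuinely shared; that gap between 2 and 3 is exactly the policy point the next comment makes.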
This is a major policy decision. Automatic driving will not be reached incrementally. Either the vehicle enforces hands-on-wheel and paying attention, or the automation has to be good enough that the driver doesn't have to pay attention at all. There's a bright line now between manual and automatic. NHTSA gets it.