Same way you certify a person. You train them, then you test them in simulated environments.
It won't be a fixed certification procedure: one car manufacturer might hire an independent auditor to check over every line of code. The car manufacturer that stupidly uses a neural network in their emissions control system will have to hire an independent auditor who is comfortable putting their stamp of approval on the neural network.
Actually it's way easier to certify a neural network than a human: you can simulate millions of situations. You could even certify the code that ran the training environment line by line.
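For a rough, purely illustrative sketch of what "simulate millions of situations" might look like in code (every name here, scenario_generator, scenario.run, outcome.safe, is a made-up placeholder, not a real simulator or autopilot API):

    import random

    def certify_by_simulation(autopilot_model, scenario_generator, n_trials=1_000_000):
        """Drive the model through many randomised simulated scenarios and count failures.

        scenario_generator, scenario.run and outcome.safe are placeholders for
        whatever simulator and reporting format the certifier actually requires.
        """
        failures = 0
        for _ in range(n_trials):
            scenario = scenario_generator(seed=random.getrandbits(32))  # e.g. traffic, weather, sensor noise
            outcome = scenario.run(autopilot_model)                     # simulate the drive end to end
            if not outcome.safe:
                failures += 1
        return failures, failures / n_trials  # raw count and observed failure rate for the report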
As long as the government (or certifier) is happy with whatever method was used to create the report, a certificate is issued.
You need to certify every possible output for every possible combination of input.
It sounds impossible to me.
Even if it isn't impossible, at that point you could just replace your NN with an algorithm that has the same response mapping.
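A quick back-of-envelope calculation (not tied to any real sensor spec) shows why enumerating every input, or tabulating the NN's full response mapping, is hopeless even for a single tiny camera frame:

    # Back-of-envelope: distinct inputs for one tiny 64x64, 8-bit greyscale camera
    # frame. Real autopilot inputs (multiple HD cameras, radar, speed, map data, ...)
    # are unimaginably larger again.
    pixels = 64 * 64
    levels = 256
    combinations = levels ** pixels                             # exact integer arithmetic in Python
    print(f"~10^{len(str(combinations)) - 1} possible frames")  # roughly 10^9864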
If you don't certify everything, then I can't really see how an open source certified autopilot would have saved the guy in the Tesla.
Certification is not a 100% solution. We can never be 100% sure that a neural network will produce the right result, just like we can never be 100% sure that hand-written software doesn't have bugs; certification and auditing can't give that kind of guarantee either.
Even with the most stringent software development and testing procedures in the world, NASA is unable to produce bug-free software for their spacecraft (which is why they have procedures to recover from bugs and patch software).
All the certification is saying is: no bugs were found during testing, and based on the amount of testing, the quality of the code and the testing procedures, the probability of a fatal bug is x% (where x is a really low number with lots of zeros), and we deem x% to be an acceptable risk. In the case of autopilots, we only really need x% to be below the risk posed by a regular human driver.
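As a toy illustration of where a number like x could come from (the figures are made up, and a real certifier would use far more than one statistical bound): if you run N independent test scenarios and observe zero failures, the classic "rule of three" gives an approximate 95% upper confidence bound on the per-scenario failure probability of about 3/N.

    import math

    def failure_rate_upper_bound(n_tests, confidence=0.95):
        """Approximate upper confidence bound on the per-scenario failure
        probability after n_tests independent tests with zero observed failures.

        Derivation: the largest p for which P(zero failures) = (1 - p)**n_tests
        still exceeds 1 - confidence, using (1 - p)**n ~ exp(-n * p).
        """
        return -math.log(1.0 - confidence) / n_tests

    # e.g. 10 million clean simulated scenarios -> bound of ~3e-7 per scenario
    print(failure_rate_upper_bound(10_000_000))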
No software change could have saved that driver. Tesla has a class 2 autopilot (adaptive cruise control with lane following). It's not within the job description of an autopilot like that to avoid all possible crashes; it's the job of the driver to be alert and ready to take control at all times. Tesla's autopilot doesn't even use neural networks outside of the camera sensor.
To be a class 3 autopilot that would be expected to deal with this sort of incident, Tesla would need far more sensors and much smarter software, so the car would actually have a hope of detecting the faulty data from one sensor and taking the correct action (the truck was only within view of the camera sensor; the radar is calibrated to only detect things at bumper height).
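To make "detecting the faulty data from one sensor" a bit more concrete (this is nothing like Tesla's actual code, just a toy vote over independent detections): with only two sensor types, a camera-only sighting has no third opinion to back it up, so even a simple majority scheme still misses the truck.

    from collections import Counter

    def fuse_obstacle_detections(camera, radar, lidar):
        """Toy 2-out-of-3 vote over independent 'obstacle ahead' detections.

        Purely illustrative: a real class 3 stack fuses tracked objects with
        confidences and physics models, not three booleans.
        """
        votes = Counter([camera, radar, lidar])
        obstacle_ahead = votes[True] >= 2   # need agreement from at least two sensor types
        disagreement = len(votes) > 1       # any split vote is worth logging / alerting on
        return obstacle_ahead, disagreement

    # The incident described above: only the camera sees the truck, the bumper-height
    # radar sees nothing, and there is no third sensor class to break the tie.
    print(fuse_obstacle_detections(camera=True, radar=False, lidar=False))  # -> (False, True)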