Another question is whether existing functional safety standards can provide assurances of safety. Such discussions often raise ISO 26262, an international standard for the functional safety of electrical and electronic systems in production automobiles. But standards such as this were not designed with autonomous vehicles in mind, and applying them to autonomous vehicles is difficult for a variety of reasons, including but not limited to the following:
• Such standards are intended for vehicles in which the human driver can ultimately correct for errors, which is inapplicable to fully autonomous vehicles that require no human intervention.
• The standards work well when system inputs and outputs can be well specified, but this is probably not possible with large amounts of diverse, high-speed data coming from vehicle sensors.
• It is difficult to apply formal methods to machine learning techniques, which are the cornerstone of rapid improvement in autonomous driving but which often result in decision rules that are difficult for humans to interpret.

This is not to suggest that functional safety standards cannot help; rather, further work is needed to adapt them to the unique challenges that autonomous vehicles pose.

In sum, the transportation industry and policymakers do not yet have a method that is both practical and sound for testing autonomous vehicle safety. The question, “How safe is an autonomous vehicle?” may be unanswerable prior to widespread use. This does not mean that their use should be prohibited; the technology has too much potential to save lives. Instead, it suggests that the race to develop autonomous vehicles needs a parallel race to develop methods for demonstrating and managing their safety.

There Is No Consensus on How Safe Autonomous Vehicles Should Be

The issue of how safe autonomous vehicles should be is worth considering, even if their degree of safety cannot yet be fully proven. Some will insist that anything short of totally eliminating risk is an unacceptable safety compromise. The argument is that it is acceptable if humans make mistakes, but not if machines do. But, again, waiting for autonomous vehicles to operate nearly perfectly misses opportunities to save lives because it means the needless perpetuation of the risks posed by human drivers.
It seems sensible that autonomous vehicles should be allowed on America’s roads once they are judged to be safer than the average human driver, allowing more lives to be saved sooner while still ensuring that autonomous vehicles do not create new risks. An argument can even be made that autonomous vehicles could be allowed when they are not yet as safe as average human drivers, if developers can use early deployment as a way to rapidly improve vehicle safety. The vehicles could become at least as good as the average human sooner than they would otherwise—and thus save more lives over the long term. Moreover, there might be significant non-safety benefits, such as allowing people to do more-productive things while they are in a vehicle, that could outweigh safety drawbacks. The lack of consensus on this point is not a failure but rather a genuine expression of Americans’ different values and beliefs when it comes to humans versus machines. But it complicates the challenge of developing safety benchmarks for the technology.