Five Things You Should Know About the Feds’ New Self-Driving Car Policy
By: Jesse Kirkpatrick
On September 20, 2016, the U.S. Department of Transportation (DOT) and the National Highway Traffic Safety Administration (NHTSA) issued the long-awaited Federal Automated Vehicles Policy. The policy attempts to balance safety with innovation and “is rooted in DOT's view that automated vehicles hold enormous potential benefits for safety, mobility and sustainability.” Here are five things you should know about the policy.
1. It’s important the federal government is involved.
Many areas of law, policy, and regulation concerning automobiles are left to the states (these include vehicle licensing and registration, traffic laws and enforcement, and insurance), and until now it was unclear what role the federal government would have with respect to autonomous vehicles. The new federal guidelines preserve these state responsibilities but allow the federal government a larger role in such critical areas as safety innovation, standard-setting, and compliance; investigating defects and issuing recalls; and communicating with and educating the public on autonomous vehicles in general. While self-driving cars were once considered something far off in the future, it’s now clear they are coming quickly. Tesla’s “beta” version of Autopilot is in full swing, Uber just announced that it will be using self-driving cars in Pittsburgh, and industry leaders like BMW, Volkswagen, Ford, and Google are racing to bring these cars to market. The federal government’s policy guidance couldn’t have come at a better time.
2. The federal policy includes a 15-point safety assessment.
The policy’s 15-point safety assessment outlines several important safety objectives that automakers should meet to “achieve a robust design,” while allowing various strategies and methodologies for how these are achieved. These safety goals include sensing, detecting, and responding to objects and events; protecting occupants in the event of a crash; recording and sharing crash data; safeguarding privacy and cybersecurity; and addressing ethical considerations.
3. These are guidelines, not regulations.
The federal policy is intended to provide guidance to states; it does not provide federal regulation of autonomous vehicles (this includes the 15-point safety assessment). This approach balances the federal government’s desire to protect the public with states’ traditional role in automotive regulation. Importantly, the federal policy is intended to be only a start to what NHTSA envisions as an ongoing process of autonomous vehicle policy-making.
4. Innovation is baked into the guidelines.
The federal policy is sensitive to not unduly stifling innovation. President Obama underscored this point in his recent op-ed: “Regulation can go too far. Government sometimes gets it wrong when it comes to rapidly changing technologies. That’s why this new policy is flexible and designed to evolve with new advances.” This flexibility is intended to facilitate industry’s technological innovation while ensuring the federal government’s involvement in safety.
5. Ethical considerations are front and center.
DOT and NHTSA consider ethical considerations important enough to include them in the 15-point safety assessment. The fact that self-driving cars raise ethical issues is not novel. I have written before about crash algorithms, how these cars will be programmed in life-or-death situations, and how consumers will be informed of these programming choices. What is notable is that ethical considerations are a core component of the federal policy’s proposed safety assessment. The federal policy calls for manufacturers and “other entities” (e.g., tech companies) to ensure that ethical decisions, such as when and into what a self-driving car should crash, are “made consciously and intentionally.” This is a clear call that those in the business of self-driving cars need to take seriously the programming of ethical decisions.