When Isaac Asimov first introduced his now-ubiquitous Three Laws of Robotics in the early 1940s, science-fiction stories were filled with flying saucers, flying cars, and robots seemingly bent on the destruction of their creators or their owners – either by malicious intent, or by the unintended consequences of unforeseen events. After all, some human has to program each robot to deal with every eventuality it may encounter, and as humans are always prone to oversight, negligence, and general failure, so their creations are as well.
Fast-forward seventy-five years, and humans are still struggling with the basic ethics and potential effects of artificial intelligence. Autonomous, self-regulating robots are no longer the province of speculative fiction, but are here among us now, replacing humans in the physical workplace and running increasingly large parts of our modern infrastructure – even as we have yet to establish basic universal ground rules governing the behavior, capabilities, and reach of our computerized companions.
While the jury is still out on flying saucers, and it doesn’t appear that flying cars or robot butlers will be a practical reality any time soon, exponential advances in computing power and speed have almost unexpectedly ushered in the era of the robot chauffeur, in the form of autonomous, self-driving cars. Led by Google and Tesla, a multi-industry effort to make autonomous vehicles a reality has distilled almost 100 years of research into a practical and workable system in just a few short years – a pace that has caught legislators and regulators unprepared. It also remains to be seen how this will affect cars at the end of their lives and the car-recycling process.
In an effort to guide and streamline emerging legislation on autonomous vehicles, industry leaders Google, Volvo, Ford Motor Company, Uber, and Lyft, which have all been conducting independent autonomous-vehicle research, have joined with other industry notables to form an industry-advocacy group called the Self-Driving Coalition for Safer Streets.
The group’s spokesperson is David Strickland, an auto-industry lobbyist and former head of the U.S. National Highway Traffic Safety Administration. Strickland, on behalf of the Coalition, has advocated for a set of federal guidelines, created in cooperation with the various states and industry entities, so that the emerging autonomous-vehicle industry and its customers can move forward under one set of clear rules instead of a convoluted patchwork of regulations that varies from state to state and country to country.
The Self-Driving Coalition for Safer Streets would eventually like to see legislation that allows completely autonomous vehicles to operate without human interaction, or even a human presence. While many states have passed legislation allowing driverless vehicles on public roads for testing purposes, those governments have also expressed a reluctance to permit fully autonomous vehicles, now or in the future, preferring that all vehicles retain controls for human operation in case of emergencies or other needs.
The U.S. National Highway Traffic Safety Administration is currently conducting public forums on guidelines for self-driving cars, and plans to release those guidelines in 2016. The agency currently specifies five levels of vehicle automation, ranging from Level Zero, which is complete human control, to Level Four, which is completely autonomous control, whether a human is present or not. The most common level in modern cars is Level Two, or Combined Function Automation, which allows the driver to temporarily relinquish control of the vehicle but requires that he or she remain prepared to retake control at any time with little notice.
Level Four is the Holy Grail of the autonomous-vehicle industry, and is the ultimate goal of the members of the Self-Driving Coalition for Safer Streets. However, it is also the most potentially dangerous level, both in terms of physical accidents and injuries and in terms of legal liability. It is also the level most potentially disruptive to society, possibly resulting in lost jobs – and with them, lost wages for individuals and lost tax revenue for local governments.
The NHTSA has stated that an artificial intelligence could be considered a vehicle’s “legal operator” under federal law, but has expressed concern over whether the technology is developed enough to demonstrate the “sophistication and safety” required for use on public roads for “general driving purposes”. Currently there is not sufficient data to compare the sophistication and safety of self-driving vehicles to those of current human driving practices, which resulted in more than 35,000 deaths and 2.3 million injuries in the U.S. alone in 2014.
The major automobile and software companies in the Self-Driving Coalition for Safer Streets have each, in turn, expressed a desire to create an efficient, convenient, and above all safe technology in their pursuit of the ultimate self-driving vehicle. We can hope their combined efforts will bear fruit in a comprehensive legislative and technological solution that, as Asimov put it, will “not injure a human being or, through inaction, allow a human being to come to harm.”