Author Institution:
Commun., Culture & Technol., Georgetown Univ., Washington, DC, USA
Abstract:
The law is often called upon when a technology becomes increasingly ubiquitous and thereby presents previously isolated ethical issues to society at large. Automation is no exception: rapid developments in sensors, computing, and robotics, as well as in power, kinematics, control, telecommunications, and artificial intelligence, have brought to all sectors of society concerns that were once confined to manufacturing, mass transit, and military operations. Growing concern about the large-scale introduction of more sophisticated automation has driven discussions of regulation. The goals of policy are human-centered: they protect or promote values threatened or enhanced by the use of automation, such as safety, privacy, dignity, self-determination, and accountability. However, an analysis of five case studies of the legal treatment of automation reveals a dangerous trend: by focusing on the capabilities of the automation at a given time, past legal approaches ignore the sociotechnical nature of automation. This approach results in less protection of the values initially at issue, an irony similar to the one described by Lisanne Bainbridge in 1983. The irony is a product of neglecting the sociotechnical nature of automation: the relationship between humans and machines is interdependent; humans will always be in the loop; and reactive policy responses do not provide general ethical guidance. Like systems engineers 30 years ago, policy-makers must adjust their approaches to recognize the sociotechnical nature of man and machine in automated systems, both to avoid the ironies of automation law and to meet the goals of ethical integration.