
E-book: Governing Lethal Behavior in Autonomous Robots

3.00/5 (18 ratings by Goodreads)
Ron Arkin (Georgia Institute of Technology, Atlanta, USA)
  • Pages: 256
  • Pub. Date: 27-May-2009
  • Publisher: Chapman & Hall/CRC
  • ISBN-13: 9781420085952
  • Format: PDF+DRM
  • Price: 57,71 €*
  • * The price is final, i.e. no additional discount will apply.
  • This e-book is for personal use only. E-books are non-refundable.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means that you need to install free software in order to unlock and read it. To read this e-book you have to create an Adobe ID. The e-book can be read and downloaded on up to 6 devices (single user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you'll need to install this free app: PocketBook Reader (iOS / Android).

    To download and read this e-book on a PC or Mac, you need Adobe Digital Editions, a free app developed specifically for e-books (it is not the same as Adobe Reader, which you probably already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

Expounding on the results of the author's work with the US Army Research Office, DARPA, the Office of Naval Research, and various defense industry contractors, Governing Lethal Behavior in Autonomous Robots explores how to produce an "artificial conscience" in a new class of robots, humane-oids, which are robots that can potentially perform more ethically than humans on the battlefield. The author examines the philosophical basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system in autonomous robot systems, taking into account the Laws of War and Rules of Engagement.

The book presents robot architectural design recommendations for:

  • Post facto suppression of unethical behavior
  • Behavioral design that incorporates ethical constraints from the outset
  • The use of affective functions as an adaptive component in the event of unethical action
  • A mechanism that identifies and advises operators regarding their ultimate responsibility for the deployment of autonomous systems
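
As a rough illustration of the first of these ideas (post facto suppression), the sketch below shows one way a governor-style veto over proposed lethal actions might look in code. It is only a minimal sketch: the class names, fields, constraints, and thresholds are hypothetical and are not taken from the book's formalism.

# Illustrative sketch only: a post facto "ethical governor" that vetoes a
# proposed lethal action when any forbidding constraint is violated.
# All names and thresholds here are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    """A behavior proposed by the robot's tactical controller."""
    name: str
    lethal: bool
    target_discriminated: bool   # target positively identified as a combatant
    expected_collateral: float   # estimated collateral damage, 0.0 = none

@dataclass
class Constraint:
    """A forbidding constraint (e.g., derived from the Laws of War or ROE)."""
    description: str
    violated_by: Callable[[Action], bool]

class EthicalGovernor:
    """Suppresses (vetoes) unethical lethal actions after they are generated."""
    def __init__(self, constraints: List[Constraint]):
        self.constraints = constraints

    def permit(self, action: Action) -> bool:
        if not action.lethal:
            return True  # only lethal behavior is governed here
        violations = [c.description for c in self.constraints
                      if c.violated_by(action)]
        if violations:
            print(f"Suppressed '{action.name}': {violations}")
            return False
        return True

# Hypothetical constraints for illustration.
constraints = [
    Constraint("target not discriminated",
               lambda a: not a.target_discriminated),
    Constraint("disproportionate collateral damage",
               lambda a: a.expected_collateral > 0.2),
]

governor = EthicalGovernor(constraints)
governor.permit(Action("engage sniper", lethal=True,
                       target_discriminated=True, expected_collateral=0.05))  # permitted
governor.permit(Action("engage muster", lethal=True,
                       target_discriminated=True, expected_collateral=0.6))   # vetoed

In the architecture the book describes, the forbidding and obligating constraints are derived from the Laws of War and Rules of Engagement rather than ad hoc predicates; the sketch is meant only to convey the idea of checking a generated behavior against ethical constraints before it is allowed to execute.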

It also examines why human soldiers sometimes fail to make ethical decisions in battle; discusses the opinions of the public, researchers, policymakers, and military personnel on the use of lethality by autonomous systems; provides examples that illustrate the ethical use of force by autonomous systems; and includes the relevant Laws of War.

Helping ensure that warfare is conducted justly with the advent of autonomous robots, this book shows that the first steps toward creating robots that not only conform to international law but also outperform human soldiers in their ethical capacity are within reach. It supplies the motivation, philosophy, formalisms, representational requirements, architectural design criteria, recommendations, and test scenarios needed to design and construct an autonomous robotic system capable of using lethal force ethically.

Ron Arkin was quoted in a November 2010 New York Times article about robots in the military.

Reviews

"The book addresses an important issue of intelligent robotics. This book is very important for roboticists and policy makers as it addresses most of the ethical problems faced by the developers of autonomous military robots. an important book on the subject of ethics and lethal robots. [ The author] provides a clear presentation of the motivation and justification for implanting responsible ethical decision making in autonomous lethal robots, and then suggests an architecture for doing it. I highly recommend this book to the general public as well as specialists." Industrial Robot, Vol. 37, Issue 2, 2010

"My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can. Thats the case I make." Dr. Arkin, quoted in The New York Times, November 24, 2008

"Ron Arkins Governing Lethal Behavior in Autonomous Robots will be an instant classic on the subject of ethics and lethal robots. He provides a clear presentation of the motivation and justification for implanting responsible ethical decision-making in autonomous lethal robots and then suggests an architecture for doing it! As the number of autonomous military robots rapidly increases, this timely book provides a basis to discuss our difficult options. Can the use of autonomous lethal robots be avoided, and, if not, how should we constrain them? I highly recommend this book to the general public as well as specialists." James H. Moor, Professor of Philosophy, Dartmouth College, Hanover, New Hampshire, USA

"Governing Lethal Behavior in Autonomous Robots represents the most serious attempt to date to set out how to build an ethical robot. An eminent engineer and roboticist, who has spent several years in conversation with philosophers, lawyers, and military ethicists, Professor Arkin is uniquely placed to pursue this project. This timely book outlines and directly addresses the ethical dilemmas posed by the development of autonomous military robots, which will confront roboticists and military policy makers in the future. Arkins thesis, that appropriately designed military robots will be better able to avoid civilian casualties than existing human warfighters and might therefore make future wars more ethical, is likely to be the subject of intense debate and controversy for years to come. Deftly interweaving discussion of the just war tradition, the law of war, military robotics, and computer systems architecture, this bold and provocative work will be of interest to engineers and ethicists alike." Robert Sparrow, School of Philosophy and Bioethics, Monash University, Australia

"This is a must read for anyone concerned about the ethical problems posed by the current development of autonomous military robots. While Arkin and I disagree over the value of providing a robot with an artificial conscience, we strongly agree that the deployment of these new weapons needs urgent international discussion." Noel Sharkey, Professor of Artificial Intelligence and Robotics and Professor of Public Engagement, University of Sheffield, UK

Contents

Preface xi
Acknowledgments xix
Introduction 1(6)
Trends toward Lethality 7(22)
Weaponized Unmanned Ground Vehicles 10(11)
Weaponized Unmanned Aerial Vehicles 21(5)
Prospects 26(3)
Human Failings in the Battlefield 29(8)
Related Philosophical Thought 37(12)
What People Think: Opinions on Lethal Autonomous Systems 49(8)
Survey Background 50(1)
Response 51(1)
Comparative Results 52(3)
Discussion 55(2)
Formalization for Ethical Control 57(12)
Formal Methods for Describing Behavior 58(4)
Range of Responses: R 58(1)
The Stimulus Domain: S 58(2)
The Behavioral Mapping: β 60(2)
Ethical Behavior 62(7)
Specific Issues for Lethality: What to Represent 69(24)
What Is Required 70(1)
Laws of War 71(10)
Rules of Engagement 81(12)
Standing Rules of Engagement 82(2)
Rules of Engagement (Non-SROE) 84(2)
Rules for the Use of Force 86(5)
ROE for Peace Enforcement Missions 91(2)
Representational Choices: How to Represent Ethics in a Lethal Robot 93(22)
Underpinnings 95(4)
Generalism: Reasoning from Moral Principles 99(5)
Deontic Logic 99(3)
Utilitarian Methods 102(1)
Kantian Rule-Based Methods 103(1)
Particularism: Case-Based Reasoning 104(4)
Ethical Decision Making 108(7)
Architectural Considerations for Governing Lethality 115(10)
Architectural Requirements 119(6)
Design Options 125(18)
Ethical Governor 127(6)
Ethical Behavioral Control 133(5)
Ethical Adaptor 138(5)
After-Action Reflection 138(2)
Affective Restriction of Behavior 140(3)
Responsibility Advisor 143(12)
Command Authorization for a Mission Involving Autonomous Lethal Force 146(2)
Design for Mission Command Authorization 148(1)
The Use of Ethical Overrides 149(3)
Design for Overriding Ethical Control 152(3)
Example Scenarios for the Ethical Use of Force 155(22)
Taliban Muster in Cemetery 157(5)
"Apache Rules the Night" 162(5)
Korean Demilitarized Zone 167(4)
Urban Sniper 171(6)
A Prototype Implementation 177(34)
Infrastructure 177(1)
A Prototype Implementation of the Ethical Governor 178(18)
Ethical Constraints 179(3)
Evidential Reasoning 182(1)
Constraint Application 182(3)
Proportionality and Battlefield Carnage 185(3)
Demonstration Scenario Overview 188(2)
Scenario 1: Suppressing Unethical Behavior 190(2)
Scenario 2: Maintaining Ethical Behavior While Minimizing Collateral Damage 192(4)
Implementing the Responsibility Advisor 196(13)
Establishing Responsibility When Tasking an Autonomous System Capable of Lethal Force 196(6)
Run-Time Responsibility Advising and Operator Overrides 202(1)
Continuous Presentation of the Status of the Ethical Governor 203(2)
Negative Overrides: Denying Permission to Fire in the Presence of Obligating Constraints 205(1)
Positive Overrides: Granting Permission to Fire in the Presence of Forbidding Ethical Constraints 206(3)
Summary 209(2)
Epilogue 211(2)
References 213(12)
Appendix A • Relevant Laws of War 225(18)
Appendix B • Acronyms 243(2)
Appendix C • Notation 245(2)
Index 247