Responsible Use of AI in Military Systems [Hardback]

  • Format: Hardback, 374 pages, height x width: 234x156 mm, weight: 600 g, 11 Tables, black and white; 11 Line drawings, black and white; 11 Illustrations, black and white
  • Series: Chapman & Hall/CRC Artificial Intelligence and Robotics Series
  • Publication date: 26-Apr-2024
  • Publisher: Chapman & Hall/CRC
  • ISBN-10: 1032524308
  • ISBN-13: 9781032524306

The book provides the reader with a broad overview of all relevant aspects involved with the responsible development, deployment and use of AI in military systems. It stresses both the advantages of AI as well as the potential downsides of including AI in military systems.



Artificial Intelligence (AI) is widely used in society today. The (mis)use of biased data sets in machine learning applications is well known, resulting in discrimination against and exclusion of citizens. Another example is the use of non-transparent algorithms that cannot explain their outcomes to users, with the result that the AI is not trusted and therefore goes unused even when it would be beneficial.

Responsible Use of AI in Military Systems lays out what is required to develop and use AI in military systems in a responsible manner. Current developments in the emerging field of Responsible AI as applied to military systems in general (not merely weapons systems) are discussed. The book takes a broad and transdisciplinary scope by including contributions from the fields of philosophy, law, human factors, AI, systems engineering, and policy development.

Divided into five sections, Section I covers various practical models and approaches to implementing military AI responsibly; Section II focuses on liability and accountability of individuals and states; Section III deals with human control in human-AI military teams; Section IV addresses policy aspects such as multilateral security negotiations; and Section V focuses on ‘autonomy’ and ‘meaningful human control’ in weapons systems.

Key Features:

  • Takes a broad transdisciplinary approach to responsible AI
  • Examines military systems in the broad sense of the word
  • Focuses on the practical development and use of responsible AI
  • Presents a coherent set of chapters, as all authors spent two days discussing each other’s work

    Preface. Acknowledgements. Editor. Contributors.
    1 Introduction to Responsible Use of AI in Military Systems.
    SECTION I Implementing Military AI Responsibly: Models and Approaches.
    2 A Socio-Technical Feedback Loop for Responsible Military AI Life-Cycles from Governance to Operation.
    3 How Can Responsible AI Be Implemented?
    4 A Qualitative Risk Evaluation Model for AI-Enabled Military Systems.
    5 Applying Responsible AI Principles into Military AI Products and Services: A Practical Approach.
    6 Unreliable AIs for the Military.
    SECTION II Liability and Accountability of Individuals and States.
    7 Methods to Mitigate Risks Associated with the Use of AI in the Military Domain.
    8 Killer Pays: State Liability for the Use of Autonomous Weapons Systems in the Battlespace.
    9 Military AI and Accountability of Individuals and States for War Crimes in the Ukraine.
    10 Scapegoats!: Assessing the Liability of Programmers and Designers for Autonomous Weapons Systems.
    SECTION III Human Control in Human-AI Military Teams.
    11 Rethinking Meaningful Human Control.
    12 AlphaGo’s Move 37 and Its Implications for AI-Supported Military Decision-Making.
    13 Bad, Mad, and Cooked: Moral Responsibility for Civilian Harms in Human-AI Military Teams.
    14 Neglect Tolerance as a Measure for Responsible Human Delegation.
    SECTION IV Policy Aspects.
    15 Strategic Interactions: The Economic Complements of AI and the Political Context of War.
    16 Promoting Responsible State Behavior on the Use of AI in the Military Domain: Lessons.
    SECTION V Bounded Autonomy.
    17 Bounded Autonomy.
    Index.

    Jan Maarten Schraagen is Principal Scientist at TNO, The Netherlands. His research interests include human-autonomy teaming and responsible AI. He is main editor of Cognitive Task Analysis (2000) and Naturalistic Decision Making and Macrocognition (2008) and co-editor of the Oxford Handbook of Expertise (2020). He is editor in chief of the Journal of Cognitive Engineering and Decision Making. Dr. Schraagen holds a PhD in Cognitive Psychology from the University of Amsterdam, The Netherlands.