E-book: Introduction to Ethics in Robotics and AI

  • Format: PDF+DRM
  • Series: SpringerBriefs in Ethics
  • Publication date: 11-Aug-2020
  • Publisher: Springer Nature Switzerland AG
  • Language: English
  • ISBN-13: 9783030511104
  • Price: 4.08 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you need to install dedicated software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorised with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install the free PocketBook Reader app (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is probably already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

This open access book introduces the reader to the foundations of AI and ethics. It discusses issues of trust, responsibility, liability, privacy and risk, focusing on the interaction between people and the AI systems and robotics they use. Designed to be accessible to a broad audience, the book requires no prior technical, legal or philosophical expertise. Throughout, the authors use examples to illustrate the issues at hand and conclude with a discussion of the application areas of AI and robotics, in particular autonomous vehicles, autonomous weapon systems and biased algorithms. A list of questions and further readings is also included for students who wish to explore the topic further.

1 About the Book  1(4)
1.1 Authors  1(1)
1.2 Structure of the Book  2(3)
2 What Is AI?  5(12)
2.1 Introduction to AI  7(4)
2.1.1 The Turing Test  9(1)
2.1.2 Strong and Weak AI  10(1)
2.1.3 Types of AI Systems  10(1)
2.2 What Is Machine Learning?  11(1)
2.3 What Is a Robot?  12(1)
2.3.1 Sense-Plan-Act  12(1)
2.3.2 System Integration. Necessary but Difficult  13(1)
2.4 What Is Hard for AI  13(1)
2.5 Science and Fiction of AI  14(3)
3 What Is Ethics?  17(10)
3.1 Descriptive Ethics  17(2)
3.2 Normative Ethics  19(1)
3.2.1 Deontological Ethics  19(1)
3.2.2 Consequentialist Ethics  20(1)
3.2.3 Virtue Ethics  20(1)
3.3 Meta-ethics  20(1)
3.4 Applied Ethics  21(1)
3.5 Relationship Between Ethics and Law  22(1)
3.6 Machine Ethics  22(5)
3.6.1 Machine Ethics Examples  23(2)
3.6.2 Moral Diversity and Testing  25(2)
4 Trust and Fairness in AI Systems  27(12)
4.1 User Acceptance and Trust  28(1)
4.2 Functional Elements of Trust  28(1)
4.3 Ethical Principles for Trustworthy and Fair AI  28(9)
4.3.1 Non-maleficence  29(1)
4.3.2 Beneficence  30(1)
4.3.3 Autonomy  30(3)
4.3.4 Justice  33(2)
4.3.5 Explicability  35(2)
4.4 Conclusion  37(2)
5 Responsibility and Liability in the Case of AI Systems  39(6)
5.1 Example 1: Crash of an Autonomous Vehicle  39(1)
5.2 Example 2: Mistargeting by an Autonomous Weapon  40(2)
5.2.1 Attribution of Responsibility and Liability  41(1)
5.2.2 Moral Responsibility Versus Liability  41(1)
5.3 Strict Liability  42(1)
5.4 Complex Liability: The Problem of Many Hands  43(1)
5.5 Consequences of Liability: Sanctions  43(2)
6 Risks in the Business of AI  45(10)
6.1 General Business Risks  46(2)
6.1.1 Functional Risk  46(1)
6.1.2 Systemic Risk  47(1)
6.1.3 Risk of Fraud  47(1)
6.1.4 Safety Risk  47(1)
6.2 Ethical Risks of AI  48(1)
6.2.1 Reputational Risk  48(1)
6.2.2 Legal Risk  48(1)
6.2.3 Environmental Risk  48(1)
6.2.4 Social Risk  49(1)
6.3 Managing Risk of AI  49(1)
6.4 Business Ethics for AI Companies  50(1)
6.5 Risks of AI to Workers  51(4)
7 Psychological Aspects of AI  55(6)
7.1 Problems of Anthropomorphisation  55(2)
7.1.1 Misplaced Feelings Towards AI  56(1)
7.1.2 Misplaced Trust in AI  57(1)
7.2 Persuasive AI  57(1)
7.3 Unidirectional Emotional Bonding with AI  58(3)
8 Privacy Issues of AI  61(10)
8.1 What Is Privacy?  61(1)
8.2 Why AI Needs Data  62(1)
8.3 Private Data Collection and Its Dangers  63(6)
8.3.1 Persistence Surveillance  64(3)
8.3.2 Usage of Private Data for Non-intended Purposes  67(2)
8.3.3 Auto Insurance Discrimination  69(1)
8.3.4 The Chinese Social Credit System  69(1)
8.4 Future Perspectives  69(2)
9 Application Areas of AI  71(12)
9.1 Ethical Issues Related to AI Enhancement  71(2)
9.1.1 Restoration Versus Enhancement  71(1)
9.1.2 Enhancement for the Purpose of Competition  72(1)
9.2 Ethical Issues Related to Robots and Healthcare  73(1)
9.3 Robots and Telemedicine  73(3)
9.3.1 Older Adults and Social Isolation  73(1)
9.3.2 Nudging  74(1)
9.3.3 Psychological Care  75(1)
9.3.4 Exoskeletons  76(1)
9.3.5 Quality of Care  76(1)
9.4 Education  76(2)
9.4.1 AI in Educational Administrative Support  76(1)
9.4.2 Teaching  77(1)
9.4.3 Forecasting Students' Performance  78(1)
9.5 Sex Robots  78(5)
10 Autonomous Vehicles  83(10)
10.1 Levels of Autonomous Driving  83(1)
10.2 Current Situation  84(1)
10.3 Ethical Benefits of AVs  84(1)
10.4 Accidents with AVs  85(1)
10.5 Ethical Guidelines for AVs  86(1)
10.6 Ethical Questions in AVs  86(7)
10.6.1 Accountability and Liability  87(1)
10.6.2 Situations of Unavoidable Accidents  87(2)
10.6.3 Privacy Issues  89(1)
10.6.4 Security  89(1)
10.6.5 Appropriate Design of Human-Machine Interface  90(1)
10.6.6 Machine Learning  90(1)
10.6.7 Manually Overruling the System?  90(1)
10.6.8 Possible Ethical Questions in Future Scenarios  90(3)
11 Military Uses of AI  93(8)
11.1 Definitions  94(1)
11.2 The Use of Autonomous Weapons Systems  95(2)
11.2.1 Discrimination  95(1)
11.2.2 Proportionality  96(1)
11.2.3 Responsibility  96(1)
11.3 Regulations Governing an AWS  97(1)
11.4 Ethical Arguments for and Against AI for Military Purposes  97(1)
11.4.1 Arguments in Favour  97(1)
11.4.2 Arguments Against  98(1)
11.5 Conclusion  98(3)
12 Ethics in AI and Robotics: A Strategic Challenge  101(4)
12.1 The Role of Ethics  103(1)
12.2 International Cooperation  103(2)
References  105(10)
Index  115
Christoph Bartneck is an associate professor and director of postgraduate studies at the HIT Lab NZ of the University of Canterbury. He has a background in Industrial Design and Human-Computer Interaction, and his projects and studies have been published in leading journals, newspapers, and conferences. His interests lie in the fields of Human-Computer Interaction, Science and Technology Studies, and Visual Design. More specifically, he focuses on the effect of anthropomorphism on human-robot interaction. As a secondary research interest he works on bibliometric analyses, agent-based social simulations, and the critical review of scientific processes and policies. In the field of Design, Christoph investigates the history of product design, tessellations, and photography. The press regularly reports on his work, including the New Scientist, Scientific American, Popular Science, Wired, New York Times, The Times, BBC, Huffington Post, Washington Post, The Guardian, and The Economist.

Christoph Lütge holds the Chair of Business Ethics and Global Governance at Technical University of Munich (TUM). He has a background in business informatics and philosophy and has held visiting professorships in Taipei, Kyoto and Venice. He was awarded a Heisenberg Fellowship in 2007. In 2019, Lütge was appointed director of the new TUM Institute for Ethics in Artificial Intelligence. Among his major publications are: The Ethics of Competition (Elgar, 2019), Order Ethics or Moral Surplus: What Holds a Society Together? (Lexington, 2015), and the Handbook of the Philosophical Foundations of Business Ethics (Springer, 2013). He has commented on political and economic affairs in Times Higher Education, Bloomberg, Financial Times, Frankfurter Allgemeine Zeitung, La Repubblica and numerous other media. Moreover, he has been a member of the Ethics Commission on Automated and Connected Driving of the German Federal Ministry of Transport and Digital Infrastructure, as well as of the European AI Ethics initiative AI4People. He has also done consulting work for the Singapore Economic Development Board and the Canadian Transport Commission.

Alan Wagner is an assistant professor of aerospace engineering at the Pennsylvania State University and a research associate with the Rock Ethics Institute. His research interests include the development of algorithms for human-robot interaction, human-robot trust, perceptual techniques for interaction, roboethics, and machine ethics. Application areas for these interests range from the military to healthcare. His research has won several awards, including being selected for the Air Force Young Investigator Program. His research on deception has gained significant attention in the media, resulting in articles in the Wall Street Journal, New Scientist Magazine, and the journal Science, and it was described as the 13th most important invention of 2010 by Time Magazine. His research has also won awards within the human-robot interaction community, such as the best paper award at RO-MAN 2007 and the 2018 best paper award of the ACM Transactions on Interactive Intelligent Systems journal.

Sean Welsh is a graduate student in philosophy at the University of Canterbury and a member of the Ethics, Law and Society Workgroup of the AI Forum of New Zealand. Prior to embarking on his doctoral research in AI and robot ethics he worked as a software engineer for various telecommunications firms. His articles have appeared in The Conversation, the Sydney Morning Herald, the World Economic Forum, Euronews, Quillette and Jane's Intelligence Review. He is the author of Ethics and Security Automata, a research monograph on machine ethics.