
Moral Intuition: From the Human Mind to Artificial Agents [Hardback]

  • Format: Hardback, 226 pages, height x width: 235x155 mm, 1 color illustration; 13 black and white illustrations
  • Series: Advances in Neuroethics
  • Publication date: 24-Apr-2026
  • Publisher: Springer Nature Switzerland AG
  • ISBN-10: 3032201160
  • ISBN-13: 9783032201164
  • Hardback
  • Price: 116.69 €*
  • * the price is final, i.e. no further discounts apply
  • Regular price: 155.59 €
  • You save 25%
  • Delivery from the publisher takes approximately 3-4 weeks
In the tradition of moral philosophy, long dominated by a rationalist paradigm, the idea of moral intuition has often been a source of embarrassment. How can the mind form a moral judgment within seconds, without any apparent reasoning?



In the spirit of neuroethics, this book demystifies moral intuition by examining the mental and neural processes that generate such automatic evaluations. Addressed to specialists in philosophy, psychology, and AI ethics, the book systematically investigates three questions: how moral intuitions work, how they can improve, and how they can be implemented in artificial agents.



Challenging the dominant default-interventionist view of moral reasoning, the first part argues that moral intuitions play a dual role: they detect harm and help in the environment, and they metacognitively regulate the deployment of cognitive resources, triggering reflection when intuitive outputs are uncertain or conflicting. Building on this foundation, the book offers a dyadic classification of the cognitive biases that shape moral intuitions and critically assesses strategies for mitigating them, including reasoning, expertise, and nudging. The final part extends this moral-psychological framework to artificial intelligence, arguing that the implementation of moral intuitions in artificial agents is both a feasible and a philosophically defensible goal, compatible with the functional capacities of contemporary AI systems.



In doing so, the book sets a new research agenda for understanding, improving, and implementing moral intuitions in both human and artificial agents.
Contents:
  • Introduction
  • Part 1: Moral intuition: Psychological foundations
  • 1. The automaticity of intuitions
  • 2. The strength of intuitions: A metacognitive account
  • 3. The content of moral intuitions: Dyadic harm and help
  • 4. Moral reasoning: The intuition-reflection interplay
  • Part 2: Toward better intuitions: Standards, biases, and strategies
  • 5. The progress of moral intuitions: A dyadic theory
  • 6. Challenges to moral intuitions' progress: The problem of biases
  • 7. Debiasing strategies: The direct path
  • 8. Debiasing strategies: Indirect and hybrid approaches
  • Part 3: Moral intuitions and artificial intelligence
  • 9. Artificial agency and the alignment problem
  • 10. Artificial moral agents
  • 11. Toward better socio-digital environments
  • Conclusion
  • Bibliography
Dario Cecchini is a Postdoctoral Research Scholar at NC State University. He joined the NeuroComputational Ethics Research Group in November 2022, funded by the National Science Foundation project Virtual Reality Simulations of Moral Decision Making for Autonomous Vehicles. Before arriving in Raleigh, Dr. Cecchini obtained a Ph.D. in Moral Philosophy at the University of Genoa (Italy) in March 2022. He studied philosophy for his BA and MA degrees in Florence and Pisa.



With a solid background in metaethics and moral psychology, Dr. Cecchini's research interests have recently expanded into the field of applied ethics, focusing particularly on the ethics of artificial intelligence. His current projects concern moral judgment, the alignment problem for AI, and the ethics of automated vehicles and carebots.