[Image: the cover of “Moral AI” by Jana Schaich Borg, Walter Sinnott-Armstrong, and Vincent Conitzer. A red Pelican Books cover with white and black text, featuring a sliding-puzzle illustration of a partially completed face, symbolizing the complexity of moral AI.]
As someone with a background in statistical social science, I’ve always been interested in identifying and addressing bias. This interest extends to my work as a data scientist and in building machine learning (ML) products. When I saw “Moral AI: And How We Get There” by Jana Schaich Borg, Walter Sinnott-Armstrong, and Vincent Conitzer in a London bookshop, I was immediately drawn to it, especially as it seemed to offer a balanced perspective on moral AI, its importance, and how to achieve it.
The book benefits from the diversity of its authors, who come from varied fields and backgrounds, and it delivers on that promise of balance: what moral AI is, why it matters, and how we can achieve it. The authors stress that the complex questions surrounding AI can only be addressed through teamwork and multidisciplinary collaboration. I highly recommend this book to anyone in the field, whether technical or non-technical.
Additionally, from a book lover’s perspective, the Pelican edition’s aesthetic and layout make it an easy and enjoyable read.
The first chapter offers an accessible introduction to AI, defining what it is and highlighting what it currently lacks: moral reasoning. It also gives a short overview of how AI models are built, as well as the distinction between AI products and AI models.
Chapter 2 explores the question of whether AI can be safe. The book highlights potential dangers, stating that “[e]ven today’s AIs could pose an existential threat to us if, for example, they get to control nuclear warheads or discover something about physics, chemistry, or biology that we were unaware of but which could pose a threat to us” (Schaich Borg et al. 2024, 47). It also addresses common problems and potential misuses of AI.
The chapter uses illustrative case studies and raises the issue of unforeseen consequences that may arise when AI affects many people. However, the authors conclude that if we succeed in planning for these consequences and potential risks, “the resulting system may be an improvement over anything we had before” (Schaich Borg et al. 2024, 75).
Chapter 3 is dedicated to the ethical and practical aspects of privacy and how AI systems can be designed to respect it.
Chapter 4 places fair AI front and center. The book emphasizes the challenges of obtaining representative data and the multitude of ways fairness can be defined. According to the book, “[…] there are over 20 possible mathematical definitions of fairness! Crucially, these definitions cannot all be achieved at the same time as long as the base rates of crimes differ between groups” (Schaich Borg et al. 2024, 123). It highlights how bias can be hidden in the data generating process and reflected in our social structures. Even if AI is fair in some settings, it may not be in others. The book further raises questions about procedural justice, asking whether AIs can be procedurally unjust even if they are distributively and retributively fair. Drawing from my background in social sciences, I found the discussion on identifying and addressing data biases particularly insightful.
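The arithmetic behind this incompatibility is easy to see with toy numbers (my own illustration, not the book’s): suppose a classifier has identical true and false positive rates in two groups, so one fairness definition (“equalized odds”) is satisfied. If the groups’ base rates differ, another definition (equal positive predictive value) is automatically violated.

```python
def ppv(pos, neg, tpr, fpr):
    """Positive predictive value: of those flagged positive, how many truly are."""
    tp = pos * tpr  # true positives
    fp = neg * fpr  # false positives
    return tp / (tp + fp)

# Toy groups with different base rates, same error rates (equalized odds holds):
# Group A: base rate 50% (50 positives, 50 negatives out of 100)
# Group B: base rate 20% (20 positives, 80 negatives out of 100)
ppv_a = ppv(50, 50, tpr=0.8, fpr=0.2)  # 40 / (40 + 10) = 0.8
ppv_b = ppv(20, 80, tpr=0.8, fpr=0.2)  # 16 / (16 + 16) = 0.5
```

With identical error rates, a positive prediction is right 80% of the time in group A but only 50% of the time in group B, so the two fairness criteria cannot both hold once base rates differ.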
Chapter 5 explores the complex issue of legal and moral responsibility in the context of AI, using the thought experiment of self-driving cars to illustrate the challenges.
Chapter 6 explores whether AI can be moral. One approach involves understanding and implementing human morality into AI systems. This includes conducting representative surveys to identify morally relevant features and assigning moral weights to each feature. The book states that the goal is to “aid people and AI systems in making better moral judgments and behaving in ways that are more in line with human moral values” (Schaich Borg et al. 2024, 186). While cautioning against overly high expectations, the authors suggest that building idealized moral judgments into AIs could be a significant improvement.
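To make “assigning moral weights to features” concrete, here is a minimal sketch of how survey-elicited weights could score candidate decisions. The feature names, weights, and options are hypothetical, invented for this example, and not taken from the book.

```python
# Hypothetical moral weights, as if elicited from representative surveys.
weights = {"lives_saved": 0.6, "patient_age_priority": 0.25, "prognosis": 0.15}

def moral_score(option, weights):
    """Combine an option's morally relevant features into one weighted score."""
    return sum(weights[f] * option.get(f, 0.0) for f in weights)

# Two hypothetical candidate decisions, features normalized to [0, 1].
option_a = {"lives_saved": 1.0, "patient_age_priority": 0.2, "prognosis": 0.5}
option_b = {"lives_saved": 0.5, "patient_age_priority": 0.9, "prognosis": 0.8}

best = max([option_a, option_b], key=lambda o: moral_score(o, weights))
```

A linear weighted sum is the simplest possible aggregation; the real difficulty the book points to lies upstream, in deciding which features are morally relevant and whose judgments determine the weights.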
Chapter 7 focuses on actions we can take to ensure AI has a positive impact on society. It acknowledges that technology alone is not enough and emphasizes the importance of human involvement. The book identifies a key issue: “the different types of AI contributors described […] often do not have the opportunity to communicate during the creation of a specific AI product” (Schaich Borg et al. 2024, 195). It also examines the consequences of confronting ethical AI considerations too late in the development process. The chapter concludes with several calls to action.
Moral AI offers a comprehensive and thought-provoking exploration of the ethical considerations surrounding artificial intelligence. It’s a valuable resource for anyone interested in building fair, safe, and moral AI systems. The authors stress that planning for AI’s unanticipated impacts is critical to ensure safety and maximize societal benefits. The book highlights that addressing ethical issues proactively, and with a multidisciplinary approach, can lead to AI systems that offer real improvements over existing solutions. By embracing the book’s calls to action, such as scaling moral AI technical tools or fostering civic participation, we can work towards a future where AI reflects human values and serves the common good.