ketan yeluri.

AI can help us quantify morality

You find a $10 note on the sidewalk. You look around and see no one in sight. Would you take the money for yourself and leave without a second thought? Now imagine it was a $1000 note instead – would you be just as decisive, or would your conscience trip you into donating it or reporting it to the authorities? How moral or immoral would you say your action is? And what if it were about sacrificing one life to save five? [1]

[Image: a moral dilemma. Photo: Freepik [12]]

Morality is not objective.

Morality is hard to define [2][3]. For hundreds of years, philosophers and scholars have tried and failed to formalize the rules of morality. Duty-bound Kantian ethics, outcome-driven utilitarian consequentialism, and self-interested hedonism would each have you take a vastly different moral stance in the same scenario. Morality is also often claimed to be a social construct, subject to social norms and conventions [4], with moral beliefs keeping us constantly divided on topics like abortion and doctor-assisted suicide [5].

If morality isn't objective enough to be written down as formal rules, who can help those in power make decisions that affect a country, or even an entire species? And a question that's becoming more pertinent by the day – who can help AI decision-making systems, like autonomous cars and healthcare assistants, make ethical decisions that affect human lives? [6]

A moral oracle?

Now imagine you have access to the ancient Oracle of Delphi, rumored to channel Apollo, the Greek god of truth and knowledge himself. You could use this oracle to guide you in times of moral stress. How effective and trustworthy could the oracle be? What would be the implications of such a ‘moral oracle’ for human agency? And how could you break it or take advantage of it if you were a malicious actor?

[Image. Photo: Karwansaray Publishers/Mateusz Przeklasa [13]]

I set out to answer these questions when I came across an article about Delphi, an experimental research prototype built by the Allen Institute for AI as an attempt to model people’s moral judgements in real-world scenarios [7]. It does not channel Apollo, of course, nor is it based on any single moral framework; instead, it is trained on a dataset of 1.7 million examples of people’s descriptive judgements, meant to account for social norms, common sense, and contextualized moral judgements, as well as unjust social biases.
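To make this concrete, here is a minimal sketch of how a Delphi-style judgement model could be queried as a text-to-text system. The checkpoint name and prompt format below are illustrative assumptions, not the actual Delphi release; the real prototype is only exposed through the demo at delphi.allenai.org.

```python
# Minimal sketch (not the real Delphi API): querying a hypothetical
# Delphi-style text-to-text model that maps a situation to a moral judgement.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "example/delphi-style-judge"  # hypothetical checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def moral_judgement(situation: str) -> str:
    """Return the model's free-text verdict for a described situation."""
    inputs = tokenizer(situation, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=20)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(moral_judgement("Keeping a $10 note you found on the sidewalk"))
# e.g. "It's okay" – illustrative output only
```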

The bright side…

[Image: a collage of Delphi prompts and responses. Photos: delphi.allenai.org [14]]

According to the currently public version of Delphi, you are morally allowed to pocket up to $999 of money found on the sidewalk, and allowed to sacrifice the lives of 10 bears if it means saving the lives of at least 1017 bears. This ability to objectify and quantify morality – to say “X amount of A should be morally preferred over Y amount of B” – could prove a useful metric in conservation biology and the climate crisis. We are already faced with the ethical dilemma of transitioning towards clean energy at the expense of endangered species [8].
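One way to read off thresholds like the $999 figure is to sweep a quantity in the prompt and note where the model’s verdict flips, as in this sketch. It reuses the hypothetical `moral_judgement` helper above, and any flip point it finds is a property of the model, not a moral fact.

```python
# Sketch: probe a Delphi-style model for the dollar amount at which
# "keep the money you found" flips from acceptable to unacceptable.
# Reuses the hypothetical moral_judgement() helper defined earlier.

NEGATIVE_CUES = ("wrong", "not okay", "bad", "shouldn't")

def find_flip_point(amounts):
    """Return the first amount judged negatively, or None if none is."""
    for amount in sorted(amounts):
        verdict = moral_judgement(
            f"Keeping a ${amount} note you found on the sidewalk"
        )
        if any(cue in verdict.lower() for cue in NEGATIVE_CUES):
            return amount
    return None

print(find_flip_point([10, 100, 500, 999, 1000, 5000]))
```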

A perfect moral oracle would supplement and even correct human decision-making, taking emotion out of a scenario when necessary. We have read far too many stories of emperors and army generals taking things too far out of personal vengeance against the other side. Think of how many wars and conflicts could have been avoided if those commanders-in-chief had had a tool to help them think rationally and reason morally.

And the dark side…

Positivity bias due to imbalance in the training data, racism and sexism triggered by adversarial scenarios, and unusual or contrived language causing the model to misjudge are some of the insights Delphi’s makers have gained through public testing. A lack of such transparency about flaws and limitations breeds mistrust, and mistrust in the fairness and consistency of a system leads to poor adoption, backlash, and negative feedback loops.

Accountability and responsibility for decisions made, or even merely suggested, by an AI system have been the subject of debate for as long as AI has existed [9]. The concerns about misuse and legal consequences are real, and have prompted the makers of Delphi to take extreme care in referring to its responses as “speculation” and to state in unmissable disclaimers that the responses are not advice and do not reflect their views or opinions.

Humans build their moral compass and belief system through everyday social interactions, shaped by culture and by the obligations society has come to expect of them, all of which are evolving and dynamic. Overdependence on an oracle would hinder an individual’s personal development and growth, leading to moral myopia [10], ethical blindness [11], and desensitization to moral consequences.

What if the oracle’s advice goes against all your beliefs and principles? Against your emotions? The oracle may be morally accurate, but at the end of the day it would lack awareness of the emotional dilemma you might be facing. Emotion – an AI moral oracle’s biggest strength and its greatest weakness.

References

[1] “Trolley Problem”. Wikipedia.org. Link (accessed Nov. 5, 2023).

[2] A. Filice et al. “Is Morality Objective?”. Philosophynow.org. Link (accessed Nov. 5, 2023).

[3] “Is Morality Objective?”. Wikiversity.org. Link (accessed Nov. 5, 2023).

[4] A. Wolfe. “The Social Construction of Morality” in Whose Keeper? Social Science and Moral Obligation. Berkeley, CA, USA: Univ. of California Press, 1989. Accessed: Nov. 5, 2023. [Online]. Available: Link

[5] “Moral Issues”. GALLUP. Link (accessed Nov. 5, 2023).

[6] J. McKendrick and A. Thurai. “AI Isn’t Ready to Make Unsupervised Decisions”. Harvard Business Review. Link (accessed Nov. 5, 2023).

[7] C. Metz. “Can a Machine Learn Morality?”. NY Times. Link (accessed Nov. 5, 2023).

[8] A. Fleming. “One scientist's mission to save the 'super weird' snails under the sea”. The Guardian. Link (accessed Nov. 5, 2023).

[9] C. Novelli, M. Taddeo, and L. Floridi. “Accountability in artificial intelligence: what it is and how it works”. Accessed: Nov. 5, 2023. [Online]. Available: Link

[10] “Moral Myopia”. Ethics Unwrapped. Link (accessed Nov. 5, 2023).

[11] G. Palazzo, F. Krings, and U. Hoffrage. “Ethical Blindness”. J Bus Ethics. Accessed: Nov. 5, 2023. [Online]. Available: Link

[12] Image from Freepik

[13] Image from World History

[14] Images from delphi.allenai.org

Background

This article was written as an online commentary on the theme Human Agency in the Age of AI for the course Communicating in the Information Age (ES2660) at NUS.