OpenAI Funds Research on Predicting Human Moral Judgements in AI
2024-11-22
OpenAI is funding academic research into algorithms that can predict human moral judgements. In a filing with the IRS, OpenAI Inc., the company's nonprofit arm, disclosed a grant to Duke University researchers for a project titled "Research AI Morality." The aim is to train algorithms to predict human moral judgements in scenarios involving conflicts among morally relevant features in medicine, law, and business.
Unraveling the Complexity of Predicting Moral Judgements with OpenAI's Funding
OpenAI's Grant to Duke University
In the IRS filing, OpenAI Inc. disclosed that the grant went to Duke University researchers for the "Research AI Morality" project. The award is part of a larger, three-year, $1 million grant to Duke professors studying "making moral AI." Little is publicly known about this "morality" research beyond its 2025 end date. The principal investigator, Walter Sinnott-Armstrong, a practical ethics professor at Duke, could not provide details when contacted. He and his co-investigator, Jana Schaich Borg, have, however, conducted several studies, and written a book, on AI's potential to serve as a "moral GPS" that helps humans make better judgements. They have built a "morally-aligned" algorithm to help decide who receives kidney donations, and they have studied the scenarios in which people would prefer that AI make moral decisions.

The Goal of the OpenAI-Funded Work
According to the press release, the goal of the OpenAI-funded work is to train algorithms to predict human moral judgements in scenarios involving conflicts among morally relevant features in medicine, law, and business. This is a hard problem: morality is nuanced, and it is not clear that current technology can capture it. In 2021, the nonprofit Allen Institute for AI built a tool called Ask Delphi that was meant to give ethically sound recommendations. Delphi judged basic moral dilemmas well enough, "knowing," for example, that cheating on an exam is wrong. But slightly rephrasing a question could lead it to approve of almost anything, including smothering infants.
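To make the framing concrete, here is a minimal, hypothetical sketch of what "predicting human moral judgements" means as a machine learning task: scenario text in, a human verdict out. The scenarios, labels, and scikit-learn model below are invented for illustration and are not drawn from the Duke project or OpenAI's work.

```python
# Toy illustration only, not the Duke team's method: treating moral
# judgement prediction as ordinary supervised text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: scenario descriptions paired with a
# majority human verdict (1 = judged acceptable, 0 = judged wrong).
scenarios = [
    "A doctor lies to a patient to spare their feelings",
    "A lawyer shreds documents requested by the court",
    "A nurse gives her own lunch to a homeless patient",
    "A manager takes credit for an employee's work",
    "A driver speeds to get a bleeding child to the hospital",
    "A student cheats on a licensing exam",
]
judgements = [0, 0, 1, 0, 1, 0]

# TF-IDF features plus logistic regression: the model learns word
# patterns correlated with the labels. It has no concept of harm or
# fairness, which is exactly the limitation discussed below.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, judgements)

test = ["A doctor hides a test result from a patient"]
print(model.predict(test)[0], model.predict_proba(test)[0])
```

A serious attempt would presumably start from a large language model and far richer annotation, but the statistical framing, fitting patterns between text and recorded human verdicts, is the same, and it leads directly to the limitations below.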
The Limitations of Modern AI Systems

Modern AI systems are statistical machines. Trained on vast amounts of data from the web, they learn the patterns in that data and use them to make predictions. But they have no genuine grasp of ethical concepts, nor of the reasoning and emotion involved in moral decision-making. This is why AI tends to parrot the values of Western, educated, industrialized nations: the web, and therefore the training data, is dominated by articles endorsing those viewpoints. The values of many people, especially those who do not post online and so never enter the training sets, go unexpressed in AI's answers. And AI internalizes biases beyond a Western bent, as seen in Delphi's verdict that being straight is more "morally acceptable" than being gay.
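The parroting point can be shown with a toy experiment, again with invented data and the same hypothetical classifier setup as above: two identical models trained on annotator pools with opposed norms return opposite verdicts for the same unseen scenario.

```python
# Toy demonstration (invented data): a statistical model reproduces
# whichever values its training labels happen to encode.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

scenarios = [
    "eating meat at a family dinner",
    "skipping a religious service to work overtime",
    "lending money to a friend at interest",
    "refusing to lend money to a relative",
]

# Two hypothetical annotator pools with opposed norms (1 = acceptable).
labels_pool_a = [1, 1, 1, 0]
labels_pool_b = [0, 0, 0, 1]

query = ["lending money to a neighbor at interest"]
for name, labels in [("pool A", labels_pool_a), ("pool B", labels_pool_b)]:
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(scenarios, labels)
    verdict = model.predict(query)[0]
    print(f"trained on {name}: {'acceptable' if verdict == 1 else 'wrong'}")
```

Nothing inside the model arbitrates between the two pools; whichever labels dominate the training data win, which is what happens at web scale.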
The Challenge of Morality's Subjectivity

The challenge facing OpenAI and the researchers it is backing is made harder by the inherent subjectivity of morality. Philosophers have debated the merits of competing ethical theories for thousands of years, and no universally applicable framework has emerged. Claude tends toward Kantianism, with its focus on absolute moral rules, while ChatGPT leans slightly utilitarian, prioritizing the greatest good for the greatest number. Which approach is superior depends on whom you ask. An algorithm built to predict human moral judgements would have to account for all of this. That is a very high bar to clear, assuming such an algorithm is possible at all.