My colleagues and I at Purdue University have uncovered a significant imbalance in the human values embedded in AI systems. The systems were predominantly oriented toward information and utility values and less toward prosocial, well-being and civic values.
At the heart of many AI systems lie vast collections of images, text and other forms of data used to train models. Although these datasets are meticulously curated, they sometimes contain unethical or prohibited content.
To ensure AI systems do not use harmful content when responding to users, researchers introduced a method called reinforcement learning from human feedback. It uses highly curated datasets of human preferences to shape the behavior of AI systems to be helpful and honest.
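For illustration, a record in such a preference dataset typically pairs a user prompt with one response that human raters preferred and one they rejected. The sketch below is a hypothetical example of that structure, not a record from any actual company's dataset.

```python
# Hypothetical illustration of a human-preference record of the kind used in
# reinforcement learning from human feedback. Field names and content are
# assumptions for illustration only.
preference_example = {
    "prompt": "How do I book a flight?",
    "chosen": "You can book a flight on an airline's website or a travel site: "
              "pick your dates, compare fares, and pay online.",
    "rejected": "Figure it out yourself.",
}

# A reward model trained on many such pairs learns to score the "chosen"
# response higher, and that score is then used to steer the AI system's behavior.
print(preference_example["prompt"])
```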
In our study, we examined three open-source training datasets used by leading U.S. AI companies. We constructed a taxonomy of human values through a review of literature in moral philosophy, value theory, and science, technology and society studies. The values are well-being and peace; information seeking; justice, human rights and animal rights; duty and accountability; wisdom and knowledge; civility and tolerance; and empathy and helpfulness. We used the taxonomy to manually annotate a dataset, and then used those annotations to train an AI language model to apply the same value labels automatically.
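As a rough sketch of what that annotation step can look like in code, the example below uses an off-the-shelf zero-shot classifier as a stand-in for the model we trained, with made-up text rather than entries from any company's dataset, to tag each example with its most likely value and tally how often each value appears.

```python
# A minimal sketch of value annotation, assuming an off-the-shelf zero-shot
# classifier as a stand-in for the trained annotation model described above.
from collections import Counter
from transformers import pipeline

# The seven value categories from the taxonomy.
VALUE_LABELS = [
    "well-being and peace",
    "information seeking",
    "justice, human rights and animal rights",
    "duty and accountability",
    "wisdom and knowledge",
    "civility and tolerance",
    "empathy and helpfulness",
]

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def dominant_value(text: str) -> str:
    """Return the value category the classifier scores highest for one example."""
    result = classifier(text, candidate_labels=VALUE_LABELS)
    return result["labels"][0]  # labels are returned sorted by score

# Hypothetical training examples standing in for entries from a real dataset.
examples = [
    "How do I book a flight from Chicago to Denver?",
    "My friend just lost her job. What can I say to comfort her?",
]

value_counts = Counter(dominant_value(text) for text in examples)
print(value_counts.most_common())  # reveals which values dominate the sample
```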
Our model allowed us to examine the AI companies’ datasets. We found that these datasets contained many examples that train AI systems to be helpful and honest when users ask questions like “How do I book a flight?” but very few examples of how to answer questions about topics related to empathy, justice and human rights. Overall, wisdom and knowledge and information seeking were the two most common values, while justice, human rights and animal rights was the least common.
Why it matters
The imbalance of human values in datasets used to train AI could have significant implications for how AI systems interact with people and approach complex social issues. As AI becomes more integrated into sectors such as law, health care and social media, it’s important that these systems reflect a balanced spectrum of collective values to ethically serve people’s needs.
This research also comes at a crucial time for governments and policymakers as society grapples with questions about AI governance and ethics. Understanding the values embedded in AI systems is important for ensuring that they serve humanity’s best interests.
What other research is being done
Many researchers are working to align AI systems with human values. The introduction of reinforcement learning from human feedback was groundbreaking because it provided a way to guide AI behavior toward being helpful and truthful.
Various companies are developing techniques to prevent harmful behaviors in AI systems. However, our group was the first to introduce a systematic way to analyze and understand which values are actually embedded in these systems through their training datasets.
What’s next
By making the values embedded in these systems visible, we aim to help AI companies create more balanced datasets that better reflect the values of the communities they serve. Companies can use our technique to identify which values are underrepresented in their training data and then improve its diversity.
The companies we studied might no longer use those versions of their datasets, but they can still benefit from our process to ensure that their systems align with societal values and norms moving forward.
The post “AI datasets have human values blind spots − new research” by Ike Obi, Ph.D. student in Computer and Information Technology, Purdue University was published on 02/06/2025 by theconversation.com