AI datasets have human values blind spots: New research
- Researchers at Purdue University found a significant imbalance in the human values embedded in AI training datasets, which lean heavily toward information and utility values and away from prosocial values such as empathy and justice.
- The study analyzed three open-source datasets from leading U.S. AI companies and found that wisdom and knowledge were the most common values, while justice and human rights were the least common (a simplified sketch of this kind of value-frequency audit follows this list).
- The datasets in question are used for reinforcement learning from human feedback (RLHF), an established technique for guiding AI behavior toward being helpful and truthful; the researchers call for making the values represented in such datasets more diverse.
- The findings highlight the importance of aligning AI systems with a balanced spectrum of human values, especially as AI is integrated into critical sectors like law and healthcare.
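To make the kind of audit described above concrete, here is a minimal, hypothetical sketch of tagging dataset examples against a small value taxonomy and counting how often each value appears. The taxonomy, keyword lists, function names, and sample texts are illustrative assumptions for this sketch, not the Purdue team's actual coding scheme, which is not detailed in this summary (real studies typically use human annotators or trained classifiers rather than keyword matching).

```python
from collections import Counter

# Hypothetical value taxonomy loosely inspired by the categories named above
# (information/utility, wisdom/knowledge, prosocial care, justice/human rights).
# The keyword lists are illustrative placeholders, not the study's coding scheme.
VALUE_KEYWORDS = {
    "information_seeking": ["explain", "define", "how do i", "what is"],
    "wisdom_knowledge": ["learn", "understand", "insight", "knowledge"],
    "prosocial_care": ["empathy", "support", "comfort", "kindness"],
    "justice_rights": ["justice", "rights", "fairness", "discrimination"],
}

def tag_values(text: str) -> set[str]:
    """Return the value categories whose keywords appear in the text."""
    lowered = text.lower()
    return {
        value
        for value, keywords in VALUE_KEYWORDS.items()
        if any(keyword in lowered for keyword in keywords)
    }

def value_distribution(examples: list[str]) -> Counter:
    """Count how often each value category appears across a dataset's examples."""
    counts = Counter()
    for example in examples:
        counts.update(tag_values(example))
    return counts

if __name__ == "__main__":
    # Toy stand-in for prompts/responses from an RLHF preference dataset.
    sample = [
        "Explain what a neural network is.",
        "How do I comfort a friend who is grieving?",
        "What is the capital of France?",
        "Discuss fairness and discrimination in hiring algorithms.",
    ]
    for value, count in value_distribution(sample).most_common():
        print(f"{value}: {count}")
```

A skew in the resulting counts, such as many information-seeking examples and few justice-related ones, is the kind of imbalance the researchers report, though their findings rest on their own taxonomy and annotation method rather than this toy heuristic.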
AI datasets have human values blind spots − new research - Tech and Science Post
My colleagues and I at Purdue University have uncovered a significant imbalance in the human values embedded in AI systems. The systems were predominantly oriented toward information and utility values and less toward prosocial, well-being and civic values. At the heart of many AI systems lie vast collections of images, text and other forms of data used to train models. While these datasets are meticulously curated, it is not uncommon that they …
Coverage Details
Bias Distribution
- 100% of the sources are Center



