Researchers Say Traditional Blame Models Don't Work When AI Causes Harm
2 Articles
Study reveals that responsibility for AI-caused harm is shared between humans and AI
Artificial intelligence (AI) is becoming an integral part of our everyday lives, and with that emerges a pressing question: Who should be held responsible when AI goes wrong? AI lacks consciousness and free will, which makes it difficult to blame the system for its mistakes. AI systems operate semi-autonomously through complex, opaque processes. Hence, even though the systems are developed and used by human stakeholders, it is impossib…
Researchers say traditional blame models don't work when AI causes harm
Artificial intelligence shapes our daily lives in all manner of ways, which raises a simple but awkward question: when an AI system causes harm, who should be responsible? A new study from South Korea's Pusan National University argues that the answer isn't one person or one group, but that responsibility should instead be shared across everyone involved, including the AI systems that help shape the outcome. The paper, published in Topoi, looks close…