Anthropic is launching a new program to study AI 'model welfare'
- Anthropic launched a research program Thursday to investigate what it calls 'model welfare'.
- The initiative comes amid significant disagreement within the AI community over what characteristics AI models may have and how they should be treated.
- The program will explore how to determine whether AI models warrant moral consideration and how to identify potential signs of distress in models.
- Anthropic stated, "We recognize that we'll need to regularly revise our ideas," acknowledging that the field is still in its early stages.
- The company does not rule out the possibility that future AI systems could be conscious.
20 Articles
Anthropic is launching a new program to study AI ‘model welfare’
Could future AIs be “conscious,” and experience the world similarly to the way humans do? There’s no strong evidence that they will, but Anthropic isn’t ruling out the possibility. On Thursday, the AI lab announced that it has started a research program to investigate — and prepare to navigate — what it’s calling “model welfare.” As part of the effort, Anthropic says it’ll explore things like how to determine whether the “welfare” of an AI model…
AI employees with 'memories' and company passwords are a year away, says Anthropic chief information security officer
Anthropic’s Chief Information Security Officer (CISO), Jason Clinton, has said AI “virtual employees”—complete with memories, roles, and corporate credentials—could be just one year away. However, Clinton told Axios that the new frontier of AI agents poses a unique set of cybersecurity risks. Anthropic, one of the U.S.'s leading AI labs, anticipates that artificial intelligence-powered virtual employees could emerge in the workplace as early as …
Coverage Details
Bias Distribution
- 67% of the sources are Center