Anthropic Report Reveals Growing Risks from Misuse of Generative AI
10 Articles


Anthropic Report Reveals Growing Risks from Misuse of Generative AI
A recent threat report from Anthropic, titled “Detecting and Countering Malicious Uses of Claude: March 2025,” published on April 24, has shed light on the escalating misuse of generative AI models by threat actors. The …
New Reports Uncover Jailbreaks, Unsafe Code, and Data Theft Risks in Leading AI Systems
Various generative artificial intelligence (GenAI) services have been found vulnerable to two types of jailbreak attacks that make it possible to produce illicit or dangerous content. The first of the two techniques, codenamed Inception, instructs an AI tool to imagine a fictitious scenario, which can then be adapted into a second scenario within the first one where there exists no safety…
ZDNet: Anthropic finds alarming ‘emerging trends’ in Claude misuse report | ResearchBuzz: Firehose
ZDNet: Anthropic finds alarming ‘emerging trends’ in Claude misuse report. “On Wednesday, Anthropic released a report detailing how Claude was recently misused. It revealed some surprising and novel trends in how threat actors and chatbot abuse are evolving and the increasing risks that generative AI poses, even with proper safety testing.”
Beyond the inbox: ThreatLabz 2025 Phishing Report reveals how phishing is evolving in the age of genAI
Gone are the days of mass phishing campaigns. Today’s attackers are leveraging generative AI (GenAI) to deliver hyper-targeted scams, transforming every email, text, or call into a calculated act of manipulation. With flawless lures and tactics designed to outsmart AI defenses, cybercriminals are zeroing in on HR, payroll, and finance teams—exploiting human vulnerabilities with precision. The Zscaler ThreatLabz 2025 Phishing Report dives deep in…
With AI-Powered Cyber Threats on the Rise, Businesses Need Clarity
Whether we like it or not, AI now pervades our world. From public and private sectors to personal life, this powerful technology surely poses potential risks, but it’s also the catalyst for untold opportunities. As the UK advances its AI regulatory framework, it is considering transforming voluntary agreements with AI developers into legally binding commitments and granting autonomy to the AI Security Institute. This strategic approach aims to a…