
Google Warns Criminals Are Building and Selling Illicit AI Tools - and the Market Is Growing

Google's Threat Intelligence Group found AI-powered malware that uses large language models to dynamically alter its own code, enabling sophisticated evasion in real-world attacks observed in 2025.

  • On Wednesday, Google's Threat Intelligence Group warned that malware can connect to large language models to refine attacks in real time, describing a 'just-in-time' self-modification technique.
  • Researchers found growing demand for malicious AI services on English- and Russian-language underground marketplaces, with developers aggressively promoting AI features alongside API and Discord access.
  • PromptFlux periodically queried Gemini via a 'Thinking Robot' module to generate obfuscated VBScript, QuietVault used on-host AI CLI prompts to hunt for secrets, and PromptSteal, deployed against targets in Ukraine, queried the Qwen model.
  • Google disabled the associated accounts and assets, reinforced Gemini's safeguards, and faced skepticism from security researcher Marcus Hutchins, who said, "It doesn't specify what the code block should do, or how it's going to evade an antivirus."
  • These cases suggest a broader shift in how adversaries use LLMs: groups including Masan and Iranian actors have abused Google's models, and Anthropic has reported similar misuse of its Claude AI chatbot, making this a cross-company threat.
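The 'just-in-time' self-modification described above can be sketched in miniature. The snippet below is a benign illustration, not PromptFlux's actual code: a stub stands in for the model call (the report says PromptFlux queried Gemini), and the "obfuscation" is only a base64 wrapper. The point is why defenders care: every rewrite yields a file with a new hash, which is what defeats signature-based matching.

```python
import base64
import hashlib

def query_llm_stub(source: str) -> str:
    # Stand-in for the model call the report describes (PromptFlux
    # reportedly asked Gemini for obfuscated VBScript). Here the
    # "obfuscation" is just a base64 decode-and-exec wrapper.
    encoded = base64.b64encode(source.encode()).decode()
    return f"import base64\nexec(base64.b64decode('{encoded}').decode())\n"

def just_in_time_rewrite(current_source: str) -> str:
    # One loop iteration: ask the model for a functionally equivalent
    # but textually different variant of the running script.
    return query_llm_stub(current_source)

original = 'greeting = "hello"\n'
variant = just_in_time_rewrite(original)

# Behavior is preserved...
ns = {}
exec(variant, ns)
print(ns["greeting"])  # hello

# ...but the on-disk hash changes with each rewrite, so a static
# signature for one sample never matches the next.
print(hashlib.sha256(original.encode()).hexdigest()
      != hashlib.sha256(variant.encode()).hexdigest())  # True
```

In a real campaign the stub would be a live API call and the obfuscation far more aggressive, but the detection problem is the same: hash- and signature-based tools see a different artifact on every iteration.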
Insights by Ground AI

15 Articles


Bias Distribution

  • 50% of the sources lean Left, 50% of the sources are Center


BleepingComputer broke the news on Wednesday, November 5, 2025.
