Google Declares AI Bug Hunting Season Open
Google is offering up to $30,000 for reports of serious AI security flaws, including rogue actions and data leaks, as part of an effort to protect user data across key products, the company said.
- On Monday, Google launched a dedicated AI Vulnerability Reward Program inviting security researchers and ethical hackers to report AI flaws, offering up to $30,000 for high-impact reports.
- Google said the program aims to harden AI security as these systems are deployed more widely; it scopes qualifying AI bugs to security exploits, excluding hallucinations and content-related issues such as hate speech and copyright infringement.
- Flagship products such as Search, Gemini Apps, Gmail and Drive qualify for the highest $20,000 base award, with in-scope issues including rogue actions, sensitive data exfiltration, and phishing enablement, according to Google.
- Alongside the program, Google launched CodeMender, an AI agent that automatically detects and fixes vulnerabilities; working with human reviewers, it has patched over 70 verified issues. Researchers have already earned roughly $430,000 in AI-related rewards over the past two years.
- While researchers praised the focus, many argued the top reward may be too low, with some claiming $30,000 is unlikely to deter private exploit sales, and urged higher payouts and faster triage.
12 Articles
Google Launches AI-only Bug Bounty But Critics Say Payouts Fall Short
Google has opened a bounty program focused solely on flaws that let artificial intelligence systems take unsafe or unwanted actions. Google first added AI-related issues to its broader Vulnerability Reward Program (VRP) in October 2023. Over the past two years, researchers have earned more than $430,000 in AI-related rewards. The new dedicated AI Vulnerability Reward Program builds on these efforts with clearer rules and a focus on high-impact ex…
Google is paying up to $30,000 to anyone who can break its AI: How to cash in on the bounty
Google has launched a dedicated bug bounty programme offering rewards of up to $30,000 for security researchers who uncover critical vulnerabilities—termed 'rogue actions'—in its AI-powered products such as Search, Gemini Apps, and Workspace. The initiative, which builds on two years of inviting AI researchers to test its systems, aims to identify and mitigate potentially harmful exploits, with specific guidelines distinguishing AI bugs from oth…
Coverage Details
Bias Distribution
- 50% of the sources are Center