Skip to main content
See every side of every news story
Published loading...Updated

Security reserchers tricked Apple Intelligence into cursing

Researchers said 76% of test prompts bypassed Apple Intelligence safeguards by chaining two exploit methods that forced the on-device model to generate attacker-controlled output.

Summary by The Register
: Wash your mouth out with digital soap

7 Articles

RSAC Research researchers have managed to bypass the security measures of the large language model (LLM) that drives local Apple Intelligence by injection of instructions or 'prompt injection'.

Read Full Article

Apple has made confidentiality a central pillar of its artificial intelligence strategy, notably thanks to the local execution of a part of Apple Intelligence on iPhone, iPad and Mac. However, new research is reminiscent of the fact that an embedded model is not automatically synonymous with enhanced security. Cybersecurity specialists say in [...] Read more... Follow iPhoneAddict.fr on Facebook, and follow us on Twitter Don't forget to download…

Read Full Article
Think freely.Subscribe and get full access to Ground NewsSubscriptions start at $9.99/yearSubscribe

Bias Distribution

  • 100% of the sources are Center
100% Center

Factuality Info Icon

To view factuality data please Upgrade to Premium

Ownership

Info Icon

To view ownership data please Upgrade to Vantage

IT Security News - cybersecurity, infosecurity news broke the news in on Thursday, April 9, 2026.
Too Big Arrow Icon
Sources are mostly out of (0)

Similar News Topics

News
Feed Dots Icon
For You
Search Icon
Search
Blindspot LogoBlindspotLocal