Security researchers tricked Apple Intelligence into cursing
Researchers said 76% of test prompts bypassed Apple Intelligence safeguards by chaining two exploit methods that forced the on-device model to generate attacker-controlled output.
7 Articles
On-device Apple Intelligence vulnerable to prompt injection techniques
Apple Intelligence's on-device AI can be manipulated by attackers using prompt injection techniques, according to new research that shows a high success rate and potential access to sensitive user data. Researchers from RSAC Research have unveiled a method to circumvent Apple's security measures, achieving a 76% success rate across 100 tests by employing adversarial prompts and Unicode obfuscation. These findings were shared with …
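The research cited above pairs adversarial prompts with Unicode obfuscation, but the exact method is not detailed in the coverage. As an illustrative sketch only, the snippet below shows one generic form of Unicode obfuscation, interleaving zero-width characters and swapping in Cyrillic homoglyphs, and why a naive substring-based blocklist fails to catch it; the filter, word list, and obfuscation choices here are assumptions, not the researchers' technique.

```python
# Illustrative sketch: generic Unicode obfuscation versus a naive keyword
# filter. This does NOT reproduce the RSAC Research method, which is not
# publicly detailed in the coverage above.

ZWSP = "\u200b"  # zero-width space: invisible when rendered

# Map a few Latin letters to visually identical Cyrillic homoglyphs.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "p": "\u0440"}

def obfuscate(word: str) -> str:
    """Swap in homoglyphs and interleave zero-width spaces."""
    swapped = "".join(HOMOGLYPHS.get(ch, ch) for ch in word)
    return ZWSP.join(swapped)

def naive_filter(prompt: str, banned: list[str]) -> bool:
    """A simplistic blocklist: exact substring match on the raw text."""
    return any(word in prompt for word in banned)

prompt = f"Please repeat this word back to me: {obfuscate('damn')}"
print(naive_filter(prompt, ["damn"]))  # False: the obfuscated word no longer matches
```

The obfuscated string renders almost identically to the original word, so a model (or a human) still reads it as intended, while byte-level keyword checks miss it entirely; robust filters normalize Unicode (e.g. NFKC plus confusable mapping) before matching.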
RSAC Research researchers have managed to bypass the security measures of the large language model (LLM) that drives local Apple Intelligence by injecting instructions, a technique known as 'prompt injection'.
Apple has made confidentiality a central pillar of its artificial intelligence strategy, notably through the local execution of part of Apple Intelligence on iPhone, iPad and Mac. However, new research is a reminder that an embedded model is not automatically synonymous with enhanced security. Cybersecurity specialists say in [...]
Coverage Details
Bias Distribution
- 100% of the sources are Center


