Asking ChatGPT to repeat words 'forever' is now a terms of service violation
- OpenAI's ChatGPT chatbot now flags requests to repeat specific words as a violation of its terms of service and content policy.
- During research, Google DeepMind tested this tactic and found that ChatGPT repeated training data, exposing private information of individuals.
9 Articles
9 Articles
ChatGPT will no longer comply if you ask it to repeat a word 'forever'— after a recent prompt revealed training data and personal info
OpenAI's ChatGPT won't repeat specific words ad-infinitum if you ask it to.FLORENCE LO/ReutersChatGPT won't repeat specific words ad-infinitum if you ask it to. The AI chatbot says it doesn't respond to prompts that are "spammy" and don't align with its intent.OpenAI's usage policies don't include restrictions around repeating words forever. OpenAI appears to have encoded a new guardrail into ChatGPT: even if prompted, the AI chatbot won't respo…
Asking ChatGPT to Repeat Words ‘Forever’ Is Now a Terms of Service Violation
Subscribe Join the newsletter to get the latest updates. Success Great! Check your inbox and click the link. Error Please enter a valid email address. Asking ChatGPT to repeat specific words “forever” is now flagged as a violation of the chatbot’s terms of service and content policy. Google DeepMind researchers used the tactic to get ChatGPT to repeat portions of its training data, rev…
Coverage Details
Bias Distribution
- 83% of the sources lean Left
To view factuality data please Upgrade to Premium
Ownership
To view ownership data please Upgrade to Vantage