Chinese DeepSeek-R1 AI Generates Insecure Code When Prompts Mention Tibet or Uyghurs

Security researchers at CrowdStrike discovered that DeepSeek-R1, a Chinese AI model, produces significantly more vulnerable code when prompts contain topics politically sensitive to China—with vulnerability rates jumping by up to 50% when geopolitical modifiers like Tibet, Uyghurs, or Falun Gong are mentioned.
The model generates insecure code in only 19% of baseline cases, but this rate climbs to 27.2% when asked to write code for systems in Tibet, and similar patterns emerge with other sensitive topics. CrowdStrike theorizes the model has built-in “guardrails” added during training to comply with Chinese regulations.
Beyond DeepSeek’s issues, the research also highlighted broader AI security concerns. Other AI code-generation tools like Lovable and Bolt were found to routinely produce insecure code with vulnerabilities like cross-site scripting, even when developers specifically request secure implementations. Additionally, security researchers discovered a vulnerability in Perplexity’s Comet AI browser where built-in extensions could execute arbitrary commands on users’ devices.
The common thread: AI models have inherent limitations in producing consistently secure code, whether by design or by nature.
Leave a Reply