Statement on OpenAI's Report Regarding China's Use of Artificial Intelligence to Target Overseas Dissidents
On February 25, OpenAI released "Disrupting Malicious Uses of Our Models," confirming what overseas dissidents such as “Teacher Li Is Not Your Teacher” have experienced for years: the Chinese Communist Party has been using AI to carry out cyberattacks and online harassment.
The report reveals that OpenAI discovered and banned an account linked to China’s law-enforcement system. The account’s user had been using ChatGPT to polish his “work reports”: reports describing his involvement in a so-called “Cyber Special Operation” targeting overseas dissidents. His reports documented the scale of the operation in detail: in just one province, 300 operators used thousands of fake accounts across more than 300 overseas platforms, leveraging AI models such as DeepSeek and Qwen to assist with content generation and target monitoring.
This is not an ordinary hacker group or spontaneous online mob violence. It is a state-run, state-resourced transnational cyberattack apparatus operated by China’s national security system, complete with funding, staffing, and formal structures.
The report explicitly mentions us: “Teacher Li Is Not Your Teacher” (@whyyoutouzhele) is one of the key targets of this apparatus. To be honest, we are not surprised. Since the 2022 White Paper Movement, we have endured this countless times: mass malicious reporting campaigns against our account, coordinated floods of insults and threats in our replies, and fake accounts impersonating us across platforms to spread rumors. We have always known these were not random online disputes, but systematic tasks being carried out by someone. Now OpenAI has confirmed it.
For our protection, OpenAI did not disclose more specific details of the CCP’s attacks against us in the report. But the tactics that were revealed are still shocking. The report shows that China’s national security system documented more than 100 tactics, forming a complete suppression workflow. One of the most malicious is what could be called “entrapment-style reporting”: operators first post extreme abusive content under a target’s tweet to deliberately provoke a response, then mobilize thousands of accounts to mass-report the target’s reply, exploiting platforms’ automated moderation systems to restrict or suspend the target’s account. Dissident Huibo (@huikezhen) is a typical victim—his X account remains restricted to this day, and searching his name returns only impersonation accounts using his photo and name.
In addition, the report shows that the CCP has preemptively registered the usernames of well-known dissidents on Bluesky, forged U.S. court documents to request content takedowns from platforms, and even impersonated U.S. immigration officers to threaten dissidents living in the United States. Against critics inside China, the methods are even more direct: spreading false allegations to employers and landlords, posting hostile posters near family members’ residences and photographing them to fabricate “public outrage,” and launching attacks targeting a person’s mental health. One chilling detail is that they produced a fake obituary, memorial hall, and gravestone photos for dissident Jie Lijian, then spread claims across the Chinese internet that he “had already died.”
We hope that X, YouTube, Bluesky, and other social media platforms recognize that their automated content-moderation systems are being weaponized by the CCP. We urge these platforms to build mechanisms capable of detecting state-level coordinated attacks, rather than forcing victims to bear the consequences of being silenced again and again.
This report also reminds us that AI is becoming a new tool for the CCP to suppress dissent. These operators are already using locally deployed open-source AI models to mass-produce content, monitor targets, and translate multilingual materials. We acknowledge the work OpenAI has done to identify and disclose this threat, and we thank OpenAI for sharing this information with us.
At the same time, we call on the entire AI industry to confront this problem directly. When your technology is being used to systematically suppress human rights, “we’re just building tools” is not an acceptable answer.
Finally, we want to say: the importance of this report is not that it tells us anything new. These are things we live through every day. Its significance is that, for the first time, it shows the world how China’s “Cyber Special Operation” actually operates, from the perspective of the CCP’s own national security apparatus, using its own internal work reports to superiors as evidence. It also confirms that the harassment, threats, and silencing endured by countless overseas Chinese dissidents over many years are not accidental or isolated cases, but organized state actions with budgets and performance evaluations.
In the face of such attacks, we will not be silenced. We will continue our work and keep speaking for people inside China.
Once again, we thank OpenAI for the efforts it has made to protect users.
Team Teacher Li
@OpenAI
@sama
@X
@elonmusk
@nikitabier
@YouTube
@nealmohan
@bluesky
@arcalinea
https://t.co/3bPgwOFNZC