In 2019, a Chinese researcher named Li Bicheng laid out his ideas about manipulating public opinion using AI. A network of “intelligent agents”—an army of fake online personae, controlled by AI—could act just realistically enough to shape consensus on issues of concern to the Chinese Communist Party, such as its handling of the COVID-19 pandemic. Just a few years earlier, Li had written in other articles that China should improve its ability to conduct “online information deception” and “online public opinion guidance.”
Li is no outlier. In fact, he is the ultimate insider, with a long research career at the People's Liberation Army's top information-warfare research institute. His vision of using AI to manipulate social media was published in one of the Chinese military's top academic journals. He is connected to the PLA's only known information-warfare unit, Base 311. His articles, therefore, should be viewed as a harbinger of a coming AI-assisted flood of Chinese influence operations across the web.
As Meta recently disclosed in its quarterly adversarial threat report, Western internet platforms are already drowning in pro-Beijing content posted by groups linked to the Chinese government. According to the Meta report, more than half a million Facebook users followed at least one of these fake accounts in the broader Chinese network—which relied on click farms based in Vietnam and Brazil to boost its reach. The report also states that the Chinese network bought about $3,000 worth of advertisements to further promote its posts. This effort, however, still appears to be run ultimately by humans and has had only marginal real-world results. A recent State Department report on China's influence operations reinforces this point.…
The remainder of this commentary is available at time.com.
Nathan Beauchamp-Mustafaga is a policy researcher at the nonprofit, nonpartisan RAND Corporation, where he focuses on Chinese strategies for social media manipulation, and William Marcellino is a senior behavioral scientist who works on AI and disinformation issues at RAND.
This commentary originally appeared on TIME on October 5, 2023. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.