Brussels – A 39-page report in which the leading artificial intelligence company analyzes in detail how its software has been used to manipulate information online: OpenAI reveals that over the past three months, the US research lab behind ChatGPT has identified and disrupted disinformation campaigns originating from Russia, China, Israel, and Iran.
Five covert influence operations used the US giant's generative artificial intelligence models to produce short commentaries and longer articles in multiple languages, create names and biographies for social media accounts that served as an echo chamber, conduct open-source research, and translate and proofread texts. The strategies for spreading fake news were similar and covered a wide range of topics: the Russian invasion of Ukraine, the conflict in Gaza, the ongoing elections in India, politics in Europe and the United States, and criticism of the Chinese government by Chinese dissidents and foreign governments.
Two of the five campaigns unmasked by OpenAI were Russian. The first, dubbed Bad Grammar, operated primarily on Telegram and targeted Ukraine, Moldova, the Baltic countries, and the United States. Bad Grammar used OpenAI's models to debug the code for running a Telegram bot and to create short political commentaries in Russian and English, which were then spread across several Telegram channels. In the second, Doppelganger, artificial intelligence was used to translate and edit articles in English and French published on websites tied to the operation, generate headlines and convert news articles into Facebook posts, and generate comments in English, French, German, Italian, and Polish to simulate engagement with the news.
The same pattern was followed by a Chinese network known as Spamouflage and by an Iranian organization, the International Union of Virtual Media. OpenAI's investigation also uncovered the illicit activity of an Israeli commercial company called STOIC, which used artificial intelligence models to generate articles and comments that were then published on various platforms, notably Instagram, Facebook, X, and websites associated with the operation.
The report points out that, in all of these cases, artificial intelligence was incorporated into disinformation campaigns to improve certain aspects of content generation, such as producing more convincing posts in foreign languages, but it was not the only propaganda tool. “All of these operations have used artificial intelligence to some extent, but none have used it exclusively,” OpenAI says. Artificial intelligence-generated material went hand in hand with “more traditional formats, such as manually written texts or memes copied from the Internet.”
With one week to go before the European elections, OpenAI's investigation is another wake-up call for an EU deeply worried about foreign actors' propaganda targeting the electoral process. From January onward, the EU Digital Media Observatory (EDMO) detected a growing trend in disinformation related to EU policies or institutions. In April, according to EDMO, it amounted to 11 per cent of all detected disinformation, “the highest value for EU-related disinformation since our dedicated monitoring began in May 2023.”
OpenAI clarifies that these massive disinformation operations “do not appear to have benefited from a significant increase in audience engagement or reach as a result of our services so far.” However, it is clear that generative artificial intelligence is a tool capable of greatly improving the quality of online disinformation.
English version by the Translation Service of Withub