The upcoming elections for the European Parliament will call roughly 400 million people to the polls and fall within an exceptionally busy electoral year worldwide (already dubbed “the most electoral ever”), with a total of about 2 billion voters involved in more than 50 countries.
As is well known, national and supranational institutions have focused on the risks, heightened in the run-up to elections, posed by disinformation campaigns originating both within and, above all, outside individual states, the European Union, or the United States of America, with particular attention to those carried out through Artificial Intelligence since the spread of the Internet and digital communications.
In this regard, the development of generative AI in recent years has made widespread systems capable of producing highly realistic (deepfake) content, whether images, videos, or texts. Consider, for example, face re-enactment and lip-syncing techniques, which make it possible to manipulate a video so as to alter the facial movements and expressions of the person portrayed (a well-known case, from November 2023, is the massive dissemination on social networks of a video depicting U.S. Democratic Representative Ocasio-Cortez delivering a speech, on an extremely sensitive topic, that never took place); systems that synthesize audio from textual input (text-to-speech) or modify existing audio so as to transfer onto it the vocal characteristics of a given person (voice conversion); and, finally, systems for the automated production of texts that are realistic only in appearance.
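To give a sense of how accessible such synthesis has become, the following sketch uses pyttsx3, a freely available offline text-to-speech library for Python; the snippet is purely illustrative of text-to-speech in general and is unrelated to any of the incidents described above.

```python
# Minimal text-to-speech sketch using the pyttsx3 library
# (pip install pyttsx3); illustrative only, showing how easily
# audio can be synthesized from arbitrary textual input.
import pyttsx3

engine = pyttsx3.init()          # start the local speech engine
engine.setProperty("rate", 150)  # speaking rate, words per minute
engine.say("This sentence was never spoken by a human.")
engine.runAndWait()              # block until playback completes
```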
Moreover, Artificial Intelligence is increasingly being used to disseminate disinformation, for example through social bots (fake accounts managed in automated form) or through recommender systems which, starting from the data and information collected on individual users’ preferences, predict how appealing new content and items will be to them (especially on social networks and video-sharing platforms).
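To make the mechanism concrete, here is a deliberately minimal sketch in Python (using only NumPy) of user-based collaborative filtering, one common family of recommender techniques; the ratings matrix and function names are invented for this example, and real platforms use far richer signals and models.

```python
# Minimal user-based collaborative filtering sketch (hypothetical data):
# predict how much a user will like content they have not yet seen
# from the preferences of users with similar tastes.
import numpy as np

# Rows = users, columns = content items; 0 marks unseen content.
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [1.0, 0.0, 5.0, 4.0],
])

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    others = [u for u in range(len(ratings)) if u != user and ratings[u, item] > 0]
    sims = np.array([cosine_sim(ratings[user], ratings[u]) for u in others])
    seen = np.array([ratings[u, item] for u in others])
    return float(sims @ seen / (sims.sum() + 1e-9))

print(predict(user=0, item=2))  # estimated "liking index" for unseen content
```

The predicted score then determines whether the item is surfaced in the user’s feed, which is also why such systems can end up amplifying engaging but false content.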
Aware of the risk that the June elections for Parliament could be hit, in the words of European Commission Vice-President for Values and Transparency Věra Jourová, by “a wave of disinformation,” the Union has begun to push big tech companies to take steps to safeguard democratic participation. Indeed, EU Regulation 2022/2065, the Digital Services Act (DSA), whose main goal is the protection of consumers and their fundamental rights online, has applied in full since February 17, 2024.
The Regulation applies to all intermediary services that transmit or store information (platforms, search engines, hosting services) within the European Union. It targets companies such as Meta, Google, X Corp, and ByteDance, which are now subject to several new obligations aimed at greater transparency in the management of their platforms (Facebook, Instagram, X, YouTube, TikTok, etc.), particularly regarding the operation of algorithms, content removal decisions, and the personalization of advertising shown to users. Service providers must also put in place mechanisms allowing any person or entity to flag potentially illegal information or content, and any restrictions they impose must be clearly and specifically justified. Proven violations can result in fines (up to 6 percent of the company’s total annual worldwide turnover), periodic penalty payments (up to 5 percent of average daily worldwide turnover), and appropriate corrective measures.
Since the legislation took effect, and as the elections approach, the European Commission has already formally opened several proceedings to assess possible violations of the DSA, two of them against Meta, the provider of Facebook and Instagram. The first, opened at the end of April, deals precisely with political content: the Commission suspects that Meta’s approach, which demotes political content in the recommender systems of Instagram and Facebook, including their feeds, does not comply with DSA obligations. The investigation will focus on the compatibility of this approach with transparency and user redress obligations, as well as with risk assessment and mitigation requirements for civic discourse and electoral processes. A further strand of the investigation concerns the alleged absence of an effective third-party tool for real-time monitoring of civic discourse and elections ahead of the European and national votes, following Meta’s decision to deprecate CrowdTangle, a public insights tool that enabled real-time election monitoring by researchers, journalists, and civil society.
It is also worth highlighting that AI systems should not be conceived only as a means of producing and spreading fake content: they are also among the main tools for combating disinformation on the web. Online platforms use AI to detect fake profiles (the so-called “bots” or “trolls” of the web) by monitoring their activity and behavior through the analysis of specific indicators, and to uncover fake or manipulated content using deep learning models based on convolutional neural networks (CNNs).
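By way of illustration only, the sketch below defines a toy convolutional classifier in PyTorch of the general kind such detection systems build on; the architecture, class name, and two-way “authentic versus manipulated” labeling are assumptions made for this example, and a production detector would be far deeper and trained on large labeled datasets.

```python
# A minimal, hypothetical CNN sketch (PyTorch) for classifying an image
# as authentic or manipulated; illustrative only, not a real detector.
import torch
import torch.nn as nn

class ManipulationDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 224x224 -> 112x112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 112x112 -> 56x56
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 2),      # logits: [authentic, manipulated]
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = ManipulationDetector()
frame = torch.randn(1, 3, 224, 224)          # one random RGB frame
print(model(frame).softmax(dim=-1))          # untrained, so roughly [0.5, 0.5]
```

A real system would be trained on pairs of genuine and manipulated frames, so that the convolutional filters learn the low-level artifacts (blending seams, inconsistent lighting, compression anomalies) that manipulation tends to leave behind.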
Institutions and tech companies are therefore called upon to adapt their strategies for countering disinformation to today’s context: in particular, they will have to pursue over time the goal of seizing the opportunities offered by AI technologies to ensure the smooth running of public participation and decision-making processes, starting with the crucial test of this week’s vote.
*Riccardo Borsari is a lawyer and professor of criminal law at the University of Padua.
English version by the Translation Service of Withub