Brussels – In just over two months, citizens of the 27 EU member states will go to the polls to renew the European Parliament. For the Union’s institutions, it is already crucial that fake news and disinformation do not pollute the public debate in this final sprint of the election campaign and do not jeopardize the integrity of the vote, above all online. That is why, after communication campaigns and a memorandum of understanding, the European Commission today (March 26) released guidelines on recommended measures for very large online platforms and search engines to mitigate “systemic risks” ahead of the June 6-9 European vote.
The basis for the guidelines is the new digital services law (the Digital Services Act), which designates services with more than 45 million active users in the EU and sets out the obligations they must observe, including on risks related to electoral processes, fundamental rights, and freedom of expression. These are two very large search engines (VLOSEs, in the jargon) – Bing and Google Search – and 17 very large online platforms (VLOPs): social media (Facebook, Instagram, Twitter, TikTok, Snapchat, LinkedIn, Pinterest), e-commerce services (Alibaba AliExpress, Amazon Store, Apple AppStore, Zalando), Google services (Google Play, Google Maps, and Google Shopping), plus Booking.com, Wikipedia, and YouTube. Although these are only recommendations (those that fail to follow them will have to prove to the Commission that the measures they take instead are equally effective), the von der Leyen Commission has scheduled a stress test with stakeholders at the end of April to trial the cooperation tools and mechanisms.
The guidelines include measures to be taken before, during, and after the election, starting with reinforcing internal teams “with adequate resources.” Election-specific risk mitigation measures include promoting official information on electoral processes, media literacy initiatives, adapting recommender systems, and reducing the monetization of disinformation, while political advertising should be “clearly labeled as such.” Of particular interest is the focus on mitigating risks linked to generative artificial intelligence (also in light of the new EU AI Act, about to enter into force), since deepfakes make it easy to spread dangerous content and fake news online. The recommendation is therefore to explicitly label AI-generated content so that users know what kind of information they are dealing with.
The Commission also recommends cooperation with national and European authorities, independent experts, and civil society organizations “to promote efficient exchange of information before, during, and after elections,” including on manipulation and interference from abroad. In the event of the spread of fake news and disinformation, there should be a response mechanism for incidents that could have “a significant effect” on the outcome of the election or on voter turnout. Finally, after the European elections, online platforms and large search engines should publish a non-confidential version of their post-election review documents, “providing an opportunity for public feedback on the risk mitigation measures taken,” the EU Commission explains.
English version by the Translation Service of Withub