EU drops election “guidelines” on online giants, points to massive DSA fines for deviation

More moderators, fewer deep fakes, s'il vous plaît

The European Commission has invoked the recently enacted Digital Services Act to demand that online platforms work harder to safeguard elections from interference and adopt specific measures to counter the threat of generative AI.

Brussels has dropped a series of “guidelines” on the largest online platforms and search engines, such as X, Meta, Google, TikTok, Snapchat and more, but reminded them it could bring the full force of the DSA to bear, including swingeing fines, if necessary.

Operators falling foul of the guidelines, or failing to produce alternatives, could face enforcement action which could lead to fines of up to 6 percent of turnover.

The guidelines came as the UK government railed against Chinese interference in UK democratic processes, including attributing attacks on the UK Electoral Commission and on parliamentarians’ emails to “Chinese state affiliated actors”.

The EU guidelines outline “best practices” for services with more than 45 million users. These include reinforcing internal processes and setting up internal teams to focus on “local context specific risks”.

They are obliged to implement “elections specific risk mitigation measures”, to promote official, authoritative information on electoral processes, and to “adapt their recommender systems to empower” users and to reduce the “monetisation and virality” of content that undermines elections.

During election periods they should put in place specific measures, including incident response mechanisms, to minimise the impact of 11th-hour incidents such as misinformation. Cooperation with EU and national authorities as well as civil society organisations is expected, as are post-election reviews.

And the EU wants “specific mitigation measures” around generative AI, including clearly labelling AI-generated content such as deep fakes, and adapting and enforcing terms and conditions accordingly.

While today’s announcement refers to “guidelines”, a commission official said, “election integrity is a key priority for DSA enforcement”. There was plenty of evidence of the potential effect on elections, whether from deep fakes, gaming of recommender systems, foreign disinformation, or manipulation to sow division in European societies. “This is not trivial.”

While there has been much focus on deep fakes and the role of AI, the commission is also concerned with issues such as the number of content moderators and fact checkers platforms deploy, their degree of local knowledge, and whether platforms cooperate with authorities and other groups.

The regulators are also frank that they cannot eradicate problematic content completely, but want to ensure mitigation measures are in place to identify it and limit its distribution.

An official said, “This is the only enforceable framework in the world for election integrity, because it is part of the Digital Services Act framework. And it is the only framework that's verifiable… We can check whether these platforms' commitments are implemented, how they are implemented, and we can ask for data and controls on the effectiveness of the measures that platforms put in place.”

The official said, “The DSA contains a legally binding obligation to have effective mitigation measures in place.” Platforms “deviating” from the guidelines would be expected to offer up a “serious explanation” of what actions they are taking.

“We can take enforcement actions that can include fines up to 6% of global annual turnover, or daily penalties up to 5% of global annual daily turnover.”

The commission recently sent requests for information to platforms and plans a stress test in April ahead of the European Parliament elections in June. Officials noted that the pan-continental nature of June’s election means moderation resources will be particularly stretched, and operators will be expected to put special measures in place to account for this.

More specifically, officials noted the commission has already launched a formal investigation for non-compliance with the DSA into X, formerly Twitter, which in the name of free speech absolutism has slashed its own moderation efforts.

The European Commission’s guidelines came a day after both the UK and US called out China for “malicious cyber activity” and sanctioned individuals they claimed were involved. The NCSC said that “the China state-affiliated cyber actor APT31 was almost certainly responsible for conducting online reconnaissance activity in 2021 against the email accounts of UK parliamentarians”.

It also said a 2021/22 compromise of systems at the Electoral Commission was the work of a China-affiliated actor. The agency said it had updated its guidance for political organisations and other groups involved in the UK electoral process.

Meanwhile, the US issued an indictment against Chinese individuals it said were involved in hacks of US politicians and their families, as well as US companies. Chinese officials described the moves as “fabricated and malicious slanders”.
