Brussels asks Big Tech to curb risks to integrity of EU elections
EU implements guidelines to address election risks and combat voter disinformation on online platforms.
- Bilawal Riaz
- 1 min read

The European Commission has introduced new guidelines aimed at online platforms with more than 45 million users in the EU to address risks to elections and combat voter disinformation. Established under the Digital Services Act, the guidelines focus on tackling election-related risks, harmful AI-generated content, and misleading political advertising. While the guidelines themselves are not legally binding, platforms that fail to meet their obligations under the act could face fines of up to 6% of their global turnover.

The initiative marks a shift away from industry self-regulation, driven by concerns about generative AI, deepfakes, and misinformation that can undermine the integrity of elections. Platforms are required to cooperate with authorities to address emerging threats and to give users control over the content they see.

The move comes ahead of the European Parliament elections, with companies such as Google, Meta, and TikTok already taking steps to combat misinformation. Platforms will be tested on the rules in April, and with 370 million voters across 27 member states, the EU's multilingual nature poses a particular challenge. The Commission is urging that such safeguards be considered on a global scale, given the importance of upholding democratic values in the digital sphere.