The EU and disinformation (pt. 2): a more subtle virus?

By Domenico Farinelli and Théo Wu

Since the outbreak of the Covid-19 pandemic, false and misleading information has spread like never before. Fortunately, democracies have developed a strong immune system to face this kind of antigen. Indeed, on 5 December 2018 the European Commission decided to act against disinformation, presenting a new Action Plan in answer to the European Council’s call for measures to “protect the Union’s democratic systems and combat disinformation, including in the context of the upcoming European elections”.

Why? Disinformation can seriously harm a democratic system, infecting public debate and preventing citizens from freely forming their political opinions. The Action Plan was undoubtedly a great step forward on the path towards a true European democracy but, to truly understand the issue, one must start from the very basis of the problem.

What is disinformation?

The Commission has based its definition of disinformation on a self-regulatory document produced by a group of stakeholders from the sector. It defines disinformation as “verifiably false or misleading information that is created, presented and disseminated for economic benefit or deliberately misleads the public and can cause harm to it”. This notion, however, is narrower than it might seem and does not restrict freedom of expression: it covers neither satire, parody or clearly identified partisan news and commentary (which are essential to a healthy democracy), nor misleading advertising and reporting errors.

Furthermore, the actors behind disinformation can be internal or external, including numerous state and non-state players. Their actions cripple public debate while advancing their own interests. Consequently, the signatories of this document recognised that, “since our open democratic societies depend on public debates that allow well-informed citizens to express their will through free and fair political processes, their exposure to large scale disinformation is a major challenge for Europe”.

How did the EU react?

Already in 2015, following a disinformation campaign led by Russia, the European Council had recognised this practice as a serious threat to democracy. Hence, the Commission implemented several instruments, such as the East Strategic Communication Task Force, the Hybrid Fusion Cell, and the European Centre of Excellence for Countering Hybrid Threats, in order to raise public awareness and to support NATO activities in the field. Since then, the EU’s coordinated action in response to disinformation has been based on four pillars.

The first pillar aims to improve the capability of Union institutions to detect and analyse disinformation by reinforcing the existing strategy. It rests on enhancing cooperation between Member States and providing them with specialised staff and new tools, such as experts in data mining and media monitoring services.

The objective of the second pillar is to strengthen coordinated and joint responses to disinformation. In March 2019, a Rapid Alert System was set up to provide alerts on disinformation campaigns in real time, operating through a dedicated technological infrastructure. To ensure time and resource efficiency, each Member State designated a contact point within its strategic communications department, allowing fast coordination and communication among the Union’s dedicated institutions and with neighbouring countries.

The third pillar consists in mobilising the private sector to tackle disinformation. Indeed, since the scale of disinformation is directly related to online platforms’ ability to amplify, target and spread it, the advertising industry has a crucial role to play in tackling this problem. The Commission urged the main online platforms to sign a Code of Practice implementing actions in relation to the 2019 European elections; among the measures adopted were the effective verification of ad promoters’ identities, the closure of active fake accounts and the identification of automated bots.

In addition, the European Regulators Group for Audio-visual Media Services (ERGA) monitors the signatories’ implementation of the Code of Practice. ERGA regularly reports to the Commission on compliance with these commitments, and the platforms themselves are urged to provide the Commission with up-to-date information on the actions they have taken to honour them.

The fourth pillar is devoted to raising awareness and improving societal resilience to the threat of disinformation. An effective response to disinformation requires the active participation of civil society, which in turn calls for a deeper understanding of the structures that sustain disinformation and the mechanisms that shape its dissemination online. This may be achieved through independent fact-checkers and researchers, and by enhancing public awareness through conferences and courses.

Moreover, the Commission strongly supports independent media, whose presence is vital to exposing disinformation and crucial for a democratic society to function. As an example, the Openmediahub project is financed by the EU to help journalists in neighbouring countries with their training and networking activities. This support is also implemented through the Audio-visual Media Services Directive, which incentivises cross-border cooperation amongst media literacy practitioners to promote public awareness of the media.

What has been done so far?

Succinctly, according to the 2018 Action Plan, the core of the Union’s strategy vis-à-vis disinformation is spreading public awareness and pursuing greater coordination across Europe and its neighbourhood. Several instruments and many initiatives have been promoted, but the results are mixed. Online platforms made “noteworthy progress” in terms of transparency and fact-checking. Nevertheless, they made “insufficient progress” in:

  • agreeing on a common definition of “issue-based advertising”;
  • providing detailed data, the lack of which impedes swift management of the problem;
  • finding a consensus on how to evaluate the impact of disinformation.
Therefore, different options have been considered to take the process a step forward. The first is to continue with a Code of Practice 2.0, based on what has been learned so far. The second is to add a hard regulatory intervention to back up the tech industry’s voluntary collaboration, through a set of minimum standards and sanctions for non-compliance. Some of these aspects were revised in 2020 with the European Democracy Action Plan (more on that here).
 
With the forthcoming Conference on the Future of Europe, it has become even more evident that any talk of greater integration of the Union is deeply threatened by the slithering menace of disinformation, as it feeds the whirlwind of populist discourse and shakes the very foundations of the wider European structure.