
The new Code of Practice on Disinformation: an attempt to restore responsibility in the online public sphere



Martina Furlan, Belgium

The information and views presented in this article are those of the author only, and do not reflect the positions and opinions of their former or current employers, or of any organisation they were or are affiliated with.


Executive summary

Three years after the first Code of Practice on Disinformation, the Commission proposes a new, more stringent version. The Code will be integrated into the framework of the Digital Services Act, a legislative package expected to enter into force in late 2022. The Code's signatories are online platforms, advertising agencies, and technology providers, the actors that together shape the online information ecosystem. In an attempt to restore responsibility in the online sphere, the Commission proposes that signatories take more stringent action on the monetisation of disinformation, political advertising, technology design, and data disclosure. Such commitments can change things for the better only if two conditions are met: revenue models must reflect the new rules, and a wide array of actors must commit to them.


What is the CoP?

The Code of Practice on Disinformation (CoP) comprises a set of principles to counter disinformation online. Its signatories, online platforms and the trade associations representing them, voluntarily commit to respecting its principles and to pursuing policies to achieve them. Following a monitoring period, the Commission published its Guidance on Strengthening the Code of Practice on Disinformation in May 2021. The strengthened Code will enter into force by the end of 2021 and will remain non-binding until the Digital Services Act enters into force, presumably by late 2022.

The Digital Services Act (DSA) is a legislative initiative to set rules for digital services and digital markets in the EU. Together with its complementary proposal, the Digital Markets Act, it is now making its way through the EU legislative process, and both are expected to enter into force in late 2022. The DSA will incorporate the updated Code of Practice on Disinformation (1).

Why do we need a new CoP?

The first Code was intended to introduce practices against disinformation into the lawless environment of online information. It collected the signatures of major online players (social media platforms, internet providers, advertising agencies). However, as technology and the nature of disinformation change, the Code needs constant updating. Moreover, the first Code lacked narrowly defined goals and monitoring instruments. The Guidance to the new Code introduces more stringent demands, such as tougher action against the monetisation of disinformation, but most importantly it is a taste of things to come: it anticipates the new obligations that will arise from the DSA.

Who are the signatories of the CoP?

The Code has been signed, so far, by companies such as Facebook, Google, Mozilla, and Twitter, and by trade associations such as the European Association of Communications Agencies, the Interactive Advertising Bureau Europe, and the World Federation of Advertisers (2). In the Guidance, the Commission calls on new and more diverse signatories to join.


What drives disinformation in the EU and what are the main challenges?

The Commission understands disinformation as "verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm. Public harm is understood as a threat to democratic politics and policy processes […] as well as to citizens' health, the environment or security" (3). The Commission's action focuses on online disinformation, a growing societal concern over the past decade. The first action plan dates back to 2018; however, as disinformation is enabled by rapidly changing technologies, the EU's approach needs to be continuously updated and adapted to ever-changing circumstances.

Since software took on the roles once filled by newspaper editors, the notions of content and advertising have become central to the information ecosystem. In traditional print media, the written text took centre stage; on social media, it does not. Platforms adapt content to user preferences, and they sell their audience to advertisers. Advertisers want audiences, not necessarily any particular content. Major social media platforms thus neglected the traditional editorial work of news selection, prioritising engagement and exercising little scrutiny over content. Such gaps have been exploited both by advertisers and by actors seeking audiences for disinformation of a political or ideological nature (4).

Researchers have shown that, in recent years, social media conveyed untrue content during elections and amplified news from unreliable sources. The Brexit referendum and the 2016 US elections were both influenced by the manipulation of targeted audiences, arranged by the notorious company Cambridge Analytica, which exploited Facebook's lax controls over user data (5). After the case came to light in 2018, social media companies took several measures to limit the spread of fake news and violent content (6). However, the basic algorithm that guarantees profit remains unchanged. The link between users' preferences and the content being fed to them creates bubbles: virtual spaces where users' views are not challenged but reinforced by similar opinions. The loudest criticism of social media is that, unlike traditional media, they publish content without taking on editorial responsibility.
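
To make this feedback loop concrete, below is a minimal toy simulation in Python. It is an illustrative sketch only, not any platform's actual system: the topics, the engagement probabilities, and the weight-update rule are invented assumptions. It shows how ranking content by past engagement can collapse a feed onto a user's existing preference.

    # Toy model of an engagement-driven feed (illustrative assumptions only).
    import random

    random.seed(0)

    TOPICS = ["politics_a", "politics_b", "sports", "science"]
    USER_PREF = "politics_a"  # assumption: the simulated user leans one way

    def engages(topic: str) -> bool:
        """Assume the user engages far more often with congenial content."""
        return random.random() < (0.8 if topic == USER_PREF else 0.2)

    # The platform starts out indifferent between topics.
    weights = {t: 1.0 for t in TOPICS}

    for _ in range(200):
        # Serve a topic with probability proportional to its learned weight.
        topic = random.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]
        # Reinforce engaged content; mildly decay ignored content
        # (the floor keeps every weight positive).
        weights[topic] = max(0.1, weights[topic] + (1.0 if engages(topic) else -0.2))

    total = sum(weights.values())
    for t in TOPICS:
        print(f"{t}: {weights[t] / total:.0%} of the feed")

Typically the preferred topic ends up taking the large majority of the simulated feed: engagement drives ranking, ranking drives engagement, and the user's existing views are amplified rather than challenged.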


Public opinion on online political advertising

Over the past decade, political parties have increasingly used social media and personal data to target individuals and win voters. Political campaign regulation varies from country to country; however, public opinion strongly favours making online media adhere to the same rules as traditional media in pre-election periods. EU citizens are strongly concerned about personal data being used for targeted political messages, in the UK as in all other EU member states.

76% of Europeans agree that social media should adhere to the same rules as traditional media in pre-election periods.

67% of Europeans are concerned about personal data being used in targeted political messages in the EU.

Source: Statista (7) (8) (9)

While Facebook took action against the spread and amplification of untrue content, the spread of untruths took other paths. Online messaging platforms such as WhatsApp, Telegram, and Viber became essential in conveying information and news to the masses. In the past two years, marked by the Covid-19 pandemic, they proved critical for the spread of news about governments' actions to contain the virus. Once again, in the absence of editorial responsibility, nothing could stop the spread of falsehoods about vaccination campaigns, which fuelled fear of vaccines. This reinforced the appeal of regulatory intervention to halt disinformation.

Breaking down disinformation

In the Guidance to the new Code, the Commission focuses on strengthening control over long-known issues as well as on addressing the most recent challenges. Besides the "evergreens", such as the lack of editorial responsibility, lax control over content, and the monetisation of disinformation, the new Code will include provisions on messaging services and design practices.

It is difficult to shape a complex phenomenon that intertwines algorithms, online platforms' policies, and revenue models. The Commission therefore focuses on what drives and enables disinformation: algorithms, advertising, and technology.


What are the principles of the CoP?

According to the CoP, efforts against disinformation hinge on the integrity of online information. The principles that guide the signatories' measures are transparency regarding the origin of information, diversity of information, and credibility of information. Finally, the Commission recognises that a healthy online information environment can only be secured through the cooperation of several actors: online platform owners, private entities, public authorities, fact-checking organisations, and journalists.

Recognising that disinformation is amplified by algorithms and advertising and enabled by technology, the signatories of the Code are asked to commit to act in five main domains:

  • scrutiny of advertisement placements,
  • political and issue-based advertising,
  • integrity of services,
  • empowerment of consumers,
  • empowerment of the fact-checking community.

So far, signatories have been asked to provide an annual account of their activities. They took action in these areas, but the Commission's 2020 monitoring programme revealed several deficiencies. The Code fell short of its intended purpose: curbing online disinformation.

First, disinformation continues to earn revenue, as advertisement placement practices have changed very little since 2018. Second, the reporting from signatories was inconsistent and difficult to compare. Third, in the absence of narrowly defined key performance indicators, the effect of the actions was hardly measurable. Last, fact-checking coverage was lacking. At the same time, the channels used to communicate online have shifted, from online platforms to messaging services. Hence the need to widen participation in the Code and open it to new and different signatories.

What will the signatories need to do under the new CoP?

The Commission proposes that signatories adhere to stricter commitments and that signature of the Code be opened to more diverse entities. The Code will encompass several new issues; the priorities that clearly stand out are defunding disinformation, regulating political advertising, data disclosure, and shaping the technology that enables disinformation.

Regarding defunding, signatories should pay more attention to not disseminating disinformation on their own or on third-party services. To do so, they need to set out criteria for advertisement placement and for identifying disinformation content. These criteria should be part of policies designed in advance to counter disinformation.

Political and issue-based advertisements also need to be better labelled. Signatories should agree on common criteria to mark such content or advertisements, so that users can clearly understand that it is not impartial information. Such labels should stay in place throughout the sharing lifecycle. Further, signatories should make sure that the identity of advertisers is disclosed. The most ambitious update concerns the micro-targeting of political advertisements, the mechanism that allowed Cambridge Analytica to organise its campaigns. The Commission wants micro-targeting systems brought into compliance with the General Data Protection Regulation (GDPR), meaning that users should give their consent before social media collect their information and target them.

Fact-checking is an essential element of a safe online information ecosystem. To support it, the Commission proposes that signatories disclose their data to the research community, so that researchers can gain an in-depth understanding of the disinformation patterns occurring online.

Also notable is the commitment to safe design. The issue of how algorithmic systems should be designed already emerged in the proposed Artificial Intelligence Act (AIA). In the Guidance to the new Code, the Commission indicates that software design should minimise risks for users, in accordance with the AIA.

Besides the issue-based updates, the Commission also proposes a structured monitoring system, which will require a standardised reporting format from signatories and, indeed, more work on the Commission's side to monitor the Code's implementation.


What are the main critiques?

The Commission's proposal is very ambitious and one of a kind among democratic orders; the only countries that have cracked down severely on social media are undemocratic. Treading this fine line, the Commission wants to make the online information ecosystem safer for users and better for democratic societies. The goal of defunding disinformation will be achieved only if all the actors involved in the ecosystem participate: representatives of advertisers, communication agencies, and all media outlets. The most ambitious proposal concerns limiting the political micro-targeting of users, which could address the issue of online political polarisation. The commitment to safe design relates to the proposal to regulate artificial intelligence. However, the AIA distinguishes between high-risk and low-risk areas of operation, and neither the AIA nor the Guidance specifies what kind of risk online platforms or social media represent. How rigorous a conformity assessment must be depends on this categorisation, so more should be spelled out about what signatories are expected to do in that regard.

The horizontal issues to be solved, such as structured monitoring and opening the Code to new signatories, are definitely a step in the right direction. The signatories so far represent a small, albeit powerful, share of online actors, and the Commission needs to take more of them on board to bring about organic change.

Finally, there is an elephant in the room that the Guidance does not mention openly: the revenue model of many online platforms will need to adapt to the new rules. Companies that profit from indiscriminate advertising will probably suffer losses once stringent limits on advertisements are in place. This will likely be a point of friction between the signatories and the Commission. However, if signatories want to respect the rules, they will have to figure out how to remain profitable in the new circumstances.

As mentioned before, regulating a phenomenon that intertwines technology, companies' policies, and revenue models is highly complex. The Commission intends to shape companies' policies in the first place; in turn, companies will have to adapt their technology and, ultimately, their revenue models. Those who do not profit from indiscriminate advertisements and clickbait content are therefore a step ahead: once the DSA enters into force, everyone will have to stick to the rules and commit to high integrity standards.


Sources

[1] European Commission. (2021). Guidance on Strengthening the Code of Practice on Disinformation. Available at: https://digital-strategy.ec.europa.eu/en/library/guidance-strengthening-code-practice-disinformation (Accessed: 21 October 2021).

[2] European Commission. (2021). Annual self-assessment reports of signatories to the Code of Practice on Disinformation 2019. Available at: https://digital-strategy.ec.europa.eu/en/news/annual-self-assessment-reports-signatories-code-practice-disinformation-2019 (Accessed: 21 October 2021).

[3] Pasquale, F. 2020. New Laws of Robotics: Defending human expertise in the age of AI. Oxford: Belknap Press. 

[4] European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM/2021/206 final). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 (Accessed: 21 October 2021).

[5] Cadwalladr, C. and Graham-Harrison, E. (2018, 17 March). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. Available at: https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election (Accessed: 21 October 2021).

[6] European Commission. (2019). Facebook report on the implementation of the Code of Practice on Disinformation. Available at: https://digital-strategy.ec.europa.eu/en/news/annual-self-assessment-reports-signatories-code-practice-disinformation-2019 (Accessed: 21 October 2021).

[7] Statista. (2020). Online political advertising in Europe – Statistics & Facts. Available at: https://www.statista.com/topics/5455/online-political-advertising-in-europe/#dossierKeyfigures (Accessed: 21 October 2021).

[8] Statista. (2020). Should online media be made to adhere to the same rules as traditional media in pre-election periods? Available at: https://www.statista.com/statistics/1030237/pre-election-rules-for-online-media-in-the-eu/ (Accessed: 21 October 2021).

[9] Statista. (2018). Levels of concern at personal data being used in targeted political messages in the European Union in 2018, by country. Available at: https://www.statista.com/statistics/1030157/personal-data-in-targeted-political-messages-eu/ (Accessed: 21 October 2021).

