The Artificial Intelligence Act: a landmark proposal that aims to shape the use of Artificial Intelligence in the European Union


Martina Furlan, Belgium

The information and views presented in this article are those of the author only, and do not reflect the positions and opinions of their former or current employers, or of any organisation they were or are affiliated with.

Executive summary

On April 21, 2021, the Commission published the new Artificial Intelligence Act (AIA). This landmark regulatory proposal aims to create a clear regulatory environment for Artificial Intelligence (AI) providers and users and to protect users from the harmful effects of AI deployment. AI-based systems can significantly affect fundamental rights, raising concerns about discrimination, the right to a fair trial, equal opportunities, and privacy. The AIA sets obligations for providers of AI systems that are critical for the rights mentioned above. It is a one-of-a-kind proposal at the international level that could establish the European Union as an ambitious AI legislator worldwide. However, the proposal still needs to be examined by the European Parliament and the Council, and considering the plethora of interests around AI, it is reasonable to expect that the road to adoption is still long. In this policy brief, I briefly explain the principal risks of AI systems, analyze the key elements of the AIA, and highlight the features that could prove critical for its adoption.

By June 2021, 20 EU Member States had published their national AI strategies, while seven were in the final drafting phase. These numbers reflect the will of European countries to coordinate their efforts in this new policy area. However, in the absence of a defined AI regulatory framework, governments struggle to develop priorities and rules for the use of AI systems. If adopted, the EU Artificial Intelligence Act should enable countries to set a clear direction for a comprehensive AI policy.

What is the Artificial Intelligence Act, and why does Europe need it?

On April 21, 2021, the Commission published its first AI regulatory framework: the Proposal for a Regulation of the European Parliament and the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts[1]. The AIA seeks to facilitate the development of a single market for lawful and safe AI. Concern over Artificial Intelligence has grown in parallel with its rising deployment in everyday software. The Commission defines AI systems as software that is developed with a range of techniques and can, for a given set of human-defined objectives, generate outputs such as content, predictions, or decisions influencing the environments it interacts with. AI systems can have significant implications for fundamental rights, among them non-discrimination, the right to a fair trial, equal opportunities, and privacy. While the European GDPR[2] already governs the use of personal data, the AIA imposes limits and restrictions on AI systems and their providers, whether or not they operate with personal data.

What are the main risks AI systems pose?

The consequences most debated in the media are those of AI in the automated public sphere, namely the role of algorithms in polarising public discussion online and generating micro-targeted content. AI has also emerged as problematic in algorithmic decision-making systems (ADMs), which private and public organizations use to score, rank, and predict people’s behavior. Law enforcement agencies have used ADMs for predictive justice, a practice that researchers have widely criticized: ADMs have been trained on incomplete data or on data that included personal attributes (gender, race, membership of a social group), which amounted to discrimination[3]. Lastly, researchers have raised many concerns about biometric identification systems, often employed in non-democratic countries as mass surveillance tools. European countries have taken different approaches[4] towards biometric identification: some have deployed such systems, even unlawfully, while others have prohibited them outright. The AIA aims to establish a common approach towards these practices.

The bumpy road to the Artificial Intelligence Act and its key players

The Artificial Intelligence Act comes three years after the Commission first proclaimed a common European approach to AI[5]. In 2019, the Commission published the Ethics Guidelines for Trustworthy Artificial Intelligence, and in 2020 it released the White Paper on Artificial Intelligence[6]. The White Paper laid the foundation for the AIA, introducing its main elements, such as the risk-based assessment and the case-by-case approach. Yet it puzzled many scholars, mainly because of its lax approach towards remote biometric identification and the vagueness of its definition of “high-risk.” An extensive stakeholder consultation followed, involving businesses, NGOs, and academia.

It appears the Commission has carefully weighed the interests of businesses, civil rights movements, governments, and research institutions. While the AIA prioritises the safeguarding of fundamental values and human rights over freedom of business, it also sets out a plan to help companies navigate the new rules and practices. The Commission balanced not only citizens’ and businesses’ interests but also those of governments and citizens, mainly regarding the use of AI systems by law enforcement agencies.

The core elements of the AIA

The AIA pursues four main goals. First, to ensure that AI systems placed on the market and used in the EU are safe and lawful. Second, to enhance the enforcement of existing laws applicable to AI. Third, to secure legal certainty for investments. Last, to facilitate the development of a single market for lawful AI. It does so by introducing a harmonised set of core requirements for AI systems classified as high-risk. These requirements are complemented by obligations for providers and users of such systems. Providers of non-high-risk AI systems are only encouraged to establish and follow a code of conduct.


The AIA applies to providers of AI systems, namely those who place a product on the market or put it into service, regardless of whether they designed and developed the system. It applies irrespective of the provider’s location if the output produced by the system is used in the EU. As with the GDPR, this provision extends the AIA’s scope to companies that are not located in the EU but use AI systems to process data of European citizens.


The AIA outlaws systems that cause or are likely to cause physical or psychological harm through subliminal techniques or by exploiting the vulnerabilities of a specific group of persons due to their age or physical or mental disability. It prohibits AI systems providing general-purpose social scoring by public authorities. It also precludes the use of “real-time” remote biometric identification systems, such as facial recognition, in publicly accessible spaces for law enforcement purposes. However, real-time biometric identification remains allowed for reasons of public safety, such as identifying the perpetrator of a criminal offence or preventing a terrorist attack.

High-risk AI systems

Annex III sets forth the areas determining the high-risk label:

  1. Biometric identification and categorisation of natural persons,
  2. Management and operation of critical infrastructure,
  3. Education and vocational training,
  4. Employment, workers management and access to self-employment,
  5. Access to and enjoyment of essential private services and public services and benefits,
  6. Law enforcement,
  7. Migration, asylum and border control management,
  8. Administration of justice and democratic processes.

The areas included in the list correspond to the concerns raised by ethicists and discussed above. The AIA would ensure the protection of fundamental rights contained in the TFEU[7], among others: privacy (biometric identification), non-discrimination (AI systems determining access to education, employment, and other services), the right to a fair trial (law enforcement), and the right to the presumption of innocence.

The requirements for high-risk systems

Chapter 2 sets forth the requirements for high-risk systems:

  • High-quality data (high-quality training, validating and testing data sets),
  • Documentation and traceability (keeping records and the availability of technical documentation),
  • Transparency (users should be able to interpret the output),
  • Human oversight (the system should be designed so that natural persons can oversee its functioning),
  • Accuracy and robustness (AI systems should be resilient against risks connected to the system’s limitations).

The AIA’s main novelty is the requirement to follow conformity assessment procedures before placing a high-risk AI system on the Union market. The AIA also envisages post-market monitoring systems to identify adverse effects and remedy them. Providers of AI systems used as components of consumer products, who are already subject to third-party conformity assessment under product safety law, must now also demonstrate compliance with the AI rules. Providers of stand-alone high-risk systems must carry out an internal assessment themselves, except for providers of AI systems used for biometric identification.

According to Article 71 of the AIA, EU Member States can establish rules on administrative fines for infringements of the regulation. At the same time, national authorities are encouraged to help small and medium-sized businesses comply with the new rules and to establish regulatory sandboxes to reduce the regulatory burden.

What are the main criticisms?

In principle, the AIA offers solid safeguards for fundamental values and demands for algorithmic transparency. However, the Commission left out some key elements and made significant concessions to the supporters of minimal bureaucratic intervention.

In my view, the requirements for high-risk applications should extend to non-high-risk applications. Widespread software engineering techniques exist for keeping records and building software safety measures; extending the requirements would make these techniques common practice and ensure designers actually learn to use them, regardless of the case. Further, even the transparency requirement appears somewhat weak. Scholars[8] have placed great emphasis on different kinds of disclosure, in particular towards the public. What is missing is a more nuanced treatment of transparency, one that includes a description of what the algorithm is doing and a causal explanation of why a system has reached a certain conclusion. Transparency disclosure should also be proactive and user-friendly.
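To illustrate how lightweight the record-keeping techniques mentioned above can be, the sketch below logs each automated decision as a structured, append-only audit record. It is a minimal, hypothetical example (the function and file names are invented for illustration, and the AIA prescribes no particular format), not a compliance mechanism:

```python
import hashlib
import json
import time

def log_decision(record_path, model_version, features, output):
    """Append one structured audit record for an automated decision.

    Each record stores a timestamp, the model version, a hash of the
    input, and the output, so a decision can later be traced back to
    the system state that produced it.
    """
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash the input rather than storing raw (possibly personal) data.
        "input_sha256": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    # One JSON record per line, appended so the log is never rewritten.
    with open(record_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Hypothetical usage: log a single credit-scoring decision.
rec = log_decision("decisions.jsonl", "v1.2", {"income": 40000}, "approved")
```

A few lines like these, added at design time, give a provider the traceability that the AIA demands of high-risk systems at close to zero cost, which is precisely why one might expect the practice to spread beyond the high-risk category.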

The second criticism concerns the conformity assessment. Researchers[9] have advocated for the establishment of a European auditing system. Under the AIA, however, the conformity assessment is an internal procedure for the majority of high-risk providers. The relevant documents cannot be reviewed by the public or a regulator, although such review would increase trust in new technologies, and there is no public audit system to guarantee that the rules are respected. The Commission has presumably accommodated the interests of businesses, which traditionally oppose cumbersome bureaucratic procedures.

Last, there is no mention of the social media environment and the problems of the automated public sphere. This issue does not map onto a single distinct fundamental right, but it heavily influences the media environment and, as such, the healthy functioning of democracies. I believe these providers should be considered to operate in an area that must adhere to, at the very least, a code of conduct.

What is next?

The proposed regulation contains important innovations, such as the conformity assessment and the post-market monitoring systems, and solid guarantees regarding respect for fundamental values and human rights. There are also shortcomings, including weak transparency requirements, a loose conformity assessment requirement, and the omission of the automated public sphere. In particular, the loose conformity assessment might slow the uptake of sound coding principles, undermining the idea that “ethical” requirements should be present at the onset of the programming and product design process. However, there is still room for improvement, as the AIA will be under negotiation for some time. It remains to be seen whether the co-legislators will push for a more business-friendly regulation or strengthen the safeguards on AI systems.


[1] European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM/2021/206 final). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 (Accessed: 14 July 2021).

[2] European Parliament and Council of the European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1–88). Available at: http://data.europa.eu/eli/reg/2016/679/oj (Accessed: 14 July 2021).

[3] Pasquale, F. (2020). New Laws of Robotics: Defending Human Expertise in the Age of AI. Cambridge, MA: Belknap Press.

[4] AlgorithmWatch. (2020). “Automating Society Report 2020”. Available at: https://automatingsociety.algorithmwatch.org (Accessed: 14 July 2021).

[5] European Commission. (2018). “Artificial intelligence: Commission outlines a European approach to boost investment and set ethical guidelines.” Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_18_3362  (Accessed: 14 July 2021).

[6] European Commission. (2020). White Paper on Artificial Intelligence: a European approach to excellence and trust. Available at: https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en

[7] Consolidated version of the Treaty on the Functioning of the European Union (OJ C 326, 26.10.2012, p. 47–390). Available at http://data.europa.eu/eli/treaty/tfeu_2012/oj

[8] Diakopoulos, N. in Dubber, M., Pasquale, F. and Das, S. (eds.) (2020). The Oxford Handbook of Ethics of AI. New York: Oxford University Press.

[9] AlgorithmWatch. (2020). “Automating Society Report 2020”. Available at: https://automatingsociety.algorithmwatch.org (Accessed: 14 July 2021).
