European Artificial Intelligence Act

History of AI and the European Union

One of the broadest definitions of AI is a machine’s ability to perform a task that would previously have required human intelligence. By that measure, Artificial Intelligence (“AI”) has been around for decades, despite only gaining widespread popularity in recent years. In 2017, the European Council called for a ‘sense of urgency to address emerging trends’, which at the time included AI, the main aim being to ensure a high level of data protection, digital rights and ethical standards.[1] Subsequently, in March 2018, the European Union (“EU”) initiated its research on AI, together with stakeholders and experts organised into expert groups on artificial intelligence. Later, in June 2018, the EU Commission also established an open policy dialogue, the European AI Alliance, specialising in AI and engaging around 6,000 stakeholders. This has since developed into the hosting of yearly European AI Alliance Assemblies, during which useful discussions on AI are conducted with experts, stakeholders and international actors in the field.[2]


This proposal has been long in the making, as both the European Parliament and the European Council repeatedly called for legislative action to ensure a well-functioning internal market for AI systems in which both the benefits and the risks of AI are adequately addressed at EU level. The European Parliament also laid much of the groundwork for this proposal by adopting a number of resolutions related to AI, including on ethics, liability and copyright, as early as 2020. In 2021, it went a step further, adopting resolutions on AI in criminal matters and in education, culture and the audio-visual sector, as well as a Resolution on a Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies, recommending that the Commission propose legislative action to harness the opportunities and benefits of AI whilst ensuring the protection of ethical principles.[3] The latest significant steps taken by the EU were the issuing of the proposal for the AI Act in April 2021 and, on 13 March 2024, the European Parliament’s adoption of the Act, meaning the legislative process is almost complete. The AI Act will enter into force 20 days after its publication in the Official Journal. Most of its provisions will become applicable two years after the AI Act’s entry into force; however, the provisions on prohibited AI practices will apply after six months, while those on general-purpose AI will apply after 12 months.


Apart from being the world’s first comprehensive horizontal legal framework for AI, this Act is one component of a broader package of measures addressing issues raised by the development and application of AI. As a result, the two important principles of consistency and complementarity are ensured with other ongoing or upcoming Commission initiatives that share the same goal, including the revision of sectoral product legislation (for example, the Machinery Directive and the General Product Safety Directive) and initiatives addressing liability issues arising from emerging technologies, including AI systems. The proposal lays down a coherent, effective and proportionate framework to ensure AI is developed in ways that respect people’s rights and earn their trust, thus making Europe fit for the digital age and turning the next ten years into the Digital Decade. The Act takes a balanced and proportionate horizontal regulatory approach to AI, limited to the minimum requirements necessary to address the risks and problems linked to AI, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market.[4]

Risk-based approach[5]

The Commission also proposed the adoption of a risk-based approach built on four identified categories, whereby legal intervention is tailored in accordance with the level of risk attributed. Firstly, the Act explicitly bans harmful AI practices considered a clear threat to people’s safety, livelihoods and rights because of the unacceptable risk they create. These include ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes (with a limited number of exceptions), AI systems that deploy harmful manipulative ‘subliminal techniques’, systems that exploit specific vulnerable groups, such as persons with a physical or mental disability, and systems used by public authorities, or on their behalf, for social scoring purposes.


Secondly, the Act regulates high-risk AI systems, that is, systems with an adverse impact on people’s safety or their fundamental rights. It further sub-divides this category into systems used as a safety component of a product or falling under EU health and safety harmonisation legislation, and systems deployed in eight specific areas listed in the Annexes, which the Commission may update as necessary through delegated acts. Before being placed on the market or put into service, AI systems in this category must comply with a range of requirements, particularly on risk management, testing, technical robustness, data training and data governance, transparency, human oversight and cybersecurity. The majority of the obligations fall on providers (developers) of high-risk AI systems, regardless of whether they are based in the EU or in a third country.


Thirdly, systems that interact with humans, such as chatbots, emotion recognition systems, biometric categorisation systems, and AI systems that generate or manipulate image, audio or video content (deepfakes), pose a limited risk and are therefore subject to a limited set of transparency obligations. The fourth category covers AI systems that pose only low or minimal risk; these may be developed and used in the EU without any additional legal obligations. However, the proposed AI Act envisaged the creation of codes of conduct to encourage providers of non-high-risk AI systems to apply the mandatory requirements for high-risk AI systems voluntarily.

AI Act and regulatory sandboxes[6]

There is no agreed definition of a regulatory sandbox. The term generally refers to regulatory tools that allow businesses to explore and experiment with new and innovative products, services or business models under the regulator’s supervision for a limited period of time. A sandbox gives innovators an incentive to test their innovations in a controlled environment, allows regulators to better understand the technology, and fosters consumer choice in the long run. However, regulatory sandboxes also carry the risk of being misused or abused, and they need to be surrounded by an appropriate legal framework to succeed.


The regulatory sandbox approach has gained considerable traction across the EU as a means of helping regulators address the development and use of emerging technologies, including AI. In fact, the draft AI Act envisages setting up coordinated AI sandboxes at national level and would establish common rules to ensure their uniform implementation across the EU. This is not a requirement; rather, Member States’ competent authorities are encouraged to set up regulatory sandboxes and to put in place a basic framework for their governance and supervision. Participants in AI regulatory sandboxes would remain liable under applicable EU and Member State legislation for any harm inflicted on third parties as a result of the experimentation taking place in the sandbox.


An AI regulatory sandbox may be established by one or more EU Member States, either jointly or separately. This could result in a plethora of sandbox regimes, frameworks and rules being implemented throughout the EU, posing the risk of diverging national sandboxing rules. Such divergence would, in turn, invite forum-shopping, as AI developers might be encouraged to choose the EU Member State with the least stringent sandbox regime.

For comprehensive insights into the regulatory implications of the new AI Act, the expenditure it may entail, or the optimal setup for integrating AI into your business operations within the EU, feel free to contact us.




[1] European Council on European Council meeting (19 October 2017) – Conclusion EUCO 14/17, 2017, p. 8.

[2] The EU’s approach to artificial intelligence centers on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and fundamental rights.

[3] Proposal for a Regulation of the European Parliament and of the Council laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts.

[4] Ibid.

[5] Find EU briefing on the AI Act here.

[6] Find the briefing on AI and sandboxes here.