Editorial

Navigating the EU AI Act: Key Insights into Regulation (EU) 2024/1689

Last week, on 12 July 2024, the EU Artificial Intelligence (AI) Act was published in the Official Journal as Regulation (EU) 2024/1689. The countdown has now officially started for institutions to become compliant: by mid-2026, the provisions of the regulation will generally be fully applicable.

Contributor

Niamh is a technology leader with experience managing complex transformation projects, and has an academic background in computational neuroscience and neuroeconomics.

Niamh Kingsley
Head of Product Innovation & Artificial Intelligence

The full title of the regulation is:

"Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) Text with EEA relevance."

EU lawmakers have adopted a phased approach, which has led to some general confusion about what applies, to whom, and when. See below for a summary of the timeline for the next three years (and please note that it may be subject to adjustment as the implementation process continues):

Overview of timeline

12 July 2024
- Full and final text of the EU AI Act is published in the Official Journal.

01 August 2024
- The regulation officially enters into force.

02 February 2025
- Summary: General provisions & AI literacy come into effect.
- Specifically: Chapter I & Chapter II start to apply. These cover ‘general provisions’ and ‘prohibited AI practices’ respectively.

02 May 2025
- Summary: Codes of practice for providers of in-scope AI models must be ready. The EU’s AI Office will facilitate the drafting of these codes and standards.

02 August 2025
- Summary: Rules on general-purpose AI models come into effect.
- Specifically: Chapter III, Section 4 (‘notifying authorities’), Chapter V (‘general-purpose AI models’), Chapter VII (‘governance’), Chapter XII (‘penalties’), & Article 78 (‘confidentiality’) start to apply.
- Article 101 (‘fines for providers of general-purpose AI models’) does not yet apply.

02 August 2026
- Summary: According to Article 113, the remainder of the AI Act (including Annex III ‘high-risk systems’) will apply, with some exceptions.
- Specifically: As per the above, some chapters will already apply from 02 February 2025 & 02 August 2025.
- As per the below, rules for Annex I ‘high-risk systems’ apply from 02 August 2027.

02 August 2027
- Annex I ‘high-risk systems’ and corresponding obligations apply.

What obligations does this place on financial institutions?

Depending on the use case and perceived risk level of an AI solution, different obligations will apply. Some key points:

  • High-Risk AI Systems (Chapter III, Article 6 & Annex III). Financial institutions that develop or use AI systems for (1) credit scoring and creditworthiness assessment, (2) determining access to financial services (e.g., loans), or (3) pricing and risk assessments in life and health insurance are operating high-risk AI systems. This means there are specific requirements that include (but are not limited to) implementing a risk management system, registering systems in the EU database before deployment, implementing a post-market monitoring system, maintaining technical documentation, and ensuring accuracy, robustness, and cybersecurity (see the illustrative sketch after this list).

  • Regulatory Compliance (Chapter III, Article 21 & Chapter IX). Financial institutions using AI systems must cooperate with national competent authorities and provide them with all information necessary to verify compliance with the regulation (e.g., a model register and risk assessments).

  • Incident Reporting (Chapter IX, Article 73). Providers and deployers of AI systems in financial services must report any serious incidents to the national competent authorities.

  • Transparency Obligations (Chapter IV, Article 50). Deployers of particular types of AI system (e.g., a bank deploying a customer-facing chatbot) must inform individuals that they are interacting with an AI system. Deployers must also allow affected individuals to request human intervention, express their point of view, and contest decisions made by an AI system.
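For institutions starting to operationalise the high-risk obligations above, below is a minimal, illustrative sketch (in Python) of an internal AI model register that records each system’s use case, risk classification, and compliance evidence. Everything in it (the risk-tier labels, the use-case keywords, the register fields) is our own illustrative assumption, not a schema or terminology mandated by the regulation; always map your register to the final text and legal advice.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

# Risk tiers loosely mirroring the AI Act's structure. The labels and the
# mapping below are illustrative assumptions, not the regulation's wording.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices (Chapter II)
    HIGH = "high"                   # Article 6 / Annex III use cases
    LIMITED = "limited"             # transparency obligations (Chapter IV)
    MINIMAL = "minimal"             # best practice only

# Hypothetical keywords for the financial-services high-risk use cases
# discussed in this article (credit scoring, access to services, insurance).
HIGH_RISK_USE_CASES = {
    "credit_scoring",
    "creditworthiness_assessment",
    "access_to_financial_services",
    "life_health_insurance_pricing",
}

@dataclass
class ModelRegisterEntry:
    """One row of an internal AI model register (illustrative schema)."""
    system_name: str
    use_case: str
    deployed: bool
    risk_tier: RiskTier = RiskTier.MINIMAL
    eu_database_registered: bool = False  # required pre-deployment for high-risk
    technical_documentation: list[str] = field(default_factory=list)
    last_post_market_review: date | None = None

    def classify(self) -> RiskTier:
        """Assign a provisional risk tier from the use-case keyword."""
        if self.use_case in HIGH_RISK_USE_CASES:
            self.risk_tier = RiskTier.HIGH
        return self.risk_tier

    def compliance_gaps(self) -> list[str]:
        """Flag obvious gaps against the high-risk obligations listed above."""
        gaps: list[str] = []
        if self.risk_tier is RiskTier.HIGH:
            if self.deployed and not self.eu_database_registered:
                gaps.append("deployed without EU database registration")
            if not self.technical_documentation:
                gaps.append("missing technical documentation")
            if self.last_post_market_review is None:
                gaps.append("no post-market monitoring review on record")
        return gaps

# Example: a credit-scoring model that was deployed before being registered.
entry = ModelRegisterEntry(
    system_name="retail-credit-scorer-v2",
    use_case="credit_scoring",
    deployed=True,
)
entry.classify()
print(entry.risk_tier)          # RiskTier.HIGH
print(entry.compliance_gaps())  # three gaps flagged for this entry
```

The value of a register like this is evidential: when a national competent authority requests information, classifications, registration status, and documented reviews can be produced directly rather than reconstructed after the fact.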

Whilst it is true that, broadly, the bulk of use cases will not be heavily regulated because they are sufficiently low risk, there will still be an expectation of adherence to reporting standards and adoption of best practice.

Some use cases qualify as ‘unacceptable risk’ (read: banned), including social credit scoring, the untargeted compilation of facial recognition databases, and real-time remote biometric identification in publicly accessible spaces. There are some important nuances here, so be sure to take informed legal advice if working to develop or deploy AI systems.

What next?

You can find the official text here: Regulation - EU - 2024/1689 - EN - EUR-Lex (europa.eu). Industry participants should be aware that the regulatory landscape is still evolving: the list of high-risk AI systems may change over time, as may the obligations associated with different use cases.

At Delta Capita, we work in partnership with leading industry consortia and major institutional players to deliver innovative technology solutions and valuable strategic advisory. We encourage our clients to deploy AI solutions with the most comprehensive and detailed obligations in mind, rather than risk introducing regulatory debt later on.

Get in contact with us to understand how we can help you, or reach out to our Head of Product Innovation & AI, Niamh Kingsley (niamh.kingsley@deltacapita.com).

Please note that this summary is an informed, professional interpretation of the regulation, and does not constitute legal guidance.

This blog was written by Niamh Kingsley, Head of Product Innovation & AI, and Olivia Godon, Assistant VP, Post Trade Services.