
    January 27, 2025

    Authors

    • Will Relf, Director of Data
    • Angel Ramon Martinez Bastida, Senior Regulatory Change Manager

    EU AI Act – What you need to do to be compliant

    The EU’s AI Act, passed in May 2024 and fully applicable within two years of its entry into force, is blazing a trail in comprehensively regulating artificial intelligence. Will Relf and Angel Ramon Martinez Bastida map out the steps to compliance as the Act’s provisions begin to take effect.

    Artificial Intelligence (AI) delivers significant improvements in efficiency, decision-making, and innovation; however, these advancements come with inherent risks that must be regulated to ensure AI’s safe and ethical use. Misused or mismanaged AI systems, for example, can significantly compromise privacy and security. Without proper regulation, personal data can be misused, leading to privacy breaches and unauthorised surveillance that infringe individuals’ right to privacy.

    One of the areas of greatest concern to regulators is how AI can influence decision-making in critical areas such as healthcare, finance, politics, and law enforcement. Inaccurate or biased AI algorithms can lead to unfair outcomes, discrimination, and even harm. Robust regulation can mandate transparency, accountability, and fairness in AI systems, ensuring they are designed and deployed ethically and responsibly. All of this has led the European Union to introduce its pioneering AI Act.

    About the AI Act

    The EU AI Act aims to mitigate the risks associated with AI while promoting its beneficial uses. However, no regulator or AI expert can foresee all the risks that AI poses today, let alone as the technology advances, so the legislation sets guardrails that are concrete enough to be implementable, yet general enough to adapt quickly to future developments. This is why the EU AI Act categorises AI systems based on their potential impact and level of risk, ensuring that high-risk systems undergo rigorous assessment and adhere to stringent regulatory standards before being deployed.

    The EU’s proactive approach in introducing the AI Act sets a precedent for other regions to follow, potentially paving the way for a global standard for AI regulation. Currently, countries are approaching AI regulation differently: the U.S. and the UK, for example, have taken a common law approach, addressing risks as they arise or become apparent and using tools under existing legislation to adjust policy and supervise how AI is used.

    The EU AI Act – timelines to be aware of:

    • 1 August 2024 – the Act enters into force.
    • 2 February 2025 – prohibitions on unacceptable-risk AI systems and AI literacy obligations apply.
    • 2 August 2025 – obligations for general-purpose AI (GPAI) model providers apply, alongside the governance and penalty provisions.
    • 2 August 2026 – the majority of the Act’s provisions become applicable.
    • 2 August 2027 – the extended transition period ends for high-risk AI embedded in products covered by EU product safety legislation.

    Impact on private markets

    While the EU AI Act’s scope covers any AI system or tool, the Act’s impact on private markets is relatively limited. Financial institutions primarily act as deployers of AI technologies, and their use of AI typically falls into the lower-risk categories. This means that while compliance with the EU AI Act is necessary, private funds and their managers are not expected to face the same level of scrutiny and regulatory burden as industries dealing with high-risk AI applications.

    However, this is a medium-term view as far as fund administrators are concerned. In the long term, as AI capabilities develop, fund administrators will become providers of AI to GPs and LPs, for example by offering portfolio performance reporting and risk prediction services.

    A classification based on how AI tools are used:

    The Act distinguishes between providers, who develop AI systems or place them on the market under their own name, and deployers, who use AI systems under their authority in a professional context. Importers and distributors of AI systems also fall within its scope.

    Categories of risk:

    Unacceptable risk: AI systems deemed to be a threat to individuals or societal values will be banned. This includes systems involved in cognitive behavioural manipulation, social scoring, and real-time biometric identification.

    High risk: AI systems posing significant risks to safety or fundamental rights fall into this category. Examples include AI used in critical infrastructure management, law enforcement, and legal interpretation, as well as AI systems used in products falling under the EU’s product safety legislation, such as toys, aviation, cars and medical devices. High-risk AI systems will undergo rigorous assessment and must adhere to stringent regulatory standards before being put on the market.

    Generative AI: AI systems not classified as high risk, including generative AI systems such as ChatGPT, must still comply with transparency requirements and EU copyright law.

    High-impact general-purpose AI: Advanced AI models with the potential for systemic impact are subject to thorough evaluation. An example is GPT-4, OpenAI’s large multimodal model, which accepts both image and text inputs, produces text outputs, and exhibits human-level performance on various professional and academic benchmarks, making it highly capable in tasks such as text generation, reasoning, and coding. Any serious incidents must be reported to the European Commission.
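    To make the tiers concrete, the sketch below shows how an internal AI inventory might tag each system with the Act’s risk categories. This is a minimal Python illustration under our own assumptions: the class names, the record schema, and the chatbot example are hypothetical, not anything prescribed by the Act.

```python
from dataclasses import dataclass
from enum import Enum, auto


class RiskTier(Enum):
    """The EU AI Act's risk categories, as described above."""
    UNACCEPTABLE = auto()   # banned outright (e.g. social scoring)
    HIGH = auto()           # rigorous assessment before market entry
    LIMITED = auto()        # transparency obligations (e.g. generative AI)
    MINIMAL = auto()        # no additional obligations


@dataclass
class AISystemRecord:
    """One entry in an internal AI inventory (hypothetical schema)."""
    name: str
    purpose: str
    risk_tier: RiskTier
    is_gpai: bool = False   # general-purpose AI models carry extra duties


def requires_conformity_assessment(record: AISystemRecord) -> bool:
    """High-risk systems need assessment before deployment; banned ones never deploy."""
    if record.risk_tier is RiskTier.UNACCEPTABLE:
        raise ValueError(f"{record.name}: prohibited use under the AI Act")
    return record.risk_tier is RiskTier.HIGH


# Example: a client-facing chatbot would typically sit in the limited-risk tier,
# triggering transparency duties but no conformity assessment.
chatbot = AISystemRecord("client-chatbot", "answer client queries", RiskTier.LIMITED)
print(requires_conformity_assessment(chatbot))  # False
```

    In practice, the tier assigned to each record would come out of the legal analysis above, not from code; the register simply makes the classification auditable.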

    Recommendations for compliance

    From February 2025, the codes of practice will apply. These are a transitional mode of compliance, bridging the gap between GPAI model provider obligations coming into effect (12 months after the Act’s entry into force) and the adoption of harmonised standards (three years or more after entry into force).

    To avoid the potential pitfalls of AI in your business, be proactive in how you mitigate AI risk. The most effective way to do this is to ensure that the data used to train the AI system is itself well-governed and of high quality. Do not jump straight into AI initiatives without first doing the hard work of putting data foundations in place. Many companies rush to get started without due diligence and robust systems, which is why an estimated 80% of AI projects fail. Strong data foundations, governance, and regulatory compliance are how you ensure your AI initiative falls into the other 20%.
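    As a minimal illustration of what a data-foundation gate might look like, the sketch below checks a tabular training dataset for schema completeness and an acceptable share of null values before any model training begins. The function, the field names, and the 1% threshold are illustrative assumptions, not requirements drawn from the AI Act.

```python
from typing import Any


def passes_quality_gate(rows: list[dict[str, Any]],
                        required_fields: set[str],
                        max_missing_ratio: float = 0.01) -> bool:
    """Reject a training dataset with absent fields or too many null values."""
    if not rows:
        return False
    missing = 0
    for row in rows:
        if not required_fields <= row.keys():
            return False                       # schema violation: field absent
        missing += sum(1 for f in required_fields if row[f] is None)
    ratio = missing / (len(rows) * len(required_fields))
    return ratio <= max_missing_ratio          # tolerate at most 1% nulls by default


# Example: two well-formed records pass; a dataset riddled with nulls would not.
sample = [{"fund_id": "A1", "nav": 102.4}, {"fund_id": "B2", "nav": 98.1}]
print(passes_quality_gate(sample, {"fund_id", "nav"}))  # True
```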

    To ensure compliance with the AI Act, organisations should adopt a comprehensive governance framework that includes the following measures:

    1. AI policy: Establish and maintain an AI policy that aligns with the main requirements of the AI Act. If you work multi-jurisdictionally, this will also help you get ahead of any incoming regulations in other locations.

    2. Risk management: Conduct thorough risk assessments for each AI system in use or under consideration, implementing measures to mitigate identified risks. Repeat these assessments as systems iterate and the technology becomes more advanced (a minimal sketch of such a review register follows this list).

    3. Industry updates: Stay informed about industry developments and adopt relevant codes of conduct at the industry level. Be proactive about keeping abreast of developments so you are always leading and never playing catch-up.

    4. AI literacy: Invest in upskilling employees for the use of AI and document these efforts to demonstrate compliance. Ensure training is comprehensive and ongoing to fill gaps in workforce understanding and knowledge.

    5. AI forum: Create an internal committee or forum of experts to periodically review and analyse the use of AI within the organisation from various perspectives. Inviting external industry partners to feed into this forum adds outside perspectives that help spot potential pitfalls.
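    As referenced in recommendation 2, below is a minimal sketch of a review register that flags AI systems whose periodic risk assessment has lapsed. The six-month cadence and the record layout are our own illustrative assumptions; the Act does not mandate a specific review interval.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative review cadence (roughly six months); not mandated by the AI Act.
REVIEW_INTERVAL = timedelta(days=182)


@dataclass
class RiskAssessment:
    system_name: str
    last_reviewed: date
    mitigations: list[str] = field(default_factory=list)

    def is_overdue(self, today: date) -> bool:
        """Flag systems whose periodic risk assessment has lapsed."""
        return today - self.last_reviewed > REVIEW_INTERVAL


register = [
    RiskAssessment("client-chatbot", date(2024, 6, 1), ["human review of outputs"]),
    RiskAssessment("risk-predictor", date(2024, 12, 10), ["bias testing"]),
]

# Surface everything due for re-assessment, e.g. for the AI forum's agenda.
overdue = [a.system_name for a in register if a.is_overdue(date(2025, 1, 27))]
print(overdue)  # ['client-chatbot']
```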

    EU AI Act and innovation

    Regulation and innovation have a nuanced relationship, and regulation can often be a trigger for innovation. Regulatory frameworks encourage organisations to approach business problems in different ways, which in turn creates new solutions and new ways of working.

    Three ways the Act might lead to further innovation within private markets

    1. Provides clear rules – The Act establishes a clear framework for AI development and deployment, outlining responsible and ethical uses of AI. This gives innovators clear parameters within which they can build and scale new AI-enabled products and services.

    2. Prevents market dominance – The Act aims to promote fair competition and prevent the abuse of dominant market positions by large tech companies. It also encourages the use of open-source AI technology and standards which make it easier for smaller companies to enter the market and compete with larger players.

    3. Builds trust – By addressing ethical concerns and focusing on transparency, the Act aims to build ‘trust’ in AI. This is essential for the widespread adoption of AI systems, because when people understand how these systems work and the limits of their decision-making, they are more likely to adopt them.

    Overall, whilst the Act presents some initial challenges for organisations and innovators, it ultimately aims to foster a more trustworthy, robust and innovative AI ecosystem, which benefits the evolution of all industries.

    Our approach to AI

    Aztec Group has updated and revised our policies, put continuous risk assessment in place, and is rolling out AI literacy training among our staff, not only ensuring we comply with the regulations as a company, but also leading by example in the responsible use of AI.

    We are open to discussions and collaborations to further enhance our compliance efforts and contribute to the broader conversation on AI regulation.

    If you have questions or need guidance in understanding and meeting your AI Act compliance requirements, please contact us below.


    Want to talk?

    To discover for yourself what makes us the bright alternative and how we can support you, please contact Will Relf, our Director of Data.


    Will Relf

    Director of Data
