The EU AI Act is Coming into Force. Here’s What It Means for You

As parts of the world’s first comprehensive AI law come into effect, how will it impact financial markets?

Joshua McAlpine 18 March, 2025 | 1:45PM

Illustration of AI depicting a robot deep in thought, with red and green circuit wires extending from its head to represent its cognitive process

Back in April 2021, the European Commission proposed the first EU regulatory framework for artificial intelligence—the Artificial Intelligence Act. The goal: Protect users from risks posed by the deployment of AI systems.

Four years later, parts of the regulation have now come into effect. But what is the EU AI Act? Who does it apply to? What does it mean for financial market participants, and how are investors reacting?

Here’s what you need to know.

What Are the EU AI Act’s Risk Categories?

The EU AI Act breaks AI use down into four levels of risk, and a system’s level determines its regulatory treatment.

1 Unacceptable Risk

The first, and arguably most important, is unacceptable risk: AI systems that pose a threat to people and their fundamental rights, contravening EU values. Unsurprisingly, systems deemed ‘unacceptable’ will be banned. Examples include AI systems that enable social scoring, which categorise people based on personal characteristics or socio-economic factors, and certain uses of biometric identification, with some exceptions for law enforcement purposes.

2 High Risk

Systems that may have a negative impact on fundamental rights will be classed as high risk and divided into two categories: those used in products covered by the EU’s product safety legislation (e.g. aviation or medical devices), and those that will need to be registered in a database (e.g. systems used for law enforcement or migration, asylum, and border control management).

3 Limited Risk/Transparency Risk

Users must be made aware when they’re interacting with AI. Applications susceptible to misuse for manipulation, such as chatbots and deepfakes, must therefore meet transparency obligations, and providers must comply with EU copyright law.

4 Minimal Risk

Applications that don’t fall under the above three categories are classed as ‘minimal risk’ and can be developed and used based on existing legislation without additional legal requirements. It’s important to note that just because something is classed as minimal risk now does not mean the classification won’t change in the future.

The regulation outlines two specific high-risk use cases for the financial services sector. The first is using AI to evaluate someone’s creditworthiness. The second is using AI for life or health insurance pricing.
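To make the four tiers concrete, here is a minimal, hypothetical triage sketch in Python. The tier names follow the Act, but the use-case mapping and the "default to minimal" behaviour are illustrative assumptions rather than legal guidance; of the entries below, only creditworthiness evaluation and life or health insurance pricing are explicitly named as high risk in the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk levels (treatment summaries paraphrased)."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations, e.g. assessment and registration"
    LIMITED = "transparency obligations (disclose AI use)"
    MINIMAL = "no additional requirements under the Act"

# Illustrative mapping of use cases to tiers. Only the two
# financial-services entries are named in the Act itself; the
# rest are assumptions for the sake of the example.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "creditworthiness evaluation": RiskTier.HIGH,
    "life or health insurance pricing": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
}

def triage(use_case: str) -> RiskTier:
    # Unknown use cases default to MINIMAL here purely for
    # illustration; a real firm would need proper legal review.
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
```

For example, `triage("creditworthiness evaluation")` returns `RiskTier.HIGH`, the tier carrying the heaviest compliance burden short of an outright ban.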

Who Does the EU AI Act Apply To?

Naturally, the regulation will apply to all 27 member states of the European Union. Crucially, the regulation will also apply to entities located outside the EU if the AI system is offered in the EU market or impacts people located in the EU.

This adds an additional layer of complexity, as Andrew Ye, investment strategist at Global X ETFs, notes.

“Non-European businesses could face significant adaptation challenges, as the Act’s extraterritorial provisions require compliance from any firm operating in EU markets,” he says. “Recent enforcement actions against newer AI players from various EU countries, including app bans and data privacy investigations, demonstrate stringent cross-border enforcement. These cases highlight regulatory concerns about opaque AI training data practices and jurisdictional compliance, potentially prompting stricter international data transfer rules.”

What Do Professional Investors Say?

While it’s still early days, European fund managers point to three key considerations to monitor: the legislation in practice, the impact on innovation, and the potential opportunities.

Dan Smith, senior equity analyst at Canaccord Wealth, sees two primary challenges for mega-cap tech companies. “Restrictions on AI deployment could dampen demand for AI chips, cloud computing and AI-driven software, which could potentially limit future revenue growth. Secondly, compliance costs related to documentation and auditing will put pressure on margins, which adds to the already challenging outlook for profits in this sector given higher depreciation charges associated with the considerable capital investment required for AI infrastructure,” he notes.

“But in the short term, the biggest obstacle to AI expansion is not regulation, but capacity constraints, particularly the limited availability of GPUs and computing power. Given that AI demand currently outstrips supply, any slowdown in adoption within the EU is likely to be offset by stronger demand from other regions, at least in the near term.”

David Winborne, co-portfolio manager of the Impax Global Equity Opportunities fund, says it’s not just mega-cap companies that face challenges.

“The EU AI Act’s implications are largest for smaller European AI developers. This is partly because bigger companies such as Microsoft MSFT already have thorough controls around their AI solutions, so they would face minimal additional operational costs, and partly because US firms could simply choose not to offer ‘non-compliant’ solutions to EU-based companies. Companies such as the French start-up Mistral AI are more likely to be materially impacted by the requirements, which could slow growth.”

Martin Hermann, senior portfolio manager of global equities at Berenberg Asset Management, notes that while restrictions could stifle AI development in the EU, especially at a time when US companies like OpenAI, Google, and Anthropic are developing cutting-edge AI models, there is a silver lining.

“The EU AI Act creates opportunities for companies specializing in trustworthy AI, explainability, and AI security. Additionally, the Act bans AI models threatening the safety, livelihoods and rights of people,” he says. “Large companies with resources like SAP, Siemens, and European AI labs could benefit from early compliance and regulatory-driven AI solutions.”

Does the EU AI Act Apply to the UK?

Yes. UK firms that create or use an AI system that’s used in the EU will be subject to the regulation.

In contrast to the EU’s risk-based approach, the UK has adopted what it calls a “pro-innovation approach” to AI regulation. What does this mean? Essentially, there are no current plans to introduce new AI-specific legislation. Instead, existing regulatory bodies will need to consider how AI systems are created and used within their remits.

On Jan. 13, 2025, the Labour government announced it would ramp up its efforts to use AI to boost productivity and gross domestic product growth via a 50-point plan. The announcement included a £14 billion investment from three major tech firms to develop the necessary infrastructure in the UK, with a goal of delivering 13,250 jobs in the sector. This is on top of a previously announced £25 billion investment. While the plan focuses on AI infrastructure and the role of AI in public services, it does not outline a plan for regulation.

When Does the EU AI Act Come into Force?

The EU AI Act has a lot of moving parts. Here’s a timeline of the most notable dates:

  • April 2021 – The European Commission proposes the first EU regulatory framework for AI. 
  • March 2024 – Parliament adopts the Artificial Intelligence Act. 
  • May 2024 – The Council approves the AI Act. 
  • June 2024 – European Union lawmakers sign the act. 
  • Aug. 1, 2024 – The EU AI Act enters into force across all 27 EU Member States. 
  • Aug. 1, 2026 – After a two-year implementation period, the regulation will be fully applicable.

Like all pieces of complex legislation, there are some exceptions, with some rules coming into force earlier:

  • Feb. 2, 2025 – AI systems categorised as ‘unacceptable risk’ will be banned.  
  • May 2, 2025 – Codes of practice are due, showing how providers of AI systems will meet their compliance requirements. 
  • Aug. 2, 2027 – High-risk systems will have more time to comply with the legislation.

The Role of AI in Financial Services

Late last year, the Bank of England and Financial Conduct Authority conducted a survey of artificial intelligence and machine learning in UK financial services. The survey found that 75% of firms are already using AI, with a further 10% planning to adopt it within the next three years.

AI is now everywhere, and it’s not going away. While the debate surrounding the extent to which AI will feature in our lives rages on, there are potential benefits for financial market participants willing to give AI a try.

  • Automation – When it comes to manual or repetitive tasks, AI can be a huge time saver. Automated data aggregation, for example, pulls in vast amounts of data and frees up time for advisors, wealth managers, and asset managers to focus on more meaningful work.  
  • Analysis – AI can be a useful analytical tool to get insights quickly. For example, sentiment analysis can help monitor the mood of the market, helping analysts anticipate changes ahead of time. 
  • Anomaly Detection – AI is great at looking for patterns and outliers, so could be a powerful tool to combat fraud.  
  • Personalisation – Thanks to advances in natural language processing, managers and advisors can use tools like chatbots to improve client communication, deliver more personalised messages, learn more about a client’s goals and risk tolerance level, and potentially prevent customer churn.  
  • Cost Reduction – Thanks to the above, AI has the potential to reduce costs, increase ROI, and boost productivity (if used effectively).  
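The anomaly-detection idea above can be illustrated with a deliberately simple statistical stand-in: flagging transaction amounts that sit far from the mean. Real fraud systems use far richer machine-learning models; the function name, the z-score approach, and the threshold here are all assumptions for the sake of illustration.

```python
from statistics import mean, stdev

def flag_outliers(amounts: list[float], z_threshold: float = 3.0) -> list[float]:
    """Flag amounts more than z_threshold standard deviations from
    the mean: a toy stand-in for ML-based anomaly detection."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical; nothing stands out
    return [a for a in amounts if abs(a - mu) / sigma > z_threshold]
```

A batch of fifty £10 card payments plus one £10,000 transfer would flag only the transfer, which is the kind of outlier a fraud team would want surfaced automatically.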

What’s Next?

As Alison Porter, portfolio manager on the Global Technology Leaders Team at Janus Henderson Investors notes, there’s still uncertainty around how the legislation will be applied and enforced.

“Globally, the EU AI Act’s risk-based approach has been broadly embraced. But it’s still very theoretical and doesn’t have proper parameters around it. We don’t yet have the liability portion of the act, which might not come to fruition, and it doesn’t have a code of practice yet,” she says. “Although they’ve established areas of different risk classification, we don’t have any true guidelines on what exactly goes into that or how different AI applications might interact. It’s simply too early and too broad an act to draw any implications from a pure investment perspective.”

A High Price to Pay for Non-Compliance

While we wait for the full rollout, the first port of call should be to check whether any AI-related activities fall into the unacceptable or high-risk categories. The cost of non-compliance is eye-watering: up to €35 million, or 7% of a firm’s total worldwide annual turnover for the previous financial year, whichever is higher. Not to mention the potential reputational risk.
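The "whichever is higher" rule is simple arithmetic, sketched below. The function name is illustrative; the figures are the Act's headline maximums for the most serious violations.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Headline maximum fine for the most serious violations:
    EUR 35 million or 7% of total worldwide annual turnover
    for the previous financial year, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)
```

For a firm with €1 billion in turnover, the cap is €70 million, since 7% exceeds the €35 million floor; at €100 million in turnover, the €35 million floor applies, as 7% would be only €7 million.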

While the EU AI Act is phased in, it’s also important to follow how non-European countries approach AI regulation. In the US—the world leader in AI technologies—the new Trump administration’s focus on deregulation signals a fundamental shift away from the approach of the former Biden administration.

As David Winborne comments, “the positive impacts of the regulation on AI safety will ultimately depend on enforcement across EU member states, which has proven to be unequal in the case of GDPR. Furthermore, since the EU AI Act has begun to be phased in from this month, it is likely to be a topic discussed in US-Europe trade negotiations given the extraterritorial nature of the legislation.”

Want to Learn More?

Download Morningstar’s free Guide to Artificial Intelligence to learn more about AI, from use cases and key terminology to region-specific regulations and AI-related challenges.


Christopher Johnson contributed to this article.


The author or authors do not own shares in any securities mentioned in this article. Find out about Morningstar's editorial policies.

The information contained within is for educational and informational purposes ONLY. It is not intended nor should it be considered an invitation or inducement to buy or sell a security or securities noted within nor should it be viewed as a communication intended to persuade or incite you to buy or sell security or securities noted within. Any commentary provided is the opinion of the author and should not be considered a personalised recommendation. The information contained within should not be a person's sole basis for making an investment decision. Please contact your financial professional before making an investment decision.


About Author

Joshua McAlpine is a content writer at Morningstar

© Copyright 2025 Morningstar, Inc. All rights reserved.
