Artificial Intelligence in UK Financial Services 2024
Executive Summary
The Bank of England and the Financial Conduct Authority (FCA) conducted their third survey on artificial intelligence (AI) and machine learning in UK financial services. The key findings are summarised below.
Benefits and risks of AI
- The highest perceived current benefits are in data and analytical insights, anti-money laundering (AML) and combating fraud, and cybersecurity.
- The areas with the largest expected increase in benefits over the next three years are operational efficiency, productivity, and cost base.
- Of the top five perceived current risks, four are related to data: data privacy and protection, data quality, data security, and data bias and representativeness.
- The risks that are expected to increase the most over the next three years are third-party dependencies, model complexity, and embedded or 'hidden' models.
- The increase in the average perceived benefit over the next three years (21%) is greater than the increase in the average perceived risk (9%).
- Cybersecurity is rated as the highest perceived systemic risk both currently and in three years. The largest increase in systemic risk over that period is expected to be from critical third-party dependencies.
Constraints
- The largest perceived regulatory constraint on the use of AI is data protection and privacy, followed by resilience, cybersecurity and third-party rules, and the FCA's Consumer Duty.
- The largest perceived non-regulatory constraint is the safety, security and robustness of AI models, followed by insufficient talent and access to skills.
Use and adoption
75% of firms are already using AI, with a further 10% planning to use it over the next three years. These figures are higher than the 58% and 14% respectively reported in the 2022 joint Bank of England and FCA Machine learning in UK financial services report.
Third-party exposure
A third of all AI use cases are third-party implementations. This is greater than the 17% found in the 2022 survey and supports the view that third-party exposure will continue to increase as model complexity rises and outsourcing costs fall. The top three third-party providers account for 73%, 44%, and 33% of all reported cloud, model, and data providers respectively.
Automated decision-making
Respondents report that 55% of all AI use cases involve some degree of automated decision-making, with 24% of those being semi-autonomous, i.e. while they can make a range of decisions on their own, they are designed to involve human oversight for critical or ambiguous decisions. Only 2% of use cases have fully autonomous decision-making.
Materiality
62% of all AI use cases are rated low materiality by the firms that use them, with 16% rated high materiality.
Understanding of AI systems
46% of respondent firms reported having only 'partial understanding' of the AI technologies they use, versus 34% that said they have 'complete understanding'. This is largely due to the use of third-party models, for which respondent firms noted a lack of complete understanding compared with models developed internally.
Governance and accountability
84% of firms reported having an accountable person for their AI framework. Firms use a combination of governance frameworks, controls and/or processes specific to AI use cases; over half of firms reported having nine or more such governance components. While 72% of firms said that their executive leadership were accountable for AI use cases, accountability is often split, with most firms reporting three or more accountable persons or bodies.
Further reading
You can find additional articles about AI in US financial services and the role of generative AI on our website.