AI Compliance Simplified - A Practical Guide for FCA Regulated Firms

Posted on: 29 May 2024

Written by: Rabih Zeitouny

AI - tomorrow's regulation, today?

The FCA is currently taking a pragmatic approach to the regulation of artificial intelligence (AI) by applying and adapting existing rules and principles to manage AI-related risks. This strategy aims to ensure effective governance of AI technologies while fostering innovation in financial services, without the need to develop a separate regulatory framework exclusively for AI.

The FCA itself is actively working to understand how AI is currently being deployed within the firms it regulates. It plans to learn from these firms and, based on that learning, develop the strategies needed to ensure safe and responsible AI adoption in the financial sector.

The Government’s expectations

Alongside this, however, in March 2023 the government set out five pro-innovation regulatory principles for AI, which the FCA and other regulators are expected to interpret and apply within their remits. These principles are:

  1. Safety, Security, Robustness;
  2. Fairness;
  3. Appropriate Transparency and Explainability;
  4. Accountability and Governance; and
  5. Contestability and Redress.

The FCA has interpreted the government's five pro-innovation principles for AI and has now established a preliminary framework for firms to follow. In its "Approach to AI" document, the FCA has issued recommendations and integrated AI considerations into existing regulations. This creates a provisional framework for firms to ensure compliance and adopt best practice, one which may yet evolve as AI technologies and their applications grow.

Below is a broken-down list of the key steps firms need to take now to ensure alignment with the FCA's interpretation of the government's five pro-innovation regulatory principles for AI. Whether your firm has already adopted AI or is just considering it, these guidelines highlight the substantial effort required to align with regulatory expectations.

1. Safety, Security, Robustness

  • Conduct regular audits and reviews of AI systems to identify potential security and safety risks. (SYSC 6.1, Principle 3)
  • Develop and maintain robust business continuity and incident response plans that are regularly tested to address potential AI-related security incidents promptly. (SYSC 13, Principle 11)
  • Develop strategies to ensure operational resilience in the face of AI-related disruptions, identifying critical business services and ensuring they can withstand and recover from severe scenarios. (SYSC 15, Principle 11)
  • Conduct thorough due diligence on AI providers to ensure they comply with regulatory requirements and have robust security measures in place. (SYSC 13, Principle 11)
  • Provide regular training for staff on the security, safety, and regulatory aspects of AI so that they are aware of the latest developments and best practices. (SYSC 6, Principle 3)
  • Establish cross-functional teams involving legal, compliance, technical, and risk management staff to review the safety of AI systems. (Principle 3, Principle 4)
  • Adhere to relevant technical standards, such as ISO standards, to ensure your AI systems meet high-security and safety benchmarks. (Principle 3)

2. Fairness

  • Clearly inform customers about AI use and explain how they can challenge AI-driven decisions. (Principle 7, Consumer Duty)
  • Establish cross-functional teams involving legal, compliance, and technical staff to review AI systems regularly for fairness and compliance. (Principle 8, Principle 9)
  • Recognise and mitigate biases in AI systems, distinguishing between acceptable and unacceptable biases in decision-making processes. (Consumer Duty)
  • Regularly assess your business models to ensure they do not unfairly disadvantage any customer group, and make necessary adjustments to ensure fairness in all AI-driven interactions. (Threshold Conditions, Principle 6)
  • Ensure AI-driven advice and decisions are suitable and in the best interest of customers. (Principle 8, Principle 9)
  • Implement procedures to prevent AI discrimination based on protected characteristics and ensure fairness in personal data processing in accordance with the UK GDPR and Data Protection Act. (Equality Act 2010, UK GDPR, Data Protection Act)

3. Appropriate Transparency and Explainability

  • Clearly document and communicate the objectives, risks, and benefits of each AI system to customers, including how AI is used in decision-making processes and challenge mechanisms (e.g., pricing, fraud detection), through user-friendly explanations. (Consumer Duty, Principle 7)
  • Create detailed internal documentation on how AI systems make decisions, identify specific points where AI impacts outcomes, and provide clear, simple explanations for these processes to non-technical staff and customers. Ensure policies are based on the level of risk associated with each AI application. (Principle 7)
  • Adhere to UK GDPR requirements for transparent data processing by including AI-related details in privacy notices and conducting regular data protection impact assessments for AI systems. (Articles 13 and 14 of UK GDPR)

4. Accountability and Governance

  • Map out all AI systems and processes used both internally (e.g., HR functions) and externally (e.g., customer-facing operations). Pay special attention to legacy systems that might pose risks. (Principle 3, SYSC 4.1.1)
  • Develop and implement robust governance procedures, including protocols for approving AI systems. Ensure multiple senior managers can oversee AI used across various business areas and functions. (Principle 3, SYSC 4.1.1, SM&CR)
  • Ensure senior managers are aware of AI use within their functions and take reasonable steps to prevent regulatory breaches. This requires them to have a certain degree of AI literacy and to integrate AI oversight into their responsibilities. (SM&CR)
  • Include AI as a regular agenda item in board and risk committee meetings. Ensure that AI-related management information flows to these bodies to enable effective oversight and challenge. (Principle 3, SM&CR, Consumer Duty)
  • Integrate considerations of current and future AI uses into strategies aimed at delivering good outcomes for retail customers. (Consumer Duty)
  • Conduct periodic reviews and updates of governance and accountability policies to ensure ongoing compliance with FCA rules and regulations, particularly when new AI technologies are introduced. (Principle 3, SYSC, SM&CR)

5. Contestability and Redress

  • Ensure your complaint handling procedures allow consumers to contest AI decisions. In those procedures, provide sufficient information to consumers so they can understand and challenge AI decisions. (Complaints Sourcebook (DISP), Chapter 1)
  • Adhere to UK GDPR requirements by ensuring AI decision-making transparency in your T&Cs, which should clearly set out consumers' redress options for automated decisions. (Articles 13, 14, and 22 of UK GDPR)

In summary then…

The FCA’s current approach to the regulation of AI emphasises flexibility, collaboration, and the integration of existing principles to address AI-related risks without stifling innovation. However, the regulatory landscape for AI in the UK is not static. The outcome of the upcoming General Election on July 4th could play a pivotal role in shaping future regulatory frameworks. Depending on the result, there may be shifts in how AI is proposed to be regulated. As such, it is crucial to stay informed and be prepared for potential changes in this area. In the meantime, the key steps noted above may be considered as ‘best practice’ for today.


Rabih Zeitouny

Rabih is a Senior Consultant within our Payment Services team.
