AI – a view from the FCA and some ground rules for avoiding risk

Isabella Macfarlane, Head of London Markets at Insurance Compliance Services Limited (ICS), part of our compliance proposition partners UKGI Group, gives an expert view on how firms can avoid some of the risks of using Artificial Intelligence (AI).

AI is now everywhere and becoming increasingly relevant. Even if you're not actively using it, an awareness of its capabilities and pitfalls is vital as the technology rapidly evolves.

Back in September 2025, the FCA published “AI and the FCA: our approach”. The regulator made it clear that it doesn't intend to introduce a separate set of rules for AI. Instead, it expects firms to manage AI within the framework of existing regulation. For example, the FCA highlighted how the Consumer Duty, and its rules on accountability and governance under the Senior Managers and Certification Regime (SMCR), already set expectations for how firms should approach the use of AI.

The FCA also referred back to its 2024 publication, “AI Update”, which mapped its existing rulebook against the Government’s guidance on AI.

Of course, turning this into day-to-day practice isn't always straightforward. With that in mind, the following ‘Ground Rules’ are designed to help firms apply the FCA’s expectations and build an approach to AI that is both compliant and constructive.

AI has in fact been in use for some time, but the significantly more advanced form we're now becoming familiar with demonstrates impressive capabilities even to the casual user. Firms are already using it to improve operational efficiency, create new insurance solutions and enhance customer outcomes.

But, as with any shiny new technology, there are risks. We're still in the early days of AI and much remains unknown. The FCA is technology-neutral and recognises that AI can improve customer experiences and outcomes, but it states that AI must be:

“… used safely and responsibly in UK financial markets. On the one hand, at the FCA we recognise that firms need regulatory clarity on AI – a predictable framework to guide their investments, operations, and risk management. On the other hand, we also recognise that if we, as the UK's financial services regulator, move too quickly or narrowly, we could unintentionally stifle innovation. Also, any regulatory action could potentially become out-dated very quickly as the technology evolves at pace.” 1


The FCA has developed a live testing environment to “build trust, reduce risk, and accelerate safe and responsible innovation” 2, allowing firms to test innovations and engage with the regulator in a consultative manner.

For now, the market bears a degree of responsibility for self-regulation, making sure that AI is used safely and responsibly. With that in mind, here are some suggested Ground Rules for firms experimenting with AI.

Ground Rules

1.     Embed AI use into your risk and compliance considerations

  • Include AI in your risk register and relevant policies, with detail proportionate to its use. For example, employees using AI to improve communications shouldn't input customer data into widely used applications like ChatGPT.
  • Maintain a clear ‘Use of AI’ policy accessible to employees, so they know what is and isn’t permitted.
  • Assign a new operational responsibility for AI oversight to an Approved Senior Manager and update their Statement of Responsibilities. Regulators will ask who has ultimate responsibility within the firm if issues arise.
  • Most importantly, if the use of AI determines the outcome of services provided, make sure you have good management information that allows you to monitor those outcomes and assess whether they are as expected or whether there are any outliers.

2.     Proof of concept

  • Implement proof-of-concept controls and testing before AI solutions are launched; this is absolutely essential when a solution is rolled out in a customer-facing function.
  • This should include documented sign-off that evidences the benefits of the AI solution.
  • Liaise with your professional indemnity (PII) insurers to make sure there aren't any issues or unexpected exclusions for the activities you're undertaking.
  • Likewise, if you're dealing with clients who are AI service providers or agents, or with clients who make significant use of AI themselves, make sure you understand the potential exposure and their insurance needs, be clear about the extent to which you can advise, and arrange appropriate cover while flagging any limitations. Again, where necessary, inform your PII insurer of the extent to which you're advising on AI risks and of any considerations you need to make.

3.     Safe launch

  • Start small: limit the rollout scope so all users are aware and potential harm is contained.
  • Monitor constantly: identify, track and document risks, adjusting the AI solution as necessary.
  • Learn and iterate: use early feedback to refine the solution before broader deployment.

4.     Understand the limitations and risks of AI

  • AI can reflect unconscious bias, as it learns from human input, and it can draw on or produce incorrect information. Review management information for unintended or unexpected outcomes.
  • Consider AI’s interaction with vulnerable customers. Will it deliver fair outcomes? Can it identify and appropriately support vulnerable individuals?

5.     Keep the human

  • AI could have enormous benefits for businesses, but human oversight is still critical.
  • Humans should analyse risks, outcomes and management information; AI can't be held solely accountable.
  • Make sure you have people with the appropriate level of skill and capability to manage AI systems.
  • Plan for potential AI failures. Staff should be able to continue operations manually if needed.

6.     Wider regulatory considerations

  • Remember: even without AI-specific regulation, firms must still adhere to existing requirements such as the UK GDPR and ICOBS, along with updates such as the Data (Use and Access) Act 2025.
  • Make sure communications, risk warnings and privacy notices remain up to date.
  • Consider whether you meet the criteria to participate in the FCA’s AI Lab and Supercharged Sandbox.
  • For firms interacting with or connected to EU entities, note the EU Artificial Intelligence Act (AI Act), which entered into force in August 2024. It establishes a common regulatory framework, classifies AI systems by risk, and requires transparency. Oversight of the EU AI Act is shared between national authorities in each member state and a new European AI Office within the European Commission. Applications categorised as ‘general-purpose AI’ (such as ChatGPT) are subject to transparency requirements, while those categorised as ‘high-risk’ must meet strict transparency, safety and oversight requirements and be monitored throughout their lifecycle.

The use of AI will continue to grow. If your firm has already adopted AI or is considering doing so, proceed with eyes and ears wide open, balancing innovation with caution.

Need more information?

UKGI Group is always on hand to provide guidance and support for any AI queries.

Get in touch with UKGI

1 https://www.fca.org.uk/news/blogs/ai-live-testing-use-ai-uk-financial-markets-promise-practice

2 https://www.fca.org.uk/news/blogs/ai-live-testing-use-ai-uk-financial-markets-promise-practice