Hearth Insights

EXECUTIVE ACCOUNTABILITY

Your name is already on this.

The FCA's SM&CR regime holds named executives personally accountable for AI decisions made by systems they approve. The question is not whether that applies to you. The question is what you have put in place.

£44.1m

FCA Enforcement · 2025

Nationwide fined for governance and oversight failures. The enforcement action that predates the Mills Review.

Jan 2026

FCA Mills Review

The FCA is examining personal liability for executives whose AI systems fail. This review will define what you are required to prove.

Zero

Treasury Select Committee · Jan 2026

Zero published FCA frameworks for adequate SM&CR assurance over AI decisions. There is no standard to point to.

THE REGULATORY REALITY

An enforcement action. A review. A gap.

In 2025, the FCA fined Nationwide £44.1m for governance and oversight failures. That enforcement action predates the Mills Review. The review is the escalation, not the starting gun.

In January 2026, the FCA launched a formal review, led by Sheldon Mills, examining exactly what SM&CR obligations apply to named executives whose firms deploy AI. The review is not examining AI in general. It is examining the accountability of specific, named individuals.

In the same month, the Treasury Select Committee confirmed that the FCA has not yet published clear guidance on what adequate SM&CR assurance over AI decisions actually looks like. There is no safe harbour. There is no standard you can point to and say you met it.

In April 2024, DLA Piper confirmed in published guidance that SM&CR personal accountability applies to AI decisions. This is not a hypothetical interpretation. It is the documented position of a major regulatory law firm.

THE ACCOUNTABILITY GAP

No guidance means no safe harbour.

SM&CR does not require you to prevent AI from making mistakes. It requires you to demonstrate that you took reasonable steps to govern AI decisions within your authority, and that you can prove what those steps were.

This is a narrower problem than governance. It is not about your firm's AI policy. It is about what you, specifically, as a named SMF holder, can show a regulator you did before you approved a system.

Most firms navigating this are doing so without a clear map. Standard governance frameworks document processes. They do not produce the forensic evidence a regulator would require. A policy document is not proof. A sign-off email is not proof. An external audit that predates deployment is not proof that the system behaved as governed after it went live.

The question a regulator would ask is not whether you had a policy. The question is: what can you show me about what this system actually did?

"Who in your organisation personally owns the question of whether your AI decisions are defensible under Consumer Duty?"

If the answer is not immediate, or if it maps to a committee rather than a name, that is the gap the FCA's review is designed to expose.

A DEFENSIBLE POSITION

What a regulator would ask for.

A defensible position is not a policy document or a risk register entry. It is the ability to produce, on demand, a forensic record that answers four specific questions.

01

What did your AI decide, and when?

A complete, timestamped record of every AI action, not a summary, not a log file. An immutable artefact that cannot be altered after the fact.

02

Was a human accountable at every consequential decision point?

Evidence that human authorisation was required and recorded, and that the system enforced this. Not a policy that said it should happen. Proof that it did.

03

Could those controls have been bypassed?

The critical question. A policy that could be circumvented is not a control. A control is something the architecture enforces, not something a team member could simply ignore under time pressure.

04

Can you reconstruct exactly what happened at a specific moment?

Forensic replay. If a regulator names a transaction, a customer interaction, or a model output, can you reconstruct the exact state of the system at that moment and prove what it did and why?

These are not aspirational standards. They are the questions that flow directly from the SM&CR duty to maintain adequate oversight and from the Consumer Duty obligation to demonstrate fair outcomes. They are what a forensic examination would require.
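
To make questions 02 and 03 concrete, the sketch below shows the difference between a policy and an enforced control. It is a minimal illustration, not Hearth Insights' actual code: the approver names, key handling, and payload format are assumptions, and a production system would manage keys and identity far more carefully.

```python
# Minimal sketch (hypothetical API, not the product's): an enforced control
# versus a policy. The action cannot execute without a valid signature from
# a named human; a bad or missing signature raises and nothing runs.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

APPROVERS: dict[str, Ed25519PublicKey] = {}  # SMF holder -> public key

def execute_action(payload: bytes, approver: str, signature: bytes) -> None:
    """Run a consequential AI action only under verifiable authorisation.

    There is no code path that skips this check, so the control cannot be
    ignored under time pressure.
    """
    APPROVERS[approver].verify(signature, payload)  # raises InvalidSignature
    ...  # only now does the action proceed

# Usage: the named executive signs the exact payload they approved.
key = Ed25519PrivateKey.generate()
APPROVERS["jane.smf16"] = key.public_key()
payload = b'{"model": "credit-risk-v4", "decision": "decline"}'
execute_action(payload, "jane.smf16", key.sign(payload))
```

The point is structural: there is no branch in which the action runs unsigned, which is what separates a control from a policy.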

THE INFRASTRUCTURE

One layer. No bypass.

Hearth Insights is AI enforcement infrastructure. It is the layer that sits between your AI systems and the decisions they execute, enforcing that a human was accountable at every consequential point, and recording that accountability in a form that cannot be altered, cannot be disputed, and can be produced on demand.

Every AI action is written to an immutable ledger. Every human authorisation is recorded with a cryptographic signature. Every control is enforced by the architecture, not by policy, not by training, not by the good intentions of a team under pressure.
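
As a minimal sketch of what "immutable" means mechanically (standard library only; the field names and schema are illustrative assumptions, not the product's actual format): each entry embeds the hash of its predecessor, so editing any past record breaks every hash that follows it.

```python
# Illustrative hash-chained, append-only ledger. Hypothetical schema.
import hashlib
import json
import time

def append_entry(ledger: list[dict], action: dict, authorised_by: str,
                 signature_hex: str) -> dict:
    """Record an AI action, its accountable human, and the chain link."""
    prev = ledger[-1]["entry_hash"] if ledger else "0" * 64
    body = {
        "timestamp": time.time(),        # when the AI acted
        "action": action,                # what it decided
        "authorised_by": authorised_by,  # the named human
        "signature": signature_hex,      # their cryptographic signature
        "prev_hash": prev,               # link to the prior entry
    }
    body["entry_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(body)
    return body

def verify_chain(ledger: list[dict]) -> bool:
    """Recompute every hash; any after-the-fact edit fails the check."""
    prev = "0" * 64
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev or recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```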

When a regulator asks what your AI did, who was responsible, and whether the controls could have been bypassed, Hearth Insights provides the forensic answer. Not a report. Not a reconstruction from memory. An artefact that has existed since the moment the decision was made.
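
Forensic replay, continuing the same hypothetical ledger sketch: verify the chain, then slice it at the moment the regulator names.

```python
# Continuation of the illustrative ledger above (same hypothetical schema).
def replay_until(ledger: list[dict], moment: float) -> list[dict]:
    """Return every recorded action up to a specific instant, or fail loudly.

    The slice is not a reconstruction from memory: it is the record as it
    existed when each decision was made, with integrity proven first.
    """
    if not verify_chain(ledger):
        raise ValueError("ledger integrity check failed")
    return [e for e in ledger if e["timestamp"] <= moment]
```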

Request a Briefing