One of the clearest themes from our latest AI Beyond Blind Trust discussion was that moving AI from proof of concept to production is not just a technical challenge. It is a governance challenge too.
That matters because governance is often framed as the thing that slows AI down. But the discussion pointed in the opposite direction. Done well, governance is what helps AI move from experimentation into trusted, scalable use. It gives organisations the structure, accountability and confidence needed to move faster in the long run.
Hosted by David Reed of DataIQ, the panel brought together leaders from NatWest, Leeds Building Society and Dufrain to explore what good AI governance actually looks like in practice.
What came through clearly was that governance is not something to bolt on once a use case is ready to go live. It is part of what makes production possible in the first place.
1. Moving from PoC to production requires governance, not just technical success
One of the strongest points in the discussion was that many AI initiatives stall not because the idea is weak, but because production introduces a whole new set of demands. Infrastructure, monitoring, evaluation, organisational change, risk ownership and the question of what happens if something goes wrong all become much more real once a use case moves beyond testing.
That is why governance belongs in the production conversation from the start. The discussion made clear that organisations are often building governance frameworks as they go, which can feel like friction, but is actually part of the journey to becoming comfortable with risk, ownership and safe deployment. A practical route forward came through strongly too: start with lower-complexity, lower-risk use cases, build the governance and evaluation frameworks around them, and then reuse those same foundations as use cases become more complex.
The key point is simple. Governance does not sit outside the journey from PoC to production. It is part of what makes that journey possible.
2. Good governance starts early with “can we?” and “should we?”
Another clear takeaway was that good governance starts before build, not after. One of the most practical frameworks shared in the session was this sequence: Can we build it? Should we build it? Is it working as intended? Does it continue to deliver in production? That is powerful because it turns governance into an ongoing decision-making discipline rather than a static policy exercise.
The discussion also made clear that “should we?” is the question teams most often leave too late. It is easy to get caught up in the technical art of the possible. It is much harder, and much more important, to keep asking whether AI is the right answer, whether a simpler technique would reduce risk, and whether the use case is genuinely delivering value rather than novelty. The panel was clear that this question should be asked at the beginning and then revisited throughout the lifecycle.
If that discipline is missing, governance is already arriving too late.
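For teams who want to make that discipline tangible, a rough sketch follows. It treats the four questions as recurring lifecycle gates rather than a one-off checklist. The structure and names here are our own illustrative assumptions, not a tool shown in the session.

```python
# Illustrative sketch only: the four governance questions expressed as
# recurring lifecycle gates. Names and structure are assumptions, not a
# framework presented by the panel.
from dataclasses import dataclass
from typing import Callable

@dataclass
class GovernanceGate:
    question: str
    check: Callable[[], bool]  # returns True if the gate currently passes
    recurring: bool            # re-asked through the lifecycle, not just once

def review(gates: list[GovernanceGate]) -> list[str]:
    """Return the questions that currently fail and need escalation."""
    return [g.question for g in gates if not g.check()]

# The lambdas are placeholders for real assessments.
gates = [
    GovernanceGate("Can we build it?", lambda: True, recurring=False),
    GovernanceGate("Should we build it?", lambda: True, recurring=True),
    GovernanceGate("Is it working as intended?", lambda: True, recurring=True),
    GovernanceGate("Does it continue to deliver in production?",
                   lambda: True, recurring=True),
]

open_questions = review(gates)  # run at every lifecycle stage, not once
```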
3. Governance should evolve as AI matures from pilot to rollout
A particularly useful theme from the discussion was that governance should be proportionate. The level of governance needed to get into a pilot is not the same as the level needed for a full production rollout. If organisations govern everything too heavily too early, they risk slowing progress before they have generated enough learning to move forward well.
That was paired with another important point: in a fast-moving space, it is rarely practical to disappear into a room and try to design every possible governance scenario upfront. The discussion instead leaned towards building governance around reusable categories and patterns, so that controls can evolve alongside emerging use cases and be applied more consistently across similar deployments.
This is where governance starts to look like an enabler rather than a blocker. It flexes with the maturity, exposure and risk profile of the use case.
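As a rough illustration of that “categories and patterns” idea, the sketch below maps risk tiers to reusable control sets, so a new use case inherits the controls of its tier rather than getting a bespoke framework designed from scratch. The tiers and controls are invented for illustration, not taken from the session.

```python
# Illustrative sketch: proportionate, reusable governance controls keyed by
# risk tier. Tier names and controls are hypothetical examples.
CONTROLS_BY_TIER = {
    "low": ["pre-launch review", "quarterly output sampling"],
    "medium": ["pre-launch review", "human-in-the-loop checks",
               "monthly drift monitoring"],
    "high": ["pre-launch review", "human-in-the-loop checks",
             "continuous drift monitoring", "named risk owner",
             "predefined pause triggers"],
}

def controls_for(use_case_tier: str) -> list[str]:
    """New use cases reuse the control set for their tier."""
    return CONTROLS_BY_TIER[use_case_tier]

# A pilot might start at "low" and inherit the "high" set at full rollout.
print(controls_for("medium"))
```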
4. Production governance means live monitoring, intervention triggers and authority to act
One of the strongest governance points from the panel was that governance becomes most real once a system is live. This is where production-readiness stops being theoretical. Good governance means defining live metrics upfront, agreeing what would trigger intervention, and deciding in advance how a system would be adjusted, paused or stopped if it began to drift from expected outcomes.
Just as important, the discussion addressed who gets the authority to act. The answer was not endless escalation or committee delay. It was that the triggers should be predefined before go-live, with named leaders closest to the risk empowered to intervene when thresholds are crossed. Some people may call that a ‘kill switch’, but the framing in the session was more useful than that: it’s simply disciplined governance, designed to intervene early and protect outcomes.
That is a helpful distinction for leadership teams. If a system cannot be monitored, challenged and paused in a structured way, it is not yet ready for production.
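To show what predefined triggers with named owners might look like in practice, here is a minimal monitoring sketch. The metric names, thresholds and pause mechanism are all assumptions for illustration; the panel described the principle, not an implementation.

```python
# Illustrative sketch: live metrics with predefined intervention triggers.
# Metric names, thresholds and the pause action are assumptions; the session
# described the discipline, not a specific system.
from dataclasses import dataclass

@dataclass
class Trigger:
    metric: str
    threshold: float
    owner: str  # named leader with authority to act, agreed pre-go-live

    def breached(self, value: float) -> bool:
        return value > self.threshold

# Agreed before go-live, not invented during an incident.
TRIGGERS = [
    Trigger(metric="complaint_rate", threshold=0.02, owner="head_of_service"),
    Trigger(metric="output_drift_score", threshold=0.15, owner="model_risk_lead"),
]

def pause_system() -> None:
    # Placeholder for the real pause/rollback mechanism.
    print("System paused pending review")

def check_and_act(live_metrics: dict[str, float]) -> None:
    for t in TRIGGERS:
        value = live_metrics.get(t.metric)
        if value is not None and t.breached(value):
            # Pausing is a governance action with a named owner,
            # not an open-ended escalation chain.
            print(f"Threshold crossed on {t.metric}: notifying {t.owner}")
            pause_system()

check_and_act({"complaint_rate": 0.031, "output_drift_score": 0.08})
```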
5. Strong governance keeps accountability human and works through existing risk frameworks
The clearest line of the discussion may also have been the simplest: the model is never accountable. Humans are always accountable. That cuts through a lot of noise. However advanced the technology becomes, responsibility for outcomes, fairness and risk still sits with people.
The discussion also pointed towards a practical way of embedding that accountability. Rather than creating an entirely separate policy universe for AI, the stronger approach is often to weave AI through the governance and risk frameworks organisations already use, including model risk, people risk and reputational risk. That does not remove complexity, especially in regulated sectors, but it is a more coherent route than treating AI governance as something detached from the wider control environment.
This is also where fairness oversight belongs. The session also explored bias, accessibility and proxy bias, but the governance point was broader than that. Good governance cannot only test technical performance. It also needs to check whether outcomes remain fair, explainable and appropriate across real customer groups, real channels and real-world contexts.
These takeaways only give a flavour of the conversation. The full discussion goes deeper into what good AI governance looks like in practice, how leaders can balance pace with control, and why the organisations that scale AI best are not the ones that skip governance, but the ones that build it in early enough to move with confidence.
Watch the webinar to hear the full conversation from Karen Dewar (NatWest Group), Carole Roberts (Leeds Building Society), Isobel Daley (Dufrain) and David Reed (DataIQ).
