Responsible AI in Practice: 7 Leadership Takeaways

Responsible AI is easy to talk about in principle. Operationalising it is where the real challenge begins. 

That was the focus of our latest AI Beyond Blind Trust discussion, hosted by David Reed of DataIQ with expert perspectives from Karen Dewar, Chief Data and Analytics Officer at NatWest Group, Carole Roberts, Director of Technology and Data at Leeds Building Society, and Isobel Daley, Head of AI here at Dufrain. Together, they explored what it really takes to move from proof of concept to production, how governance and accountability need to evolve, and why trusted AI outcomes depend on more than policy alone.  

This was one of those conversations worth watching and rewatching. Not just because it brought together a brilliant female-led panel, but because it got beyond the hype and into the real leadership questions around AI: what causes organisations to stall, where governance needs to flex, how bias can creep in, and why culture, capability and accountability matter just as much as the technology itself. We’ve pulled together seven of the standout takeaways below to give a flavour of the discussion, but the full webinar is where you get the nuance, practical examples and real depth from the panel. 


1. AI is moving from ambition to accountability 

As organisations push further into adoption, responsible AI is no longer just a technical topic. It is a leadership, governance and cultural priority. That matters because once AI becomes an enterprise capability, the questions shift. It is no longer only about what AI can do. It is about who owns the risk, how decisions are made, and whether outcomes can be trusted at scale.  

For leadership teams, that is the real shift now underway. AI maturity is not simply about more use cases. It is about stronger accountability. 


2. The gap between proof of concept and production is where many organisations stall 

One of the clearest messages from the panel was that getting a proof of concept to work is only one part of the story. Isobel pointed to the technical skills gap that often sits behind stalled initiatives, explaining that the capabilities needed to develop a proof of concept do not necessarily match the capabilities needed to take something safely into production. Infrastructure, monitoring, evaluation, organisational change and customer impact all come into play once a use case moves into the real world.  

She also highlighted a practical truth many organisations are experiencing – systems behave differently at scale. Risk becomes more real. Governance gets stress-tested. Her advice was pragmatic and highly usable: start with lower-complexity, lower-risk use cases, often internal-facing ones, and use those to build the surrounding skills, controls and frameworks required for more complex deployments later.  

That is an important point for leaders under pressure to accelerate. The fastest route to production is not always the boldest use case. Often, it is the one that helps you build repeatable foundations first. 


3. AI creates more value when it is treated as an enterprise capability 

Karen shared how NatWest’s journey has evolved, from exploring use cases and learning quickly what did and didn’t work, to creating the foundations for adoption, to increasingly treating AI as an enterprise capability. That means shared patterns, platforms and reusable components that allow teams to move faster without reinventing the basics every time. It also means recognising that the operating model matters as much as the technology itself.  

Karen also made a point that will resonate with many large organisations: none of this works at scale without the right foundations. That includes the modernisation of cloud and data estates, enterprise guardrails, and a broader leadership understanding of how AI can reshape customer and colleague experiences. In her view, education and awareness are key to unlocking innovation, because some of the best ideas sit with the people closest to customers.  

For organisations trying to industrialise AI, this is the real shift. AI cannot stay as a collection of isolated experiments. It has to become something the organisation can repeat, govern and scale with confidence. 


4. The capabilities around AI matter just as much as the model itself 

Carole brought a practical perspective to one of the most common blockers: making sure the adjacent skills and capabilities are in place. That includes the right data controls, data access, IT security and the wider infrastructure needed to support AI safely. Her point was a strong one. These capabilities need investment at the same rate and scale as AI skills themselves.  

That theme came through again later in the discussion on future skills. Isobel noted the growing demand for AI engineering and monitoring capabilities, but also warned against focusing too narrowly on technical build skills alone. Embedding AI successfully requires adjacent skills too, particularly around people, process and change. It is relatively easy to create a prototype. The harder part is making the solution usable, governable and sustainable inside a real organisation.  

If your AI strategy is still too model-centric, it may be missing the very capabilities that determine whether value ever shows up in practice. 


5. Good governance is proportionate, practical and built into the journey 

A strong thread through the discussion was that governance should not be bolted on at the end, nor should it become so heavy that it blocks learning too early. Karen described NatWest’s approach through four simple but powerful questions: Can I build it? Should I build it? Is it working as intended? And when it is in production, does it continue to deliver the outcomes I want? That framing helps bring governance to life. It makes it practical, continuous and rooted in real decision-making.  

Carole added an important nuance: the level of governance needed to get into a pilot is not the same as the level needed for full production rollout. If everything is governed too far to the left, organisations risk slowing traction before they have learned enough to move forward with confidence. Her advice was to think beyond a single use case and build governance around reusable categories where possible, rather than creating one-off governance models every time.  

That is a useful reminder for leadership teams trying to balance innovation and control. Governance should not be the thing that slows AI down. Done well, it becomes the thing that helps AI move responsibly. 


6. Just because AI can solve a problem does not mean it should 

This was one of the strongest and most practical themes of the whole session. Both Karen and Isobel stressed the importance of asking “should we?” right at the beginning of a project, not only after significant time and energy have already gone into building something. The advice was simple and memorable: just because you can does not mean you should. That question needs to sit alongside value, risk and customer impact from the outset.  

Isobel reinforced that not every problem needs an LLM. In some cases, traditional machine learning techniques are better suited. In others, a simpler, non-probabilistic approach may reduce risk and still deliver the required outcome. She also argued for being disciplined on value from day one: is the use case driving efficiency, improving customer or employee experience, or enabling meaningful innovation, or is it simply a gimmick?  

That level of discipline is central to responsible AI. It is not just about governing what gets built. It is about being selective enough to build the right things in the first place. 


7. Bias, accessibility and human accountability cannot be treated as side issues 

The panel’s discussion on bias moved well beyond standard talking points. Carole explored how AI can create unfair outcomes not only through training data, but through the channels and experiences wrapped around it. Accessibility itself can become a source of bias if digital-first approaches exclude certain groups, and proxy bias can emerge when channel preferences overlap with age, gender or other demographic factors. Her point that the model is never accountable, humans are always accountable, was one of the clearest lines of the discussion.  

Karen added practical steps around representative data, correcting for over- or under-representation, and creating the conditions for teams to challenge results that do not look right before issues become embedded. Isobel built on that by urging organisations to make bias a measurement problem rather than a judgement call, to involve diverse teams in development, to maintain ongoing monitoring and audit, and to have the confidence to pause or roll back where evidence shows harm or unfairness emerging.  

For leaders, the takeaway is clear. Trusted AI outcomes do not come from good intentions alone. They come from representative data, diverse perspectives, ongoing scrutiny and visible accountability. 

Watch the full discussion

These takeaways only scratch the surface. The full discussion goes deeper into governance, bias, accountability, operating models and what it really takes to move AI into production responsibly.