Following our recent responsible AI webinar with guest host David Reed of DataIQ, Karen Dewar, Chief Data Officer at NatWest, Carole Roberts, Director of Technology & Data at Leeds Building Society, and Isobel Daley, Head of AI here at Dufrain, we received a brilliant range of questions from attendees. They went beyond the hype and straight into the issues leaders are working through right now – from governance, skills and operating models to environmental impact, experimentation and competitive pressure.
We could not get through every question live, so we have pulled together a selection of the audience Q&A below. The questions are presented as asked, with answers from Dufrain’s AI team.
If responsible AI is moving from principle to practice in your organisation, these are exactly the kinds of conversations worth having.
Audience Q&A
Question
Have you been able to successfully utilise GenAI coding capabilities (e.g. Codex, Claude Code) to transform your SDLC process? Appreciate it’s not feasible/ stable to do a massive overhaul so quickly – but it seems like the players that adopt those capabilities first will gain a real headstart. What’s your view on this?
Answer
We’ve used GenAI coding tools to accelerate scaffolding, refactoring, and exploratory development, while keeping engineers accountable for quality and production readiness. Our view is that early adopters do gain an edge, but the real advantage comes from integrating these tools pragmatically into existing SDLC practices and building on from there, rather than attempting a rapid end-to-end overhaul.
Question
What do you think the key new skills will be for successful professionals? Would you consider everyone will be effectively expected to “build” and “ship” code? How would that change the makeup of project teams? Interested to hear your perspective!
Answer
The value coders (be they data scientists, developers, etc.) bring has always been in their subject matter expertise and understanding of how and why solutions are built as they are, with coding just being the vehicle for that. It’s like hiring an electrician: you’re not paying them to cut some wires, you’re paying them to know which wires to cut. As coding becomes more automated, the skills focus will shift even further onto knowledge and experience of the solution behind the code, with a key skill being the ability to leverage AI to automate the manual coding process efficiently and robustly. Automation also puts greater emphasis on human skills, such as communicating clearly, embedding change, and building the relationships that make technology successful in the real world.
Question
Linked to the ‘regulation’ comments do you see the ‘credit model validation teams’ being expanded and developed to cover ‘AI models’ in order to ensure explainability, etc. Or do you think we need different skills and a different approach?
Answer
We see strong value in extending existing credit model validation capabilities to cover AI models, particularly around explainability, controls, and independent challenge. However, generative and agentic AI introduce new behaviours and risks, so this needs to be complemented with additional skills and tooling focused on data provenance, prompt design, grounding, and ongoing model behaviour monitoring rather than a simple extension of traditional validation approaches.
Question
Going beyond simple prompting, and confined custom GPTs/ Copilot agents – do you have any guidance on how that gets tested, rolled out and utilised to transform the fundamental approach? In practical terms – how do you think a robust but not-too-restrictive TOM looks like – one that would allow for good enough experimentation to lead to actual impact
Answer
Building and testing solutions in such a way that the process is controlled enough to be low risk but agile enough to allow innovation is indeed a difficult balance to strike, and is a very wide topic. Just to pick out a couple of principles:
- Proportional governance – the level of restriction and guardrails is higher for tools with high-risk outcomes than for low-risk ones
- Sandboxes designed for experimentation in a secure, no-risk environment
- Early end‑user involvement, to validate what’s being built and to incorporate real‑world feedback as the solution evolves, helping to ensure experimentation leads to genuine adoption and impact
- A graduated project roadmap starting with prototype -> pilot -> production
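To make the first and last principles concrete, here is a minimal sketch of proportional governance as code. The risk tiers, guardrail names, and thresholds are hypothetical illustrations, not a standard or Dufrain's actual framework; the idea is simply that the release gate for each stage of the prototype -> pilot -> production roadmap demands more controls as the risk tier rises.

```python
from dataclasses import dataclass

# Hypothetical guardrails per risk tier: higher-risk use cases
# must satisfy more controls before progressing a stage.
GUARDRAILS = {
    "low": ["peer_review"],
    "medium": ["peer_review", "bias_check"],
    "high": ["peer_review", "bias_check", "human_sign_off", "audit_log"],
}

@dataclass
class AIUseCase:
    name: str
    risk_tier: str               # "low" | "medium" | "high"
    completed_controls: list     # controls evidenced so far

def release_gate(use_case: AIUseCase) -> bool:
    """A use case may progress (prototype -> pilot -> production)
    only when every guardrail for its risk tier is satisfied."""
    required = GUARDRAILS[use_case.risk_tier]
    return all(c in use_case.completed_controls for c in required)

# A high-risk tool missing human sign-off stays blocked.
tool = AIUseCase("credit_scoring_assistant", "high",
                 ["peer_review", "bias_check", "audit_log"])
print(release_gate(tool))  # False until human_sign_off is recorded
```

The point of expressing it this way is that the gate is automated and proportionate: low-risk experiments sail through with light review, while high-risk ones accumulate evidence before release.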
Question
I’d be really interested to know how you consider Environmental Impact as AI usage increases – are there any metrics you use to assess energy / resource consumption relating to e.g. CoPilot usage?
Answer
We consider environmental impact primarily through proxy metrics, such as compute consumption, token usage, and cost monitoring, which are already embedded in our AI delivery and governance frameworks. Our focus is on using the smallest model and lowest-cost/compute option that delivers the outcome, and avoiding unnecessary or always-on AI usage as adoption scales.
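As an illustration of the proxy-metric approach, the sketch below tracks token usage per team and converts it into estimated cost and energy figures. The per-1k-token cost and watt-hour numbers are placeholder assumptions for the example, not published vendor figures; in practice they would come from your provider's billing data and whatever energy estimates you trust.

```python
from collections import defaultdict

class UsageTracker:
    """Tracks token consumption as a proxy for cost and energy."""

    def __init__(self, cost_per_1k_tokens: float, wh_per_1k_tokens: float):
        self.cost_per_1k = cost_per_1k_tokens
        self.wh_per_1k = wh_per_1k_tokens
        self.tokens_by_team = defaultdict(int)

    def record(self, team: str, prompt_tokens: int, completion_tokens: int):
        # Both prompt and completion tokens count towards consumption.
        self.tokens_by_team[team] += prompt_tokens + completion_tokens

    def report(self, team: str) -> dict:
        tokens = self.tokens_by_team[team]
        return {
            "tokens": tokens,
            "est_cost": round(tokens / 1000 * self.cost_per_1k, 4),
            "est_wh": round(tokens / 1000 * self.wh_per_1k, 2),
        }

# Placeholder rates: 1p per 1k tokens, 0.3 Wh per 1k tokens.
tracker = UsageTracker(cost_per_1k_tokens=0.01, wh_per_1k_tokens=0.3)
tracker.record("finance", prompt_tokens=1200, completion_tokens=800)
print(tracker.report("finance"))
# {'tokens': 2000, 'est_cost': 0.02, 'est_wh': 0.6}
```

Surfacing these numbers per team makes it easy to spot always-on or unnecessarily heavyweight usage, which is where the "smallest model that delivers the outcome" principle gets applied.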
Question
How do you, as AI leaders, use GenAI capabilities personally in your line of work? Do you believe having a deep understanding of the tech stack is key in shaping strategy?
Answer
We use GenAI day-to-day to accelerate drafting, analysis, and early technical exploration, and we deliberately use our own organisation as a test-bed through initiatives like our M365 Copilot rollout. We believe hands-on understanding of the AI and data stack is essential to shaping credible strategy, which is why our leaders stay close to building and deploying these solutions rather than treating them as purely conceptual.
Question
Do you see the biggest wins for business in customer or internal systems or processes?
Answer
From our experience, the fastest wins typically come from internal systems and processes, where adoption barriers are lower. Internal wins often act as the catalyst for broader transformation, which can then be scaled out to customers.
Question
How do we prevent “responsible AI” from becoming a competitive disadvantage?
Organisations that invest seriously in safety, fairness, and human oversight often move slower than those that don’t. How do we create market and regulatory conditions where responsible AI is the winning strategy, not a handicap?
Answer
This is a tension we grapple with directly in client work, and the framing of responsible AI as a slowdown is, in our view, a short-term lens.
The drag typically comes from bolting governance on at the end. When it is embedded in your architecture and evaluation criteria from the start, it stops being a gate and instead becomes a feature.
On regulation, the most effective lever is not prescriptive rules but liability clarity. When organisations are genuinely accountable for downstream consequences, responsible AI stops being a values question and becomes a risk management question.
The competitive-disadvantage argument also inverts quickly: enterprise procurement increasingly scrutinises AI governance, and top technical talent chooses not to work for organisations it does not trust.
Question
Absolutely agree AI is a leadership challenge – perhaps across functions. Have you encountered competing priorities in business, tech, and risk that become a bottleneck? How have you addressed that?
Answer
Competing priorities between business pace, technical delivery, and risk management can be particularly prevalent when governance is treated as a separate or downstream activity. We address this by embedding governance-by-design into delivery from the start – using clear roles, proportionate stage gates, and automated controls within our operating model so risk, technology, and business decisions move forward together rather than sequentially.
Keep the conversation going
Responsible AI isn’t a side conversation. It sits at the heart of how organisations scale AI with confidence, accountability and real business value.
If the questions raised here resonate — whether around governance, skills, operating models or experimentation — you’re not alone. These are exactly the challenges leaders are navigating as AI moves from principle into practice.
At Dufrain, we help organisations move from discussion to action, with the right data foundations, governance and practical delivery approach to make AI work in the real world.
Did you miss the discussion?
Watch the AI Beyond Blind Trust panel discussion here.
