Artificial intelligence (AI) is transforming almost every area of our society: whether we realise it or not, our lives are now influenced by AI on a daily basis. While the productivity benefits are becoming evident, the data foundation for AI is critical, and adoption challenges — spanning ethical, social, and legal concerns — require careful governance and organisational readiness as AI becomes more pervasive and powerful.
In this blog, I’m going to discuss some of the issues highlighted by recent cases of AI misuse or malfunction, and explain how we need to make sure we have appropriate controls and checks in place to ensure that AI is used responsibly.
AI and bias

One of the most common and serious challenges in adopting AI is the potential for bias and discrimination. AI systems are often trained on data that reflects existing human prejudices or inequalities, or that is not representative of the diversity of the target population. This can easily result in AI systems that produce unfair or inaccurate outcomes for certain groups of people, such as denying them access to services, opportunities, or resources.
For example, in 2020, the UK government had to scrap an algorithm that was used to grade students’ exams, after it was found to disproportionately downgrade students from disadvantaged backgrounds1. The algorithm was based on historical data reflecting the performance of previous cohorts of students, which was influenced by factors such as school quality, socio-economic status, and ethnicity. It failed to account for the individual abilities and achievements of the students, relying instead on a flawed and biased proxy, highlighting the need for robust AI data analytics to ensure fairness.
Another example is the case of a Chevrolet dealership in the US, whose website used a chatbot powered by ChatGPT to interact with potential customers online2. The chatbot was supposed to provide helpful and relevant information about the cars and the deals, but it lacked adequate guardrails: users were quickly able to manipulate it with crafted prompts into recommending rival brands and even “agreeing” to sell a new car for a dollar. The episode highlights the importance of AI data integration and curation, and of constraining a general-purpose model before putting it in front of customers.
These examples show that AI systems are not neutral or objective; they reflect the values and assumptions of their creators and the data they use. To address these AI adoption challenges, we need to:
- ensure that AI systems are designed and tested with the principles of fairness, transparency, and accountability in mind
- monitor and audit the data and the algorithms used to train and run AI systems, and identify and mitigate sources of bias or error
- ensure that AI systems are explainable and understandable to users and stakeholders, and that they provide mechanisms for feedback and redress in case of adverse outcomes.
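To make the auditing point above concrete, a simple fairness check might compare approval rates across demographic groups in a system’s logged decisions. The sketch below is purely illustrative: the group labels, audit data, and 0.2 tolerance are hypothetical, not a regulatory standard.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit log: (group, was_approved)
audit = [("A", True), ("A", True), ("A", False), ("A", True),
         ("B", True), ("B", False), ("B", False), ("B", False)]

gap = demographic_parity_gap(audit)
if gap > 0.2:  # illustrative tolerance only
    print(f"Warning: approval-rate gap of {gap:.2f} exceeds tolerance")
```

A check like this would run regularly against production decision logs, with any breach triggering investigation rather than automatic action.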
AI and safety

AI systems also have the potential to cause harm or damage if they are not reliable, secure, or robust. One of the critical AI adoption challenges is that AI systems are often complex and opaque, and may behave in unpredictable or unintended ways, especially in novel or uncertain situations. AI systems may also be vulnerable to malicious attacks or manipulation, such as hacking or spoofing. This can result in AI systems that pose risks to the safety and well-being of the users, the public, or the environment.
For example, in 2024 a Canadian tribunal ruled against Air Canada after the airline’s customer-service chatbot gave a passenger incorrect information about bereavement fares3.
The airline argued that it was not responsible for the chatbot’s answers, but the tribunal found it liable for the misleading advice and ordered it to compensate the customer. This case highlights the need for AI solutions for business to prioritise reliability and customer trust: organisations remain accountable for what their AI systems say.
Another AI safety concern is autonomous weapons: weapons that can select and engage targets without human intervention. Autonomous weapons are controversial and raise ethical and legal concerns, such as the loss of human control, accountability, and dignity, and the risk of escalation, proliferation, or misuse. Many experts and activists have called for a ban on, or regulation of, autonomous weapons, arguing that they are incompatible with international humanitarian law and human rights law, and that they pose a threat to global peace and security. Debates like this create a clear need for AI strategy consulting to guide ethical AI development.
These examples show that AI systems are not infallible or inherently trustworthy; their behaviour depends on the quality and integrity of their inputs and outputs, and on the context and conditions in which they operate. To mitigate these risks, it is crucial to:
- develop and deploy AI systems with the principles of safety, security, and robustness in mind
- test and validate the performance and behaviour of AI systems, and ensure that they meet the standards and expectations of users and society
- ensure that AI systems are controllable and reversible, and that they have safeguards and fail-safes in case of malfunction or emergency.
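The “safeguards and fail-safes” point can be illustrated with one common pattern: gate the model’s answer behind a confidence check and escalate to a human when the model is unsure. Everything in this sketch is hypothetical (the names, the assumption of a calibrated confidence score, the 0.8 threshold); it shows the shape of the control, not a specific product’s implementation.

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    answer: str
    confidence: float  # assumed calibrated to [0, 1]

def respond(output: ModelOutput, threshold: float = 0.8) -> str:
    """Return the model's answer only when confidence clears the
    threshold; otherwise fall back to a human agent (the fail-safe)."""
    if output.confidence >= threshold:
        return output.answer
    return "I'm not certain about that - transferring you to a human agent."

# Confident answer passes through; an uncertain one is escalated.
print(respond(ModelOutput("Your flight departs at 09:15.", 0.95)))
print(respond(ModelOutput("Bereavement fares may be refundable.", 0.40)))
```

The key design choice is that the fallback path fails safe: when in doubt, the system hands control back to a person rather than guessing.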
Conclusion
AI is transforming data operations and is one of the most powerful technologies in modern history. It can bring many benefits and opportunities to our society, from enabling businesses to scale automation through to pioneering health benefits, as highlighted in our insights on data and AI trends in 2025.
However, we should not shy away from the fact that it also comes with a host of challenges and risks that need to be addressed and managed if businesses are to future-proof their digital plans. We cannot blindly trust AI; we need to adopt it with caution and care, with a clear understanding of its implications and limitations, and with the human factor kept firmly in the loop.
We need to establish and enforce appropriate controls and checks to ensure that AI is used in a way that is ethical, responsible, and accountable, and that respects the values, rights, and interests of users and society. We need to foster a culture of AI ethics and governance, and engage in dialogue and collaboration among AI developers, users, regulators, and stakeholders, so that AI serves the common good and human dignity.
How Dufrain Can Help

Dufrain are helping businesses take the first steps on their AI journey by:
- providing our expertise and technical accelerators in AI Readiness Risk Assessments
- running our Dufrain Data Labs with clients to rapidly prove AI use cases and demonstrate value
- implementing process automation across Finance, Operations and Marketing to drive efficiency savings
- acting as a critical friend to our clients in a truly value-add advisory capacity.
For information about navigating AI and data quality, have a read of our recent blog.
Contact us for more information on how we can support your Data & AI journey.
References
1: UK exam results algorithm scrapped after protests
2: Car dealership’s AI chatbot went rogue, pranking customers and staff
3: Air Canada faces lawsuit over ‘misleading’ and ‘false’ chatbot
