
In today's corporate merry-go-round, it seems that every company is rushing to embed artificial intelligence (AI): automating systems, streamlining processes and, ideally, cutting costs.
Yet in the stampede towards an AI-powered "new world", too many businesses are making an expensive mistake: adopting powerful technologies without truly understanding their capabilities and how to manage them.
This lack of understanding isn't just a technical oversight; it's a cost and reputational catastrophe waiting to happen. When things go wrong with "black-box" systems whose decisions are neither explained nor understood, tracing the cause and reversing the damage is difficult, if not practically impossible.
Using governance as a competitive advantage
The smartest organisations are placing AI governance in the boardroom alongside their strategy. They know it's a recipe for lasting success, offering protection against the risks of harm and financial loss (Singla et al., 2025).
In fact, proactively building ethics and oversight into AI system design and deployment from the outset doesn't stifle innovation; it delivers clarity and certainty. Systems become transparent and explainable, and the company stays in control with clear lines of accountability.
Luckily for businesses, building in effective AI governance at the outset doesn't have to mean starting from scratch. Several well-established frameworks already exist that businesses can adopt as the foundations of their AI strategy.
Here at Anekanta®, we've been pioneering in this space since 2020, when we created our 12-principle AI Governance Framework (Anekanta®, 2020).
Recognised by the UK Government and the OECD (Anekanta®, 2024), and adopted by the Institute of Directors in its AI Governance in the Boardroom business paper (IoD, 2025), it provides a strategic blueprint for the highest level of accountability when embedding responsible AI into organisational culture.
Our principles are timeless and adaptable, designed to evolve alongside emerging global standards rather than chase them.
For businesses looking to operationalise AI governance across their business functions as a mark of trust, the new AI Management System standard, ISO/IEC 42001 (ISO, 2023), offers a certifiable route.
It ensures AI policy is established at board level and that systems are managed in a measurable, auditable way, with suitable controls that flag performance drift through monitoring and review.
However, adherence to ISO/IEC 42001 alone does not guarantee compliance with emerging regulations such as the EU AI Act, which introduces additional legal obligations based on risk.
In the EU, AI systems are categorised according to the level of risk they pose to individuals' health, safety, and fundamental rights (EU Law, 2024).
Systems that present unacceptable risks, such as those employing manipulative techniques, are prohibited outright, while high-risk systems that affect access to services, education, and employment based on biometric and sensitive data are permitted but subject to strict controls.
For high-risk AI, companies must implement robust data governance, human oversight, security, and accountability, and be able to prove it. A declaration of conformity, whether internal or independently certified, is not optional.
The lesson is clear: businesses need to understand the AI Act and similar frameworks before they design their systems, not after. Retrofitting compliance is always more expensive, not to mention more complicated, than embedding it from the start.
The Dutch Government: a warning to us all
If any organisation still believes that governance is optional, the Dutch childcare benefits scandal should serve as a sobering reminder (Amnesty International, 2021).
Back in 2013, the Dutch tax authority rolled out a self-learning AI system designed to detect false claims for childcare benefits. The plan sounded promising: let the automated system flag fraudulent cases and speed up processing.
However, what began as an efficiency drive ended in disaster. The algorithm, riddled with bias, disproportionately targeted families with dual nationalities and low incomes, flagging them as likely fraudsters.
With no meaningful human oversight to challenge this bias, coupled with a lack of AI literacy regarding how the system made decisions, its judgments couldn't be effectively questioned by government officials.
Over a period of more than five years, thousands of innocent parents were accused of fraud and ordered to repay benefits they had rightfully received. Many fell into financial hardship; there were suicides, and some lost custody of their children.
For years, these families fought an uphill battle to prove their innocence. The process was confusing, appeals often fell on deaf ears, and the government largely turned a blind eye, until the scandal finally broke after investigative journalists took up the parents' complaints.
This tragedy illustrates exactly what can happen when organisations deploy AI without governance from the outset. Governance means asking the right questions to understand what AI models are doing and ensuring human accountability.
Machines may process data at superhuman speed, but they cannot understand fairness, context, or compassion. Only humans can do that, and only through proper governance.
Compliance is not the enemy of innovation
Too often, businesses treat compliance as a blocker to creativity, but governance is what allows innovation to thrive. When teams design AI systems with ethical principles, transparency, and accountability built in from inception, they create technology that is not only compliant but also trusted by customers, regulators, and the public.
One practical step is to establish an AI risk committee within the organisation. Such a body can guide development teams on responsible deployment, assess potential harms, and ensure innovation doesn't come at the expense of safety or fairness.
If we want AI to deliver lasting value, not just short-term gains, we must build it on a foundation of ethics and understanding. After all, governance is not red tape. It's the scaffolding that keeps progress standing upright.
References

Singla, A. et al. (2025) 'The State of AI: How organizations are rewiring to capture value'. QuantumBlack by McKinsey. Available at: https://www.mckinsey.com/~/media/mckinsey/business%20functions/quantumblack/our%20insights/the%20state%20of%20ai/2025/the-state-of-ai-how-organizations-are-rewiring-to-capture-value_final.pd (Accessed: 26 October 2025)

Anekanta® (2020) 'AI Governance Framework for Boards'. Anekanta Ltd. Available at: https://anekanta.co.uk/ai-governance-and-compliance/anekanta-responsible-ai-governance-framework-for-boards/ (Accessed: 26 October 2025)

Anekanta® (2024) 'AI Governance Framework for Boards'. OECD.AI Catalogue of Tools and Metrics for Trustworthy AI. Available at: https://oecd.ai/en/catalogue/tools/responsible-ai-governance-framework-for-boards (Accessed: 26 October 2025)

Institute of Directors (IoD) (2025) 'AI Governance in the Boardroom'. IoD. Available at: https://www.iod.com/resources/business-advice/ai-governance-in-the-boardroom/ (Accessed: 26 October 2025)

ISO (2023) ISO/IEC 42001:2023 Information technology – Artificial intelligence – Management system. Geneva: International Organization for Standardization. Available at: https://www.iso.org/standard/42001 (Accessed: 26 October 2025)

EU Law (2024) 'Regulation (EU) 2024/1689 of the European Parliament and of the Council: laying down harmonised rules on artificial intelligence and amending certain Union legislative acts'. OJ L168/1. Available at: Regulation – EU – 2024/1689 – EN – EUR-Lex (Accessed: 27 October 2025)

Amnesty International (2021) 'Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal'. EUR 35/4686/2021. London: Amnesty International. Available at: https://www.amnesty.org/en/documents/eur35/4686/2021/en/ (Accessed: 26 October 2025)



