
We are losing our ability to think critically about our data and the AI systems built on it. As our everyday lives become increasingly intertwined with data and AI, organisations pride themselves on being “data-driven”, but our lack of confidence to question the data makes me wonder which direction we’re being “driven” in.
AI is only as ethical as the data and culture that shape it. Before investing in AI, organisations must first build a strong data culture – one that fosters ethical decision-making, stakeholder awareness, and open dialogue. As AI becomes embedded in daily work, everyone must be equipped to engage, challenge, and contribute to more responsible outcomes. This article explores strategies for embedding ethical data principles into corporate culture, drawing on real-world examples and best practices.
What is data culture?
Fully embracing data and AI to become a truly “data-driven” organisation demands a fundamental cultural shift. Without a strong data culture, even the best technologies can fall short of their promises, costing organisations a lot of time, effort, and money. Hidden change management costs have made organisational leadership cautious about choosing new technologies over familiar legacy systems.
Data culture – the shared values, behaviours, and confidence around data – can determine whether these investments deliver meaningful, ethical, and lasting impact. There are three basics of data culture everyone needs to understand:
- Data is imperfect – low data quality leads to inaccurate conclusions and, therefore, poor decision-making
- Data can be both good and bad – trends can be revealed, but they can also be manipulated
- Critical thinking is essential – over-reliance on AI directly undermines it
Challenging your data more brings a multitude of benefits: accuracy increases as poor data points are removed or adjusted, insights become clearer and more relevant, and decisions are made faster and with stronger evidence. Most importantly, it gives the data – and therefore any AI system built on it – more inherent credibility.
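As a concrete illustration, here is a minimal sketch of “challenging the data” before drawing conclusions from it. It assumes a pandas DataFrame; the column name, the example values, and the plausible range are hypothetical and would need tailoring to your own data.

```python
# A minimal sketch of "challenging the data" before trusting it.
# The column name, example values, and plausible range are hypothetical.
import pandas as pd

def challenge_data(df: pd.DataFrame, column: str) -> pd.DataFrame:
    """Flag records that deserve scrutiny instead of silently trusting them."""
    report = df.copy()

    # Surface gaps: missing values quietly skew averages and trends.
    report["is_missing"] = report[column].isna()

    # Surface implausible values: a domain-informed range catches data-entry
    # errors that purely statistical checks can miss on small samples.
    lower, upper = 1.0, 5_000.0  # hypothetical bounds for an order value
    report["out_of_range"] = ~report[column].between(lower, upper) & ~report["is_missing"]

    return report

df = pd.DataFrame({"order_value": [120.0, 95.5, None, 110.0, 9_999.0]})
flagged = challenge_data(df, "order_value")
print(flagged[flagged["is_missing"] | flagged["out_of_range"]])
```

Flagged points are then reviewed, removed, or adjusted by someone with domain knowledge – the point is that the questioning happens before the data feeds a dashboard or a model.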
How data culture underpins AI literacy
The fundamentals of data culture are the first step towards achieving AI literacy. The 2025 AI Index Report from Stanford University identifies AI literacy as a key workforce skill, one that could lend itself to cultivating the critical thinking skills needed for responsible AI adoption.
The EU AI Act states that “providers and deployers of AI systems” need to be AI literate. It’s difficult to define exactly what makes someone “literate” in this sense, but the requirement is clear for companies that want to produce or use AI, whether for external or internal purposes: they must show efforts to train staff in the foundations of AI – how it works, how to use it, and its risks.
Responsible AI (RAI) maturity assessments exist to indicate where organisations stand, but a lack of standardisation makes comparisons difficult. The introduction of AI literacy requirements should give a clearer indication of RAI maturity at an EU level. With the compliance deadline of 2nd February 2025, we should hopefully start to see the impact over the coming months as training is rolled out across organisations.
The upside? AI literacy programs are strengthening foundational data skills, reinforcing a common understanding of an organisation’s beliefs, practices, and processes around data usage.
The same report also highlights the impact that AI usage is having on our skills. It references research showing AI being used for tasks that draw on cognitive skills – especially critical thinking, active listening, and reading comprehension. This underscores the need to encourage the workforce to keep exercising these skills rather than relying on AI, or they may slowly lose them.
The emerging role of AI ethics specialists
This is where AI ethics specialists come in. The role constructively critiques and helps resolve ethical dilemmas such as algorithmic bias, transparency issues, and gaps in accountability. This expertise comes from holding a bigger-picture view: being confident enough to question the data, reviewing the impact on various stakeholders, and holding the organisation accountable if ethical standards are not upheld. The need for, and scarcity of, these skills is highlighted by McKinsey’s 2025 State of AI report, which shows that more than 75% of organisations surveyed had difficulty hiring AI ethics specialists.
AI ethics specialists also play a key role in ensuring that everyone interacting with AI – from developers to users – can hold the organisation accountable through a clear feedback loop. It’s this loop that keeps documentation transparent and understandable for all. Human-centric design is imperative if technology is to keep working for us and providing us value.
The challenges organisational leadership have with AI ethics
Beyond hiring difficulties, organisational leadership faces consistent challenges in embedding ethical AI practices:
- Alignment between data and business strategies
Without alignment, data won’t drive business outcomes and insights become irrelevant, creating technical debt that delivers little value.
- Communication gaps between technical and business leaders
Misunderstandings – like those around AI hallucinations – can hinder responsible use. Knowing that AI can generate plausible but false output helps determine where it can be applied safely and effectively.
- Navigating regulation and internal motivation
Legal frameworks like the EU AI Act are catching up, but ethics often falls outside legislation. With policy slower than innovation, what motivates organisations to act ethically before legal consequences emerge? This is the core challenge that AI ethics specialists must navigate.
Practical steps to build an AI-ready and responsible data culture
Let’s lay out some practical steps any organisation can take on the journey to becoming more responsible with their data and AI.
- Embed data literacy into AI literacy programs
Train all employees on the fundamentals of data, not just data practitioners. Tailor this training to improve engagement and build confidence with relevant tools – this can be as simple as using examples from technology already used in-house that employees are most familiar with.
- Include ethical checks in existing data and AI project reviews
Introduce questions about potential harms, explore unintended consequences, and outline feedback mechanisms for everyone, from developers to end-users (a minimal sketch of what this could look like follows this list). Project reviews can also ensure continued alignment with business and ethical goals.
- Assess stakeholder impact early and often
Consider internal stakeholders, external stakeholders, and organisational reputation. For each of these areas, ask:
- Who would be impacted, and how?
- Can the impact be reduced? To what extent?
- Who is responsible for acting on these tasks?
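Pulling the last two steps together, here is a minimal sketch of what an ethical check and stakeholder impact assessment could look like when folded into a project review. Everything here – the questions, the class and field names, and the sign-off rule – is an illustrative assumption to adapt to your own review process, not a prescribed standard.

```python
# A minimal sketch of folding ethical checks and a stakeholder impact
# assessment into a project review. Questions, names, and the sign-off
# rule are illustrative assumptions, not a standard.
from dataclasses import dataclass, field

ETHICAL_QUESTIONS = [
    "What potential harms could this system cause, and to whom?",
    "What unintended consequences have we explored?",
    "What feedback mechanism exists, from developers to end-users?",
]

@dataclass
class ImpactEntry:
    area: str         # "internal stakeholders", "external stakeholders" or "reputation"
    who_and_how: str  # Who would be impacted, and how?
    mitigation: str   # Can the impact be reduced? To what extent?
    owner: str        # Who is responsible for acting on these tasks?

@dataclass
class ProjectReview:
    project: str
    answers: dict[str, str] = field(default_factory=dict)
    impacts: list[ImpactEntry] = field(default_factory=list)

    def passes(self) -> bool:
        """Sign off only when every question is answered and every impact has an owner."""
        answered = all(self.answers.get(q, "").strip() for q in ETHICAL_QUESTIONS)
        owned = bool(self.impacts) and all(e.owner for e in self.impacts)
        return answered and owned

review = ProjectReview("churn-model-v2")
review.answers[ETHICAL_QUESTIONS[0]] = "Mis-flagged customers may lose offers."
review.impacts.append(ImpactEntry(
    area="external stakeholders",
    who_and_how="Customers incorrectly flagged as churn risks.",
    mitigation="Partially: human review of low-confidence predictions.",
    owner="model-risk team",
))
print(review.passes())  # False: two ethical questions are still unanswered
```

The design choice worth noting is that the review simply refuses to pass until every question has a recorded answer and every impact has a named owner – accountability is built into the process rather than left to goodwill.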
Ultimately, cultivating a responsible data culture is about creating long-term value and trust – more than just compliance. If you want your data and AI projects to succeed, develop AI ethics specialists by encouraging active critical thinking. Organisations that invest in these foundations around data and AI will be better prepared for the ethical, legal, and societal demands of tomorrow’s AI-driven world.