The EU AI Act is here – is your data ready to lead?

By Tendü Yoğurtçu, PhD, CTO at Precisely

The accelerated adoption of AI and generative AI tools has reshaped the business landscape. With powerful capabilities now within reach, organisations are rapidly exploring how to apply AI across operations and strategy. In fact, 93 percent of UK CEOs have adopted generative AI tools in the last year, and according to the latest State of AI report by McKinsey, 78 percent of businesses now use AI in at least one business function.

With such an expansion, governing bodies are acting promptly to ensure AI is deployed responsibly, safely, and ethically. For example, the EU AI Act restricts unethical practices, such as facial image scraping, and mandates AI literacy. This ensures organisations understand how their tools generate insights before acting on them. These policies aim to reduce the risk of AI misuse due to insufficient training or oversight.

In July 2025, the EU released its final General-Purpose AI (GPAI) Code of Practice, outlining voluntary guidelines on transparency, safety, and copyright for foundation models. While voluntary, companies that opt out may face closer scrutiny or more stringent enforcement. Alongside this, new phases of the Act continue to take effect, with the next compliance deadline falling on 2nd August.

This raises two critical questions for organisations: how can they utilise AI’s transformative power while staying ahead of new regulations? And how will these regulations shape the path forward for enterprise AI?

How new regulations are reshaping AI adoption

The EU AI Act is driving organisations to address longstanding data management challenges to reduce AI bias and ensure compliance. AI systems that pose an “unacceptable risk” – a clear threat to individual rights, safety, or freedoms – are already banned under the Act. Meanwhile, broader compliance obligations for general-purpose AI systems will take effect in August this year. Stricter obligations for systemic-risk models, including those developed by leading providers, follow in August 2026. With these deadlines approaching, organisations must move quickly to build AI readiness, starting with AI-ready data. That means investing in trusted data foundations that ensure traceability, accuracy, and compliance at scale.

In industries such as financial services, where AI is used in high-stakes decisions like fraud detection and credit scoring, this is especially urgent. Organisations must show that their models are trained on representative and high-quality data, and that the results are actively monitored to support fair and reliable decisions. The Act is accelerating the move toward AI systems that are trustworthy and explainable.

Data integrity as a strategic advantage

Meeting the requirements of the EU AI Act demands more than surface-level compliance. Organisations must break down data silos, especially where critical data is locked in legacy or mainframe systems. Integrating all relevant data across cloud, on-premises, and hybrid environments, as well as across various business functions, is essential to improve the reliability of AI outcomes and reduce bias.

Beyond integration, organisations must prioritise data quality, governance, and observability to ensure that the data being used in AI models is accurate, traceable, and continuously monitored. Recent research shows that 62 percent of companies cite data governance as the biggest challenge to AI success, while 71 percent plan to increase investment in governance programmes.

The lack of interpretability and transparency in AI models remains a significant concern, raising questions around bias, ethics, accountability, and equity. As organisations work to operationalise AI responsibly, robust data and AI governance will play a pivotal role in bridging the gap between regulatory requirements and responsible innovation.

Additionally, incorporating trustworthy third-party datasets – such as demographics, geospatial insights, and environmental risk factors – can help increase the accuracy of AI outcomes and strengthen fairness with additional context. This is increasingly important given the EU’s direction toward stronger copyright protection and mandatory watermarking for AI-generated content.

A more deliberate approach to AI

The early excitement around AI experimentation is now giving way to more thoughtful, enterprise-wide planning. Currently, only 12 percent of organisations report having AI-ready data. Without accurate, consistent, and contextualised data in place, AI initiatives are unlikely to deliver measurable business outcomes. Poor data quality and governance limit performance and introduce risk, bias, and opacity into business decisions that affect customers, operations, and reputation.

As AI systems grow more complex and agentic – capable of reasoning, taking action, and even adapting in real time – the demand for trusted context and governance becomes even more critical. These systems cannot function responsibly without a strong data integrity foundation that supports transparency, traceability, and trust.

Ultimately, the EU AI Act, alongside upcoming legislation in the UK and other regions, signals a shift from reactive compliance to proactive AI readiness. As AI adoption grows, powering AI initiatives with integrated, high-quality, and contextualised data will be key to scaling responsible AI innovation over the long term.
