
As organisations implement AI at breakneck speed to stay competitive, adoption continues to outpace oversight to a dangerous degree. Incidents such as the UK Department for Work and Pensions’ algorithm wrongly flagging 200,000 people for fraud, or the ICO finding that some AI recruitment tools unfairly filtered candidates with certain protected characteristics, show how quickly ‘black box’ systems can cause harm.
Regulation is now starting to catch up. The recently passed Data (Use and Access) Act (DUAA) introduces changes to the regulation of automated decision-making, particularly in the context of data protection and privacy. Promising more control and accountability, the new Act sends a dual message to UK organisations: AI can’t be a black box, and data protection can’t be box-ticking.
What is changing under the Data (Use and Access) Act
The Data (Use and Access) Act is designed to provide a modern, more streamlined framework for the UK’s data protection regulations. While it addresses a range of issues, its impact on automated decision-making and AI is particularly notable.
Automated decision-making describes processes in which outcomes are determined without human intervention. These can include basic tasks such as sorting emails, as well as more complex areas like recruitment, credit scoring and even judicial sentencing. AI systems are now foundational to many of these processes due to their ability to examine large datasets and make accurate predictions or recommendations far faster than humans can.
However, alongside their many clear advantages, AI systems also raise concerns around transparency, fairness and accountability. The Data (Use and Access) Act introduces new governance standards to curb these risks and enable safer innovation.
The Act has four overarching goals:
- Enhance transparency: The Act proposes clearer guidelines for how organisations should communicate with individuals about AI-driven decisions that affect them. Businesses will need to make the logic behind automated decisions more accessible and easier to understand, and give individuals greater visibility into the data used in those decisions. This will likely result in more detailed privacy notices that specifically address automated decision-making and AI processing.
- Give individuals more control: The Act extends individuals’ rights under the UK GDPR, which states that individuals have the right not to be subject to decisions based solely on automated processing that significantly affect them, unless certain conditions are met. In practice, under the DUAA, individuals could be granted a right to contest automated decisions that they believe are unfair or biased.
- Foster algorithmic accountability: A new element of the Act is its emphasis on making AI systems auditable. It is expected to introduce provisions requiring formal audits, ensuring these systems meet ethical standards and remain open to scrutiny. Organisations will therefore need to evaluate and demonstrate the fairness, accuracy and accountability of decisions produced by AI – for example, by keeping auditable records of each automated decision (see the illustrative sketch after this list).
- Minimise the risk of discrimination and bias: The Act sets out guidelines for the responsible use of AI – specifically, that AI models do not perpetuate or exacerbate biases in decision-making and that automated decisions do not disproportionately harm individuals based on characteristics such as race, gender or disability. This will likely be supported by new regulations reinforcing the ethical use of AI, particularly in high-risk areas like healthcare, finance and law enforcement.
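
To make the accountability point above concrete, the minimal sketch below shows one way an organisation might record each automated decision together with the data it relied on, the model version and a plain-language explanation, so that it can be explained, audited and contested later. This is illustrative only – the Act does not prescribe any particular implementation, and all names here (DecisionRecord, log_decision, decisions.jsonl) are hypothetical.

```python
# Illustrative sketch only: the DUAA does not mandate a specific logging format.
# All names (DecisionRecord, log_decision, decisions.jsonl) are hypothetical.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_id: str               # pseudonymised identifier for the affected individual
    model_version: str            # which model or ruleset produced the decision
    inputs: dict                  # the data points the decision relied on
    outcome: str                  # e.g. "application_declined"
    explanation: str              # plain-language summary of the decision logic
    human_review_available: bool  # whether the individual can request human intervention

def log_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append a timestamped, auditable record of an automated decision."""
    entry = asdict(record)
    entry["logged_at"] = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: recording a credit-scoring decision so it can be explained and contested later
log_decision(DecisionRecord(
    subject_id="applicant-7f3a",
    model_version="credit-risk-2.4",
    inputs={"income_band": "B", "missed_payments_12m": 2},
    outcome="application_declined",
    explanation="Declined: two missed payments in the last 12 months exceeded the threshold.",
    human_review_available=True,
))
```

A record like this supports all four goals at once: it gives individuals something intelligible to contest, and gives auditors a trail showing how each decision was reached.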
An evolution, not an overhaul
Although the Act marks an important step towards ensuring the safe and governed use of AI, it is not a complete overhaul. Privacy-conscious businesses that have already implemented policies and procedures in compliance with the UK GDPR will find that the Act builds on that foundation, extending its provisions on automated decision-making and AI.
For example, Article 22 of the UK GDPR already limits organisations’ ability to make fully automated decisions in some circumstances. The DUAA builds on this provision, adding greater safeguards and clarifying when and how automated decisions can be made. Similarly, both the UK GDPR and the Act emphasise transparency and accountability, with the DUAA going further to strengthen these requirements, particularly in terms of explaining how AI systems are designed, tested and monitored.
Going further: How proactive organisations prepare for the AI era
While the DUAA strengthens protections for individuals, regulation only sets the framework – it does not dictate the pace of innovation. In practice, businesses must approach AI with caution and build robust foundations before scaling its use.
New research from IBM shows that 97% of AI-related security breaches involved AI systems that lacked proper access controls, and 63% of victims reported having no governance policies in place to manage AI or to prevent ‘shadow AI’ – the unauthorised use of AI tools.
Employees inputting sensitive or proprietary business information into AI tools can leave organisations vulnerable to data protection infringements and confidentiality risks, while AI hallucinations can influence decisions and result in reputational or legal consequences, lost revenue and damaged stakeholder trust.
The benefits of an AI policy
Leading organisations are now getting on the front foot by establishing internal AI policies. An AI policy sets out guidelines for how employees can use AI tools while emphasising ethical, responsible and secure best practices. These policies not only help ensure compliance with rules such as the UK GDPR, the Data (Use and Access) Act and the EU AI Act, but also deliver wider ethical and operational benefits.
A robust AI policy demonstrates leadership in data privacy and a commitment to accountability. In procurement processes, it can serve as a key differentiator that sets one business apart from another. Day-to-day, an AI policy provides a structured approach for technology use, and ensures that teams understand their roles in overseeing AI outputs – thus reducing the risk of bias, misuse or misinformation.
An AI policy can even be a driving force for innovation. It can help map out AI deployments, uncover areas for expansion and streamline decision-making by clarifying which tools are approved under what conditions. This way, it can accelerate adoption and support effective implementation.
Key steps to implementing an AI policy
Just as data protection is not a simple compliance formality, there is no one-size-fits-all AI policy either. Each organisation should tailor and continually reassess its approach, and where in-house expertise falls short, seek professional guidance. However, a few key actions can help organisations cover all critical bases:
- Conduct an AI audit by reviewing all current AI use across the business (a minimal register sketch follows this list)
- Assess how risks vary across business units, as some will be more exposed than others
- Define usage guidelines, clarifying when and how employees can use AI tools
- Conduct staff training on restrictions, compliance and best practices
- Regularly assess AI-generated content for accuracy and confidentiality risks
- Vet suppliers rigorously before onboarding new AI technology
- Update existing policies – when AI permeates business operations, it’s important to ensure that all IT, data protection and communications policies are also AI-ready.
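
As a purely illustrative example of the audit step above, a simple register of AI use might capture each tool, who uses it, what data it touches and whether it makes decisions without human review. The field names, file name and risk labels below are assumptions for the sketch, not a prescribed format.

```python
# Illustrative sketch of an AI-use register for the audit step above.
# Field names and labels are assumptions, not a prescribed format.
import csv

AI_REGISTER_FIELDS = [
    "tool",                 # e.g. a chat assistant, CV-screening service, scoring model
    "business_unit",        # who uses it (HR, finance, customer service, ...)
    "purpose",              # what decisions or tasks it supports
    "data_categories",      # personal, special category, confidential, public
    "automated_decisions",  # yes/no: does it decide without human review?
    "approved",             # status under the current AI policy
    "owner",                # who is accountable for reviewing its outputs
]

def write_register(rows: list[dict], path: str = "ai_use_register.csv") -> None:
    """Write the inventory gathered during the AI audit to a shareable CSV."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=AI_REGISTER_FIELDS)
        writer.writeheader()
        writer.writerows(rows)

write_register([{
    "tool": "CV-screening service",
    "business_unit": "HR",
    "purpose": "Shortlisting job applicants",
    "data_categories": "personal",
    "automated_decisions": "yes",
    "approved": "under review",
    "owner": "Head of Recruitment",
}])
```

Even a basic register like this makes the later steps – risk assessment, usage guidelines, supplier vetting and policy updates – far easier, because everyone is working from the same picture of where AI is actually in use.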
Staying compliant and competitive
Whether motivated by compliance or innovation, businesses must now build a case for a robust AI strategy that promotes the responsible use of technology and automated decision-making. Ensuring compliance with the Data (Use and Access) Act provides a great opportunity for UK businesses to build more secure, transparent and responsible ecosystems, while creating an AI policy promotes visibility, streamlines adoption and fosters long-term trust.
Businesses should take this opportunity to align their policies and practices, ensuring that AI works for them – efficiently and ethically.



