
AI in high-risk use cases: Should we focus on rights or regulations?

By Harri Ketamo, Ph.D., CEO at Headai

The European Union is pioneering the regulation of Artificial Intelligence (AI) through the EU AI Act, a comprehensive legal framework designed to mitigate the risks associated with AI systems. However, a critical question arises: Is the focus on regulations overshadowing the practical safeguarding of individual rights?

While regulations are essential for establishing a baseline of responsible AI development and deployment, overemphasizing compliance can stifle innovation and hinder the realization of AI’s full potential. This is particularly true for AI in high-risk use cases, where both human rights and technical standards are at stake.

The limitations of a regulatory focus

The EU AI Act, while groundbreaking, demonstrates the limitations of a purely regulatory emphasis. There is a risk of fixating on the Act’s terminology, such as data minimization, and concluding that we have done our due diligence when we really have not.

For example, the Act’s broad classification of high-risk AI systems creates ambiguity as it ranges from employment screening tools to military systems. The emphasis on conformity assessments and documentation, while crucial for transparency and accountability, could also impose a significant burden on smaller organizations and startups, potentially stifling innovation.

Even more concerning, the focus on technical standards can eclipse the fundamental rights the regulation is meant to protect. These include the right to opt out, the right to access and understand what personal data is held, and the right to be forgotten. Implementing these rights requires more than legal language: it demands a practical understanding of how AI systems affect individual autonomy and freedom. And because records about EU citizens will inevitably exist outside the EU, protections must ensure those records cannot be used against them.

A strictly regulatory approach can lead to a checklist mentality, where adherence to rules is seen as the end goal rather than the means to protect human rights. Regulations are a toolbox; advancing human rights still requires taking responsibility.

A rights-based approach: moving beyond compliance

While compliance is important, a compliance-centered approach risks focusing too narrowly on company obligations, potentially overshadowing the protection of individuals and even leading to a loss of common sense. To move beyond compliance, we need to prioritize a rights-based approach.

A rights-based approach focuses on individual rights, ensuring that AI systems are designed and used in a manner that respects and upholds these rights. This requires a proactive stance aimed at empowering individuals and safeguarding them from potential harm.

Furthermore, a rights-based approach necessitates robust mechanisms for human oversight and control. Individuals must have genuine opportunities to intervene, challenge, or correct AI decisions that affect their rights.

Several forward-thinking organizations have already begun applying these principles in practice. For example, IBM has developed internal ethics review boards to evaluate the societal impact of its AI systems. Others are funding R&D into transparency-driven model design.

In employment and skills development contexts, which are key areas for AI in high-risk use cases, a proactive rights-first mindset might look like this:

  • Bear in mind that GDPR is still in force; it sets the baseline for data collection.
  • Store only data that can be secured under any circumstances. With carefully designed data flows, many operations can be performed without storing the data at all.
  • Anonymize personal data permanently before storing it, and always keep the key to identity in a separate system (the first sketch after this list illustrates the pattern).
  • Make every step in the reasoning process transparent (illustrated, together with bias reporting, in the second sketch below).
  • Enable people to opt out of either a specific result or their data entirely. This also extends to models, which is why the next point concerns model training.
  • Separate the analysis of the data from the training of the AI models.
  • Report the known biases and other factors that affect the output.
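
To make the anonymize-before-storing point concrete, here is a minimal Python sketch, assuming a pseudonymization reading of that practice. All names here (analytics_store, identity_vault, store_record, forget) are hypothetical, and a real system would use physically separate databases with separate access controls rather than in-memory dictionaries. The idea: records entering the analytical store carry only a random token, while the token-to-identity mapping lives in another system.

    import secrets

    # Hypothetical stand-ins for two physically separate systems; in
    # practice these would be distinct stores with distinct access controls.
    analytics_store: dict[str, dict] = {}  # pseudonymized records only
    identity_vault: dict[str, str] = {}    # token -> identity, kept elsewhere

    def store_record(person_id: str, record: dict) -> str:
        """Pseudonymize a record before storing it. The stored record
        carries only a random token; the link back to the person is
        kept in a separate system (identity_vault)."""
        token = secrets.token_hex(16)      # random, non-derivable pseudonym
        identity_vault[token] = person_id  # the "key to identity", stored apart
        # Strip direct identifiers before the record enters the analytics store.
        safe_record = {k: v for k, v in record.items()
                       if k not in {"name", "email", "ssn"}}
        analytics_store[token] = safe_record
        return token

    def forget(token: str) -> None:
        """Honor the right to be forgotten: deleting the vault entry
        permanently severs the link between record and person."""
        identity_vault.pop(token, None)

    # Usage sketch
    t = store_record("person-42", {"name": "Jane Doe", "skills": ["python", "nlp"]})
    print(analytics_store[t])  # {'skills': ['python', 'nlp']} -- no identifiers
    forget(t)                  # the data stays usable; the identity link is gone

One design consequence of this split: honoring the right to be forgotten becomes a single deletion in the vault, while the analytical data remains usable without pointing at anyone.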
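
The transparency and bias-reporting points can be sketched just as simply. The DecisionRecord below is an illustrative assumption, not an established API: it logs every step of an AI-assisted decision along with the known biases that apply, so the full reasoning chain can be shown to the person affected.

    from dataclasses import dataclass, field

    @dataclass
    class DecisionRecord:
        """An auditable trace of how an AI-assisted decision was reached."""
        subject: str
        steps: list[str] = field(default_factory=list)
        known_biases: list[str] = field(default_factory=list)
        outcome: str | None = None

        def log(self, step: str) -> None:
            self.steps.append(step)

        def report(self) -> str:
            """Render the full reasoning chain for the person affected."""
            lines = [f"Decision for {self.subject}: {self.outcome}"]
            lines += [f"  step {i + 1}: {s}" for i, s in enumerate(self.steps)]
            lines += [f"  known bias: {b}" for b in self.known_biases]
            return "\n".join(lines)

    # Usage sketch: a hypothetical skills-matching decision.
    rec = DecisionRecord(subject="applicant-7",
                         known_biases=["training data over-represents ICT roles"])
    rec.log("parsed CV into a skills vector")
    rec.log("matched skills vector against open positions")
    rec.outcome = "shortlisted for 2 of 14 positions"
    print(rec.report())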

Moving forward: a call for action

We need a fundamental change in how we think about compliance and creativity. Regulations should be viewed not as barriers, but as tools to unlock innovation, especially for AI in high-risk use cases where the societal stakes are high.

To support this shift, we urge policymakers, businesses, and civil society to:

  • Prioritize human rights

Don’t stop at technical compliance. Focus on outcomes that preserve dignity, autonomy, and fairness.

  • Recognize that regulations are not guarantees

Just because a system is compliant doesn’t mean it’s ethical or safe.

  • Embrace common sense

Rules can’t cover every possible risk scenario. Sound judgment and ethical reflection remain essential.

  • Promote explainability

Explainability should be a default feature, not a luxury. Users deserve to know how decisions about them are made.

  • Encourage human-centered innovation

Even if a regulation does not explicitly require a humane solution, that is no reason to ignore one. Think beyond minimum standards.

The future of AI depends on our ability to balance innovation and individual protection. Regulations like the EU AI Act are necessary but insufficient on their own. By treating them as a foundation, not a ceiling, we can build a framework that prioritizes rights and encourages ethical innovation.

For AI in high-risk use cases, this isn’t optional. It’s a moral and practical imperative. It’s time to shift the conversation from “What are we allowed to do?” to “What is the right thing to do?”

About the author

Harri Ketamo, Ph.D., is an entrepreneur with 25 years of experience in cognitive sciences, artificial intelligence, and game development. Currently, he is the founder & CEO at Headai, a deep-tech company providing decision intelligence automation.
