Future of AI

Policymakers shining the spotlight on responsible AI underscores its significance and will accelerate adoption

It’s been a busy couple of months for regulators in the world of artificial intelligence (AI). In July, the UK government published its new AI policy paper, Establishing a pro-innovation approach to regulating AI in the UK.

In essence, the paper sets out proposals for a “new AI rulebook” which the government hopes will “unleash innovation and boost public trust in the technology”.

A key part of its approach pivots around a core set of six principles to ensure, among other things, that AI is used safely, that it is secure and that it is fair.

Fast forward a couple of months and at the beginning of October, the White House Office of Science and Technology Policy published its Blueprint for an AI Bill of Rights.

The document recognises that automated systems have “brought about extraordinary benefits”, highlighting how technology has, among other things, helped improve agricultural production, predict destructive storm paths and identify diseases in patients.

“These tools hold the potential to redefine every part of our society and make life better for everyone,” the Blueprint says.

AI regulation is essential

But the AI Bill of Rights also acknowledges that such benefits should not “come at the price of civil rights or democratic values, foundational American principles that President Biden has affirmed as a cornerstone of his Administration”.

Welcoming the Blueprint, Eva Maydell, Member of the European Parliament, tweeted: “While our approaches to #ArtificialIntelligence regulation may be different, they are embedded in the same democratic values. We must align [EU and US] efforts and work with like-minded partners.”

Indeed, those comments were made just a week or so after the European Commission released details for the “targeted harmonisation of national liability rules for AI, making it easier for victims of AI-related damage to get compensation” as part of its drive towards “excellence and trust in AI”.

On their own, each of these proposals would be required reading for anyone with more than a passing interest in AI. Together, they signal a step change in the debate as policymakers grapple with the realities of AI and its potential benefits. They also hint at a global framework for AI, although anything formal is still some way off.

For those of us working in the technology sector, understanding the implications of AI is something we’ve been doing for years. Yes, we’ve been focused on the technical standards, the software and the engineering know-how needed to develop AI technology. But we have done so knowing that for AI to play a part in our future, it has to be done responsibly. And it’s easy to see why.

Responsible AI is a driver for good

AI is often referred to as the “electricity” of the new data economy. In a world awash with data, the organisations best placed to succeed aren’t necessarily those with the most data, but those with the best data and the know-how to use it. And that means AI.

As the policy proposals set out by governments rightly acknowledge, AI has the power to do great things. But it can also cause real harm.

For instance, when AI underpins decision-making for bank loans and credit, how much autonomy should it be given? How do financial institutions ensure any decision-making system is free from bias or discrimination on grounds such as gender, race, religion, colour and age?

These same biases raise even greater concerns in areas such as facial recognition, law enforcement, healthcare and recruitment. What is clear is that as AI becomes more prevalent, any solutions have to be created ethically and free from unjust biases. In other words, you have to create responsible AI.
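To make the loan example concrete, here is a minimal sketch of one common fairness check: comparing approval rates across a protected attribute. The column names, the toy data and the 80% (“four-fifths rule”) threshold are illustrative assumptions, not a prescription.

```python
import pandas as pd

def approval_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Approval rate per group; large gaps flag potential bias."""
    return df.groupby(group_col)[outcome_col].mean()

# Hypothetical decisions from a credit model
decisions = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [0,   1,   0,   1,   1,   1,   0,   1],
})

rates = approval_rates(decisions, "gender", "approved")
print(rates)

# Four-fifths heuristic: flag if any group's approval rate falls
# below 80% of the highest group's rate
if (rates.min() / rates.max()) < 0.8:
    print("Warning: possible disparate impact, review the model")
```

A check like this is only a starting point, but it shows how bias questions can be turned into tests that run alongside the rest of the engineering work.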

An ethics-based approach to responsible AI should pivot around five supporting pillars that form the foundation of the work.

Reproducibility

In effect, this ensures that models and algorithms are standardised for consistency. For example, can the work done to develop an AI-based system be replicated in real-world scenarios and deliver the same results?
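As a small illustration, one basic reproducibility practice is fixing every source of randomness so a training run can be repeated exactly. A minimal sketch, assuming a NumPy-based stack:

```python
import os
import random

import numpy as np

def set_seed(seed: int = 42) -> None:
    """Fix the random seeds so repeated runs produce identical results."""
    random.seed(seed)       # Python's built-in RNG
    np.random.seed(seed)    # NumPy's global RNG
    # PYTHONHASHSEED only takes effect at interpreter start-up, so this
    # line covers subprocesses; set it in the environment for full effect.
    os.environ["PYTHONHASHSEED"] = str(seed)

set_seed()
print(np.random.rand(3))  # identical output on every run
```

Deep-learning frameworks have their own generators that need seeding too, and pinning library versions matters just as much as seeding.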

Transparency

If people are going to ‘buy in’ to AI, they need to understand what the technology is doing and how it is arriving at decisions. Explainability and interpretability of AI outcomes are key to building trust in the systems.
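One model-agnostic way to get at this is permutation importance, which scores each input feature by how much shuffling it degrades the model’s accuracy. A minimal sketch using scikit-learn; the dataset and model are stand-ins for whatever system you are explaining:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in data and model: any fitted classifier could take their place
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# features whose shuffling hurts most are driving the decisions
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```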

Accountability

As AI becomes increasingly embedded in our systems, being accountable for the technology, and for what happens as a result of it, is paramount. In other words, in the event of a decision being challenged or something going wrong, it’s no good simply blaming the technology. Someone, whether an individual or an organisation, has to be held to account. A human has to be kept in the AI loop.
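In practice, keeping a human in the loop can be as simple as refusing to automate decisions the model is unsure about. A minimal sketch; the confidence threshold, reviewer name and log format are all illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

CONFIDENCE_THRESHOLD = 0.9  # below this, a person must decide

@dataclass
class Decision:
    applicant_id: str
    score: float
    outcome: str
    decided_by: str   # the accountable party, human or system
    timestamp: str

def decide(applicant_id: str, score: float,
           reviewer: str = "credit.review.team") -> Decision:
    if score >= CONFIDENCE_THRESHOLD:
        outcome, decided_by = "auto-approved", "model-v1"
    else:
        outcome, decided_by = "escalated", reviewer  # a human owns the call
    # Every decision is logged with an accountable party for later audit
    return Decision(applicant_id, score, outcome, decided_by,
                    datetime.now(timezone.utc).isoformat())

print(decide("A-1001", 0.97))
print(decide("A-1002", 0.41))
```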

Security

With so much sensitive information tied up in AI systems, ensuring that data is encrypted while maintaining the highest levels of compliance is a must.
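As a small example, here is one way to encrypt sensitive records at rest using the Fernet recipe from the widely used Python cryptography package. This is a sketch only; in production the key would live in a key-management service, never alongside the data:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # store securely, e.g. in a KMS or vault
cipher = Fernet(key)

record = b'{"applicant_id": "A-1001", "salary": 52000}'
token = cipher.encrypt(record)  # authenticated ciphertext, safe to persist
print(cipher.decrypt(token))    # recoverable only with the key
```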

Privacy

And alongside those security measures, the same rigour must be applied to ensuring that people’s personal information remains private.
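One technique teams reach for here, beyond straightforward anonymisation, is adding calibrated noise to published aggregates in the spirit of differential privacy, so that no individual’s record can be inferred from the result. A minimal sketch; the epsilon value and the salary cap are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
salaries = np.array([41000, 52000, 38000, 61000, 47000], dtype=float)

epsilon = 1.0                          # smaller epsilon = stronger privacy
sensitivity = 100_000 / len(salaries)  # max change one record can cause,
                                       # assuming salaries are capped at 100k
true_mean = salaries.mean()
noisy_mean = true_mean + rng.laplace(scale=sensitivity / epsilon)

print(f"true mean:      {true_mean:.0f}")
print(f"published mean: {noisy_mean:.0f}")  # safe to release
```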

Together, these pillars provide an ethical framework for responsible AI for any organisation developing and implementing software. In fact, I would go further than that. When you are deep into data processing, it’s vital that you factor in and address ethics and any potential bias from the outset.

As software developers, we have to put responsibility ahead of any business goals. That may seem extreme, but it simply underlines the importance of responsible AI. And with governments looking to legislate in this area, it’s something we all have to take seriously.

Author

  • Pandurang Kamat

    Dr. Pandurang Kamat is Chief Technology Officer at Persistent Systems, a digital engineering and enterprise modernisation partner. He is an experienced technology leader who helps customers improve user experience, optimise business processes, and create new digital products.
