Regulation

Four steps to help clear regulatory hurdles in the AI race

By Edmondo De Salvo, Principal Information Systems Architect, HPE

Artificial Intelligence (AI) has long been a point of contention. Many once dismissed it as a passing fad, nothing more than a gimmick. Today, however, AI is everywhere and in real use — from massive data centres to the furthest reaches of the edge. Generative AI is being used by 65% of organizations worldwide, nearly double the share in 2023, according to McKinsey.[1] What’s more, three-quarters of these businesses expect GenAI to change or disrupt their industries significantly.[2]

While leaders are eager to harness AI’s financial, operational, and customer engagement benefits, a growing number of regulations, such as the recently enacted EU AI Act, are making them cautious. Data security, privacy concerns, and GenAI’s tendency to hallucinate or show bias are intensifying governmental scrutiny. Small business leaders now rank legal and ethical risks as the most significant barrier to using GenAI; for enterprise leaders, those risks are the second greatest obstacle, after data privacy, according to a recent Technology Business Research survey.[3]

While regulatory challenges may seem daunting, they are entirely manageable. By identifying key issues and applying best practices to address them, businesses can take advantage of AI sooner rather than later.

The main obstacles

The fragmented regulatory environment across regions is one of the most significant barriers to adopting AI. While the EU AI Act provides a cohesive framework across 27 European countries, many other regions, such as the U.S. and parts of Asia, lack consistent AI regulations. This inconsistency creates uncertainty, particularly for multinational organizations that must comply with different rules in each market while keeping up with new ones as they emerge. Another complicating factor is the rapid evolution of the AI landscape itself.

Over the past five years, venture capitalists have invested more than $290 billion in U.S.-based AI startups.[4] New startups pop up daily with niche AI solutions, creating a crowded ecosystem that is difficult to regulate effectively.

Finally, assessing the return on investment for AI, particularly GenAI, can take considerable time and effort. Decision-makers are often left weighing the potential benefits of investing in AI against the risks of running into regulatory issues, making strong governance practices essential.

Clearing regulatory hurdles

By focusing on a few critical steps, businesses can better navigate regulatory hurdles and realize the benefits of AI adoption.

  1. Adopt an incremental approach: Starting small and scaling gradually can help organizations manage AI adoption risks. Rather than a full-scale AI deployment, businesses should focus on individual use cases, gathering insights and adjusting governance policies as they progress. This provides flexibility to adapt to shifting regulations without significant upfront investments that regulatory changes could jeopardize.
  2. Form a governance committee: A dedicated, in-house governance committee is critical to ensuring compliance and managing ongoing AI risks. The committee should include legal, technical, and ethical experts, among others, to monitor and update AI policies continuously. Different use cases — such as internal business intelligence versus customer-facing applications — require distinct governance policies, making constant monitoring and oversight by a dedicated team a must.
  3. Prioritize data monitoring and security: Data integrity and security are core to AI governance, making robust identity access, data monitoring, and data visibility capabilities essential for regulatory compliance initiatives. This is especially important for any AI-driven companies that are conducting business in Europe and in other countries rolling out regulations similar to the EU’s.
  4. Emphasize ethical AI: GenAI introduces new ethical challenges, particularly around bias in the large language models powering the technology. Companies that adopt pretrained models must carefully evaluate the data used in their training to ensure fairness and transparency. Creating an ethics function within or alongside a governance committee and appointing a chief AI ethics officer can help address these issues on an ongoing basis.

The way forward

Governance in the age of GenAI presents complex challenges, but businesses can successfully navigate them with some strategic thought. Companies that act now to build agile AI governance frameworks will be better positioned to adapt to future regulations and continue driving innovation with fewer obstacles.

[1] “The state of AI in early 2024: Gen AI adoption spikes and starts to generate value,” QuantumBlack, AI by McKinsey, May 30, 2024

[2] Ibid.

[3] “How AI is Shaping IT Infrastructure Purchasing Trends in 2024,” Technology Business Research Webinar Series, July 8, 2024

[4] “How venture capital is investing in AI in the top five global economies — and shaping the AI ecosystem,” World Economic Forum, May 24, 2024
