Future of AI

Leveraging AI Operating Models in the absence of regulation

Advancements in Generative AI are rapidly changing businesses worldwide, impacting everything from employee productivity to business profitability. However, as more organisations integrate Generative AI into their processes, there is an increasing need for governments to regulate the technology. The EU AI Act serves as a benchmark for many nations, highlighting the importance of responsible AI implementation. In our recently published 2024 Global AI report, we found that AI spending is expected to double in 2024, with 87% of businesses now seeing at least modest AI-oriented gains, up from 74% in 2023. This surge in AI adoption is spurring governments globally to follow the EU’s lead and develop their own regulations.

However, companies cannot afford to wait for legislation to catch up; those that do risk being left behind. That’s why organisations should consider an AI Operating Model – a set of guidelines ensuring responsible and ethical AI adoption. By setting guardrails and controls, organisations can ensure AI is used safely and responsibly while allowing it to be fully integrated into the workforce.

To maximise the value of AI and coexist with it, businesses also need to ensure it is secure, robust, and resilient. AI Operating Models help safeguard confidentiality, prevent misuse, and give organisations the expertise to make informed decisions, providing a framework for safe AI adoption even before regulations are fully developed. The result is better decision-making while ensuring the benefits of AI are accessible to everyone.

Why companies can’t afford to wait

Despite attempts by governments to come together on AI regulation, most notably at the AI Safety Summit in the UK in November 2023, regulatory uncertainty remains, and that uncertainty is affecting businesses. This year, both Meta and Apple announced that, because of regulatory uncertainty, they will not be launching new AI products in the EU for the foreseeable future. Legal advisors may caution business leaders against using AI while the rules remain unclear, but with such a high proportion of businesses already using and experimenting with it in their daily operations, holding back may not be an option. Until governments can answer how AI will affect employment, privacy and data protection, boards must take responsibility for safety and security across the AI value chain, guiding ideation, incubation and industrialisation. A failure to do so risks falling behind on competitiveness and innovation.

Companies also need to be aware that the use of third-party and proprietary AI tools increases risks to employees and customers. Third-party platforms are typically publicly hosted services outside the organisation’s control, meaning that if employees accidentally enter customer data into them, that confidential information can be exposed. Without an AI Operating Model in place, businesses face safeguarding and financial risks because there are no defined rules and controls around how people use AI tools.
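As an illustration of what such a control might look like, the sketch below screens prompts for obvious customer identifiers before they are sent to an external AI tool. It is a minimal example under assumed rules, not a complete data loss prevention solution: the `PII_PATTERNS` and the internal gateway it presumes are hypothetical, and a real deployment would rely on a proper data classification service.

```python
import re

# Illustrative patterns for data that should never leave the organisation.
# A real deployment would use a proper data classification / DLP service.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk national insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}


def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for a prompt bound for an external AI tool."""
    findings = [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]
    return len(findings) == 0, findings


if __name__ == "__main__":
    allowed, findings = screen_prompt(
        "Summarise the complaint from jane.doe@example.com about her last invoice."
    )
    if not allowed:
        # Block the request before it reaches a public platform and tell the
        # employee why, rather than letting customer data leave the organisation.
        print("Blocked: prompt appears to contain " + ", ".join(findings))
```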

Creating the optimal AI co-worker

AI adoption will also augment human jobs, streamlining tasks and unlocking efficiencies that would otherwise not be possible. To help navigate this shift and address any fallout it might cause, businesses should treat AI like any other co-worker and introduce it along the same lines as workforce management. Just like human co-workers, AI needs policies, guardrails, training, and governance. This approach will reduce concerns about AI taking away human jobs and instead position it to augment human intelligence and boost job performance.

To treat AI as a co-worker, businesses first need to understand the desired outcomes and identify the skills and capabilities the AI is expected to provide. From there, they must put appropriate evaluation practices in place to measure the AI’s performance and drive continuous improvement. Workforce management will ensure that AI continues to do its job to the best of its ability.
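A minimal sketch of what such an evaluation practice could look like is shown below. The `ask_assistant` function, the test cases and the pass criterion are all illustrative assumptions; in practice the cases would encode the outcomes agreed for the role the AI is filling, and the pass rate would be tracked run over run to drive improvement.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class EvalCase:
    prompt: str
    must_mention: str  # a deliberately simple success criterion


def evaluate(ask_assistant: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Score an AI 'co-worker' against agreed, measurable outcomes."""
    passed = sum(
        1 for case in cases
        if case.must_mention.lower() in ask_assistant(case.prompt).lower()
    )
    return passed / len(cases)


if __name__ == "__main__":
    # Placeholder assistant; in practice this would call the model under review.
    def ask_assistant(prompt: str) -> str:
        return "Refunds are processed within 14 days of the item arriving back with us."

    cases = [
        EvalCase("How long do refunds take?", must_mention="14 days"),
        EvalCase("Where do customers send returns?", must_mention="returns centre"),
    ]
    print(f"Pass rate: {evaluate(ask_assistant, cases):.0%}")  # track run over run
```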

The importance of human oversight 

For tech leaders, AI Operating Models can help address a range of concerns with AI. From regulation and transparency to ethics and compliance, these models provide a comprehensive framework for safe and responsible AI adoption, setting guardrails for better decision-making while ensuring AI’s benefits are widely shared. But for these guardrails to work, they require human oversight to ensure accuracy and compliance are maintained. Specialised models can be tuned for specific tasks, but fully autonomous AI systems remain a rarity, meaning human intervention continues to be necessary, particularly in sectors like healthcare and finance where sensitive data must remain secure.
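One simple way to build that oversight into a workflow is a review gate that decides whether an AI-generated output can be released directly or must first be checked by a person. The sketch below is illustrative only: the `SENSITIVE_TOPICS` list, the confidence score and the 0.9 threshold are assumptions, not part of any particular AI Operating Model.

```python
from dataclasses import dataclass

# Illustrative list of topics that always need a human check before release.
SENSITIVE_TOPICS = {"diagnosis", "credit decision", "payment dispute"}


@dataclass
class Draft:
    topic: str
    confidence: float  # assumed to be supplied by the model or a downstream scorer
    text: str


def route(draft: Draft, confidence_floor: float = 0.9) -> str:
    """Decide whether an AI-generated draft can be released directly or must be
    reviewed by a person for accuracy and compliance first."""
    if draft.topic in SENSITIVE_TOPICS or draft.confidence < confidence_floor:
        return "human_review"
    return "auto_release"


if __name__ == "__main__":
    print(route(Draft("credit decision", 0.97, "...")))  # human_review
    print(route(Draft("delivery update", 0.95, "...")))  # auto_release
```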

At a time when 38% of IT professionals say security concerns are the greatest challenge related to AI adoption, businesses have a responsibility to maintain trust in AI systems, and that starts with thorough data validation processes. These should verify data sources, maintain transparency over data provenance, and ensure the data used in training and inferencing is tagged and managed accurately. Such frameworks put the right guardrails in place for AI accountability and data integrity.
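As a rough illustration, the checks described above could take a shape like the sketch below, which rejects records that lack an approved source or the required provenance tags before they are used for training or inferencing. The approved source list, required tags and classification rule are hypothetical placeholders for whatever an organisation’s own data governance defines.

```python
from dataclasses import dataclass, field

APPROVED_SOURCES = {"crm_export", "support_tickets", "public_docs"}  # illustrative
REQUIRED_TAGS = {"classification", "owner", "collected_on"}          # illustrative


@dataclass
class DataRecord:
    source: str
    tags: dict[str, str] = field(default_factory=dict)
    content: str = ""


def validate_record(record: DataRecord) -> list[str]:
    """Return provenance and tagging problems; an empty list means the record
    is acceptable for training or inferencing under these assumed rules."""
    problems = []
    if record.source not in APPROVED_SOURCES:
        problems.append(f"unverified source: {record.source}")
    missing = REQUIRED_TAGS - record.tags.keys()
    if missing:
        problems.append(f"missing tags: {sorted(missing)}")
    if record.tags.get("classification") == "confidential":
        problems.append("confidential data must not enter shared training sets")
    return problems


if __name__ == "__main__":
    record = DataRecord(source="crm_export",
                        tags={"classification": "internal", "owner": "sales-ops"})
    for problem in validate_record(record):
        print("Rejected:", problem)  # here: the 'collected_on' tag is missing
```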

As AI continues to permeate the workforce and transform business practices, regulations like the EU AI Act will play a vital role in setting the standards for safe and responsible AI use. Businesses must continue to monitor developments and update their guidelines to stay ahead of potential risks. Setting up the necessary guardrails for human oversight, transparency, accountability and data integrity now, across all departments, is not just best practice; it is essential for protecting both businesses and the public from the unintended consequences of unchecked AI.

Author

  • Mahesh Desai is an experienced leader in the technology services industry. Over the years, he has been a key architect of growth and transformation at Infosys and CGI, driving growth across Europe, the USA and Asia and across industries including financial services, retail, telecommunications and the public sector, delivering organisational transformation, growth, increased profitability and operational control. As Head of EMEA Public Cloud at Rackspace Technology™, Mahesh leads the Public Cloud business across EMEA and its services across Application, Data & Security Services.
