Looking back, 2023 will undoubtedly be characterised as the year that AI arrived. The technology exploded into wider use and public consciousness, driven by generative AI platforms. It is touted as the catalyst to revolutionise sectors such as healthcare, banking, mass transit, and much more.
But perception doesn’t always equal reality. Governance is a daunting challenge as organisations grapple with the practical considerations of deploying AI at scale. Determining ownership and responsibility for AI decisions, particularly in critical applications, poses a significant hurdle that demands urgent attention. While the potential of AI is vast, the scarcity of standout use cases has become apparent. Many organisations find themselves navigating uncharted territory, exploring the bounds of AI’s capabilities, and struggling to identify applications that truly deliver value.
As we stand at the precipice of the AI era, it becomes imperative to understand not only its promise but also the practical steps needed to translate this potential into impactful, real-world applications.
2024 is the year of regulation
If 2023 was the year that AI arrived, then 2024 looks set to be the year of AI regulation. The growth and hype around AI mean we need new frameworks for governance and legislation built around the technology.
That process has already begun. The final version of the EU’s AI Act was agreed in December 2023 and is expected to come into effect in 2026. Under the act, certain applications, such as emotional recognition in the workplace, will be banned. Companies that fall foul of the regulations can expect to be hit with hefty fines linked to their global annual turnover.
The US and China, which alongside the UK make up the current trio of AI leaders, are also drawing up their own frameworks, with Joe Biden issuing an Executive Order in October 2023 to “ensure that America leads the way in seizing the promise and managing the risks of AI”. The EU, too, is keen to lead the way on AI legislation.
What does that mean for normal businesses (not those building their own multi-million-dollar large language models) exploring how they can leverage the power and potential of AI in their operations?
Firstly, ensure you know what is happening across your enterprise – who is using AI, which platforms are being used, and which of your applications have elements of AI. By auditing your use of AI, you’re putting your business in the best position to prepare for and comply with any new regulations and, if necessary, make changes.
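An audit like this is easier to act on if it is captured in a structured inventory rather than scattered spreadsheets. The sketch below is purely illustrative: the record fields and the sample entries are assumptions, not a standard schema, but they show the kind of register that makes it easy to answer "who is using AI, on which platform, and where is the regulatory risk?".

```python
from dataclasses import dataclass

# Hypothetical record of one AI touchpoint in the business.
# The fields are illustrative assumptions, not a standard schema.
@dataclass
class AIUsageRecord:
    team: str                   # who is using AI
    platform: str               # which platform or vendor provides it
    application: str            # which internal application embeds it
    data_categories: list[str]  # what kinds of data flow through it
    high_risk: bool             # flag uses likely to attract regulatory scrutiny

def audit_summary(records: list[AIUsageRecord]) -> dict:
    """Summarise the inventory: platforms in use and high-risk applications."""
    return {
        "platforms": sorted({r.platform for r in records}),
        "high_risk": sorted(r.application for r in records if r.high_risk),
    }

# Two made-up entries to show the register in use.
inventory = [
    AIUsageRecord("Marketing", "HostedLLM", "copy-assistant", ["public"], False),
    AIUsageRecord("HR", "VendorX", "cv-screening", ["personal"], True),
]
print(audit_summary(inventory))
```

Even a register this simple gives compliance and legal teams a single place to look when a new regulation lands, and the high-risk flag marks the uses to review first.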
Build taskforces of employees from across departments such as legal, finance, and HR, so that you’re ready for regulations and can comply with them quickly. For most businesses, regulation is not to be feared. When approached proactively, it can be a springboard for creativity, encouraging businesses to innovate within the bounds of ethical and legal considerations.
Getting the most out of Gen AI
There is a lot of noise and hype around Gen AI, and businesses are already getting hands-on to build concepts and business cases.
A recent Gartner poll of more than 2,500 executive leaders found that 45% reported that recent hype around language models has prompted them to increase AI investments. Seventy percent said their organisation is in investigation and exploration mode with generative AI, while 19% are in pilot or production mode.
We are already seeing some valuable use cases: multinational companies are using AI to scan and direct the tens of thousands of emails they receive daily. This triage-type system is saving thousands of hours. We also see back-end implementations augmenting human teams in finance, HR, and IT support.
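To make the triage idea concrete, here is a minimal sketch of the routing step. In a real deployment a language model would classify each message; here a simple keyword rule stands in for the model, and the queue names and keywords are invented for illustration.

```python
# Illustrative routing table: keywords standing in for a trained classifier.
ROUTES = {
    "invoice": "finance",
    "refund": "finance",
    "password": "it_support",
    "payroll": "hr",
}

def triage(subject: str) -> str:
    """Route an email to a team queue based on its subject line."""
    lowered = subject.lower()
    for keyword, queue in ROUTES.items():
        if keyword in lowered:
            return queue
    return "general"  # fall back to a human-reviewed queue

print(triage("Invoice 4412 overdue"))  # finance
print(triage("Reset my password"))     # it_support
```

The design point survives the simplification: anything the system cannot confidently route falls back to a human-reviewed queue, which is what keeps a triage system trustworthy at scale.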
But we are only scratching the surface. To get the most out of AI, a structured approach is needed. Building or integrating AI-based applications is great, but people across your business – especially end users – must be involved from the beginning. Build out cross-functional teams to define pain points in your organisation and work from there.
By doing so, businesses can mitigate the risk of introducing AI unnecessarily, an easy mistake to make given the current discourse. Engaging stakeholders from every area of the organisation yields valuable insights into specific needs, challenges, and opportunities for AI to enrich their work. This collaborative approach ensures that AI solutions are tailored to address real-world problems and contribute meaningfully to business goals.
Building trust in Gen AI applications in your business
A unified approach also builds trust and familiarity. And trust is very much needed in a world where the misrepresentation of AI could have far-reaching consequences.
One practice jeopardising this trust, and one that consulting companies are increasingly guilty of, is repackaging traditional automation as sophisticated AI. Genuine modern AI involves systems that can perform tasks, make decisions, and learn from experience in a way that simulates human intelligence, drawing on complex algorithms, machine learning, and deep neural networks to achieve cognitive-like functions. While those creating a facade of this level of innovation and technological advancement may initially attract more clients and get a leg up on the competition, in the long run it’s a dangerous game.
It not only misrepresents the capabilities of the technology but also inflates expectations, leaving clients and end users disappointed when actual performance falls short. All of this has the potential to erode the technology’s credibility and hinder its broader adoption.
Trust, ultimately, is the deciding factor in the success of any technology implementation. This also applies across the workforce, which is why bringing them on the journey is just as important as advocating for ethical AI practices. Involving end users from the get-go means that when the tools and applications are put in their hands, they know how to get the best out of them and how they can make their lives easier.
Millions, perhaps even billions, of dollars have been frittered away in recent years by businesses implementing technologies across their teams without telling their people how to use them or finding out if they need them. Most of us are creatures of habit, so if we can’t see the value in a new technology or tool we’ll soon forget about it and revert to old habits.
We can’t allow that to be the case with AI. Its potential is too great.