AI Regulation: A New Challenge for Businesses

Artificial intelligence (AI) is growing rapidly, with 9 in 10 leading businesses (across 112 countries) using some form of the technology in their operations. Consequently, use cases are constantly emerging, with businesses in every industry taking advantage of enhanced analytics and reduced manual workloads to boost productivity.

In today’s evolving environment, businesses must implement AI to remain competitive. As with any innovation, companies need to know how to do so while remaining compliant with the regulations likely to impact their business. Although AI regulation is still in its early stages, businesses must plan now to prepare for the requirements they may become subject to.

Rules around AI usage will likely relate to its development and implementation, but they should also address industry-specific use cases. This will be particularly relevant to content moderation, which focuses on removing illegal, irrelevant, or harmful material from online platforms.

We are already seeing an increase in AI-generated images, many of them age-restricted or even illegal. AI-generated deepfakes have increased by 780% across Europe in the past year, with well-known figures and law enforcement highlighting the impact of harmful AI-generated content. However, it is still unclear how legislators will cover content moderation in upcoming regulations. These changes will likely be significant, so businesses must monitor them closely. In the meantime, they will need to put tools in place to identify and remove illegal content, including AI-generated material, and work with like-minded businesses to tackle the issue.

The Latest on AI Regulations

With AI gaining traction, legislators across the globe will likely announce how they plan to regulate the technology. The EU’s Artificial Intelligence Act is a major step towards a regulated AI environment. Voted in on the 13th of March 2024, it is the first piece of regulation from a major legislative body and is expected to set the global standard in this area. The law sorts AI uses into three categories: those creating unacceptable risk, high-risk uses, and other uses, which are largely left unregulated. Although generative AI is not considered high risk under the new legislation, it must comply with transparency requirements and EU copyright law.

Although the new AI Act is breaking ground, it will likely be followed by similar legislation worldwide. To help EU businesses prepare for implementation, the regulator has set up a tool that provides insights on regulatory obligations.

This tool outlines the businesses and organisations subject to the legislation, based on a definition of AI and the role of the entity. Risk is determined by how AI is used and what the organisation does. Companies can therefore learn whether they will face obligations when the newly approved regulation comes into force, although the exact timings are still unclear.

EU businesses should study this legislation closely to understand its impact and scope. Businesses operating beyond the EU should also monitor how they are affected: now that this legislation is approved, it will likely inform future laws not just across the EU, but also in the UK, US, and beyond.

Unlike the internet, where regulatory action is still being debated years after the inception of the technology, we can expect AI legislation to move more quickly. Regulators, the business community, and society in general have learned lessons from the internet, which was left mostly unregulated to allow for innovation and businesses to flourish. However, the debate around online safety continues and therefore organisations must work with experts and like-minded companies to build a common understanding of the practical solutions available to them. A universal standard of thinking will likely be established, built from the reality of AI implementation, the knowledge of third parties, and regulatory necessities.

Making Business Preparations

As regulation is approved, business leaders must keep an eye on the key documents, timelines, and implications of upcoming AI legislation. Moreover, they must assess how they are deploying, or expect to deploy, AI in their organisations to understand their compliance obligations. As with any regulation, compliance likely comes with a monetary cost for businesses to factor in. For high-risk organisations, especially small and medium-sized enterprises, outsourcing these solutions to experts is likely more cost-effective than developing and implementing technology in-house. Buying complete solutions from vendors limits the need for in-house expertise and ensures that AI is used in an explainable way, which is crucial when faced with regulatory scrutiny.

For organisations impacted by regulatory changes, a thorough understanding is essential, as is staying informed on relevant regulations and guidelines in every market. For some business leaders, this task may seem daunting. However, there are simple steps to stay up to date.

Firstly, business leaders should engage with peers, both in their industry and the wider ecosystem, to learn about new developments. These peers, often subject matter experts, can help address real issues relating to the implementation of regulation.

Businesses in regulated spheres should also perform regular audits and risk assessments to understand their AI systems, compliance, and risk. As a foundation, businesses should maintain documentation of policies, procedures, and decision-making processes, which can serve as evidence of compliance or provide transparency to regulators and partners. For a balanced view, these risk assessments can be conducted by third parties with broader regulatory experience.

Leaders must also educate all employees involved in AI development and deployment. Thorough training will ensure that these individuals understand their responsibilities around compliance and ethical AI use. By establishing this groundwork, businesses can implement continuous improvement practices and address emerging challenges proactively, strengthening AI governance through feedback, lessons learned, and emerging best practices.

These practices will likely differ based on sector, organisation, company size, and function. In content moderation, however, it means aligning with like-minded businesses to implement solutions that identify age-restricted and illegal AI-generated content so that action can be taken accordingly.

Author

  • Lina Ghazal

Lina is a Trust & Safety professional with over a decade of experience in media and tech, in both the public (Ofcom) and private (TF1, Meta) sectors. She is currently leading regulatory and public affairs for safety tech provider VerifyMy. Lina is an expert in building large-scale policy initiatives and partnerships, and has led engagements on online regulation with diverse groups of stakeholders and regulators across Europe, the Middle East, Africa, and the US. Lina studied Law, Economics and Management at Ecole Normale Superieure and Queen Mary University of London. She holds a Master of Finance & Corporate Strategy from Sciences Po Paris.
