Why Responsible AI Needs Planning Permission

Every organisation that begins to integrate AI into the fabric of its business must first put in place a solid framework that aligns AI innovation with social responsibility. Just as a construction project requires meticulous planning and adherence to regulations, developing and deploying AI responsibly demands a similar approach.

While the tools and materials differ, both must obtain permissions that address safety and ethical considerations. Arguably, AI marks the first time in history that the potential consequences of a technology demand such in-depth philosophical and sociological discussion.

Drawing up a blueprint

Responsible development, whether of a physical structure or an AI project, must begin with a well-defined purpose. For a construction project, this translates to detailed blueprints outlining the building's structure, size and, most importantly, its effect on the surrounding environment.

Similarly, AI development also requires a clear purpose, identifying the specific problem it aims to solve and its potential impact on society. This blueprint will serve as the guiding principle for the entire development process, prompting questions about unforeseen or unintended outcomes right from the get-go.

Starting with proper digital foundations in place will support the deployment of responsible, effective AI and pay dividends for many years to come. Alternatively, rushing into AI implementation without forethought could be highly damaging and costly to rectify.

Taking heed of government guidelines

Helping to guide responsible development worldwide, governments are introducing new legislation and outlining standards. In the UK, the AI Regulation White Paper sets out a clear roadmap for agile AI regulation and guidelines for using the technology. It supports regulators with the skills and funding they need to address the risks and opportunities of AI, aiming to promote safe and responsible innovation.

The EU has also been setting out its stall. Member states face the upcoming AI Act, which seeks to promote trustworthy development. It categorises applications by risk, meaning that high-risk applications in areas such as recruitment will face stricter scrutiny. Non-compliance will lead to substantial fines and the possible prohibition of AI models that may cause harm.

Strengthening internal governance

Meeting this growing set of obligations will require serious commitment from organisations at CEO and board levels. There’s no doubt it’s going to take considerable effort and coordinated planning, requiring input from all angles, across multiple teams, stakeholders, users, and experts.

Worryingly, a recent survey by Conversica found that just 6% of respondents have established clear ethical guidelines for AI use, despite 73% noting that AI guidelines are indispensable. Couple this with the temptation to race ahead with deployments driven by the fear of missing the boat, and business leaders are going to need to exercise a level of restraint to ensure that due diligence is carried out.

From the outset, responsible AI must begin with internal policymaking, ensuring uniformity across the entire organisation. Governance throughout the process of planning and execution will need support from many parties, both internal and external, to ensure that AI technology is implemented correctly and fairly.

Building trust with stakeholders

Creating AI compliance management teams, with a head for each department, is important for maintaining oversight of all projects. These teams will play a pivotal role in ensuring that employees have an opportunity to contribute to building the initial AI framework, as well as in promoting an understanding of how the technology might affect jobs and working practices.

Compliance processes should include thorough risk assessments of proposed projects to highlight issues such as potential biases and privacy violations before plans are finalised. Recognising challenges upfront can help clarify the purpose of a project and whether it should get the green light or if plans need to be redrawn. 
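Such a pre-approval gate can be made concrete. The sketch below, in Python, shows one hypothetical way to score a proposed project against a risk checklist before it is finalised; the checklist items and the pass/fail rule are illustrative assumptions, not a prescribed standard.

```python
# Illustrative sketch of a pre-launch risk gate: a proposed AI project is
# checked against a compliance checklist and blocked if any item is
# unresolved. The checklist items below are hypothetical examples.

RISK_CHECKLIST = [
    "bias assessment completed",
    "privacy impact assessment completed",
    "stakeholder consultation held",
    "remediation plan documented",
]

def review_project(completed_items: set[str]) -> tuple[bool, list[str]]:
    """Return (approved, outstanding items) for a proposed project."""
    outstanding = [item for item in RISK_CHECKLIST if item not in completed_items]
    return (len(outstanding) == 0, outstanding)

# A project with two items still open does not get the green light
approved, gaps = review_project({
    "bias assessment completed",
    "privacy impact assessment completed",
})
print("green light" if approved else f"redraw plans: {gaps}")
```

In practice the checklist would be far longer and maintained by the compliance team, but the principle is the same: no project is finalised while any assessment remains outstanding.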

Akin to getting planning permission to ensure that a construction project aligns with regulations and community needs, responsible AI requires a painstaking approach to legislation, as well as consultation with employees, customers, other stakeholders, and the wider society. Developing considerately and transparently will foster good relations and long-term trust.

Determining if AI is fair and fit for purpose

Eliminating bias and establishing fairness will be fundamental aspects of many AI models. Judging whether AI is fit to make decisions without human oversight is critical.

Take for example a job applicant who has a difficult background (such as a criminal record) but has since turned their life around for the better and has the most relevant qualifications of all the applicants. How does AI evaluate the risks involved and whether to approve or reject this potential candidate? The definition of tasks like these and the variables involved will need to be carefully determined to decide whether decisions can be fully managed by AI, or rely partly or entirely on human judgment. 

Many organisations will struggle to find the internal resources to validate that their solutions act responsibly and are appropriate for the use case. Turning to external providers and tools will help ensure impartiality, measure the fairness of AI applications and, critically, highlight when technology is not fit for the use case or likely to cause harm.
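To make "measuring fairness" less abstract, the Python sketch below computes one common group-fairness signal, the demographic parity gap, for a hypothetical hiring model. The data, group labels, and tolerance threshold are all invented for illustration; real fairness tooling would apply many more metrics.

```python
# Illustrative sketch: a simple group-fairness check for a hiring model.
# The decision data and the 0.2 tolerance below are hypothetical.

def selection_rate(decisions):
    """Fraction of candidates approved (decision == 1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_group_a, decisions_group_b):
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(decisions_group_a) - selection_rate(decisions_group_b))

# Hypothetical model outputs for two applicant groups (1 = approve)
group_a = [1, 0, 1, 1, 0, 1, 1, 0]   # selection rate 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # selection rate 0.25

gap = demographic_parity_gap(group_a, group_b)
if gap > 0.2:  # tolerance chosen for illustration only
    print(f"Fairness review needed: parity gap {gap:.3f}")
```

A large gap does not prove the model is unfair, but it is exactly the kind of signal that should trigger a deeper, human-led review of whether the application is fit for purpose.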

Minimising risks and planning remediation

Even the best-laid plans won’t guarantee one hundred percent positive outcomes from AI deployments. So, organisations must be accountable when things go wrong. 

Building trigger points for intervention into processes will help minimise risks, mitigating potential issues before they have serious ramifications. Nevertheless, organisations should always be open about the limitations of AI systems and their risks, explaining what safeguards are in place.
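A trigger point can be as simple as a routing rule that pauses automation whenever a decision is high-impact or the model is unsure. The Python sketch below shows one hypothetical form of this human-in-the-loop check; the confidence threshold and field names are assumptions for illustration.

```python
# Illustrative sketch: built-in trigger points that route low-confidence or
# high-impact AI decisions to a human reviewer. All thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str          # the model's proposed decision
    confidence: float     # model confidence in [0, 1]
    high_impact: bool     # e.g. affects employment, credit, or health

def route_decision(d: Decision, min_confidence: float = 0.9) -> str:
    """Return 'auto' to apply the AI decision, or 'human' to escalate."""
    if d.high_impact or d.confidence < min_confidence:
        return "human"    # trigger point: pause for human review
    return "auto"

# A routine, confident decision is applied; a high-impact one is escalated
print(route_decision(Decision("approve", 0.97, high_impact=False)))  # auto
print(route_decision(Decision("reject", 0.95, high_impact=True)))    # human
```

The design choice here is that impact, not just confidence, triggers escalation: even a very confident model should not reject a job applicant without a human in the loop.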

Even when mistakes are made, transparency is key. Pre-prepared plans should be ready to kick into action to inform stakeholders, respond to feedback, and manage concerns. There should be clear remediation steps for dealing with incidents covering investigation, containment, and mitigation. Follow-up steps might include data deletion or compensation to customers; all aspects should be thought through in advance.

Establishing a template for the future

There is no single right answer for how to be compliant and responsible. But without regulation and accountability, incidents like the recent deepfake fraud case in Hong Kong could become a regular occurrence.

Companies will take different approaches depending on their size and industry. New job roles will emerge for ensuring uniform use of AI throughout a company and more tools will be developed to gauge whether AI use is compliant.

However, organisations should never see responsible AI as a one-time effort. Creating a culture of learning and improvement, and embracing feedback from stakeholders will assist in continuously refining AI practices. Additionally, following industry standards and engaging with peer groups will help businesses adapt safeguards to keep up with new AI developments.

Like every successful building project, doing the groundwork now to build AI responsibly demonstrates a conscientious approach that will benefit stakeholders and the community alike – and provide the template for successful deployments in the future.

Author

  • Hana Rizvić

    Hana Rizvić is Head of AI/ML at Intellias. She is a data scientist and AI/ML leader with over eight years of experience. Having a solid mathematical background, she started her career path in forecasting and machine learning. For the past five years, Hana has held leadership roles across several industries, including oil & gas, energy, insurance, and healthcare. She is passionate about managing and organizing teams and working on strategies for growth.
