Four researchers from the University of Warwick, Imperial College London, EPFL (Lausanne), and Sciteb Ltd have identified a mathematical means, a ‘simple formula’, to help regulate AI systems’ bias towards making unethical choices, in a paper titled ‘An unethical optimisation principle’.
AI systems are increasingly being used across industries, from fashion and insurance to aerospace and defence. Gartner predicted that AI-derived business value would reach $4 trillion by 2022, and the risks and problems that come with using AI systems are growing alongside it.
The research paper proposes a mathematical formula that companies using AI systems can apply to reduce the unethical commercial decisions those systems make, which could lead to lower risk and costs.
The aim of the paper is to provide a quantitative connection between economics, financial regulation, and AI ethics, with a proposed formula offering a basis for identifying and resolving unethical behaviour in AI systems.
Professor Robert MacKay of the Mathematics Institute at the University of Warwick explained how their principle could help: “Our suggested ‘Unethical Optimization Principle’ can be used to help regulators, compliance staff, and others to find problematic strategies that might be hidden in large strategy space. Optimisation can be expected to choose disproportionately many unethical strategies, an inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them in the future.”
An AI system has a wide range of potential strategies and decisions to choose from, a number of which are unethical; if one of these is selected, it has the potential to incur an added cost for the business.
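The dynamic MacKay describes, that optimisation over a large strategy space disproportionately selects unethical strategies even when they are rare, can be illustrated with a small simulation. This is a hedged sketch, not the paper's model: the strategy counts, the 2% unethical fraction, and the return "edge" for unethical strategies are all illustrative assumptions.

```python
import random

random.seed(0)

def simulate(n_strategies=1000, unethical_fraction=0.02, edge=1.0, trials=1000):
    """Estimate how often a naive optimiser picks an unethical strategy.

    Each strategy's observed return is a draw from a normal distribution;
    unethical strategies get a small extra expected return (`edge`).
    All parameter values are illustrative assumptions, not from the paper.
    """
    picked_unethical = 0
    for _ in range(trials):
        best_return, best_is_unethical = float("-inf"), False
        for _ in range(n_strategies):
            unethical = random.random() < unethical_fraction
            ret = random.gauss(edge if unethical else 0.0, 1.0)
            # A naive optimiser keeps whatever returns the most, ethics aside.
            if ret > best_return:
                best_return, best_is_unethical = ret, unethical
        if best_is_unethical:
            picked_unethical += 1
    return picked_unethical / trials

rate = simulate()
print(f"Unethical strategies are 2% of the space but chosen {rate:.0%} of the time")
```

Even though only 2% of strategies are unethical, the optimiser selects one far more often than 2% of the time, because taking a maximum amplifies any systematic advantage.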
The paper highlighted further costs of using AI systems: regulators may levy significant financial fines, and customers may boycott the company, resulting in yet another financial cost.
The New York Times has highlighted that AI-based facial recognition can work well if you are white, but for people of other ethnicities it becomes markedly less accurate and more prone to biased judgements, resulting in an unethical AI system.
This leaves the question: how can you build an unbiased AI system?
Companies such as Experian and Equifax create and sell identity profiles, a practice that creates ethical risks and considerations for the businesses using that data.
The paper goes on to note the scale of the cost of misconduct: penalties levied on banks are currently estimated at over $276bn, according to The Financial Times.
In an environment in which decisions are increasingly made without human intervention, there is a strong incentive to know under what circumstances AI systems might adopt an unethical strategy, so that the risk can be reduced, mitigated, or eliminated entirely.
“The Principle also suggests that it may be necessary to re-think the way AI operates in very large strategy spaces so that unethical outcomes are explicitly rejected in the optimization/learning process,” MacKay added.
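One simple way to realise what MacKay describes is to reject flagged strategies inside the optimisation step rather than filtering results afterwards. The sketch below is a minimal illustration under stated assumptions, not the authors' algorithm: `is_unethical` stands in for a hypothetical ethics check supplied by compliance staff or a regulator-approved classifier.

```python
def optimise(strategies, value, is_unethical):
    """Return the highest-value strategy that passes the ethics check.

    `value` scores a strategy; `is_unethical` is an assumed predicate
    standing in for a compliance rule or classifier (hypothetical here).
    """
    # Unethical strategies never enter the comparison, so the optimiser
    # cannot be drawn to them no matter how well they score.
    admissible = (s for s in strategies if not is_unethical(s))
    return max(admissible, key=value, default=None)

# Toy example: strategies are (expected_return, flagged_as_unethical) pairs.
strategies = [(1.2, False), (3.5, True), (2.8, False), (4.0, True)]
best = optimise(strategies, value=lambda s: s[0], is_unethical=lambda s: s[1])
print(best)  # the unconstrained optimum (4.0) is flagged, so (2.8, False) wins
```

The design point is that the constraint lives in the search itself: post-hoc filtering of an already-chosen strategy would still leave the system optimising towards the flagged region of the space.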
The paper’s four authors, a mix of mathematicians and statisticians, are Nicholas Beale, Heather Battey, Anthony C. Davison, and Professor Robert MacKay, each of whom took a different role in deriving the equation and publishing the paper.