Future of AI

The imperative of AI governance 

AI has surpassed human abilities in many domains that require complex reasoning, strategic planning, and knowledge mastery. It can pass professional exams in law and accounting and, in some studies, diagnose diseases more accurately than most doctors.

Some experts assert that AI will soon surpass humans in most complex thinking and rational decision-making tasks. They also contend that AI is exhibiting increasingly human-like behaviours and emotions. Moreover, some speculate that Artificial General Intelligence (AGI), the ability of machines to perform any intellectual task a human can, is imminent.

However, three obstacles threaten to hamper AI’s enormous potential: a lack of trust in the technology, confusion between human and artificial intelligence, and the absence of collaboration between institutions and businesses on AI governance.

Let’s look at the current risks of AI, how institutions, particularly the EU, have responded to these challenges, and why human and artificial intelligence are two separate domains that can complement each other.

AI and its emerging risks

The public is wary of technology, and this should not be overlooked amid the remarkable innovations and efficiency gains that AI offers. This distrust has been building for a long time: events such as the Cambridge Analytica scandal or the use of location data to track women seeking reproductive healthcare have eroded citizens’ confidence. Moreover, the dangers of AI have become more evident in recent years. They include:

Discrimination and bias: AI systems can reflect or worsen existing prejudices and stereotypes, or create new ones, when they rely on biased data, algorithms, or human decisions. This can harm certain groups or individuals, especially those who are already marginalised or vulnerable. In fact, a recent study shows that AI not only reproduces existing bias but also strengthens it (a minimal sketch after this list illustrates one simple way such disparities can be surfaced).

Privacy and data protection: AI systems can collect, process, and share large amounts of personal data, often without the consent or awareness of the data subjects. This can expose people to data breaches, identity theft, surveillance, or manipulation, as well as violate their right to privacy and dignity. 

Safety and security: AI systems can malfunction, be hacked, or be misused, causing physical or psychological harm, property damage, or harm to the environment. This can also erode public trust and confidence in the technology and its providers.

Transparency and explainability: AI systems can operate in complex ways, making it hard or impossible to understand how they work, why they make certain decisions, or who is responsible for them. This can impair the ability of people to challenge, contest, or appeal AI decisions that affect them, as well as to hold the developers or users of AI accountable for any negative consequences. 

Human agency and autonomy: AI systems can influence, persuade, or coerce people to think or act in certain ways, or replace human decision-making or judgment in various domains. This can affect the freedom, dignity, and self-determination of people, as well as their social and emotional well-being. 
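To make the bias risk concrete, here is a minimal sketch, in Python, of one simple check an organisation might run on an AI system’s outputs: comparing positive-outcome rates across groups (a demographic-parity gap). The decision data, group labels, and threshold of concern are hypothetical and purely illustrative.

# Minimal illustration of surfacing a bias risk: comparing positive-outcome
# rates across groups. The data below is hypothetical, for illustration only.
from collections import defaultdict

def selection_rates(decisions):
    # decisions: iterable of (group_label, approved: bool) pairs
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                                 # {'group_a': 0.75, 'group_b': 0.25}
print(f"demographic parity gap: {gap:.2f}")  # 0.50 in this toy example

A large gap does not prove discrimination on its own, but it is exactly the kind of signal that should trigger further investigation and documentation before a system is deployed or kept in use.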

Institutional response to AI risks: EU AI Act 

To prevent these risks, a proactive and preventive approach to AI governance is needed, one that ensures AI is developed and used in line with the fundamental values and principles of our society. The European Union has just approved the EU AI Act, the first comprehensive attempt at AI-specific regulation. The legislation adopts a product-safety, risk-based approach and targets AI systems that violate the EU’s values and principles, such as those that manipulate human behaviour, exploit vulnerabilities, or cause mental harm. It also aims to ban AI systems used for social scoring or mass surveillance by public authorities.

Consequently, those who create or significantly modify high-risk AI systems (systems that can lock people out of services or hiring fall into this category) must complete a conformity assessment. This is not a new regulatory instrument but a set of documents demonstrating compliance with existing requirements, such as privacy, transparency, robustness, safety, and human oversight. However, there are some worrisome exceptions to this requirement, such as when the AI system does not ‘substantially influence’ decision-making.

The EU AI Act is still a work in progress, with many open questions and a long implementation period. To ensure that AI is aligned with the EU’s values and principles, the EU AI Office, the national regulators, and the private sector must collaborate effectively. This collaboration should focus on three aspects: building public trust in AI, establishing best practices for AI development and use, and innovating on how to integrate technical and legal requirements, especially in areas like privacy by design. 

Education on Artificial vs Human Intelligence 

Artificial intelligence and human intelligence are distinct capabilities that can enhance each other. Governments and businesses alike must devise clear strategies to help citizens understand this distinction; doing so would ease many problems, from anxiety over job losses to uncertainty about education and training opportunities.

Humans have cognitive and computational constraints that limit their rationality and decision-making. They are prone to ignoring some data while favouring other data, and they fall victim to various cognitive biases. These weaknesses give computers an edge in certain cognitive tasks: computers can store and process huge amounts of data in sophisticated ways, which is the foundation of AI.

However, researchers comparing large language models with human language learning have found that, in important respects, human cognition operates theory-first, “top-down”, rather than “bottom-up” from data. Human cognition is forward-looking and driven by theory-based causal logic, which differs from AI’s emphasis on data-based prediction.

In essence, humans retain unique strengths that complement AI, such as creativity, intuition, and ethical judgment, so human-AI hybrid systems can offer real advantages for strategic decision-making. Collaboration, not replacement, is the way forward.

Invest in governance to harness AI

To make the most of AI, businesses need to have strong AI governance in place that goes beyond mere compliance and addresses the ethical, legal, and social implications of their AI solutions. These are the essential components of an effective AI governance framework: 

Firstly, make sure all employees know what AI is. With tools such as ChatGPT already in widespread use, businesses should begin training their workforce in the fundamentals of AI now.

Secondly, organisations should map the requirements that already apply to them and upskill their teams accordingly. For example: what does privacy mean in an AI context? What does privacy by design look like in AI?

Thirdly, organisations should bring lawyers and engineers together; AI governance is largely about developing a shared language. For example, it is hard to deliver on privacy commitments without privacy engineering in the code itself. Likewise, to ensure transparency and explainability of data and algorithms, organisations should use tools and techniques such as documentation, metadata, annotations, and traceability (a minimal sketch follows this list).

Finally, organisations must involve external stakeholders – customers, suppliers, regulators, and civil society groups – to collaboratively design and examine AI solutions. 
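As a concrete illustration of the documentation and traceability point above, here is a minimal sketch, in Python, of machine-readable model documentation kept alongside a deployed system. The model name, field names, and values are hypothetical and do not follow any specific standard; the point is simply that intended use, data sources, known limitations, and human oversight can be recorded in a structured, auditable form.

# A minimal sketch of machine-readable model documentation ("model card"-style
# metadata). All names and values are hypothetical and illustrative only.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class ModelRecord:
    name: str
    version: str
    intended_use: str
    training_data_sources: list[str]
    known_limitations: list[str]
    human_oversight: str                 # who can override or appeal a decision
    last_reviewed: str = field(default_factory=lambda: date.today().isoformat())

record = ModelRecord(
    name="credit_risk_scorer",           # hypothetical model name
    version="1.4.2",
    intended_use="Pre-screening of loan applications; final decision by a human reviewer.",
    training_data_sources=["internal_applications_2019_2023"],
    known_limitations=["Under-represents applicants under 21"],
    human_oversight="Applicants may request manual review within 30 days.",
)

# Store this alongside the model artefact so decisions remain traceable.
print(json.dumps(asdict(record), indent=2))

A record like this can be version-controlled together with the model artefact, so that anyone reviewing a decision later can trace which data, assumptions, and oversight arrangements were behind it.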

Conclusion 

Deploying and governing AI is a complex and challenging process that requires clarity and focus from the start. Existing laws and regulations already apply to AI in various domains – privacy, non-discrimination, human rights, liability, intellectual property, and labour – and it is essential to understand how these legal frameworks affect individual AI use cases and to comply with them. New legislation is likely to emerge as AI evolves and becomes more widespread, so cooperation with regulators and governments will be crucial, especially in communicating clearly what AI can and cannot do. We will all have to work together to build trust and foster sustainable growth.
