
Navigating the risks of AI: the 'Bring Your Own AI' culture

Artificial Intelligence (AI) is a technological innovation that has dramatically transformed the way businesses operate. Now widely adopted, AI offers numerous advantages, from overcoming writer’s block and summarizing emails to helping create presentations.

However, businesses must be aware of the risks of AI. One such risk is associated with Bring Your Own AI (BYOAI), where employees use external AI services to accomplish company-related tasks without obtaining official approval.

In practice, BYOAI covers any form of external AI service an employee uses for work-related tasks, whether or not the company sanctions it. This could include AI-infused software, AI creation tools, or cloud-based application programming interfaces (APIs).

Many of these tools are publicly accessible and, however innocent the intent, could inadvertently release company IP outside the safe boundaries of the enterprise.

The Risks of BYOAI 

The uncontrolled use of BYOAI may expose companies to several risks, including: 

1. Data Loss and Intellectual Property Leakage: Employees using unapproved AI tools may accidentally leak sensitive company data, leading to potential data breaches. Competitors could gain an unfair advantage if proprietary information such as trade secrets or product designs is leaked through AI tools.

2. Legal Risks: The misuse of AI can result in legal risks related to bias and discrimination. Anti-discrimination laws exist to protect individuals based on factors like race, gender, and age. Biases in AI that disproportionately harm certain groups can violate these laws.  Courts are increasingly recognizing “algorithmic bias,” where the very design of an AI system leads to discriminatory outcomes. This opens the door to lawsuits seeking compensation for those harmed. 

3. Security Risks: Unsanctioned use of third-party AI applications can expose the organisation to malware and phishing attacks. Sensitive information can be unintentionally included in prompts, uploads, or outputs from AI tools. For instance, an employee might anonymize data for an AI analysis but miss a crucial detail that reveals identities, as the sketch after this list illustrates.
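
To make that anonymization pitfall concrete, here is a minimal Python sketch of how a record stripped of direct identifiers can still point to a single person. All data and field names below are invented for illustration, not drawn from any real case.

```python
# Hypothetical illustration of the anonymization pitfall described above:
# the direct identifier is removed, but the remaining quasi-identifiers
# may still single out one person. All data below is invented.

record = {
    "name": "[REDACTED]",                    # direct identifier removed...
    "job_title": "Chief Financial Officer",  # ...but this combination of
    "employer": "Acme Ltd",                  # quasi-identifiers may still
    "postcode_area": "SW1",                  # describe exactly one person
}

# A prompt built from the "anonymized" record still reveals who it is about.
prompt = (
    f"Analyse the expense claims of the {record['job_title']} "
    f"at {record['employer']} ({record['postcode_area']})."
)
print(prompt)
```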

What are the typical causes of BYOAI?

Employees who are unaware of company policies, frustrated by the limitations of authorized tools, or simply seeking the easiest option may turn to personal AI tools for work tasks.

These tools lack the robust security protocols and data protection measures found in vetted options, and safeguards to prevent IP leaking to the outside world may well not be in place. Potential consequences include data breaches, intellectual property leaks, and regulatory non-compliance.

So how can companies navigate this trend? Job persona mapping is key. By understanding the specific needs and tasks of different roles (e.g., accountant vs. marketing specialist), organizations can provide the right AI tools for the job. This ensures employees have the functionalities they need without resorting to BYOAI options. It’s important to note, however, that persona mapping missteps can be costly. 

By carefully mapping job personas and providing targeted AI tools, companies can empower their workforce while mitigating the risks associated with BYOAI. This approach unlocks the potential of AI while ensuring a secure and efficient work environment.
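
As a minimal sketch of what persona mapping might look like in practice (all persona and tool names here are hypothetical examples, not recommendations):

```python
# Minimal sketch of a job-persona-to-approved-AI-tool mapping.
# Persona and tool names are hypothetical examples for illustration.

APPROVED_TOOLS_BY_PERSONA: dict[str, list[str]] = {
    "accountant": ["spreadsheet-copilot", "document-summariser"],
    "marketing-specialist": ["copy-assistant", "image-generator"],
    "software-engineer": ["code-completion-assistant"],
}

def approved_tools(persona: str) -> list[str]:
    """Return the AI tools sanctioned for a given persona.

    An empty list is itself a useful signal: it flags a role whose
    needs have not been mapped and whose holders are therefore the
    most likely candidates for unsanctioned BYOAI.
    """
    return APPROVED_TOOLS_BY_PERSONA.get(persona, [])

print(approved_tools("accountant"))      # ['spreadsheet-copilot', ...]
print(approved_tools("data-scientist"))  # [] -> an unmapped persona
```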

Mitigating BYOAI Risks 

To mitigate the risks associated with BYOAI, companies must enforce strict BYOAI policies. These should guide employees on the approved use of AI in the workplace and clearly outline the potential risks and consequences of misuse. Additionally, tech leaders need to involve other teams, such as security and data governance, in developing these policies. 

The level of control around generative AI applications varies significantly across industries. Highly regulated sectors like finance and healthcare tend to be more cautious, with some companies opting for a complete ban on AI tooling.

By contrast, entrepreneurial fields like technology favour a more nuanced approach.

Only a small proportion of organizations in these sectors enact a complete block. Instead, they rely on Data Loss Prevention (DLP) controls to identify sensitive information (source code, personal data) being uploaded to AI applications.
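
As an illustration of the shape of such a control, the sketch below scans outbound text for the two kinds of sensitive information mentioned above before it reaches an external AI application. Real DLP products use far richer detection (classifiers, document fingerprinting, exact data matching); the patterns here are deliberately simplified assumptions.

```python
import re

# Illustrative sketch of a DLP-style pre-upload check, loosely modelled
# on the controls described above. The patterns are deliberately simple
# examples, not production-grade detection rules.

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk national insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "likely source code": re.compile(r"\bdef |\bclass |#include|\bimport "),
}

def scan_before_upload(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarise this: def charge(card): ...  Contact bob@example.com"
findings = scan_before_upload(prompt)
if findings:
    # A gateway could block the upload or warn the employee here.
    print("Upload flagged, matched:", ", ".join(findings))
```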

In short, tools and policies can do their part, but securing the human is just as important: deploying tools and educating employees calls for a well-structured organisational change management regime.

Generative AI  

Generative AI, a subset of AI comprising models that can generate new content, has seen a rapid rise in popularity. Examples include OpenAI’s ChatGPT and DALL-E 2. While these tools can enhance productivity, uncontrolled use can lead to security and privacy violations.

Workplace Policies for AI 

The use of AI in the workplace has prompted many organisations and governments to implement policies aimed at safeguarding against the potential risks of AI. These policies typically focus on protecting sensitive company information and ensuring the ethical use of AI. 

The Role of HR in Implementing AI 

HR professionals play a crucial role in overseeing the implementation of AI in the workplace. This includes communicating company values around the use of AI, implementing AI policies, and providing training to employees on the approved use of AI tools. 

HR Departments involved in the Organisational Change Management process can assist in the following ways: 

  • Identifying AI Skills Gaps: Analysing the skill sets needed to work effectively alongside AI tools and developing training programs to bridge any gaps within the workforce.
  • Building a Culture of AI Literacy: Fostering a culture where employees understand the potential and limitations of AI, promoting responsible and ethical use of these technologies.
  • Supporting a Human-AI Partnership: Emphasising that AI tools augment human capabilities rather than replace them, and encouraging collaboration between humans and AI for optimal results.

Looking ahead  

As AI continues to evolve, the concept of BYOAI will likely become more prevalent.  

However, only by proactively developing and implementing BYOAI policies, providing training to employees, and collaborating with regulatory bodies can businesses ensure the ethical and secure use of AI.  

Remember to take the first step by issuing an approved AI tool list that explicitly restricts AI tools which handle sensitive data and lack robust security measures. This not only encourages employees to use approved options first but also protects the business’s long-term integrity.
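
One hypothetical sketch of how such a list could be made machine-readable follows; the tool names and statuses are invented for illustration, and the default-deny lookup is one possible design for nudging employees toward approved options first.

```python
# Hypothetical sketch of an approved AI tool list expressed as policy data.
# Tool names and statuses are invented for illustration.

AI_TOOL_POLICY = {
    "enterprise-chat-assistant": "approved",               # vetted, enterprise data controls
    "internal-image-generator":  "approved-no-client-data",
    "consumer-chatbot":          "restricted",             # prompts leave the enterprise
}

def check_tool(tool: str) -> str:
    # Default-deny: anything not yet vetted is treated as restricted,
    # which encourages employees to reach for approved options first.
    return AI_TOOL_POLICY.get(tool, "restricted")

for tool in ("enterprise-chat-assistant", "some-new-ai-app"):
    print(f"{tool}: {check_tool(tool)}")
```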

Author

  • Richard Owen

    Richard Owen leads the design of collaboration technology solutions for the Digital Workplace Solutions practice at Unisys. He has more than 30 years of experience as a director, advisor and innovator for digital, cloud and social collaboration systems such as Microsoft 365, Microsoft Teams, Google Workspace and Workplace by Facebook.
