The potential impact of AI on our lives and livelihoods is one of the most discussed and debated topics of our time. Since November 2022 and the launch of ChatGPT, the development of AI tools for business and personal use has only accelerated.
Companies are looking at the resulting wave of new and emerging AI services to determine what can help boost profits, for example by automating more arduous and routine tasks. Deloitte's 2023 State of Ethics and Trust in Technology report showed that almost three-quarters (74 percent) of companies had started testing generative AI technologies and around two-thirds (65 percent) were also using them internally.
Shadow AI
While leaders are looking at the official adoption of AI as part of their business operations, there remains the shadow use of AI in their organizations. This is the deployment of AI without formal authorization or supervision from management or IT departments. In essence, it is when employees independently integrate AI to boost productivity or streamline processes without adhering to established protocols or asking for permission first. According to a survey by the communications platform Fishbowl, this is the case 70 percent of the time.
Shadow AI is a problem because it means unauthorized technology is deployed without any controls in place. As such, it could pose security threats through potential data breaches, degrade the quality of work delivered, introduce inconsistencies in operations, and even violate industry regulations.
AI use policy
To avoid inadvertently encouraging shadow AI, company rules and procedures need to keep up with the rise of AI, and employees need to be educated about what is, and is not, permissible. Key to this is the development and communication of clear policies and guidelines concerning the use of AI within business operations. This starts with an AI use policy.
An AI use policy should help your business deploy AI technology safely, reliably, and appropriately, thereby reducing potential risks. Its purpose is to educate and direct your employees on the proper usage of AI within your company. But what is the best practice for such a policy?
Introduction and purpose
In the introduction and purpose sections of the AI use policy, it is beneficial to establish the overall context and objectives. Recognize that AI tools have the potential to transform business operations by automating straightforward tasks, enhancing productivity, enabling faster data processing and analysis, and facilitating more effective decision-making.
However, it is crucial to acknowledge that AI tools also pose certain risks and challenges, particularly concerning intellectual property, data security, and data protection.
When drafting this policy, consider how the use of AI aligns with the culture of the organization and its people. For example: "the company aims to employ AI in a manner that is human-centric, respects confidentiality and privacy, and upholds third-party rights."
Scope
The scope of the policy should be clearly defined to ensure comprehensive coverage. For instance: "this policy applies to all staff, including employees, consultants, and contractors, and encompasses all work-related tasks, such as data analysis, content generation, coding, research and development, and producing materials and documentation for company operations."
It may also be necessary to cover the use of AI tools on company-owned and company-provided equipment and devices, including any software with AI functionality.
Furthermore, it is important to reference any other relevant company policies that intersect with the AI use policy. These may include IT use and acceptable use policies, data and IT security policies, and data protection and records retention policies.
Personnel may be required to read, acknowledge, and sign the policy before they can use AI tools within the company, integrating the policy into the broader compliance framework. It may also be necessary to outline the consequences of non-compliance to emphasize its importance.
The policy could be supported and enforced through additional measures, such as training and user coaching, and through data loss prevention (DLP) controls that detect or prevent unauthorized AI tool usage.
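As a simple illustration of that last point, a detection control might scan outbound web-proxy logs for traffic to known AI services that are not on the approved list. The sketch below is a minimal example under assumed inputs: the domain lists, log format, and field order are hypothetical, and a real deployment would use the organization's own DLP or proxy vendor's rule engine.

```python
# Minimal sketch of a shadow-AI detection check over web-proxy logs.
# Domains and log format are hypothetical examples, not a recommended list.

APPROVED_AI_DOMAINS = {"chat.openai.com", "gemini.google.com"}  # pre-approved tools
KNOWN_AI_DOMAINS = APPROVED_AI_DOMAINS | {
    "claude.ai",           # example of a tool not (yet) approved
    "app.example-ai.com",  # hypothetical unapproved service
}

def flag_unapproved_ai_traffic(log_lines):
    """Yield (user, domain) for AI traffic to unapproved services.

    Assumes each log line looks like: '<timestamp> <user> <domain> <path>'.
    """
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        user, domain = parts[1], parts[2]
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            yield user, domain

if __name__ == "__main__":
    sample = [
        "2024-05-01T09:14:02 jsmith chat.openai.com /c/abc",  # approved, ignored
        "2024-05-01T09:15:40 jdoe claude.ai /new",            # unapproved, flagged
    ]
    for user, domain in flag_unapproved_ai_traffic(sample):
        print(f"Review needed: {user} accessed unapproved AI tool {domain}")
```

Detection-and-review of this kind tends to work better alongside coaching than outright blocking, since blocking alone can push usage further into the shadows.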
Approving AI tools
The policy should also include a section dedicated to approving AI tools, starting with a list of any pre-approved tools, such as OpenAI's ChatGPT or Google's Gemini.
It is worth noting that some commonly used AI tools may be built on these pre-approved tools; for instance, Microsoft Edge includes Copilot, which is powered by ChatGPT. Conversely, some commonly used browsers might have extensions that have not been pre-approved, for example, the Sider extension for Google Chrome.
For each pre-approved AI tool, outline any restrictions regarding which teams or functions within the company are authorized to use the tool, the specific business purposes for which it can and cannot be used, and any other limitations, guidance, or cautions, including where further guidance or approvals are required.
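One way to make these restrictions unambiguous is to keep the pre-approved tool register as structured data that can be published on the intranet and checked programmatically. The sketch below is illustrative only: the tool entry, team names, and purposes are hypothetical, not a recommended configuration.

```python
from dataclasses import dataclass, field

@dataclass
class ApprovedTool:
    """One entry in a pre-approved AI tool register (illustrative fields)."""
    name: str
    approved_teams: set        # teams authorized to use the tool
    permitted_purposes: set    # business purposes it may be used for
    cautions: list = field(default_factory=list)

REGISTER = [
    ApprovedTool(
        name="ChatGPT",
        approved_teams={"marketing", "sales"},
        permitted_purposes={"drafting content", "research"},
        cautions=[
            "No confidential or personal data in prompts",
            "Further approval required before external publication",
        ],
    ),
]

def is_use_permitted(tool_name, team, purpose):
    """Check a proposed use against the register; anything unlisted is denied."""
    for tool in REGISTER:
        if tool.name == tool_name:
            return team in tool.approved_teams and purpose in tool.permitted_purposes
    return False  # unapproved tools cannot be used at all

print(is_use_permitted("ChatGPT", "sales", "research"))      # True
print(is_use_permitted("ChatGPT", "engineering", "coding"))  # False
```

The default-deny return value mirrors the rule discussed next: if a tool is not on the register, it is not permitted.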
It is crucial to clarify whether the information provided is guidance or a rule. If it is a rule, state definitively that any AI tool that has not been approved cannot be used for the company's business or operations. Additionally, outline the process for approving new AI tools. This might include providing an email address or an intranet template form to guide requesters.
Consider setting out the relevant evaluation criteria in the policy. At a high level, the minimum standard should be that the AI tool is legally compliant, transparent, accountable, trustworthy, safe, secure, and ethical. If appropriate, include more granular criteria, or handle this as a separate standard operating procedure (SOP) or checklist for the approver team or organization.
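If the criteria are handled as a separate SOP or checklist, even a very simple structure keeps reviews consistent across tools. The sketch below merely encodes the minimum standard listed above; the question wording and the all-criteria-must-pass rule are assumptions for illustration, not a formal assessment method.

```python
# Illustrative SOP checklist encoding the policy's minimum standard.
MINIMUM_CRITERIA = {
    "legally_compliant": "Does the tool comply with applicable laws and regulations?",
    "transparent": "Is it clear how the tool processes and stores inputs?",
    "accountable": "Is there a named owner and vendor contact for issues?",
    "trustworthy": "Is the vendor reputable, with acceptable terms of service?",
    "safe": "Are there safeguards against harmful or misleading outputs?",
    "secure": "Does the tool meet the company's data security standards?",
    "ethical": "Is the intended use consistent with company values and policies?",
}

def evaluate_tool(answers):
    """Return (approved, failed_criteria); every criterion must pass."""
    failed = [name for name in MINIMUM_CRITERIA if not answers.get(name, False)]
    return (not failed, failed)

approved, failed = evaluate_tool({
    "legally_compliant": True, "transparent": True, "accountable": True,
    "trustworthy": True, "safe": True, "secure": False, "ethical": True,
})
print(approved, failed)  # False ['secure']
```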
The evaluation criteria should be tailored to the type of organization and the most likely usage scenarios. For example, the intended use of the AI tool can significantly influence the level of risk and the relative importance of the evaluation criteria.
Compare, for instance, a sales team using an AI tool to create content for a pitch document with software engineers using an AI tool for code generation and optimization. The type of AI tool should also be considered, such as whether it has controlled user access with a narrower or single-purpose use case or if it is a more open, generative tool with wider community access.
Data security
Data security and confidentiality are paramount. Reference the company's policies and standards, and consider whether the AI tool could re-use or re-generate the company's commercially sensitive information in a recognizable form.
Intellectual property is another critical aspect, encompassing both inputs and outputs. Be particularly wary of any assertions by the AI tool or vendor that they own IP rights in your inputs or outputs. Data protection and privacy should be rigorously ensured, referencing the company's own data privacy policies and standards.
Ensure the tool is legally compliant and conduct a Data Protection Impact Assessment (DPIA) if any personal data or Personally Identifiable Information (PII) is, or could be, processed in the AI tool.
Human element
The policy should also emphasize human-centric considerations. Determine if the AI tool has built-in human safety factors and ethical guidelines. It is worth noting that many major AI developers have committed to adhering to AI-specific codes of practice and testing regimes. Legal and regulatory compliance factors will vary depending on the nature of the company's business, and there may be AI-specific regulatory rules or guidance that need to be reviewed and followed. Additionally, assess any other compliance policies and governance procedures applicable to the AI tool and its proposed use.
Vendor evaluation is another critical aspect. Assess whether the vendor or tool is reputable, evaluating all providers involved, including the original developer if the vendor is a reseller or managed service provider. Review the relevant terms and conditions, terms of service, and privacy policy. Conduct a risk-benefit analysis to balance potential benefits and opportunities against the potential risks and impacts of misuse, including loss of control over the AI tool's outputs, misinformation, or deepfakes.
Finally, consider risk-rating the tool to drive an appropriate level of ongoing monitoring and review of its usage against industry developments, legal changes, and regulatory oversight.
Suggested dos and don'ts for input
When using AI tools, do not input any of the company's confidential or commercially sensitive information without proper assessment or guidance from the approver or legal team. Avoid entering the company's valuable information or intellectual property, such as proprietary source code. Ensure you have the right to use any third-party intellectual property, like images or copyrighted text, before inputting it; seek guidance if unsure.
Do not input any personal data or PII, whether it concerns customers or co-workers. Be mindful of not perpetuating bias, discrimination, or prejudice, as AI tools, particularly large language models, can inadvertently do so. Always use AI tools in compliance with the company's data security policies, including not sharing login credentials or passwords. Consider that prompts and content entered into external AI tools might re-surface unpredictably in future outputs.
Suggested dos and don'ts for output
Always ensure a "human-in-the-loop" at each significant stage of AI tool usage, maintaining overall human oversight. Verify the accuracy of AI outputs before wider use, sharing, or publishing, as AI-generated content can be prone to "hallucinations".
For AI use related to personal data, seek additional approvals and legal review or advice. Ensure that a human has the final decision in any use of AI that could impact living persons, checking compliance with GDPR and non-discrimination policies.
Recognize that your company may not own or control intellectual property in AI-generated content, and if this poses a problem, refrain from using AI tools for content creation. Clearly label AI-generated content, even for internal use.
AI technologies are now commonplace and rapidly evolving, so companies need to be proactive in updating their policies and educating their workforce. The policy should be reviewed and updated regularly, at least annually, to keep pace with the rapid developments in AI technology and evolving laws and regulations. Clear version control and ownership should be established to ensure accountability.
The responsible use of AI within an organization lets people reap its benefits while minimizing risks. Ultimately, an AI use policy not only safeguards the organization but also fosters a culture of responsible innovation, keeping people at the core of technological progress.