By Mike Loukides, VP of Emerging Tech at O’Reilly
Questions about artificial intelligence (AI) sentience – and its potential risks in the workplace – have been stirred by recent projects like OpenAI’s DALL-E, DeepMind’s Gato, and Google’s LaMDA.
It is difficult to tackle such issues when AI remains hard to define. Context is important, and an appropriate definition of “intelligence” must start with what we want the AI system to do. For example, intelligence for a search engine shouldn’t be the same as intelligence for an autonomous vehicle. Intelligence should therefore be specific to the application – and when embarking on a new project, staff must know exactly what they are trying to achieve.
With AI systems already in production at more than a quarter of enterprises, businesses must ensure that employees are upskilled to define and implement AI systems effectively, and understand how to manage these systems safely in the workplace. But what does that look like in practice?
AI is nothing without skilled human oversight
The range of AI applications is vast, and few will match the power of LaMDA and other large language models (LLMs) such as GPT-3 or OPT-175B. However, the story of LaMDA’s ‘human’ conversation highlights that organisations must be mindful of how they engage with AI systems. Conversations about what these systems can and cannot do must happen across the workforce before misinformation, fear, or scepticism takes hold. Beyond that, organisations must also invest in greater engagement, training and upskilling around AI – and this must be holistic.
Over the next five years, we can expect an explosion of specialised bots within the workplace; employees will be exposed to systems that can make decisions and use language in remarkable ways. However, not all employees will embrace this new world; the threat of being replaced by a machine looms large. For those whose roles may change significantly due to the implementation of automation, it will be vital to encourage the development of a growth mindset.
That means priming employees for AI upskilling by presenting the future as a positive challenge and showing how AI skills will support their career growth and success. Mindset will be a huge differentiator going forward, and companies that educate employees early and cultivate a positive AI culture will enjoy manifold benefits. These include identifying positive AI use cases early and clarifying how those implementations will benefit employees – for example, through less time spent on repetitive or mundane tasks.
The time saved on admin tasks can instead be used by employees to learn new skills and contribute to the business in new, innovative ways. AI can take on repetitive, administrative work such as reporting; it is then for the organisation to enable its employees to replace that work with more engaging and strategic activities. And when it comes to AI, it will not just be technical training that is required. Employees will also need new skills to identify business opportunities that harness the technology, and to take an active role in communicating these technologies’ benefits and risks. Either way, training will be integral.
A holistic culture of learning
As UKRI (UK Research & Innovation) highlights, “To make a success of data and AI, organisations need to look at the full AI project supply chain. This starts with identifying a business opportunity that can benefit from AI all the way through to the validation, implementation, testing and deployment. Once the product or service has been deployed, organisations have to consider longer-term adoption, maintenance, risks, governance.”
The training required to ensure organisations realise the benefits of AI must be holistic. At the same time, this learning must be of clear benefit to employees, setting them up for more rewarding careers and more meaningful work. That starts with building a culture of trust around AI: being clear about what it can and cannot do, what it should and should not do, and the essential role of human oversight and understanding in making AI viable. When it comes to AI, sentience is not the goal; delivering better outcomes is. That can only be achieved by organisations building holistic AI learning cultures, starting right now.