The world will never be the same after the events of 2023: AI crashed into the tech landscape, creating huge waves across society. Our appetite for this new technological revolution seems insatiable, with new AI headlines and new possibilities announced each day.
More recently, the automotive industry jumped on the bandwagon, with Volkswagen announcing at CES 2024 that it is integrating ChatGPT into its cars. With advancements in AI in the automotive industry, companies are exploring innovative applications to enhance vehicle automation, safety and user experience.
Whilst announcements such as Volkswagen's generated significant column inches, they barely scratch the surface when you consider the impact artificial intelligence can have on the automotive sector through autonomous mobility.
AI has been booming and its stock has never been higher, yet we've not seen this optimism translate into the autonomous vehicle sector. As we expect AI to continue on its current growth trajectory in 2024, will autonomous vehicles benefit from AI's meteoric rise or continue to be left in its shadow?
Why is there a disconnect between artificial intelligence and autonomous vehicles?
The gap between AI and autonomous vehicles is one of perception rather than any significant difference in how far the two technologies have developed. The stratospheric growth of AI over the past 18 months has principally been driven by large language models (LLMs) such as OpenAI's ChatGPT or Google's Bard.
Whilst LLMs undoubtedly demonstrate impressive capabilities, the remit they operate in allows for far more mistakes and errors than autonomous vehicles. Consequently, we've seen adoption skyrocket for LLMs, with end users tolerating the odd mistake here or there that simply, and quite rightly, wouldn't be accepted in an autonomous vehicle.
The challenges the autonomous vehicle sector is currently grappling with will, however, become equally important in the wider AI landscape as its use cases begin to expand. One of these big challenges revolves around data. An advanced driver assistance system (ADAS) or autonomous driving (AD) system relies on sensors (such as cameras and radar) to 'see' the world around it. The data these sensors collect is processed by machine learning to train an AI algorithm, which then makes decisions to control the car. However, handling, curating, annotating and refining the vast amounts of data needed to train and apply these algorithms is immensely difficult. As such, autonomous vehicles are currently fairly limited in their use cases.
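To make the curation problem concrete, here is a minimal, hypothetical sketch of the kind of filtering step that sits between raw sensor capture and model training. The `SensorFrame` record, the quality score and the threshold are all illustrative assumptions, not any particular manufacturer's pipeline; the point is simply that unlabelled or low-quality frames must be weeded out before training.

```python
from dataclasses import dataclass, field

@dataclass
class SensorFrame:
    # Hypothetical record of one captured frame from a vehicle sensor
    source: str                 # e.g. "front_camera", "radar"
    quality: float              # 0.0-1.0 score from an upstream quality check
    labels: list = field(default_factory=list)  # human or automated annotations

def curate(frames, min_quality=0.8):
    """Keep only frames that pass the quality threshold and carry annotations."""
    return [f for f in frames if f.quality >= min_quality and f.labels]

frames = [
    SensorFrame("front_camera", 0.95, ["pedestrian"]),
    SensorFrame("front_camera", 0.40, ["cyclist"]),  # too blurry: dropped
    SensorFrame("radar", 0.90, []),                  # unlabelled: dropped
]
train_set = curate(frames)  # only the first frame survives
```

Even this toy version shows why scale is the problem: a real fleet produces millions of such frames, and every filtering rule must itself be validated so that it does not silently skew the dataset.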
AI developers outside the autonomous mobility world are similarly drowning in data, and how they collate and curate datasets for training is equally crucial. The issue of encoded bias resulting from skewed, low-quality data is a big problem across sectors: bias against Black people has been found in hiring and loan algorithms and in art. As applications of AI continue to increase and reshape the world around us, it's critical that the data feeding these algorithms is correctly tagged and managed.
In other sectors, errors are more readily tolerated, even as bias causes harm. Consumers may not mind the odd mistake here and there when they enlist the help of ChatGPT, and may even find these lapses amusing, but this leniency won't last long. As reliance on new AI tools increases, and concern over their power grows, ensuring applications meet consumer expectations will be increasingly important. The pressure to close the gap between promise and performance is growing as AI moves from science fiction to reality.
Can humans and machines align?
Such questions carry into the important topic of AI alignment – a new focus in artificial intelligence. It’s a field of safety research that centres on aligning AI with human and societal values and looks to build a set of rules or principles which AI models can refer to when making decisions, so outcomes are in tune with human goals.
This concept of humans setting standards that AI must meet, rather than being dictated to by code, will be vital in shaping the future of both autonomous vehicles and AI as a whole. One of the reasons true self-driving cars are struggling to materialise is because there is no absolute truth with driving: driving is subjective and everyone will do it differently.
Navigating the complexity and subjectivity of driving means a new methodology is needed. Old tactics of training AI through observing human behaviour won’t work – instead, developers need to employ an outcome-based approach and first decide how they want a product to behave, then, how they will achieve this behaviour.
At the heart of this new way of working is an iterative approach. As an algorithm is developed it should be monitored and the evolving dataset shaped, to ensure it aligns with the predetermined product goals. Incremental progress may not grab as many headlines but it’s crucial in prioritising safety, winning consumer trust and marrying expectation with end results. And there are more immediate economic wins to be gained, too, as iterative processes can help manufacturers cut costs.
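One cycle of that iterative, outcome-based approach can be sketched as follows. The goal names, thresholds and scores below are entirely hypothetical placeholders, assumed for illustration: the developer first fixes the behavioural targets, measures the current build against them, and only then decides where the dataset needs reshaping.

```python
# Predetermined product goals: behaviour targets set before development
# (hypothetical metric names and thresholds)
goals = {"pedestrian_detection": 0.99, "lane_keeping": 0.95}

# Stand-in evaluation scores from the current model build
scores = {"pedestrian_detection": 0.97, "lane_keeping": 0.96}

def one_iteration(dataset, scores, goals):
    """One development cycle: check each goal against the current scores,
    then flag where the dataset needs more targeted data."""
    gaps = [g for g, target in goals.items() if scores[g] < target]
    for gap in gaps:
        dataset.append(f"collect_more:{gap}")  # placeholder for targeted collection
    return dataset, gaps

dataset, gaps = one_iteration([], scores, goals)
# Here only pedestrian detection falls short, so the next cycle
# concentrates data collection and annotation effort on that gap.
```

This is also where the cost saving comes from: effort is spent only on the specific behaviours that miss their targets, rather than on collecting ever more data indiscriminately.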
The futures of artificial intelligence and autonomous mobility are inextricably linked, and both face fundamental challenges and barriers to overcome. Tackling the issue of AI alignment must be at the top of the agenda for leaders and developers in the AI and automotive sectors if they are to realise the true potential of AI and autonomous mobility over the coming years.