Overcoming the Barriers of the Physical World: Enabling AI Advancements

The rapid progress of artificial intelligence (AI), built on breakthroughs in large-scale machine learning and natural language processing, has revolutionised how we live and work. With innovations like ChatGPT, we are witnessing the emergence of the kind of “intelligence” that was once confined to the realm of science fiction. However, a significant challenge remains: bridging the gap between technical brilliance and real-world implementation.

While AI has made remarkable advances in the virtual realm, the next crucial step, integrating AI-powered general-purpose robots into the physical environment, remains out of reach. But what exactly are the obstacles hindering this progress, and how can we address them?

Identifying the Barriers to AI Deployment in Real-World Settings

Several notable gaps separate the advanced digital capabilities achieved by AI developers from the practical deployment of AI in physical contexts. One prominent obstacle is energy efficiency. At its core, a robot can be seen as a self-propelled laptop. As anyone who uses a laptop on the go knows, even the most advanced models struggle to perform optimally for more than a few hours before requiring a recharge.

Energy consumption extends beyond the screen to every internal process, and robots add the further demand of physical movement. For safety reasons, tethered connections are impractical, so battery life needs to stretch significantly beyond the current average of around 90 minutes.
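To see why the 90-minute figure is so hard to escape, consider a rough back-of-envelope estimate. The sketch below uses Python purely for illustration; the battery capacity and power figures are assumptions, not measurements from any particular robot.

```python
# Illustrative runtime estimate: runtime = battery capacity / total power draw.
# All figures below are assumptions chosen for illustration only.
battery_capacity_wh = 500.0   # assumed on-board battery capacity (watt-hours)
compute_power_w = 150.0       # assumed draw of sensors and on-board compute (watts)
actuation_power_w = 200.0     # assumed average draw of motors and actuators (watts)

runtime_minutes = battery_capacity_wh / (compute_power_w + actuation_power_w) * 60
print(f"Estimated runtime: {runtime_minutes:.0f} minutes")  # roughly 86 minutes
```

Under these assumed figures, even doubling the battery only buys around three hours, which is why efficiency, not just capacity, is the lever that matters.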

Currently, the mechanics of robots and autonomous devices lack the energy efficiency needed to operate continuously for sustained periods; they require regular and prolonged charging to maintain peak performance. Although the first generation of robots is already used in industrial settings for manufacturing, these machines remain tethered to a power source. Existing general-purpose robots, such as Sanctuary AI’s Phoenix, a humanoid driven by the company’s Carbon AI control system, still exhibit clunky designs and high costs. Several iterations, potentially five to ten, will likely be needed to develop a genuinely independent model capable of performing tasks freely.

Closing the Gap: Bridging the Divide

To develop robots that function effectively in the physical world, it is essential to start with smaller, simpler applications that serve as stepping stones towards full AI integration. This is where cobots (collaborative robots) designed for specific tasks come into play.

Examples include self-driving wheelchairs for physically impaired individuals, robots that scale buildings to clean windows, and autonomous technology for complex tasks such as smoke diving.

Focusing on single-duty performance not only enhances energy efficiency but also ensures the highest standard of work can be achieved.

Integrating AI into the Physical Environment

Overcoming the energy efficiency challenge in robots is fundamentally tied to complexity. Navigating the real world demands extensive mental processing from humans, and conveying that knowledge to robots is difficult.

One potential solution lies in the use of sensors. 3D sensors, such as depth cameras, can capture the geometry and texture of physical objects. By analysing this data, AI algorithms can build a comprehensive understanding of the physical world, which is crucial for reasoning about spatial relationships and the movement of objects and people, and for enabling safe, efficient navigation and interaction. The development of AI-powered mapping and localisation systems, which use sensors and cameras to generate maps of the physical environment and track object movement, therefore becomes vital for creating genuinely autonomous robotic assistants.
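To make the idea concrete, the sketch below shows one way such a pipeline could begin: back-projecting a depth image into a 3D point cloud with a standard pinhole camera model, then marking observed cells in a simple occupancy grid. This is a minimal illustration rather than any specific robot’s system; the depth frame, camera intrinsics and grid parameters are all assumed values.

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into camera-frame 3D points
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with a valid depth reading

def update_occupancy_grid(grid, points, cell_size=0.05, origin=(0.0, 0.0)):
    """Mark grid cells that contain observed points as occupied.
    Rows index forward distance (z), columns index lateral offset (x)."""
    cols = ((points[:, 0] - origin[0]) / cell_size).astype(int)
    rows = ((points[:, 2] - origin[1]) / cell_size).astype(int)
    valid = (rows >= 0) & (rows < grid.shape[0]) & (cols >= 0) & (cols < grid.shape[1])
    grid[rows[valid], cols[valid]] = True
    return grid

# Example with a synthetic 480x640 depth frame and assumed camera intrinsics.
depth = np.full((480, 640), 2.0)   # pretend everything is 2 m away
fx = fy = 525.0                    # assumed focal lengths (pixels)
cx, cy = 319.5, 239.5              # assumed principal point
cloud = depth_to_points(depth, fx, fy, cx, cy)
grid = update_occupancy_grid(np.zeros((200, 200), dtype=bool), cloud)
print(cloud.shape, int(grid.sum()))
```

Real mapping and localisation (SLAM) systems add pose estimation, loop closure and probabilistic updates on top of this basic step, but the principle is the same: turning raw sensor readings into a spatial model the robot can plan against.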

Improving mechanical efficiency is another area of focus. Enhancing robotic movement through artificial muscles and joints can reduce the energy their operation requires, but humanoid technology that closely emulates human motion is still a considerable distance away.

A Nuanced Approach: Collaboration for Advancement

For the technology industry, the aspiration to create intelligent robots has persisted for decades. However, a more nuanced approach is now necessary. Instead of relying solely on overarching solutions from industry giants, an evolutionary methodology is called for. Specialised startup companies with expertise in specific areas can contribute individual components that address the diverse challenges faced by developers. Once these components are in place, collaboration among stakeholders can pave the way for creating highly efficient, functional, and affordable general-purpose robots.

In conclusion, while AI has significantly impacted our virtual experiences, its integration into the physical world requires innovative solutions to overcome existing barriers. By addressing energy efficiency, leveraging advanced sensors, enhancing mechanical systems, and fostering collaboration, we can realise the ultimate vision of efficient, practical, and accessible AI-powered general-purpose robots.

Jonas Angleflod is Group CEO at Theories Group, a startup accelerator on a mission to fund and build startups that use technology and AI to automate the monetisation of high-intent digital traffic.
