
As we scale artificial intelligence systems and embed them in processes globally, now is an essential time to take a deeper look at whether we are designing the models correctly. By doing so, we unlock their full potential and uncover how these models work, allowing us to improve their efficiency.
You’ve undoubtedly experienced AI that disappoints – a voice assistant failing in a noisy environment, or a recommendation system offering irrelevant suggestions. These moments highlight a deeper issue: current AI, for all its power, often struggles with digesting excessive amounts of data. This isn’t just about the occasional glitch; it points to a foundational fragility in how many of our most advanced AI systems are designed. Our pursuit of ever-larger models, driven by the “bigger is better” mantra, has often created powerful but opaque “black boxes.” These systems, while impressive in controlled environments, become prone to unpredictable errors and vulnerability when unleashed in the wild. On top of these challenges, there is an urgent need to develop next-generation energy-efficient AI systems.
To find solutions to these challenges, a re-examination of AI’s most basic structural components – its network motifs – is needed. An AI is a network of processing units, and within this network, motifs are the recurring small patterns of connections, known as micro-circuits. These recurring micro-circuits within an AI dictate how information flows and is processed, mimicking how networks are built in the natural world.
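One way to picture a motif is as a small signed adjacency matrix over a handful of nodes. The sketch below is purely illustrative – the article does not specify a representation – and assumes each connection is encoded as +1 (activation) or −1 (repression), with 0 meaning no connection:

```python
# A minimal sketch: a three-node feed-forward micro-circuit encoded as a
# signed adjacency matrix. Entry [i][j] = +1 means node i activates node j,
# -1 means it represses it, and 0 means there is no connection.
# (This encoding is a hypothetical illustration, not a prescribed format.)

# Node order: X -> Y -> Z, plus a direct X -> Z edge.
motif = [
    [0, +1, +1],  # X activates Y and Z
    [0,  0, +1],  # Y activates Z
    [0,  0,  0],  # Z has no outgoing connections
]

def edges(adj):
    """List the signed connections present in the micro-circuit."""
    return [(i, j, adj[i][j])
            for i in range(len(adj))
            for j in range(len(adj[i]))
            if adj[i][j] != 0]

print(edges(motif))  # the three signed edges of this motif
```

Scanning a larger network for repeated small patterns like this one is what it means, concretely, to identify its motifs.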
By understanding how biological ecosystems build intricate, intelligent, and resilient networks, we can find crucial clues for designing artificial ones. Our work is to decode the dynamics of cellular regulatory circuits into fundamental equations that seek causality within the data, thereby inspiring a new approach to AI architecture. Intentionally designing these micro-structures means AI can be engineered not only to be more robust but also inherently more explainable and energy efficient.
There are two fundamental motif types:
Coherent loops
Characterised by an even number of negative connections, coherent loops make AI relentlessly pursue high-gradient regions during learning. While seemingly efficient, this leads to fragility; these networks are easily confused by noise, sacrificing stability and accuracy, making them highly susceptible to real-world imperfections.
Incoherent loops
The true breakthrough lies with incoherent loops, distinguished by an odd number of negative connections. While such loops, in which activation and repression act on the same target, are abundant in biological systems, their function has remained intriguing. Surprisingly, we found that networks dominated by these loops learn far more stably and handle noisy, real-world data significantly better. They possess superior representational capacity and numerical stability, avoiding the pitfalls of chasing high-gradient regions and acting as nature’s blueprint for robust learning.
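The distinction between the two loop types comes down to a simple parity rule on the signs of the connections. The sketch below is a hedged illustration of that rule only (the function name and sign encoding are assumptions, not part of any specific library):

```python
# A sketch of the sign-parity rule: a loop with an even number of negative
# (repressive) connections is coherent; an odd number makes it incoherent.
# Connections are encoded as +1 for activation and -1 for repression.

def classify_loop(signs):
    """Classify a loop from the signs of its connections."""
    negatives = sum(1 for s in signs if s < 0)
    return "coherent" if negatives % 2 == 0 else "incoherent"

# X activates Y, Y activates Z, and X activates Z directly: zero negatives.
print(classify_loop([+1, +1, +1]))   # coherent

# X activates Y, but Y represses Z while X activates Z: one negative, so
# activation and repression converge on the same target.
print(classify_loop([+1, -1, +1]))   # incoherent
```

The second case is exactly the situation described above, where activation and repression act on the same target – the hallmark of an incoherent loop.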
Leveraging these structural insights
These findings allow us to move beyond simply scaling up AI models and instead design systems that are smarter by construction. By engineering the underlying network motifs, we can build AI that is more efficient, adaptable, and deployable across everything from data centres to edge devices. Crucially, this approach makes AI explainable at the level of process, not just outcomes, by linking behaviour directly to structure. This transparency is essential for building trust in sensitive applications, where understanding how decisions are made matters as much as the results themselves.
Thus, by using nature, and the brain in particular, as inspiration, we find new ways to build robust AI. Ultimately, this approach to examining how AI makes decisions and behaves internally directs us toward more reliable, predictable, and safe systems. For users, this could enable autonomous vehicles to navigate complex environments with greater confidence, provide more accurate medical diagnostic tools, and allow personal AI assistants to seamlessly integrate into and enhance our lives. Drawing wisdom from living systems, such as the human brain’s intricate microarchitecture, reinforces the idea that applying similar principles to AI is a critical next step. The future of AI is not just about what it can do, but how it does it. By prioritising sophisticated structural design at the foundational level, we may truly revolutionise the way we build robust, explainable, and trustworthy AI systems that are not just powerful but profoundly intelligent in the face of our wonderfully noisy world.
