
How Explainable AI Accelerates Drug Discovery


Bridging the gap between designing a theoretical drug and a clinical trial remains one of the most challenging milestones for any pharmaceutical or biotech company.

Using explainable artificial intelligence (AI) is a massive competitive advantage in highly regulated industries, such as the pharmaceutical industry. It has huge potential to accelerate disease understanding and drug design by revealing the biological mechanisms that drive drug activity, stability, safety, and delivery. These are insights that traditional AI – often referred to as black-box AI – is simply not able to reveal.

When certain drug properties only appear in later steps of the design process (bioavailability, stability, immunogenicity, off-target effects, distribution, toxicity, etc.), it’s crucial to have an understanding of how and why these properties arise.

Imagine if these insights and understandings could be revealed in silico, an environment that is much less resource-intensive than in vitro and in vivo environments. If you’re imagining this, then you’re imagining a future where we are creating targeted, innovative drugs at much-reduced costs and getting them to market to the patients who need them much faster.

But if you’re still applying traditional, black-box AI to drug design, then you’re mired in the traditional drug discovery process.

The challenge of traditional drug discovery

Traditional drug discovery methods have long been criticized for being slow, expensive, and cumbersome. Researchers spend years experimenting with thousands or even millions of lead candidates, hoping to find one that exhibits a desired therapeutic effect. This laborious process often leads to a low success rate, making drug discovery a challenging and expensive endeavor.

This is bad news because every pharmaceutical and biotech company is looking towards the quickly approaching “patent cliff” in 2030 when several drug patents will expire. To protect their intellectual property (IP) and survive as an organization, pharmaceutical and biotech companies must bolster their drug pipelines by working on new products that strategically carve out their IP and market space. So with the slow and expensive nature of traditional methods impacting the timeline for creating new drugs, there is a need for more efficient approaches.

Explainable AI vs black-box AI in drug discovery

Enter explainable AI, which offers a transformative solution to the obstacles faced by traditional drug discovery approaches. Explainable AI algorithms not only provide accurate predictions but also offer insights into the underlying reasoning behind those predictions. This level of transparency and interpretability allows researchers to understand and trust the predictions made by AI models.
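To make the contrast concrete, here is a minimal sketch of a "white-box" model: a plain least-squares fit whose learned formula can be read term by term. The descriptor names and data are hypothetical and purely illustrative; this is not Abzu's QLattice or any production drug-design model.

```python
# Minimal illustration of an interpretable ("white-box") model:
# ordinary least squares for y ≈ w1*x1 + w2*x2, solved exactly via
# the normal equations and Cramer's rule. The result is an explicit
# formula a researcher can inspect, not an opaque score.

def ols_two_features(x1, x2, y):
    """Fit y ≈ w1*x1 + w2*x2 (no intercept) by least squares."""
    s11 = sum(a * a for a in x1)
    s22 = sum(b * b for b in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    sy1 = sum(a * c for a, c in zip(x1, y))
    sy2 = sum(b * c for b, c in zip(x2, y))
    det = s11 * s22 - s12 * s12  # assumes x1 and x2 are not collinear
    return (sy1 * s22 - sy2 * s12) / det, (s11 * sy2 - s12 * sy1) / det

# Hypothetical descriptors: x1 = lipophilicity, x2 = a size penalty.
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [2.0, 1.0, 4.0, 3.0]
y = [2.0 * a - 0.5 * b for a, b in zip(x1, x2)]  # the "true" mechanism

w1, w2 = ols_two_features(x1, x2, y)
print(f"activity ≈ {w1:.2f} * lipophilicity - {abs(w2):.2f} * size_penalty")
```

Because the fitted model is an explicit equation, the recovered weights can be checked against known chemistry and challenged by domain experts, which is exactly what a black-box score does not allow.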

This isn’t to be confused with generative AI. Generative AI, ironically, is poor at generating genuinely new ideas. It’s called “generative” AI because it generates outputs, but it doesn’t produce possibilities beyond what its training data already contained.

These outcomes might seem new because users weren’t aware of them before, or because they are novel combinations of existing data the AI was trained on. But it is becoming increasingly clear to users that generative AI cannot come up with new ideas, at least not in the innovative or scientific sense, and that it is all too likely to present unproven or unverified recombinations of data as fact.

Moreover, generative AI is a black box: there is no visibility into how a prediction is made. So even if a prediction is a “correct answer,” it’s like flipping to the back of your mathematics book. You have an outcome, but without being able to see the work, you have no understanding of how to solve the problem. Therefore, you also have no real trust that the prediction is right and no possibility to test its validity.

Would you be willing to build a multi-million euro rocket to go to space that depended on such a prediction? I didn’t think so. And that’s precisely why black-box AI fails in drug discovery.

Accelerating the identification of lead candidates

One of the critical stages in drug discovery is the identification of lead candidates, which have the potential to become effective drugs. As with many things, more is not always better: an infinite number of lead candidates does not mean an infinite number of drugs, let alone an infinite number of good drugs.

Focusing on candidates with the highest likelihood of success based on thoughtful, early-stage work is the best way to balance capacity and infrastructure constraints. If you can understand, confirm, and learn while you design, optimize, and experiment, then you have the insights to confidently identify and progress the best leads.

Explainable AI enables researchers to not only rapidly analyze data, including chemical modifications, genomic information, and experimental data, but also understand the relationships within that data. Computer simulations are comparatively much less expensive and resource-intensive than in vitro and in vivo experiments, which is why moving this work to in silico models saves valuable time and resources. Adding understanding to such predictions is like icing on the proverbial cake.
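As a toy illustration of such an in silico screen, the snippet below ranks hypothetical candidates with a fully transparent scoring formula. The formula, candidate names, and numbers are invented for the example; a real screen would use validated models and experimentally derived descriptors.

```python
# Toy in-silico screen: rank candidates with a transparent formula.
# Because every term is visible, a chemist can question or adjust it
# before committing lab resources to the top-ranked candidates.

def activity_score(lipophilicity, size_penalty):
    # Hypothetical, human-readable scoring rule (not a validated model).
    return 2.0 * lipophilicity - 0.5 * size_penalty

candidates = {
    "cand_A": (1.2, 3.0),
    "cand_B": (2.5, 1.0),
    "cand_C": (0.8, 4.0),
}

# Sort candidates from highest to lowest predicted activity.
ranked = sorted(candidates, key=lambda c: activity_score(*candidates[c]),
                reverse=True)
print(ranked)
```

The point of the sketch is the workflow, not the numbers: an explainable score lets researchers see why a candidate ranks highly and decide whether that reasoning is chemically plausible before progressing it.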

Conclusion

Explainable AI is playing a key role in ushering in a new era of precision medicine. In RNA therapeutics specifically, explainable AI models are identifying molecules with high activity and minimal off-target effects. And here at Abzu, we’re getting closer to in silico designs that work in humans by identifying likely late-stage failures early and designing drugs that can make it to the finish line.

It’s essential to remember that AI serves as a tool, and like any tool, there are numerous variations – even within the same category. Successfully utilizing AI involves understanding your specific scenario and intended results.

If your goal is to speed up the creation of active and safe drugs (e.g., identifying the actual mechanisms that drive drug properties in vitro contributes to fewer, but more exceptional, lead candidates), then your only choice is explainable AI. If your goal is to reduce the resources required for in vivo testing (e.g., understanding the actual mechanisms that drive drug properties contributes to fewer and more successful in vivo outcomes), then your only choice is explainable AI.

Simply put: If your goal is to understand, then your only choice is explainable AI.

Author

  • Casper Wilstrup

Casper is the inventor of the QLattice® symbolic AI algorithm. With over 20 years of experience building large-scale systems for data processing, analysis, and AI, Casper is passionate about the impact of AI on science and research, and the intersection of AI with philosophy and ethics. He is a distinguished speaker, having presented at events worldwide on topics such as the ethics of AI, explainable AI, and AI regulation. Casper is also one of the only people in the world who is fluent in reading ancient Sumerian.
