
In this interview, MD Akram Hussein, a seasoned leader in business analysis and product management, shares his perspective on how artificial intelligence is reshaping the future of regulated industries. With over nine years of experience driving digital transformations across healthcare, finance, and technology, Hussein has led enterprise-wide implementations of EPM, ERP, and EHR systems on platforms including Anaplan, Workday, SAP, and Epic. With an MBA and Advanced Certified Scrum Product Owner designation, he has a proven track record of defining product strategy through design thinking and applying Agile methodologies to optimize workflows. Here, Hussein discusses the expanding responsibilities of product managers in the era of AI, the most promising applications in life sciences, and the steps organizations must take to turn AI initiatives into measurable value.
From your perspective, how is AI reshaping the role of product management, particularly in industries with complex systems like healthcare and finance?
I think the product role is evolving rapidly with the emergence of AI and will continue to do so for the foreseeable future. Traditionally, product roles, especially in the regulated space, have focused on managing features by balancing technical, regulatory, and operational requirements, but with AI, the role is expanding to cover experiment design, model metrics, ethics, and governance. Product management professionals are also leveling up their cross-functional expertise as they work with an additional set of stakeholders, such as ML engineers and data scientists. More than ever, I believe the focus is on designing ethical AI solutions that solve critical clinical, financial, and supply chain challenges in the regulated space, so AI product leaders need to ensure they can deliver innovative models through human-centric solutions.
What are some of the most promising applications of AI in life sciences today, and where do you see the biggest gaps between hype and actual adoption?
What I find quite promising is that some AI models can create hyper-personalized treatments based on patients’ metabolisms or their reactions to previously prescribed plans. AI models have also been developed to guide production of AI-discovered drugs, some of which are now progressing through clinical trials more quickly than usual. We are also seeing startups as well as seasoned giants in this space deploying models in radiology and other areas of medical imaging to help detect conditions that humans cannot always spot in scans and images. Other areas of healthcare, such as administration, registration, scheduling, and billing, benefit from automated documentation, coding, and virtual chatbots that assist patients. With that being said, I believe there are still gaps, particularly due to limited clinical approvals and regulatory uncertainty. Another issue seems to be workforce resistance to change, especially when it comes to AI: trusting the data and processes versus what clinicians are traditionally trained to rely on or what existing SOPs prescribe.
You have extensive experience using design thinking in product roadmaps. How can design thinking frameworks help organizations translate AI potential into practical, user-centered solutions?
I am starting to see a lot of product professionals drift away from focusing on the user and instead focus on how to deploy the ‘perfect’ AI model. I don’t think that has much value, especially with design thinking, because you lose your focus on user needs. Instead of concentrating solely on the innovation and the capabilities of the AI model, the primary focus should still be on ‘what real human problems are we trying to solve, and what do users actually need?’. With design thinking and AI, we need to create a compelling journey map and design artifact that can be understood by all the relevant stakeholders – compliance, developers, business product owners, ML engineers, and data scientists – and this can then help make regulatory workflows more efficient and tie into an auditable process. We should work with prototypes that can test user trust before building the models entirely, which also helps prevent scope creep. I think the overall goal should always be to prioritize the users and determine whether we can deliver value based on their needs, rather than solely on the sophistication of the AI model.
For life sciences organizations that want to adopt AI, what are the first three steps they should take to ensure the implementation creates measurable value rather than becoming a “proof-of-concept trap”?
The first step is to align your primary business goals and strategy with the initiative for implementing AI. You don’t necessarily need to focus on implementing the latest and greatest AI models, but rather on identifying what would create a positive impact and contribute to long-term ROI. Involve the right stakeholders early in the process – legal, compliance, operations, and technical teams – to identify gaps, and establish relevant baseline metrics and KPIs. Second, start with a pilot initiative in an area where it would create the most value and where business success can be measured before scaling up. You can assign cross-functional product owners and hold regular checkpoints to ensure you are on track. Finally, focus on having reasonable, production-style data sets to build and test your models. I have often seen how insufficient data or irrelevant mock-up data can create confusion, or even an illusion of progress among stakeholders, which can derail the initiative. I also believe you should have clear exit criteria so that you know when to stop or pivot if things are not headed in the right direction.
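The checkpoint-and-exit-rules idea above can be sketched in a few lines of code. This is a hypothetical illustration only: the KPI names, baseline values, and uplift thresholds are invented for the example, not drawn from any specific implementation.

```python
# Illustrative sketch: compare a pilot's KPIs against pre-agreed baselines and
# exit thresholds, then decide whether to scale, pivot, or stop. All metric
# names and numbers below are hypothetical.
from dataclasses import dataclass

@dataclass
class KpiCheck:
    name: str
    baseline: float      # value measured before the pilot started
    current: float       # value measured at this checkpoint
    min_uplift: float    # relative improvement required to clear the bar

    def passed(self) -> bool:
        # Assumes "higher is better" KPIs for simplicity.
        if self.baseline == 0:
            return self.current > 0
        return (self.current - self.baseline) / self.baseline >= self.min_uplift

def pilot_decision(checks: list[KpiCheck]) -> str:
    """Return 'scale', 'pivot', or 'stop' based on how many KPIs cleared their bar."""
    passed = sum(c.passed() for c in checks)
    if passed == len(checks):
        return "scale"
    if passed > 0:
        return "pivot"
    return "stop"

# Hypothetical checkpoint for a billing-automation pilot:
checks = [
    KpiCheck("claims processed per hour", baseline=40.0, current=52.0, min_uplift=0.10),
    KpiCheck("first-pass approval rate", baseline=0.70, current=0.72, min_uplift=0.10),
]
```

The point is not the arithmetic but the governance: the thresholds and the stop/pivot rule are agreed with stakeholders before the pilot starts, so the decision at each checkpoint is mechanical rather than negotiated after the fact.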
What are the most common obstacles you’ve seen in enterprise-wide digital transformations when integrating AI, and how can product managers anticipate and address them early on?
I think integrating AI with complex legacy systems can become quite challenging, and if there’s a lack of ownership of those systems, it can get even worse. Leadership or organizational changes up the chain can also create barriers if there’s no shared vision. Where PMs can excel in such scenarios is by holding discussions early in the process and involving the right stakeholders. They can design pilot increments with some human oversight and assess data readiness early on, or at least secure some level of production-quality data to test against. It’s also crucial to keep ML technical debt in check and to clearly distinguish between model success metrics and business success metrics.
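The distinction between model metrics and business metrics can be made concrete with a small sketch. The metric names and values here are hypothetical, chosen only to show the failure mode: a model that improves offline while the business outcome it was meant to move stays flat.

```python
# Hypothetical illustration: track the model's offline metric and the business
# KPI separately, and flag the case where only the model metric moved.
model_metrics = {
    "auroc_previous": 0.88,   # offline model performance, last release
    "auroc_current": 0.91,    # offline model performance, this release
}
business_metrics = {
    "avg_days_in_ar_previous": 42.0,  # e.g. average days in accounts receivable
    "avg_days_in_ar_current": 42.0,   # unchanged despite the better model
}

model_improved = model_metrics["auroc_current"] > model_metrics["auroc_previous"]
# Lower is better for this business KPI:
business_improved = (business_metrics["avg_days_in_ar_current"]
                     < business_metrics["avg_days_in_ar_previous"])

if model_improved and not business_improved:
    status = "model metric up, business KPI flat: investigate before scaling"
else:
    status = "ok"
```

Reporting both numbers side by side at every checkpoint keeps teams from declaring success on the model metric alone.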
In your work with ERP, EPM, and EHR systems, how can AI be applied to optimize workflows, reduce inefficiencies, and improve decision-making? Could you share a practical example?
AI can definitely help standardize and automate redundant processes, starting with documentation. Pre-defined checkpoints can usually be automated, especially in a regulated setting. It can also help detect errors early in the systems to reduce risk, and it can generate reports for senior leadership faster to support decision-making. One example I can think of is using AI capabilities during the testing cycle to have AI agents handle some or all of the test automation. This is gradually starting to be discussed in larger enterprises, similar to the concept of ‘vibe coding’ we see today.
How does Agile methodology complement AI adoption in regulated industries like healthcare and life sciences, and what adjustments need to be made for compliance and safety concerns?
Agile’s concept of iteration helps bring everything together, especially when deploying in high-stakes, regulated environments. You can build in increments and test as you go, and if there are issues with the model, it’s easier to fix them early without derailing progress long-term. One approach that can work is a hybrid delivery model: stay Agile for discovery and model iteration, but add a gated, documentation-heavy validation phase before moving to production to check for compliance, bias, safety, and so on. Demos should still be held consistently to keep things transparent and to maintain everyone’s alignment while staying audit-ready.
Looking ahead, what skills will product managers need to develop to successfully lead AI-driven projects, especially in highly specialized fields like life sciences?
Depending on the setting and domain, PMs should be comfortable with basic to intermediate-level concepts in GenAI, LLMs, model training and evaluation, data science, and prompt engineering for today’s co-pilots, and also be comfortable with data governance and regulatory concepts. The world of AI is rapidly evolving, and I personally believe every traditional PM will, to some extent, need to skill up and be ready to become an AI PM in the near future. As I said earlier, beyond AI/ML literacy, PMs should also be able to lead cross-functional teams that now include ML engineers, data scientists, and others, ensuring everyone understands the ‘why’ behind the models they are building. In regulated settings like healthcare, PMs should not just focus on technical AI rigor and fluency but also bring proper regulatory insight to ensure ethical model deployment.