Future of AI

5 Lessons from 10 Years of Building AI in Healthcare

By Elad Lachmanovich, Chief Technology and Product Officer, TytoCare

After a decade of developing and deploying AI in healthcare, I’ve learned that meaningful progress depends on far more than high-performing algorithms. It requires a deep understanding of how data systems, regulatory expectations, clinical expertise, and real-world workflows intersect. AI in healthcare forces you to navigate ambiguity, communicate across disciplines, and design for environments where the stakes are high and the margins for error are small. These lessons reflect what it truly takes to move AI from theoretical potential to practical clinical impact.  

  1. Strong data infrastructure determines success.
    Every effective healthcare AI model begins with high-quality data, but collecting, annotating, and filtering that data is not a task with a clear end point. It is an ongoing, iterative process that evolves alongside the model itself. Data pipelines need continuous refinement, just like any software system. When teams invest early in robust data foundations, they reduce long-term costs, prevent avoidable bottlenecks, and ensure the model can be improved reliably over time. Every choice about labeling protocols, quality control, and feature definition shapes the model’s diagnostic precision and its ability to scale. 

Only after an initial deployment can you see where the gaps are and what needs restructuring. In fact, many teams discover that their biggest breakthroughs come not from changing the architecture of the model, but from improving data quality, restructuring annotation guidelines, or tightening the definition of what counts as “ground truth” in the first place. In healthcare AI, data infrastructure isn’t a supporting function. It is the backbone of the entire development process. 

  2. Regulation evolves as fast as technology.
    Regulation in healthcare is not static. Standards shift, new evidence changes expectations, and each clinical domain has its own regulatory culture. What aligned with requirements two years ago may no longer be acceptable. Teams must plan for continuous adaptation, recognizing that regulatory pathways are shaped by evolving science, patient safety considerations, and real-world outcomes. 

Treating regulatory engagement as a long-term partnership with authorities creates more resilient products and smoother review cycles. Regulators today are also learning in real time, especially as AI introduces new questions about transparency, validation, bias, and safety. This means companies must be prepared to explain their models at a deeper level than in the past, including how they were trained, how they generalize across different populations, and how they perform in edge cases. Mature teams question assumptions, track changes proactively, and build flexible systems that can be updated without disruption. Organizations that succeed in this field understand that regulation is a living framework, not a fixed checklist. 

  3. Clinical understanding must extend beyond clinicians.
    Healthcare AI can’t be built in isolation from medicine. Engineers, data scientists, product managers, and regulatory specialists all need a strong grasp of clinical context. Understanding how diseases develop, what symptoms clinicians prioritize, how chronic conditions evolve, and how different types of errors impact patient care is essential for designing responsible AI. 

When everyone on the team is aligned on clinical reasoning, decision-making becomes clearer, communication improves, and models are built with a more accurate representation of real patient needs. This shared understanding helps ensure that AI systems support clinicians rather than disrupt their workflow. True progress comes from tight collaboration and a shared vocabulary between technical and clinical teams. 

  4. The first domain is the hardest, but also the most formative.
    Developing the first AI model in a new medical domain is demanding. Data is often sparse. Labels can be inconsistent. Definitions of clinical success may not yet be standardized. Teams must determine how to evaluate diagnostic accuracy, how to demonstrate clinical relevance, and how to validate outcomes ethically. 

Because we were among the first to bring a solution of this kind to the market, the work carried both the excitement of breaking new ground and the challenge of building without established benchmarks. There were no playbooks, few precedents, and limited agreement on how digital diagnostics should be evaluated. This early phase requires persistence, experimentation, and constant communication between technologists and clinicians. 

But once the first model gains approval, scaling becomes significantly easier. Data grows more rapidly, regulatory expectations become clearer, and organizations gain the confidence to iterate faster. The expertise developed during this stage becomes a strategic advantage, shaping future models, training processes, and evaluation strategies. This foundational work sets the stage for every subsequent advancement, creating patterns and structures that accelerate progress across the entire domain. 

  5. Clinical consensus is rare, and that shapes how AI must be built.
    Unlike other fields where model evaluation is more straightforward, clinical AI must confront the reality that even expert physicians often disagree. There is rarely a single correct interpretation of a case. As a result, AI models cannot rely on binary ground truths. They must be trained and evaluated through a probabilistic lens, emphasizing calibration and confidence scoring rather than definitive outputs. 

Development should involve multiple clinicians with diverse backgrounds to ensure that models reflect a broad range of real-world perspectives and reduce bias. This diversity isn’t just helpful; it is necessary. The aim is not to replace clinical judgment. It is to provide well-calibrated insights that support human decision-making, especially in situations where ambiguity is unavoidable. Embracing clinical variability is essential for building AI that clinicians can trust.
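Two of the ideas above can be made concrete in a short, generic sketch (not the author's method or any specific product's code; the label names and sample values are invented for illustration): deriving a probabilistic "soft label" from multiple, possibly disagreeing clinician reads, and measuring calibration with a simple expected calibration error (ECE):

```python
# Generic illustration: (1) soft labels from multiple clinician reads,
# (2) a basic expected-calibration-error check over confidence bins.
from collections import Counter

def soft_label(clinician_labels, classes):
    """Turn possibly-disagreeing clinician reads into a probability target."""
    counts = Counter(clinician_labels)
    return [counts[c] / len(clinician_labels) for c in classes]

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average |accuracy - confidence| over equal-width confidence bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    ece, n = 0.0, len(confidences)
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            accuracy = sum(ok for _, ok in b) / len(b)
            ece += len(b) / n * abs(accuracy - avg_conf)
    return ece

# Two of three clinicians read "wheeze": the target is 2/3, not a hard 1.
target = soft_label(["wheeze", "wheeze", "normal"], ["normal", "wheeze"])
print(target)
# A well-calibrated model's 80%-confident predictions are right ~80% of the time.
ece = expected_calibration_error([0.9, 0.8, 0.6, 0.55], [True, True, False, True])
print(ece)
```

The point of the soft target is exactly the one in the text: when experts disagree, the model should learn the distribution of expert opinion, and its confidence scores should be auditable against real outcomes rather than presented as definitive answers.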

Building AI for healthcare means designing systems that respect the complexity of medicine and the people who depend on it. Progress requires patience, collaboration, scientific rigor, and a willingness to revise assumptions as new information emerges. It also requires acknowledging that adoption takes time. Models must earn trust through transparency, reliability, and ongoing validation in real clinical environments. 

The work is demanding, but when AI is developed responsibly and with a deep understanding of clinical reality, it can strengthen decision-making, improve consistency of care, and expand access for populations who need it most. Ultimately, the goal is not to automate medicine but to enhance it, giving clinicians better tools and patients better outcomes. That long-term impact is what makes every challenge worthwhile and what continues to drive innovation in this rapidly evolving field. 

Author’s bio: 

Elad Lachmanovich is the Chief Technology and Product Officer at TytoCare, which is transforming primary care by bringing the doctor’s visit into the home with remote physical exams that make primary care affordable, always-on, and accessible for all. TytoCare partners with healthcare insurers and providers to expand virtual access to primary care through a handheld exam kit that connects users with a clinician for a medical exam and telehealth visit, wherever they are.

In the decade since the company’s founding, Mr. Lachmanovich has led product development as TytoCare has grown into a major player in the telehealth market. Under his leadership, the company has built partnerships with over 250 major healthcare organizations worldwide, become synonymous with high-quality telehealth examinations and monitoring, and established a track record of improving access to healthcare with stronger telehealth adoption and results than other solutions on the market. Recently, Mr. Lachmanovich led the creation of the world’s first FDA-approved AI Lung Suite, included in the TIME 2025 Best Inventions List.

Mr. Lachmanovich has over 15 years of product development experience in the healthcare sector, having previously worked at companies such as Covidien and Superdimension (acquired by Covidien). Earlier in his career, he led development in an elite IDF technology unit.

 
