
5 Lessons from 10 Years of Building AI in Healthcare

By Elad Lachmanovich, Chief Technology and Product Officer, TytoCare

After a decade of developing and deploying AI in healthcare, I've learned that meaningful progress depends on far more than high-performing algorithms. It requires a deep understanding of how data systems, regulatory expectations, clinical expertise, and real-world workflows intersect. AI in healthcare forces you to navigate ambiguity, communicate across disciplines, and design for environments where the stakes are high and the margins for error are small. These lessons reflect what it truly takes to move AI from theoretical potential to practical clinical impact.

  1. Strong data infrastructure determines success.
    Every effective healthcare AI model begins with high-quality data, but collecting, annotating, and filtering that data is not a task with a clear end point. It is an ongoing, iterative process that evolves alongside the model itself. Data pipelines need continuous refinement, just like any software system. When teams invest early in robust data foundations, they reduce long-term costs, prevent avoidable bottlenecks, and ensure the model can be improved reliably over time. Every choice about labeling protocols, quality control, and feature definition shapes the model's diagnostic precision and its ability to scale.

Only afterย an initialย deployment can you see where the gaps are and what needs restructuring. In fact, many teams discover that their biggest breakthroughs come not from changing the architecture of the model, but from improving data quality, restructuring annotation guidelines, or tightening the definition of what counts as โ€œground truthโ€ in the first place. In healthcare AI, data infrastructureย isnโ€™tย a supporting function. It is the backbone of the entire development process.ย 

  2. Regulation evolves as fast as technology.
    Regulation in healthcare is not static. Standards shift, new evidence changes expectations, and each clinical domain has its own regulatory culture. What aligned with requirements two years ago may no longer be acceptable. Teams must plan for continuous adaptation, recognizing that regulatory pathways are shaped by evolving science, patient safety considerations, and real-world outcomes.

Treating regulatory engagement as a long-term partnership with authorities creates more resilient products and smoother review cycles. Regulators today are also learning in real time, especially as AI introduces new questions about transparency, validation, bias, and safety. This means companies must be prepared to explain their models at a deeper level than in the past, including how they were trained, how they generalize across different populations, and how they perform in edge cases. Mature teams question assumptions, track changes proactively, and build flexible systems that can be updated without disruption. Organizations that succeed in this field understand that regulation is a living framework, not a fixed checklist.
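To give a flavor of what "generalizing across populations" looks like as evidence, the sketch below stratifies basic performance metrics by population subgroup. The records, subgroup names, and numbers are invented for the example and are not tied to any particular model or regulatory submission.

```python
# Illustrative sketch: break model performance down by subgroup, the kind of
# stratified evidence reviewers increasingly expect. Records are synthetic.
from collections import defaultdict

# Hypothetical per-case records: (subgroup, true label, model prediction).
records = [
    ("adult",     1, 1), ("adult",     0, 0), ("adult",     1, 0), ("adult",     0, 0),
    ("pediatric", 1, 1), ("pediatric", 1, 1), ("pediatric", 0, 1), ("pediatric", 0, 0),
]

by_group = defaultdict(list)
for group, truth, pred in records:
    by_group[group].append((truth, pred))

for group, cases in by_group.items():
    tp = sum(1 for t, p in cases if t == 1 and p == 1)
    fn = sum(1 for t, p in cases if t == 1 and p == 0)
    tn = sum(1 for t, p in cases if t == 0 and p == 0)
    fp = sum(1 for t, p in cases if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    print(f"{group}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, n={len(cases)}")
```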

  3. Clinical understanding must extend beyond clinicians.
    Healthcare AI can't be built in isolation from medicine. Engineers, data scientists, product managers, and regulatory specialists all need a strong grasp of clinical context. Understanding how diseases develop, what symptoms clinicians prioritize, how chronic conditions evolve, and how different types of errors impact patient care is essential for designing responsible AI.

When everyone on the team is aligned on clinical reasoning, decision-making becomes clearer, communication improves, and models are built with a more accurate representation of real patient needs. This shared understanding helps ensure that AI systems support clinicians rather than disrupt their workflow. True progress comes from tight collaboration and a shared vocabulary between technical and clinical teams.

  4. The first domain is the hardest, but also the most formative.
    Developing the first AI model in a new medical domain is demanding. Data is often sparse. Labels can be inconsistent. Definitions of clinical success may not yet be standardized. Teams must determine how to evaluate diagnostic accuracy, how to demonstrate clinical relevance, and how to validate outcomes ethically.

Because we were among the first to bring a solution of this kind to the market, the work carried both the excitement of breaking new ground and the challenge of building without established benchmarks. There were no playbooks, few precedents, and limited agreement on how digital diagnostics should be evaluated. This early phase requires persistence, experimentation, and constant communication between technologists and clinicians.

But once the first model gains approval, scaling becomes significantly easier. Data grows more rapidly, regulatory expectations become clearer, and organizations gain the confidence to iterate faster. The expertise developed during this stage becomes a strategic advantage, shaping future models, training processes, and evaluation strategies. This foundational work sets the stage for every subsequent advancement, creating patterns and structures that accelerate progress across the entire domain.
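To illustrate one small piece of that evaluation puzzle, the sketch below reports sensitivity with a bootstrap confidence interval on synthetic data. It is one possible way to quantify uncertainty when no established benchmark exists, not a prescribed method, and the numbers are invented for the example.

```python
# Minimal sketch: diagnostic sensitivity with a bootstrap confidence interval.
# The case counts here are synthetic and purely illustrative.
import random

random.seed(0)

# Hypothetical (true label, model prediction) pairs for positive-condition cases.
positives = [(1, 1)] * 42 + [(1, 0)] * 8   # 42 detected, 8 missed

def sensitivity(cases):
    detected = sum(1 for truth, pred in cases if pred == 1)
    return detected / len(cases)

point_estimate = sensitivity(positives)
resampled = sorted(
    sensitivity([random.choice(positives) for _ in positives])
    for _ in range(2000)
)
lower, upper = resampled[int(0.025 * 2000)], resampled[int(0.975 * 2000) - 1]
print(f"Sensitivity {point_estimate:.2f} (95% bootstrap CI {lower:.2f}-{upper:.2f})")
```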

  5. Clinical consensus is rare, and that shapes how AI must be built.
    Unlike other fields where model evaluation is more straightforward, clinical AI must confront the reality that even expert physicians often disagree. There is rarely a single correct interpretation of a case. As a result, AI models cannot rely on binary ground truths. They must be trained and evaluated through a probabilistic lens, emphasizing calibration and confidence scoring rather than definitive outputs.

Development should involve multiple clinicians with diverse backgrounds to ensure that models reflect a broad range of real-world perspectives and reduce bias. This diversity isn't just helpful; it is necessary. The aim is not to replace clinical judgment. It is to provide well-calibrated insights that support human decision-making, especially in situations where ambiguity is unavoidable. Embracing clinical variability is essential for building AI that clinicians can trust.
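As a minimal sketch of what that probabilistic lens can look like in practice, the example below (with hypothetical clinician reads and model scores) converts multiple reads into a soft probability target and scores the model's confidence against it, rather than against a single binary label.

```python
# Minimal sketch: embrace clinical disagreement by using soft targets instead of
# a forced binary ground truth. All reads and model scores are hypothetical.

# Per-case reads from three clinicians (1 = finding present, 0 = absent).
clinician_reads = {
    "case_001": [1, 1, 1],
    "case_002": [1, 0, 1],
    "case_003": [0, 0, 1],
    "case_004": [0, 0, 0],
}
# Model's predicted probability that the finding is present.
model_scores = {"case_001": 0.92, "case_002": 0.70, "case_003": 0.40, "case_004": 0.08}

# Soft target = fraction of clinicians who saw the finding.
soft_targets = {cid: sum(reads) / len(reads) for cid, reads in clinician_reads.items()}

# Brier score: mean squared gap between predicted probability and soft target;
# lower values indicate better-calibrated, better-resolved predictions.
brier = sum((model_scores[cid] - soft_targets[cid]) ** 2 for cid in soft_targets) / len(soft_targets)
for cid in soft_targets:
    print(f"{cid}: clinicians={soft_targets[cid]:.2f}, model={model_scores[cid]:.2f}")
print(f"Brier score vs. soft targets: {brier:.3f}")
```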

Building AI for healthcare means designing systems that respect the complexity of medicine and the people who depend on it. Progress requires patience, collaboration, scientific rigor, and a willingness to revise assumptions as new information emerges. It also requires acknowledging that adoption takes time. Models must earn trust through transparency, reliability, and ongoing validation in real clinical environments.

The work is demanding, but when AI is developed responsibly and with a deep understanding of clinical reality, it can strengthen decision-making, improve consistency of care, and expand access for populations who need it most. Ultimately, the goal is not to automate medicine but to enhance it, giving clinicians better tools and patients better outcomes. That long-term impact is what makes every challenge worthwhile and what continues to drive innovation in this rapidly evolving field.

Author's bio:

Elad Lachmanovich is the Chief Technology and Product Officer at TytoCare, which is transforming primary care by bringing the doctor's visit into the home with remote physical exams that provide affordable, always-on, and accessible primary care for all. TytoCare works with healthcare insurers and providers to improve virtual access to primary care through a handheld exam kit that connects users with a clinician for a medical exam and telehealth visit wherever they are.

In the decade since the company's founding, Mr. Lachmanovich has led product development as TytoCare has grown into a major player in the telehealth market. Under his leadership, the company has built partnerships with more than 250 major healthcare organizations worldwide, becoming synonymous with best-in-class telehealth examinations and monitoring, and has established a track record of improving access to care with stronger telehealth adoption and results than other solutions on the market. Recently, Mr. Lachmanovich led the creation of the world's first FDA-approved AI Lung Suite, included in the TIME 2025 Best Inventions List.

Mr. Lachmanovich has over 15 years of product development experience in the healthcare sector, having previously worked with companies such as Covidien and Superdimension (acquired by Covidien). Earlier in his career, he led development in an elite IDF technology unit.

