
Adam Miribyan has spent nearly a decade building software for EAP and wellness providers. He currently leads a development team of 20 and oversees AI integration into a platform serving tens of thousands of members. We spoke with him about deploying AI when the stakes involve someone's wellbeing, how he thinks about crisis detection systems, and what acquirers miss when evaluating mental health technology platforms.
Adam, for readers unfamiliar with this space, what are Employee Assistance Programs, and why has software become so central to how they deliver mental health support at scale?
An Employee Assistance Program (EAP) is a voluntary, confidential, employer-sponsored benefit designed to help employees and their family members address personal or work-related issues that might impact performance or well-being. EAPs often include counseling for mental health, financial and legal guidance, or work-life balance resources.
Software has become the backbone of modern EAPs. It delivers content and resources created by licensed clinicians to a virtually unlimited number of people facing similar struggles. Machine learning, guided by clinical experts, makes the member experience more personalized. Members and counselors can communicate asynchronously in secure, private spaces. Software also simplifies real-time and in-person appointment scheduling and removes access barriers. Cloud computing and enterprise infrastructure scale these services to millions. Above all, encryption and strict compliance protocols safeguard personal health data.
You've spent nearly a decade building technology for EAP and wellness providers. How did you end up specializing in this particular intersection of software and mental health?
I've always been fascinated by medicine and technology, and I loved studying economics in high school. I taught myself programming when I was 13 and later started freelancing as a web developer. During my second year in medical school, I dropped out to pursue a degree in economics and technology instead.
Back in 2013, one of my clients was a company called CipherHealth, where I was building software for healthcare providers. That intersection of healthcare and software felt like where I could make the biggest difference.
At the time I was living in Prague, in my early twenties, trying to figure out life one problem at a time, which included battling depression. That experience deepened my empathy for others facing mental health challenges, and drew me toward creating software that could provide real support.
Your team at Curalinc Healthcare has started integrating AI into a platform that serves tens of thousands of members. What does AI actually do in an EAP context, and how do you approach deployment when the stakes involve someone's mental health?
Given the sensitivity of mental health, we take a careful approach to rollout. Every AI feature goes through clinical review, we monitor it closely after launch, and we default to caution.
We're not trying to build AI that "replaces" the humans who provide mental health support. The clinician remains in the driver's seat, while AI enhances the process. Whether a member wants to use AI is entirely up to them; we've put in considerable technical effort to ensure it's easy for someone to opt out of AI completely in their EAP platform.
For example, AI can answer a member's question, but that answer will always be grounded in data produced or curated by our team of licensed clinicians. AI can analyze trends in member behavior to aid the clinician, but it will never prescribe care itself.
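The grounding pattern Adam describes, where AI answers are always tied back to clinician-curated content, is commonly implemented as retrieval followed by generation constrained to the retrieved material. Below is a minimal, hypothetical sketch of that pattern; the ClinicalArticle type, the toy retrieval scoring, and the generate callable are illustrative assumptions, not Curalinc's implementation.

```python
from dataclasses import dataclass


@dataclass
class ClinicalArticle:
    """A piece of content written or reviewed by a licensed clinician."""
    article_id: str
    title: str
    body: str


def retrieve(question: str, library: list[ClinicalArticle], top_k: int = 3) -> list[ClinicalArticle]:
    """Toy relevance ranking by shared words; a real system would use embeddings or a search index."""
    question_words = set(question.lower().split())
    ranked = sorted(
        library,
        key=lambda article: len(question_words & set(article.body.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]


def answer_member_question(question: str, library: list[ClinicalArticle], generate) -> dict:
    """Answer a member's question using only clinician-curated excerpts.

    `generate` stands in for any LLM call; the prompt restricts it to the
    retrieved excerpts so the answer stays grounded in clinical content.
    """
    sources = retrieve(question, library)
    context = "\n\n".join(f"[{a.article_id}] {a.title}\n{a.body}" for a in sources)
    prompt = (
        "Answer the member's question using only the clinician-reviewed excerpts below. "
        "If they do not cover the question, say so and suggest speaking with a counselor.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return {"answer": generate(prompt), "sources": [a.article_id for a in sources]}
```

In a setup like this, the model can only draw on material clinicians have produced or approved, which matches the boundary Adam describes: grounded answers, never prescribed care.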
Can you walk us through a specific technical challenge your team solved that improved how members access care or how clinicians deliver it?
A recent challenge I'm proud of is building real-time crisis detection in our member portal. When someone is struggling with suicidal ideation, they often don't call; instead they search. They might type "I don't want to be here anymore" into our search bar, and the challenge was detecting the intent at that moment and intervening appropriately.
We built a semantic risk detection layer that runs in parallel with every search query. It was important that it wasn't just keyword matching. The language around suicide is full of euphemisms. Phrases like "I just want the pain to stop" don't contain obvious crisis words, but they indicate serious distress. We decided to favor over-detection, because a false positive means someone gets offered support they didn't need, while a false negative means we miss someone in crisis.
When high-risk language is detected, we suppress normal search results and immediately display a supportive safety message with pathways to human support. Beyond the engineering work, the clinical language also had to pass legal and ethical review.
It functions as a digital safety net. Members who might never pick up the phone are being identified and guided to human care in real time.
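Here is a minimal sketch of how a semantic risk layer like the one Adam describes can sit alongside search. It assumes an off-the-shelf sentence-embedding model (shown with the sentence-transformers library); the exemplar phrases, threshold, and function names are illustrative, and the production system's clinical logic is certainly more involved.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative exemplar phrases only; a real list would be curated and
# continually reviewed by licensed clinicians.
CRISIS_EXEMPLARS = [
    "I don't want to be here anymore",
    "I just want the pain to stop",
    "There is no point in going on",
]

# Deliberately permissive threshold: over-detection is preferred, since a
# false positive only means offering support that wasn't needed.
RISK_THRESHOLD = 0.5

model = SentenceTransformer("all-MiniLM-L6-v2")
exemplar_vectors = model.encode(CRISIS_EXEMPLARS, convert_to_tensor=True)


def is_high_risk(query: str) -> bool:
    """Return True when the query is semantically close to any crisis exemplar."""
    query_vector = model.encode(query, convert_to_tensor=True)
    similarity = util.cos_sim(query_vector, exemplar_vectors)
    return bool(similarity.max() >= RISK_THRESHOLD)


def handle_search(query: str, run_search) -> dict:
    """Run the risk check on the query and suppress normal results on a hit."""
    if is_high_risk(query):
        # Replace normal results with clinician-approved safety content and
        # clear pathways to human support.
        return {"view": "safety_message", "results": []}
    return {"view": "search_results", "results": run_search(query)}
```

The interview describes the layer running in parallel with every query; the sketch inlines it for brevity, but the contract is the same: a high-risk query never falls through to ordinary search results.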
Curalinc was recently acquired by private equity. You've also done technical due diligence on other healthcare platform acquisitions. What do acquirers typically overlook when evaluating a mental health technology platform, and what should they be asking?
Buyers are usually fine on the standard diligence areas: revenue, contracts, HIPAA, tech debt. But they sometimes miss the things that are specific to mental health.
First is clinical workflow. How does data actually flow between the platform and the clinicians? Can a therapist see how the member interacted with the app before the session, or their case history from digital interactions? A lot of platforms look integrated, but the clinical team is often working blind.
Second is configuration complexity. Mental health platforms serve hundreds of employers, each with its own eligibility rules, branding, and platform preferences. That adds up fast. Buyers can end up inheriting thousands of edge cases that nobody's documented.
Third is outcomes. Can you actually prove the platform improves clinical outcomes? Not just logins or page visits, but real PHQ-9 score changes, crisis interventions, and return-to-work rates. If you can't measure that, it limits growth.
Honestly, the question I'd ask in diligence is simple: show me what happens when a member in crisis uses your platform at midnight. That one question tells you about safety architecture, clinical integration, and operational maturity all at once.
You lead a development team of about 20 people while also serving as part-time CTO at a hospitality tech company. How do you context-switch between healthcare and hospitality, and does that cross-pollination inform how you build software?
I work best when I have long stretches of uninterrupted time, so I try to avoid frequent context-switching. I block half the day each Monday and Wednesday to work on the hospitality tech company, which leaves me ample time for my work with Curalinc.
The cross-pollination is real. Healthcare software demands high compliance and enterprise-grade architecture. That discipline directly informs how we build the hospitality platform. We get more scalable, secure foundations while maintaining a startup-like delivery pace. On the flip side, my experience with the hospitality startup helped me better understand team incentive structures and sharpen my ability to spot talent when hiring. It's one of the reasons my team at Curalinc maintains a strong delivery pace despite the rapid growth.
Many healthcare organizations struggle to move AI projects from pilot to production. What separates the implementations that actually reach patients from the ones that stall out?
AI proof-of-concepts are so compelling that they're easy to green-light. If you have AI in your project, you benefit from the tailwind of easier leadership buy-in, market pressure, and FOMO. But past a certain point you still have to tackle all the complexity of a healthcare software project, and AI adds its own challenges: increased scrutiny around data governance and compliance, security requirements, infrastructure, and vendor agreements. So, in a way, it makes projects really easy to start but harder to finish.
Overall, that makes releasing AI projects harder, especially in healthcare. So the projects that do ship are the ones backed by better product vision and a deeper understanding of the value AI creates. Teams that use AI to amplify their product are more successful in that sense. Conversely, by hyper-focusing on AI as your main value proposition, you lose track of what really has the potential to differentiate your product, especially if your AI feature only does what ChatGPT, Claude, or Gemini will offer natively in a month. OpenAI's ChatGPT Health and Claude's Apple Health integration only reinforce that this is where things are headed.
Mental health data is particularly sensitive. How do you balance the potential of AI-driven insights against the privacy and trust concerns that come with this kind of information?
When someone reaches out for mental health support, they're trusting us with something real. That shapes how we build.
We only collect what's really necessary to help: not everything we could analyze, but what we should.
The underlying technical foundation is non-negotiable: end-to-end encryption, de-identification protocols, strict role-based access controls, and comprehensive audit trails are table stakes.
But protection that people can't see doesn't build trust. So we're designing our experience around felt control: giving members clear choices about what's shared, what's analyzed, and what stays private, as well as the ability to access their data, correct it, or delete it entirely. It's front and center, and we don't bury it in settings.
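As a rough illustration of what that kind of member-facing control can look like in code, here is a small sketch; the preference names, the opt-in default, and the personalize callable are assumptions made for the example, not the actual platform schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class DataUse(Enum):
    """Distinct uses a member can allow or deny independently."""
    SHARE_WITH_COUNSELOR = "share_with_counselor"
    AI_PERSONALIZATION = "ai_personalization"
    ANALYTICS = "analytics"


@dataclass
class PrivacyPreferences:
    member_id: str
    # Any use not explicitly set is treated as not consented.
    consents: dict[DataUse, bool] = field(default_factory=dict)

    def allows(self, use: DataUse) -> bool:
        return self.consents.get(use, False)


def render_home(prefs: PrivacyPreferences, content: list[str], personalize) -> list[str]:
    """Run AI-driven personalization only when the member has opted in;
    otherwise fall back to the standard, non-AI experience."""
    if prefs.allows(DataUse.AI_PERSONALIZATION):
        return personalize(content)
    return content
```

In a design like this, access, correction, and deletion requests can operate on the same member-scoped record, which keeps those rights straightforward to honor rather than bolted on later.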
The potential of AI in mental health only exists because of trust. Privacy is the foundation, not a constraint on innovation.
What changes do you expect in how technology supports workplace mental health over the next few years, and what role will AI play in that shift?
Workplace mental health support is headed toward more proactive and preventative care. Over the next few years, I expect tools to become more integrated into workplace systems and more personalized, while also putting greater emphasis on privacy and trust.
AI will play a central role by detecting early risk patterns and helping clinicians respond sooner with targeted resources and interventions. Used well, AI can become part of the infrastructure that empowers human connection, creativity, and performance. Companies that treat mental health as a checkbox will fall behind, and those that treat it as core infrastructure will lead.


