When most companies consider artificial intelligence (AI) and how best to use it, the conversation typically revolves around automation, efficiency gains, and cost savings. Commercial players are pouring enormous amounts of capital into AI-powered initiatives to optimize workflows, slash operational expenses, and generate marketing content faster. We recommend a fundamentally different route, one rooted in education, empathy, and accessibility.
Our mission is to make multiple decades of complex auditory neuroscience not just available, but actually accessible, to people confronting hearing loss and related challenges (e.g., tinnitus, hyperacusis, hidden hearing loss, and dementia). So, our approach was not simply to deploy AI to do more with less. Instead, we’re using it to communicate better, understand deeper, and connect more authentically.
That mission led us to build OTIS (in beta version now), a conversational AI (currently English-only) that uses a retrieval-augmented generation (RAG) architecture—pairing a domain-specific vector database with OpenAI’s GPT models—so every answer is grounded in our curated auditory neuroscience corpus.
We uncovered something unexpected in doing so. AI isn’t just changing how we engage with data—it’s transforming how people engage with themselves and their health.
Using RAG to bridge the AI-human gap
The technical foundation of this approach is RAG, which enhances the generative capabilities of large language models (LLMs) like ChatGPT by injecting domain-specific data into the model’s reasoning process. This allows the AI to generate responses grounded in trusted, up-to-date information rather than relying solely on the model’s pretraining knowledge.
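The retrieval step described above can be sketched in a few lines. This is a minimal illustration, not our production pipeline: a toy bag-of-words embedding stands in for a real embedding model, and the prompt-building function stands in for the call to an LLM. All function names here are illustrative.

```python
# Minimal sketch of the retrieval half of RAG. A toy word-count
# "embedding" is a placeholder for a real embedding model; a production
# system would call an embedding API and pass the prompt to an LLM.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts (placeholder for a real model)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus chunks by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]


def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the model by prepending retrieved evidence to the question."""
    context = "\n".join(f"- {c}" for c in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The key design point is the last function: the model is instructed to answer from the retrieved context rather than from its pretraining alone, which is what grounds responses in the curated corpus.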
In our case, that domain-specific data included more than 30 years of translational and basic auditory neuroscience. Data of this kind is largely inaccessible to the general public due to its density, technical language, and publication in academic silos. By carefully curating and embedding hundreds of relevant peer-reviewed studies and internal research documents, we were able to surface insights quickly and succinctly for site visitors.
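Part of that curation work is splitting long papers into passages before embedding them, so retrieval can return a focused excerpt rather than a whole article. A minimal chunking sketch, with illustrative (not our actual) window sizes:

```python
# Sketch of preparing documents for a vector database: split each paper
# into overlapping word-windows so retrieval surfaces focused passages.
# The size/overlap values are illustrative defaults, not tuned settings.

def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into windows of `size` words, each sharing `overlap` words
    with the previous window so ideas spanning a boundary aren't lost."""
    words = text.split()
    step = size - overlap
    return [
        " ".join(words[i:i + size])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]
```

Each resulting chunk would then be embedded and stored alongside its source citation, so an answer can always be traced back to the paper it came from.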
Why do we recommend RAG? Because it ensures the answers OTIS provides aren’t generic; they’re rooted in evidence. Responses reflect the hard-won insights of decades of human expertise, reformatted and rephrased into language real people can understand and act on.
AI that meets people where they are
We assumed OTIS users would ask about mechanisms, molecules, and scientific papers, given that the base dataset was built on scientific literature. But once OTIS launched and real interactions began, a different (though not surprising) pattern emerged.
Users were asking deeply personal questions like “Will this help my tinnitus?” or “Can this make a difference for my dad, who’s showing early signs of dementia?”
The shift, even if somewhat predictable, was still meaningful for us, because it highlighted something often overlooked in AI development. People don’t just want information. They want clarity, reassurance, and personalized context. They want to understand how something specifically relates to them.
By combining LLMs with RAG and a human-centered interaction design, we made it possible for OTIS not just to answer questions, but to understand intent. And that’s where AI is already going—away from transactional interactions and toward real conversations.
Human-centered AI in the health industry
Translational medicine has long faced the challenge of bridging the gap between promising lab research and practical, real-world application. AI has huge potential to be that bridge.
Our chatbot acts as a translator between the researchers and the readers. So, rather than reading a dense 15-page paper on synaptic magnesium buffering, a user can ask OTIS, “What causes hidden hearing loss, and can anything reverse it?” In seconds, they get an explanation that synthesizes rigorous research but is written at an accessible level.
This model, using AI to humanize complex medical information, has big implications, not just for our field of auditory health but for all domains of the larger industry, where scientific literature far outpaces public understanding. In countries like the U.S. and U.K., where health literacy levels remain low, such AI applications can empower people to take informed action earlier and ultimately improve outcomes.
Toward responsible, domain-specific AI
AI certainly isn’t suffering from a shortage of hype. But the next wave of transformation won’t come from general-purpose AI tools with flashy interfaces. It will instead come from domain-specific AI systems trained to answer the questions real people are asking. And that’s where RAG shines. By grounding generative models in specific, relevant data, RAG can boost accuracy and increase trust.
However, this approach requires discipline. It’s not enough to dump documents into a vector store and call it a day. We stress-tested OTIS across edge cases and ensured the chatbot’s tone balanced scientific accuracy with a neutral voice. Monitoring and iterative development are ongoing priorities.
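One concrete form that discipline can take is a retrieval regression suite: a fixed set of questions, each paired with a phrase the top retrieved passage must contain. A minimal sketch, with hypothetical test cases and a pluggable retrieval function:

```python
# Sketch of a retrieval regression check. For each test question, assert
# that the top retrieved chunk contains an expected phrase. The cases
# shown are illustrative; a real suite would also cover tone, safety,
# and out-of-scope questions the chatbot should decline to answer.

def check_retrieval(cases, retrieve_fn):
    """Return the (query, expected_phrase) pairs whose phrase is missing
    from the top-ranked chunk returned by retrieve_fn(query)."""
    failures = []
    for query, expected_phrase in cases:
        top_chunk = retrieve_fn(query)
        if expected_phrase.lower() not in top_chunk.lower():
            failures.append((query, expected_phrase))
    return failures
```

Running a check like this after every corpus or prompt change catches regressions before users do, which is the point of treating the vector store as a maintained system rather than a one-time dump.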
A new kind of competitive advantage
So, while it’s tempting to chase flashy outputs and applications, our experience suggests a more enduring strategy: investing in understanding, not output alone. OTIS improves continuously based on user interactions, and chats are anonymized, securely stored, and reviewed only to improve response accuracy.
By building systems that reflect how people actually think and feel, we can all unlock AI’s promise, creating experiences that educate, support, and empower. That’s the ultimate ROI no matter your industry.
AI is already a lens through which we view and redesign the way information is accessed, processed, and understood. We chose to focus that lens on the decades-old communication gap in hearing science and hearing preservation.
We’ve seen firsthand how AI, when thoughtfully used, doesn’t replace people. It better connects us.