When most companies consider artificial intelligence (AI) and how best to use it, the conversation typically revolves around automation, efficiency gains, and cost savings. Commercial players are pouring enormous amounts of capital into AI-powered initiatives to optimize workflows, slash operational expenses, or generate marketing content faster. We recommend a fundamentally different route: one rooted in education, empathy, and accessibility.
Our mission is to make multiple decades of complex auditory neuroscience not just available but actually accessible to people confronting hearing loss and related challenges (e.g., tinnitus, hyperacusis, hidden hearing loss, and dementia). So our approach was never simply to deploy AI to do more with less. Instead, we're using it to communicate better, understand more deeply, and connect more authentically.
That mission led us to build OTIS (now in beta), a conversational AI (currently English-only) that uses a retrieval-augmented generation (RAG) architecture, pairing a domain-specific vector database with OpenAI's GPT models so that every answer is grounded in our curated auditory neuroscience corpus.
In doing so, we uncovered something unexpected: AI isn't just changing how we engage with data. It's transforming how people engage with themselves and their health.
Using RAG to bridge the AI-human gap
The technical foundation of this approach is RAG, which enhances the generative capabilities of large language models (LLMs) like ChatGPT by retrieving relevant domain-specific documents at query time and supplying them to the model as context. This allows the AI to generate responses grounded in trusted, up-to-date information rather than relying solely on the model's pretraining knowledge.
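To make that concrete, here is a minimal retrieve-then-generate loop in Python. The model names, the toy two-chunk corpus, and the in-memory cosine search are illustrative assumptions for the sketch, not a description of OTIS's internals.

```python
# A minimal RAG sketch: embed a question, retrieve the most similar
# corpus chunks, and ground the model's answer in them. Model names
# and the toy corpus are assumptions, not OTIS's actual setup.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts and normalize to unit length."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    vecs = np.array([d.embedding for d in resp.data])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

# Stand-in corpus; in practice these would be chunks of curated papers.
chunks = [
    "Hidden hearing loss involves cochlear synaptopathy: damage to the "
    "synapses between inner hair cells and auditory nerve fibers...",
    "Tinnitus is the perception of sound without an external source...",
]
chunk_vecs = embed(chunks)

def answer(question: str, top_k: int = 2) -> str:
    # 1. Retrieve: rank corpus chunks by cosine similarity to the question.
    q_vec = embed([question])[0]
    best = np.argsort(chunk_vecs @ q_vec)[::-1][:top_k]
    context = "\n\n".join(chunks[i] for i in best)
    # 2. Generate: constrain the model to the retrieved context.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the provided context. "
                        "Use plain, accessible language.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("What causes hidden hearing loss?"))
```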
In our case, that domain-specific data included more than 30 years of translational and basic auditory neuroscience. Data of this kind is largely inaccessible to the general public due to its density, technical language, and publication in academic silos. By carefully curating and embedding hundreds of relevant peer-reviewed studies and internal research documents, we were able to surface insights quickly and succinctly for site visitors.
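Curating the corpus is where the real effort goes; the mechanical ingestion step is comparatively simple. Here is a sketch of what chunking and tagging can look like. The file name, chunk sizes, and metadata fields are hypothetical.

```python
# Illustrative ingestion step: split a curated paper into overlapping
# chunks sized for retrieval. The size and overlap values are
# assumptions; the right numbers depend on the corpus and the
# embedding model.
def chunk_text(text: str, size: int = 800, overlap: int = 150) -> list[str]:
    """Split text into overlapping character windows so a finding that
    spans a chunk boundary still appears intact in one chunk."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

paper = open("curated/kujawa_liberman_2009.txt").read()  # hypothetical file
records = [
    {"id": f"kl2009-{i}", "text": c, "source": "Kujawa & Liberman 2009"}
    for i, c in enumerate(chunk_text(paper))
]
# Each record's text is then embedded (as in the earlier sketch) and
# written to the vector database alongside its source metadata, so
# every retrieved chunk traces back to a specific peer-reviewed paper.
```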
Why do we recommend RAG? Because it helps ensure the answers OTIS provides aren't generic; they're rooted in evidence. Responses reflect the hard-won insights of decades of human expertise, reformatted and rephrased into language real people can understand and act on.
AI that meets people where they are
Given that the underlying dataset was built from scientific literature, we assumed OTIS users would ask about mechanisms, molecules, and papers. But once OTIS launched and real interactions accumulated, a different (though not surprising) pattern emerged.
Users were asking deeply personal questions like "Will this help my tinnitus?" or "Can this make a difference for my dad, who's showing early signs of dementia?"
In hindsight the shift seems obvious, but it was still meaningful for us because it highlighted something often overlooked in AI development: people don't just want information. They want clarity, reassurance, and personalized context. They want to understand how something relates specifically to them.
By combining LLMs with RAG and a human-centered interaction design, we made it possible for OTIS not just to answer questions but to understand intent. And that's where AI is already going: away from transactional interactions and toward real conversations.
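Intent awareness can be implemented several ways. One lightweight pattern, shown here purely as an illustration rather than a description of how OTIS works, is a cheap classification call whose label steers the tone of the final answer:

```python
# Illustrative intent-aware prompting: classify the question before
# answering, then adapt the system prompt. This pattern is our sketch,
# not necessarily what runs inside OTIS.
from openai import OpenAI

client = OpenAI()

def classify_intent(question: str) -> str:
    """Label a question 'personal' (about the asker's own situation)
    or 'scientific' (about mechanisms and evidence)."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Reply with exactly one word, 'personal' or "
                        "'scientific', describing the question's intent."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

TONE = {
    "personal": "Acknowledge the concern, avoid giving medical advice, "
                "and explain what the research does and does not show.",
    "scientific": "Be precise and reference the underlying findings.",
}

intent = classify_intent("Can this make a difference for my dad?")
# Fall back to the scientific tone if the classifier returns noise.
system_prompt = TONE.get(intent, TONE["scientific"])
print(intent, "->", system_prompt)
```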
Human-centered AI in the health industry
A longstanding challenge in translational medicine is bridging the gap between promising lab research and practical, real-world application. AI has huge potential to be that bridge.
Our chatbot acts as a translator between researchers and readers. Rather than reading a dense 15-page paper on synaptic magnesium buffering, a user can ask OTIS, "What causes hidden hearing loss, and can anything reverse it?" In seconds, they get an explanation that synthesizes rigorous research yet is written at an accessible level.
This model of using AI to humanize complex medical information has big implications, not just for our field of auditory health but for every corner of the health industry where scientific literature far outpaces public understanding. In the U.S. and U.K. alone, where health literacy levels remain low, such AI applications can empower people to take informed action earlier and ultimately improve outcomes.
Toward responsible, domain-specific AI
AI certainly isn't suffering from a shortage of hype. But the next wave of transformation won't come from general-purpose AI tools with flashy interfaces. It will instead come from domain-specific AI systems trained to answer the questions real people are asking. And that's where RAG shines. By grounding generative models in specific, relevant data, RAG can boost accuracy and increase trust.
However, this approach requires discipline. It's not enough to dump documents into a vector store and call it a day. We stress-tested OTIS across edge cases and made sure the chatbot's tone balances scientific accuracy with a neutral voice. Monitoring and iterative development are ongoing priorities.
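As a flavor of what that stress-testing can look like, here is a sketch written against the answer() function from the earlier example. The test cases and the substring heuristic are illustrative assumptions; a production evaluation would score responses with a rubric or a judge model rather than crude string checks.

```python
# Illustrative edge-case checks for a RAG chatbot, written as plain
# asserts. The cases and expected phrases are assumptions for the sketch.
from rag_sketch import answer  # hypothetical module holding the earlier answer()

EDGE_CASES = [
    # Requests for medical advice should be deflected, not answered.
    ("What drug dosage should I take for my tinnitus?", "consult"),
    # Claims unsupported by the corpus should be met with uncertainty.
    ("Does magnesium cure dementia?", "research"),
]

for question, must_contain in EDGE_CASES:
    reply = answer(question)
    # Substring matching is a crude heuristic, but it catches the worst
    # regressions cheaply on every iteration of the prompt or corpus.
    assert must_contain in reply.lower(), (
        f"Edge case failed: {question!r} -> {reply[:80]!r}"
    )
```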
A new kind of competitive advantage
So, while it's tempting to chase flashy outputs and applications, our experience suggests a more enduring strategy: invest in understanding, not output alone. OTIS improves continuously based on user interactions, but chats are anonymized, securely stored, and reviewed only to improve response accuracy.
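For the curious, the general anonymization pattern looks something like the sketch below: scrub obvious PII from the text and one-way-hash the session identifier before anything touches storage. The regexes and identifiers here are illustrative, not OTIS's actual rules.

```python
# Illustrative pre-storage anonymization: redact common PII patterns
# and replace the session ID with a deterministic one-way hash.
import hashlib
import re

def anonymize(session_id: str, text: str) -> tuple[str, str]:
    """Return a hashed session ID and a PII-scrubbed chat message."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email]", text)   # email addresses
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[phone]", text)     # phone numbers
    hashed = hashlib.sha256(session_id.encode()).hexdigest()[:16]
    return hashed, text

sid, chat = anonymize("user-4821", "Email me at jo@example.com about my dad.")
print(sid, chat)  # deterministic hash + "Email me at [email] about my dad."
```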
By building systems that reflect how people actually think and feel, we can all unlock AI's promise, creating experiences that educate, support, and empower. That's the ultimate ROI, no matter your industry.
AI is already a lens through which we view and redesign the way information is accessed, processed, and understood. We chose to focus that lens on the decades-old communication gap in hearing science and hearing preservation.
We've seen firsthand how AI, when thoughtfully used, doesn't replace people. It better connects us.