
Reimagining Healthcare, Research, and Publishing Through AI – And the Ethics That Guide It

By Jill Luber, Chief Technology Officer at Elsevier

A recent study published in JAMA Network Open tested several widely used AI image generation models with prompts such as “photo of a physician in the United States”. Although statistics show a far more diverse physician population in the US, the images the models generated did not reflect that diversity. Female physicians appeared in only 7% of the images, and some models failed to generate any images of Asian or Latino doctors at all.

This stark underrepresentation reveals how Artificial Intelligence (AI) systems can reinforce outdated stereotypes and marginalize entire groups – even in healthcare, where diversity is critical to patient trust and appropriate, equitable care.  

As AI reshapes the healthcare ecosystem, from research to publishing, it reaches deep into sectors that routinely handle sensitive data such as medical information, where its outputs can have a tangible, and potentially life-threatening, impact on people’s lives.

While the opportunity to advance human progress through the application of AI is exciting, with such influence comes a dual responsibility: to actively guard against bias and to protect privacy.  

AI bias is a shared responsibility  

AI systems are trained on large datasets, and their outputs are only as reliable as the data behind them. This assumes, however, that the data being fed into the system is accurate.

Unfortunately, large datasets are often opaque and complex, which can lead to inaccurate or fabricated outputs and to bias, where algorithms produce discriminatory outcomes. AI systems often reflect the values and assumptions of their creators, largely because their training data lacks diversity or fails to represent the real-world population. In hiring, for example, biased models may favor male candidates over equally qualified women. In healthcare, the consequences are far more critical: misdiagnoses, inequitable treatment pathways, and the exclusion of vulnerable groups.

Elsevier’s Insights 2024: Attitudes toward AI report, a global survey of researchers and clinicians, found that 24% of researchers ranked bias among their top three concerns. In the recently released Clinician of the Future 2025 survey, clinicians stressed that trust in AI depends on systems being trained with high-quality, peer-reviewed, and current data, with 65% placing training data quality among the most important factors in building confidence.

But bias is not a “tech problem” to solve in isolation. It is a shared responsibility, because AI does not operate in a vacuum; it operates in our world, shaped by our collective choices.

From how data is collected to how algorithms are tested and governed, every decision embeds values and assumptions that affect real people. And AI’s reach is now so broad, touching clinical decisions, research funding, editorial priorities, hiring practices, even the news we read, that no one is untouched by its impact.

Fortunately, there is opportunity in this universality. The Pew Research Center found that 51% of U.S. adults who see racial and ethnic bias in healthcare believe AI could help reduce these barriers, and 53% believe the same for hiring. These numbers signal a public readiness to see AI used as a tool for fairness, if it is developed and deployed responsibly.

Data privacy protects people, not just data  

Addressing bias is only one part of the shared responsibility. Personal data, such as location histories and medical records, can be used to infer sensitive details about a person’s life, and mishandling such information can erode public confidence and cause long-lasting harm to individuals and reputations.

To address this, privacy-preserving techniques such as differential privacy and federated learning offer a way forward: they deliver meaningful insights without exposing personal data, adding a layer of protection that maintains both trust and utility.
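
As a simple illustration of the first of these, the minimal Python sketch below shows the core idea behind differential privacy: calibrated noise is added to an aggregate statistic so the result stays useful while no single patient’s record can be confidently inferred. It is illustrative only, with a made-up cohort and parameters, and is not a description of any production system.

    import numpy as np

    def dp_count(records, epsilon=1.0, sensitivity=1.0, rng=None):
        # Adding or removing one record changes a count by at most
        # `sensitivity`, so Laplace noise with scale sensitivity/epsilon
        # gives epsilon-differential privacy for this single query.
        rng = rng or np.random.default_rng()
        return len(records) + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Illustrative only: a synthetic cohort of patients flagged with a condition.
    cohort = ["patient"] * 1342
    print(round(dp_count(cohort, epsilon=0.5)))  # a noisy count near 1342

Smaller values of epsilon mean more noise and stronger privacy, so the trade-off between utility and protection becomes an explicit, auditable parameter rather than an afterthought.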

Implementing bias controls 

Mitigating bias requires more than good intentions. It demands rigorous processes, discipline, diverse datasets, and transparent governance, alongside techniques like Retrieval-Augmented Generation (RAG) that ground responses in trusted sources. As audit and documentation requirements grow, this rigor is also becoming a necessary condition of doing business.
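
To make the RAG idea concrete, here is a minimal sketch in Python. It assumes a hypothetical corpus of peer-reviewed passages and a hypothetical call_llm function standing in for whatever model is used; a real pipeline would retrieve with vector embeddings and a production LLM API rather than the toy keyword overlap shown here.

    def retrieve(query, corpus, k=3):
        # Toy retrieval: rank trusted passages by keyword overlap with the query.
        terms = set(query.lower().split())
        scored = [(len(terms & set(doc.lower().split())), doc) for doc in corpus]
        ranked = sorted(scored, key=lambda pair: pair[0], reverse=True)
        return [doc for score, doc in ranked[:k] if score > 0]

    def answer_with_rag(query, corpus, call_llm):
        # Ground the model's answer in retrieved, trusted content and
        # instruct it to abstain rather than guess when evidence is missing.
        passages = retrieve(query, corpus)
        context = "\n".join("- " + p for p in passages)
        prompt = ("Answer using only the peer-reviewed passages below; "
                  "reply 'not found in the sources' if they are insufficient.\n"
                  "Passages:\n" + context + "\n\nQuestion: " + query)
        return call_llm(prompt)

Grounding the prompt in retrieved sources, and asking the model to abstain when those sources are silent, is what reduces the risk of hallucinated answers.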

To do this effectively, it is our collective responsibility to ensure that AI supports, rather than replaces, human decision-making, especially in a domain as sensitive as healthcare.

In my role, we actively address these issues by building bias mitigation into the workflow. We assess datasets for representativeness before they are used, and we deploy RAG so that large language models ground their responses in peer-reviewed, trusted content, reducing the risk of hallucinations or misinformation.
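
A representativeness check of the kind described above can start very simply. The sketch below is illustrative only, with hypothetical attribute names, reference shares, and thresholds: it compares subgroup proportions in a training dataset against a reference population and flags groups that are badly underrepresented.

    from collections import Counter

    def representativeness_report(records, attribute, reference, tolerance=0.5):
        # Compare each subgroup's share of the dataset with its share of the
        # reference population; flag any group below tolerance * expected share.
        counts = Counter(r[attribute] for r in records)
        total = sum(counts.values())
        report = {}
        for group, expected in reference.items():
            observed = counts.get(group, 0) / total if total else 0.0
            report[group] = {
                "observed": round(observed, 3),
                "expected": expected,
                "underrepresented": observed < tolerance * expected,
            }
        return report

    # Hypothetical example: sex recorded in a small clinical training set
    # versus an assumed 50/50 reference split of the patient population.
    data = [{"sex": "female"}] * 120 + [{"sex": "male"}] * 380
    print(representativeness_report(data, "sex", {"female": 0.5, "male": 0.5}))

Checks like this do not fix a skewed dataset on their own, but they make the gap visible before a model is trained on it.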

These safeguards are supported by strong responsible AI principles and structures, including an AI oversight board, regular compliance reviews, and privacy impact assessments. Together, these measures ensure our systems reflect the rigorous standards we expect them to uphold.

Looking to the future: ethical use of AI is a shared responsibility  

As AI becomes more embedded within the workflows of those making critical decisions, its influence on outcomes, and on the lives those outcomes touch, means we all share a stake in ensuring it is fair, transparent, and accountable. Human oversight will be essential for making sure AI decisions are reviewed, contextualized, and corrected where necessary.

AI is not just a tool; it is a reflection of our collective choices. As it reshapes industries, we must ensure it does so equitably and ethically. That means embedding fairness into design, protecting privacy, and maintaining constant human oversight.
