
AI- and data-driven innovations are touted as the next leap forward in medical technologies, with the potential to improve care and relieve overburdened healthcare systems. However, even as AI innovation continues apace in research centres and tech labs all over the world, a major hurdle remains: trust. This trust deficit has a dual nature, driven in part by technical problems such as data bias and black-box models, but also by social and cultural responses fuelled by scepticism and limited understanding of emerging technologies. At Trilateral Research, our work is founded on the belief that this dual nature requires a sociotechnical response, rooted in interdisciplinary collaborations that bring together expertise in data science and technology alongside ethics, law, sociology, and other social science and humanities (SSH) disciplines.
Below are two case studies from our research focused on delivering medical AI tools that are technically and ethically trustworthy enough to be used in real-world healthcare contexts. Both deploy cutting-edge technical approaches, such as federated learning and explainability techniques, through sociotechnical collaborations, ensuring that technical robustness is driven by interdisciplinary expertise.
Developing a fair, transparent AI model for rehabilitation
Trilateral's work in the ongoing EU-funded PREPARE project demonstrates how sociotechnical collaboration can be embedded in cutting-edge technical work to develop trustworthy and transparent AI. The project is developing AI tools that support clinicians by predicting personalised rehabilitation recommendations for patients with various conditions. Trilateral leads the development of the algorithmic bias identification and mitigation module, responsible for ensuring that the AI-based recommendation systems developed in the project operate fairly across all patient groups.
The PREPARE AI systems are trained using federated learning, an advanced machine learning approach that allows models to be trained across decentralised data sources without ever transferring sensitive patient data. The federated learning approach is especially suited to healthcare, where patient privacy is paramount and data sharing across institutions can be legally, ethically, and administratively complex. This approach enables the AI tool to learn from a diverse range of data sets across institutions, bolstering robustness while protecting patient privacy.
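To make the idea concrete, the core of federated averaging can be sketched in a few lines: each site trains on its own data, and only model weights are sent to a central aggregator. This is a minimal, hypothetical illustration on synthetic data with a simple logistic model, not the PREPARE implementation.

```python
import numpy as np

# Hypothetical illustration: three "hospitals" each hold private data
# that never leaves the site; only model weights are shared.
rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=20):
    """Train a logistic-regression model locally by gradient descent."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w = w - lr * X.T @ (p - y) / len(y)    # gradient step on log-loss
    return w

# Private datasets of different sizes (never pooled centrally).
true_w = np.array([1.5, -2.0])                 # synthetic ground truth
clients = []
for n in (40, 60, 100):
    X = rng.normal(size=(n, 2))
    y = (X @ true_w + rng.normal(scale=0.1, size=n) > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(5):                             # federated rounds
    local_ws, sizes = [], []
    for X, y in clients:
        local_ws.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    # FedAvg: average client models, weighted by their sample counts.
    w_global = np.average(local_ws, axis=0, weights=sizes)

print(w_global)  # direction should roughly match true_w
```

In a real deployment the aggregation happens on a coordinating server, clients may drop in and out between rounds, and secure aggregation or differential privacy is layered on top; the averaging step itself, however, is essentially the one shown.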
But federated learning presents distinct challenges for fairness auditing. Trilateral's bias detection and mitigation tool has been specifically tailored to this setting. The tool builds upon our interdisciplinary research expertise across technology, data protection, healthcare ethics, work systems, and law, constituting a novel and forward-looking contribution to ethical AI in privacy-preserving environments.
Our work on bias auditing focused on bridging the gap between algorithmic design and societal acceptance. Specifically, the aims of our sociotechnical deployment were to identify and assess algorithmic bias in the AI model, define and optimise fairness metrics tailored for supervised learning in healthcare, and integrate bias analysis and explainability tools into the platform to enable oversight, transparency, and confident deployment by end users.
To achieve these goals, our sociotechnical team identified potential biases through comprehensive literature reviews and analysis of demographic patterns, followed by data risk mapping of clinical datasets to spot risks early on. They also developed a bespoke bias assessment tool to evaluate fairness across the federated learning pipeline while preserving data privacy. Finally, they explored fairness optimisation through post-processing mitigation, testing techniques such as differential weighting to balance predictive performance with fairness across diverse patient groups.
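As a rough illustration of what a fairness audit and a post-processing adjustment can look like, the sketch below measures the false-positive-rate gap between two groups and then sets per-group decision thresholds to equalise it. The data, scores, and target rate are all hypothetical, and the project's bespoke tool is considerably more sophisticated; this shows only the generic shape of the technique.

```python
import numpy as np

# Hypothetical fairness audit: scores from a trained model, binary labels,
# and a protected attribute defining two groups.
rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, size=n)              # protected attribute (0 / 1)
y = rng.integers(0, 2, size=n)                  # true labels
# Simulate a biased model: group 1 receives systematically higher scores.
score = 0.5 * y + 0.2 * group + rng.normal(scale=0.2, size=n)

def fpr(scores, labels, threshold):
    """False-positive rate: negatives predicted positive."""
    neg = labels == 0
    return np.mean(scores[neg] >= threshold)

# Audit: one shared threshold yields unequal false-positive rates.
for g in (0, 1):
    m = group == g
    print(f"group {g} FPR @0.5: {fpr(score[m], y[m], 0.5):.3f}")

# Mitigation: pick a per-group threshold hitting a common target FPR,
# a simple post-processing step in the spirit of equalised odds.
target = 0.10
thresholds = {}
for g in (0, 1):
    m = (group == g) & (y == 0)
    thresholds[g] = np.quantile(score[m], 1 - target)

for g in (0, 1):
    m = group == g
    print(f"group {g} FPR adjusted: {fpr(score[m], y[m], thresholds[g]):.3f}")
```

Real audits track several metrics at once (false negatives matter at least as much in diagnosis), and in a federated setting the group-wise statistics must themselves be computed without centralising patient records.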
Crucially, ethical considerations were not a box-ticking exercise secondary to technical performance. Each component, from initial literature reviews to algorithmic development, was part of an integrated interdisciplinary strategy to build models that are not only technically sound but also ethical, trustworthy, and deployable in real-world healthcare contexts.
Bias assessment and mitigation in medical AI tools
In the EU-funded iToBoS project, our sociotechnical collaboration was deployed to fix an underperforming medical AI tool. The project sought to develop an AI diagnostic platform for early detection of melanoma via a full-body scanner and an AI Cognitive Assistant integrating clinical, genetic, imaging, and other medical data: an innovation with significant potential impact as melanoma cases rise and patients face uneven access to dermatologists. However, the tool was found to be misdiagnosing female patients, producing a higher rate of false positives and indicating gender bias.
In this instance, our sociotechnical approach served as a problem-solving tool to mitigate bias in the tool's outputs. Our team initiated a sociotechnical collaboration in which a data scientist and a researcher with expertise in law and healthcare ethics worked together to apply pre-processing algorithms to the training data and re-train the tool, accounting for gender and other demographic attributes that could trigger biased outputs, such as age, educational attainment, and socioeconomic status.
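One widely used pre-processing algorithm of this kind is reweighing, which assigns sample weights so that the protected attribute and the label become statistically independent before re-training. The sketch below is a generic illustration on synthetic data; the source does not specify which pre-processing algorithm iToBoS actually used.

```python
import numpy as np

# Hypothetical pre-processing sketch: "reweighing" training samples so the
# protected attribute and the label are statistically independent before
# re-training the model.
rng = np.random.default_rng(2)
n = 1000
gender = rng.integers(0, 2, size=n)
# Simulate label imbalance: positives are more frequent for gender == 1.
label = (rng.random(n) < np.where(gender == 1, 0.7, 0.3)).astype(int)

def reweigh(groups, labels):
    """Weight each sample by P(group) * P(label) / P(group, label)."""
    weights = np.empty(len(labels), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == g) & (labels == y)
            observed = mask.mean()                       # P(g, y) in the data
            independent = (groups == g).mean() * (labels == y).mean()
            weights[mask] = independent / observed
    return weights

w = reweigh(gender, label)
# Under the weights, the positive rate is equalised across genders.
for g in (0, 1):
    m = gender == g
    rate = np.sum(w[m] * label[m]) / np.sum(w[m])
    print(f"gender {g} weighted positive rate: {rate:.3f}")
```

Training the model with these sample weights (most libraries accept a `sample_weight` argument) removes the statistical association between the protected attribute and the label, which is one way to stop the model learning it as a shortcut.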
Throughout the process, Trilateral completed ethical, legal, privacy, and social impact assessments to evaluate the tool's real-world effects, applying a risk-based approach to minimise harm and embed privacy-, data protection-, and security-by-design techniques. The team also contributed to explainable AI techniques such as Concept Relevance Propagation, which surfaces the concepts a model uses to reach a prediction, enabling clinical oversight of the tool's outputs. Finally, the team consulted with clinicians, patients, and patient advocates to ensure the tool met end-user needs and was deployable in real-world healthcare settings.
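Concept Relevance Propagation builds on layer-wise relevance propagation (LRP), which redistributes a model's output score backwards through the network onto its inputs. The toy sketch below applies a basic epsilon-LRP rule to a hand-written two-layer network to illustrate the propagation idea only; CRP itself additionally conditions relevances on learned concepts and requires specialised tooling, and none of the numbers here come from the iToBoS model.

```python
import numpy as np

# Toy two-layer ReLU network with hand-picked (hypothetical) weights.
x = np.array([1.0, -0.5, 2.0, 0.5])              # input features
W1 = np.array([[ 0.5, -0.3,  0.8],
               [ 0.2,  0.7, -0.5],
               [-0.4,  0.6,  0.1],
               [ 0.9, -0.2,  0.3]])              # input -> hidden
W2 = np.array([[1.2], [-0.7], [0.4]])            # hidden -> output

# Forward pass, keeping activations for the backward relevance pass.
a1 = np.maximum(0, x @ W1)                       # hidden ReLU activations
out = (a1 @ W2).item()                           # model output score

def lrp_step(a, W, R, eps=1e-6):
    """Epsilon-LRP: redistribute relevance R from a layer's outputs to
    its inputs in proportion to each input's contribution."""
    z = a @ W
    z = z + eps * np.sign(z)                     # stabilise small values
    s = np.asarray(R) / z                        # relevance per output unit
    return a * (W @ s)                           # relevance per input unit

R_hidden = lrp_step(a1, W2, [out])               # relevance of hidden units
R_input = lrp_step(x, W1, R_hidden)              # relevance of input features

print("output score:", round(out, 3))
print("input relevances:", np.round(R_input, 3))
# Conservation property: input relevances sum (approximately) to the output.
print("sum of relevances:", round(R_input.sum(), 3))
```

The conservation property is what makes these attributions auditable: every unit of the prediction is accounted for by some input, so a clinician can see which features drove a given output.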
Three tips for responsible sociotechnical innovation
Below are three practical tips for research and innovation teams looking to embed a sociotechnical approach in their work.
- Embed interdisciplinary expertise from the start: teams should bring together technical, ethical, legal, and social expertise from the beginning of the design process, to ensure interdisciplinary expertise is a core element of their work. This enables proactive risk identification, alignment with real-world settings, and design choices that reflect a range of values and considerations.
- Operationalise fairness and transparency: translate ethical commitments into concrete technical methods, such as bias assessment tools, explainability mechanisms, and privacy-preserving techniques. This approach supports the creation of usable and trustworthy AI systems and helps ensure regulatory compliance across jurisdictions.
- Co-create with end users and stakeholders: meaningful engagement with stakeholders such as clinicians, patients, and domain experts throughout the design process ensures innovations are usable, socially acceptable, and appropriate for their intended context. By rooting tools in the realities of their intended environments, co-creative approaches strengthen both technical performance and ethical robustness.
Conclusion
Often, ethical and legal requirements and other SSH considerations are framed as burdens on technological development. The case studies presented above complicate that narrative for a few reasons. First, they demonstrate that sociotechnical collaborations can be integrated into AI development without disrupting workflows. Second, they show how these collaborations enable researchers to anticipate and pre-emptively address emerging problems, supporting the development of robust technologies. Finally, they show that the sociotechnical approach is key to building the trust needed to bring new technologies out of the lab and into use in real-world settings.
In a crowded innovation landscape dominated by speed and scale, trust is becoming a core differentiator. Without it, even the most advanced tools will stall in pilot phases, unable to transition to deployment in critical industries. Our experience indicates that sociotechnical approaches are key to surmounting this challenge, and that these methodologies should be sites of continuous innovation on par with technological development. As a multidisciplinary SME experienced in agile, cross-sector collaborations, Trilateral Research is uniquely positioned to lead this shift.