We are witnessing an unprecedented acceleration in the adoption of Artificial Intelligence technologies in healthcare as the COVID-19 pandemic grips one country after another.
AI helps fight the Pandemic
Daily headlines speak of public health authorities around the world using conversational AI to keep citizens informed of the disease's spread and symptoms. AI researchers are using open datasets such as CORD-19 (COVID-19 Open Research Dataset), a library of 24,000 research papers pertaining to the virus, to create machine learning models that can help scientists find the information they need to fight the virus.
Among them is AI that examines Non-Pharmaceutical Interventions (NPIs), such as travel bans and school closures, perusing the dataset to study their effectiveness in flattening the COVID-19 curve. AI also helped reduce the time needed to design testing kits in South Korea from three months to a few weeks.
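To make the literature-mining idea above concrete, here is a minimal sketch of term-based retrieval over paper abstracts. It is purely illustrative: the corpus, paper IDs, and query are made up, and real CORD-19 tooling uses far more sophisticated models than this hand-rolled TF-IDF.

```python
import math
from collections import Counter

# Hypothetical mini-corpus standing in for CORD-19 abstracts (illustration only).
ABSTRACTS = {
    "paper-1": "school closure reduced transmission of the virus among children",
    "paper-2": "travel ban delayed the arrival of the epidemic by several weeks",
    "paper-3": "testing kits were designed and validated for rapid diagnosis",
}

def tokenize(text):
    return text.lower().split()

def rank_by_tfidf(query, corpus):
    """Rank documents by a simple TF-IDF overlap with the query terms."""
    n_docs = len(corpus)
    # Document frequency: in how many abstracts does each term appear?
    df = Counter()
    for text in corpus.values():
        for term in set(tokenize(text)):
            df[term] += 1
    scores = {}
    for doc_id, text in corpus.items():
        tokens = tokenize(text)
        tf = Counter(tokens)
        score = 0.0
        for term in tokenize(query):
            if term in tf:
                idf = math.log(n_docs / df[term])
                score += (tf[term] / len(tokens)) * idf
        scores[doc_id] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_by_tfidf("school closure", ABSTRACTS)
print(ranking[0][0])  # the abstract mentioning school closure ranks first
```

Even this toy version shows the shape of the task: scoring thousands of papers against a researcher's question so the most relevant evidence surfaces first.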
The threat to Privacy, Safety, and Autonomy
This acceleration in AI deployment, though largely welcome, comes with ethical concerns. A case in point is the facial recognition technology being adopted globally as a way to track the virus's spread. Privacy experts are concerned that in the rush to implement COVID-19 tracking capabilities, important and deep-rooted issues around data collection and storage, user consent, and surveillance are being undermined.
Beyond the concerns around privacy, unchecked AI deployment has raised disquiet in the broader field of bioethics. The American Medical Association (AMA) Journal of Ethics posits that some AI technologies have a tremendous capacity to threaten patient preference, autonomy, and safety, and takes the view that current policy and ethical guidelines for AI technology lag behind the progress AI has made in the field of medicine.
Within the context of the pandemic, frontline healthcare professionals face the choice of whom to save and whom to let go. These are human choices, but what if they were made by Artificial Intelligence? An AI algorithm "learns" to prioritise patients it predicts to have better outcomes for a particular disease. This can lead to unintentional injustice to individuals and is an important issue for AI in healthcare.
Impact on Principles of Bioethics
Let us elaborate on and discuss unintentional injustices within the framework of four commonly accepted principles of bioethics:
- The principle of respect for autonomy acknowledges the capacity of individuals for self-determination and their right to make choices based on their own values and beliefs. Algorithmic activities, like profiling, re-conceptualise reality. An individual using a personal health app has limited oversight over what passive data it is collecting and how that data is transformed into recommendations, limiting their ability to challenge any recommendation made and resulting in a loss of personal autonomy and data privacy.
- The principle of nonmaleficence requires that healthcare providers do not intentionally cause harm or injury to the patient, whether through acts of commission or omission. MIT research reported that commercial face recognition software performed dismally on darker-skinned women. This raised concerns that AI systems relying on computer vision to help diagnose skin cancers could systematically misdiagnose a subset of the population.
- The principle of beneficence holds that healthcare providers have a duty to benefit the patient, as well as to take positive steps to prevent and remove harm from the patient. The harm caused by algorithmic activity is hard to attribute (to detect the harm and find its cause). If a decision made by clinical decision support software leads to a negative outcome for the individual, it is unclear to whom responsibility or liability should be assigned, and therefore how to prevent it from happening again.
- The principle of justice deals with the distribution of resources within a society and the non-discrimination of individuals. Ranking algorithms are of particular concern in many fields of application. An algorithm "learns" to prioritise patients it predicts to have better outcomes for a particular disease. Because AI techniques are data-driven, the selection of training datasets matters: multiple peer-reviewed studies have shown bias against patients from poorer backgrounds in receiving treatment, unjustly impacting Black and minority ethnic communities in the US and parts of Europe.
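The ranking-bias mechanism described above can be sketched in a few lines. This is a hypothetical illustration, with made-up patient records, of a pattern documented in the peer-reviewed studies mentioned: when an algorithm uses past healthcare spending as a proxy for clinical need, it ranks historically under-served patients lower even when their need is identical.

```python
# Hypothetical data: two patients with IDENTICAL clinical need, but the
# patient from group B historically had less access to care and therefore
# lower recorded spending. All names and numbers are invented.
patients = [
    {"id": "A1", "need": 0.8, "spending": 10_000, "group": "A"},
    {"id": "B1", "need": 0.8, "spending": 4_000, "group": "B"},
]

def priority_by_spending(patient):
    """Proxy-based score: ranks by past spending, not by clinical need."""
    return patient["spending"]

ranked = sorted(patients, key=priority_by_spending, reverse=True)

# Despite equal need, the group-A patient is prioritised first, because the
# proxy (spending) reflects historical access to care rather than need.
print([p["id"] for p in ranked])  # ['A1', 'B1']
```

No one programmed discrimination here; the injustice enters through the choice of a proxy variable whose historical values already encode unequal access. That is precisely why dataset and feature selection are ethical decisions, not merely technical ones.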
To conclude, the current pressures on beleaguered healthcare systems have precipitated the large-scale deployment of AI-led automated clinical and non-clinical decision making in health, driven by the need for speed and efficiency. But as we move forward, we must examine these new measures and put in place checks and balances to ensure that we promote the ethical principles of safety, autonomy, fairness, and justice for all human beings.
To paraphrase a quote ascribed to Hippocrates: wherever the art of medicine is loved, there must also be a love of humanity!
AI will not save us from the healthcare crisis; humanity will!