
The rapid deployment of artificial intelligence and machine learning to combat the coronavirus must still be subjected to ethical review; otherwise we risk harming already disadvantaged communities in the rush to defeat the disease.

This is the conclusion of researchers at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence (CFI), whose two articles published today in the British Medical Journal warn against a blinkered use of AI for data collection and medical decision-making in the fight to return to normality in 2021.

“Relaxing ethical standards in a crisis could have unintended harmful consequences that extend well beyond the life of the pandemic,” said Dr. Stephen Cave, director of the CFI and lead author of one of the articles.

“The sudden introduction of complex and opaque AI, the automation of judgements once made by humans, and the sweeping up of personal data could undermine the health of disadvantaged groups as well as long-term public trust in technology.”

In the other article, led by Dr. Alexa Hagerty of the CFI, the researchers highlight the potential consequences of AI now making clinical decisions at scale – for example, predicting deterioration rates in patients who may need ventilation – when those decisions are based on biased data.

Datasets used to “train” and refine machine-learning algorithms are inevitably skewed against groups with less access to health services, such as ethnic minority communities and those of “lower socio-economic status”.

“COVID-19 is already having a disproportionate impact on communities at risk. We know these systems can discriminate, and any algorithmic bias in treating the disease could deliver another brutal blow,” Hagerty said.

Protests erupted in December when Stanford Medical Center’s algorithm prioritised staff working from home for vaccination over those on COVID wards. “Algorithms are now being used locally, nationally and globally to define vaccine allocation. In many cases, AI is playing a pivotal role in determining who is best able to survive the pandemic,” Hagerty said.

“In a health crisis of this magnitude, fairness and justice are extremely important.”

Together with colleagues, Hagerty highlights the established “discrimination creep” found in AI that uses natural language processing technology to capture symptom profiles from medical records – reflecting and exacerbating biases against minorities already present in the case notes.

They point out that some hospitals are already using these technologies to extract diagnostic information from a number of records, and some are now using this AI to identify symptoms of COVID-19 infection.

Similarly, the use of track-and-trace apps creates the potential for biased datasets. The researchers write that in the UK, over 20% of people over the age of 15 lack essential digital skills, and up to 10% of some population sub-groups do not own a smartphone.

“Whether it comes from medical records or everyday technology, biased one-size-fits-all data sets to fight COVID-19 could prove harmful to the already disadvantaged,” Hagerty said.

In the BMJ articles, the researchers point to examples such as the lack of data on skin color, which makes it nearly impossible for AI models to accurately calculate blood oxygen levels at scale; or the algorithmic tool used by the U.S. prison system to predict reoffending – shown to be racially biased – that has been repurposed to manage COVID-19 infection risk.

The Leverhulme Centre for the Future of Intelligence recently launched the UK’s first Master’s degree in AI ethics. For Cave and colleagues, machine learning in the COVID era should be viewed through the prism of biomedical ethics – specifically its “four pillars”.

The first is beneficence. “The use of AI is intended to save lives, but this should not be used as a blanket justification to set otherwise unwelcome precedents, such as the widespread use of facial recognition software,” said Cave.

In India, biometric identity programs can be linked to vaccine distribution, raising privacy and security concerns. Other vaccine allocation algorithms, including some used by the COVAX Alliance, are driven by proprietary AI, Hagerty says. “Proprietary algorithms make it difficult to look into the ‘black box’ and see how they determine vaccine priorities.”

The second is non-maleficence, or avoiding unnecessary harm. A system programmed solely to preserve life, for example, would not take rates of “long COVID” into account. Third, human autonomy must be part of the calculation. Professionals need to trust the technology, and designers should consider how systems affect human behavior – from personal precautions to treatment decisions.

Finally, data-driven AI needs to be underpinned by ideals of social justice. “We need to engage diverse communities and consult a range of experts, from engineers to medical front-line teams. We need to be open about the values and trade-offs inherent in these systems,” said Cave.

“AI has the potential to help us solve global problems, and the pandemic is unquestionably a major one. However, placing our trust in powerful AI at this time of crisis brings ethical challenges that must be carefully considered if we are to retain public confidence.”


More information:
Tackling COVID-19 with ethical AI, British Medical Journal (2021). DOI: 10.1136/bmj.n364

Does “AI” stand for augmenting inequality in the era of COVID-19 healthcare? British Medical Journal (2021). DOI: 10.1136/bmj.n304

Provided by the University of Cambridge

Citation: Using AI to fight COVID-19 may harm “disadvantaged groups,” experts warn (2021, March 15), accessed March 15, 2021 from -covid-disadvantaged-groups -experts.html

This document is subject to copyright. Apart from fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.