Racist and other biases in AI algorithms for healthcare can be combated with public support

Members of the public are being asked to help eliminate bias against racial and other disadvantaged groups in artificial intelligence algorithms for healthcare.

Health researchers are calling for support in addressing the issue of how “minoritised” groups, who are actively disadvantaged by societal structures, may not benefit from the use of AI in healthcare in the future. The team, led by the University of Birmingham and University Hospitals Birmingham, writes today in Nature Medicine about launching a consultation on a set of standards that they hope will reduce biases known to exist in AI algorithms.

There is growing evidence that some AI algorithms work less well for certain groups of people – particularly those from racial/ethnic minorities. Part of this is due to biases in the data sets used to develop AI algorithms. This means patients from black and minority ethnic backgrounds may receive inaccurate predictions, which can lead to misdiagnosis and incorrect treatments.

STANDING Together is an international collaboration that will develop best practice standards for health datasets used in artificial intelligence to ensure they are diverse, inclusive and do not leave behind underrepresented or minority groups. The project is funded by the NHS AI Lab and the Health Foundation, with funding managed by the National Institute for Health and Care Research, the research partner of the NHS, public health and social care, as part of the NHS AI Lab's AI Ethics Initiative.

Dr. Xiaoxuan Liu, from the Institute of Inflammation and Ageing at the University of Birmingham and co-leader of the STANDING Together project, said:

"By having the right data foundation, STANDING Together ensures that no one is left behind as we seek to unlock the benefits of data-driven technologies like AI. We have made our Delphi study available to the public so we can maximize our reach." to communities and individuals. This allows us to ensure that STANDING Together's recommendations truly reflect what is important to our diverse community.

Professor Alastair Denniston, Consultant Ophthalmologist at University Hospitals Birmingham and Professor at the Institute of Inflammation and Ageing at the University of Birmingham, is co-leading the project. Professor Denniston said:

“As a doctor in the NHS, I welcome the introduction of AI technologies that can help us improve the healthcare we provide – faster and more accurate diagnoses, increasingly personalized treatments, and healthcare interfaces that give patients more control. But we also need to ensure that these technologies are inclusive. We must ensure they work effectively and safely for everyone who needs them.”

“This is one of the most rewarding projects I have ever worked on because it reflects not only my keen interest in using accurate validated data and good documentation to support discovery, but also the urgent need to include minority and underserved groups in the research that benefits them. Of course, the latter group also includes women.”

Jacqui Gath, patient partner, STANDING Together project

The STANDING Together project is now open for public consultation through a Delphi consensus study. The researchers invite members of the public, healthcare professionals, researchers, AI developers, data scientists, policymakers and regulators to help review these standards and ensure they work for everyone they affect.

Source:

University of Birmingham

Reference:

Ganapathi, S., et al. (2022). Tackling bias in AI datasets through the STANDING Together initiative. Nature Medicine. https://doi.org/10.1038/s41591-022-01987-w