While AI technology is becoming ever more prominent in the healthcare industry, Quadrant Health looks at the potential threat that 'AI bias' poses in data, and how it can be avoided.
Artificial intelligence (AI) technology refers to machine-based systems that can make predictions and decisions in real or virtual environments. They are designed to operate with varying levels of independence, based on datasets and algorithms.
AI technology is revolutionising the work of healthcare professionals and accelerating developments in drug discovery, genomics and radiology, with patient-focused solutions at the heart of its use in health.
Future applications are expected to help healthcare providers avoid errors and allow clinicians to focus on providing care and solving complex cases, making the path towards universal healthcare look ever more likely.
Techno-optimism around AI fuels the belief that the technology will continually improve people's lives. Yet unequal distribution could create a divide between wealthy and low-income countries and worsen inequitable access to healthcare technologies.
High-quality datasets are essential to avoiding racial or ethnic bias within AI
Existing bias in healthcare is already captured in the data on which modern machine-learning models are trained, and it shows in the recommendations made by AI-guided technologies. The concern is that suggestions will be irrelevant or inaccurate for the populations excluded from the data.
Cerys Wyn Davies, a Healthcare Expert and Partner at Pinsent Masons, told Quadrant Health: "AI is only as good as the data it learns from; the results delivered by AI depend primarily on the data that is input. To avoid bias, sufficient data needs to be input, both in terms of volume and representation."
She continued: "In health, it is challenging to collect individuals' health data for use other than the patient's treatment. This is particularly the case in relation to the health data of minority groups (whether based on ethnicity, sex, disability etc.). The lack of data, and the lack of representative data, means that the results which emanate from the AI are likely to be biased."
AI technology skews towards majority datasets, as these cover populations for which more data is available; in unequal societies, this places minority populations at a disadvantage in the data.
The current datasets used to train AI models often exclude girls and women, ethnic minorities, elderly people, rural communities and disadvantaged groups, due to their restricted opportunities to take part in data research.
Women in low- and middle-income countries (LMICs) are much less likely than men to have access to a mobile phone or the internet, causing a digital divide and data bias
According to the WHO guidance on Ethics & Governance of Artificial Intelligence for Health, "327 million fewer women have access to a mobile phone or internet." As a result, women not only contribute less data to AI datasets but are also less likely to benefit from AI-driven services.
Another example of current bias in AI technologies is that skin cancer detection tools exclude people of colour. The WHO guidance also states: "The data used to train one highly accurate machine-learning model are for 'fair-skinned' populations in Australia, Europe and the USA."
"Thus, while the technology assists in diagnosis, prevention and treatment of skin cancer in white and light-skinned individuals, the algorithms were neither appropriate nor relevant for people of colour, as they were not trained on images of these populations."
These systematic biases, when amplified by AI, can become normative biases and worsen existing imbalances in healthcare by entrenching them within the algorithms trained on the data.
Cerys Wyn Davies also told Quadrant Health: "At this time, I consider it unlikely that AI bias in health can be avoided completely, as there is a fundamental difficulty in collecting sufficient and sufficiently representative health data."
"However, if the issue of trust can be addressed across all patient groups, this should unlock the availability of more, and more representative, data. It should also be possible to ensure a diverse AI developer team. Additionally, AI itself may be able to 'learn' to ensure that it is processing sufficient and representative data."
Balanced ethics guidance is essential for the design and implementation of AI for global health. Ethically optimised tools and applications could sustain widespread use of AI to improve human health and quality of life, while lessening or eliminating many of the risks and bad practices the technology could otherwise cause.
Looking to the future of AI technology in health, Cerys Wyn Davies added: "Life sciences has stepped up to overtake other sectors and industries in the deployment of AI in 2020/21, partly accelerated by the pandemic."
Rounding off the interview, Cerys explained: "This acceleration is expected to continue, as the great benefits of AI and digital health have been experienced during the pandemic, including the speed of development of new drugs, diagnostics and vaccines."