Artificial intelligence systems are increasingly used to support clinical and operational decision-making in healthcare. Although these systems can deliver clinical benefit, they also introduce risks beyond those of traditional medical software.
AI bias is one of the most significant and least understood of these risks. Left uncontrolled, biased AI systems can produce unsafe recommendations, inequitable outcomes, and a loss of organisational trust.
This article describes how AI bias manifests in healthcare settings and presents practical, governance-led mitigation measures.
What AI bias means in healthcare
AI bias occurs when a system produces outputs that are systematically less accurate or less fair for particular groups of patients. In healthcare, this can affect:
- Diagnostic accuracy
- Risk stratification
- Availability of services or interventions
- Clinical workflow prioritisation
Bias can be subtle and remain hidden unless it is actively monitored.
Sources of AI bias in healthcare
Bias can enter at multiple stages of the AI lifecycle.
Data imbalance
Training datasets may under-represent certain patient groups or clinical presentations. Systems deployed on such data may then perform less accurately for those populations.
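As a rough illustration, a governance review might audit subgroup representation in a training dataset before accepting it. A minimal sketch in Python, assuming records arrive as dictionaries with a demographic field; the field name `ethnicity`, the group labels, and the 5% threshold are all hypothetical, not clinical standards:

```python
from collections import Counter

def check_representation(records, group_key, min_share=0.05):
    """Return groups whose share of the dataset falls below `min_share`.

    `records` is a list of dicts describing training cases; `group_key`
    names the demographic attribute being audited.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: round(n / total, 3)
        for group, n in counts.items()
        if n / total < min_share
    }

# Toy dataset in which one group is heavily under-represented.
data = (
    [{"ethnicity": "A"}] * 90
    + [{"ethnicity": "B"}] * 8
    + [{"ethnicity": "C"}] * 2
)
print(check_representation(data, "ethnicity"))  # {'C': 0.02}
```

In practice the threshold would be set per use case, since a small absolute share can still be adequate if the group's clinical presentation is well captured.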
Historical inequalities
AI trained on historical data can reflect past inequitable patterns of care. Without intervention, these patterns may be reinforced rather than corrected.
Proxy variables
Protected characteristics that should not inform AI outputs can nevertheless be inferred from proxy variables, such as postcode or patterns of prior service use.
Deployment context mismatch
An AI system validated in one clinical setting may not perform well in another. Differences in population, workflow, or data capture can change results substantially.
Why AI bias is an organisational risk
Uncontrolled AI bias generates several risks:
- Clinical risk, where recommendations may be unsafe for some patient groups
- Equity risk, where care becomes unfair or unequal
- Reputational risk, particularly when bias becomes publicly visible
- Regulatory risk, where governance and assurance are lacking
Practical approaches to mitigating AI bias
Bias mitigation is most effective when embedded in governance procedures.
Bias assessment during procurement
Organisations should evaluate:
- Training data population coverage
- Known performance shortcomings
- Evidence of bias testing
- Intended and unintended uses
This enables informed procurement decisions.
Documentation and dataset transparency
Clear documentation demonstrates understanding of:
- Data sources and exclusions
- Known limitations
- Performance variability
Transparency supports ongoing observation and monitoring.
Human oversight mechanisms
Organisations need to define:
- Who reviews AI outputs
- How concerns are escalated
- When human judgement overrides automated outputs
Clear accountability safeguards both staff and patients.
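The escalation rules above can be made concrete in system configuration. A minimal sketch of one possible routing rule, assuming each AI output carries a confidence score and a patient-group label; the field names, the watchlist, and the 0.8 confidence floor are hypothetical choices, not a standard:

```python
def needs_human_review(output, review_groups, confidence_floor=0.8):
    """Route an AI output to human review when model confidence is low
    or the patient belongs to a group under enhanced monitoring."""
    return (
        output["confidence"] < confidence_floor
        or output["patient_group"] in review_groups
    )

# Hypothetical watchlist of groups flagged for enhanced oversight.
watchlist = {"under_represented_group"}

print(needs_human_review(
    {"confidence": 0.95, "patient_group": "majority"}, watchlist))  # False
print(needs_human_review(
    {"confidence": 0.60, "patient_group": "majority"}, watchlist))  # True
print(needs_human_review(
    {"confidence": 0.95, "patient_group": "under_represented_group"},
    watchlist))  # True
```

The point of encoding the rule is auditability: who reviews, when, and why becomes inspectable rather than informal.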
Post-deployment monitoring
Bias can emerge over time as systems are used in practice. Monitoring enables organisations to:
- Detect performance drift
- Uncover emerging inequities
- Adjust controls and safeguards
This treats bias reduction as an ongoing responsibility.
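Detecting performance drift per group can be as simple as comparing recent subgroup accuracy against the validation baseline. A minimal sketch, assuming accuracy is tracked per group over a recent window; the group names, accuracy figures, and 5-point tolerance are illustrative assumptions:

```python
def drift_alerts(baseline, recent, tolerance=0.05):
    """Flag groups whose recent accuracy has fallen more than
    `tolerance` below the validation baseline.

    `baseline` and `recent` map group name -> accuracy in [0, 1].
    """
    return {
        group: round(baseline[group] - acc, 3)  # size of the drop
        for group, acc in recent.items()
        if baseline[group] - acc > tolerance
    }

# Hypothetical figures: group_b has degraded since validation.
baseline = {"group_a": 0.92, "group_b": 0.90}
recent = {"group_a": 0.91, "group_b": 0.82}
print(drift_alerts(baseline, recent))  # {'group_b': 0.08}
```

A real monitoring pipeline would add statistical tests and minimum sample sizes per window, since small subgroups produce noisy accuracy estimates.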
Why bias mitigation is ongoing
Clinical practice, patient populations, and healthcare systems do not stand still. Over time, demographics, service patterns, and patterns of care change. As this context changes, so does the way an AI system's outputs are produced and interpreted.
AI systems should be tested continually in their clinical environment. What was acceptable at initial validation may no longer hold once the system is exposed to a new patient group, a new workflow, or new data quality conditions. Initial testing and assurance provide only a baseline understanding of bias risk; they cannot alone guarantee continued fairness.
Building bias awareness into governance frameworks makes fairness and safety dynamic rather than fixed. Through routine review, monitoring, and escalation, organisations can detect emerging bias, address it promptly, and remain assured that AI systems continue to support equitable and safe care throughout their lifecycle.
Governing AI bias through education and assurance
BMS Digital Safety helps healthcare organisations understand and govern AI bias as part of broader clinical safety and regulatory alignment. Our approach is grounded in real clinical risk, proportionate controls, and defensible assurance rather than theoretical ethics. Good governance ensures AI supports fair, safe, and accountable care.