Artificial intelligence requires more than technology: healthcare organisations also need the capability to adopt it safely. AI systems influence clinical judgement, workflow, and prioritisation, so their safe use depends on the knowledge and behaviour of the workforce that works with them.
Without proper training, staff can misinterpret, over-rely on, or misuse AI systems, creating clinical, operational, and governance risks.
This article sets out how healthcare organisations can establish role-based AI training programmes structured to support clinical safety, governance, and responsible adoption.
AI training as an organisational safety requirement
Most AI-related risk arises from how systems are used rather than how they are designed. Organisational AI training must therefore be treated as an essential safety control, not an optional extra. Structured training helps staff understand what a system can and cannot do, how it should be used, and where to escalate concerns, making training a key risk control that strengthens operational and organisational assurance wherever AI informs decision-making.
Untrained users may:
- Misinterpret AI outputs
- Overestimate system accuracy
- Fail to escalate concerns
- Use systems outside their intended purpose
Training therefore serves as a risk control, supporting both safety and assurance.
Key workforce areas requiring AI training
AI training should be specific to each role and its responsibilities.
Board and executive management
Executives need an understanding of:
- AI risk profiles and limitations
- Governance and assurance responsibilities
- Oversight questions and decision-making processes
Clinicians and operational users
Users must understand:
- Intended use and boundaries
- Limitations and uncertainty
- Automation bias and escalation routes
Clinical Safety Officers (CSOs) and clinical safety teams
Clinical safety roles require competence in:
- Hazard identification for AI systems
- Safety case development
- Change management and post-deployment monitoring
Data and IT teams
Technical teams need to understand:
- Model validation and performance monitoring
- Data quality and access controls
- System lifecycle management
Information governance, legal, and procurement
Assurance functions need capability in:
- Supplier evaluation
- Contract and assurance alignment
- Data Protection Impact Assessments (DPIAs) and related data protection considerations
Core AI training curriculum
A structured AI training programme typically covers:
- Fundamentals of AI in healthcare
- Clinical risk management for AI systems
- Validation, performance monitoring, and assurance
- Bias, fairness, and data quality
- Human factors and workflow risk
- Data protection and information governance
- Incident reporting and learning
- Procurement and supplier assurance
Modules should align with the organisation's governance processes.
Training pathways and progression
Successful programmes use progressive tiers:
- Foundation: general AI literacy for all staff
- Practitioner: practical, role-specific training
- Specialist: advanced training for CSOs and governance roles
This builds long-term capability rather than one-off awareness.
Embedding training in governance
Training must link directly to:
- Clinical safety committees
- Digital assurance boards
- Standard operating procedures
- Incident reporting systems
Training that is not operationalised does not reduce risk.
Measuring training effectiveness
Training outcomes can be evaluated through:
- Completion rates by role
- Scenario-based competence checks
- Usage and escalation trends
- Incident and near-miss data
- Maturity of governance documentation
- Completeness of supplier assurance
Together, these measures indicate organisational readiness.
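Several of these measures can be tracked from routine training records. As a minimal sketch, assuming a hypothetical record format of `(staff_id, role, completed)` tuples (the function name and data shape are illustrative, not a prescribed standard):

```python
from collections import defaultdict

def completion_rates_by_role(records):
    """Compute the training completion rate per role.

    records: iterable of (staff_id, role, completed) tuples,
    where completed is a bool. Returns {role: fraction completed}.
    """
    totals = defaultdict(int)   # staff counted per role
    done = defaultdict(int)     # completions counted per role
    for _, role, completed in records:
        totals[role] += 1
        if completed:
            done[role] += 1
    return {role: done[role] / totals[role] for role in totals}

# Illustrative data only
records = [
    ("s1", "clinician", True),
    ("s2", "clinician", False),
    ("s3", "executive", True),
]
print(completion_rates_by_role(records))
# → {'clinician': 0.5, 'executive': 1.0}
```

Breaking the rate down by role, rather than reporting a single organisation-wide figure, is what makes the metric useful for role-based programmes: it shows which staff groups are lagging.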
Common gaps in AI workforce enablement
Organisations run into problems when:
- Training is generic rather than role-based
- Suppliers are not involved in learning
- Systems change without training content being refreshed
- Accountability for AI competence is unclear
Such gaps undermine safe adoption.
AI education and BMS Digital Safety assurance
BMS Digital Safety provides AI education programmes that support:
- Clinical safety and governance goals
- Role-based competence development
- Alignment with NHS operational practice
- Practical application rather than theory
Training is designed to fit within existing assurance structures.
AI systems change how care is delivered, and their safety depends on how people understand and use them. Structured, role-based training enables organisations to control AI risk and sustain clinical safety and assurance over time. Safe AI adoption is therefore, at its core, a matter of workforce capability.