The World Health Organization (WHO) has recently published a comprehensive guide on key regulatory considerations for artificial intelligence (AI) in the healthcare sector. The publication highlights the need to ensure that AI systems are safe and effective, to make them rapidly available to those who need them, and to foster collaboration among the stakeholders involved in AI development and implementation.
The increasing availability of healthcare data and rapid advances in analytic techniques have paved the way for AI tools to transform the healthcare industry. WHO acknowledges the potential of AI to improve clinical trials, enhance medical diagnosis and treatment, enable self-care and person-centered care, and supplement the knowledge and skills of healthcare professionals. These benefits could be especially valuable in settings with a shortage of medical specialists, for example in interpreting retinal scans and radiology images.
However, the rapid deployment of AI technologies, including large language models, without a complete understanding of their performance could potentially harm end-users, including healthcare professionals and patients. AI systems that utilize health data may have access to sensitive personal information, necessitating robust legal and regulatory frameworks to ensure privacy, security, and data integrity. The WHO publication aims to assist in establishing and maintaining such frameworks.
Dr. Tedros Adhanom Ghebreyesus, WHO Director-General, acknowledges the promise of AI for health but also underscores the challenges it presents. These challenges include unethical data collection, cybersecurity threats, and the potential for amplifying biases and misinformation. The new guidance is designed to help countries regulate AI effectively, harness its potential, and minimize associated risks in areas such as cancer treatment and tuberculosis detection.
To address the growing need for responsible management of AI health technologies, the publication outlines six areas for the regulation of AI in the healthcare sector.
1. Transparency and documentation: The publication emphasizes the importance of documenting the entire product lifecycle and tracking development processes to foster trust.
2. Risk management: Issues such as intended use, continuous learning, human intervention, model training, and cybersecurity threats must be comprehensively addressed, with an emphasis on keeping models as simple as possible.
3. External validation and intended use: Externally validating data and clarifying the intended use of AI systems help ensure safety and facilitate regulation.
4. Commitment to data quality: Rigorous evaluation of systems before their release is crucial to avoid the amplification of biases and errors.
5. Privacy and data protection: Complex regulations like GDPR in Europe and HIPAA in the United States are addressed, emphasizing the understanding of jurisdiction and consent requirements.
6. Collaboration: Collaboration among regulatory bodies, patients, healthcare professionals, industry representatives, and government partners can ensure compliance with regulations throughout the lifespan of AI products and services.
AI systems are complex and depend not only on their code but also on the data used for training, which often comes from clinical settings and user interactions. Better regulation can help manage the risk that AI amplifies biases present in training data. For instance, regulations can require that attributes such as the gender, race, and ethnicity of individuals featured in training data are reported, and that datasets are intentionally made representative of diverse populations to mitigate biases, inaccuracies, or failures.
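As a minimal illustration of what such attribute reporting might look like in practice, the sketch below tallies the demographic composition of a training dataset. The record fields and values are hypothetical, chosen for illustration only, and are not drawn from the WHO guidance itself.

```python
from collections import Counter

# Hypothetical training records; field names and values are illustrative.
records = [
    {"gender": "female", "ethnicity": "Asian"},
    {"gender": "male", "ethnicity": "Black"},
    {"gender": "female", "ethnicity": "White"},
    {"gender": "female", "ethnicity": "White"},
]

def demographic_report(records, attributes):
    """Count how often each value of each attribute appears in the dataset.

    Records missing an attribute are tallied as "unreported", so gaps in
    the data are surfaced rather than silently dropped.
    """
    return {
        attr: Counter(r.get(attr, "unreported") for r in records)
        for attr in attributes
    }

report = demographic_report(records, ["gender", "ethnicity"])
for attr, counts in report.items():
    total = sum(counts.values())
    for value, n in counts.most_common():
        print(f"{attr}={value}: {n}/{total} ({n / total:.0%})")
```

A report like this makes imbalances visible (here, for example, one gender dominates the sample), which is the kind of disclosure a regulator could require before a system is deployed.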
The WHO publication aims to provide governments and regulatory authorities with key principles that can aid in developing new guidance or adapting existing guidance on AI at national or regional levels. By following these principles, stakeholders can navigate the complexities of AI regulation, fostering safe and effective implementation while minimizing risks.