WHO issued its first global report on artificial intelligence in healthcare, entitled Ethics and Governance of Artificial Intelligence for Health, setting out six guiding principles for its design and use. With ethical concerns emerging in recent years over surveillance and over the infringement of privacy rights and autonomy that the use of AI in healthcare presents, WHO hopes these principles will serve as a basis for all stakeholders adopting ethical approaches to the use of AI.
To implement the principles it lays down, the report holds all stakeholders – both public and private – accountable and responsive to the healthcare workers who will rely on these technologies.
Outlining the risks and limitations of AI
WHO recognised the enormous potential of AI to strengthen the delivery of health care and medicine, while cautioning that the full power of AI can only be harnessed once ethics and human rights are put at the heart of its design, deployment and use. Throughout the report, challenges and risks related to AI in the health context are highlighted, including the unethical collection and use of health data and biases encoded in algorithms. Unregulated use of AI might additionally undermine patients' rights and breach their privacy.
Key ethical principles for the use of AI in healthcare
To support governments, developers and regulators in laying the foundation for AI technology, the WHO experts formulated the following six guiding principles:
The first, protecting human autonomy, requires that the use of AI should not undermine human autonomy and should keep people in full control of their medical decisions.
The second key principle advocates the need for promoting human well-being and safety and public interest. AI designers should satisfy regulatory requirements for safety, accuracy and efficacy for well-defined cases or indications.
The WHO also draws attention to the need for transparency, explainability and intelligibility in order to ensure AI technologies are understandable to medical professionals, patients, users and regulators. As part of transparency and understanding, sufficient information should be published before the AI technology is designed or deployed. Meaningful public consultations should also take place on how any given technology should be used.
While AI technologies are designed to perform certain tasks, it is the stakeholders’ responsibility to ensure that they are used under appropriate conditions and by trained people (fostering responsibility and accountability).
The fifth principle is to ensure inclusiveness and equity, so that AI is accessible to the largest possible number of people, regardless of their age, sex, gender, race or ethnicity. AI tools and systems should be monitored to identify disproportionate effects on specific groups of people.
The last principle, promoting AI that is responsive and sustainable, calls on designers, developers and users to transparently assess AI applications during their actual use.
As AI becomes more commonplace in healthcare systems, WHO hopes these principles will guide the future work of regulators in realising the full potential of AI in healthcare. The open question is to what extent the WHO report will shape the way AI technologies are adopted and deployed across the globe.