Artificial Intelligence (AI) has matured rapidly in recent years and can be deployed across virtually every activity in the health sector, from clinical decision-making to biomedical research and medicine development. It holds great potential to enhance the efficiency and effectiveness of health care systems, yet its successful application requires a deep understanding of its strengths and challenges.
In a recent working paper, the Organisation for Economic Co-operation and Development (OECD) described current applications of AI, its limitations and risks, and provided policy guidance on how to improve the uptake of AI at the global level and harmonise international policies on data access and data protection.
Artificial Intelligence: harnessing the potential and addressing risks
AI applications are accelerating across an array of clinical practice areas: tumour detection in radiology, surgical settings where clinicians operate with high doses of ionising radiation, and personalised patient protocols, to name only a few. Compared with the clinical setting, however, the rise of AI in healthcare is even more visible in biomedical and population health research. AI programmes support researchers in developing new medicines, tailoring medical treatment to individual patients, and matching individuals to clinical trials.
While AI offers myriad benefits, there are also several risks. The authors note that applications in health are often still in research and development phases, concentrated in high-income countries, and based on narrow intelligence. Because they are designed to solve a very specific problem, they are unable to generalise beyond the limits within which the model was trained. Despite the complexity of AI programmes, these tools are therefore limited in their ability to inform decisions and recommend actions.
Furthermore, most existing health data are not of high enough quality, which makes them difficult to exchange, process or interpret.
Developing policies to promote trustworthy, safe and reliable AI in health
With AI applications remaining mostly in the development stages, policymakers have an opportunity to identify real-world issues and opportunities where AI may help, while considering mechanisms to ensure that risks are mitigated. The OECD working paper puts forward recommendations on how to address the fundamental barriers to the successful deployment of AI in the health sector.
One of the key priorities for policymakers is improving data quality, interoperability and secure access through better data governance, by implementing and operationalising the OECD AI Principles. Training data are particularly important for the further development of methodologies and the application of new technologies.
Strong frameworks for health data governance should be developed. To address data quality and availability while ensuring the protection of privacy and data, policymakers should focus on the proper implementation of the OECD Health Data Governance Recommendation. This recommendation also provides guidance on how to address barriers to the efficient interoperability of health data.
Finally, the need for further investment in AI, global coordination and cross-border collaboration should not be overlooked when laying the foundations for AI in health.
There is a growing need to develop an enabling policy environment for AI that not only includes a risk management approach and allows countries to further foster trustworthy AI technologies, but also supports the health of patients and communities across the world.
With the publication of this report, the OECD has once more highlighted the need to harmonise approaches to the use of AI in healthcare. The newly created HMA is currently establishing a working group to contribute to these global efforts and is inviting interested medical professionals and holomedicine experts to join.