As innovation in artificial intelligence-based technologies progresses largely unchecked in both healthcare and broader society, calls have mounted for regulatory oversight to control, support, and direct AI-based innovations and their application.
A European approach to AI regulation
Likening the current regulatory landscape for AI technologies to “the Wild West”, the European Commission (EC) in February launched a White Paper on Artificial Intelligence, outlining a strategy for regulating and investing in AI intended to bridge the gap between the technology and its oversight.
The paper weds twin European ambitions: becoming a “leader in innovation” in the application of ‘big data’, and instigating a “high quality” regulatory framework to orchestrate the “AI ecosystem” to the benefit of the health sector and wider industry. The EC takes a characteristically ethics-oriented line in its regulatory proposals, in which risk assessment and consumer protection take primacy, stipulating human oversight of AI systems and safety requirements for large datasets.
Regulating and supporting AI in healthcare
The EC casts healthcare as a prime beneficiary of the new regulations – the health sector can, in the EC’s terms, “benefit from the data revolution” through more efficient and precise diagnosis at lower cost, among a string of other benefits.
Placing healthcare at the heart of the AI ecosystem, the EC envisages strengthening the existing use of AI across the supply chain, from AI-focussed medical research, where the development of further testing and experimentation will be prioritised, to the expanded use of professional service robots in healthcare delivery.
AI technology is increasingly a routine tool in radiology departments, for example, used to diagnose patients more precisely by comparing screening slides against large datasets of previously diagnosed cancers. Such datasets, and the application of the AI technology itself, will be subject to the new regulations to be implemented over the coming years.
Scaling AI regulatory policy to meet medical practice
The establishment of clear liability rules will be imperative as AI uptake increases in healthcare – medical responsibility will need to be explicitly designated as clinical decision-making becomes increasingly automated. Requirements on human oversight will go some way towards keeping liability clear, though the growth of medical negligence law stands testament to the existing difficulties in ascertaining liability.
Significant work thus lies ahead for the EC in drafting a suite of regulations that addresses the contentious issues surrounding medical risk and technological application. And as the advance of AI technologies and their applications continues exponentially, the legislative task may grow beyond the capacity of the European Parliament’s new Special Committee on AI.
Reception of the regulatory proposals
Sennay Ghebreab, Professor of Socially Intelligent Systems at the University of Amsterdam, has suggested the new regulation should focus on the companies using AI, rather than the technology itself, to avoid the Sisyphean task of attempting to regulate ever-newer technologies. Ghebreab further argued that the General Data Protection Regulation (GDPR), which lays the groundwork for the new AI regulation, has hindered the medical applications of AI, with questions around ethics and privacy distracting political decision-makers from AI’s benefits.
Stakeholder responses to the consultation this month were nonetheless broadly supportive of the regulatory proposals. Of the 1,215 respondents, representing academics, industry groups and citizens, support for the various GDPR-esque regulations on liability and security ranged from 83% to 91% depending on the requirement. There is a clear appetite for change – only 3% of respondents indicated current legislation is ‘fully sufficient’, and the overwhelming majority called for either a new regulatory framework or significant modifications to current legislation.
While the proposals are overwhelmingly supported by stakeholder groups, the EC nonetheless faces a considerable task in regulating the disparate and diverging development and application of AI. Even within the healthcare sector alone, an overarching regulatory framework will require extensive auxiliary legislation to ensure policy keeps pace with continually evolving practice, and fresh modes of regulation, in line with Ghebreab’s suggestion, may be necessary.