Within the next decade, the Artificial Intelligence (AI) healthcare market is predicted to be worth more than $61 billion, and during the COVID-19 pandemic AI has been used to detect the virus in chest x-rays. In the context of this enormous growth and social salience, the European Commission (EC) has published its proposal for a ‘Regulation Laying Down Harmonised Rules On Artificial Intelligence (Artificial Intelligence Act)’. The proposal follows, and develops many of the themes of, the February 2020 White Paper, which outlined how the EC would promote the uptake of AI while simultaneously assessing ethical and safety risks and ensuring consumer protection.
The EC continues this risk-based approach to AI and emphasises healthcare as a sector that will be significantly affected. It acknowledges the “wide array of economic and societal benefits” AI can produce and argues that AI should be a “tool for the people”, a “force for good in society”, and “human centric”. However, it stresses that rapid technological change can threaten citizens’ fundamental rights and personal safety. To create an “ecosystem of trust” and give people confidence to embrace AI, the EC adopts a “proportionate risk-based approach”: prohibiting certain technologies, identifying high-risk AI systems, and imposing transparency obligations. It aims to create and enforce a legal framework that protects fundamental rights and upholds safety requirements.
Impact on healthcare
The proposal includes measures that address two of the major concerns pertaining to the use of AI in healthcare.
First, the issue of patient safety and the use of data. New AI healthcare technology may expose patients to unforeseen risks and require a reassessment of safety thresholds. It certainly raises questions over how AI providers use data, how data is used in machine learning, and, fundamentally, who the data belongs to. The EC’s repeated emphasis on its “risk methodology” and its imposition of “conformity assessment procedures” on AI providers appear designed to mitigate these concerns. The proposal also sets out legal requirements for high-risk AI systems in relation to data and data governance, as well as transparency obligations when AI collects information.
Second, the need for clarity over clinical decision-making and liability. The use of AI creates a need to establish who has ultimate authority to diagnose, who is liable in the event of patient harm, and how to differentiate between faulty content and incorrect operation. Although these questions will need to be addressed by national healthcare systems, the EC will produce a database of AI providers and encourage national authorities to create agencies to oversee supervision and liability in relation to AI. This should promote external critical appraisal of both tech companies and clinicians.
Issues for the future
The proposal has highlighted the need for more specific wording about which technologies are banned and, potentially, for stricter controls. Two letters to EC President Ursula von der Leyen have called for further restrictions on the use of AI and stronger anti-discrimination language. Similarly, the self-regulation involved in the conformity assessments completed by providers has led some MEPs to suggest that the proposal gives too much power to the tech industry and will not sufficiently protect citizens.
The main question is to what extent this regulation will frustrate progress in the AI industry. Although the proposal is intended to address risk without “unduly constraining or hindering technological development”, the concerns above suggest that further restrictions may be imminent. The balance struck between patient safety and innovation will certainly shape how much AI technology is produced and the extent to which it can be integrated into the healthcare system.