FormalPara Key Summary Points

Medical AI algorithms and software fall under European medical devices legislation and require a suitable certification to operate within the European market.

The approach taken by European legislators is vastly different from that taken by the USA. Certification is based on level of perceived risk, stratified into four classes: I, IIa, IIb, and III. Class I certification is a self-certification process, without external oversight or verification.

Current legislation states that software providing information used to make decisions regarding diagnosis or treatment is at least class IIa; however, owing to transitional measures, some artificial intelligence software may still legally operate under class I until 2024.

Details regarding certification levels and certification documents are not publicly available or verifiable. This may change in the future with the introduction of a new purpose-built database, the European Database on Medical Devices (EUDAMED), and the newly proposed legislation, the Artificial Intelligence Act (AIA).

Commentary

Innovations and improvements in information technology and computer science have transformed medicine over the past decades. More recently, the emergence and rapid development of deep learning and convolutional neural networks have provided computers with a new set of tools for tackling issues that previously required skilled human assessment or input. This new approach opens up vast possibilities, including within healthcare, but as with any other technological breakthrough, its own set of problems has become evident. These new capabilities of devices and software pose new challenges regarding their oversight and regulation, particularly in industries where safety is paramount: automotive, flight, and, of most interest to us, medicine. Deep-learning-based medical devices are already in commercial use in Europe, and almost every month a new medical artificial intelligence (AI) device becomes available on the European market. With that in mind, it is essential to understand the regulations surrounding the introduction and use of such devices within Europe, and how they compare with the approach taken in the USA.

The regulations regarding medical devices are ratified and applied within the European Economic Area (EEA), which consists of the European Union (EU) and three of the European Free Trade Association countries: Iceland, Liechtenstein, and Norway (Switzerland, the fourth EFTA state, participates through bilateral agreements). The validation process for any kind of device is centered around the Conformité Européenne (CE) mark, which signifies conformity with European health, safety, and environmental-protection standards. All electrical devices within the EEA are required to achieve the CE mark, with specific regulations for each type of device. Advances in medical devices and software, including deep-learning devices, prompted legislators to update the previous legislation in 2017 to keep up with the nuances of those devices and their classification. This crucial legislation is the Medical Device Regulation (EU) 2017/745, sometimes referred to as the "MDR" [1]. The European legislation differentiates four basic classes of devices, I, IIa, IIb, and III, with the higher classes reserved for progressively higher-risk devices. Class I devices represent the lowest risk level and do not require any external validation; the certification process is based on self-certification, in which the manufacturer is solely responsible for ensuring compliance with the regulations. Prior to the 2017 legislative update, some medical AI diagnostic software was certified under the nonspecific class I; the updated regulation, however, defines the following rules for classifying medical software:

  • Software that provides information used to make decisions regarding diagnosis or treatment is at least class IIa.

  • If the software's decisions can cause death or irreversible deterioration of a patient's health, it is class III.

  • If the software's decisions can cause serious deterioration of health or require surgical intervention, it is class IIb.

  • Software intended to monitor physiological processes is classified as class IIa, except if it is intended for monitoring vital physiological parameters where the nature of variations in those parameters could result in immediate danger to the patient, in which case it is class IIb.

  • All other software is classified as class I.
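Taken together, these rules form a simple precedence ordering, with the highest-risk conditions checked first. They can be sketched as a decision function; this is an illustrative sketch only, not a legal tool, and all parameter names are our own:

```python
def classify_medical_software(
    informs_diagnosis_or_treatment: bool,
    may_cause_death_or_irreversible_harm: bool = False,
    may_cause_serious_harm_or_surgery: bool = False,
    monitors_physiology: bool = False,
    vital_params_immediate_danger: bool = False,
) -> str:
    """Illustrative sketch of the MDR software classification rules."""
    if informs_diagnosis_or_treatment:
        # Decision-support software is at least class IIa,
        # escalating with the severity of potential harm.
        if may_cause_death_or_irreversible_harm:
            return "III"
        if may_cause_serious_harm_or_surgery:
            return "IIb"
        return "IIa"
    if monitors_physiology:
        # Monitoring vital parameters whose variation poses
        # immediate danger escalates from IIa to IIb.
        return "IIb" if vital_params_immediate_danger else "IIa"
    # All other software falls under class I.
    return "I"
```

Note that, under these rules, class I remains only for software that neither informs clinical decisions nor monitors physiological processes, which is why most diagnostic AI falls into class IIa or higher.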

Looking at the above list, the majority of AI medical software currently operating and being introduced in the EU falls under class IIa or IIb. Both classes require an external audit and certification process prior to market introduction. However, in contrast to the centralized certification process in the USA, conducted by the Food and Drug Administration (FDA), a public body, certification in the EEA is conducted by private entities, the "notified bodies." At the time of writing, 34 notified bodies are authorized to certify general medical devices, and 7 are authorized to certify in vitro medical devices [2, 3]. There is no centralized, singular regulatory body or hierarchy among the notified bodies, and companies can pursue certification with a notified body of their choice.

The lack of external validation or oversight when acquiring a class I CE mark through self-certification creates a potential pitfall: misinformed, or malicious, companies may self-certify their product under class I regardless of the classification level actually required by law. The potential for unintended classification errors is compounded by the fact that deep-learning software is commonly developed by startups and other small ventures, and the details of the certification process, its requirements, and its classifications are not common knowledge.

A major issue with EU certification is the general lack of transparency of the whole process. Suppose that one considers introducing a new AI software or device into a local practice, clinic, or screening program and wants to make sure that the product is legally allowed to be used in the EU; one might expect a website, database, or dedicated office where this could be checked. Unfortunately, there is no way to check or confirm the certification level of a product in the EU without relying on the benevolence of the manufacturer or distributor in making the certification documents available. If the software or device is incorrectly self-certified by the distributor as class I, the user has no way of externally verifying the certification level, or whether the product holds any classification at all, and has to rely on information provided by the distributor. Similarly, the documents and evidence submitted by companies for certification with a notified body are confidential and not available to the public.

As part of the new legislation, the MDR introduces an extended transitional period for devices that were classified as class I under the previous legislation and would now require a notified body under the MDR; notably, most software falls under this umbrella. To qualify for the grace period, the certification documentation (under the previous legislation) must be dated prior to 26 May 2021, full compliance with the new rules must be achieved by 26 May 2024, and no significant changes may be made to the product in the meantime. Additionally, several interim measures must be complied with from 26 May 2021 [Post-Market Surveillance (Articles 83–86, 92, Annex III), Market Surveillance (Articles 93–100), Vigilance (Articles 87–92), Registration of Economic Operators and Devices (Articles 31 and 29)]. Once again, external verification of compliance with these regulations is impossible, and registration data for devices are unavailable. Moreover, this means that a significant part of the legislation introduced in 2017, intended to keep up with rapid changes in medical devices and healthcare, does not fully take effect before 2024. From a consumer standpoint, 7 years is a particularly long time in a field that innovates and evolves every year; in 2024, or shortly after, the legislation may need another update to respond to new realities surrounding the widespread adoption of new, autonomous, digital technology, with yet another transitional period further delaying essential regulation and effective oversight. For the companies affected, of course, the grace period gives time to prepare and update, or up-classify, their products, with most new AI diagnostics organizations opting for class IIa certification in accordance with the new MDR.
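The grace-period conditions above reduce to a small date-based check. The following sketch is illustrative only (the dates come from the MDR text; the function and its parameter names are our own):

```python
from datetime import date

# Key MDR dates (from the regulation text)
MDR_APPLICATION_DATE = date(2021, 5, 26)  # interim measures apply from here
GRACE_PERIOD_END = date(2024, 5, 26)      # full MDR compliance required

def legacy_class_i_grace_applies(cert_date: date, today: date,
                                 significant_changes: bool) -> bool:
    """Illustrative check of whether a legacy class I device may still
    rely on the MDR transitional (grace) period."""
    return (cert_date < MDR_APPLICATION_DATE  # certified under old rules
            and today < GRACE_PERIOD_END      # grace period not yet over
            and not significant_changes)      # product unchanged meanwhile
```

A device self-certified in early 2021 and left unchanged could thus legally remain class I well into 2024, which is precisely the transparency gap discussed above.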

This stands in contrast to practice in the USA, where the FDA publishes a list of approved artificial intelligence and machine-learning-enabled medical devices on its website, open to everyone [4]. Although the relevant legislation may be complex, verifying whether an AI-based medical system is operating legally is simple, takes a few minutes, and requires neither contacting individual companies nor intricate knowledge of the surrounding requirements and legislation.

Fortunately, things are changing for the better in the EU, with a new database introduced under the MDR legislation, the EUDAMED [5]. EUDAMED is an online information technology (IT) system and database meant to improve the transparency and coordination of information on medical devices in the EU market. EUDAMED is currently functioning on a voluntary basis, with the current development timeline estimating mandatory use around 2026. Once fully operational, EUDAMED will function as both a database of approved medical devices as well as a centralized system for managing the certification process and its requirements, such as clinical investigations and post-market surveillance.

Most companies opt for the CE mark first, prior to obtaining FDA approval, likely owing to the less stringent evaluations and lower monetary cost of obtaining a CE mark [6]. In the field of ophthalmology there are currently two medical AI devices approved for use in both the USA and Europe: IDx-DR (Digital Diagnostics, Iowa, USA) and EyeArt (Eyenuk, California, USA). These are the only such devices approved in the USA, but many more are available in Europe. In Table 1 we present the EU- and USA-approved AI-based diagnostic devices in ophthalmology of which we are aware, together with their certification levels. In the USA, both IDx-DR and EyeArt must be paired with a specific model of fundus camera, the Topcon TRC-NW400, for legal use. No such restriction exists in Europe, where any fundus camera can be used. This example is a good demonstration of the overall difference in approach between US and European legislation, with the US FDA exercising stricter control and taking a more cautious approach.

Table 1 Ophthalmology AI devices available in the USA and the EU and their certification levels

The EU is currently preparing purpose-built legislation aimed at AI software and devices, after the President of the European Commission, Ursula von der Leyen, pledged new legislation on AI prior to her election. The new regulation will be founded on principles similar to those of the General Data Protection Regulation, namely protecting human dignity and fundamental rights [7]. With this in mind, the European Commission has published a proposal for the new AIA, touted as "the first initiative, worldwide, that provides a legal framework for Artificial Intelligence (AI)" [7, 8].

Conclusions

Machine learning and other "AI" techniques carry a great deal of enthusiasm and expectation, promising to tackle both old and emerging healthcare issues. Nevertheless, the road to real-life adoption of these devices is not without its intricacies, problems, and hazards. The devices in question deal with extremely sensitive data: collecting, processing, and sending images and diagnoses over the internet. They may be vulnerable to cybersecurity attacks, they may introduce racial or ethnic bias, the studies proving their efficacy may be poorly designed, or their results may not reproduce well in real-life screening. This highlights the need for effective legislation and oversight by the governing bodies, as well as the need for those implementing the devices in clinical practice to understand the legalities of their use and their limitations. The European "open" approach is vastly different from the strict perspective of the US FDA, as reflected in the number of available medical AI devices. The current European approach places a large burden of verifying the effectiveness and integrity of an AI device on the consumer, but the low entry bar also means that many devices are available, possibly driving competition and innovation in the European market, hopefully to the ultimate benefit of patients.