The AI Act: What does this mean really for your medical device?


Artificial intelligence (AI) is exploding right now, expanding so quickly that regulation has, until now, been unable to catch up. The growth trajectory only keeps steepening, with an expected annual growth rate of 37.3% from 2023 to 2030.

The European Commission AI Act (we will call it the Act for the rest of this blog) is the first regulation of its kind. Its intention? To ensure that AI is created in a way that is not discriminatory, is traceable and transparent, and does not pose a danger to the public.

A major part of this act is that the EU wants to make sure AI systems are overseen by humans, not just automation left unchecked. This is especially true when we consider AI in our medical devices helping make clinical decisions.

The Act defines processes for developing, documenting, and maintaining AI systems that closely mirror what is required for MDR 2017/745 compliance. Given these obvious redundancies, it is possible that the Act will be integrated with MDR conformity assessments down the road. For now, though, it's too early to tell how that will work.

Don’t want to read it in complete detail? We have a TL;DR at the end.

Risk Class for AI Systems in Medical Devices

The Act uses a risk-based approach to classify AI systems into 3 risk classes with differing regulatory requirements: unacceptable risk, high-risk, and general purpose. You can use this tool to see what risk class your device falls into and your obligations. 

Unacceptable Risk

There are some general ethical rules in the Act for what AI systems can and cannot do. The practices below apply to all risk classes, and going against them would result in the classification of the AI in your device as an “unacceptable risk,” prompting a ban on the device that would go into effect in September 2024.

  1. AI cannot employ subliminal, manipulative, or deceptive techniques to change a person’s beliefs or behavior. 
  2. AI systems cannot prey upon or change the behavior of groups based on disability, socioeconomic status, age, or ethnicity in a way that may cause harm.
  3. AI systems cannot classify or catalog people over time based on social behavior or personality characteristics in a way that may lead to unfair treatment of that group or person, or predict the likelihood of them committing a criminal offense. 
  4. AI cannot create or expand facial recognition databases.
  5. AI cannot analyze or predict emotions in the workplace or educational settings. 
  6. AI cannot use biometrics to infer individual characteristics like race, political opinion, union membership, religious/philosophical beliefs, or sex life/orientation. 
  7. The use of AI for real-time biometric identification is prohibited in public spaces.

You can think of this as being boiled down to: Don’t use AI to misuse biometrics, harm disadvantaged groups, or manipulate people. These rules apply to most AI models, but we would be remiss not to mention there are exceptions for law enforcement and government use. 

High Risk

Most, if not all, medical devices are classified as high-risk given their safety risk to patients. The Act also considers certain healthcare AI systems to be high risk, even if they aren't medical devices, such as emergency patient triage systems and AI used by public authorities to evaluate eligibility for services. 

Additionally, any AI that utilizes biometrics, affects critical infrastructure, assesses or aids in education or vocational training, is used for recruitment or decision-making for employment, or affects access to public and private services and benefits is also considered high-risk. 

There are some exceptions, like when AI is used for a simple procedural task or for the purpose of improving the result of an activity previously completed by a human. If you believe that your device is not high-risk, you can note that as part of your assessment that you provide to the national competent authority. 

The EU has created and will maintain a database of high-risk AI systems, in which you will have to register your system before you can bring your product to market. You can see what kind of information you’ll need to provide for registration in the database here.

General Purpose

General purpose AI (GPAI) systems are typically Large Language Models (LLMs) that can be used for a variety of tasks and can accept multiple types of information as input. They are characterized by their scale and reliance on transfer learning. General purpose AI systems are often the foundation of models used in healthcare. Everyone’s favorite, GPT-4, is an example of a general purpose LLM.

The biggest regulatory concern with general purpose AI is transparency: users need to be informed that they are interacting with an AI.

The AI must also mark any generated audio, video, image, or text outputs as artificially generated, essentially to prevent deepfakes and public misinformation.
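
As a simple illustration (our own sketch; the Act requires the disclosure but does not prescribe a format), text output could carry both a machine-readable flag and a visible notice:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GeneratedOutput:
    """Wraps model output with an explicit AI-generation disclosure."""
    content: str
    generated_by_ai: bool = True               # machine-readable flag
    model_id: str = "example-model-v1"         # hypothetical identifier
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def with_disclosure(self) -> str:
        # Visible notice shown to the user alongside the content
        return f"{self.content}\n\n[This content was generated by an AI system.]"

print(GeneratedOutput("Suggested triage note ...").with_disclosure())
```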

General purpose AI that exceeds a set threshold is considered to have “systemic risk.” Systemic risk is an additional classification of GPAI, prompted by the model’s high-impact capabilities, that imposes slightly more stringent rules.

What do we mean by high-impact capabilities? The Commission will evaluate your AI model based on:

  • Number of parameters in the model
  • Quality or size of the dataset used
  • Input and output modalities
  • Number of registered users
  • Cumulative computation - threshold of FLOPs (floating point operations)
    • This metric indicates the total number of calculations a model uses during its training process, and it helps quantify the computational effort and resources required to train an AI model.

If your model passes the computational threshold, you have a two-week period in which to notify the Commission. Once the AI Act fully applies (March 2027), you must alert the Commission whenever you pass the threshold so it can update its list of systemic-risk systems. The Commission will keep a list of general purpose models with systemic risk and will continue to update the threshold values as new technologies emerge.

Passing the computational threshold can happen at any time from the initial training of the AI to years after its release. Since you have to keep meticulous documentation, specs, and logs on your system’s performance, you will know when your model passes the threshold. 
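
For a back-of-the-envelope check, a common rule of thumb (our assumption, not a calculation method defined in the Act) is that training a dense transformer costs roughly 6 x parameters x training tokens in floating point operations; the Act’s initial presumption threshold for systemic risk is cumulative training compute above 10^25 FLOPs. A minimal sketch:

```python
# Rough estimate of cumulative training compute, using the common
# ~6 * parameters * tokens approximation for dense transformer training
# (an assumption on our part, not a method prescribed by the AI Act).
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # Article 51 presumption threshold

def estimated_training_flop(num_parameters: float, training_tokens: float) -> float:
    """Approximate total floating point operations used during training."""
    return 6.0 * num_parameters * training_tokens

flop = estimated_training_flop(num_parameters=70e9, training_tokens=2e12)
print(f"Estimated training compute: {flop:.2e} FLOPs")
print("Above the systemic-risk presumption threshold?",
      flop > SYSTEMIC_RISK_THRESHOLD_FLOP)
```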

Requirements for High-Risk AI-Enabled Medical Devices

Medical devices with high-risk AI must have external labeling with contact information for the manufacturer and must affix the CE mark to the device itself or its packaging. The device must also be registered in the database the Commission and Member States are developing (yes, another database) for high-risk systems.

If your device is already on the market and you find out that it is not fully compliant with the new AI Act, you’ll need to take corrective actions to remedy that. You can do a product recall or an update, and you should inform your notified body. Unless your device has “unacceptable risk,” you have until March 2026 for Annex III devices (infrastructure, biometrics, and devices to be used by law enforcement or the government) or March 2027 for all other AI-enabled medical devices to fix these issues.

Quality Management System

When we consider medical devices, both ISO 13485 and the AI Act require a quality management system - what is the difference? Fortunately, there is a lot of overlap between the two so you won’t have to create all-new documentation. 

The requirements below are ones to pay attention to, as they may not already be set up in your ISO 13485 QMS for your AI. You must have written policies and procedures that cover the following items:

  1. Regulatory compliance strategy, including a procedure for modifications to the system
    1. This modification procedure requirement is similar to the Predetermined Change Control Plan (PCCP) required by the FDA in the United States. This would outline your procedure for updating accuracy, adding training data, etc. This guidance document could be helpful in outlining your change procedures. 
  2. Systems for data management, acquisition, collection, analysis, labeling, storage, filtration, mining, aggregation, and retention
  3. Risk management system
    1. The system involves identifying, analyzing, estimating, and evaluating emerging risks, as well as adopting targeted measures to address these risks. High-risk AI systems must be tested to ensure consistent performance and compliance, with special consideration given to the impact on vulnerable groups, including those under 18.
  4. Post-market monitoring system
    1. This system should collect and analyze performance data to ensure ongoing compliance and must include a post-market monitoring plan as part of the technical documentation. Existing monitoring systems under Union legislation can be integrated with these requirements to avoid duplication and minimize burdens, provided they offer equivalent protection. A minimal sketch of what this data collection can look like follows this list.
  5. Procedures for reporting incidents and communicating with national authorities and notified bodies, including how you will be providing access to the relevant data
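
To make item 4 a little more concrete: the monitoring plan eventually needs something that actually collects field performance and flags degradation against your validated baseline. Here is a minimal sketch under our own assumptions (the metric, baseline value, and alert margin are hypothetical, not values from the Act):

```python
# Hypothetical baseline and margin established during validation
BASELINE_SENSITIVITY = 0.92
ALERT_MARGIN = 0.05  # trigger a review if sensitivity drops more than 5 points

def check_field_sensitivity(field_results):
    """field_results: list of (model_flagged_positive, clinician_confirmed_positive) pairs."""
    confirmed = [pred for pred, truth in field_results if truth]
    if not confirmed:
        return  # no confirmed positives collected yet, nothing to evaluate
    sensitivity = sum(confirmed) / len(confirmed)
    if sensitivity < BASELINE_SENSITIVITY - ALERT_MARGIN:
        # In a real PMS this would open a CAPA / notify the monitoring owner
        print(f"ALERT: field sensitivity {sensitivity:.2f} is below the validated baseline")
    else:
        print(f"OK: field sensitivity {sensitivity:.2f}")

check_field_sensitivity([(True, True), (False, True), (True, True), (True, False)])
```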

Logs 

One requirement for high-risk AI systems that may be new for medical device manufacturers is that the AI systems must automatically record events throughout their lifetime to ensure traceability.

Logging capabilities should record events relevant for identifying risks, facilitating post-market monitoring, and monitoring operations. Specifically, they must log the period of use, the reference databases consulted, the matched input data, and the individuals involved in result verification. You’ll have to retain this documentation for at least six months.
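
As a rough illustration of what such a log can look like (the field names here are our own; the Act specifies what must be traceable, not a schema), an append-only structured record written automatically on every inference would cover the listed items:

```python
import json
from datetime import datetime, timezone

LOG_PATH = "ai_event_log.jsonl"  # append-only structured log, retained for at least six months

def log_inference_event(session_id, input_reference, reference_database,
                        output_summary, verified_by):
    """Automatically record one inference event for traceability."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # period of use
        "session_id": session_id,
        "input_reference": input_reference,        # pointer to the matched input data
        "reference_database": reference_database,  # database the system checked against
        "output_summary": output_summary,
        "verified_by": verified_by,                # person involved in result verification
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

log_inference_event("session-0042", "study-123/image-7", "reference-db-v3",
                    "lesion probability 0.81", "dr.jane.doe")
```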

Human Oversight

High-risk AI systems must be designed for effective human oversight, ensuring risks to health, safety, or fundamental rights are minimized. Oversight measures should be built into the system or implemented by the deployer. See, we need humans for something after all.

The person you designate to oversee AI operations should understand the system’s capabilities, monitor its operation, avoid automation bias, interpret outputs correctly, and have the ability to override or halt the system if necessary. For certain high-risk systems, decisions based on AI outputs must be verified by at least two competent individuals.
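
One way to make that oversight concrete (a sketch under our own assumptions; the Act does not prescribe an implementation) is to block any action on an AI recommendation until the required reviewers have signed off, with an explicit halt path:

```python
from dataclasses import dataclass, field

@dataclass
class AiRecommendation:
    """An AI output that cannot be acted on without human sign-off."""
    summary: str
    halted: bool = False
    approvals: set = field(default_factory=set)  # names of verifying clinicians

    def approve(self, reviewer: str) -> None:
        self.approvals.add(reviewer)

    def halt(self) -> None:
        # Any competent reviewer can stop the recommendation outright
        self.halted = True

    def may_act_on(self) -> bool:
        # Require two distinct competent individuals before acting on the output
        return not self.halted and len(self.approvals) >= 2

rec = AiRecommendation("Discharge patient: low predicted risk")
rec.approve("clinician_a")
rec.approve("clinician_b")
print(rec.may_act_on())  # True only after two separate sign-offs
```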

Conformity Assessment and Declaration

As we mentioned, the Act has conformity assessment procedures that sound awfully familiar to those of you going through an MDR 2017/745 conformity assessment for medical devices.

The AI Act conformity assessment procedure has two options: Internal Control and QMS/Technical Documentation Assessment. The Document Assessment is similar to a medical device conformity assessment, where you hand over your documentation to a notified body, and they evaluate it. 

You can, however, opt out of a third-party assessment using the Internal Control method, if eligible (see the section below). For medical devices, it appears that already undergoing an MDR 2017/745 conformity assessment and adding the additional controls is enough to do a self-assessment and skip another full conformity assessment.

However, we will see if regulators agree with this as the Act comes into enforcement.

Internal Control Method (Annex VI) - Self-Assessment

  • Use This Method If: Your high-risk AI system is covered by Union harmonization legislation, and that legislation allows opting out of a third-party conformity assessment. Under section A of Annex I, where common specifications are set out for particular types of devices, you’ll find medical devices and in vitro diagnostic medical devices. To our knowledge, this means you can opt out of an assessment as long as you meet the requirements below and are CE marked under MDR 2017/745 or IVDR 2017/746.
  • Requirements:
    • Verify that your quality management system complies with Article 17 (make sure you add the QMS requirements we mentioned above)
    • Meet the common specifications/harmonized standards set out for your type of device, as is required for any medical device entering the EU market 
    • Make sure your technical documentation has all of the elements listed out in Annex IV (it likely does already but you should double check).
    • Include specific language for monitoring the AI in the post-market surveillance processes for your company according to Article 72 if it isn’t there already.

QMS/Technical Documentation Assessment (Annex VII) - Notified Body Assessment 

  • Use This Method If: You are not sure whether your QMS, tech specs, or PMS adheres directly to the guidelines in the AI Act and want to involve a third party (your notified body) in evaluating your model.
  • Requirements:
    • Have your quality management system and technical documentation assessed by a notified body.
    • Submit documentation covering all aspects listed under Article 17, including procedures to maintain the adequacy and effectiveness of the QMS.
    • Provide full access to training, validation, and testing datasets, and possibly to the training and trained models, subject to certain conditions.
    • Receive periodic audits and surveillance to ensure ongoing compliance with the approved QMS.

If your technical documentation specifies that your AI model is static or changes only according to a predefined testing plan, you do not need a new conformity assessment. However, if you modify your AI system beyond what is outlined in your documentation, you will need another assessment. 

Regulators are particularly concerned about the performance and safety changes that could occur when AI models learn from clinical data, so it’s crucial to define parameters for what information the model can use to adapt.
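
In practice, "predefined" means the boundaries your model is allowed to move within are written down before any change happens. A minimal sketch (the parameters and limits below are hypothetical) might encode those boundaries as a checked configuration, so any retraining outside them is flagged as needing a new conformity assessment:

```python
# Hypothetical predefined change plan: retraining is only permitted inside these bounds.
CHANGE_PLAN = {
    "max_new_training_samples": 10_000,                      # per update cycle
    "allowed_data_sources": {"site_a_pacs", "site_b_pacs"},  # approved clinical sources
    "min_validation_sensitivity": 0.90,                      # re-verified after every update
}

def change_within_plan(new_samples, data_source, validation_sensitivity):
    """Return True if a proposed update stays inside the documented change plan."""
    return (
        new_samples <= CHANGE_PLAN["max_new_training_samples"]
        and data_source in CHANGE_PLAN["allowed_data_sources"]
        and validation_sensitivity >= CHANGE_PLAN["min_validation_sensitivity"]
    )

if not change_within_plan(new_samples=25_000, data_source="site_c_pacs",
                          validation_sensitivity=0.93):
    print("Update falls outside the predefined plan: a new conformity assessment is needed.")
```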

Implementation and Ongoing Developments

AI Act enforcement is a phased rollout, giving companies time to adopt procedures, create documentation, and adjust their quality management systems. The first step is coming up: a ban on any “unacceptable risk” classified devices will go into effect in September 2024. The timeline for AI Act effective dates is below.

AI Act Implementation Timeline

The Commission will be releasing more guidance on things like transparency disclosures, prohibitions, and provider requirements, with no set timeline for when it will arrive.

So, do you really need to get another compliance assessment? Yes, unless you’re eligible to opt out. We hope that the AI Act will be integrated into the MDR guidelines to make this process easier for all of us, but as you know, things like this tend to move slowly. If anything, a self-assessment would remove a lot of the redundancy and burden of going through another full conformity assessment.

Some experts, like Koen Cobbaert, who spoke at RAPS Euro Convergence, have voiced concerns that the AI Act is incompatible with the MDR guidelines. He argues that the two have similar structures but conflicting terminology. Another panelist pointed out that the legislation casts a wide net, applying to practices from production to ethics, which further complicates things and introduces issues entirely different from the MDR’s. This, unfortunately, means that we’ll just have to make it work for the time being.

So, What Are Your Next Steps?

The grace period before the regulations go into effect is intentional: this is your time to start preparing. There’s a lot you’ll have to create, change, track, and document before March 2027. Here are some actionable steps to start painlessly integrating AI Act requirements:

  1. Understand Your Risk Category and Don’t Get Banned
    First things first, figure out which risk category your AI system falls into: unacceptable risk, high-risk, or general purpose. Most medical devices with AI are likely high-risk, so let's focus on that. The ban on unacceptable-risk devices takes effect September 2024, so make sure you don’t fall into that category, or make any necessary changes before then.

  2. Conduct a Gap Assessment
    Start by reviewing all your existing documentation: your QMS documents, technical documentation, and PMS. Perform a gap assessment to identify what updates are needed to align with the AI Act's requirements. This will help you determine the areas that need improvement or additional documentation for the conformity assessment.
  3. Start Adjusting Your QMS, Technical Documentation, and PMS
    Based on the gap assessment, update your QMS and PMS to ensure compliance with the AI Act. This might involve setting up new procedures and strategies or refining existing ones. Key areas to focus on include:
    • Systems for data management, analysis, and storage
    • Change plans 
    • Incident reporting 
  4. Build in Human Oversight
    Design or modify your AI systems to allow for effective human oversight. This means:
    • Training your team to understand and monitor AI capabilities
    • Setting up processes to override or halt the AI if necessary
    • Verifying decisions based on AI outputs, especially for critical applications
  5. Prepare for Conformity Assessment
    Start to think about whether you'll use the Internal Control method or QMS/Technical Documentation Assessment. If you need more clarity, talk to your notified body. Check out the common specifications and harmonized standards for medical devices and in-vitro devices to see where your device falls and whether you can opt out of a third-party assessment. 
  6. Update Your Labels and CE Marking
    Make sure your high-risk AI-enabled devices have external labeling with manufacturer contact information and the CE mark. Register your device with the Commission to add it to the database.
  7. Stay Informed and Adapt
    The AI landscape is always changing, so keep an eye out for new guidelines and updates from the Commission. Regularly review and update your practices to stay compliant.

If you want to dive into the AI Act a bit more, this explorer tool breaks down each chapter, article, and annex without you having to scroll through a 458-page PDF document.

FormlyAI: Your Partner in Medical Device Certification

At FormlyAI, we understand the complexities of MDR medical device certification. Our service provides a streamlined, efficient path to CE marking and all the tools to ensure your medical device adheres to all necessary EU regulations. All for a fraction of the cost of using traditional consultants. This way you can build your compliance documentation and apply for CE marking in a matter of weeks, not years.  

Learn more about our software and accelerate your path to CE marking with Formly.

Frequently Asked Questions

Do I need to do another conformity assessment?

Unless you can do an internal evaluation and prove that your device conforms to the harmonized legislation for its type, you’ll need a third-party assessment. Medical devices and in-vitro diagnostic devices should qualify for internal control and can opt out if they are CE marked.

Am I eligible to opt out of the third-party assessment?

You may be eligible to opt out if you can demonstrate conformity to the medical device or in vitro diagnostic device regulations (i.e., your device is CE marked in Europe under MDR 2017/745 or IVDR 2017/746).

When will medical devices be subject to the AI Act rules?
  • If your device does not abide by the ethical code for AI from Article 5, it will be banned in September 2024.
  • If your device is to be used for biometrics, law enforcement, education, or infrastructure (see Annex III), the AI Act rules will apply as of March 2026.
  • If your device is a high-risk medical device that abides by the ethical code and will not be used for any of the purposes listed in Annex III, AI Act rules will apply in March 2027. 

What is the difference between the MDR QMS requirements vs AI Act QMS requirements?

You’ll need to integrate AI-specific risk management and monitoring protocols into your existing QMS. This includes managing continuous-learning algorithms, change control, and ensuring AI systems remain "static" unless predefined changes are validated and documented.

What is the difference between the MDR PMS requirements vs AI Act PMS requirements?

PMS for AI systems will need to include real-time monitoring and reporting of AI-specific metrics such as algorithm performance, data drift, and unforeseen interactions with clinical data.

Are there any changes I need to make to my security controls for the AI Act?

You will need to implement advanced cybersecurity measures tailored for AI, such as robust encryption, access controls ensuring proper human oversight, and regular security audits to protect against evolving threats specific to AI applications.

Do I need to create more documentation for the AI Act?

Likely yes: your documentation must show how AI-specific risks are managed and mitigated. You’ll need detailed descriptions of AI models, data sources, validation procedures, and continuous learning protocols.
