Balancing Regulation and Risk of AI and Machine Learning Software in Medical Devices

Infection Control Today, July/August 2025 (Vol. 29, No. 4)

Artificial intelligence may be revolutionizing sepsis detection and diagnosis, but can we trust it without strong regulatory guardrails? As AI/ML-enabled medical devices rapidly evolve, IPs must stay informed and involved to ensure safety keeps pace with innovation. Let’s take a closer look at how to advocate for safe implementation of this innovative technology.

(Adobe Stock 1387477192 by Fidel)

Artificial intelligence (AI) and machine learning (ML) software have experienced significant growth across various industries, including health care. These developing technologies already have the potential to detect complications caused by infection quickly and accurately. However, integrating these new devices into an established regulatory process has proven challenging. Additionally, there are concerns about trusting devices that produce unpredictable outputs. Due to their distinct characteristics, specific medical devices utilizing AI/ML technology must undergo an adjusted regulatory process to be marketed.

Advantages of AI/ML Software

AI-based software within the medical device industry is on the rise, with more than 1,000 AI-enabled devices authorized for marketing in the US by the FDA as of March 25, 2025.1 Included on the FDA’s AI/ML-Enabled Medical Devices List1 is the Sepsis ImmunoScore. Authorized in April 2024, it was the first FDA-authorized AI software designed to aid in the prediction and diagnosis of sepsis.2,3 The Sepsis ImmunoScore uses AI/ML software to pull inputs that include demographic data, vital signs, and blood culture results from the electronic medical record and outputs the patient’s risk of having or developing sepsis within 24 hours.3

A prospective study published in November 2024 explained that the AI algorithm for the ImmunoScore was developed using a derivation cohort of patients and assessed for accuracy in both internal and external cohorts.3 The study concluded that the ImmunoScore had a “high overall diagnostic accuracy for predicting sepsis” and was “highly predictive” of secondary outcomes, including “in-hospital mortality, length of stay, ICU [intensive care unit] admission within 24 hours, use of mechanical ventilation within 24 hours, and use of vasopressors within 24 hours.” The ImmunoScore is just 1 example of AI’s potential to assist health care professionals through informed decision-making. Although the capabilities of these devices seem promising, concerns regarding their trustworthiness must be addressed by regulatory bodies and standardization organizations.

Artificial Intelligence/Machine Learning Software

Understanding the differences among various types of AI software is essential for recognizing the need for heightened regulation of devices that utilize these systems. In its second white paper, the Association for the Advancement of Medical Instrumentation (AAMI)/British Standards Institution (BSI) Initiative on AI provides an overview of AI systems used in the medical device industry.4 These include rules-based AI systems, which “mimic human behavior, making decisions by applying static rules to arrive at predictable decisions,”4 and data-driven machine learning systems, which can learn from inputs and make judgments.4 Data-driven ML systems can be further categorized into locked ML models and continuous learning models.4 In locked ML models, the algorithms and outputs do not change automatically and require external approval to change,4 whereas continuous learning models can automatically adjust their algorithms and update their outputs.4 From a regulatory perspective, continuous learning models are the most challenging because their algorithms may change automatically, making outcomes difficult to predict.
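
To make the locked-vs-continuous distinction concrete, here is a minimal sketch (illustrative Python only; the class names and the toy running-mean update rule are assumptions for this example, not any marketed device’s algorithm):

```python
# Illustrative sketch of locked vs continuous learning models.
# All names and the simple running-mean "algorithm" are hypothetical.

class LockedModel:
    """Parameters are frozen at validation; outputs never drift."""
    def __init__(self, threshold):
        self.threshold = threshold  # fixed until an approved update

    def predict(self, score):
        return score >= self.threshold


class ContinuousLearningModel:
    """Parameters adapt automatically as new data arrive."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.n = 0

    def predict(self, score):
        return score >= self.threshold

    def update(self, score):
        # A running mean of observed scores shifts the decision
        # boundary, so the same input can yield a different output
        # after deployment.
        self.n += 1
        self.threshold += (score - self.threshold) / self.n


locked = LockedModel(threshold=0.5)
adaptive = ContinuousLearningModel(threshold=0.5)

for s in (0.9, 0.8, 0.85):   # new field data after validation
    adaptive.update(s)

print(locked.predict(0.6))    # True: boundary unchanged
print(adaptive.predict(0.6))  # False: boundary has drifted to 0.85
```

The point is regulatory rather than mathematical: the locked model returns the same answer for the same input indefinitely, while the continuous learner’s decision boundary drifts as field data accumulate, which is exactly what makes its postmarket behavior hard to validate up front.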

Concerns With AI/ML

A significant concern with AI in the medical device industry, particularly with continuous learning models, is entrusting unpredictable systems with tasks previously performed by humans; if these systems fail to perform as expected, patient safety is at risk. The AAMI/BSI Initiative recognizes this unique situation by describing how, in the medical device industry, trust is generally gained not only through an overview of the device’s mechanisms and technologies but also by validating that the device produces “reliable and predictable outputs.”4 This is where concern arises for many: If AI/ML devices can change their outputs after validation, how can they be trusted to produce those reliable and predictable outputs?

Some considerations from AAMI and BSI addressing this concern include that the input data used to train algorithms should be “predefined, relevant, and appropriate” and that performance metrics should be implemented to allow for live monitoring against predicted outputs.4 Guidance from the FDA has also suggested that developers describe anticipated algorithm changes before marketing the device,5 reducing the potential of unknown and unpredictable outputs once in use.
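
The live-monitoring idea can be sketched as a simple check of observed performance against a prespecified acceptance floor. This is a hypothetical illustration; the metric, window size, and 0.80 limit are assumed for the example and are not drawn from AAMI/BSI or FDA guidance:

```python
# Hypothetical live performance monitor: track observed accuracy over a
# rolling window and flag when it falls below a predefined floor.
from collections import deque

class PerformanceMonitor:
    def __init__(self, lower_limit=0.80, window=100):
        self.lower_limit = lower_limit       # predefined acceptance floor
        self.results = deque(maxlen=window)  # rolling window of outcomes

    def record(self, predicted, actual):
        self.results.append(predicted == actual)

    def accuracy(self):
        return sum(self.results) / len(self.results) if self.results else None

    def needs_review(self):
        acc = self.accuracy()
        return acc is not None and acc < self.lower_limit


monitor = PerformanceMonitor(lower_limit=0.80, window=100)
for predicted, actual in [(1, 1), (1, 0), (0, 0), (1, 1), (0, 1)]:
    monitor.record(predicted, actual)

print(monitor.accuracy())      # 0.6
print(monitor.needs_review())  # True: below the 0.80 floor
```

In practice, a flag like this would trigger human review rather than an automatic correction, keeping the validated performance envelope under external control.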

Regulation of AI/ML-based Medical Devices

Just as there are different types of AI technologies, from a regulatory perspective, there are also different types of software that can be used in (or as) medical devices. According to the FDA, the primary types of software within the medical device industry are Software as a Medical Device (SaMD), software used in medical devices, and software used in the manufacturing of a device.6 When AI/ML software is “intended to treat, diagnose, cure, mitigate, or prevent disease or other conditions,” it is considered a medical device and is referred to as SaMD by the FDA and the International Medical Device Regulators Forum.7

Furthermore, medical devices (including SaMD) are categorized into 3 classes by the FDA. These classifications give insight into the risk the device poses to the patient: A Class I device poses the lowest risk, whereas a Class III device poses the greatest. The FDA’s premarket submission requirements depend on the device’s classification. For Class I and II devices, a 510(k) submission is required to market the device (unless the device is exempt).8 However, because Class III devices “support or sustain human life, are of substantial importance in preventing impairment of human health, or present a potential, unreasonable risk of illness or injury,”9 they require a different type of premarket submission: a premarket approval (PMA) application, in which the manufacturer provides scientific evidence that the device is safe and effective for its intended use.9

The unique capability of AI/ML SaMD to adapt and change its algorithms after the PMA process is complete presents new challenges for regulating these devices. In 2019, the FDA released a discussion paper requesting feedback on a proposed framework for modifications to AI/ML-based SaMD.7 The paper notes that the FDA expects algorithm changes to be anticipated and described in the premarket submission.5 Even so, some algorithm changes would still require a new premarket review.5

Based on the feedback received in response to the discussion paper, the FDA released an action plan for AI/ML SaMD.5,10 The action plan discusses predetermined change control plans (PCCPs), specifically, how anticipated and prespecified changes may be captured in a PCCP, which may avoid the need for a new premarket submission every time an algorithm changes.5 Changes that would pose heightened risk or differ from those described in a PCCP may still require premarket review, as previously noted. The action plan also mentions that “FDA liaisons participate in the standardization efforts of [the AAMI] AI Committee to address the risk management of AI-driven medical devices,”11 and that the FDA is committed to continuing to work with organizations such as AAMI and BSI to address AI/ML developments.10
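
The PCCP logic amounts to a scope check: changes prespecified in the plan may proceed without a new submission, while anything outside that scope triggers premarket review. A toy sketch (the change categories here are invented purely for illustration):

```python
# Hypothetical PCCP scope check; the change categories are illustrative
# and do not come from any actual FDA submission.

PCCP_ALLOWED = {
    "retrain_on_new_site_data",    # prespecified in the PCCP
    "threshold_tuning_in_range",   # prespecified in the PCCP
}

def requires_new_submission(change):
    """Changes outside the prespecified PCCP scope need premarket review."""
    return change not in PCCP_ALLOWED

print(requires_new_submission("threshold_tuning_in_range"))  # False
print(requires_new_submission("add_new_input_variable"))     # True
```

The efficiency gain comes from moving the review up front: the FDA evaluates the plan once, rather than evaluating each in-scope change as it occurs.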

In 2023, the FDA published a set of guiding principles for PCCPs.5,12 According to AAMI Array author Eric Henry, as of October 2024, the only AI/ML-specific guidance published by the FDA was the draft guidance for PCCPs.11 In December 2024, the FDA published its final guidance for marketing submission recommendations for PCCPs for AI-enabled devices.13 In January 2025, the FDA published an additional draft guidance intended to “[provide] lifecycle management and marketing submission recommendations consistent with a TPLC [Total Product Life Cycle] approach for AI-enabled devices,”14 which also references the final guidance for submission recommendations for PCCPs. The FDA requested comments for this draft guidance to be submitted by April 7, 2025.15

Conclusion

AI and ML technology have the potential to become important assets in future medical devices. As the use of this technology grows, health care professionals must continue to advocate for its appropriate regulation. An adjusted regulatory process enables industry professionals to embrace evolving technology while maintaining confidence that these devices are safe and reliable for use.

References:

  1. Artificial intelligence and machine learning (AI/ML)-enabled medical devices. FDA. March 25, 2025. Accessed May 22, 2025. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
  2. Device classification under Section 513(f)(2) (De Novo). FDA. Updated June 2, 2025. Accessed May 22, 2025. https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/denovo.cfm?id=DEN230036
  3. Bhargava A, López-Espina C, Schmalz L, et al. FDA-authorized AI/ML tool for sepsis prediction: development and validation. NEJM AI. 2024;1(12). doi:10.1056/aioa2400867
  4. Machine Learning AI in Medical Devices: Adapting Regulatory Frameworks and Standards to Ensure Safety and Performance. AAMI; 2020. doi:10.2345/9781570207914
  5. Reddy S. Global harmonization of artificial intelligence-enabled software as a medical device regulation: addressing challenges and unifying standards. Mayo Clin Proc: Digit Health. 2024;3(1):100191. doi:10.1016/j.mcpdig.2024.100191
  6. Software as a Medical Device (SAMD). FDA. December 4, 2018. Accessed May 22, 2025. https://www.fda.gov/medical-devices/digital-health-center-excellence/software-medical-device-samd
  7. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD). FDA; 2019. https://www.fda.gov/media/122535/download?attachment
  8. Classify your medical device. FDA. February 7, 2020. Accessed May 22, 2025. https://www.fda.gov/medical-devices/overview-device-regulation/classify-your-medical-device
  9. Premarket approval (PMA). FDA. May 16, 2019. Accessed May 22, 2025. https://www.fda.gov/medical-devices/premarket-submissions-selecting-and-preparing-correct-submission/premarket-approval-pma
  10. Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. FDA; 2021. Accessed May 22, 2025. https://www.fda.gov/media/145022/download?attachment
  11. Henry E. AI/ML in medical devices: US & EU regulatory perspectives. AAMI. October 23, 2024. Accessed May 22, 2025. https://array.aami.org/content/news/ai-ml-medical-devices-us-eu-regulatory-perspectives
  12. Predetermined change control plans for machine learning-enabled medical devices: guiding principles. FDA. December 3, 2024. Accessed May 22, 2025. https://www.fda.gov/medical-devices/software-medical-device-samd/predetermined-change-control-plans-machine-learning-enabled-medical-devices-guiding-principles
  13. Marketing submission recommendations for a predetermined change control plan for artificial intelligence-enabled device software functions. FDA. December 3, 2024. Accessed May 22, 2025. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/marketing-submission-recommendations-predetermined-change-control-plan-artificial-intelligence
  14. Artificial intelligence-enabled device software functions: lifecycle management and marketing submission recommendations. FDA. January 7, 2025. Accessed May 22, 2025. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/artificial-intelligence-enabled-device-software-functions-lifecycle-management-and-marketing
  15. Artificial intelligence-enabled device software functions: lifecycle management and marketing submission recommendations; draft guidance for industry and food and drug administration staff; availability. Federal Register. January 7, 2025. Accessed May 22, 2025. https://www.federalregister.gov/documents/2025/01/07/2024-31543/artificial-intelligence-enabled-device-software-functions-lifecycle-management-and-marketing
