Unlocking the Potential of AI in Healthcare: 5 Key Takeaways

Feature Article

The technology has the potential to be implemented across multiple aspects of the industry.

Timothy Bubb
Technical director
IMed Consultancy

The growth of AI across the world is explosive. A 2023 PwC report states that AI has the potential to contribute $15.7 trillion to the global economy by 2030.¹ Certainly, the healthcare landscape offers rich opportunity for this game-changing technology, but the sector is having to navigate what is still relatively new terrain with care. So, what’s the big picture? Where is the sector now in relation to AI, and what does the development path look like? Here are five key takeaways.

Vast and growing data sources fueling AI

Digital health technologies are transforming the healthcare landscape. The emergence of wearables and connected, smart medical devices is generating a huge volume of data, and it is this data that is increasingly feeding the development of artificial intelligence/machine learning (AI/ML)-powered diagnostic and interventional tools.

AI-powered algorithms can analyze vast troves of health data, whether presented as text, video, or imagery, saving hours of manual analysis and cross-checking and suggesting interpretations that would otherwise take human researchers years to complete. The applications are broad-reaching and numerous, from identifying disease biomarkers to predicting disease trajectories, tailoring interventions to individual patients, and much more. The ongoing challenge lies not so much in harnessing and interpreting this data but in defining and regulating the resulting applications to ensure safety and efficacy.

No limit for AI applications

The potential impact of AI and ML in medical device development and healthcare delivery is huge, spanning from diagnosis and treatment to patient monitoring and management.

AI/ML can rapidly analyze radiology images, histological data, posture, eye movement, speech, and a whole range of other types of input. This versatility opens myriad usage options, with exciting implications for the sector.

Some example areas of application for AI/ML within medical devices include:

  • Diagnostic imaging: the analysis of medical images such as X-rays, MRI scans, CT scans, and ultrasounds to aid in the diagnosis of various conditions.
  • Remote patient monitoring: AI-enabled devices facilitate the timely analysis of data on vital signs and other relevant metrics, enabling early detection or prediction of changes in a patient’s condition and allowing for more timely intervention where needed.
  • Personalized medicine: AI/ML algorithms analyze patient data, including genetic information, medical history, and lifestyle factors, to personalize treatment plans.
  • Robotic surgery: AI-powered surgical robots assist surgeons during minimally invasive procedures by enhancing precision, dexterity, and control.
  • Predictive analytics for healthcare management: AI and ML models analyze large volumes of healthcare data, including electronic health records, insurance claims data, and operational metrics, to identify patterns, trends, and risk factors.
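To make the remote-monitoring idea concrete: at its simplest, "early detection of changes in the patient's condition" means flagging readings that drift far from that patient's own recent baseline. The following is a purely illustrative sketch of that principle (a rolling z-score check on a single vital sign), not the method of any particular device or vendor:

```python
from collections import deque

def make_baseline_monitor(window=20, z_threshold=3.0):
    """Return a checker that flags readings far from the patient's
    recent baseline. Illustrative only; real devices use validated,
    multi-signal models."""
    readings = deque(maxlen=window)

    def check(value):
        alert = False
        if len(readings) >= 5:  # need a minimal baseline first
            mean = sum(readings) / len(readings)
            var = sum((x - mean) ** 2 for x in readings) / len(readings)
            std = var ** 0.5 or 1e-9  # guard against a zero-variance baseline
            alert = abs(value - mean) / std > z_threshold
        readings.append(value)
        return alert

    return check

# Hypothetical heart-rate stream (beats per minute):
check_hr = make_baseline_monitor()
for v in [72, 74, 71, 73, 75, 72, 74, 73]:
    assert not check_hr(v)   # normal variation: no alert
assert check_hr(120)          # sudden spike well outside baseline: alert
```

Production systems replace the z-score with trained models over many correlated signals, but the structure (baseline, deviation measure, threshold, alert) is the same.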

And so, the list goes on. In fact, sector analysts are only beginning to understand the full scope of possibilities, pointing to further exploration around life-changing ideas related to the detection of genetic or environmental factors that impact health, and the signaling of warnings many years before diseases begin to manifest.

Defining digital medical devices is crucial

There are intricacies of classification within the digital health landscape, and manufacturers must be alert to these subtleties if development is to proceed safely and efficiently. The level of regulatory scrutiny that an app or software tool is subject to largely depends on whether it can be defined as a medical device. However, making that determination is becoming increasingly complex.

Today, a spectrum of technologies falls within the broad ‘digital health’ bracket, from non-medical devices designed to monitor well-being through to medical devices tailored for specific medical purposes. Tools that are not medical devices include, for example, the increasingly prevalent mindfulness apps, sleep-tracking devices, and the like.

The challenge for the industry’s regulators lies in evaluating and approving medical devices that do not conform to traditional paradigms and have no physical presence in the traditional sense, often using data from external providers obtained via consumer electronics.

A definition for ‘software as a medical device’ exists, devised by the International Medical Device Regulators Forum, but it remains open to interpretation. The challenge of definition has since been taken on by the UK medical devices regulator, the MHRA, which confirmed in its June 2023 roadmap that it is developing guidance to help clearly identify what qualifies as Software as a Medical Device (SaMD). Only with clear guidance can manufacturers proceed confidently.

Evolving regulation must be monitored and navigated

As the development of software-based medical devices accelerates, manufacturers are grappling with varying definitions and regulatory requirements across regions.

This year, a milestone was reached with the adoption of the European Union (EU) Artificial Intelligence Act (AIA) by the European Parliament, a journey some three years in the making.

The Act uses a risk-based approach to classify AI systems into four categories: minimal risk, limited risk, high risk, and unacceptable risk. Medical devices, including IVDs, that incorporate AI/ML-enabled device functions will likely be classified as high-risk AI systems. The requirements for high-risk AI systems include:

  • Data governance
  • Quality Management System (QMS)
  • Technical documentation
  • Record keeping
  • Transparency
  • Human oversight
  • Accuracy, robustness, and cybersecurity
  • Conformity assessment with Notified Body involvement

It seems inevitable that some of these requirements will overlap with those required under more established legislation – namely the EU Medical Devices Regulation (MDR) and In Vitro Diagnostic Medical Devices Regulation (IVDR). The onus falls on manufacturers to assess the detail and to understand where crossovers and gaps may fall in order to achieve compliance.

In the USA, a recent Executive Order is expected to impact how the FDA regulates the use of AI/ML in medical devices. The FDA currently applies its “benefit-risk” framework, requiring devices to conform to some basic principles: the demonstration of sensitivity and specificity for devices used for diagnostic purposes; the validation of intended purpose and stakeholder requirements against specifications; and development that ensures repeatability, reliability, and performance. The agency is also piloting processes that would represent a significant milestone in regulatory innovation, with the intention of enabling certain pre-authorized software changes to be agreed between the manufacturer and the FDA and then deployed without further regulatory assessment. This would be a step ahead of the traditional methods used in EU medical device assessments, where such flexibility and adaptive development is difficult.
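The sensitivity and specificity principle mentioned above is a standard statistical one: both metrics fall out of a confusion-matrix count of a diagnostic's predictions against confirmed diagnoses. A generic sketch (not FDA-specified code, and the data below is hypothetical):

```python
def sensitivity_specificity(predictions, labels):
    """Sensitivity (true positive rate) and specificity (true negative rate)
    from binary predictions vs. ground-truth labels (1 = disease present)."""
    tp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    tn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 0)
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical diagnostic output vs. confirmed diagnoses:
preds = [1, 1, 0, 1, 0, 0, 1, 0]
truth = [1, 1, 1, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(preds, truth)
# sensitivity = 3/4 = 0.75 (one case missed),
# specificity = 3/4 = 0.75 (one false alarm)
```

Regulators ask for these figures because they separate the two failure modes that matter clinically: missed disease (low sensitivity) and false alarms (low specificity).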

In the UK, the MHRA intends to develop a system based more on guidance than regulation, allowing for more frequent updates. Its 10 guiding principles, developed alongside the FDA and Health Canada, inform Good Machine Learning Practice (GMLP) for the development of medical devices that use AI/ML and are safe, effective, and high-quality.

In July 2024, the agency intends to pilot a new regulatory sandbox, designed to provide a safe space for developers of AI tools in healthcare to trial products using live patient data in view of regulators. The pilot will focus particularly on AI applications in healthcare where the risks are greater and less well understood, such as autonomous decision-making.

Collaboration with experts will ease the development path

Clearly, regulation and definitions relating to AI in healthcare will continue to evolve just as the technology itself continues to develop and grow in sophistication.

The emerging raft of regulation is not intended to stifle innovation and growth in this dynamic field. Rather, it exists to ensure failsafe compliance, security and excellence in a marketplace that cannot allow standards to drop.

Navigating this regulatory landscape will require constant vigilance and expertise–and manufacturers are already seeking to partner with professionals who possess forensic knowledge of regional regulations and who demonstrate a clear understanding of objectives, potential and risks.

There really is no limit to the potential of the technology. Early applications are emerging across every field of healthcare, and it is no exaggeration to state that AI has the power to revolutionize healthcare delivery.

It is in the best interests of the industry–and for society as a whole–for manufacturers to be afforded a path to development that is free of confusion and unnecessary regulatory delay. As the industry and regulators work hard to fine-tune operating frameworks for the AI age, third-party experts are already helping to find the path to development success.

Timothy Bubb is the technical director at IMed Consultancy

Sources

  1. PwC’s Global Artificial Intelligence Study: Exploiting the AI Revolution. PwC. https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html
