Since 2016, the 21st Century Cures Act has required the Food and Drug Administration (FDA) to consider the utility of real-world evidence (RWE) studies submitted by sponsors in support of efficacy claims for medical products. However, even as the FDA begins to accept more submissions that include RWE, many manufacturers still face uncertainty and roadblocks that keep them from using it to full advantage.
Much of this uncertainty stems from how the FDA has historically accepted real-world data (RWD) for drug safety questions. Since before Bitcoin, the iPhone, and hybrid cars, pharmacoepidemiologists have been using RWD from large databases of electronic health records (EHRs) and other observational data sources to describe associations between medical products and potential health risks (or benefits) to support regulatory filings.
As fine wines tend to mature with time, so have observational drug safety studies. The data collected and the formal mechanisms for submitting RWD for drug efficacy purposes, however, have not yet had that benefit; they must still go through a similar process of maturation.
Though the Agency has released several guidances on the use and submission of RWE studies, significant confusion remains about how to implement these studies and what they can accomplish. Drawing on my experience as a regulator and as a biopharma consultant, this article details the most common roadblocks pharma companies encounter when incorporating RWD into regulatory submissions and looks at how they can be addressed.
Say you want to find the astronomical body with the most water in the solar system. Obviously, it’s Earth, right? Well, that depends. If you’re looking for the body with the most liquid water on its surface, then Earth is your answer. However, if you’re looking for the total water (liquid and ice), then Earth is not even close. The answer is probably Ganymede, one of Jupiter’s moons, which has many times more total water than Earth.
Similarly, addressing the appropriate scientific and regulatory questions in an RWE study is multifactorial and highly nuanced. For example, a common regulatory question might be, “Is the risk/benefit profile of Drug X favorable for patients?” A Phase 2 clinical trial may be designed to answer the question, “What proportion of patients respond to treatment, how long does the response last, and what are the major toxicities and adverse events?” This scientific question informs the regulatory one, but there are still gaps, leaving an opportunity for further information generated via RWE. For example, “How does the risk/benefit profile of Drug X compare to existing therapies or the standard of care?” Answering this question requires deep and accurate data capable of identifying the appropriate patient population and their experiences.
However, sponsors often struggle to ensure that the RWD they can collect will address the appropriate regulatory questions. Instead, they tend to focus on the questions they can answer, which may miss the mark. For example, if their RWD has missing data, they may choose to omit clinically important characteristics (such as cancer stage, organ function, comorbidities, or life expectancy) from their matching scheme, or simply not to match at all. In doing so, they are not addressing the correct scientific or regulatory question, because their RWD cohort will not be comparable to the clinical trial cohort (or, more precisely, the degree to which the cohorts are comparable will be unknown).
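One way to make that uncertainty explicit is to quantify covariate balance between the two cohorts rather than leave it unstated. The sketch below, in Python, illustrates one common diagnostic, the standardized mean difference, for a few hypothetical continuous covariates; the data frames and column names are placeholders, and in a real analysis the missing data would need to be handled and reported rather than simply dropped as shown here.

```python
import numpy as np
import pandas as pd

def standardized_mean_difference(trial: pd.Series, rwd: pd.Series) -> float:
    # Pooled-SD standardized difference; absolute values above roughly 0.1
    # are commonly read as meaningful imbalance between cohorts.
    pooled_sd = np.sqrt((trial.std() ** 2 + rwd.std() ** 2) / 2)
    return (trial.mean() - rwd.mean()) / pooled_sd

# Hypothetical usage: trial_df and rwd_df are assumed pandas DataFrames
# holding the clinical trial cohort and the real-world cohort.
for covariate in ["age", "comorbidity_count", "baseline_creatinine"]:
    smd = standardized_mean_difference(
        trial_df[covariate].dropna(), rwd_df[covariate].dropna()
    )
    print(f"{covariate}: SMD = {smd:.2f}")
```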
The broader issue is addressed by finding RWD capable of answering your scientific and regulatory questions, rather than settling for the RWD that is available or molding your study around it. Often there are better ways to collect the data you are seeking, and if the data you have cannot answer the regulatory question, a different approach is likely necessary.
The characteristics of a fit-for-purpose data source depend on the scientific and regulatory context. Factors such as the drug’s efficacy and toxicity, the natural history of the indicated disease, common comorbidities, patient prognosis, route of administration, whether the drug is a new molecular entity or an established product, and whether the sponsor is seeking accelerated or full approval or fulfilling a post-marketing requirement all influence the type and quality of RWE needed to support a regulatory decision.
Fit-for-purpose data also depend on their intended function, be that patient preference or outcome information to supplement clinical trial data, comparative efficacy via an external control arm, long-term safety and efficacy data, or some other use. Each application is unique and carries a different threshold for evidence generation.
Whether a source of RWD is fit-for-purpose often comes down to two qualities: size and depth. Historically, RWD used to generate evidence on safety issues required big data, such as that found in EHRs, to identify rare adverse events. RWE studies to support drug efficacy, by contrast, often require very deep data, such as that found via medical chart review (itself often considered a gold standard for observational safety studies). A tension arises, however, because many indications, especially for oncology drugs, are very rare, requiring data that are both big and deep. Further, RWE for efficacy may need deep data that allow researchers to explore the “why” behind patient and provider decisions: the full patient journey, including access to healthcare services, treatment patterns, adherence, and outcomes, as well as the decisions and behaviors that drive those factors.
Especially when comparing two or more distinct sources of data, identifying the appropriate patient population is essential. While this issue pertains to all study types, it is especially relevant when RWE is generated from an external source for comparison to clinical trial data.
Compared to clinical trial participants, real-world patients tend to be older, sicker, less adherent, and more diverse, and to have less access to quality healthcare.1 Further, these characteristics can be difficult, or impossible, to measure in sources of RWD. Some indications, such as very rare diseases, conditions that are not typically screened for, or relapsed and refractory status, can be very difficult to identify in RWD. Also, inclusion and exclusion criteria used in clinical trials often cannot be replicated using RWD. As such, real-world patient cohorts will in many cases differ from clinical trial patients and thus experience very different disease progression and adverse event rates. If a real-world patient cohort is characteristically dissimilar to the corresponding clinical trial patients, direct comparisons are not appropriate.
While there are ways to work around these roadblocks, the potential solutions may not be straightforward. One option is to design less stringent inclusion and exclusion criteria for the clinical trial, so that the trial population better represents the real-world population, and to build those criteria from elements that are easier to measure in sources of RWD. Another is to collect prospective RWD, which makes it easier to apply the same trial entry criteria when selecting the real-world external cohort and improves standardization of many study elements, such as follow-up intervals, treatment patterns, and the type and method of data collection.
With proper methodological and statistical considerations, direct patient matching and comparisons may be appropriate.
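To make the idea concrete, the sketch below shows one such approach, a simple 1:1 nearest-neighbor propensity score match between a trial cohort and a real-world cohort, assuming hypothetical column names and cohort coding. It is only an illustration: a regulatory-grade analysis would be prespecified in a protocol and statistical analysis plan and accompanied by caliper choices, balance diagnostics, and sensitivity analyses.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_cohorts(df: pd.DataFrame, covariates: list[str]) -> pd.DataFrame:
    """1:1 nearest-neighbor propensity score match (with replacement)."""
    # Hypothetical coding: in_trial == 1 for trial patients, 0 for real-world patients.
    in_trial = df["in_trial"] == 1

    # Estimate each patient's probability of belonging to the trial cohort.
    model = LogisticRegression(max_iter=1000).fit(df[covariates], in_trial)
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])

    trial, rwd = df[in_trial], df[~in_trial]

    # For each trial patient, pick the real-world patient with the closest score.
    nn = NearestNeighbors(n_neighbors=1).fit(rwd[["ps"]])
    _, idx = nn.kneighbors(trial[["ps"]])
    matched_rwd = rwd.iloc[idx.ravel()]

    # Balance diagnostics (e.g., standardized mean differences) should follow.
    return pd.concat([trial, matched_rwd])
```

Matching with replacement, as above, is only one of several possible schemes; the appropriate choice, like the covariate set itself, should be justified to the Agency in advance.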
One of the most important characteristics of RWE studies in regulatory submissions is the presence of sensitivity analyses, which examine how results change when key variables, definitions, or assumptions are varied. When a clinical trial is submitted to the FDA, the agency expects an opportunity to review and comment on protocols, statistical analysis plans, and all study reports. As part of this, the FDA will expect comprehensive data exploration and analyses, including sensitivity and subgroup analyses.
The same is true of RWE studies.
RWE studies, especially those involving external controls, are highly susceptible to bias. Successful studies are designed to account for bias related to patient selection, misclassification, missing data, survivorship, competing risks, adherence, loss to follow-up, immortal time, and more. While eliminating bias from RWE is nearly impossible, the agency expects sponsors to make every effort to minimize bias through study design, to control for it statistically, and to describe any remaining bias and explore the robustness of the study results.
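As one illustration of what a robustness check can look like, the sketch below computes the E-value of VanderWeele and Ding, which expresses how strong an unmeasured confounder would have to be, on the risk-ratio scale, to fully explain away an observed association. The effect estimate used here is made up purely for illustration.

```python
import math

def e_value(rr: float) -> float:
    """E-value for a risk ratio (VanderWeele & Ding, 2017)."""
    rr = 1 / rr if rr < 1 else rr          # use the reciprocal for protective effects
    return rr + math.sqrt(rr * (rr - 1))

# Hypothetical effect estimate from an external-control comparison.
observed_rr = 1.8
print(f"E-value: {e_value(observed_rr):.2f}")  # ~3.0
```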
A sponsor planning to submit RWE in support of a regulatory filing should contact the FDA early and often. For a given indication, the Agency may not consider RWE as supportive evidence at all, or may accept only certain types, such as patient-reported outcomes or prospective external control data. Also, as with any study, an RWE study may simply not work out due to a lack of identifiable patients or an inability to assess the appropriate exposures and outcomes. Sometimes RWE studies are not necessary, or may be insufficient without additional sources of supportive evidence.
Executing a regulatory-grade, fit-for-purpose RWE study is strongly dependent on context and often very difficult. A well-executed study, however, can provide important supportive or even primary evidence, with a major influence on whether the FDA decides to approve a product or expand its indication.
Note
Scott Swain, PhD, MPH, is Director of Regulatory Sciences and Real-World Evidence at Cardinal Health.