The careful use of real-world evidence can prevent pharma companies from launching randomized trials on the basis of flawed observational findings.
In recent years, the healthcare landscape has seen the rapid generation and accessibility of real-world data (RWD), spurred on by significant advancements in technology.
As more RWD becomes available, regulators and researchers are increasingly leveraging observational studies to analyze RWD and produce real-world evidence (RWE) to identify new indications for approved products. These observational studies can provide critical insights into the impact of medications in real-world settings and clinical practice.
However, with all these new data, researchers must first ensure they are capturing and analyzing them correctly. Although many observational studies have reported impressive findings on product effectiveness, many are affected by significant, often overlooked time-related biases that ultimately render their conclusions invalid. These flawed studies can then serve as the basis for unnecessary randomized controlled trials (RCTs) conducted to test the hypotheses they generated, wasting time and resources.
To design and execute truly effective RWE studies that accurately support label expansion, researchers should engage in critical review of observational studies and look at new-user designs that eliminate time-related bias.
When crafting an effective RWE study, data veracity is critical. The number of databases and database studies has exploded over the past two decades. Unfortunately, this rapid growth has also produced an explosion of flawed data and study designs, yielding findings that are either incorrect or biologically implausible.
For example, one study1 published last year detailed a Canadian-led international randomized trial that evaluated metformin as a treatment for breast cancer, following 3,600 women for five years. Metformin is used to manage blood sugar levels in patients with type 2 diabetes, which raises the question of what sparked the idea that it could treat cancer. Another Canadian randomized trial2 looked at statins, drugs that help manage cholesterol, for the prevention of exacerbations and mortality in chronic obstructive pulmonary disease.
Both trials originated the same way: from observational studies suggesting these drugs would be effective against diseases well outside their original indications. In the case of the metformin trial specifically, many retrospective pharmacoepidemiologic studies had suggested that patients with diabetes treated with metformin have reduced cancer risk, improved cancer prognosis, and improved survival. Ultimately, however, the trials these studies prompted showed no benefit for the new indications.
The fundamental flaw in these studies was that they did not account for several time-related biases, chiefly immortal time bias. This refers to a period of follow-up during which the outcome of interest (e.g., death, stroke, delirium) cannot possibly occur for one of the cohorts. In pharmacoepidemiology studies specifically, immortal time typically arises when the determination of an individual’s treatment status involves a delay or waiting period during which follow-up time is accrued; for example, a patient waiting for a prescription to be dispensed after discharge from the hospital, with the discharge date marking the start of follow-up.
This period is considered immortal because individuals who end up in the treated or exposed group must survive (be alive and event-free) until the treatment definition is fulfilled. If they have an event before beginning treatment, they are necessarily considered to be in the untreated or unexposed group.
Bias arises when this period of “immortality” is either misclassified with respect to treatment status or excluded from the analysis. This is particularly problematic because it tilts the results in favor of the treatment under study by conferring a spurious survival advantage on the treated group.
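The mechanics of this bias can be shown with a small simulation (a hedged sketch: the cohort size, survival distribution, and dispensing delay below are invented for illustration). Every simulated patient has identical survival, so the “drug” has no effect at all, yet classifying as treated only those who survive long enough to fill a prescription makes the treated group appear to live far longer:

```python
import random

random.seed(0)

N = 100_000
survival_by_group = {"treated": [], "untreated": []}

for _ in range(N):
    # Identical survival process for everyone: the drug does nothing.
    survival = random.expovariate(1 / 365)   # days from discharge to death
    rx_delay = random.uniform(0, 90)         # days until prescription is dispensed
    # Misclassification: a patient counts as "treated" only if they
    # survive long enough for the prescription to be filled -- the
    # immortal time before dispensing is credited to the treated group.
    group = "treated" if survival > rx_delay else "untreated"
    survival_by_group[group].append(survival)

mean_treated = sum(survival_by_group["treated"]) / len(survival_by_group["treated"])
mean_untreated = sum(survival_by_group["untreated"]) / len(survival_by_group["untreated"])
# mean_treated greatly exceeds mean_untreated despite zero true effect.
```

Because anyone who dies before the prescription is dispensed is automatically untreated, the comparison is rigged before any drug effect could enter the picture.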
Another time-related bias often present in these observational studies is immeasurable time bias, which exaggerates drug benefits through exposure misclassification arising from the inability to measure in-hospital medications in typical claims databases. These biases can skew results and lead researchers to deem treatments “highly effective” when that conclusion rests on inaccurate findings.
To craft effective RWE studies in the future, researchers must steer away from the outdated ways of conducting observational studies and instead begin utilizing emulated randomized trial approaches using real-world observational data. For example, the 2015 treatment guidelines for idiopathic pulmonary fibrosis (IPF) recommended the use of proton pump inhibitors (PPIs) based on observational studies impacted by immortal time bias.
A team at McGill recently conducted an observational study of this question using a prevalent new-user design that imitates a randomized controlled trial. The team used data from the Clinical Practice Research Datalink (CPRD) to identify a cohort of patients with IPF who initiated PPIs and were matched to IPF patients who did not receive a PPI at a similarly timed (relative to IPF diagnosis) visit.
This new-user design eliminated immortal time bias because patients who start a PPI are matched to patients who have not started one at the same time point, using time-conditional propensity scores to facilitate the matching. The team found no difference between PPI users and non-users in all-cause mortality over up to five years of follow-up.
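The matching logic of such a design can be sketched as follows (a simplified illustration with invented data: the McGill study matched on time-conditional propensity scores, whereas this sketch draws a comparator at random from the risk set). The key point is that each initiator and their matched comparator begin follow-up at the same time point, so neither cohort accrues immortal person-time:

```python
import random

random.seed(1)

# Hypothetical cohort: days from diagnosis to PPI start (None = never) and to death.
patients = []
for pid in range(1000):
    start = random.uniform(0, 365) if random.random() < 0.3 else None
    death = random.expovariate(1 / 730)
    patients.append({"id": pid, "ppi_start": start, "death": death})

initiators = sorted((p for p in patients if p["ppi_start"] is not None),
                    key=lambda p: p["ppi_start"])

matched_pairs = []
used = set()
for p in initiators:
    if p["id"] in used:
        continue
    t0 = p["ppi_start"]
    if p["death"] <= t0:
        continue  # initiator must be alive at treatment start
    # Risk set at t0: alive, unmatched, and not (yet) a PPI user.
    candidates = [q for q in patients
                  if q["id"] not in used and q["id"] != p["id"]
                  and q["death"] > t0
                  and (q["ppi_start"] is None or q["ppi_start"] > t0)]
    if candidates:
        m = random.choice(candidates)
        used.update({p["id"], m["id"]})
        # Both follow-up clocks start at t0: no immortal time on either side.
        matched_pairs.append((p["id"], m["id"], t0))
```

Because both members of each pair enter follow-up at the same time t0, surviving until treatment is no longer a prerequisite for landing in the treated cohort, which is precisely what the naive designs got wrong.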
This is a case in which the team demonstrated that an actual RCT was not needed, because the foundation of the hypothesis was flawed. As a result of this study, the 2022 treatment guidelines no longer recommend the use of PPIs for IPF.
We’re already seeing the potential impact of the new-user design approach, with the FDA and the National Institutes of Health investing in projects to help uncover its full potential. One initiative is working to build an empirical evidence base for RWD through large-scale replication of RCTs.
A recent paper3 suggests that, although still in the learning phase and not without challenges, RWE studies can reach similar conclusions as RCTs when design and measurements are closely emulated. With even more evidence and research, the FDA, other regulators, and the pharma industry at large will come to see the full value of these real-world evidence studies.
Expanding the indications of older and inexpensive treatments is incredibly valuable, and a necessary effort to improve patients’ lives and health outcomes; however, to truly be effective, it must be done right. Before moving forward with RCTs, researchers should conduct critical reviews of observational studies to ensure that time-related and other biases are minimized.
Observational studies that emulate randomized trials, however, can provide more accurate estimates of real-world effects for potential new indications. With this approach, researchers and regulators can avoid bias and spare the cost and effort of conducting futile randomized trials.
About the Authors
Dr. Samy Suissa, McGill University and Founder of the Canadian Network for Observational Drug Effect Studies (CNODES) and Dr. Meg Richards, Executive Director of Scientific Engagement, Panalgo.
References
1. Goodwin, Pamela J.; Chen, Bingshu; Gelmon, Karen (May 24, 2022). JAMA Network. Effect of Metformin vs Placebo on Invasive Disease-Free Survival in Patients With Breast Cancer. https://jamanetwork.com/journals/jama/fullarticle/2792615
2. (June 5, 2014). The New England Journal of Medicine. Simvastatin for the Prevention of Exacerbations in Moderate-to-Severe COPD. https://www.nejm.org/doi/full/10.1056/nejmoa1403086
3. Wang, Shirley; Schneeweiss, Sebastian (April 25, 2023). JAMA Network. Emulation of Randomized Clinical Trials With Nonrandomized Database Analyses. https://jamanetwork.com/journals/jama/article-abstract/2804067