Managing Data from Clinical Trials: Q&A with Rust Felix

Slope’s CEO and co-founder discusses technological advances that are solving problems with data collection and sorting from clinical trials.

Rust Felix
CEO & co-founder
Slope

Clinical trial sites produce massive amounts of data, all of which is important. Thankfully, new technologies are helping pharma companies sort through this data much more efficiently. Rust Felix, CEO and co-founder of Slope, spoke with Pharmaceutical Executive about some of these new technological advances.

Pharmaceutical Executive: How much have pharma companies struggled with managing data from clinical trials in the past?
Rust Felix: Sponsors have historically grappled with major challenges in managing study data, particularly when it comes to biospecimens. Ensuring sites have the lab kits and supplies they need to collect samples from patients is just the first hurdle. If sites lack the resources to properly manage their clinical inventory, they risk turning patients away, accidentally missing required sample collections, or mishandling patient samples, resulting in downstream data quality issues. Once collected, samples are often shipped across multiple labs and geographies, and the resulting data must be ingested and reconciled across systems and stakeholders that each manage and capture data differently, and rarely in real time. The issues start long before data reporting, however, at the research sites themselves: improper sample collection, processing, storage, and shipment can all jeopardize the quality and integrity of the data from the outset.

When sites lack the tools, support, and resources to optimize their sample management processes, the result is missing or inaccurate data and sample mishandling. This breakdown doesn’t just slow down data reporting; it causes a domino effect that impacts timelines for patient treatment, study completion, and sponsor decision-making around everything from interim analyses and database locks to addressing site performance issues and amending protocols. The stakes are high: missing or uncleaned data can delay bringing life-saving and life-changing therapies to market. The heavy resources and budgets allocated to managing these issues reflect the scale of the burden, yet the problem remains pervasive across the industry.

PE: What are the new technologies helping to solve these issues?
Felix: Until recently, technology has struggled to keep pace with the complexity of managing biospecimens and their associated data. Most solutions have been piecemeal, focused on data aggregation, tracking shipments, or attempting to reconcile sample metadata. While AI could eventually be used to break down data silos, identify discrepancies, and speed up sample tracking, its current applications are still limited to individual aspects of the biospecimen lifecycle, or rely on data that may not be accurate in the first place. What’s been missing is a holistic, end-to-end solution that covers the entire biospecimen management process from research sites through to labs and beyond.

In addition, various technology tools are used alongside other services to help sponsors manage and leverage their sample data. Most of these are central lab portals designed to give sponsors access to uncleaned sample metadata, often days after a sample first arrives at the lab. Because of the rampant discrepancies in sample metadata that must be reconciled, QC’ed data often isn’t available until weeks or even months after collection, which isn’t ideal for making critical study decisions.
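To make the reconciliation problem concrete, here is a minimal sketch of the kind of check involved: comparing a site's collection log against a lab's accessioning manifest, matching records on sample ID, and flagging samples that never reached the lab or whose metadata disagrees. The field names, records, and the reconcile function are invented for illustration; they do not represent Slope's product or any specific lab portal.

```python
# Hypothetical example: reconciling sample metadata between a site's
# collection log and a central lab's accessioning manifest.

site_log = [
    {"sample_id": "S-001", "visit": "Week 4", "collected": "2024-05-01", "tube_type": "EDTA"},
    {"sample_id": "S-002", "visit": "Week 4", "collected": "2024-05-01", "tube_type": "SST"},
    {"sample_id": "S-003", "visit": "Week 8", "collected": "2024-05-29", "tube_type": "EDTA"},
]

lab_manifest = [
    {"sample_id": "S-001", "visit": "Week 4", "collected": "2024-05-01", "tube_type": "EDTA"},
    {"sample_id": "S-002", "visit": "Week 8", "collected": "2024-05-01", "tube_type": "SST"},  # visit mismatch
    # S-003 was never accessioned at the lab
]

def reconcile(site_records, lab_records, fields=("visit", "collected", "tube_type")):
    """Return samples missing at the lab and field-level metadata discrepancies."""
    lab_by_id = {rec["sample_id"]: rec for rec in lab_records}
    missing, discrepancies = [], []
    for rec in site_records:
        lab_rec = lab_by_id.get(rec["sample_id"])
        if lab_rec is None:
            missing.append(rec["sample_id"])  # collected at the site but not received by the lab
            continue
        for field in fields:
            if rec.get(field) != lab_rec.get(field):
                discrepancies.append((rec["sample_id"], field, rec.get(field), lab_rec.get(field)))
    return missing, discrepancies

missing, discrepancies = reconcile(site_log, lab_manifest)
print("Missing at lab:", missing)                 # ['S-003']
print("Metadata discrepancies:", discrepancies)   # [('S-002', 'visit', 'Week 4', 'Week 8')]
```

In practice this matching happens across many sites, labs, and systems rather than two in-memory lists, which is why the cleanup can stretch into weeks or months when records are captured inconsistently.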

Lab-specific e-requisition solutions are starting to emerge, but nearly all of them are limited because they are tailored to a specific lab’s needs and cannot be adopted across every lab and site supporting a trial. Because these solution providers rarely involve research site staff in e-req development, their solutions don’t conform to site processes, contributing to operational friction and lower compliance. What’s more, the majority of research sites still rely on paper requisition forms to document sample metadata, which introduces manual errors and adds significant friction to data flows.

PE: What are the operational challenges of biospecimen collection?
Felix: One of the biggest challenges in modern clinical trials is managing biospecimen logistics at scale. Samples must be sent to various labs–sometimes across different countries with unique logistical challenges and regulatory restrictions–and tracking those shipments is critical to avoid loss, delays, or improper processing. Even managing the informed consent forms (ICFs) that dictate how samples and data can be used requires dedicated teams and specialized databases.

Beyond that, research site processes for collecting, storing, and shipping samples can vary widely. Without standardized workflows, samples are often mishandled or stored incorrectly, leading to delays or the need to recollect samples.

Each sample’s data also needs to be accurately captured and reconciled across multiple databases. With the rise of precision medicine, where treatments are tailored to individual patients, the stakes are even higher. More labs are involved, more data points need to be managed, and the logistical complexity grows exponentially.

Regulatory hurdles only add to the challenge. While we’ve seen steps forward, such as the FDA’s recent ruling on lab diagnostic tests, the industry still lacks standardized regulations for biospecimen data reporting and management. This is where a unified, integrated solution is needed in order to streamline sample tracking, ensure compliance, and mitigate operational risk at every stage.

PE: What are the best practices for data governance and integrity?
Felix: Effective data governance and integrity rely on careful planning and a rigorous approach to every part of the biospecimen lifecycle, starting at the research sites. It’s critical to establish clear, standardized processes for tracking and reconciling sample data at the site level to prevent issues from snowballing as the trial progresses.

Site training and vendor qualification play a pivotal role in maintaining data quality. Too often, study teams focus on the data aggregation side while overlooking the importance of the front-end processes at research sites, CROs, and labs. Ensuring the right workflows, vendor partnerships, and systems are in place from the very beginning is essential to avoid costly downstream problems.

It’s important to support complex sample management workflows with technology, and to leverage strategic insights to better align data management and sample tracking processes across all of your vendors. By connecting the dots between clinical and biospecimen operations, data management, and translational medicine, sponsors can make critical study decisions based on accurate, real-time data, reducing the risk of missed samples or discrepancies that could compromise the science crucial to successful clinical research.
