Validating COAs: Key Steps for Accurate and Reliable Clinical Trial Data

Feature Article

With the increasing adoption of COAs, the need for standardization across training, collection, and implementation to ensure high-quality, consistent data has become apparent in recent decades.

Ryan Murphy, PhD
Outcomes Researcher, Patient Centered Outcomes, ICON

Clinical outcome assessments (COAs) have a long history in medical practice. They have been used to support claims in product labeling and to inform decisions that improve patient care and, in turn, patients' functional status, quality of life (QoL), overall care management, satisfaction with care, and survival.

As the adoption of COAs has grown, so has the recognition that training, collection, and implementation must be standardized to ensure high-quality, consistent data. Appropriately training raters is a key component of COA utilization, reducing variability in measurements, rater interpretation, bias, and scale administration errors.

Challenges to consider when developing COAs

While improved standardization and validation are opening more opportunities for COA use, several challenges to employing COAs as endpoints in clinical research remain. One fundamental barrier is uncertainty about which COA instruments are available and how they fit into trial procedures. Further, respondents to a recent ICON poll noted that a lack of standardization in scaling COAs across sites and a lack of indication-specific validated instruments pose even greater challenges.

New COAs are continually being developed to meet the varied needs of clinical research. Development is a structured, scientific, and iterative process designed to produce high-quality instruments. During this process, developers must:

  • Conduct a landscape assessment that includes a literature review, gap analysis, and interviews with key opinion leaders (KOLs) before drafting a conceptual model.
  • Conduct qualitative research with patients via concept elicitation interviews or focus groups to ensure that the instrument will measure what matters to patients. Revise the conceptual model based on the qualitative research findings, then organize the model into a conceptual framework.
  • Create a first draft of the instrument from the conceptual framework. Then, employ experts to review the draft and provide input.
  • Establish content validity by conducting cognitive interviews to determine whether the instrument is being interpreted and understood as intended. This is an iterative process, and the instrument should be modified and retested until there is consensus that the draft cannot be meaningfully improved.
  • Perform a psychometric evaluation that tests the instrument in a sufficiently large sample. The instrument is finalized only after this last evaluation and approval; one such check is sketched after this list.
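
To make the psychometric step more concrete, the sketch below computes Cronbach's alpha, one common internal-consistency statistic examined during psychometric evaluation. This is a minimal illustration in Python: the item-response matrix is hypothetical, and a full evaluation would also cover properties such as test-retest reliability, construct validity, and responsiveness.

    # Minimal sketch of one psychometric check: Cronbach's alpha for internal consistency.
    # Rows are respondents, columns are items; all data here are hypothetical.
    import numpy as np

    def cronbach_alpha(item_scores: np.ndarray) -> float:
        """Cronbach's alpha for a respondents x items matrix of scores."""
        scores = np.asarray(item_scores, dtype=float)
        n_items = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1)       # variance of each item
        total_variance = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale score
        return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical pilot data: six respondents answering a four-item scale (0-4 responses).
    responses = np.array([
        [3, 4, 3, 4],
        [2, 2, 3, 2],
        [4, 4, 4, 3],
        [1, 2, 1, 2],
        [3, 3, 4, 4],
        [0, 1, 1, 0],
    ])

    print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")

Higher values indicate that the items hang together as a single scale; the level considered acceptable is set out in the psychometric analysis plan rather than being a fixed universal cut-off.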

Keys to effective COA data collection

Common risks to proper COA implementation include rater error, unclear instructions, improperly followed directions, and delegation to untrained staff. Proper training is foundational to effective COA administration and is the most important safeguard against these risks. More than practice or collaboration, respondents to the recent ICON poll identified formal standardized training as the key to avoiding errors and the leading factor supporting valid data collection. Recent regulatory and industry communications likewise indicate a growing awareness of the need for standardized training.
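
One way to verify that rater training has actually reduced variability is to quantify agreement between trained raters scoring the same cases. The sketch below is a hypothetical example (the 0-4 ordinal item, the scores, and the use of two raters are assumptions, not details from the article) using a quadratically weighted Cohen's kappa from scikit-learn.

    # Hypothetical inter-rater agreement check after training: two raters score the same
    # ten cases on a 0-4 ordinal item, and agreement is summarized with weighted kappa.
    from sklearn.metrics import cohen_kappa_score

    rater_a = [0, 1, 2, 2, 3, 4, 3, 1, 0, 2]   # hypothetical scores from rater A
    rater_b = [0, 1, 2, 3, 3, 4, 3, 1, 1, 2]   # hypothetical scores from rater B

    # Quadratic weights penalize large disagreements more heavily than near-misses,
    # which suits ordinal rating scales.
    kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
    print(f"Weighted kappa = {kappa:.2f}")

A low agreement value would flag a need for retraining or for clarification of the scoring anchors before data collection continues.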

To successfully incorporate outcome measures into research as endpoints for regulatory decision-making, sponsors should ensure that the COAs they use are implemented in a standardized manner. Instructions to patients need to be given exactly as prescribed in the assessment form or administration guidance. The assessor must then follow all administration rules, including set-up procedures and the rules for discontinuation. The rater must also adhere strictly to the scoring anchors provided as part of the administration guidance.

Validation is also essential to successful COA data collection. The validation process ensures that an assessment or questionnaire is accurately and reliably understood by those who are intended to use it, and that it accurately and reliably measures, predicts, or assesses a concept of interest that is relevant and meaningful to the patient population.
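
As a simplified illustration of what "accurately and reliably measures" can look like in practice, one element of a validation analysis is convergent validity: scores from the new instrument are correlated with an established measure of the same concept of interest. The data below are hypothetical, and the statistic shown is only one piece of a full validation.

    # Hypothetical convergent-validity check: scores from a new COA and an established
    # reference instrument, collected from the same (made-up) pilot sample.
    from scipy.stats import pearsonr

    new_coa_scores   = [12, 18, 25, 9, 30, 22, 15, 27, 11, 20]   # hypothetical new instrument
    reference_scores = [14, 20, 27, 10, 33, 21, 17, 29, 12, 19]  # hypothetical validated measure

    r, p_value = pearsonr(new_coa_scores, reference_scores)
    print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
    # A strong correlation with a validated measure of the same concept supports convergent
    # validity; the strength expected is specified in the validation plan, not a fixed cut-off.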

Sponsors must ensure COAs are translated and localized as needed, as they should be completed in the patient’s primary language to prevent issues arising from a language barrier or cultural misunderstanding. Translations must be linguistically validated to ensure that they maintain relevance and are sensitive to the culture and norms of the local patient population without losing the instrument’s scientific intent.

Endpoints captured through a COA must also be consistent across modalities, whether a legacy paper-and-pen modality, telephone administration, interviewer administration, or electronic administration.
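
To show what checking consistency across modalities can involve quantitatively, the sketch below applies a two one-sided tests (TOST) procedure to paired scores from the same hypothetical respondents completing a COA on paper and electronically. The data and the two-point equivalence margin are assumptions for illustration only; real mode-equivalence studies follow a pre-specified statistical plan and typically also examine agreement statistics such as the intraclass correlation coefficient.

    # Hypothetical mode-equivalence check: paired paper vs. electronic scores and a
    # two one-sided tests (TOST) procedure against a pre-specified equivalence margin.
    import numpy as np
    from scipy import stats

    paper      = np.array([20, 25, 18, 30, 22, 27, 19, 24, 26, 21], dtype=float)  # hypothetical
    electronic = np.array([21, 24, 18, 31, 23, 26, 20, 24, 27, 22], dtype=float)  # hypothetical
    margin = 2.0  # assumed equivalence margin, in score points

    diff = electronic - paper
    n = len(diff)
    mean_diff = diff.mean()
    se = diff.std(ddof=1) / np.sqrt(n)

    # Two one-sided t-tests: is the mean difference above -margin and below +margin?
    p_lower = 1 - stats.t.cdf((mean_diff + margin) / se, df=n - 1)  # H0: difference <= -margin
    p_upper = stats.t.cdf((mean_diff - margin) / se, df=n - 1)      # H0: difference >= +margin
    p_tost = max(p_lower, p_upper)

    print(f"Mean difference = {mean_diff:.2f}, TOST p = {p_tost:.3f}")

A small TOST p-value supports the claim that the two modalities yield equivalent scores within the chosen margin.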

Cognitive interviewing and usability testing studies can help ensure that a COA is fit for purpose, equivalent across modalities, and easy for the user to complete.

Adopting new modes of COA administration responsibly requires that sponsor companies adhere to the best practices outlined here to ensure that COAs produce reliable data. Importantly, this includes being proactive in providing training on how to properly administer and rate COAs. Doing so will help ensure that data on how a patient feels, functions, or survives are reported and evaluated consistently.
