Clinical Researcher—August 2023 (Volume 37, Issue 4)
RULES & REGULATIONS
Arthur Ooghe, BASc, MEng; Samuel Branders, MS, PhD
In May 2023, the U.S. Food & Drug Administration (FDA) released final guidance on “Adjusting for Covariates in Randomized Clinical Trials for Drugs and Biological Products.” One relatively minor difference from the draft version actually represents a major step forward in statistical analyses for randomized controlled trials (RCTs).
This article explains the history of the use of covariates, the importance of this nuanced final guidance from the FDA, and a case study for drug developers seeking to apply this method to their statistical analyses.
The Role and Prevalence of Covariates in RCTs
By nature, people are heterogeneous. From differences in age and gender to medical history and psychology, heterogeneity is an important consideration for managers of clinical trials. In fact, drugs are required to be evaluated in a diverse sample of patients intended to represent the general population.
While this heterogeneity is necessary to ensure clinical trial results are generalizable, it also introduces challenges for clinical trial data: heterogeneity of patients often translates into heterogeneity of response. Furthermore, certain patient characteristics influence disease progression but are not necessarily indicative of drug response.
An age-old example of this challenge is age itself. A clinical trial patient's age can influence how much he or she improves during treatment, independent of whether the drug works. This factor induces noise in clinical trial data, making it more difficult to demonstrate statistically significant differences between treatment groups. Another example is the baseline severity of a disease. In a chronic pain indication, for example, someone who has higher pain at the start of a study may see a sharper reduction in pain throughout treatment. However, this doesn't indicate drug ineffectiveness for those who have less pain at the start of the study.
Noise from prognostic factors like these can drown out the signal, so that the study statistician can't "see" the difference between the active and control arms of the trial. In RCTs, chance imbalances in patient characteristics between groups can also bias the data. What can statisticians do to minimize these differences and biases while still being able to prove efficacy and safety for a generalized population?
The answer is called a covariate adjustment: a technique that aims to isolate the effect of the treatment being studied while accounting for the potential impact of baseline characteristics (covariates) on the outcome.
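As a rough illustration of the idea, here is a minimal sketch in Python with simulated data (the variable names, effect sizes, and model are hypothetical, not any specific trial's analysis). Adjusting the outcome model for a strongly prognostic baseline covariate such as age shrinks the standard error of the estimated treatment effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
treat = rng.integers(0, 2, n)          # 1 = active arm, 0 = control arm
age = rng.normal(50, 10, n)            # prognostic baseline covariate
# Simulated outcome: true treatment effect of 2.0, a strong age effect, noise
y = 2.0 * treat + 0.5 * age + rng.normal(0, 5, n)

def ols(X, y):
    """Return (coefficients, standard errors) for an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

ones = np.ones(n)
# Unadjusted analysis: y ~ intercept + treatment
b_u, se_u = ols(np.column_stack([ones, treat]), y)
# Adjusted analysis:   y ~ intercept + treatment + age
b_a, se_a = ols(np.column_stack([ones, treat, age]), y)

print(f"unadjusted effect {b_u[1]:.2f} (SE {se_u[1]:.2f})")
print(f"adjusted effect   {b_a[1]:.2f} (SE {se_a[1]:.2f})")
```

Because age explains a large share of the outcome variance, the adjusted model's residual noise is smaller, so the treatment-effect standard error (and hence its confidence interval) shrinks without changing what is being estimated.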
The use of covariates is not new to RCTs. In fact, in a survey of RCTs published across four journals from 2009 to 2010, 84% reported using covariates.{1} Because they are so widely used, the FDA and the European Medicines Agency (EMA) have issued regulatory guidance on their practical utility. The EMA’s “Guideline on adjustment for baseline covariates in clinical trials” went into effect as of September 2015, and the FDA put forth a first draft of “Adjusting for Covariates in Randomized Clinical Trials for Drugs and Biologics with Continuous Outcomes: Guidance for Industry” in April 2019.
What the FDA's new final guidance on the subject provides statisticians that they didn't have before is a necessary framework for making adjusted analyses more precise than ever.
The FDA’s Final Word
There are relatively few differences between the final version (dated 2023) and the last draft version (dated 2021). This signals that substantial reflection took place over those two years to solidify concepts that RCTs have followed for years.
In summary, the FDA highlights several important recommendations about the technique as a whole and selecting covariates themselves{2}:
- An analysis of an efficacy endpoint can be conducted unadjusted, but adjusting for baseline covariates can narrow the confidence interval of the treatment effect estimate, leading to more powerful hypothesis testing with minimal impact on the Type I error rate (false positives).
- Covariates should be few in number relative to the sample size and measured at baseline before randomization and treatment start.
- Covariates can be prognostic indices derived from scientific literature or defined and constructed based on previous studies.
The last bullet point above is where things get interesting for RCTs looking to improve treatment effect size evaluation. “Prognostic indices” refers to the use of composite covariates: a variable that combines multiple individual covariates into a single measure to simplify analysis and reduce confounding risks.
In light of this new final guidance, composite covariates are treated like any other individual baseline covariate, so long as the analysis model complies with the guidance. The FDA's final word intends to encourage the correct use of prognostic factors to improve estimation precision.
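To make the notion concrete, here is a hypothetical sketch (Python, simulated data; all variable names and coefficients are invented): a prognostic model is fit on historical data, and several baseline variables are then collapsed into a single predicted score that serves as one composite covariate in the new trial's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Historical data, e.g. pooled control arms from earlier studies ---
m = 1000
hist_age = rng.normal(50, 10, m)
hist_base = rng.normal(6, 2, m)        # baseline severity score (hypothetical)
hist_outcome = 0.3 * hist_age + 1.2 * hist_base + rng.normal(0, 3, m)

# Fit a prognostic model: outcome ~ age + baseline severity
X_hist = np.column_stack([np.ones(m), hist_age, hist_base])
w, *_ = np.linalg.lstsq(X_hist, hist_outcome, rcond=None)

# --- New trial: collapse the same baseline variables into ONE score ---
def composite_score(age, baseline):
    """Predicted outcome under no treatment = a single composite covariate."""
    return w[0] + w[1] * age + w[2] * baseline

new_patients = [(45, 7.1), (62, 4.8), (51, 6.0)]   # (age, baseline severity)
scores = [composite_score(a, b) for a, b in new_patients]
```

The single `scores` column then stands in for several correlated baseline variables in the primary analysis model, keeping the covariate count low relative to the sample size, as the guidance recommends.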
What This Means for RCTs
Covariates that statisticians typically measure include demographic variables that are easily collected (age and gender) and baseline values of primary outcomes. However, while covariate adjustment is a tried-and-true statistical method, RCTs still face a high rate of failure. In fact, nine out of 10 drug candidates fail in Phase I, II, and III clinical trials,{3} with up to 50% of these failures attributed to a lack of clinical efficacy.{4}
Evidently, treatment effect estimation is still imprecise, but adding more covariates to chip away at those numbers is not the answer either. Too many covariates increase the risk that they become confounding factors, with the exact opposite of the intended effect.{5} So, statisticians must select the fewest covariates that are most likely to have a strong association with the outcome. While the classical approach works for simple situations, it is not enough for the more complex scenarios at play in RCTs. This is where the final FDA guidance is a big deal, as it introduces a solution with composite covariates.
Adjusting for Complexity: The Placebo Response
One such complex scenario present in RCTs is the placebo response. The placebo response is the measured improvement of a patient after receiving a sham treatment, which results from a combination of several different factors that may mimic drug response, including baseline disease intensity, regression to the mean, and the placebo effect (where multiple psychological factors are involved).
The placebo response is a significant, specific source of variability that has plagued drug development for decades, representing a major cause of clinical trial failure.{6} Here are just a few examples of its prevalence across indications:
- Fibromyalgia: an average of 60% of the treatment response can be attributed to placebo response across endpoints.{7}
- Osteoarthritis: an average of 75% of the treatment response for pain endpoints can be attributed to the placebo response.{8}
- Depression: 68% of the measured treatment response was attributable to the placebo response, which was highest for the primary outcome (depression) but also substantial for anxiety, general psychopathology, and quality of life.{9}
It is well understood that the placebo response is an innate characteristic of patients, and it is evident that it is responsible for a lot of noise in clinical trial data that leads to higher rates of failure. This raises the question: Can this characteristic be used as a covariate in clinical analysis like age or baseline intensity of a disease?
Up until recent technological breakthroughs, the answer has been no.
Constructing a Placebo Responsiveness Composite Covariate
As discussed, a covariate must be measured at baseline. For years, placebo responsiveness could only be estimated at the end of the study, and only for patients receiving placebo, which meant it couldn't be used in a covariate adjustment.
Today, however, it is possible to construct a composite covariate for placebo responsiveness based on baseline data.{10} First, this requires an understanding of individual patient psychology based on stable personality traits and expectations. This information, combined with other patient baseline data (age, intensity of disease, etc.), provides a clear picture of an individual’s characteristics that may impact treatment estimation. This is where technology comes into play.
Machine learning is a subset of artificial intelligence (AI) that uses statistics to find patterns in massive amounts of data. This is exactly what clinical trials need to be able to do: Find patterns in historical patient psychology data to predict placebo response. In 2023, this technology is more mature than ever, and disease-specific predictive machine learning models exist that have been calibrated based on historical data. The assessment of individual patient psychology can be combined with this trained algorithm to calculate a relative placebo responsiveness score for each patient at the beginning of the trial. This score, which is a combination of multiple factors associated with placebo response, represents a composite covariate that can be used in the statistical analysis.
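A sketch of what such a pipeline might look like (entirely hypothetical: simulated questionnaire data and a plain ridge regression stand in for a calibrated, disease-specific model): the model is trained on historical placebo-arm data, then applied to a new patient's baseline answers to produce a single responsiveness score.

```python
import numpy as np

rng = np.random.default_rng(2)

# --- Historical placebo-arm data: baseline questionnaire features vs
#     observed placebo response (all names and numbers are invented) ---
m, k = 800, 12                          # patients, personality/expectation items
Q = rng.normal(0, 1, (m, k))            # standardized questionnaire answers
true_w = rng.normal(0, 0.5, k)          # latent link to placebo response
placebo_resp = Q @ true_w + rng.normal(0, 1.0, m)

# Ridge regression keeps the fit stable with many correlated items
lam = 1.0
W = np.linalg.solve(Q.T @ Q + lam * np.eye(k), Q.T @ placebo_resp)

def responsiveness_score(q):
    """Relative placebo-responsiveness score from baseline answers only."""
    return q @ W

# Score a newly enrolled patient at baseline, before randomization
new_patient = rng.normal(0, 1, k)
score = responsiveness_score(new_patient)
```

Because the score is computed from baseline data alone, it satisfies the guidance's requirement that covariates be measured before randomization and treatment start, and it folds many psychological variables into one composite covariate.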
This method has already been successfully applied in RCTs testing areas like pain, osteoarthritis, and Parkinson’s disease. In specific cases, it has improved assay sensitivity (the ability to distinguish placebo treatment from drug treatment) by nearly 40%{11} and improved study power by 14%. Moreover, the approach can be implemented for about 1% to 3% of the total per-patient cost for the trial and can be applied to virtually any therapeutic area or indication.{12}
The advent of AI-based methods{13} has allowed researchers to mitigate the negative consequences of high placebo response rates by way of a covariate adjustment. In doing so, researchers can reduce the number of covariates for adjustment and increase the expected association with the outcome—aligning perfectly with the FDA’s final guidance.
Conclusion
The FDA has published its final official opinion on the use of covariates as a recommendation to improve treatment effect size evaluation—and it presents significant opportunity for managers of RCTs who are struggling with treatment effectiveness estimation due to the placebo response.
Placebo responsiveness almost always impacts results and represents a leading cause of Phase II and III trial failures. With machine learning technology, the characteristic can be accurately predicted before the study, which means statisticians can use the covariate approach to manage this significant source of data variability.
It is important to think critically and follow proper guidance as it relates to selecting covariates. However, this is a breakthrough method for RCTs that, when supported by the FDA’s final guidance, will unlock an entirely new level of precision in statistical analyses.
References
1. Ciolino JD, Palac HL, Yang A, et al. 2019. Ideal vs. real: A systematic review on handling covariates in randomized controlled trials. BMC Med Res Methodol 19(136). https://doi.org/10.1186/s12874-019-0787-8
2. Ooghe A. 2023. The FDA highlights the tools to enhance the precision of primary analyses in clinical trials. https://cognivia.com/the-fda-highlights-the-tools-to-enhance-the-precision-of-primary-analyses-in-clinical-trials/
3. Dowden H, Munro J. 2019. Trends in clinical success rates and therapeutic focus. Nature Reviews Drug Discovery 18(7):495–6. https://doi.org/10.1038/d41573-019-00074-z
4. Sun D, Gao W, Hu H, Zhou S. 2022. Why 90% of clinical drug development fails and how to improve it? Acta Pharmaceutica Sinica B 12(7):3049–62. https://doi.org/10.1016/j.apsb.2022.02.002
5. Branders S. 2020. From covariates to confounding factors: the danger of having too many covariates. https://cognivia.com/from-covariates-to-confounding-factors-the-danger-of-having-too-many-covariates/
6. Dumitrescu TP, McCune J, Schmith V. 2019. Is Placebo Response Responsible for Many Phase III Failures? Clinical Pharmacology and Therapeutics 106(6):1151–4. https://doi.org/10.1002/cpt.1632
7. Whiteside N, Sarmanova A, Chen X, Zou K, Abdullah N, Doherty M, Zhang W. 2018. Proportion of contextual effects in the treatment of fibromyalgia: a meta-analysis of randomised controlled trials. Clinical Rheumatology 37(5):1375–82. https://doi.org/10.1007/s10067-017-3948-3
8. Zou K, Wong J, Abdullah N, Chen X, Smith T, Doherty M, Zhang W. 2016. Examination of overall treatment effect and the proportion attributable to contextual effect in osteoarthritis: meta-analysis of randomised controlled trials. Annals of the Rheumatic Diseases 75(11):1964–70. https://doi.org/10.1136/annrheumdis-2015-208387
9. Rief W, Nestoriuc Y, Weiss S, Welzel E, Barsky AJ, Hofmann SG. 2009. Meta-analysis of the placebo response in antidepressant trials. Journal of Affective Disorders 118(1-3):1–8. https://doi.org/10.1016/j.jad.2009.01.029
10. Branders S, Pereira A, Bernard G, Ernst M, Dananberg J, Albert A. 2022. Leveraging historical data to optimize the number of covariates and their explained variance in the analysis of randomized clinical trials. Statistical Methods in Medical Research 31(2):240– https://doi.org/10.1177/09622802211065246
11. Branders S, Dananberg J, Clermont F, et al. 2021. Predicting the Placebo response in OA to Improve the Precision of the Treatment Effect Estimation. Proceedings of the 2021 OARSI Connect, Virtual World Congress on Osteoarthritis (Late Breaking Abstract).
12. Smith E. 2022. Using Machine Learning to Predict Placebo Response and Increase Clinical Trial Success. Applied Clinical Trials. https://www.appliedclinicaltrialsonline.com/view/using-machine-learning-to-predict-placebo-response-and-increase-clinical-trial-success
13. Smith EA, Horan WP, Demolle D, Schueler P, Fu DJ, Anderson AE, Geraci J, Butlen-Ducuing F, Link J, Khin NA, Morlock R, Alphs LD. 2022. Using Artificial Intelligence-based Methods to Address the Placebo Response in Clinical Trials. Innovations in Clinical Neuroscience 19(1-3):60–70.
Arthur Ooghe, BASc, MEng, is a Data Mining and Statistical Research Scientist at Cognivia.
Samuel Branders, MS, PhD, is Data Mining Scientist Director at Cognivia.