Data Monitoring and Complex Clinical Trials: Toward a Solution

Clinical Researcher—December 2023 (Volume 37, Issue 6)

SPECIAL FEATURE

Professor Deborah Ashby, MSc, PhD; Professor Jennifer Visser-Rogers, MSc, PhD

 

 

Complex trial designs may be more efficient than the traditional two-group model, but they also present important data monitoring challenges.

Standard one-size-fits-all approaches to evaluating accumulating data are not feasible when working with multiple variables, whether they be multiple endpoints, multiple treatments, or both. Ensuring a robust framework upon which to base stop/go decisions is essential to protecting participant safety whilst accelerating access to life-changing new treatments. The weight of responsibility carried by data monitoring committees (DMCs) is considerable, so it is important that these complexities are worked through.

Background

DMCs are a crucial element of clinical trial conduct. These committees confidentially review data as they accrue, monitoring primarily for safety (and often efficacy) to inform ongoing stop/go decisions. This is particularly important given the staggered nature of clinical trial recruitment: many studies will still be enrolling when data on early participants become available. DMC review can reduce the risk of additional patients being exposed to serious adverse events or toxicity, or speed up access to treatments that show efficacy. These independent panels also safeguard the continuing validity and scientific merit of the trial. Statisticians are an integral part of this process.

However, the emergence of more complex trials, such as platform trials and those with multiple endpoints, complicates the work of DMCs. One-size-fits-all statistical methods are no longer valid. Instead, researchers must employ a much more nuanced approach that considers a variety of factors, including the research question, therapy area, and patient preferences.

An Emerging Landscape

For decades, clinical research has been based on a standard two-group, single-endpoint methodology. This robust methodology has served healthcare well, but it is also slow and inefficient. It can take decades to bring a new drug candidate through clinical development and onto the market, and more investigational products fail than succeed.

As such, new ways of working are emerging. They include seamless phase trials and interim analyses, as well as more innovative designs such as platform trials, which allow new interventions to be added, assessed, and removed (either for efficacy or futility) as data accumulate. Such models, which can significantly accelerate development, are not new, but they have become much more widely used since 2020, when they were deployed to investigate treatments during the COVID-19 pandemic.

PRINCIPLE, REMAP-CAP, and RECOVERY are among the successful UK-based platform trials initiated early in the COVID-19 pandemic, each testing multiple potential treatments for the virus. Through this efficient design, the in-hospital RECOVERY trial, for example, was able to demonstrate the efficacy of agents such as dexamethasone and tocilizumab within months of study set-up. Other arms, including hydroxychloroquine, were stopped early for futility. These experiences showed that innovative trial design can deliver access to efficacious treatments quickly and safely, while also limiting the time and resources spent on clinical dead ends.

While the potential benefits of these designs are clear, they complicate the nature of DMC assessments, which remain the bedrock of such stop/go decisions. Complex clinical trials provide opportunities, but how best to manage data monitoring for these studies remains an open question. Some of the issues have been thought through, but others are not easy to answer.

Models and Challenges

Multiple endpoints can arise in treatment trials but are more frequently associated with prevention studies, where a treatment may prevent the condition for which it is intended but increase the risk of other adverse outcomes. Teams need to balance these different endpoints, considering trade-offs and weightings in their stop/go strategies. A clinical trial of trastuzumab in women with non-metastatic, operable primary invasive breast cancer, for example, showed a reduction in the proportion with either disease progression or death, but an increase in cardiotoxicity. This is, in fact, a common problem for many oncology drugs, so how should one proceed?

The number needed to treat (NNT) and the number needed to harm (NNH) are a common way to quantify these changes in benefit and risk, but sometimes the picture is more nuanced and requires human judgement alongside the numbers. These measures assume that one cares equally about breast cancer recurrence and cardiotoxicity, which may not be the case. Furthermore, cardiotoxicity is easier to treat if clinicians are looking for it, so its status as a known risk of oncology drugs, in a way, decreases its realized risk.
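
To make the arithmetic concrete, the short sketch below computes NNT and NNH as the reciprocals of the absolute risk reduction and the absolute risk increase. The event rates are invented purely for illustration and are not taken from the trastuzumab trial or any other study.

```python
# Illustrative only: hypothetical event rates, not results from any actual trial.

def number_needed(control_rate: float, treatment_rate: float) -> float:
    """Reciprocal of the absolute risk difference: NNT if the treatment reduces
    the event rate, NNH if it increases it."""
    risk_difference = abs(control_rate - treatment_rate)
    if risk_difference == 0:
        raise ValueError("Rates are equal; NNT/NNH is undefined.")
    return 1.0 / risk_difference

# Hypothetical event rates for a trastuzumab-like trade-off.
recurrence_control, recurrence_treated = 0.13, 0.08    # disease progression or death
cardiotox_control, cardiotox_treated = 0.005, 0.025    # symptomatic cardiotoxicity

nnt = number_needed(recurrence_control, recurrence_treated)   # 1 / 0.05 = 20
nnh = number_needed(cardiotox_control, cardiotox_treated)     # 1 / 0.02 = 50
print(f"NNT = {nnt:.0f}, NNH = {nnh:.0f}, NNH/NNT = {nnh / nnt:.1f}")
```

Even a favourable NNH/NNT ratio only settles the question if one values the two outcomes equally, which is precisely the assumption questioned above.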

Patients themselves will also have differing viewpoints on tolerability. Hormone replacement therapy (HRT) is known to affect, for example, cardiovascular risk, colon cancer risk, endometrial cancer risk, and the chances of hip fracture. How should all these risks (and more) be weighed up? It is important to look not just at these side effects, but also at the symptoms of the menopause itself and at personal preferences. Women with highly bothersome menopause symptoms will tolerate a greater risk of HRT side effects than those who are asymptomatic. Such considerations are challenging to translate into statistics, but decision aids and visualisations can help individual women make their own treatment decisions. The difficulty for DMCs is making decisions at the trial level; they cannot make decisions for individual women: they either stop the trial or they do not.
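
One way such trade-offs are sometimes formalised is with a simple weighted benefit-risk score, in which each outcome's risk difference is multiplied by a weight reflecting how much that outcome matters to the patient. The sketch below is a minimal illustration of that idea; the risk differences and weights are invented, and real decision aids elicit preferences far more carefully.

```python
# Minimal sketch of a weighted benefit-risk score; all numbers are invented.

# Change in events per 1,000 women over a fixed horizon (treatment minus no treatment).
# Negative values mean fewer events on treatment.
risk_differences = {
    "hip fracture": -5,
    "colon cancer": -1,
    "cardiovascular event": +4,
    "endometrial cancer": +1,
    "persistent bothersome menopause symptoms": -400,
}

def net_score(weights: dict) -> float:
    """Weighted sum of risk differences; a negative score favours treatment."""
    return sum(weights[outcome] * diff for outcome, diff in risk_differences.items())

# Two hypothetical women who weight the same outcomes differently (0 = indifferent).
symptomatic = {
    "hip fracture": 0.6, "colon cancer": 0.6, "cardiovascular event": 0.8,
    "endometrial cancer": 0.6, "persistent bothersome menopause symptoms": 0.3,
}
asymptomatic = dict(symptomatic, **{"persistent bothersome menopause symptoms": 0.0})

print(f"Symptomatic woman:  {net_score(symptomatic):+.1f}")   # strongly favours HRT
print(f"Asymptomatic woman: {net_score(asymptomatic):+.1f}")  # marginally against
```

The same data yield opposite conclusions under the two weightings, which is exactly why a DMC, which must reach a single trial-level decision, cannot simply adopt any one patient's preferences.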

Working with multiple treatments can be equally challenging. Factorial designs are useful when all the treatments are known in advance and will be used in the same group of people, but they are not helpful when not all interventions are known up front, as was the case during the COVID-19 pandemic. In this scenario, fully adaptive platform trials are extremely powerful. They reflect the evolving nature of scientific learning, but they create issues for trial conduct and for DMC considerations.

The community care–based PRINCIPLE trial, for example, evaluated eight treatments and interventions using an adaptive Bayesian “find the winner” design. It also used response-adaptive randomization, which allocated more participants to the treatment performing best. Via regular DMC review, the evaluation of hydroxychloroquine was stopped due to futility in May 2020, and azithromycin and doxycycline were stopped in January 2021. Inhaled budesonide was found to be beneficial and rolled out for routine use in April 2021.
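
The mechanics of response-adaptive randomization can be illustrated with a small simulation. The sketch below uses Thompson sampling on binary outcomes, a standard way to shift allocation toward whichever arm currently appears best, though not necessarily the exact algorithm used in PRINCIPLE; the arm names and recovery probabilities are invented.

```python
import numpy as np

rng = np.random.default_rng(2023)

# Hypothetical true recovery probabilities (unknown to the trial, of course).
true_rates = {"usual care": 0.60, "drug A": 0.60, "drug B": 0.62, "inhaled steroid": 0.70}
arms = list(true_rates)

# Beta(1, 1) prior on each arm's recovery probability, updated as outcomes arrive.
successes = {a: 0 for a in arms}
failures = {a: 0 for a in arms}
allocated = {a: 0 for a in arms}

for _ in range(2000):
    # Thompson sampling: draw from each posterior, allocate to the largest draw.
    draws = {a: rng.beta(1 + successes[a], 1 + failures[a]) for a in arms}
    chosen = max(draws, key=draws.get)
    allocated[chosen] += 1

    # Simulate this participant's outcome under the chosen arm's hypothetical rate.
    if rng.random() < true_rates[chosen]:
        successes[chosen] += 1
    else:
        failures[chosen] += 1

for a in arms:
    n = allocated[a]
    observed = successes[a] / n if n else float("nan")
    print(f"{a:15s} allocated {n:4d}, observed recovery {observed:.2f}")
```

In practice, platform trials often keep the control allocation fixed and adapt only among the active arms, so that every comparison retains a concurrent control group.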

Despite its immense success, this trial was not without DMC challenges, some of which are all the more relevant to best practices for trial conduct during a pandemic. One example is accounting for the control group's shift from unvaccinated to vaccinated status whilst the trial is ongoing. Bayesian time-machine models can help with this, but they are heavily dependent on modelling assumptions, so sensitivity analyses are important.
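
The intuition behind a time-machine adjustment is that the control event rate is allowed to drift across calendar-time periods, with the model borrowing information across periods rather than treating them as entirely separate. The sketch below is a deliberately simplified stand-in, not the full Bayesian time-machine model: it shrinks period-specific control rates toward the overall rate, and varying the strength of that shrinkage serves as a crude sensitivity analysis. All counts are invented.

```python
# Simplified stand-in for period-effect adjustment; all numbers are invented.

# Hypothetical control-arm data by calendar period: (events, participants).
control_periods = {
    "period 1 (unvaccinated)": (60, 200),
    "period 2 (mixed)":        (45, 200),
    "period 3 (vaccinated)":   (30, 200),
}

overall_events = sum(e for e, _ in control_periods.values())
overall_n = sum(n for _, n in control_periods.values())
overall_rate = overall_events / overall_n

def shrunken_rate(events: int, n: int, prior_strength: float) -> float:
    """Period-specific control rate shrunk toward the overall rate.

    prior_strength acts like a pseudo-sample size: 0 uses each period on its own,
    large values pool all periods together. Varying it probes how sensitive
    conclusions are to the amount of borrowing across time.
    """
    return (events + prior_strength * overall_rate) / (n + prior_strength)

for prior_strength in (0, 50, 500):
    rates = {p: shrunken_rate(e, n, prior_strength) for p, (e, n) in control_periods.items()}
    summary = ", ".join(f"{p}: {r:.2f}" for p, r in rates.items())
    print(f"prior_strength = {prior_strength:3d} -> {summary}")
```

If a treatment comparison looks very different with no borrowing than with heavy borrowing, the conclusion is being driven by the modelling assumptions about temporal drift, which is exactly what the sensitivity analyses mentioned above are meant to expose.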

Other difficult questions involve how long patients should continue to be randomized to a treatment when early data suggest that it is not effective. Especially in a pandemic, where media interest is high, these decisions can be influenced by publicized assumptions about treatment efficacy, or by other trials and observational studies that might sway opinions about a treatment.

How should DMCs for different trials interact with each other, and what information should be shared? Further, how much data on completed treatments can be published while a study is still ongoing? Sharing too many results could reveal information on a trial's current status, and it is difficult to reveal full details of a control group whilst it is still being used as a comparator for other treatments. Balancing these concerns against transparent decision making is itself a difficult judgement.

Answered and Unanswered Questions

Questions clearly remain about the most effective and robust data monitoring processes for complex clinical trials, and there is much need for further debate and framework development.

However, what we have learnt so far is that DMCs cannot work in silos, and that approaches must often be designed on a case-by-case basis. There is no one-size-fits-all approach for complex trial designs, and experts must work together to answer outstanding questions. Solutions need to be flexible, start with a shared understanding of the trial's purpose, and be based on a clear understanding of the decision guidance (which should be determined before any interesting data have been seen) as well as its implications.

Decisions around weighting and trade-offs are often informed by patient preferences and lived experience. As such, there is a huge need for researchers to work closely with the people they are setting out to serve from the earliest possible point in development.

The consequences of getting this wrong—either by allowing trials to continue when they are doing harm, or by incorrectly halting a study of an efficacious treatment—are too significant to ignore. Importantly, statisticians, who tend to be a lone voice on DMCs, need to be able to communicate their methods, calculations, and the implications of their work to the wider panel. Complex clinical trials can be hard to understand, but it’s vital that statisticians help their clinical colleagues navigate this new landscape.

Conclusion

Complex trials hold huge potential. They are more efficient, they deliver answers more quickly, and they are less expensive to conduct. If we are to seize this opportunity, we need to be prepared.

It is true that there is much work to be done, but it is incumbent on statisticians to ensure that their methodology is in order and in place if these emerging clinical trial models are to deliver on their promises.

Deborah Ashby

Professor Deborah Ashby, MSc, PhD, is Interim Dean of the Faculty of Medicine at Imperial College London where she holds the Chair in Medical Statistics and Clinical Trials.

Jennifer Visser-Rogers 

Professor Jennifer Visser-Rogers, MSc, PhD, is Vice President, Statistical Research and Consultancy, with Phastar.