Effective Data Analysis for Nontraditional Study Designs

Clinical Researcher—April 2022 (Volume 36, Issue 2)

TRIALS & TECHNOLOGY

Dejun (DJ) Tang, PhD

 

The nice-to-have advantages of nontraditional study designs have become must-haves as the need for innovative approaches to clinical trials has gained critical mass. The scientific community has long granted unconventional designs a cautious acceptance for tackling specific challenges, such as those in rare disease research. The rise of big data, passage of the 21st Century Cures Act in 2016, and the COVID-19 pandemic have all accelerated the need to more broadly systematize a framework for putting innovative designs and methodologies into practice and reaping the benefits.

For traditional, relatively straightforward studies, the designs and statistical analysis methods have been developed and standardized to a degree that they can be included in the protocol almost by rote. Decentralized, adaptive, and complex innovative designs (CIDs), however, require biometrics teams to push the boundaries and to approach planning, execution, and analysis with feasibility and meaningful interpretation in mind.

This article will discuss the differences between traditional study designs (including those listed in ICH E9 from the International Council for Harmonisation, along with other common ones) and nontraditional study designs, and then explain how biometrics teams can successfully navigate the changing approaches, including through:

  • examples of nontraditional study designs (for example, enrichment designs and event-driven designs);
  • advanced data analysis tools for unconventional data;
  • the application of digital technologies to increase flexibility while preserving integrity; and
  • future development of biostatistical methods for nontraditional designs.

Downstream Impacts of Nontraditional Study Designs

Nontraditional studies require more careful considerations of many design elements. Following are some of the more familiar unconventional approaches and how each impacts data analysis.

Adaptive

The term “adaptive design” in clinical trials covers a very broad area with different types of adaptations. In general, any preplanned change made in the middle of a trial based on the results of one or more interim analyses can be categorized as adaptive design.

Some specific adaptive designs have been studied and developed for more than a decade, and their study designs and analysis methods are relatively mature (e.g., a Phase II/III seamless design using the group sequential method). However, many other types of adaptive designs are still considered nontraditional, under development, or controversial (e.g., unblinded sample size reassessment to increase the sample size of a clinical trial).

One of the key requirements of an adaptive design is to control the familywise type I error rate while maintaining the study power. In many cases, this proves challenging, and special approaches to the analysis often need to be developed. For example, an adaptive design needs to set up go/no-go criteria for the interim analysis. When the criteria are based on multiple endpoints, it is not a straightforward matter to determine how the alpha-spending should be distributed among the endpoints at the time of the interim analysis; a special method to control the alpha-spending must be developed and tailored to that situation.
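As a rough illustration of alpha-spending for a single endpoint, the sketch below computes the widely used Lan-DeMets O’Brien-Fleming-type spending function; the two-look design, one-sided alpha of 0.025, and function name are illustrative assumptions, not a recommended analysis plan:

```python
# A minimal sketch of Lan-DeMets O'Brien-Fleming-type alpha-spending
# for a two-look group sequential design; illustrative only.
from scipy.stats import norm

def obf_spending(t: float, alpha: float = 0.025) -> float:
    """Cumulative one-sided type I error spent by information fraction t."""
    return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / t ** 0.5))

# Interim look at 50% of the planned information, final look at 100%.
spent_interim = obf_spending(0.5)   # ~0.0015 spent at the interim
spent_total = obf_spending(1.0)     # = 0.025, the full one-sided alpha
print(f"alpha spent at interim: {spent_interim:.4f}")
print(f"alpha left for final:   {spent_total - spent_interim:.4f}")
```

With multiple endpoints, the open question is how to split such a spending schedule across endpoints, which is exactly where a tailored method is needed.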

Enrichment

An enrichment study design is usually employed to select the patient population or populations in which the investigational medicine under study is most likely to be effective. It can be applied to medical products that may not show effectiveness in the general patient population, but which have good treatment effects in a very specific patient population.

The main challenge is how to predefine the criteria for selecting suitable candidate populations. When sufficient data are already available for the different populations, identifying the responsive ones may not be difficult; in that case, however, an enrichment design may not add much value.

An enrichment design is typically chosen precisely because we do not have much of that information, and we must rely on the early stage of the study to collect the data and then determine the appropriate populations. Setting the predefined criteria will not be an easy task given the lack of sufficient information.
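To make the idea concrete, the sketch below shows one hypothetical form such predefined criteria could take in a two-stage design with a single biomarker-defined subgroup; the threshold and function name are illustrative placeholders, not validated go/no-go criteria:

```python
# A minimal sketch of an interim enrichment decision for a two-stage
# design with one biomarker-defined subgroup; thresholds are illustrative.
def enrichment_decision(effect_full: float, effect_subgroup: float,
                        min_effect: float = 0.2) -> str:
    """Choose the stage-2 population from interim effect estimates
    (e.g., standardized mean differences)."""
    if effect_full >= min_effect:
        return "continue in the full population"
    if effect_subgroup >= min_effect:
        return "enrich: restrict stage 2 to the biomarker-positive subgroup"
    return "stop for futility"

print(enrichment_decision(effect_full=0.08, effect_subgroup=0.35))
# -> "enrich: restrict stage 2 to the biomarker-positive subgroup"
```

In practice, any such rule must also account for the variability of the interim estimates and its effect on the familywise type I error rate.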

Event-Driven

Many clinical studies define time-to-event variables as key efficacy measurements; this is especially common in oncology trials. Such studies can run for the full treatment duration of each patient, meaning the study does not end until the last patient completes the last assessment visit. This design maximizes the treatment time for all patients, which increases the opportunity to observe the predefined events, since the number of events is the major contributor to the study power.
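Because power is driven by events rather than by sample size alone, planning typically starts from a required number of events. The sketch below applies Schoenfeld’s well-known approximation for a 1:1 randomized log-rank comparison; the hazard ratio, alpha, and power shown are illustrative inputs:

```python
# A minimal sketch of Schoenfeld's formula for the required number of
# events in a time-to-event comparison with 1:1 randomization.
from math import log, ceil
from scipy.stats import norm

def required_events(hazard_ratio: float, alpha: float = 0.05,
                    power: float = 0.9) -> int:
    """Events needed for a two-sided log-rank test at level alpha."""
    z_alpha = norm.ppf(1.0 - alpha / 2.0)
    z_beta = norm.ppf(power)
    return ceil(4.0 * (z_alpha + z_beta) ** 2 / log(hazard_ratio) ** 2)

print(required_events(hazard_ratio=0.75))  # ~508 events
```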

With event-driven studies, biometrics teams need to manage the possibility of operational bias, as the enrollment duration for these studies can be relatively long. The participant pool and even investigator behavior can shift over time, leading to variability in the data. During the analysis stage, sensitivity analyses should be planned carefully to evaluate the existence and impact of possible operational biases.

Noninterventional

In a noninterventional study, patients are prescribed the marketed medicine according to its label, and the study investigators plan to have as little influence as possible on the patients’ care. In general, noninterventional studies allow researchers to study a drug’s effectiveness and safety in real-life settings; the studies are usually not as restrictively controlled as clinical studies in Phases I–IV.

Due to the “uncontrolled” nature of noninterventional studies, many operational or statistical biases may be introduced during their conduct. Known and unknown confounding factors may affect the study endpoints, compromising the assumptions behind many statistical analysis methods. The statistical analysis should be carefully planned to account for such factors.
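One common way to account for measured confounders is inverse-probability-of-treatment weighting built on a propensity score. The sketch below is a minimal illustration with a hypothetical data frame and column names; a real analysis would need balance diagnostics, far larger samples, and a plan for unmeasured confounding:

```python
# A minimal sketch of inverse-probability-of-treatment weighting to
# adjust for measured confounders; data and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "age":               [54, 61, 47, 70, 58, 66],
    "baseline_severity": [2, 3, 1, 3, 2, 3],
    "treated":           [1, 0, 1, 0, 1, 0],
    "outcome":           [1, 0, 1, 1, 0, 0],
})

# Step 1: model the probability of treatment given measured confounders.
X = df[["age", "baseline_severity"]]
ps = LogisticRegression().fit(X, df["treated"]).predict_proba(X)[:, 1]

# Step 2: weight each patient by the inverse probability of the
# treatment actually received.
weights = np.where(df["treated"] == 1, 1.0 / ps, 1.0 / (1.0 - ps))

# Step 3: compare weighted outcome means, approximating a pseudo-
# randomized contrast (valid only if no unmeasured confounding).
mask = (df["treated"] == 1).to_numpy()
y = df["outcome"].to_numpy()
risk_diff = (np.average(y[mask], weights=weights[mask])
             - np.average(y[~mask], weights=weights[~mask]))
print(f"weighted risk difference: {risk_diff:.2f}")
```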

Master Protocol

Master protocol studies, whereby a group of individual clinical studies are governed by a common document, can save much of the time and effort inherent in conducting multiple clinical studies with similar or relevant objectives.

The master protocol concept is applicable in many areas, from shared key efficacy and safety assessments to families of similar clinical studies, and the challenges for statistical evaluation are many. The methodologies selected need to serve each individual study appropriately as well as work in concert across studies. Any contradictory needs require thorough evaluation and often more complex, layered statistical approaches.

Preserving Integrity with Innovative Approaches to Data

Preserving scientific integrity is the core challenge for data and biostatistics teams when involved in a nontraditional study, and the concerns can be sorted into three broad categories: data source, data collection, and data analysis.

In traditional clinical trials, treatment assignment is almost always randomized. To use real-world evidence, or other alternative sources, we need to apply new statistical methodologies both for handling the lack of randomization and for combining these sources of data in a consistent and accurate manner. When new kinds of information come in, we need to develop corresponding statistical models to process that information correctly.

For example, vital signs come in on the case report form; blood samples go to a central or local lab, where they are analyzed by machine; and X-ray or MRI imagery must be read by a physician or radiologist, who writes up a report or fills out a data collection form for the clinical trial.

Biostatistics teams have to appropriately incorporate human-entered data from the site, machine-read data from the analytical lab, and imagery reports, which are often in narrative form. In Phase II or III studies, data from earlier phases, or even from preclinical work, can also be included in the analysis. Further, all that historical information has to be translated into quantitative form to contribute to the analysis.
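One established route for turning historical information into quantitative form is Bayesian borrowing. The sketch below uses a power prior on a binary endpoint; the counts and the discount factor a0 are illustrative assumptions, not values from any real trial:

```python
# A minimal sketch of borrowing historical control data through a power
# prior on a binary endpoint; all numbers are illustrative.
from scipy.stats import beta

# Hypothetical historical control data: 18 responders out of 60 patients.
hist_resp, hist_n = 18, 60
a0 = 0.5  # discount factor: 0 ignores history, 1 pools it fully

# Power prior: downweight the historical likelihood by a0 on top of a
# vague Beta(1, 1) prior for the response rate.
prior_a = 1 + a0 * hist_resp
prior_b = 1 + a0 * (hist_n - hist_resp)

# Current-trial control arm: 9 responders out of 30 patients.
cur_resp, cur_n = 9, 30
posterior = beta(prior_a + cur_resp, prior_b + (cur_n - cur_resp))

print(f"posterior mean response rate: {posterior.mean():.3f}")
print(f"95% credible interval: ({posterior.ppf(0.025):.3f}, "
      f"{posterior.ppf(0.975):.3f})")
```

The choice of the discount factor governs how much the historical data influence the result and typically requires its own justification.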

Big Data and Clinical Trials

Despite the conservative nature of the clinical trial research community, big data are almost certainly part of our future. The right mix of statistics, computer modeling, data mining, and machine learning can enable a deeper level of understanding, yielding new insights faster and with less risk.

The digital technologies for utilizing and applying big data are not currently mature enough to pass a risk management assessment for participant safety, but the frameworks are being tested and perfected in other industries. A close analogue is the Autopilot feature in Tesla automobiles: when it is engaged, real-time data on the road must be collected and analyzed to support driving decisions, a technological feat in and of itself.

The situation in clinical studies, however, is even more complicated than driving on the highway. Unlike highways, which are relatively static, clinical trials have few if any similar guardrails, and it is not practical to preload what lies ahead to guide a particular clinical study. Yet.

However nascent, the work has begun. Several universities, including Stanford and Oxford, have established research institutes to study the applications of digital technologies in clinical research. While many challenges can make progress feel painstakingly slow, the fundamental potential is heady.

Conclusion

As exploratory work progresses on applying big data to various currently unsolvable challenges, those of us leading biostatistics and trial teams can continue to stretch our innovation muscles. Our ability to approach data analysis for nontraditional studies with flexibility, creativity, and integrity today will become the foundation for growth in a big data tomorrow.

Dejun (DJ) Tang, PhD, is Senior Vice President of Data Services at Firma Clinical Research, and has 25 years of experience in biometrics in the pharmaceutical and healthcare industry.