OPINION: Is Bias Inherent in the Current Reporting Practices for Adverse Events?

In clinical research, the collection of adverse events (AEs), not the scientific method, is how safety is demonstrated. The U.S. Food and Drug Administration (FDA) defines an adverse event as “any untoward medical occurrence associated with the use of a drug in humans, whether or not considered drug related.”

There are two sources for the collection of AEs: the first is the subject, and the second is the principal investigator (PI), who identifies changes in the examination of the subject and in laboratory tests.

At the outset, the term “adverse event” in and of itself bespeaks a certain prejudice. An unintended and unrecognized effect of using the term is the introduction of a subtle source of error: it rests on the unfounded assumption that every effect of a treatment that was not directly intended will be adverse.

Although the informed consent form generally instructs the patient to report changes in health, the data on unexpected effects that are actually sought and captured are almost entirely about AEs. This bias is reflected in the mindset of clinical researchers and tends to be communicated to potential subjects during the consent process.

This is to say, there is no specific mechanism in place for capturing “positive side effects.” (It should be noted that these observations are not the result of a formal review of the literature, but are based upon personal experience from conducting clinical trials over the last 10 years.)

The Scientific Method in Clinical Research

The framework for modern clinical research is formed by the principles of the scientific method, along with codified principles of human protection, including regulations of the FDA and the tenets of Good Clinical Practice (GCP) from the International Conference on Harmonization. The boundary at which these two sets of principles meet is the collection of AEs. The gathering of such data plays a critical role in ensuring not only the safety of study subjects, but also the safety of future consumers.

Considering the historical development of clinical research as a specialty within healthcare, it should not be surprising that the role of bias in how experiments on human subjects are conducted has been studied extensively; the effect of bias in the collection of AEs, however, has received scant attention. Beyond the sound statistical treatment of AEs, surprisingly little critical attention has been paid to how they are collected.

To understand, and even improve, clinical trials, it is instructive to examine the evolution of clinical research from a historical perspective. The philosophical bedrock common to all fields of modern science is the scientific method. The Oxford English Dictionary defines the scientific method as “a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses.”1 This is a rational process that gradually draws our understanding of the universe into ever clearer focus.

This somewhat dry definition does not convey the initial excitement and enthusiasm that ignite the process. Curiosity and wonderment at an observed natural phenomenon are frequently what prompt the first step: observation. The initial observation is followed, sometimes quickly and sometimes slowly, by an intuition or insight that imparts meaning to some aspect of the phenomenon. This is called the hypothesis.

The next step is an attempt to establish veracity: the hypothesis is tested in an experiment designed to yield certain results if the hypothesis is true. The experiment stands at the very heart of the scientific method; it may be defined as a process of testing, under controlled conditions, the validity of a hypothesis as determined by an evaluation of the measurements obtained during the testing. The final step brings the collection and interpretation of the data.

Applying What We Know

Humankind’s strivings to understand the greater world around it have co-existed for millennia with attempts to understand the human diseases within. Both of these aspects of understanding have evolved over time, but it is only a relatively recent development that the scientific method has been applied to the study of human disease. In 1943, the patulin study for the treatment of the common cold was the first double-blind, controlled study, and in 1946, the trial of streptomycin was the first randomized, controlled study.2

At the same time, applying the scientific method to the study of human disease has introduced an unprecedented challenge: the protection of the human subjects involved. Efforts to ensure the safety of human subjects begin long before a drug or device reaches the stage of clinical research. Most compounds are eliminated during extensive preclinical testing; ideally, only the most promising, in terms of both effectiveness and safety, ever make it to clinical trials.

When experimentation involves the use of investigational drugs or devices in human subjects, both efficacy and safety must be demonstrated. Since safety is such an important issue, it is not only reasonable, but appropriate, that the reporting of AEs has attained such a prominent role in clinical trials. These two goals are the very essence of the FDA’s mission statement:

“FDA is responsible for protecting the public health by assuring the safety, efficacy, and security of human and veterinary drugs, biological products, medical devices, our nation’s food supply, cosmetics, and products that emit radiation.”3

Therefore, it is understandable that the current system has been purposely designed to protect consumers from the consequences of approving a drug with unrecognized health hazards.

Standing the Test of Time

The scientific method is the means by which efficacy is shown; it has stood the test of time and led to an explosion of information, a well-founded understanding of how the world around us works, and breathtaking advancements in applied science. It has also become clear that, even in the most carefully designed experiments, errors can occur. Most of the refinements in the scientific method have been advanced as a direct result of the relentless pursuit of identifying and eliminating, or mitigating, such errors.

Error is broadly defined as “the difference between the true value of a measurement and the recorded value of a measurement.”4 Error can be divided into two broad categories, random error and systematic error, as described further below:

  • There are a number of sources of random error. For example, variation in how measurements are obtained is a common problem and is addressed by rigorous standardization procedures. Furthermore, because random error is, in fact, random and not directional, the net effect of this type of error tends toward zero when the sample size is large enough.
  • Systematic error, also known as bias, is not the result of variations due to chance. Bias is the tendency, either intentional or unintentional, to over- or under-estimate the effects of an intervention. Since bias (as opposed to random error) is directional, increasing the sample size or the number of observations does not ameliorate the effect. According to one source, “In fact, bias can be large enough to invalidate any conclusions. In human studies, bias can be subtle and difficult to detect. Even the suspicion of bias can render judgment that a study is invalid. Thus, the design of clinical trials focuses on removing known biases.”4 This contrast with random error is illustrated in the brief simulation sketch that follows this list.
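To make the contrast concrete, the following is a minimal simulation sketch, written in Python purely for illustration; the true value, noise level, and bias offset are hypothetical numbers chosen for this example, not figures drawn from any study or source cited here. Repeated measurements of a known quantity are perturbed first by random noise alone, and then by random noise plus a fixed directional offset standing in for systematic bias, and the average recorded error is compared across sample sizes.

    # Illustrative sketch only: random error averages out as the sample grows,
    # but a fixed systematic bias does not.
    import random

    TRUE_VALUE = 100.0      # hypothetical true value being measured
    RANDOM_SD = 5.0         # spread of the random measurement noise
    SYSTEMATIC_BIAS = 2.0   # fixed directional error added to every measurement

    def mean_error(n, bias, seed=42):
        """Average difference between recorded and true values over n measurements."""
        rng = random.Random(seed)
        recorded = [TRUE_VALUE + bias + rng.gauss(0, RANDOM_SD) for _ in range(n)]
        return sum(recorded) / n - TRUE_VALUE

    for n in (10, 1_000, 100_000):
        print(f"n={n:>7}  random error only: {mean_error(n, 0.0):+.3f}   "
              f"random error plus bias: {mean_error(n, SYSTEMATIC_BIAS):+.3f}")

As the sample size grows, the mean error of the noise-only measurements collapses toward zero, while the biased measurements settle near +2.0 no matter how large the sample becomes, which is exactly why enlarging a trial cannot compensate for a systematic error in how its data are collected.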

Clearly, establishing the validity of clinical trials by keeping them as free from bias as possible is the major focus of evidence-based medicine. To this end, the authoritative Cochrane Handbook for Systematic Reviews of Interventions provides guidelines for evaluating the quality of clinical research and identifies six subclasses of bias: selection, performance, detection, attrition, reporting, and miscellaneous.5

Consequences of Bias in the Reporting of Adverse Events

“Does it really matter if only AEs are collected?” is a reasonable question. After all, there are postapproval processes (e.g., Phase IV studies, registries, annual reports, postmarketing surveillance) in place that could capture these data. The problem is that all of these processes focus on the collection of AEs only. One can find on the FDA website a statement that, “Because all possible side effects of a drug can’t be anticipated based on preapproval studies involving only several hundred to several thousand patients, FDA maintains a system of postmarketing surveillance and risk assessment programs to identify [AEs] that did not appear during the drug approval process.”6

Therefore, the answer to the above question is a resounding yes, for several reasons:

  • Subjectivity: Asking subjects to report “adverse events” as opposed to “changes in health” introduces a greater degree of subjectivity. Some subjects may interpret the exact same symptom in two diametrically opposed ways. For example, suppose a drug in a clinical trial causes mild anorexia. An obese subject may not report this symptom as an AE since the subject may actually view it as a positive effect, whereas an underweight subject may report it as an AE. If subjects were counseled to report all changes in health, both positive and negative, then this particular symptom would have been captured in both subjects. The same sort of problem could be encountered in the assessment of lab results. The PI may interpret a slight decrease in the hematocrit as an AE, but not an increase in the hematocrit of similar magnitude.
  • Greater Understanding: Positive changes in the subject’s symptoms, physical findings (the lowering of blood pressure, for example), and labs (such as the lowering of cholesterol) could provide scientists with a greater understanding of a drug’s mechanism of action.
  • Overlooking Benefits: By ignoring positive changes in health, researchers could potentially overlook important, as-yet unrecognized, uses for the drug under study. Amantadine is one such example: the FDA first approved its use in 1966 for influenza, and the subsequent observation that it improved Parkinsonian symptoms, a positive change in health, led to its approval for the treatment of Parkinsonism.
  • Innovation: The bias toward collecting only adverse effects is so ingrained and pervasive that it can blind researchers to a potential positive use suggested by a negative effect. For example, Neucardin™ is a fragment peptide of human neuregulin-1 (NRG-1) that binds to the ErbB4 receptor tyrosine kinase, a member of the epidermal growth factor receptor family, on cardiac myocytes. When inhibition of NRG-1 signaling was first studied as a possible treatment for breast cancer, a serious, attributable AE occurred in the form of congestive heart failure. A less inquisitive investigator would have stopped there and relegated the compound to the dust bin. Undeterred, the investigators reasoned that if inhibition caused heart failure, then stimulation might improve it. Such reasoning has led to a very promising new avenue in the treatment of congestive heart failure.
  • Negative Perception: Although not proven, subjects who are instructed to report only “side effects” may be more likely to report a greater number of AEs than subjects instructed to report any changes in health.
  • Evidence-Based Medicine: Finally, a dispassionate search for all effects, both positive and negative, fosters a sense of objectivity that is a hallmark of all scientific endeavor.

Working Toward Bias Elimination

The first step before undertaking any change in reporting practices is to verify the nature and scope of the suspected problem described here. If it should be determined that shortcomings due to typical AE reporting practice are pervasive, then the next step would be to determine how to address them.

It will take a groundswell of interest to eliminate biased reporting language from clinical trial data. While researchers, scientists, and pharmaceutical companies may take the initial step by creating industry-wide dialogue and awareness of this issue, eliminating such language will ultimately require the support and collaborative involvement of the FDA.

This brief discussion of biased reporting language should be sufficient for the FDA and the pharmaceutical and medical device industries to recognize that similar events may not be interpreted and reported the same (or at all) by every subject, or be viewed by every investigator in every trial as “adverse.” The informed consent instructs subjects to report all “changes in health”; therefore, all changes in health observed during the course of a clinical trial, including both “positive side effects” and “adverse events,” should be sought and captured under the more neutral, unbiased term of “changes in health.”

PEER REVIEWED

References

  1. Simpson JA, Weiner ESC. 1989. The Oxford English Dictionary. Oxford: Clarendon Press.
  2. Bhatt A. 2010. Evolution of clinical research: a history before and beyond James Lind. Persp Clin Res 1(1):6–10. www.picronline.org/downloadpdf.asp?issn=2229-3485;year=2010;volume=1;issue=1;spage=6;epage=10;aulast=Bhatt;type=2
  3. U.S. Food and Drug Administration. Statement of FDA Mission. www.fda.gov/downloads/aboutfda/reportsmanualsforms/reports/budgetreports/ucm298331.pdf
  4. Penn State Eberly College of Science. STAT 509—Design and Analysis of Clinical Trials. Lesson 4: bias and random error. https://onlinecourses.science.psu.edu/stat509/node/26
  5. Higgins JPT, Green S. 2008. Cochrane Handbook for Systematic Reviews of Interventions. Chichester, England: Wiley-Blackwell.
  6. U.S. Food and Drug Administration. Postmarketing Surveillance Programs. www.fda.gov/Drugs/GuidanceComplianceRegulatoryInformation/Surveillance/ucm090385.htm

[DOI: 10.14524/CR-16-0012]

Robert Jeanfreau, MD, CPI, (robertjeanfreau@medpharmics.com) is an internist affiliated with multiple hospitals and serves as the president and medical director of MedPharmics in Metairie, La.