Results of clinical trials are only as good as the data upon which they rest. This is especially true when it comes to diversity: if most participants in a trial come from a single racial or socioeconomic group, the results may not be broadly applicable.
This form of bias is not a novel concept, but a group of researchers at the University of Illinois Chicago (UIC) that includes an ACRP Student member, along with colleagues from other institutions, has identified a potentially hidden source of it: electronic health records (EHRs).
According to a press release about the group’s Contemporary Clinical Trials commentary on this topic, the authors looked at challenges arising from the conduct of embedded pragmatic clinical trials, or ePCTs, which are generally considered a way of including more diverse participants in clinical trials by expanding beyond the confines of more rigidly controlled traditional trials.
ePCTs test the effectiveness of medical interventions in real-world settings, but the authors say they may still leave out people who are from underrepresented and underserved groups. Even when participants from these groups are included, study teams may collect incomplete or inaccurate data. These sorts of trials are conducted during routine clinical care on a wide range of patients, unlike more traditional clinical trials that use laboratory conditions and have stricter rules about who is eligible, often excluding people with underlying health conditions.
ePCTs rely heavily on EHRs for data collection, which is problematic in a few ways, the authors write. To start with, only people who access healthcare services will have a health record, so health information from groups that have difficulty seeing healthcare providers, whether because of cost, travel time, or distrust of the medical system, won’t be in these systems. Furthermore, ePCTs sometimes rely on participants to self-report their symptoms within a patient portal that connects them to their electronic records. However, these systems can be inaccessible for people who don’t have reliable access to the internet and smartphones, and can be difficult to understand for those with less education or with limited proficiency in the languages used in the questionnaires.
This reliance on electronic records is “almost a hidden form of bias,” explained Dr. Andrew Boyd, UIC associate professor of biomedical and health information sciences and lead author of the commentary. This is especially problematic when it comes to artificial intelligence (AI) algorithms, which are becoming more common in medical decision-making. “If these groups are not deliberately sought out for trials, then ultimately the AI or machine learning isn’t going to meet their needs,” Boyd noted.
Among the coauthors on the published commentary is ACRP Student member Juanita Darby, DC, MBA, MSHIM, RN, a doctoral student and biomedical research fellow who is focusing on clinical research studies in the UIC College of Nursing. Darby is also a self-employed chiropractic physician and registered nurse running the Darby Wellness & Health Center, and an associate faculty member of the University of Phoenix’s College of Health Professions.
“I began a second career in healthcare and nursing after a lengthy first career in the business information technology [IT] arena,” Darby told ACRP. “I [had] worked in IT in mid-size and large corporations, including working in healthcare IT on major Epic EHR System implementations.” After earning her Doctor of Chiropractic designation and working as a chiropractic physician for several years, she says she went back to school yet again for nursing studies and “became interested in the beneficial work of clinical research and the idea of developing new knowledge for clinical use as evidence of effectiveness of processes, interventions, technologies, etc. Because of my background and experience, I have a strong interest in research on EHR and electronic medical record systems, including the use of telehealth. Because of my background as a chiropractic physician and an acupuncturist, I have been involved in research on pain management and will continue to do so in the future.”
Darby and her fellow authors on the EHR commentary offer several ideas for remedying the problems they highlight. For example, researchers could use text messages to recruit participants who don’t have an EHR. This would also help those who do have an electronic record but don’t have easy access to the internet and might have to spend a good deal of time traveling elsewhere to use a computer. Studies that use patient-reported outcomes must also ensure that the questionnaires are written at an appropriate literacy level; the authors recommend that community groups be involved in reviewing these sorts of questionnaires. EHRs should also include information about people’s multiple identities and experiences, such as religion, sexual identity, and educational status, so that researchers can consider the effects of intersectionality on what is being tested in an ePCT.
The commentary grew out of discussions among a national group of researchers, including Darby, who all conduct ePCTs. At UIC, the authors are part of an NIH-funded study on using guided relaxation and acupuncture to reduce the chronic pain of sickle cell disease.
Edited by Gary Cramer