A Roundup of Perspectives and Predictions on Research Misconduct, Inclusivity, AI, and More

Clinical Researcher—December 2025 (Volume 39, Issue 6)

INSIGHTS & INTROSPECTIONS

Edited by Gary W. Cramer, Managing Editor for ACRP, with contributions from Muhammad Waseem, MBBS, MS, FAAP, FACEP, FAHA, FSSH, CPI, FACRP; Joy Jurnack, RN, CCRC, FACRP; Olutola O. Adetona, MD, MPH, CPI; and a sampling of industry leaders

 

Building a Culture of Transparency and Trust in the Shadow of Research Misconduct

Scientific research depends on honesty, objectivity, and trust. This includes trust that investigators will follow ethical standards, that institutions will protect participants, and that results will benefit the public. When this trust is broken through misconduct, the effects extend beyond a single study or organization. How the research community responds, whether with openness or secrecy, determines whether the public’s confidence in science is lost or regained.

Transparency is crucial, especially when research misconduct threatens scientific trust. History provides many warnings about how a lack of openness can cause more harm. The Tuskegee Syphilis Study (1932–1972) is a key example. U.S. Public Health Service researchers hid treatment and misled participants, leading to unnecessary suffering and death. The eventual discovery, after decades, caused outrage and seriously damaged trust in the medical community, particularly among Black Americans. This mistrust continues to impact participation in clinical research today.

Similarly, the Guatemala Syphilis Experiments (1946–1948), in which U.S. researchers intentionally infected prisoners and psychiatric patients without their consent, were kept secret for decades and only publicly acknowledged in 2010. The delay in transparency worsened moral injury and hindered reconciliation efforts.

In the biomedical research field, the Andrew Wakefield vaccine–autism scandal (1998) demonstrates how institutional hesitation to confront fraud quickly allows misinformation to spread. The failure to address his falsified data openly and immediately allowed vaccine hesitancy to grow worldwide, resulting in disease outbreaks and ongoing distrust in public health advice.

The Duke University cancer genomics case (2006–2010) is a recent example of transparency through correction. After discovering that key genomic data had been altered, Duke launched an independent investigation, retracted faulty papers, and implemented reforms to research integrity. Although the institution faced reputational damage, its decision to publicly share the investigative results was a positive step toward restoring credibility.

Beyond biomedical research, failures in transparency have damaged public trust. The Stanford Prison Experiment (1971), once seen as proof of situational cruelty, was later revealed to involve unreported manipulation by researchers. The reluctance to revisit or disclose these issues for many years skewed scientific discussions and cast doubt on academic integrity. Similarly, social psychology’s “replication crisis,” in which many influential findings could not be replicated, demonstrates that sharing open data and using transparent methods are crucial not only for preventing misconduct but also for supporting scientific self-correction.

Institutions that prioritize transparency, even when it’s challenging, demonstrate accountability. The Harvard University investigation of Marc Hauser (2010), a prominent cognitive scientist found guilty of research misconduct, led to public disclosure of findings and policy reforms to enhance oversight. The university’s transparency, though difficult, showed that integrity is more important than reputation.

Taken together, these cases show that misconduct is often less harmful than the secrecy that follows it. Public trust isn’t lost because scientists make mistakes. It is lost when those mistakes are hidden, denied, or downplayed. Transparency about wrongdoing, along with effective corrective measures, shows respect for participants and the public and supports the scientific field.

Disclosing conflicts of interest is essential for promoting transparency and building trust in research. It should be taken seriously because it influences how people view the independence of a study and can also affect the researchers’ future reputation and credibility. When investigators openly acknowledge personal, financial, or professional relationships that might affect their judgment or interpretation of results, it allows peers, participants, and the public to evaluate the research more critically. Transparent disclosure does not imply misconduct or bias; instead, it demonstrates accountability and integrity.

By identifying and managing potential conflicts early, institutions can put safeguards in place, such as independent data analysis, oversight committees, or recusal from decision-making roles. This proactive approach helps preserve the integrity of scientific research, reassures participants that their well-being is a priority, and boosts public trust in research institutions. In this way, conflict-of-interest disclosure is not a punishment but a vital tool for transparency and ethical responsibility in research.

Institutions must take deliberate and proactive steps to promote transparency and rebuild trust. First, research organizations should conduct independent audits of compliance and misconduct, following established policies that require prompt reviews, corrective actions, and public reporting of confirmed misconduct, including clear descriptions of violations and the measures taken to address them. Second, individuals directly affected by misconduct should be notified quickly and respectfully, acknowledging the harm and explaining the corrective steps. Third, academic journals and funding agencies should support open data policies, pre-registration, and post-investigation disclosures to ensure accountability at every stage of research. Finally, education in research ethics should emphasize that transparency is not merely a procedural formality, but a moral obligation rooted in honesty, respect, and justice.

In the long run, transparency protects not only the credibility of individual studies but also the very foundation of science itself. By openly addressing misconduct, science upholds its highest value—the pursuit of truth—even when that truth is uncomfortable.

Muhammad Waseem, MBBS, MS, FAAP, FACEP, FAHA, FSSH, CPI, FACRP, Professor of Emergency Medicine and Pediatrics at Weill Cornell Medicine and Research Director for Emergency Medicine at Lincoln Medical & Mental Health Center, where he is also Vice Chair of the Institutional Review Board

Thoughts on Diversity, Equity, and Inclusion

Since the U.S. Food and Drug Administration (FDA) removed its draft guidance on diversity, equity, and inclusion (DEI) in clinical trials earlier this year, I have refused to see this as an obstacle—in fact, it’s a challenge. There are ways to ensure DEI remains front and center because we know, as researchers, that drugs/devices can work differently for different individuals, depending on their race, ethnicity, gender, and other factors.

When I worked at Mount Sinai Medical Center in New York City from 1992 to 2003, part of our yearly institutional review board submission included providing the racial breakdown of our catchment area based on U.S. Census data. This was brilliant, because you got a clear picture of who represented the immediate catchment area. If memory serves me, it was mostly a white area, with fewer than 10% of residents identifying as Asian or African American. So, it was no surprise when our enrollment reflected this.

Using New York City as an example, sponsors need to consider what the targeted racial breakdown is and go to research sites that serve that community. Sites cannot readily prioritize racially diverse enrollment if success is unlikely given the population they serve.

It would be helpful for sites to know their population mix by reviewing data from the U.S. Census Bureau. Currently, the race categories used are American Indian and Alaska Native, Asian, Black or African American, Native Hawaiian or Other Pacific Islander, and White, with the ability to select multiple entries. Hispanic, Latino, or Spanish origin is considered part of one’s ethnicity, with the option to collect more specific data.

The racial and ethnic composition of different areas varies greatly. Sites can easily provide specific breakdowns when submitting feasibility documents. Sponsors can then use these data to ensure that participants with the specifically targeted DEI factors will be enrolled. This is one way we, as researchers and sponsors, can ensure that DEI is thoughtfully considered and that enrollment reflects diversity.

— Joy Jurnack, RN, CCRC, FACRP, CEO of Coastal Research

Addressing the Challenges of Patient Recruitment with AI

Patient recruitment and retention in clinical trials pose significant challenges in drug development. Factors such as geographical constraints, time commitments, and lack of awareness about trials contribute to low recruitment and high dropout rates. Additionally, historical mistrust in the medical system and inadequate outreach efforts deter participation, especially among minority groups.

Addressing these challenges is crucial for enhancing the efficiency and cost-effectiveness of clinical trials. The integration of artificial intelligence (AI) holds promise in revolutionizing patient recruitment and retention processes. AI can streamline trial designs, optimize screening assessments, and enhance patient outreach strategies. By leveraging AI technologies, researchers can personalize patient engagement, improve communication, and overcome barriers to participation.

AI-powered tools offer the potential to identify suitable candidates more efficiently, tailor trial information to individual needs, and predict patient behaviors to enhance retention rates. Through AI-driven analytics and algorithms, the healthcare industry can transform how clinical trials are conducted, making the recruitment and retention of participants more effective and sustainable.

In conclusion, AI presents a transformative opportunity to address the complexities and inefficiencies associated with patient recruitment and retention in clinical trials. By harnessing the power of AI, stakeholders can enhance accessibility, engagement, and trust within the clinical trial process, ultimately advancing drug development and improving patient outcomes.

— Originally posted on LinkedIn by Olutola O. Adetona, MD, MPH, CPI, at Cedar Health Research, and shared here with his permission

Predictions from Industry Leaders for 2026

Clinical development is bottlenecked by manual processes. Clinical research associates (CRAs) are overworked, often experiencing burnout. In 2026, agentic artificial intelligence will unlock the keys to improved clinical development, starting by alleviating within-trial white space and empowering CRAs to focus more on sites and trial operations rather than administrative tasks. To achieve value, agents must be measured against the outcomes they deliver, including end-to-end autonomy. In 2026, we will see agents begin to deliver high-value segments of the value chain—ultimately enabling full clinical development autonomy with as-needed human oversight for key activities.

— Michelle Longmire, CEO/Co-Founder of Medable

While political cycles may drive temporary uncertainty, the fundamentals of good clinical research remain global. The continued discovery of applicable scientific innovation, diversity of patient populations, pursuit of personalized and rare disease cohorts, and the need for international regulatory data ensure that cross-border trials will endure. The likely outcome next year is not a wholesale return of trials to the U.S., but rather a recalibration: a more resilient, hybrid model that strengthens domestic participation without sacrificing global reach. For forward-looking contract research organizations and sponsors, this transition offers a chance to build smarter, more flexible, technology-enabled trial ecosystems that thrive regardless of political tides.

— Patrick Flanagan, CEO of Veristat

As sponsors face growing pressure to attract and retain high-performing research sites, 2026 will likely bring a renewed focus and greater investment in strategies designed to support and empower sites. These efforts will range from reassessing sourcing models (contract research organization [CRO] vs. functional service provider vs. in-house) to adopting more flexible and responsive engagement tools. Unfortunately, many sites remain skeptical of new “support” systems that often add complexity rather than reduce it. A recent Tufts Center for the Study of Drug Development Impact Report found that 70% of global investigative site staff believe trials have become significantly harder to manage over the past five years. So, while sponsors and CROs share a commitment to improving transparency and trial execution, it’s essential that any changes minimize the operational burden on sites. The next wave of advancement will leverage technologies like artificial intelligence to streamline operations and enhance visibility without disrupting the natural flow of daily site activities.

— Kevin Williams, Executive Vice President and Chief Strategy Officer of Ledger Run

In 2025, more medical technology (MedTech) commercial teams moved toward putting precise, timely market data in the hands of their representatives rather than static annual plans so they could see market shifts early and adjust as needed. With Medicare and Medicaid cuts coming next year, those shifts are going to be faster and more severe, forcing commercial organizations to become even more agile. In 2026, MedTech commercial teams will stop relying on the annual plan to tell them where to go next. The best teams will use timely data to reallocate resources weekly, just like technology companies do with product sprints. Instead of static yearly plans that go stale by the second quarter, winning organizations will adapt in real time. We will see shorter feedback loops between product, marketing, and sales, and more decisions made from the field up, not the boardroom down. That shift will separate the companies that grow from the ones that just react.

— Alex Wakefield, Chief Revenue Officer for AcuityMD

ACRP