Embracing the Future: Opportunities and Challenges of AI Integration in Healthcare

Clinical Researcher—February 2024 (Volume 38, Issue 1)

PEER REVIEWED

Shriya Das, MS, MSc

The rapid evolution of artificial intelligence (AI) in healthcare promises transformative impacts across medical domains. AI excels at harnessing vast amounts of data for many purposes; this article emphasizes, in particular, its significant potential for advancing “precision medicine.” However, the surge in AI technologies comes with notable challenges, including concerns over patient data privacy, the unpredictability of AI in clinical scenarios, and potential breaches linked to extensive data sharing. The future integration of AI demands a multifaceted approach, from regulating data management and ensuring transparency to reshaping medical education so that it better incorporates an understanding of technology. In this context, attending to stakeholder perspectives and prioritizing human-led decisions stand out as pivotal. To truly harness the benefits of AI in medicine, these challenges must be navigated through a holistic, informed approach.

Background

AI in healthcare is advancing rapidly with numerous applications across medical domains. Studies highlight AI’s role in interpreting radiographs, detecting cancers from mammograms to skin lesions, analyzing CT scans, identifying brain tumors, and predicting Alzheimer’s disease from PET scans. AI assists in areas like pathology, retinal imaging, arrhythmia detection, hyperkalemia identification from ECGs, colonoscopy polyp detection, genomics, facial genetic condition identification, and optimizing in vitro fertilization success. Using vast electronic health record (EHR) data, AI extracts critical clinical information, predicts patient risks, improves diagnostic evaluations, anticipates health deteriorations, enhances decision-making, and streamlines clinical workflows. This includes tasks like analyzing doctor-patient interactions and predicting hospital appointment attendance.{1}

Clinical care is multifaceted, necessitating a strategic use of AI to elevate care quality and benefit both patients and providers. Operationally, AI in clinical management should address challenges in care logistics, data handling, and algorithm oversight. Broader societal efforts are essential for crafting ethical, regulatory, and payment structures. A holistic, informed approach to AI will foster a dynamic environment for its integration into clinical practice.{2}

Meanwhile, the future of AI in medicine hinges on the ability to protect the privacy and security of health data. With the rising concerns about hacking and data leaks, there’s a pressing demand for algorithms that safeguard a patient’s medical information.{3}

AI is the Future

AI excels at gathering and merging vast and diverse data to individualize medicine, and it continually evolves as it learns from those data. Big data has a complementary role to play: technologies such as “smart wearables” can increase the power of medical AI by supplying large volumes of diverse, health-relevant data collected directly from the user. The combined impact of these technologies will help move us closer to precision medicine, an emerging approach to disease treatment and prevention that takes into account individual variability in genes, environment, and lifestyle.{4}

Consider the use of AI for pre-screening potential subjects for a clinical trial at the site level. Numerous studies indicate that EHRs contribute significantly to administrative strain and to rising burnout among both trainee and practicing doctors. AI can speed up the process of identifying potential participants through database searches and of collecting their medical histories for review.
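
As a minimal illustration of what such a database search might look like, the sketch below filters hypothetical EHR-style records against simple inclusion and exclusion rules (an age range, an ICD-10 diagnosis code, an excluded medication). The field names, codes, and criteria are illustrative assumptions rather than any particular system’s schema, and the output is only a shortlist for manual chart review.

```python
# Minimal sketch of rule-based pre-screening against EHR-style records.
# Field names, codes, and criteria are hypothetical illustrations only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PatientRecord:
    patient_id: str
    age: int
    diagnosis_codes: List[str] = field(default_factory=list)
    medications: List[str] = field(default_factory=list)

def is_potentially_eligible(rec: PatientRecord) -> bool:
    """Apply simple inclusion/exclusion rules for an assumed type 2 diabetes trial."""
    meets_age = 18 <= rec.age <= 75                    # inclusion: adults up to age 75
    has_diagnosis = "E11" in rec.diagnosis_codes       # inclusion: ICD-10 E11, type 2 diabetes
    on_excluded_drug = any(m.lower() == "insulin" for m in rec.medications)  # exclusion: insulin use
    return meets_age and has_diagnosis and not on_excluded_drug

records = [
    PatientRecord("P001", 54, ["E11", "I10"], ["metformin"]),
    PatientRecord("P002", 81, ["E11"], ["metformin"]),
    PatientRecord("P003", 47, ["E11"], ["insulin"]),
]

shortlist = [r.patient_id for r in records if is_potentially_eligible(r)]
print("Candidates for manual chart review:", shortlist)  # ['P001']
```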

While natural language processing (NLP) can aid in streamlining medical record-keeping, ambient clinical intelligence (ACI) provides an interactive digital backdrop for both doctors and patients. This environment can potentially interpret patient-doctor interactions and automatically update EHRs. It can also help site staff identify a potential participant in real time, provided the correct parameters are fed to the AI tools being used. AI can be applied at the study start-up stage as well, where tools can give real-time information on potential sites with reliable patient populations, regulatory approval timelines at the country and site levels, and other considerations.
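
Building on the idea of real-time flagging, the toy sketch below scans a visit transcript for trial-relevant phrases. It is purely illustrative: the phrase list and transcript are assumptions, and a real ACI or NLP pipeline would rely on trained clinical language models, negation handling, and coded terminologies rather than keyword matching.

```python
import re

# Toy sketch: flag trial-relevant mentions in a visit transcript.
# Phrases and transcript text are hypothetical illustrations only.
TRIGGER_PHRASES = {
    "type 2 diabetes": "possible qualifying diagnosis mentioned",
    "a1c": "lab value of interest mentioned",
    "pregnant": "possible exclusion criterion mentioned",
}

def flag_transcript(transcript: str) -> list:
    """Return (phrase, reason) pairs found in the transcript text."""
    text = transcript.lower()
    return [(p, reason) for p, reason in TRIGGER_PHRASES.items()
            if re.search(re.escape(p), text)]

visit_note = ("Patient reports her type 2 diabetes has been stable; "
              "last A1c was 7.2 percent.")
for phrase, reason in flag_transcript(visit_note):
    print(f"Flag '{phrase}': {reason}")
```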

Numerous initiatives aim to harness ACI, marking a pivotal AI application in medicine to address contemporary challenges faced by medical professionals. One major obstacle in embracing AI-driven medical technologies is the concern over medicine becoming impersonal. This is often linked to the expanding administrative tasks placed on doctors. Yet, innovations like ACI and NLP promise to alleviate such administrative pressures, allowing medical professionals to center their attention on patient care.{5}

Quality and Regulatory Compliance Risk

Machine learning and deep learning need vast datasets, often larger than those generated by clinical trials. Specialties like ophthalmology have thrived thanks to abundant, curated imaging datasets, but this abundance also poses risks. While the principle of “do no harm” guides healthcare, breaches of patient privacy can cause significant damage, affecting areas such as employment or insurance and exposing patients to identity theft. Completely anonymizing data is challenging, and there is always a risk of re-identification. For instance, facial recognition can be applied to CT scans, and machine learning can discern age, gender, and health factors from various images. Over time, even non-image datasets may permit rapid re-identification once they are linked with other accumulated patient information. All of this hinges on how patient data are handled, and it is tightly intertwined with concerns about patient privacy.{6}
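
To make the re-identification concern concrete, the short sketch below pseudonymizes a few hypothetical rows by hashing the medical record number, yet the remaining quasi-identifiers (age, sex, ZIP prefix) still leave one row effectively unique, so it could be re-identified by linking to any outside dataset carrying the same attributes. All field names, values, and the salt are illustrative assumptions.

```python
import hashlib
from collections import Counter

SALT = b"example-salt"  # illustration only; real systems manage secrets properly

def pseudonymize(mrn: str) -> str:
    """Replace a direct identifier with a salted hash."""
    return hashlib.sha256(SALT + mrn.encode()).hexdigest()[:12]

# "De-identified" rows: the record number is hashed, but quasi-identifiers remain.
rows = [
    {"id": pseudonymize("MRN-1001"), "age": 37, "sex": "F", "zip3": "750"},
    {"id": pseudonymize("MRN-1002"), "age": 37, "sex": "F", "zip3": "750"},
    {"id": pseudonymize("MRN-1003"), "age": 88, "sex": "M", "zip3": "761"},
]

# A quasi-identifier combination seen only once is effectively unique, and thus
# re-identifiable despite the hashed identifier.
combos = Counter((r["age"], r["sex"], r["zip3"]) for r in rows)
for combo, n in combos.items():
    risk = "re-identifiable" if n == 1 else f"hidden among {n} records"
    print(combo, "->", risk)
```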

When utilizing AI in various stages of clinical research such as the development of protocols, execution of clinical trials, and manuscript creation, copyright issues can inadvertently arise. AI often sources information from a multitude of databases and publications to generate content or make decisions. There’s a risk that the AI might use copyrighted material, unbeknownst to the user, leading to legal complications and questions regarding the originality and authenticity of the research output. Ensuring that AI systems are designed to recognize and avoid the use of copyrighted material is essential to maintain the integrity of the research process.

AI’s data privacy challenges arise from the need for specialized expertise and resource-intensive computing, especially for rare diseases, which require data pooling from multiple institutions. This inter-institutional sharing can elevate data breach risks. Partnerships with large pharmaceutical and tech corporations heighten concerns, given the recent emphasis on data monetization, with data often likened to “the new oil.” The increasing intersection of healthcare businesses and academic data can intensify threats to privacy. Exclusive deals that limit clinical data sharing may contradict the Belmont principle of justice.{7}

Owing to AI’s inherent complexity and limited transparency, close collaboration with the firms that design and maintain this technology becomes crucial. The U.S. Food and Drug Administration has shifted its focus to accrediting the entities behind AI development, acknowledging the ever-evolving nature of AI. The European Commission has introduced legislation setting standardized AI guidelines that emphasize privacy and data management along the lines of the General Data Protection Regulation. Meanwhile, countries like Canada are still in the process of defining AI-specific regulations.{8}

Although AI systems may be trained on extensive datasets, they can encounter unfamiliar data and situations in a clinical environment. This unpredictability can reduce their accuracy and reliability, potentially compromising patient safety. Several challenges impede AI’s full acceptance in the medical field, ranging from the absence of clinical studies demonstrating superior reliability compared with conventional methods to concerns about assigning blame for medical mistakes. In particular, how data protection rules apply in this context warrants thorough consideration. Properly governing this technology through legal measures is vital to prevent a potential loss of the human touch.{9}
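
One common safeguard, offered here only as an assumed illustration rather than anything prescribed in the cited work, is to have the system abstain and route a case to a clinician when the input looks unlike the data it was trained on. The sketch below uses a crude z-score check against a stand-in training set; the features, threshold, and “model” are all placeholders.

```python
import numpy as np

# Toy illustration of deferring to a human when an input looks unfamiliar.
rng = np.random.default_rng(0)
train_features = rng.normal(loc=0.0, scale=1.0, size=(500, 3))  # stand-in training data

train_mean = train_features.mean(axis=0)
train_std = train_features.std(axis=0)

def route_case(x: np.ndarray, z_threshold: float = 3.0) -> str:
    """Flag inputs far from the training distribution for clinician review."""
    z_scores = np.abs((x - train_mean) / train_std)
    if np.any(z_scores > z_threshold):
        return "defer to clinician (input outside familiar range)"
    return "model output may be used, with human oversight"

print(route_case(np.array([0.2, -0.5, 1.0])))  # typical case
print(route_case(np.array([0.1, 9.0, 0.3])))   # unusual case -> deferred
```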

Potential Enhancements of AI Tools in Clinical Research

Grasping the perspectives of stakeholders is pivotal when crafting policies for AI in clinical settings. It is imperative for AI developers to prioritize learning, foster dialogue, and collaborate to address differences of opinion among stakeholders. Tools that aid in visual data interpretation and data aggregation are better received by healthcare professionals than those that directly shape clinical verdicts or that might jeopardize the clinician-patient bond or the independence of clinicians. Routine digital data tasks, such as interpreting radiological or dermatological images, are perceived as more suitable for AI than hands-on or dialogue-based tasks, such as surgical procedures or consultations.

Privacy concerns are paramount, especially when AI is applied in clinical trials involving sensitive patient data. Different applications of AI, whether in protocol development, trial execution, or manuscript creation, come with varying levels of privacy concerns. For instance, when AI is used in the execution of clinical trials, it may have access to sensitive patient information, raising concerns about data security and patient confidentiality. Ensuring that AI systems adhere to stringent data protection regulations and ethical guidelines is essential to safeguard participant privacy and maintain the trust and integrity of the research process. Clinicians express concerns about potential privacy violations and the challenges in comprehending or directing AI tools. Meanwhile, patients fear the diminished role of clinicians and a lack of inclusive decision-making. A shared sentiment emphasizes the importance of human-led decisions and maintaining compassionate, personalized communication during clinical interactions. Observational studies indicate that patients lean toward human counselors who can understand their individual situations, envisioning AI as a supportive tool rather than a replacement for clinical guidance.{10}

Further, accuracy is paramount in clinical research. The reliability of AI in developing protocols, executing clinical trials, and creating manuscripts depends heavily on the quality of the datasets it utilizes. AI systems are only as accurate and reliable as the data they are trained on. Inaccurate, outdated, or biased data can lead to misleading or erroneous outcomes, which can compromise the validity of the research. It is crucial to ensure that AI systems have access to comprehensive, up-to-date, and accurate datasets to enhance the reliability and validity of their contributions to clinical research.
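
As one small, generic example of the kind of check this implies, the sketch below screens a hypothetical training extract for missing values, implausible ranges, and stale records before it is used. The column names and limits are assumptions for illustration, not a prescribed standard.

```python
import pandas as pd

# Hypothetical extract destined for model training; columns and limits are illustrative.
df = pd.DataFrame({
    "patient_id": ["P1", "P2", "P3"],
    "systolic_bp": [128, None, 400],  # None = missing, 400 = implausible
    "last_updated": pd.to_datetime(["2023-11-02", "2019-05-14", "2023-12-20"]),
})

issues = []
if df["systolic_bp"].isna().any():
    issues.append("missing systolic_bp values")
if ((df["systolic_bp"] < 60) | (df["systolic_bp"] > 260)).any():
    issues.append("implausible systolic_bp values")
stale = df["last_updated"] < pd.Timestamp("2022-01-01")
if stale.any():
    issues.append(f"{int(stale.sum())} record(s) not updated since 2022")

print("Data quality issues:", issues or "none found")
```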

Finally, there might be concerns about the originality and innovation of AI-generated protocols. In the execution of clinical trials, issues might revolve around the AI system’s decision-making process, its adaptability, and its ability to handle unexpected scenarios. In manuscript creation, concerns might focus on the authenticity and credibility of the AI-generated content. Tailoring strategies and safeguards specific to each application of AI is crucial to address these unique challenges effectively.

Conclusion

Medical training needs to integrate modern technology more effectively. Current curricula offer limited exposure to the technologies that healthcare professionals will encounter. For AI systems to be properly integrated into clinical care, specialized training on these technologies, which might independently perform tasks like diagnosis and surgery, is essential. As clinician roles transform, the focus should shift toward handling intricate health situations and mastering the skills needed to interpret and convey the diverse data relevant to specific medical cases. To prepare future doctors for these challenges, a more comprehensive educational approach that encompasses an understanding of technology and its outcomes is vital.

References

  1. Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. 2019. Key challenges for delivering clinical impact with artificial intelligence. BMC Med 17(1):195. https://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-019-1426-2
  2. Shung DL, Sung JJY. 2021. Challenges of developing artificial intelligence‐assisted tools for clinical medicine. J Gastroenterol Hepatol 36(2):295–8. https://pubmed.ncbi.nlm.nih.gov/33624889/
  3. Bouarar AC, Mouloudj K, Asanza DM (editors). 2023. Integrating Digital Health Strategies for Effective Administration. IGI Global. https://www.igi-global.com/book/integrating-digital-health-strategies-effective/313178
  4. Hamid S. 2016. The Opportunities and Risks of Artificial Intelligence in Medicine and Healthcare. CUSPE Communications. https://www.cuspe.org/wp-content/uploads/2016/09/Hamid_2016.pdf
  5. Briganti G, Le Moine O. 2020. Artificial Intelligence in Medicine: Today and Tomorrow. Front Med 7. https://www.frontiersin.org/articles/10.3389/fmed.2020.00027/full
  6. Alonso A, Siracuse JJ. 2023. Protecting patient safety and privacy in the era of artificial intelligence. Semin Vasc Surg 36(3):426–9. https://pubmed.ncbi.nlm.nih.gov/37863615/
  7. Tom E, Keane PA, Blazes M, Pasquale LR, Chiang MF, Lee AY, et al. 2020. Protecting Data Privacy in the Age of AI-Enabled Ophthalmology. Transl Vis Sci Technol 9(2):36–6. https://doi.org/10.1167/tvst.9.2.36
  8. Murdoch B. 2021. Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Med Ethics 22(1):1–5. https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00687-3
  9. Guarda P. 2019. ‘Ok Google, am I sick?’: artificial intelligence, e-health, and data protection regulation. BioLaw Journal (Rivista di BioDiritto) (1):359–75. https://teseo.unitn.it/biolaw/article/view/1336
  10. Scott IA, Carter SM, Coiera E. 2021. Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health Care Inform 28:100450. http://informatics.bmj.com/

Shriya Das, MS, MSc, (shriyadas513@gmail.com) is an Independent Researcher based in Flower Mound, Texas.