Healthcare organizations have sometimes been slow to adopt new AI tools and other cutting-edge innovations because of legitimate concerns about security and transparency. But to improve the quality of care and patient outcomes, healthcare needs these innovations.
It is imperative, however, that they are applied correctly and ethically. Just because a generative AI application can pass a medical school exam does not mean it is ready to become a practicing physician. Healthcare should use the latest advances in AI and large language models to put the power of these technologies in the hands of medical experts so they can provide better, more accurate, and safer care.

Dr. Tim O'Connell is a practicing radiologist and the CEO and co-founder of emtelligent, a developer of AI-powered technology that transforms unstructured data. We spoke with him to better understand the importance of guardrails for AI in healthcare as the technology helps modernize the practice of medicine. We also talked about how algorithmic discrimination can perpetuate health inequities, legislative action to set AI safety standards – and why a human in the loop is essential.

Q. How important are safeguards for AI in healthcare, as this technology helps modernize the practice of medicine?
A. AI technologies have opened exciting opportunities for healthcare providers, payers, researchers, and patients, offering the potential to achieve better outcomes and reduce healthcare costs. However, to fully exploit the potential of AI, particularly medical AI, we must ensure that healthcare professionals understand both the capabilities and limitations of these technologies.
This includes awareness of risks such as non-determinism, hallucinations, and issues with reliable referencing of source data. Healthcare professionals must be equipped not only with knowledge about the benefits of AI, but also with a critical understanding of its potential pitfalls, to ensure they can use these tools safely and effectively in a variety of clinical settings.

It is essential to develop and adhere to a thoughtful set of principles for the safe and ethical use of AI. These principles must include consideration of concerns about privacy, security, and bias, and they must be rooted in transparency, accountability, and fairness.
Reducing bias requires training AI systems on more diverse datasets that account for historical disparities in health diagnoses and outcomes, while shifting training priorities to ensure AI systems are aligned with real-world healthcare needs. This focus on diversity, transparency, and rigorous oversight ensures that AI can be a highly effective tool that remains resilient to errors and helps deliver meaningful improvements in healthcare outcomes.

This is where safeguards – in the form of well-designed regulations, ethical guidelines, and operational controls – become essential. These protections help ensure that AI tools are used responsibly and effectively, addressing concerns about patient safety, data privacy, and algorithmic bias. They also provide accountability mechanisms, ensuring that any errors or unintended consequences of AI systems can be traced back to specific decision points and corrected. In this context, safeguards act as both protections and enablers, allowing healthcare professionals to trust AI systems while guarding against their potential risks.

Q. How can algorithmic discrimination perpetuate health inequities, and what can be done to address this?

A. If the AI systems we rely on in healthcare are not developed and trained properly, there is a very real risk of algorithmic discrimination. AI models trained on datasets that are not large or diverse enough to represent the full range of patient populations and clinical characteristics can and do produce biased results.
This means that AI could provide less accurate or less effective care recommendations for underserved populations, including racial or ethnic minorities, women, people from lower socioeconomic backgrounds, and people with rare or uncommon diseases.

For example, if a medical language model is trained primarily on data from a specific demographic, it may struggle to accurately extract relevant information from clinical notes that reflect different medical conditions or cultural contexts. This can lead to missed diagnoses, misinterpretations of patient symptoms, or ineffective treatment recommendations for populations the model was not trained to correctly recognize.
Indeed, an AI system could perpetuate the very inequities it is intended to alleviate, particularly for racial minorities, women, and patients from lower socioeconomic backgrounds, who are often already underserved by traditional healthcare systems.

To solve this problem, it is essential to ensure that AI systems rely on large, diverse datasets that capture a wide range of patient demographics, clinical presentations, and health outcomes. The data used to train these models should be representative of different races, ethnicities, genders, ages, and socioeconomic statuses to avoid skewing a system's results toward a narrow view of healthcare. This diversity allows models to perform accurately across diverse populations and clinical scenarios, minimizing the risk of perpetuating bias and ensuring that AI is safe and effective for all.

Q. Why is a human in the loop essential for AI in healthcare?

A. While AI can process vast amounts of data and generate insights at speeds that far exceed human capabilities, it lacks the nuanced understanding of complex medical concepts that is integral to delivering high-quality care. Humans in the loop are essential to AI in healthcare because they provide the clinical expertise, oversight, and context needed to ensure that algorithms operate accurately, safely, and ethically.

Consider the use case of extracting structured data from clinical notes, lab reports, and other healthcare documents. Without human clinicians to guide development, training, and ongoing validation, AI models risk missing important information or misinterpreting medical jargon, abbreviations, or contextual nuances of clinical language.
For example, a system might wrongly flag a symptom as important or overlook critical information embedded in a doctor's note. Human experts can help refine these models, ensuring they correctly capture and interpret complex medical language.

From a workflow perspective, humans in the loop help interpret and act on AI-generated information. Even when AI systems generate accurate predictions, healthcare decisions often require a level of customization that only clinicians can provide. Human experts can combine AI findings with their clinical experience, knowledge of the patient's unique circumstances, and understanding of broader healthcare trends to make informed, compassionate decisions.

Q. What is the status of legislative efforts to establish AI safety standards in healthcare, and what should legislators do?
A. Legislation to establish AI safety standards in healthcare is still in its early stages, although there is growing recognition of the need for comprehensive guidelines and regulations to ensure the safe and ethical use of AI technologies in clinical settings.

Several countries have begun to introduce AI regulatory frameworks, many of them grounded in fundamental principles of trustworthy AI that emphasize safety, fairness, transparency and accountability, and these principles are beginning to shape the conversation. In the United States, the Food and Drug Administration has introduced a regulatory framework for AI-based medical devices, particularly software as a medical device (SaMD). The FDA's proposed framework follows a total product lifecycle approach, which aligns with the principles of trustworthy AI by emphasizing continuous monitoring, updates, and real-time evaluation of AI performance. However, while this framework addresses AI-driven devices, it has not yet fully considered the challenges posed by non-device AI applications that deal with complex clinical data.

Last November, the American Medical Association published proposed guidelines for the use of AI in an ethical, fair, responsible and transparent manner. In those guidelines, the AMA reinforces its position that AI should enhance human intelligence rather than replace it.

By fostering collaboration between policymakers, healthcare professionals, AI developers, and ethicists, we can develop regulations that promote both patient safety and technological progress. Policymakers must strike a balance, creating an environment where AI innovation can thrive while ensuring that these technologies meet the highest standards of safety and ethics.
This includes developing regulations that enable agile adaptation to new advances in AI, ensuring that AI systems remain flexible, transparent and responsive to the evolving needs of healthcare.

Follow Bill's coverage of HIT on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.