News

INFORMS Analytics article: Nine Principles for the Safe Use of AI in Healthcare

Note: The views expressed in this article are those of the authors and do not necessarily reflect the views of Mass General Brigham.

INFORMS Analytics
May 15, 2024

An original article by Anant Vasudevan and Thaddeus Fulford-Jones, published in INFORMS Analytics:

https://pubsonline.informs.org/do/10.1287/LYTX.2024.02.02/full/

It’s a situation doctors and nurses see every day: An older adult with a history of diabetes and high blood pressure is admitted to the hospital due to chest pain and shortness of breath. In the hospital, we have evidence-driven guidelines that tell us how to diagnose and treat a patient like this. In most cases, we know her likely prognosis and outcomes. Thanks to proven risk scoring tools, we even have an idea of her risk of rehospitalization and likely complications down the road.

The promise of artificial intelligence (AI) and machine learning (ML) is to go one step further: a real-time “copilot” to doctors and nurses that can help optimize and personalize the entire care plan, not just “for patients like me” but truly “for me.” It’s an exciting promise – but not without risk. Anyone who has experimented with ChatGPT knows that it’s all too easy for the machine to misunderstand the assignment. “Slightly wrong” quickly cascades into “bewilderingly wrong,” and when challenged, the AI confidently cites fictional “data” to back up its claims.

How do we ensure that next-generation AI/ML solutions truly achieve their intended goals in healthcare settings? Using our hospitalized older adult as an example, we’ve put together a set of nine “safety principles” for healthcare executives to incorporate into their playbooks as they assess and explore new AI/ML solutions.

  1. Data Privacy and Cybersecurity: Upon admission, the hospital’s AI system, compliant with HIPAA and cybersecurity standards, securely processes the patient’s information, ensuring privacy and data security.
  2. Factual Validity: All AI/ML systems should be rigorously tested and validated in real-world settings before launch. This ensures accurate analysis of clinical, functional and social aspects of the patient’s history. As a result, the care team has confidence that the system will generate accurate and reliable recommendations for the patient’s care.
  3. Bias Mitigation: The AI model, trained with diverse data, recognizes that adjusting for socioeconomic risk factors is a critical step in preventing the perpetuation of health inequity and ensuring a holistic, practical and personalized care plan.
  4. Change Management: The hospital staff must be thoroughly briefed on the AI system’s functionalities. This allows them to confidently explain to the patient how the AI aids in managing their condition, alleviating concerns about technology replacing human care.
  5. Information Accountability: The AI system’s “interpretability” feature allows the care team to build intuition around and trust in AI-generated insights.
  6. Unlocking Human Potential: The AI system streamlines data analysis and frees up staff to do what only humans can do. This means doctors and nurses can spend less time on information gathering and more time on creatively and iteratively problem-solving against a backdrop of evolving and nuanced patient parameters. It also means more time at the bedside for compassionate and personal interactions with patients and their families.
  7. Built to Evolve: Details of the patient’s clinical course are continuously fed into the AI system, which adapts its recommendations based on the latest data, ensuring the care plan and discharge plan remain optimized and up-to-date.
  8. Training and Education: The care team’s training on how to use the AI system enables them to seamlessly integrate AI insights into their clinical decision-making process. It also ensures they understand the system’s limitations and know how to provide sufficient oversight.
  9. Decision Accountability: All decisions guided by the AI are documented. If the patient’s condition changes unexpectedly, the team can quickly trace the decision-making process, identifying the underlying root cause and implementing care plan adjustments to resolve the issue.

Our Patient’s Outcome

The above example illustrates our nine principles for the safe use of AI in healthcare settings. The AI-assisted insights led to timely management and a robust transitional care plan for what ended up being a new cardiac condition. The transparency and security of the AI system, along with the staff’s adept use of technology, enhanced the patient’s trust in the care she received. As a result, the patient experienced a faster recovery with a well-informed, personalized treatment plan.

Anant Vasudevan

Anant Vasudevan, M.D., MBA, is an Instructor in Medicine, Harvard Medical School; Hospitalist, Brigham and Women’s Hospital; and Chief Medical Officer, Radial.

Thaddeus Fulford-Jones

Thaddeus Fulford-Jones, Ph.D., is the cofounder and CEO of Radial.

Better care outcomes for everyone

We believe in the transformative potential of AI—not to replace the decision-making authority of clinical staff, but to empower teams to make optimal decisions faster.
