Medical Care
"From AI Potential to Real-World Healthcare: A Responsible Blueprint"
2024-12-03
A newly published framework bridges the gap between AI's potential and its practical application in healthcare. It highlights AI's role in transforming patient care while keeping safety and equity at the center of the integration process.

Uniting AI's Potential with Healthcare Excellence

Study: Establishing Responsible Use of AI Guidelines

Researchers at Harvard Medical School and the Mass General Brigham AI Governance Committee have developed comprehensive guidelines for integrating AI into healthcare effectively and responsibly. A cross-functional team of 18 experts from domains including informatics, research, legal, data analytics, equity, privacy, safety, patient experience, and quality was formed, and critical themes were identified through an extensive search of the peer-reviewed and gray literature.

A dedicated AI Monitoring Committee has also been established, with full authority to halt AI model deployment if issues arise. This ensures accountability throughout the integration process, safeguarding the interests of patients and healthcare institutions.

Focus on Nine Principles

The researchers focused on nine key principles: fairness, robustness, equity, safety, privacy, explainability, transparency, benefit, and accountability. Three focus groups were established to refine these guidelines: one focusing on robustness and safety, another on fairness and privacy, and the third on transparency, accountability, and benefit. Each group consisted of 4-7 expert members.

A structured framework was developed and executed to facilitate the application of AI guidelines within a healthcare setting. Generative AI and its application in ambient documentation systems were selected as a representative case study, considering the unique challenges of monitoring such technologies, such as ensuring patient privacy and mitigating AI hallucinations.

Pilot Study and Shadow Deployment

A pilot study was conducted with select individuals from different departments. Privacy and security were given top priority, with strictly de-identified data shared with the vendor to enable continuous updates and improvements. Close collaboration with the vendor ensured strict de-identification, data retention policies, and controlled use of data solely for enhancing model performance.

Subsequently, a shadow deployment phase was implemented where AI systems operated in parallel with existing workflows without disrupting patient care. After shadow deployment, key performance metrics such as fairness across demographics, usability, and workflow integration were rigorously evaluated.
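A shadow deployment of this kind can be sketched as a wrapper that runs the AI in parallel with the existing workflow and silently logs its output for later evaluation. This is a minimal illustration, not the study's actual implementation; the function and field names are hypothetical:

```python
import datetime

shadow_log = []  # in practice, a persistent, access-controlled store

def document_encounter(encounter, clinician_note_fn, ai_draft_fn):
    """Run the existing workflow; record the AI draft silently for later review."""
    note = clinician_note_fn(encounter)  # the existing workflow drives patient care
    try:
        ai_note = ai_draft_fn(encounter)  # the AI runs in parallel; output is not shown
        shadow_log.append({
            "encounter_id": encounter["id"],
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "clinician_note": note,
            "ai_draft": ai_note,
        })
    except Exception as exc:
        # An AI failure must never disrupt care during shadow mode
        shadow_log.append({"encounter_id": encounter["id"], "error": str(exc)})
    return note
```

Because only the clinician's note is returned, a malfunctioning model degrades nothing; the logged pairs can later be scored for the fairness, usability, and workflow metrics the study describes.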

Collaboration with Vendors

Collaboration with vendors played a vital role. Rigorous discussions were held on data retention policies, continuous model updates, and safeguarding patient privacy through strict de-identification protocols. This collaborative effort was crucial in ensuring the successful integration of AI into healthcare.

The researchers identified several components critical for the responsible implementation of AI in healthcare. Mandating diverse and demographically representative training datasets helps reduce bias. Outcomes should be evaluated through an equity lens, and regular evaluations of equity should include model reengineering to ensure fair benefits for all patient populations.
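Evaluating outcomes through an equity lens can begin with something as simple as comparing a performance metric across demographic groups and flagging large gaps for review. The sketch below is illustrative only; the group labels, data, and 0.1 review threshold are assumptions, not values from the study:

```python
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, prediction, truth). Returns accuracy per group."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, pred, truth in records:
        totals[group] += 1
        hits[group] += int(pred == truth)
    return {g: hits[g] / totals[g] for g in totals}

def equity_gap(accuracies):
    """Largest accuracy difference between the best- and worst-served groups."""
    vals = list(accuracies.values())
    return max(vals) - min(vals)

# Toy data: (demographic group, model prediction, ground truth)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1),
]
acc = per_group_accuracy(records)
if equity_gap(acc) > 0.1:  # arbitrary threshold for triggering a review
    print("Equity review triggered:", acc)
```

A gap above the threshold would prompt the kind of model reengineering the researchers call for, so that benefits accrue fairly across patient populations.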

Transparent communication of the AI system's Food and Drug Administration (FDA) status is equally important. Specifying whether FDA approval is required and detailing the current status of the AI system helps ensure compliance and build trust. A risk-based approach should be adopted to monitor AI systems, with more robust monitoring for applications that pose higher risks to care outcomes.

Preliminary Phase and Feedback

The preliminary phase (pilot study) allowed for comprehensive functionality assessments and feedback collection. This was crucial in identifying issues early in the implementation process. During shadow deployment, most users were from the departments of emergency medicine and internal medicine.

Feedback revealed both the strengths and areas for improvement of the AI system. While most criticisms focused on documenting physical examinations, the system received praise for its accuracy when working with interpreters or patients with strong accents.

Conclusions and Future Work

In conclusion, this study presented a methodology for incorporating AI into healthcare. The multidisciplinary approach provides a blueprint for non-profit organizations, healthcare institutes, and government bodies aiming to implement and monitor AI responsibly. Challenges such as balancing ethical considerations with clinical utility were highlighted, emphasizing the importance of ongoing collaboration with vendors to refine AI systems.

Future work will focus on expanding testing to include broader demographic and clinical case diversity while automating performance monitoring. These efforts aim to ensure that AI systems remain adaptable and equitable across various healthcare environments. The study demonstrates the importance of continuous evaluation, monitoring, and adaptation of AI systems to ensure their efficacy and relevance in challenging clinical settings.

Journal reference: Saenz, A. D., Centi, A., Ting, D., You, J. G., Landman, A., & Mishuris, R. G. (2024). Establishing responsible use of AI guidelines: A comprehensive case study for healthcare institutions. npj Digital Medicine, 7(1), 1-6. DOI: 10.1038/s41746-024-01300-8, https://www.nature.com/articles/s41746-024-01300-8
Healthcare AI: Ensuring Successes & Avoiding Accidents
2024-12-03
Given the rapid spread of AI in U.S. healthcare, it's no surprise that unintended effects are emerging. While some may be welcome, others pose real risks to patients. To navigate this landscape, healthcare organizations and AI developers must collaborate. Two researchers emphasize this in a recent JAMA opinion piece.

Strengthening Healthcare with AI Safety and Transparency

Conducting Real-World Clinical Evaluations

Before implementing AI-enabled systems into routine care, it's crucial to conduct or wait for real-world clinical evaluations published in high-quality medical journals. As new systems mature, healthcare organizations should conduct independent testing with local data to minimize patient safety risks. Iterative assessments should accompany this risk-based testing to ensure the systems benefit patients and clinicians while being financially sustainable and meeting ethical principles.

For example, imagine a hospital considering a new AI-powered diagnostic tool. By waiting for proper evaluations, the hospital can confirm the tool's accuracy and reliability before relying on it for patient care. This not only protects patients but also builds trust in the use of AI in healthcare.

Moreover, different medical specialties may have specific requirements for AI systems. By conducting local evaluations, organizations can tailor the use of AI to meet the unique needs of their patients and clinicians.

Involving AI Experts in Governance

Inviting AI experts into new or existing AI governance and safety committees is essential. These experts can include data scientists, informaticists, operational AI personnel, human-factors experts, and clinicians working with AI. Regular meetings of these committees allow for the review of new AI applications, consideration of safety and effectiveness evidence before implementation, and the creation of processes to monitor AI application performance.

For instance, a data scientist can provide insights into the data used by the AI system, ensuring its quality and relevance. An informaticist can help integrate the AI system into the healthcare workflow seamlessly. Human-factors experts can focus on how clinicians interact with the AI, minimizing potential errors.

By having a diverse group of experts involved, healthcare organizations can make more informed decisions about AI implementation and ensure its safe and effective use.

Maintaining an Inventory of AI Systems

Healthcare organizations should maintain and regularly review a transaction log of AI system use, similar to the audit log of the EHR. This log should include details such as the AI version in use, date/time of use, patient ID, responsible clinical user ID, input data, and AI recommendation or output. The AI committee should oversee ongoing testing to ensure the safe performance and use of these programs.
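The fields listed above map naturally onto a structured audit record. A minimal sketch follows, with field names adapted from the list; the storage mechanism and helper function are hypothetical, and a real system would write to a secured, append-only store rather than an in-memory list:

```python
from dataclasses import dataclass, field, asdict
import datetime

@dataclass
class AIAuditRecord:
    ai_system: str       # which AI application produced the output
    ai_version: str      # model/software version in use
    patient_id: str
    clinician_id: str    # responsible clinical user
    input_summary: str   # a reference to the input data, not raw PHI
    ai_output: str       # the recommendation or generated content
    timestamp: str = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc).isoformat()
    )

audit_log = []  # stands in for a durable, access-controlled audit store

def record_use(**kwargs):
    """Append one structured entry to the AI transaction log."""
    rec = AIAuditRecord(**kwargs)
    audit_log.append(asdict(rec))
    return rec
```

Structured entries like these make it straightforward for the AI committee to query usage by system, version, or clinician when investigating performance questions.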

Let's take a hospital's radiology department as an example. By maintaining an inventory of their AI-enabled radiology systems, they can easily track which systems are in use, by whom, and for what patients. This allows for quick identification of any issues and enables proactive monitoring of system performance.

Regular reviews of the inventory help organizations stay updated on the status and usage of their AI systems, ensuring they are meeting the needs of the patients and clinicians.

Creating Training Programs for Clinicians

Initial training and subsequent clinician engagement with AI systems should include a formal consent-style process with signatures. This ensures that clinicians understand the risks and benefits before accessing the AI tools. Steps should also be taken to ensure patients understand when and where AI systems are used and the role of clinicians in reviewing the output.

For example, a training program for cardiologists using an AI-based heart disease diagnosis system might include detailed explanations of how the AI works, its limitations, and the importance of clinician review. Clinicians would sign a consent form indicating their understanding and agreement to use the system.

By providing clear instructions and engaging clinicians in the process, healthcare organizations can enhance the safe and effective use of AI in clinical practice.

Establishing a Reporting Process for Safety Issues

Developing a clear process for patients and clinicians to report AI-related safety issues is crucial. A rigorous, multidisciplinary process should be implemented to analyze these issues and mitigate risks. Healthcare organizations should also participate in national postmarketing surveillance systems to aggregate and analyze safety data.

Imagine a situation where a patient experiences an unexpected outcome after using an AI-enabled surgical system. With a clear reporting process, both the patient and the clinician can quickly report the issue. A multidisciplinary team can then investigate and take appropriate actions to prevent similar incidents in the future.

Participating in national surveillance systems allows for a broader analysis of safety data and the sharing of best practices among different healthcare organizations.

Providing Disabling Authority for AI Systems

Similar to preparing for EHR downtime, healthcare organizations must have policies and procedures in place to manage clinical and administrative processes when the AI is not available. Clear written instructions should enable authorized personnel to disable, stop, or turn off AI-enabled systems 24/7 in case of an urgent malfunction.
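Operationally, disabling authority is often implemented as a feature flag that the AI integration checks before every call, so authorized personnel can switch the system off at any hour without a code deployment. This is a hypothetical sketch; in practice the flag would live in a shared configuration service and authorization would come from the organization's identity system:

```python
kill_switch = {"ai_documentation": True}  # True = enabled; stored centrally in practice

def set_ai_enabled(feature, enabled, authorized=False):
    """Only authorized personnel may flip the switch, per written policy."""
    if not authorized:
        raise PermissionError("Disabling AI systems requires authorization")
    kill_switch[feature] = enabled

def generate_note(encounter, ai_fn, manual_fn):
    """Fall back to the manual workflow whenever the AI is disabled or fails."""
    if kill_switch.get("ai_documentation"):
        try:
            return ai_fn(encounter)
        except Exception:
            pass  # fall through to the manual process on malfunction
    return manual_fn(encounter)
```

Routing every AI call through such a check means the fallback path is exercised by design, mirroring the EHR-downtime procedures the authors cite.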

For instance, during a power outage or a system failure, having the ability to quickly disable the AI system ensures the safety of patients and allows for a smooth transition to manual processes. Regular assessments of how AI systems affect patient outcomes, clinician workflows, and system-wide quality are also essential.

If AI models fail to meet pre-implementation goals, revisions should be considered. If revisions are not feasible, the entire system may need to be decommissioned to protect patient safety and maintain the integrity of the healthcare system.

Health Vending Machines Installed in Omaha with Free Healthcare Items
2024-12-03
In Omaha, a remarkable initiative is taking shape with the installation of health vending machines. These machines are not your typical vending machines filled with junk food; instead, they offer a wide range of essential health products.

"Access to Health at Your Fingertips - Omaha's Health Vending Machines"

Locations and Installation

The Douglas County Health Department has installed these health vending machines at five locations across Omaha: the Douglas County Health Department itself, the American Dream bar, the Nebraska Urban Indian Health Coalition, the Charles B. Washington Library, and the Siena Francis House emergency shelter. Four of the machines are currently placed outdoors, while the one at Siena Francis House is awaiting an outdoor electricity hookup. They were installed last week with no marketing or promotion.

These machines offer a diverse range of items such as STI test kits, COVID test kits, condoms, lubricant, pregnancy tests, tampons, pads, wound care kits, emergency contraceptives, and fentanyl test strips. It's truly a one-stop solution for many health needs.

Access and Convenience

One of the most significant aspects of these health vending machines is their accessibility. People can return items like STI test kits directly to the vending machine, and the health department will collect them the next day. There is no need to provide a name, and the machine tracks demographic data such as zip code, age, race, ethnicity, and gender.

This provides a convenient way for people to access essential health products without the hassle of going through traditional channels. It's just one more avenue where the health department is meeting people where they are.

Impact and Usage

In just the first six days since the installation, about 120 items have reached the hands of those who need them. This shows the immediate demand and the positive impact these machines are having.

Chris Bauer, the chief development officer with Siena Francis House, is proud of the fact that these tools are available. He hopes that it will inspire people to think about their health and take steps to change. Leah Casanave, leading the charge on this project with the health department, believes that this is a significant step towards making health accessible for everyone.

The vending machines are funded through grants and private donations, ensuring that no taxpayer dollars are used. The money they have is expected to last for five years, and after that, the health department will seek further funding to keep this initiative going.
