MPIE Risk Management GEM 2023 (Volume II): Artificial Intelligence: Thoughtful Incorporation into Practice

Technology, and artificial intelligence in particular, is advancing rapidly and offers significant improvements in healthcare delivery. If not utilized thoughtfully, however, it may result in less-than-optimal patient outcomes. Artificial intelligence (AI) has demonstrated great potential to advance the patient and clinician experience; however, using this technology requires due diligence, ethical consideration, provider education, and ongoing communication in the patient-provider relationship. While AI has many benefits, it cannot replace the patient-provider interaction.

The use of AI in healthcare has accelerated in the past two years. Global investment in AI is expected to increase by 41% by 2026 (Buttel, 2022). AI’s potential for improving patient safety and outcomes in healthcare is well documented. The potential is vast, from robotic technology that sterilizes operating rooms to systems that perform highly intricate surgical procedures with little human intervention. Additionally, self-learning AI innovations embedded in healthcare applications have accumulated vast amounts of patient data and can aid medical professionals in identifying potential diagnoses.

AI offers opportunities for advancement in medicine, but it also raises questions about the reliability of the data and how those data should be utilized. Healthcare leaders are inundated with options in the AI technology space. Some of these options are regulated by the FDA, though many are not. For this reason, questions arise about the dependability of the technology, the potential for poor performance outcomes, and the risk of patient harm. As such, proper vetting of AI options is paramount.

In assessing AI options, healthcare leaders must leverage reliable sources for the most dependable research on AI applications that demonstrate solid governance and positive patient care outcomes. The Brookings Institution, a nonprofit organization devoted to independent research and policy development, has created an AI governance initiative that addresses the complexity of this area and analyzes healthcare innovations in AI (see Artificial Intelligence and Emerging Technology Initiative (brookings.edu)). Stanford Medicine (Stanford University) has created a Healthcare AI Applied Research Team (see Healthcare AI Applied Research Team | Stanford Medicine). This team investigates AI products focused on healthcare and how these products affect patient problem-solving and outcomes.

The American Board of Artificial Intelligence in Medicine (ABAIM) is a nonprofit that offers another educational resource for clinicians seeking to understand and implement AI in healthcare (see Home – ABAIM). The ABAIM provides courses and certification in the use of AI applications in healthcare, offering technical knowledge of AI tools and guidance on their ethical use.

Access to third-party resources for performance research, patient-specific applications, and education gives healthcare leaders powerful tools for increasing the confidence of frontline users.

Another consideration in using AI is the relationship between the medical professional and the patient. Given the progressive nature of AI, it would be easy to lose the nuances of this relationship. For this reason, the benefits of AI must be weighed against more than minimizing organizational costs. An AI cost-benefit analysis should always include an assessment of patient benefits and risks. Furthermore, the use of AI should complement, rather than replace, ongoing engagement with the medical professionals providing care. Communication is essential to healthcare. A communication breakdown may lead to anything from a negative patient experience or a patient grievance to a delay in treatment or potential litigation in the event of an adverse outcome.

Some AI products can analyze indicators of future disease and potential diagnoses. However, if too much data is pushed to the frontline clinician, the potential for data fatigue increases. In addition, these indicators are limited by the patient history collected over time and by the application’s programming; relying on them without medical judgment could put patients at risk. AI indicators should never be the sole basis for decision-making but should instead be utilized as a tool to aid clinicians in diagnosis and subsequent treatment planning. The nuances of the patient-provider conversation must not be undervalued relative to AI; no AI technology, however sophisticated, can replace this communication in the treatment process.

References:

  1. Buttel, A. (2022). “AI Offers Opportunities and Risks for Providers, Organizations, and MPL Carriers.” Inside Medical Liability, (Q3), pp. 1–8. https://www.mplassociation.org/Web/Publications

If you are an employed provider of a healthcare system and have questions on this subject, please consult your organization’s risk management department for advisement as to system policy or protocol.