News | December 11, 2025

How AI is changing medical malpractice and hospital risk

RISK MANAGEMENT

Setting standards for the responsible use of AI in healthcare and how to address AI in medical malpractice

AI in medical malpractice and hospital risk is a growing concern. Like humans, AI is not perfect and can make mistakes. As AI becomes increasingly embedded in hospital workflows, conversations about legal liability are becoming more common. David A. Simon, an associate law professor at Northeastern University and co-director of the Amy J. Reed Collaborative for Medical Device Safety, spoke with U.S. News on the topic.

What criteria should be used to define "AI malpractice"?

The existing framework works; it's just a question of how it will apply. The more physicians offload tasks to AI, the more expensive litigation over AI-related mistakes will become, because causation will be harder to show, particularly with a black-box AI. You don't even know what it's doing or how to explain what it's doing.

In cases where AI has led to patient harm, who should be held liable: the clinician, the hospital or the AI developer?

That's a very fact-dependent question, so all of them could be liable. It could be that the AI developer made a defective product, the hospital implemented it without validating it or checking to see that it worked properly, and the clinician misused it.

What practical steps can hospitals take to reduce their exposure to AI-related malpractice claims?

Whenever a hospital takes an action like adopting or implementing an AI technology, it has to pay attention to a couple of things.
One is: what's their agreement with the manufacturer? What do the indemnification provisions of the agreement say? In other words, who is going to take responsibility if something goes wrong, and on what terms?

Read the whole story

AI LIMITS

AI healthcare trends and legal considerations

While AI shows immense potential for routine tasks, current research suggests it primarily acts as an augmentation tool for healthcare professionals.

Why Complete Replacement is Unlikely Soon:

- Human skills: Empathy, ethical decision-making, complex communication, and building patient trust remain crucial human elements in care.
- Complexity: AI struggles with unique patient situations, unpredictability, and the nuanced "art" of medicine.
- Accountability: Clear legal and ethical frameworks for autonomous AI medical decisions are still developing.
- Bias risks: AI algorithms can reflect biases present in their training data, potentially worsening health disparities if not carefully managed (Fenech et al., 2024).

AI Challenges and Limitations in Law:

- Confidentiality: Protecting sensitive client data when using third-party AI tools is a major concern.
- Accuracy: AI can generate incorrect information or "hallucinate" fake precedents, posing significant risks.
- Human judgment: Core legal skills involving strategy, negotiation, advocacy, client counseling, and ethical reasoning remain human domains.

The ongoing evolution of liability and regulatory standards around AI will require analysts to focus on compliance, risk management, and the ethical boundaries of AI-assisted decision-making in complex or precedent-setting cases.

AI trends in healthcare, diagnostics, liability & accountability, insurance
THURSDAY, DECEMBER 11, 2025 AT 12 PM
Join us. Attend the event.

As we enter the fourth quarter, now is the ideal time to submit your work for review so that everything can be finalized well before year-end. Contact Excelas today!

You can always read all our newsletters online!
Excelas helps organizations respond accurately and quickly to claims and litigation brought against them.

Partnering with attorneys, health care organizations, and insurance companies since 1995, Excelas provides medical legal analyses and tools for building winning defense strategies. When expertise, accuracy, reliability, and on-time delivery count, you can count on Excelas.

Post Tags: ai hospital risk, ai legal liability, ai limitations in law, ai medical malpractice, ai trends, ceo executive thought leadership, Excelas, standards for responsible ai