Artificial Intelligence in Healthcare – Regulations, Standards, and Legal Issues

The use of artificial intelligence is expanding rapidly.  Concurrently, the healthcare sector, particularly long-term care, faces ongoing workforce challenges.  Artificial intelligence (AI) tools can potentially take over some tasks, allowing medical staff to focus on direct patient care, and can augment clinical decision-making.  The opportunities and concerns surrounding the use of AI are at the forefront for government, regulatory bodies, industry groups, and legal and medical professionals.

The Biden Administration

On October 30, 2023, the Biden Administration issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.  This EO outlines specific directives related to healthcare, including, but not limited to:

  • Department of Health and Human Services (HHS) must establish an AI Task Force which must develop a strategic plan within one year on responsible deployment of AI in healthcare, including safety, privacy, and security standards.
  • HHS must prioritize grants for development of AI by healthcare technology developers and the Secretary of Veterans Affairs must host nationwide AI Tech Sprint competitions to advance veterans’ healthcare quality through AI systems.
  • The Office of Management and Budget (OMB) must assess privacy implications and recommend revisions to agencies.
  • HHS must establish an AI safety program that includes identifying and capturing clinical errors caused by AI, developing best practices to avoid those errors, and ensuring patient safety.
  • HHS must develop regulations for drug development using AI.

Standards and Regulations

Many industry and governmental organizations have advanced recommendations concerning the development and regulation of artificial intelligence.  These include technical standards issued by the National Institute of Standards and Technology (NIST), and a proposed rule from the Office of the National Coordinator for Health Information Technology that would require EHR systems that use AI to provide users with a description of the data those systems use.  The FDA already has AI-related regulations in place in areas including medical devices, drug applications, product labeling, clinical trials, and safety reporting.

The World Health Organization (WHO) published Regulatory considerations on artificial intelligence for health in November 2023.  This 50-page report, developed with input from multiple healthcare stakeholders, concludes with key recommendations for developing AI systems in the following areas:

  • Documentation and transparency
  • Risk management and AI systems development lifecycle approach
  • Intended use, and analytical and clinical validation
  • Data quality
  • Privacy and data protection
  • Engagement and collaboration

Legal Considerations

Leaders across the healthcare industry are concerned about liability issues related to artificial intelligence.  Technology professionals note that the Executive Order does not address liability, so vague standards, such as responsibility for adverse events traced back to the use of AI, may have to be resolved in court.  Medical professionals grapple with the reliability of AI in patient care, with concerns about how programs were built and the underlying data.  Insurers face lawsuits alleging that AI algorithms were used to deny coverage.

What are some of the issues surrounding medical liability claims associated with the use of artificial intelligence?  The law is slow to keep pace with the speed at which computer processing and AI development are advancing.  This includes regulatory bodies such as the FDA.  There are also questions about how jurors will perceive AI in healthcare and how well educated judges are on AI issues.

General strategies to mitigate liability issues include using AI to augment, rather than replace, a physician’s clinical decision-making, and obtaining the patient’s informed consent for the use of AI.

For example, when prescribing devices that employ AI (e.g., wearables to reduce falls), a checklist should be developed, reviewed with the patient, and documented in the chart.  Such a checklist covers calibration, training, security, and access.  Further, the device must be preserved if an error occurs.  The checklist provides clear substantiation of how the device supported decision-making, as well as of the patient’s understanding of the device’s use.

Beyond “smart devices,” claims could arise from reliance on AI recommendations in clinical decision-making.  Plaintiffs will scrutinize AI bias and the underlying data, and healthcare providers, including the health system and the physician, may be held accountable.  Defending such cases will require greater collaboration among attorneys, risk management, medical records staff, IT, software engineers, and potentially the third-party AI vendor.  Audit trails covering the AI algorithm, including changes, maintenance, and production, are also needed.

Artificial intelligence is impacting many aspects of life, and its rapid evolution brings both benefits and uncertainties.  This is particularly true in healthcare, where the benefits are potentially great through improved quality and efficiency of care, but the potential risks are also great, with possible biased or inaccurate underlying data.  Government, industry, and legal initiatives must keep pace with this expanding technology.

Excelas continues to monitor advancements in healthcare technology, with a special interest in integration of data from AI applications and remote patient monitoring tools with the electronic medical record.  Excelas can assist your facility in documentation reviews to ensure timely and complete integration of data into residents’ medical records.  Comprehensive and complete medical record data is critical not only to patient care, but to reimbursement, compliance, and risk management. 