News & Articles

AI in Healthcare: Legal Questions and Implications

by Sophia M. Brasseux, Esq.

Virginia, along with several other states, has enacted laws in recent years directly responding to the increased prevalence of artificial intelligence (“AI”).  This past year, the Virginia Consumer Data Protection Act went into effect, giving consumers the right to opt out of profiling in furtherance of automated decisions and requiring data protection assessments for activities that pose a “heightened risk of harm,” such as targeted advertising.  While the increased use and development of AI will likely affect nearly all areas of the law, incorporating AI into our healthcare system poses several unique legal, ethical, and logistical questions.

Since launching in November 2022, ChatGPT, a natural language processing model (“NLP”), has been the center of much controversy.  The model has been criticized for inaccuracy and potential bias in its training, as well as for enabling potential privacy violations, cheating and plagiarism, and copyright infringement.  However, ChatGPT has also been praised for its ability to save time and increase the accessibility of certain services.

A study published in February 2023 revealed that ChatGPT “performed at or near the passing threshold” for all three sections of the United States Medical Licensing Examination (“USMLE”) “without any specialized training or reinforcement.”  The results of the study suggested that language models such as ChatGPT may have the potential to assist with medical education as well as clinical decision making.  The potential uses of language models discussed in this study are already making their way into patient care.

NLPs are currently being used in healthcare in several ways.  The adoption of NLPs has allowed providers and health systems to search and analyze patient databases to assess patient outcomes, analyze clinical decision making, and predict future health service needs.  This data collection can help support a provider’s real-time decision making and provide insights to improve patient care that would otherwise be too time- and labor-intensive to gather.  The implementation of NLPs in healthcare has also expanded beyond data analysis and into direct patient care.

One example is Wysa, a mental health application that launched in 2015.  The application offers an option to connect with a human therapist for traditional teletherapy services, as well as a chatbot-only version.  In the chatbot-only version, the chatbot “coach” asks questions about the user’s emotions and then analyzes words and phrases in the user’s responses to provide supportive messages or give advice about managing chronic pain or grief.  The chatbot’s advice and messages stem from a database of responses authored by psychologists trained in cognitive behavioral therapy.
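As a purely illustrative sketch, the simplified Python example below shows one way a rules-based chatbot of this general kind might match words and phrases in a user’s message against a curated database of clinician-authored responses.  The keywords, responses, and function names are hypothetical and do not reflect Wysa’s actual code or architecture.

# Hypothetical sketch of keyword-based response retrieval.  This is NOT
# Wysa's actual implementation; it only illustrates the general idea of
# matching user language against a curated, clinician-authored database.

RESPONSE_DATABASE = {
    # keyword -> response authored in advance by a trained professional
    "grief": "Losing someone is painful. Would you like to talk about what you're feeling?",
    "pain": "Chronic pain can be exhausting. A brief breathing exercise sometimes helps.",
    "anxious": "It sounds like you're feeling anxious. Let's try grounding yourself in the present moment.",
}

DEFAULT_RESPONSE = "Thank you for sharing. Can you tell me more about how you're feeling?"


def respond(user_message: str) -> str:
    """Return a pre-authored supportive message based on keywords in the input."""
    text = user_message.lower()
    for keyword, response in RESPONSE_DATABASE.items():
        if keyword in text:
            return response
    return DEFAULT_RESPONSE


if __name__ == "__main__":
    print(respond("I've been in a lot of pain since my surgery."))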

Mental health needs have spiked in recent years, while traditional mental health services have become less accessible due to increased demand and limited supply.  It is easy to imagine how applications like Wysa, which use NLPs, can help close the ever-growing gap in these services by making them more financially and logistically accessible.  However, accessibility is only one factor to consider when assessing the risks and benefits of certain healthcare services.  Once a user has access to the care he or she needs, questions arise about whether the quality of that care complies with accepted standards and what happens when that care results in harm.

In Virginia, medical providers must comply with the standard of care, and the failure to do so can lead to legal liability for any resulting harm.  In a medical malpractice action, “the standard of care by which the acts or omissions [of the healthcare provider] are to be judged shall be that degree of skill and diligence practiced by a reasonably prudent practitioner in the field of practice or specialty in this Commonwealth. . .”  This standard takes into account the fact that healthcare providers exercise a certain level of professional judgment and skill when making decisions regarding, and providing, patient care and treatment.  While NLPs, like ChatGPT, can process vast amounts of data to make decisions, they lack the human experience and considerations that go into many decisions made by medical providers.  Applications like Wysa are not typically treated as independent “healthcare providers” but rather as assistive tools; however, this line may become less clear as NLPs become further integrated into our health system and patients and providers rely on applications using these models more regularly.

Regardless of whether NLPs are being used for data analysis or direct clinical care, legal and ethical questions arise regarding the allocation of liability, confidentiality, the disruption of the typical physician-patient relationship, and potential bias in training data.  Notably, OpenAI allows for unrestricted access and explicitly disclaims responsibility for generated text, suggesting that responsibility for errors rests on the user.  In a medical malpractice case, this could mean a healthcare provider using an AI assistive application would bear liability.  However, there are currently no laws or regulations specifically addressing this unique issue.  There is also an open question of whether the developer or programmer of the NLP would bear any responsibility for the determinations and recommendations of the model.  Similarly, there is a question regarding the level of responsibility a patient might bear, if any, when seeking health services directly from an NLP-based application as opposed to a traditional human healthcare provider.

Besides the question of who is liable, there is also the question of what doctrine applies.  Though medical malpractice cases are generally based on principles of negligence, when AI is involved, products liability issues could also be raised because the AI is a product being used by healthcare providers and/or patients.  Given AI’s increasingly autonomous nature, there is also a question of how and when vicarious liability may come into play.  In general, a hospital can be held vicariously liable for the negligence of an employee or agent within its control.  It is unclear whether a hospital could be found to have the requisite control over an NLP to be held liable under the theory of respondeat superior.  If courts find that NLPs are fully autonomous, holding a hospital responsible for injury caused by an NLP would, at least in theory, be nearly impossible.

Further, the increased use of NLPs also poses privacy concerns, particularly with respect to confidential information protected by HIPAA.  As noted above, several states have already begun enacting legislation to address privacy issues posed by NLPs; however, regulations specifically related to protected health information may also be necessary to ensure patients are fully protected.

Implementation of licensing and regulations will become necessary to ensure that providers who use or rely on NLPs, and the NLPs themselves, meet required standards of care and do not pose undue risks to patients.  At this time, there are no regulations in Virginia specifically targeted at the use of NLPs in healthcare.  This creates a legal gray area as to the standards applicable to the development and use of NLPs in medicine and the allocation of liability in medical malpractice cases involving NLPs.  While most states have yet to enact legislation to address this issue, small strides are being made.  In February 2023, HB1974, An Act Regulating the Use of Artificial Intelligence (AI) in Providing Mental Health Services, was introduced in the Massachusetts House of Representatives.  If passed, the Act would regulate the use of AI in mental health services by requiring that the use of AI by mental health professionals meet certain conditions, including pre-approval by a licensing board, certain safety and effectiveness standards, patient notice, and informed consent.

The use of AI, including NLPs, in healthcare is quickly evolving and becoming more prevalent.  Because AI can help make healthcare more accessible and affordable, this trend is unlikely to change soon.  Although most states have not yet enacted laws addressing this growing use of AI in healthcare, it is still important to take precautions now to limit potential future liability related to the use of AI.  Until more research is done on the accuracy of NLPs in the medical context, they should be used with caution and only as assistive tools.  While providers may use NLPs to obtain a deeper understanding of a patient’s history or of data regarding a specific disease or condition, providers should avoid relying entirely on NLPs when providing direct patient care, and NLPs should not be used in lieu of a human healthcare provider.  It may also be prudent for hospitals and practices to develop policies and procedures related to the use of NLPs and to take additional measures to safeguard patients’ protected health information when such models are being used.