Authored By: Lydia Kotowski and ChatGPT 4
As the healthcare landscape evolves, artificial intelligence (AI) is emerging as a transformative tool with the potential to improve patient outcomes, enhance operational efficiency, and support financial sustainability. Recently, we had the opportunity to review insights from the Health AI Partnership webinar, hosted by Duke AI Health and DLA Piper. This session explored the adoption of large language models (LLMs) in health care, providing valuable lessons for California stakeholders.
Here’s what healthcare providers need to know about the opportunities and challenges of deploying AI in their practices.
Understanding the Role of AI in Health Care
Large language models are increasingly being used for tasks such as generating patient communication drafts, summarizing medical literature, and supporting clinical decision-making. However, the adoption of these tools requires a careful balance. While the potential benefits are significant, there are risks of inaccuracies, unintended outputs, and legal implications that must be addressed to ensure patient safety and compliance.
For providers, the challenge lies in evaluating which AI models are appropriate for their needs and how best to integrate these tools into their workflows.
Red-Teaming: A Proven Strategy for Safer AI Deployment
One of the key strategies discussed during the webinar was “red-teaming,” a concept borrowed from cybersecurity. This involves testing AI systems by simulating potential misuse or unintended outcomes to identify and mitigate risks.
Red-teaming can help uncover vulnerabilities, such as:
- Legal Risks: Exploring scenarios where a model could inadvertently generate outputs that violate laws or regulations.
- Sociotechnical Harms: Identifying how the model might perpetuate biases or produce misleading information.
While red-teaming is crucial throughout the AI lifecycle—design, development, validation, deployment—there are no universal standards for testing. The webinar speakers emphasized the importance of combining internal testing with periodic evaluations by third-party experts to strengthen accountability and reliability.
Real-World Applications and Challenges
Case studies from the webinar highlighted both the promise and limitations of AI in health care. For example, when an LLM was used to draft responses to patient inquiries, around 6% of the drafts contained inaccuracies, often referred to as “hallucinations.” This underscores a key point: all AI-generated outputs are approximations, and they must be validated by human experts to ensure accuracy and relevance.
Providers must also be mindful of how LLMs interact with the conventions of medical practice. For instance, these models may not prioritize the use of recent literature, which is critical in evidence-based medicine.
A Continuous Journey of Improvement
In sum, this webinar drove home the point that adopting AI in health care is not a one-time decision but an ongoing process of testing, refining, and adapting. As your organization considers how to use AI, make sure to consider whether you are ready for the following:
- Iterative Testing: Solutions often reveal new challenges, necessitating continuous evaluation and updates.
- Increased Obligations as a High-Risk Domain: Health care is frequently classified as a high-stakes environment for AI, emphasizing the need for robust safeguards.
- Collaborative Efforts: Successful implementation requires collaboration between providers, data scientists, and legal experts to ensure both safety and compliance.
There is no question that AI and health care are going to continue to evolve together. To make sure your organization adopts and monitors your systems in a replicative, robust manner, red-teaming can be a great solution.
Sources
- Health AI Partnership Webinar: https://healthaipartnership.org/insight/red-teaming-ai-systems-in-healthcare
- Duke AI Health: https://aihealth.duke.edu/
- DLA Piper: https://www.dlapiper.com/en-us
- Speakers:
- Danny Tobey MD, JD: https://www.dlapiper.com/en-us/people/t/tobey-danny
- Sam Tyner-Monroe, PhD: https://www.dlapiper.com/en-us/people/t/tyner-monroe-sam
- Bogdana Rakova: https://bobirakova.com/
Leave a Comment