AI in health care: 7 principles of responsible use
By Daniel Yang, MD, Vice President, Artificial Intelligence and Emerging Technologies
A gap often exists between emerging technologies and their implementation. New technologies can improve our lives, but they also change them, and change can initially cause fear and anxiety.
Emerging technology is especially complicated in health care. There are so many factors to consider, including patient preferences and federal regulations.
At Kaiser Permanente, it’s our job to navigate these issues as we consider how new technologies might help us provide better care for our patients. Artificial intelligence is no exception.
We believe our clinicians and care teams can use AI to improve health outcomes for our members and the communities we serve. But we also know that nothing slows down the adoption of new technologies more than lack of trust — or worse, technologies that can lead to patient harm.
That’s why we use a responsible AI approach. This means we adopt AI tools and solutions only after we thoroughly assess them for excellence in quality, safety, reliability, and equity. With a focus on building trust, we use AI only when it advances our core mission of delivering high-quality, affordable health care services.
Our principles for assessing responsible use
So how do we assess and deploy AI tools to make sure they meet our standards?
We start with privacy. AI tools require a vast amount of data. Ongoing monitoring, quality control, and safeguarding are necessary to protect the safety and privacy of our members and patients.
We continually assess for reliability. What works today may not work a few years down the road as technology, care delivery, and patient preferences evolve. We choose AI tools that will work for the long term.
We focus on outcomes. If an AI tool doesn’t advance high-quality and affordable care, we don’t use it.
We strive to deploy tools transparently. Whenever appropriate, we make patients aware of our use of AI tools and ask for their consent. For our employees who use AI, we provide explanations of how our AI tools were developed, how they work, and what their limitations are.
We promote equity. People and algorithms (the instructions that AI tools follow) alike can contribute to bias in AI tools. Our AI tools are built to minimize bias. We also know AI can harness large amounts of data to help identify and address the root causes of health inequities, and we are focused on realizing that potential.
We design tools for our customers — in the case of AI, our customers are our members, doctors, and employees who will use the tools. Tools must prioritize their needs and preferences.
We build trust. We know there’s uncertainty about the effectiveness of AI. We choose tools that offer excellence in safety and performance, and alignment with industry standards and leading practices. We further build confidence by continually monitoring the tools we use. We continue to invest in research that rigorously evaluates the impact of AI in clinical settings.
Our principles in action: Assisted clinical documentation
One example of how we’ve applied these principles is with our use of an assisted clinical documentation tool. The tool helps our doctors and other clinicians focus more on their patients and spend less time on administrative tasks.
The tool summarizes medical conversations and creates draft clinical notes. Our doctors and clinicians can use the tool during patient visits.
Afterward, the doctor or clinician reviews and edits the notes before entering them into the patient’s electronic health record.
In deploying the tool, named Abridge, we carefully applied each of our principles of responsible AI. For example:
The tool is compliant with state and federal privacy laws. It encrypts patients’ data to protect their privacy. We also get consent from each patient before using the tool. If a patient doesn’t want us to use it, we don’t.
We require our doctors and other clinicians to review and edit any clinical notes drafted by the tool. Our patients can trust that AI does not make medical decisions at Kaiser Permanente. Our doctors and other clinicians do.
Before making the tool widely available at Kaiser Permanente, we conducted a rigorous quality assurance process. We made sure it worked for all patients, including our non-English-speaking patients. And we continue to collect feedback from our patients and clinicians about their experiences with the tool.
How policymakers can help
As we work to make sure AI is used responsibly, policymakers can help by:
Supporting the launch of large-scale clinical trials. Health care organizations need more robust evidence to evaluate the safety and effectiveness of AI tools. This evidence is critical to building public trust.
Establishing systems to monitor AI tools used in clinical care. Monitoring systems would allow health care organizations to learn from each other’s experiences. We could share performance data, safety risks, and best practices.
Supporting independent quality assurance testing of AI algorithms. Policymakers and regulators should work with health care organizations to create a nationwide health AI assurance lab network to test health AI performance. AI developers could then test their algorithms on diverse datasets and help demonstrate the safety and effectiveness of AI tools across populations and geographies. Many other industries, including consumer electronics and automotive, rely on similar forms of independent testing. This testing wouldn't replace our own validation of AI tools; it would complement it.
To realize AI’s full potential, we and all health care organizations must use it responsibly.
At Kaiser Permanente, we’re diligently following our AI principles. And we’re working closely with policy leaders to support industrywide efforts.