Is artificial intelligence your friend or foe?
AI in the perioperative setting can have tremendous benefits and serious drawbacks.

Artificial intelligence (AI) is making more inroads into medicine every day. But can it be trusted? Do the risks outweigh the benefits?
Those were some of the questions an expert panel sought to answer in the Saturday session “Artificial Intelligence and Patient Safety: Best Friends … or Worst Enemies?”
There can be no denying the impact AI is already having — and will continue to have — on the anesthesiology profession, said session moderator Monica Harbell, MD, FASA, Associate Program Director of Anesthesiology Residency at Mayo Clinic Arizona.
“AI is poised to revolutionize modern medicine as it reshapes how we approach diagnostics, patient monitoring, workflow automation, and clinical decision-making,” she said. “In anesthesiology, AI is already being used to process vast amounts of patient data in order to predict perioperative complications.”
One way AI does this, Dr. Harbell said, is by extracting patient history, including prior complications and medication interactions, from the electronic medical record. This helps anesthesiologists with risk stratification and the identification of medication errors and adverse reactions that might otherwise be missed.
“AI can be used to identify patients who are at high risk for complications, such as postoperative nausea and vomiting, postoperative bleeding, respiratory failure, and acute kidney injury,” she said. It also can be used to predict postoperative morbidity and mortality.
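To make that concrete, here is a minimal sketch of what EMR-based risk stratification can look like: a simple classifier trained on tabular patient features to score the risk of postoperative nausea and vomiting. Everything in it, from the feature names to the synthetic data and the logistic regression model, is invented for illustration and is not any system Dr. Harbell described.

```python
# Illustrative only: a toy risk-stratification model over hypothetical EMR features.
# The features (age, BMI, prior PONV, opioid dose) and data are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(60, 15, n),   # age in years
    rng.normal(28, 5, n),    # body mass index
    rng.integers(0, 2, n),   # prior history of PONV (0/1)
    rng.normal(10, 3, n),    # intraoperative opioid dose, morphine-equivalent mg
])
# Synthetic outcome: postoperative nausea/vomiting, loosely tied to the features.
logit = -6 + 0.02 * X[:, 0] + 0.05 * X[:, 1] + 1.5 * X[:, 2] + 0.1 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # per-patient risk scores
print("AUC:", round(roc_auc_score(y_test, risk), 2))
print("Patients flagged high-risk:", int((risk > 0.5).sum()))
```

A real perioperative model would draw on far more of the record and require careful validation; the point here is only the shape of the task, turning extracted history into a per-patient risk score.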
Panelist Jason Cheng, MD, FASA, National Director for Patient Safety with the Kaiser Permanente Health System in Bellflower, California, said AI can also lighten the load, reduce stress, and increase patient safety.
“What are all of the things that overload us as anesthesiologists? There is obviously production pressure, the growing amount of medical information, and our workflow. There are so many different stressors,” he said. “As human beings, we have a limited cognitive bank, and that taxation on our cognitive bank is really a patient safety issue.”
Dr. Cheng said AI can ease the burden on anesthesiologists and enhance patient safety in a number of ways, including:
- Real-time patient monitoring that allows early detection of complications during surgery (a simplified sketch of this idea follows the list)
- Automated dosing algorithms that can optimize anesthetic drug delivery and minimize the risk of human error in the OR
- Clinical decision support systems that can assist anesthesiologists with evidence-based decisions and improve team communication
- Enhanced safety protocols using machine learning to identify risk patterns, leading to improved patient safety protocols throughout anesthesiology care
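The sketch below shows the simplest form of the real-time monitoring idea from the first item: a rolling rule that flags a sustained drop in a vital sign. The threshold (mean arterial pressure below 65 mmHg for three consecutive readings) and the data stream are invented for illustration; deployed systems use far richer models than a fixed cutoff.

```python
# Illustrative only: a toy early-warning check over a streaming vital sign.
# The rule (MAP < 65 mmHg for 3 consecutive readings) is invented for this sketch.
from collections import deque

def map_alarm(readings, threshold=65.0, window=3):
    """Yield the index of any reading where mean arterial pressure has stayed
    below `threshold` mmHg for `window` consecutive readings."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        recent.append(value)
        if len(recent) == window and all(v < threshold for v in recent):
            yield i

stream = [78, 72, 66, 64, 63, 61, 70, 75]  # simulated MAP readings, mmHg
for idx in map_alarm(stream):
    print(f"Alert at reading {idx}: sustained low MAP")
```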
But Dr. Cheng said AI has the potential to go further than that. Automated anesthesia systems are already under development at MIT and Massachusetts General Hospital, where researchers have built a deep-learning algorithm for propofol dosing. In clinical simulations, the algorithm helped maintain unconsciousness while using less drug and reducing the risk of dosing errors.
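The MIT/MGH work relies on deep learning, but the closed-loop concept itself can be shown with something much simpler. The sketch below uses a proportional-integral controller to track a target depth-of-anesthesia index; the gains, the target, and the one-line "patient response" are invented for illustration and bear no relation to the published algorithm.

```python
# Illustrative only: closed-loop dosing as a proportional-integral controller.
# This is NOT the MIT/MGH deep-learning algorithm; the gains, target index,
# and the one-line "patient response" model below are invented for this sketch.

TARGET = 50.0        # hypothetical depth-of-anesthesia index (lower = deeper)
KP, KI = 0.05, 0.01  # illustrative controller gains

def simulate(steps=20):
    index, integral = 95.0, 0.0
    for t in range(steps):
        error = index - TARGET
        integral += error
        infusion = max(0.0, KP * error + KI * integral)  # dose rate, arbitrary units
        # Toy patient response: drug deepens anesthesia; the body clears drug over time.
        index += -4.0 * infusion + 0.8
        index = max(0.0, min(100.0, index))
        print(f"t={t:2d}  index={index:5.1f}  infusion={infusion:4.2f}")

simulate()
```

Whatever the underlying controller, the loop is the same: measure the patient's state, compare it to the target, and adjust the infusion, continuously and without waiting for a human to notice the drift.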
Although this may sound like AI is coming for your job, he said there is no reason to worry.
“Imagine having a co-pilot being able to alert you, provide information, and generate insights from the data that’s coming,” he said. “Just think about the promise to be able to have all this information and insights and to be able to interpret for actions in a proactive way.”
That all sounds great, but panelist Keith Ruskin, MD, FASA, Professor of Anesthesia and Critical Care at the University of Chicago, warned there are a few hurdles that need to be overcome before AI can have a genuinely transformative presence in the medical community.
One of the biggest, he said, comes down to a matter of trust. Anesthesiologists must be able to trust AI to perform correctly and do what they expect it to do. If that trust ever fails in a high-risk medical setting, it will be extremely difficult to recover.
“Having an AI system that goes bad, a clinical decision support tool that gives you incorrect information, … you’re not going to trust any part of that system or even the adjacent systems anymore,” he said.
Another hurdle with generative AI systems like ChatGPT is that they are designed to predict the next word in a given sequence of text, Dr. Ruskin explained. In other words, despite the name, such a system can't truly think for itself. Its output may look correct on the surface, but some of the deeper meaning may be lost.
“ChatGPT is not an anesthesiologist. It’s a statistical model. It doesn’t store or retrieve information. It doesn’t even search for information,” he said. “There’s no guarantee that the answers to the questions that it gives you are actually true.”
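That mechanism is easy to demonstrate. The toy model below predicts the next word purely from word-pair counts in a tiny invented corpus. Real systems like ChatGPT use neural networks over tokens rather than raw counts, but the underlying point stands: the model returns a likely continuation, not a stored or verified fact.

```python
# Illustrative only: a toy bigram model showing "predict the next word" as
# pure statistics. The corpus is invented; real LLMs use neural networks over
# tokens, but likewise return likely continuations rather than looked-up facts.
from collections import Counter, defaultdict

corpus = ("propofol is an anesthetic . propofol is a sedative . "
          "ketamine is an anesthetic .").split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    following[prev][word] += 1

def next_word(prev):
    """Return the statistically most likely next word, not a verified fact."""
    return following[prev].most_common(1)[0][0]

print(next_word("propofol"))  # -> "is": frequent in the corpus, not "true"
print(next_word("is"))        # -> "an"
```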
He said a generative AI tool called OpenEvidence is a much better alternative to ChatGPT for medical information.
Dr. Ruskin also cautioned anesthesiologists not to become overly reliant on AI. That could lead to “deskilling,” or forgetting how to perform tasks that have been handed over to AI. And then there is the issue of liability.
“If AI does make a mistake, who’s responsible?” he asked.
Dr. Ruskin said anesthesiology needs to take the lead in regulating AI and how it is used within the specialty.
“There will be some governmental involvement, but we need to do these ourselves before we attract unwanted attention. And as we do develop these regulations, we need to comply with them. No shortcuts,” he said.
The goal should be to meet the needs of health care professionals and serve the best interests of the patient, he said. The best way to do that is to work with everyone — not just the experts in the room, but all of the clinicians involved in patient care — to see where this evolving technology can help or hurt us and to develop strategies to mitigate the potential risks.