
In Thursday’s episode of The Pitt, the long-simmering tensions over the use of AI at the Pittsburgh Trauma Medical Center boiled over.
In season two of the five-time Emmy-winning medical drama, a new attending physician, Baran Al-Hashimi (Sepideh Moafi), is determined to improve efficiency at the hospital. She tells her skeptical staff that new AI systems can cut the time they spend on charting by 80%, allowing them to spend more time both at the bedside and at home.
But in episode six, doctors discover that the AI tool has fabricated details about a patient and confused “urology” with “neurology.”
“AI’s two percent error rate is still better than dictation,” Al-Hashimi says, adding that its output still needs to be proofread for errors. But an irate Dr. Campbell (Monica Bhatnagar), who works in internal medicine, responds: “I don’t really give a sh-t whether or not you want to use robots down here. I need accurate information in the medical record.”
Like many of the show’s themes, this storyline mirrors real-life debates happening in hospitals across the country. Two-thirds of physicians say they use AI to some degree, according to a 2025 survey by the American Medical Association. Some have found it invaluable in providing care and reducing burnout. But other healthcare workers say that it is being rolled out too fast and makes too many errors for such a high-stakes field.
AI as a medical sounding board
In The Pitt, AI is introduced first and foremost as a tool for charting: the process by which doctors document their encounters with patients. Charting is one of the biggest pain points for doctors, who often have to stay hours late to finish it. For several years now, hospitals have been rolling out ambient AI scribes, which listen to conversations with patients and then summarize them for doctors’ charts.
Murali Doraiswamy, a physician and professor at Duke University School of Medicine, says that current AI scribes let doctors focus on patients rather than typing notes during appointments. But he says that the tools only actually save one or two minutes per encounter, because doctors then have to spend time editing what the AI has created (as Al-Hashimi notes on The Pitt). “It does not significantly save what we call pajama time,” he says. “But overall, it’s an improvement, and the hope is it gets better and better.”
Some AI charting tools go even further. Last year, Presbyterian Healthcare Services in New Mexico piloted an AI assistant called GW RhythmX, which can give doctors a summary of an upcoming patient’s medical history, potentially sparing them from combing through months of charts and lab files before the appointment.
Lori Walker, Presbyterian’s Chief Medical Information Officer, says the RhythmX tool can also provide solutions to complex patient problems. For instance, she says that a patient was recently admitted for an infected wound—but was allergic to many antibiotics that might have treated the bacteria. Previously, a doctor would consult an infectious disease specialist—a process that can take between 24 and 48 hours. Instead, the doctor queried the chatbot, and received an effective prescription immediately.
Sudheesha Perera, a second-year resident at the Yale School of Medicine, says that he and his colleagues use OpenEvidence, a large language model chatbot trained on vetted medical literature, on a near-daily basis. “If there’s a patient with an infection, I might ask it, ‘I picked this medication for this reason. What are the alternatives?’” Perera says, noting that it’s faster than using Google or a medical textbook.
Perera is helping Yale build out an AI curriculum that advises residents on best practices for using the technology. And at Yale’s Cardiovascular Data Science Lab, he uses Claude Code and Gemini to help him write code for data analysis. “I can just kind of tell it in plain text: ‘This is what my data looks like, and this is what I want.’ That’s really game-changing in terms of getting things done.”
Mistakes and risks
But many fears and risks loom. Just like in The Pitt, AI tools have made plenty of mistakes in real medical settings. Michelle Gutierrez Vo, a registered nurse and a president of the California Nurses Association and National Nurses Organizing Committee, says that three years ago, her hospital tried to implement a new tool to replace the judgment calls of case managers. But when they tested the tool, it mishandled many cases, including suggesting that a cancer patient admitted for a month of chemotherapy be discharged within two to three days.
“We have proven time and time again that the implementation or the use of AI is actually worse, and more expensive for them,” she says. A 2024 poll found that two-thirds of unionized RNs said AI undermined them and threatened patient safety.
Gutierrez Vo worries that AI is simply being used to cut costs and increase profits, forcing already-dwindling staff to work even harder. This concern is echoed by The Pitt’s protagonist Dr. Robby (Noah Wyle): “It’ll make us more efficient—but hospitals will expect us to treat more patients without extra pay,” he says.
Meanwhile, there is a major concern about de-skilling: that even if AI helps doctors now, it will erode the intrinsic knowledge and decision-making they need most in a crisis. This idea is explored at the end of this week’s episode of The Pitt, when a cyberattack forces the hospital to go fully analog and its staff to rely solely on their skills and training.
This scenario resonates with Perera. “When the patient is crashing in front of your eyes, you need to have knowledge at the front of your mind. An AI tool is too slow,” Perera says. “It’s very true that at the end of the day, we need to practice without tools.”
Perera is especially concerned that if a new generation of doctors becomes too reliant on AI tools without learning skills first, the entire medical field could be severely harmed. “The same kid who never wrote a college essay and just used ChatGPT might turn into the doctor that never wrote a critical assessment and plan and just uses OpenEvidence,” Perera says. “Teaching medical residents how to be good stewards of these tools, at the right time in their training, will be important.”
Doraiswamy hopes that tools will be designed to support doctors’ judgment rather than supplant it. “The more we can make AI make doctors ask the right questions, rather than automatically just taking the answer, the better it is,” he says. “We want something that makes us think.”