All About AI: Transforming Healthcare from Research Labs to Patient Care
Hosted by Aaron Burnett with Special Guest Dr. John Scott
In this “All About AI” episode of The Digital Clinic, Dr. John Scott, Chief Digital Health Officer at UW Medicine, explores how artificial intelligence is reshaping healthcare delivery and medical research. From reducing physician burnout through automated charting to enabling groundbreaking discoveries in protein design, Dr. Scott shares insights on AI’s current impact and future potential in medicine.
About Dr. John Scott
Dr. John Scott is a professor of allergy and infectious disease at the University of Washington Medical School and medical director of the Harborview Liver Clinic. Dr. Scott is also the Chief Digital Health Officer for UW Medicine. In this conversation, Dr. Scott shares his experiences and perspective on the risks and value of AI in medical research and care settings, as well as the potential impact on healthcare professionals, improvements in patient outcomes, and the fundamental changes to the medical profession that are likely to result from AI.
Professional Background and Telemedicine Journey
Aaron Burnett: Tell me a little bit about your background and the work that you do at the University of Washington.
Dr. John Scott: I came up to the University of Washington in 2002 from the Bay Area, where I grew up and completed my internal medicine training at Stanford. I came up to do my fellowship in infectious diseases.
I followed a traditional path of academia. After graduating from my fellowship, I got an NIH grant and was on that pathway of academia and doing research science. Around 2008, I had a bit of an epiphany. I was interested in hepatitis C, and the treatment at that time was very toxic. It was a combination of medications called interferon and ribavirin. Very few people would treat the condition because it was so toxic. I had patients driving from all over Washington to see me, and I was very concerned about their safety driving home just because the treatment was like chemotherapy. I thought, “There’s got to be a better way of doing this.”
That’s when I came across the Project ECHO program. ECHO stands for Extension for Community Health Outcomes. It was started in 2004 at the University of New Mexico, and it’s a telementoring program. When I heard about it, I said, “I want to try this.” Dr. Sanjeev Arora, who started the program, said, “Let’s write a grant and see if we can expand it to another state.” We were very fortunate that Robert Wood Johnson Foundation funded us. It was just one of the most fun things I had done, and it was really helping a lot of people. The program is really built on the case conference model, which is how we’re trained in medicine. The classic thing we would do is stay up all night admitting patients and the next morning have to present those cases to our colleagues and more senior doctors. We don’t make people stay up all night for ECHO, but we do ask them to bring their toughest cases to a multi-specialty panel. That got me on the pathway of telemedicine. The next step was doing a sabbatical in Australia in 2011-2012 at the University of Queensland, where I really saw how the Australians have tackled the problems of distance and bringing knowledge to the people. That was great to learn from.
When I came back to the University of Washington, I thought this telemedicine thing might be important and asked to be the first medical director. I started in that position in 2013 and slowly grew the program. One of the key parts of the program was having a unified strategy and plan, consolidating our contracting and technology, and then really working with our legislature to change some of the laws. We were fortunate to have several members of the legislature who really understood how important this was going to be.
We were in really good shape when COVID hit. We had good training programs and had a lot of our processes set up. Compared to a lot of other states, we had good infrastructure that allowed us to pivot pretty quickly. Our program grew quite a bit—we had to go from having about 200 providers doing telemedicine regularly to over 4,000 in just a couple of weeks. It was a very busy time, but we were able to do it. We hit a peak of 35,000 telemedicine visits in May of 2020, and for the most part, it worked really well.
Along the way, we realized that the whole patient journey often starts with an internet search, going to Google if you have an ailment or something like that. We really wanted to come alongside patients digitally and help them understand their condition better and, if appropriate, book an appointment. Around two years ago, we rebranded from telehealth to digital health, and we took on this portfolio of what we call the digital front door.
That has been a new area we’re exploring. It includes things like marketing but also just understanding how patients want to navigate. We have a philosophy that there’s no wrong front door, trying to meet patients wherever they are and understand their most important criteria for selecting a physician. It’s been quite a journey. It’s been a lot of fun. We have a great team, and now AI is changing the calculus quite a bit.
Aaron Burnett: And along the way, you became Chief Digital Health Officer.
Dr. John Scott: That’s correct. When we rebranded in 2022, I went from being the medical director for telehealth to Chief Digital Health Officer. We have a team of about 15 to 20 people. We have the dyad model, and I have a great dyad partner who helps with a lot of the strategy, HR, and business planning.
Aaron Burnett: Can you explain the dyad model?
Dr. John Scott: The dyad model is basically linking a physician leader with an administrative leader. I think Mayo Clinic is probably the best example of that, and that’s something we employ quite a bit at UW Medicine.
Initial AI Applications at UW Medicine
Aaron Burnett: You and I have been corresponding and talking about AI as it’s the focus of this series. Can you tell me a little bit about how AI has enabled and impacted your work?
Dr. John Scott: It’s impacted quite a bit, and it’s something I’ve had to educate myself on extensively. I’m on the guideline committee for the Washington State Medical Association and also on the attorney general’s task force for AI. I think the first use case at UW Medicine that we’ve employed for AI is really trying to make sure that our workforce is not burning out, because COVID really affected the workforce from physicians to nurses to all the support staff. So the first two deployments have really been aimed at trying to help physicians with non-physician activities, number one being charting.
We talk about this metric of “pajama time” through our EMR. We can see when people are active on the EMR, and surprisingly, there are a lot of people doing activity around 10 o’clock when they should be in their pajamas and thinking about bedtime. One of our first deployments will be using ambient listening, and instead of the physician typing away at the computer, they’re actually just going to be talking to the patient and the technology will write that note. Then they’ll have to go back afterward to clean it up and edit it. At least in other initial deployments around the country, it’s really saved a lot of time and led to much better work satisfaction for our physicians. So we’re excited about that.
The other thing that has gone live was inboxes. MyChart messages are a great way to stay in touch with your physician, but that has exploded with the pandemic. I think every patient can say that it’s been harder to get in to see any kind of physician, so they’ve really relied on that, but our inbox messages have just exploded. We are doing a pilot looking at using ChatGPT to do the initial draft. There is never going to be a time when no human touches any kind of interaction—the physician needs to see the first draft, clean it up, and check it for accuracy. The first pilots are going well. What we’ve learned is that it may not save any time, but I think it helps with higher-complexity messages. Interestingly enough, it’s also leading to more empathic responses.
Aaron Burnett: One of the things that people worry about with AI is hallucination. Even in the past week, I’ve read stories about AI being used for charting or note-taking during patient visits, and that at times, information is captured that wasn’t actually present in the conversation. How do you protect against that?
Dr. John Scott: I think, just like anything else, you have to read the note. When we talk about this in the discussion about adapting this technology, we say we have a lot of notes written by medical students, residents, or fellows, and they might be hallucinating or might have made something up or copied forward. So I don’t think there’s an expectation that even with humans charting, it’s 100 percent accurate. It’s the same as before—you need to read the whole note and check for accuracy and inconsistencies.
Aaron Burnett: Is there work that you do or your teams do to specifically configure the LLMs that you use or to build them out in a specific and constrained environment so that the behavior is different than others might experience with ChatGPT?
Dr. John Scott: I think we’re using the out-of-the-box ChatGPT. It will be trained on our data though, so it’s specific to UW Medicine. We’re working primarily in these initial forays with large established companies. Microsoft acquired Nuance, and then for ambient listening, we’ve gone with a company out of Pittsburgh called Abridge. So we’re really excited about that opportunity.
AI and Research
Aaron Burnett: There are a couple of categorical ways that AI is impacting medicine. One is certainly in the patient experience. The other is in research—recently, a researcher at the University of Washington won a Nobel Prize for his discoveries around protein folding and protein design in combination with DeepMind out of Google. So it’s a combined AI-human Nobel Prize. I know that isn’t work that you do yourself, but can you give us a sense of that partnership and also the implications of protein design, which seems pretty profound on a global level?
Dr. John Scott: I think you’re talking about Dr. David Baker, who has a great local story. His parents were professors at the University of Washington. He went to Garfield High School and is a professor here. He runs the Institute for Protein Design. Proteins are the building blocks of life—they control things like cell signaling. They’re very important in hormone function and things like that. They’re a key part of any kind of immune response, so monoclonal antibodies, things like that are super important. What he did is he had a computer program, an AI program, and he was basically designing proteins that didn’t exist in nature. That had never been done before. They could say, “Hey, there’s a need here,” and design a protein to meet it. For me as an infectious disease doctor, it’s really exciting for drug discovery.
As some folks may know, there’s a growing problem of antibiotic resistance or antimicrobial resistance. You can talk about malaria resistance, things like that. There’s not a lot of investment from pharma, unfortunately, just because there’s often not a lot of money in it. Usually, you’re talking about one or two weeks of therapy. So it’s a race—we’re always feeling like we’re just one step ahead of the bacteria and viruses.
It’s led to the discovery of whole new classes of antibiotics. Probably the most famous one is a compound called halicin that came out of the MIT labs. I’m really excited about that. It’s also shortened the whole drug discovery process. Before, it was a lot of working in Petri dishes and testing against a whole library, but now the initial discovery is in silico, looking at the three-dimensional structure of that protein and how maybe another compound could fit in there. That’s usually what’s happening when we’re talking about antivirals or antibiotic compounds.
The other opportunity in research is trying to match patients with clinical trials. Before I got into this area of telehealth and digital health, I ran a large clinical trials program, and it was really dependent on me remembering, “Oh, this patient might qualify,” or putting up flyers. It was just, frankly, a haphazard process. But what we can do with AI is we can input the criteria for the clinical trial and then search in our database and say, “Hey, these patients would meet the criteria.” Of course, we’d go out and contact them and go through the whole informed consent process, but it’s going to allow for a much more efficient process of matching patients to clinical trials. The first step in the process is just asking if patients would ever be interested in clinical trials. Some people would say, “I just don’t want to do that,” and we can put that on their chart. We’ll never contact them. I’m really excited about that.
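The matching Dr. Scott describes can be pictured as a simple filter: structured trial criteria run against a patient database, with the chart-level opt-out honored before anything else. This is a minimal illustrative sketch, not UW Medicine's system; all field names, the `Patient` class, and the example trial criteria are hypothetical, and a real pipeline would query the EMR and handle far richer eligibility logic.

```python
# Minimal sketch of criteria-based trial matching. All fields and
# names are hypothetical; a real system queries the EMR directly.
from dataclasses import dataclass, field

@dataclass
class Patient:
    patient_id: str
    age: int
    diagnoses: set = field(default_factory=set)
    trials_opt_out: bool = False  # "never contact me" flag on the chart

def matches(patient: Patient, criteria: dict) -> bool:
    """Check one patient against structured inclusion criteria."""
    if patient.trials_opt_out:
        return False  # honor the opt-out before any other screening
    if not (criteria["min_age"] <= patient.age <= criteria["max_age"]):
        return False
    return criteria["required_dx"] <= patient.diagnoses  # subset test

hep_c_trial = {"min_age": 18, "max_age": 75, "required_dx": {"hepatitis C"}}

patients = [
    Patient("A", 54, {"hepatitis C", "cirrhosis"}),
    Patient("B", 61, {"hepatitis C"}, trials_opt_out=True),
    Patient("C", 16, {"hepatitis C"}),
]

eligible = [p.patient_id for p in patients if matches(p, hep_c_trial)]
print(eligible)  # ['A']: B opted out, C is under the age minimum
```

Anyone flagged as eligible would still go through outreach and the full informed-consent process; the filter only replaces the "remembering and flyers" step.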
Aaron Burnett: When we think about discoveries that are enabled by, or maybe powered by AI, one of the things that people fear who don’t know a lot about AI is that AI will discover something or make a recommendation that isn’t possible for a human being to have arrived at, at least in a particular time frame. How are those discoveries or recommendations then vetted or checked before a new drug or discovery comes in contact with patients?
Dr. John Scott: For example, if we’re discussing new antibiotics, the first step in the process is testing in animals. You do all the safety first in animals. Then there’s a first-in-man study—those are phase one studies. I’ve done a number of those, and those patients are often in the hospital and observed very carefully. So I think it’s the same process. It’s just doing a much more efficient vetting on the front end. And as we learn, we can say, “Hey, this compound we should not use because it’s been known to have a side effect.” Hopefully, we can screen those out even better.
Aaron Burnett: Am I right in recalling that one of the early COVID vaccines was an AI-enabled discovery as well at the University of Washington? Or am I misremembering?
Dr. John Scott: There was a group that did use AI to predict structure, and I think it was licensed somewhere in Asia, but by the time they had made the discovery and everything, we already had the mRNA vaccines and the Novavax vaccine. But it was good proof of concept for—knock on wood—the next pandemic. We’re probably going to have another outbreak of some virus.
AI and the Digital Front Door
Aaron Burnett: You mentioned creating a digital front door and that the notion is there is no wrong front door. How, if at all, do you envision AI facilitating that digital front door?
Dr. John Scott: I think it’s taking a conversational approach. Chatbots sometimes get a bad reputation because up until now, they have been very algorithmic. But an AI can be very flexible. Currently, if you were to try to see a new specialist at UW Medicine, we ask you to first identify what kind of specialty you want to go into, and sometimes it’s not really apparent.
Let’s take the example of a chronic cough. Is that a lung condition where you see a pulmonary doctor, or is it ENT, something wrong with your throat? That’s where a conversational chatbot can maybe help to get you to the right specialty. It also allows us to understand what’s most important to you. Do you want to see a provider of a certain gender? Do they take certain insurance? Do they speak a language other than English? Are they close to public transportation? Whatever’s most important to you gets surfaced, and we can really prioritize that in matching you to the most appropriate provider.
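The "no wrong front door" matching can be sketched as a ranking problem: score each provider against the patient's stated criteria, weighting the criteria in the order the patient prioritized them. Everything here is a hypothetical illustration—the provider records, preference fields, and weighting scheme are invented for the example, not drawn from UW Medicine's tooling.

```python
# Hedged sketch of preference-ranked provider matching. Provider
# records and preference fields are hypothetical illustrations.

providers = [
    {"name": "Dr. P", "specialty": "pulmonology",
     "languages": {"English", "Spanish"}, "near_transit": True},
    {"name": "Dr. E", "specialty": "ENT",
     "languages": {"English"}, "near_transit": False},
]

# Patient preferences in priority order (first = most important),
# e.g. as elicited by a conversational chatbot.
preferences = [
    ("specialty", "pulmonology"),
    ("near_transit", True),
]

def score(provider: dict, prefs: list) -> int:
    # Earlier preferences carry exponentially more weight, so the
    # patient's top criterion dominates the ranking.
    total = 0
    for rank, (key, wanted) in enumerate(prefs):
        value = provider.get(key)
        hit = wanted in value if isinstance(value, set) else value == wanted
        total += (2 ** (len(prefs) - rank)) * int(hit)
    return total

best = max(providers, key=lambda p: score(p, preferences))
print(best["name"])  # Dr. P
```

The conversational layer's job is to fill in `preferences`; once the criteria are structured, the matching itself is straightforward.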
Aaron Burnett: And it’s also important for folks to understand that in a medical context, a chatbot can be trained on and constrained to only the information that is provided by the provider.
Dr. John Scott: That’s correct.
Aaron Burnett: Do you also see implications for AI in virtual care, in telehealth, for example for ECHO?
Dr. John Scott: ECHO is a program that works best because of the relationships. I’ve been doing this now for 15 years, and over time, these become your friends all over the state. I think technology in general works best when it goes to the background and allows that relationship to really go smoothly and have just a fairly seamless interaction. I don’t want to put the technology in the middle of that relationship, but I do think there are some opportunities for ECHO.
First of all, there’s just a lot of information we cover. Several of the ECHOs we record are didactics. We usually start a session with an update on maybe a new topic or an essential thing that people need to know about. It’s 15 minutes. We’ve given thousands of hours of those didactics, and it might be really great for the participants to be able to say, “Hey, Dr. Scott was talking about this particular condition,” and they could use the chatbot to search that recorded didactic and it would play that one-to-two-minute clip. So I think AI does a really good job finding information, summarizing it, and bringing it to the user. I can see that, and also just the back office stuff—tracking CME and things like that and just making our team more efficient.
For telemedicine, I think there are some real opportunities to use best evidence for patients. Let’s say you’re staffing a telemedicine visit and it’s an urgent care visit for sinusitis symptoms. The AI could be listening to the interaction and saying, “Hey, these are the guidelines from the Infectious Disease Society of America and also the guidelines from ENT.” So it can really help you to practice evidence-based care and also be a good steward of antibiotics, a good steward of testing.
Lastly, it’s just providing education back to the patients. One of the things we’re really excited about with Abridge is it can provide a specific after-visit summary. It summarizes the conversation, pulls in information from UpToDate, which is a great resource that we use, and then patients can have information to refer back to. I think it’s really important that patients are educated on their conditions, and AI can really help with that.
AI’s Impact on Doctor-Patient Relationships
Aaron Burnett: You mentioned that AI in the context of inbox management or even charting has surprisingly resulted in maybe more empathetic responses. How do you envision that AI might change the role of a physician and what’s important in a physician-patient relationship?
Dr. John Scott: That’s a really great question. The AMA actually doesn’t use the term “artificial intelligence.” They use the term “augmented intelligence.” I’ve given talks to physicians about AI, and one of the most common questions I get is, “Is AI going to take my job?”
There’s a famous quote from one of the past presidents of the AMA who said, “AI is not going to replace doctors, but doctors that use AI are going to replace those that don’t.”
Aaron Burnett: You say exactly the same thing in digital marketing.
Dr. John Scott: Exactly. So I encourage my colleagues to play around with it and to see how it can make you more efficient. That’s the goal. The crux of the matter is trying to be more efficient and also trying to provide safer, more high-value care to our patients.
Aaron Burnett: In digital marketing—and I can speak for myself personally—there is a sense that value delivered is derived from time and effort, that there is a work component that’s required to feel that you’ve delivered value. I know that the life of a doctor is very rigorous and arduous. AI might make it less arduous. How do you think it might change the nature of work and what it feels like to be a physician?
Dr. John Scott: I really hope it takes us back to the good old days where we used to write a couple of lines and the doctor actually was looking at the patient and truly being 100 percent present. That’s my hope and dream for AI, and I think it can do that while also helping us to be safer and more evidence-driven.
We like to talk about upload and download time. One of the most common complaints I hear from patients is, “This is like the third time I’ve told this story.” I think it would be really great if, when you’re preparing for an outpatient visit, the chatbot says, “Hey, why are you coming today?” and maybe gets those first three or four questions. Then when you’re sitting down with the provider, they can see, “Oh, I see you’re coming for this and it’s been going on for this long.” So basically, you’re spending the majority of your time as a physician counseling them on their test results, counseling them on the options, being more of that partner in decision-making. I think that’s going to be a higher-value visit for patients and providers.
UW Medicine’s AI Guidelines and Principles
Aaron Burnett: Do you, or does the University of Washington, have a rubric or a framework that you apply when considering where AI should or shouldn’t be used?
Dr. John Scott: We have a task force for AI that released their first guidelines earlier this year. Some of those principles include, first of all, privacy—that is probably the most paramount aspect of AI. I think a lot of people don’t appreciate how just putting a little bit of patient information into a chatbot is like releasing it into the wider world. So the initial guidelines state: do not put any protected health information into a chatbot. That was a hard line.
The second is just the importance of bias and equity. Large language models all have bias because of the data that’s going in. So I think it’s really important to be checking those outputs and checking your model for bias and making sure we’re not widening any of the health inequities we have there. I was just recently at a health conference in Las Vegas where I heard executives from Microsoft talking about a tool called Fairlearn, which checks for bias. I hope that becomes more of the standard for these LLMs—a bias-detection step that gives you a score, so you can fine-tune your model to reduce bias. You do take a hit on accuracy, but I think that’s probably worth it.
I think the third principle we’re talking about is intellectual property. That’s a big thing in academia—making sure that you’re citing where your sources are and that you actually go back and verify that the study you’re citing concluded what it says. I review a lot of papers and manuscripts before they’re being published, and I’m seeing more and more the statement of whether you used AI or not in the research and writing of the manuscript. As a reviewer, I want a lot of specificity. What exactly did you use AI for? There have been a couple of times where there have been some red flags.
I think intellectual property is going to be a big thing, and just double-checking that, and then just validation—there can be drift in your models, so there needs to be constant reevaluation of those models. Those are some of the guiding principles that we’re employing at UW Medicine.
Aaron Burnett: You mentioned the use of AI for charting, and you’ve mentioned privacy here. So I think it’s probably important to clarify that in the context of charting, I assume you’ll be running a local LLM.
Dr. John Scott: That’s right. And it doesn’t get stored. The conversation doesn’t get stored permanently.
Aaron Burnett: Yeah.
Dr. John Scott: As soon as you’re done sending your note, that gets deleted.
Aaron Burnett: It can be configured in a way that protects PHI.
Dr. John Scott: Correct. That was a hard stop in any kind of contract.
Aaron Burnett: Exactly. We are similar in that we have access to PHI because we work with healthcare providers. So all of our infrastructure has to be HIPAA compliant and all of our data has to be handled in a very specific way.
AI’s Impact on Healthcare Workforce and Economics
Dr. John Scott: I’ll just tell you a question I asked one of the leaders at Microsoft last week. My question was, “What keeps you up at night when you think about AI?” And he said actually it’s the dramatic change in the workforce that might happen from AI. And I think that goes for healthcare. There are a lot of skills that AI can probably take over, and that’s going to have profound implications.
There’s a huge drive for efficiency, there’s a huge drive for trying to save money. But I don’t want to have that happen at the expense of people. So we need to be very thoughtful about retraining those folks and showing them how AI can make them more efficient. So that’s probably the biggest concern I see for AI when it comes to healthcare that people maybe aren’t thinking about.
The second thing would be ROI. That’s getting talked about a lot for us because we have very thin margins. And this is not going to be something we’re going to get for free. So what is really the value for the healthcare system, value for the patient? I think we need to be very thoughtful about that.
We started out with use cases that were really geared towards preventing burnout, but that probably doesn’t have as much of a good ROI. I think where you’re going to start to see some improvements in driving down costs and improving efficiency will be more in the back office, so trying to make sure that our claims, when they’re submitted, get approved the first time, making sure we’re coding properly. Physicians are horrible about coding accurately, even though we’ve been trained for hours and hours on end. It’s just not something we’re interested in.
Aaron Burnett: Yeah.
Dr. John Scott: It seems to be the pinnacle of tedious work. The average doctor just wants to be able to take care of the patient. So I think you’re going to see a lot more pivot towards that back office operation and then more on case management, making sure those patients don’t fall through the cracks.
The third area is in population health. There are going to be insights from AI that are going to have profound implications on the health of a population. Let me give you an example of that. There was a study recently that looked at chest x-rays being used for predicting valvular heart disease and also predicting diabetes, and it’s surprisingly good. A chest x-ray is the most common radiological procedure. Just so the audience knows, that’s not usually how you diagnose diabetes or diagnose valvular heart disease, but it’s really accurate. So it might be a tip-off to say, “Hey, we need to do some follow-up testing and confirm this.”
It’s an insight that can only happen when you have huge databases and you’re taking overlapping databases and then helping to guide targeted interventions. It’s something that Dr. Eric Topol has called “machine eyes,” and I think that’s a super exciting area. Again, it’s going to drive outcomes at scale. I think that’s something we’re just exploring.
AI Applications in Medical Imaging and Diagnosis
Aaron Burnett: I think that’s been some of the most interesting reporting on the use of AI in medicine—impacts on quality of care and the stories where AI has been able to provide early identification of certain cancers better than radiologists can. You mentioned chest x-rays. What sort of other circumstances have you seen where AI is improving quality of care?
Dr. John Scott: AI does really well in imaging-heavy specialties—radiology, pathology, ophthalmology, dermatology. I think what it does is it helps to counteract the effects of fatigue. If you look at the quality of a radiologist’s performance, it goes down during the course of the day. They’re obviously better in the first couple hours of the shift, but towards the end, they miss things. So I think AI is like that little man on their shoulder saying, “Hey, look over here.” And they say, “Oh yeah, there is something there.” It may not be the reason for the study, but it may be something that is very important for them.
The other thing, like in pathology, is taking that image and overlaying genetic information. Let’s take the example of lung cancer. It can take that image of the biopsy and say, “Hey, it most likely is going to have these genetic signals, and therefore you should be using these specific chemotherapeutic agents.” I think that really shortcuts the turnaround time and leads to much more targeted and effective treatment. You’re going to see a lot more of that kind of stuff.
Aaron Burnett: That’s very interesting.
Dr. John Scott: I think that’s a company that might be interesting to look at as well.
Aaron Burnett: I’ve really enjoyed the conversation. I appreciate your time.
Dr. John Scott: Me, too. Thank you.