Artificial Intelligence and Machine Learning in Medicine – PediaCast 541

Show Notes


  • Drs Laura Rust and Tyler Gorham visit the studio as we explore the intersection of artificial intelligence, machine learning, and medicine. Technology has changed the delivery of health care in amazing ways… and there is more innovation to come! We hope you can join us!


  • Artificial Intelligence
  • Machine Learning
  • Medical Research, Teaching, and Practice




Announcer 1: This is PediaCast.

Announcer 2: Welcome to PediaCast, a pediatric podcast for parents. And now, direct from the campus of Nationwide Children's, here is your host, Dr. Mike.
Dr. Mike Patrick: Hello everyone, and welcome once again to PediaCast. It is a pediatric podcast for moms and dads. This is Dr. Mike coming to you from Nationwide Children's Hospital. We're in Columbus, Ohio. 
It's Episode 541 for July 11th, 2023. We're calling this one "Artificial Intelligence and Machine Learning in Medicine". I want to welcome all of you to the program.
So we have an interesting topic for you this week, as we consider the intersection of artificial intelligence, machine learning, and medicine. 
AI has become an everyday helper in most of our homes, from Alexa and Siri, to customized Internet search results, to smartphones, smartwatches, smart thermostats. And we've seen the rise of AI and machine learning in chatbots, including ChatGPT and others that are quickly emerging. 
Machines observe our behavior and offer customized advertisements. They help us choose a movie or suggest which streaming series we should watch next. And they keep us constantly connected to our family, friends, and co-workers.
Medical research, training, and practice have also seen an explosive rise in the use of artificial intelligence and machine learning. Clinical pathways, order sets, and machine interpretations of X-rays and EKGs are just the tip of the iceberg. What will the next decade bring, and how will AI and machine learning change the face of medicine and improve disease diagnosis, treatment, and outcomes? 
Time will tell, but we do have some predictions based on what has already been developed and deployed. And today, we'll take a look at the technology side of medicine and the many ways in which researchers, teachers, and medical providers are using AI and machine learning to advance healthcare. 
To help us with this conversation, because I am by no means an expert on artificial intelligence or machine learning, we have a couple of people who are experts. And they are visiting the studio today. Dr. Laura Rust is an emergency medicine physician and clinical informatics expert at Nationwide Children's Hospital. And Dr. Tyler Gorham is a data scientist with Nationwide Children's. 
Before we get to them, let's cover our usual quick reminders. Don't forget, you can find PediaCast wherever podcasts are found. We're in the Apple and Google podcast apps, iHeartRadio, Spotify, SoundCloud, Amazon Music and most other podcast apps for iOS and Android. If you like what you hear, please remember to subscribe to our show so you don't miss an episode.
And also, please consider leaving a review wherever you get your podcasts so that others who come along looking for evidence-based child health and parenting information will know what to expect. We're also on social media and we love connecting with you there. You'll find us on Facebook, Twitter, LinkedIn and Instagram, simply search for PediaCast. 
We also have that handy contact link on our website if you would like to suggest a future topic for the program.
Also, I want to remind you the information presented in PediaCast is for general educational purposes only. We do not diagnose medical conditions or formulate treatment plans for specific individuals. If you have a concern about your child's health, be sure to call your healthcare provider.
Also, your use of this audio program is subject to the PediaCast Terms of Use agreement, which you can find on our website.
So, let's take a quick break. We'll get Dr. Laura Rust and Dr. Tyler Gorham connected to the studio. And then, we will be back to talk about artificial intelligence and machine learning in medicine. It's coming up, right after this. 
Dr. Mike Patrick: Dr. Laura Rust is an emergency medicine physician and clinical informatics expert at Nationwide Children's Hospital and an assistant professor of Pediatrics and Biomedical Informatics at the Ohio State University College of Medicine.
Dr. Tyler Gorham is a data scientist at Nationwide Children's. He earned his PhD in Public Health and Environmental Health Sciences from the Ohio State University and now collaborates with other data scientists and clinicians to build machine learning algorithms that predict adverse health events and improve patient care. Both have a passion for using data and technology to improve child health.
That's what they're here to talk about, artificial intelligence and machine learning in medicine. But first, let's offer a warm PediaCast welcome to our guests, Dr. Laura Rust and Dr. Tyler Gorham, thank you both so much for visiting us today. 
Dr. Tyler Gorham: Thank you so much for having us. We're excited to be here.
Dr. Laura Rust: Yes, thank you for having us. 
Dr. Mike Patrick: We've already used some pretty big words today so far, clinical informatics, data science, artificial intelligence, machine learning. And I think that a great place to start is just defining some of these terms so that folks have an understanding, a baseline understanding of what we're talking about. So, Tyler, what do we mean by the term artificial intelligence, or AI? I think we all think of, like, Alexa and Google when we say, "Hey, Google." But really, there's a lot more behind artificial intelligence than just that, right? 
Dr. Tyler Gorham: Yeah, there's more behind it, but I'd say it's also maybe simpler than we picture. We think of these robots that are going to take things over or something. But the AI that Laura and I work with day in, day out is much simpler than that. 
So, if we think of intelligence as just the ability to acquire skills or to learn something and slapping on artificial to that, it's really just a fake form of learning or a fake form of a skill that we can train a computer to do, right? So artificial intelligence, then, is just teaching a computer to be able to do some form of, say, pattern recognition like humans are able to do. 
Dr. Mike Patrick: And it's interesting because from an organic standpoint, our brain just does that somehow.
Dr. Tyler Gorham: Right. My toddler is so much smarter than Alexa. 
Dr. Mike Patrick: Yes. And the machine is only going to be as smart as the person who is programming it and making the algorithm for it to learn. 
Dr. Tyler Gorham: Yeah, to some degree. So, one of the cool things about artificial intelligence and machine learning specifically, which is a subset of artificial intelligence, is the more data we can kind of feed the computer or feed that algorithm, the better that machine learning program is going to be able to perform.
So, that's where we get the term "learning" in machine learning, where the more data, the more training examples we have, the better these systems are able to perform. And so, it is much like human intelligence in that the more, say, study examples you have before you take a test, the better you're going to do on that test. 
So that's really where we get the machine learning point, is the more data we feed, the more that algorithm is able to learn, the better it's able to perform. So, it's similar to human cognition in that way. 
Dr. Mike Patrick: And so, when we say machine learning, you mentioned that's a subset of artificial intelligence. And in particular, it's where, depending on whether an answer is correct or incorrect, the machine starts to learn which ones are the correct answers, which can then help it ask the right questions next time. 
Dr. Tyler Gorham: Yeah, that's great. That's exactly right. So, we often use them interchangeably, or the media certainly will. And I think the only people that care that there is a difference are nerds that are doing it. And so, I think it's safe here if we use machine learning and artificial intelligence kind of interchangeably.
Dr. Mike Patrick: And some examples of machine learning outside of medicine that maybe folks have heard of, for example, facial and voice recognition. The computer has to decide, is this face I'm seeing a particular person's? And the way that it learns, is it gets "Yes, that's correct", or "No, that's not correct." 
And so, then it starts to build on the algorithm to be able to hopefully get more correct answers as time goes by. Is that right? 
Dr. Tyler Gorham: Yeah, that's exactly right. And you can train it to decipher a cat versus a dog or a cat versus an avocado, for example. But a lot of these machine learning examples that we look at today are kind of domain specific.
So, you may have heard of this ChatGPT, it's kind of a chatbot that's really popular and you can ask it to write a history paper for you. Or you can ask it to write a podcast script if you want. That might make our jobs a little easier. 
Dr. Mike Patrick: Why didn't I think of that? 
Dr. Tyler Gorham: And there's some concern around that. And is it becoming too smart or something? But as soon as I think about, okay, it's really good at text-based chat, but I can't ask it if a picture is a dog or a cat, right?
So, a lot of these tools that we use are very domain specific. So, they're very impressive. But it falls well short of something like human level intelligence. 
Dr. Mike Patrick: For now, who knows?
Dr. Tyler Gorham: Yeah, for now, or at least until they start. 
Dr. Mike Patrick: Yes. And then, how do these concepts of artificial intelligence and machine learning, how do those intersect with medicine? What kind of things and domains in medicine could these be useful? 
Dr. Tyler Gorham: That's a great question. So, a few of the early examples were things like computer vision. So, if we give the computer a lot of examples of X-rays and show where on that X-ray a bone fracture is, for example, or a picture of an eye and we say this is a certain disorder of the eye, and you label that image. So, we're just giving it kind of the answer to learn from.
Computer vision was really one of the great early applications in medicine, where today maybe we are able to identify fractures on X-rays as well as a physician can. What our team at Nationwide Children's and Laura's team in Clinical Informatics are much more interested in is what we would call augmented intelligence. So rather than trying to replace physicians or do better than physicians and nurses, we really just want to complement their training, expertise, and skill sets and help make their jobs a little bit easier and maybe even help them perform better as clinicians.
So, that can even mean simplifying their work, things like note-taking, right? We could record a patient encounter and have something like ChatGPT summarize what actually happened during that encounter, so that you don't have to go back and spend 20 minutes writing your patient notes afterward. That would be one example, called natural language processing. 
Dr. Mike Patrick: And all those examples that you gave are really in clinical medicine and helping medical providers practice medicine. But AI and machine learning can also be used in medical research and even in education, right? Like simulation kind of stuff.
Dr. Tyler Gorham: Right. So the medical research side is pretty remarkable. So historically, we've needed to do these long case control studies and those will continue to happen, certainly. But you may have a wet lab situation where we're trying out 1000 different drugs to see which one will actually move the needle in X or Y disease. 
And what they're using machine learning for now within medical research are things like genomics and gene expression. So, we now have the entire human genome sequenced, and we've had that data for about two decades. 
And so, they're able to see, based on the DNA of a patient and what health outcomes they have, try to correlate those two things. So maybe you're not presenting with a certain disease today, but through gene expression data, they're able to kind of predict what you might have in the future, right? 
They're also doing de novo drug design and drug target identification. So again, rather than doing a lot of assays that require manual pipetting and a lot of medical students or grad students in the lab, they can kind of simulate that and see where this drug might impact the cell or impact the DNA. And is there another use that we could have for our existing drugs? 
Another field is protein folding, which is certainly above my head, but it was really big news within our field this past year when DeepMind released something called AlphaFold. Previously, researchers would have to kind of simulate and estimate how a set of proteins would fold, which is a really complex problem. And AlphaFold was a machine learning-based breakthrough in that, which should contribute to future drug discoveries.
Dr. Mike Patrick: It's remarkable, really, that you can take the body systems in a human and then kind of have a machine learn what the results of all the interactions would be. So then, if you put a particular drug or medicine into that system, maybe you can predict a little better what's going to happen before you even use animal models. 
Dr. Tyler Gorham: Yeah, absolutely. And I think part of the machine learning side of it is we're really just learning patterns, right? So, similar to your training and your years of experience, you've seen what happens when a patient that looks like X or Y or Z takes this drug, what kind of side effects might they have? Will their blood pressure respond to this, for example? 
Machine learning in this case might just be looking at those patterns of past patients that look like the patient that's in front of you and then seeing what drugs have been tried for their disease or disorder or condition, and then predicting if this drug is likely to be effective based on kind of its past experience. 
Dr. Mike Patrick: Very interesting and really opens up a whole new way of doing research or at least making it a little bit safer by the time that the drug gets to humans. 
Dr. Tyler Gorham: Certainly, certainly. 
Dr. Mike Patrick: And then, Laura, I want to bring you in as a physician. How can AI and machine learning be used in medical training? I had mentioned simulation, but that really can be a useful thing, especially for new physicians that are just learning the trade. 
Dr. Laura Rust: Yeah, I think it's a great question and kind of where do we start to introduce that topic? I think it needs to be kind of bumped up earlier and earlier. 
I read an article recently in the Journal of Medical Informatics that shared that an AI model was used to help screen medical school applicants for admissions. So, it does strengthen the argument that if it's being used before you even step foot in the door of a medical school, you should probably learn to integrate it within the curriculum.
I think medical education has greatly evolved since I went nearly 15 years ago. And historically, it's been segmented into two years of traditional classroom-based education, followed by two years working clinically at the bedside. But from my research, it seems that we haven't really integrated healthcare related machine learning topics or clinical informatics concepts in our standard curriculum.
I think they get exposed to it on their clinical rotations, but I still find that many residents and attending physicians are surprised by the very existence of the field of clinical informatics. And it is something that they use every single day. 
So, I think that there's a lot of room for growth in how we teach these tools and how to use them responsibly. Not that every physician needs to master the concepts of machine learning and become a data scientist, but they should have that understanding of how it applies to their particular patient, like Tyler mentioned before, and of its limitations. So, we don't expect everyone to be quite as nerdy as Tyler and myself.
Dr. Mike Patrick: I think that limitations is a really important thing, isn't it? Because it would be easy to say, oh, this is machine learning. It must be absolutely, positively correct. But it really still takes that human mind to look at the context of all the data points that the machine may not even be considering, like family history, for example. So just knowing that those limitations exist, I think can certainly be helpful and make the practice of medicine a little safer. 
Dr. Laura Rust: Agreed. Yeah, I think it's good to have that healthy skepticism and to recognize that the computer and the machine are only as good as the information that gets input. And there are several factors that play into medicine when you're interviewing a patient, those intangible things like reading emotions and exam findings that the computer won't be able to pick up on. So, it really is just a piece of the puzzle. 
I will say there are a lot of efforts, not in the standard curriculum, but definitely a lot of learning opportunities to expose yourself to the field of clinical informatics and learn about it as a tool. The American Medical Informatics Association has put together several opportunities for that self-guided learning, but I would love to see it become a more formal part of medical school education, too.
Dr. Mike Patrick: Absolutely. As we think about the practice of medicine, there are some opportunities, I feel like, where the medical training and the practice of medicine kind of intersect with one another. And so, when we think about things like using the electronic medical record for clinical pathways and order sets, especially when you're just beginning training as a physician, those can really be helpful. And then, they can also be helpful for practicing physicians because it kind of reminds you, especially things that maybe you don't see all the time, having algorithms and pathways and things like that can be helpful.
Run us through how AI and machine learning is really helping with the practice of medicine at all of these levels, from young physicians who are just starting out to those of us who have been around the block for a little while.
Dr. Laura Rust: I think I'm kind of in the middle of the block, if you will. So, I have kind of gotten to see both worlds of how this has been used within medical practice. So, AI can assist in diagnosis, in risk stratification if you're trying to gauge what will be an optimal treatment pathway for your patient that you're seeing. And I think it touches kind of all of those realms. 
So, it's really as important as learning basic pharmacology, I think. And I've often wondered whether being a doctor in the 1940s may have been simpler in some respects. There was only one antibiotic, penicillin, to know at the time.
But nowadays, thankfully for myself and my career, we live in an information age. And as consumers of that information, we may be victims of increasingly complex data that's always immediately available at our fingertips. And so, I think AI and machine learning can really help us make sense of some of it in a reasonable amount of time. 
Because, while human cognition is very sophisticated and picks up on small nuances that computers can't, at the end of the day, we can't process thousands and thousands of pieces of information within a matter of seconds. 
So, I think, an example of how we kind of use this within medical practice from a machine learning standpoint is the model that Tyler and I worked on together as a part of a larger partnership called the Deterioration Risk Index. This is a model that we trained on disease-specific groups such as those with congenital heart defects or cancer, recognizing that each of these patients have unique characteristics that required special attention for model development. 
And so, what we do is we try and help predict kids within these cohorts that would be at risk for having deterioration or bad outcomes in the hospital. And then, we alert the care teams to that risk and so that they can intervene. 
The system is not smart enough to kind of tell you what to do, it still relies on that clinical acumen of the physician taking care of the patient and the bedside care team for what is needed specifically for that patient, to help kind of intervene before things get bad. 
I will say one of the unique things about Nationwide Children's is how we handle model implementation. Across healthcare, less than 10% of predictive models are ever implemented at the bedside. But I think our collaboration with our Data Science team has allowed us to ask the questions up front, before we even invest resources into model development: how will this impact patient care? Can we implement this? And usually, that's using the electronic health record as a tool.
So, we ask that question up front and if that answer is yes and we have a way to do that, then we invest the resources to make an impact. 
Whereas, that's not always the case in academia, many times models will be developed and then they kind of never make it to the implementation side of things. But I think that's where we shine at Nationwide, that we make that a priority here.
Dr. Mike Patrick: Absolutely. Just in my own practice, I think about the sort of things that you're talking about, like trying to predict when a kid might be in trouble or might need a particular intervention earlier rather than later. One of those for us in the emergency department would be sepsis. And if you're wondering what in the world sepsis is, just look up PediaCast and sepsis in a Google search bar. We have done whole episodes on it. 
But it's important to identify kids who might have this condition early so that you can intervene and make a difference in outcomes. And I know in the emergency department, there's basically machine learning that's looking at various components of the vital signs and some pieces of history, and it can maybe predict which kids are at risk for sepsis. And then, that pops up and lets the medical team know that this is a concern. Is that something that your team developed? 
Dr. Laura Rust: We worked in partnership. I would say our Data Science team did work on those sepsis efforts. And it is a tool that has spread nationally; Epic, our EHR vendor, has used it as kind of a nationwide standard. So, I think that we've set the bar in that regard. 
Dr. Tyler Gorham: I want to give credit where credit's due, though. That was really developed and led by the Clinical Informatics team, so a separate team from ours. I think that project might have even predated our Data Science team. 
Dr. Mike Patrick: But the important thing is collaboration here too, right? So, when we think about the data scientists and the clinical informatics folks, I mean, there's sort of different realms, but there's an intersect as well and lots of collaboration that goes on and this is just one example of those things. 
Dr. Tyler Gorham: Yeah, certainly. We found with our projects, the sooner we bring everyone to the table, the better, right? So I can write some computer code, but it would be completely nonsensical without all of you that actually know and practice medicine, right? 
And even if I have the right physicians or nurses at the table, it takes someone with that informatics background to bridge the gap between what we might program in the computer and then how is that actually going to be used in practice. So, these collaborative efforts, they really are kind of our bread and butter.
Dr. Laura Rust: We joke that data science is a team sport. And so we have found that to be our secret sauce of involving not only clinicians, but data scientists and informatics experts from my team, but then also those who are building it within the system, who are experts in that regard. 
So, I think having all of those people at the table early on, as well as kind of throughout the process. So, data scientists traditionally, I don't think, have been as involved in the implementation process and hearing the clinical team speak about how it will be used, but that is something that we have started doing. And I think that's just really helpful that everyone has that shared awareness of how to use it and implement it successfully. 
Dr. Mike Patrick: Yeah, absolutely. When we think about the practice of medicine, there's a role for AI and machine learning in the way that we make decisions. But there's actually also more and more machines involved in medical care itself. And I'm thinking about things like pacemakers and insulin pumps and things that folks live with every day. And it's a machine that's making decisions on whether to give you a shock or not or when to release insulin and how much insulin to release. 
And I would imagine that in the beginning, as they're designing the software behind what's running the machine, there must have been some kind of learning algorithms that take place to program that. Maybe I'm simplifying that too much. Tyler, what do you think? 
Dr. Tyler Gorham: No, I think you're exactly spot on. Continuous glucose monitoring and insulin pumps are a great example. A lot of our patients with Type 1 diabetes will wear a continuous glucose monitor, but not all of them have it connected to their insulin pump, right? So, they may know, if my blood glucose is rising or falling, how should I change my insulin pump accordingly?
But it certainly required someone to take a step with machine learning to say, if the blood glucose is trending at a certain pace, then the algorithm will be able to say, okay, we need to apply this much insulin, because we see for this patient that their blood glucose is starting to spike, or whatever that example may be. So, definitely a good example. 
I even read an article about a dad who was kind of disappointed by the pace of this research, and so he created his own algorithm for his son's continuous glucose monitor and insulin pump to be able to work together. So, a dad solved this maybe before the field as a whole did.
Dr. Mike Patrick: We do not encourage families to tinker with their insulin pumps. That's a really cool story, though.
Dr. Tyler Gorham: He was an engineer, I should say. 
Dr. Mike Patrick: Yes, I'm going to put a link in the show notes to an interesting article from Nature called "Eight Ways Machine Learning is Assisting Medicine", if you'd like to know more about the various ways in which AI and machine learning are being used in medical science, whether that be research, training, or practice. It's a pretty interesting article, so folks can take a peek.
We've mentioned the electronic medical record several times. And Laura, I trained at a time when the only electronic medical record was in the ICU, like they had just come out. There's this, you know, instead of writing a paper note, you typed it into the computer. Everywhere else in the hospital when I trained, we still had the metal charts and it was free writing in paper. 
But really the electronic medical record has become way more than just documentation, right? I mean, there's just so many uses. Tell us a little bit about how the electronic medical record really improves patient care and can be used as we think about artificial intelligence and machine learning.
Dr. Laura Rust: Yeah, I think increasingly more of a provider's time, and even the entire bedside care team's time, is spent on a computer, which has benefits. And, as a lot of people will tell you, a lot of drawbacks as well. 
I think part of what I like about my specialty is that I get to really investigate how do I use this as a tool to improve provider efficiency, improve care? And I would say most cases, like the electronic medical record is the interface for how physicians on the care team interact with healthcare-based machine learning.
So, it's kind of under the larger umbrella of what we call clinical decision support, where we use lots of different tools within the electronic health record to help with decision-making. As you mentioned before, with our Pathway program, we use things called order sets to help standardize care delivery, which is great for new learners and also great for those who have been around the block, like you mentioned. They kind of remind you that things have changed, research has evolved, and now this is the new standard.
So, I think machine learning is one of those tools. And we have traditionally used it more as an alerting mechanism, where an alert pops up in front of the provider because the algorithm is constantly running in the background, constantly evaluating without any input required from the user beyond what they're already putting in the system. And then, it shows up at the right time to the right person who can make the intervention. 
So, I think one of the areas that I find most interesting and challenging within this work, is how do we operationalize that model within our clinical workflows? And usually that's using the electronic health record.
But we've kind of gotten fancy with it in some regards, where we do simulation sessions of these tools within that electronic health record system to know, what is that feedback? How will providers respond? How does a trainee respond versus a nurse respond?
Should we make it look different to them at the end of the day? So that it means something to them clinically and they know what to do with that information? 
The other very neat thing that we can do is avoid what we have traditionally called a black box model, where you don't share why the patient might have met a certain criterion. So, with our Deterioration Risk Index model, we tell the clinicians, this is why your patient is at risk, based off these criteria that the model found in the system.
I think there are some models that just say your patient is at risk without kind of any context. But we found that clinicians want to know why a patient may or may not have met certain criteria. And then, they can also contextualize it, "Oh, you know what, this may not apply to my patient because of this exclusion." So they can use that kind of higher-level thinking to, like I said, apply it to that specific patient. 
Dr. Mike Patrick: Yeah, it's kind of like the difference between the check engine light coming on versus something more specific that tells you exactly what you need to do with your engine.
Dr. Laura Rust: Right. Yeah. Cars have also gotten more sophisticated, thankfully, because everyone, I think, gets that panic moment when they see that check engine light. But, yeah, the check engine light being more general versus saying, "Oh, you need to update your spark plugs," for instance. So, we try and kind of be very transparent in what we show the people, the end users, and what caused the model to fire. 
Dr. Mike Patrick: Yeah, actually... Oh, go ahead, Tyler.
Dr. Tyler Gorham: Yeah, I was just going to say this has also really helped us build trust with the nurses and physicians that weren't on the team to help build the model, whatever model that may be, the Deterioration Risk Index we talked about earlier or any of the other models that we've implemented. 
We found that when you provide the risk factors that drive the score, you show there was some thought put into the model and that it thinks in a similar way to the way you do. That isn't really surprising, because we had trained medical professionals providing input as to what those risk factors should be in the first place. I think that helps build trust, instead of just this kind of ominous alarm that says, hey, the computer thinks that something's wrong.
Dr. Mike Patrick: Yeah, I agree. Because if it's one little piece of data that actually ended up causing it to fire and you can explain that because you have more context than the machine has, I can see where that would be really helpful. And kind of reassuring that, "Oh, I understand why it fired." 
And either, "I'm going to act on that or I'm just going to watch for a little while before I intervene," or, "No, this is not a problem at all." And that really helps with that kind of decision making, for sure. 
I want to mention that both of you, collaborating as a clinical informaticist and a data scientist, wrote an article together called "Using Machine Learning in the Electronic Medical Record to Save Lives". 
And I'm going to put a link to that in the show notes so folks can find it and read it, if you want to learn more about the intersection of the electronic medical record and artificial intelligence and machine learning. 
We've been talking a lot about collaboration. And I think, Tyler, from a data scientist's standpoint, through that lens, at the end of the day, there are computer programming and data sets and things that are very foreign to most medical clinicians. So how does that team operate together and learn to speak the same language? 
Dr. Tyler Gorham: Yeah, certainly. So, as I said earlier, we found we have a lot more success when we bring as many people to the table as possible, or at least the right people from different teams. And so it starts with problem formulation. For this, we'll just keep using the same example of the Deterioration Risk Index. 
I wouldn't have personally known that patients deteriorating in the hospital is something that we could identify or prevent, right? And so two of our physicians early on, Tensing Maa and Ryan Bode, and then Laura, who came on board later, said, "Hey, we have this problem where some of our patients come to the hospital, we're not expecting them to deteriorate, and it seems like maybe we're missing something in their physiology along the way, something that, had we caught it, could have prevented this event." An event that causes, maybe, cardiopulmonary failure or a rapid emergency transfer to the ICU. 
And so just from the outset of the problem formulation, that's not something that the data scientists at our hospital are coming up with, right? Our physicians and researchers have the clinical problem or idea in mind and we just help them take the data that's available at our hospital and create these algorithms. 
So then at that point, the Data Science team goes and programs to allow the computer to learn the patterns in the data. And then we hand that model back off to the Informatics team, who then go and deploy it within the electronic health record. 
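To make that hand-off a little more concrete, here is a minimal, hypothetical sketch of how an interpretable risk score can report the "why" along with the number, in the spirit of the workflow described above. The feature names, weights, and threshold below are invented for illustration; they are not the actual Deterioration Risk Index.

```python
# Hypothetical, hand-set weights standing in for what a training step might
# learn (log-odds contributions). These are invented for illustration only.
import math

WEIGHTS = {
    "heart_rate_above_age_norm": 1.2,
    "respiratory_rate_above_age_norm": 0.9,
    "recent_icu_stay": 0.8,
    "oxygen_requirement": 1.5,
}
BIAS = -4.0  # baseline log-odds for a patient with no risk factors


def risk_score(patient: dict) -> tuple[float, list[tuple[str, float]]]:
    """Return (probability, per-feature contributions), so the clinician
    sees *why* the score fired, not just an opaque alarm."""
    contributions = [
        (name, WEIGHTS[name] * patient.get(name, 0.0)) for name in WEIGHTS
    ]
    log_odds = BIAS + sum(c for _, c in contributions)
    probability = 1.0 / (1.0 + math.exp(-log_odds))
    # Sort so the largest drivers of the score are listed first.
    contributions.sort(key=lambda item: item[1], reverse=True)
    return probability, contributions


# Example: a patient flagged on two of the invented risk factors.
patient = {"heart_rate_above_age_norm": 1, "oxygen_requirement": 1}
prob, drivers = risk_score(patient)
print(f"Deterioration risk: {prob:.1%}")
for name, contribution in drivers:
    if contribution > 0:
        print(f"  driver: {name} (+{contribution:.1f} log-odds)")
```

The design point is the second return value: instead of only a probability, the model surfaces each factor's contribution, which is what lets a clinician say "this exclusion applies to my patient" and override it.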
Dr. Mike Patrick: So interesting and something that when you think about how all of this developed over time, there probably wasn't really any kind of training program to learn to do this specifically. But that's not necessarily the case anymore. Are we training scientists right out of the gate to think about these principles and this sort of collaboration? 
Dr. Tyler Gorham: Yeah, that's a really great question. It's a very new field to have specific training in. Everyone on our team comes from either more of a scientific background, like myself and another teammate who have PhDs in public health,
or a computer science background, where machine learning is kind of old hat to them, and they then get up to speed on the clinical side. So we're kind of this ragtag group of programmers: some came from a science background or a health background and learned to program, and for others it was flipped.
There are programs now, though, in bioinformatics, for example, that are training people to think specifically about how you train the best computer algorithms in a clinical setting. There are also machine learning and data science programs at colleges now that are training students to do this. 
Dr. Mike Patrick: And even as we think about clinical informatics and we'll talk a little bit more exactly what that means, but we even have a training program for that now at Nationwide Children's Hospital. It's a Clinical Informatics Fellowship that's relatively new. And I'll put a link to that in the show notes as well so folks can learn about that opportunity.
Tyler, what is the take-home, do you think, for patients and parents and families that they really need to keep in mind? Because we're just going to hear more and more and more about AI and machine learning in medicine, but what are sort of the key take-homes that families need to keep in the back of their mind as they hear about these new things that come on down the pike? 
Dr. Tyler Gorham: That is such a great question. And as a dad myself and an employee at Nationwide Children's, I know we're always encouraged to have a questioning attitude. And I think that's really important for parents.
So, if we're talking about machine learning in medicine, maybe put it through the same lens as when you're considering a drug and having that conversation with your pediatrician, right? The drug may be unfamiliar to you, and you're likely not sure how it was developed. But you have some trust with your physician that this is the right drug for this disease course, perhaps.
And so, I want to encourage you to ask questions of your care team and not just take it at face value, but to rely on that same trust that you've built with your pediatrician.
But that's also not to say that it needs to be scary. So, at Nationwide Children's, all of the models that we've implemented are interpretable, which is what we were talking about earlier.
So, if a model triggers or says this patient is at risk of X or Y outcome, we're going to be able to share why the model came to that conclusion. And so it doesn't need to be this scary black box or this thing that's going to cause some undue harm without physician input, right?
Everything that we do is really a clinical decision support tool, where we're assisting our caregivers and providers, not replacing them. So at least within the confines of Nationwide Children's Hospital, I think we're doing this very responsibly and putting a lot of thought into how and when we use these algorithms.
Dr. Mike Patrick: And Laura, then, as we think about medical providers, physicians, nurses, what do we need to keep in mind regarding the use of AI and machine learning because it's really going to be more and more a part of our daily experience as we engage families and practice medicine. But what do we need to keep in mind? What are the take-home points for us? 
Dr. Laura Rust: I think as providers, the takeaways are similar to those for patients and families: it is a tool and just one piece of the puzzle to optimize patient care. And so I think one of the most important things that we can offer as developers is transparency, both in how the model was developed and how it performs, for clinicians as well as patients and families.
And so, not only should the clinicians know why it applies to that particular patient they're seeing, but kind of using that as part of the discussion and shared decision making with patients and families about why they're deciding to act a certain way based off that information. 
So, it shouldn't be scary. It can seem a bit out there, I think, in that parents may not always understand what information is going into it. And when you don't know what's going into a model, it's hard to understand or trust it. So I think having that transparency is critical. 
I think historically when caring for patients, we looked, and still do, at articles and journal studies to ask, how can I make the best decision for my patient? And I think AI and machine learning give us the ability to do that in another capacity and more quickly.
Dr. Mike Patrick: So, you don't think, at least for now, that AI and machine learning are going to replace medical providers. Would you agree with that, Laura? 
Dr. Laura Rust: Yeah, I certainly hope not, I'd say. I think the amount of information that we're exposed to and expected to synthesize is becoming more and more complex, as we talked about earlier. But our computing power as humans, the ability to have that emotional connection with patients and families and pick up on those nonverbal cues when we're examining them, I think will never be replaced by a machine. 
So, the computer can help us analyze the thousands of different data elements, but I think it's still going to take that human factor to apply that to patient care. So, I think we have good job security, at least for now. 
Dr. Mike Patrick: Yeah, at least in our lifetimes. Now, we've used this term, clinical informatics, several times throughout the course of this podcast. What exactly is clinical informatics? 
Dr. Laura Rust: So, I'll share a little tidbit. There's a medical textbook written in 1820 called "Diseases of the Chest" that questioned whether the stethoscope would ever become mainstream. And I think technology within healthcare has had a similar journey. 
And so, in 2011, clinical informatics actually became recognized as a board-certified medical subspecialty. So very similar to cardiology or endocrinology. We have a two-year fellowship program that physicians can pursue after completing their primary residency.
And in its simplest form, I kind of describe clinical informatics as embracing the concept that a clinician working with the assistance of information technology is better off than working alone. 
I think in today's world, that concept is getting easier and easier to grasp. In fact, I feel like we would be kind of crippled without technology. When we misplace our phones, there's kind of this little panic, similar to when we see that check engine light: "Oh God, what am I going to do? How will I survive?" Even though that's how people did it for hundreds, even thousands, of years beforehand. 
But I think the field is special, and one of the things that I like about it most is that you get to do a little bit of everything, much like my other clinical specialty of emergency medicine. Within clinical informatics, I can make more localized impacts, for instance on improving provider documentation in the emergency department, versus a hospital-wide improvement with our deterioration risk prediction model. So it gives the opportunity to improve patient care at the bedside or even at the city, state, and national level. 
One of the other work efforts we've done is in reducing the amount of unnecessary opioids prescribed to children. Informatics is one part of the bigger umbrella of opioid stewardship, but certainly a part that's helped make an impact. 
And so that's one of the things I love most about the field of clinical informatics, is how broad of an impact you can make. But then I can also make an improvement for the colleague sitting next to me when I'm working on a shift. So, it is very meaningful work.
I think there is a misconception that what we do is not patient care, but in reality, that's our primary motivation and the reason that we all chose medicine to begin with. So the computer is our stethoscope, if you will, and allows us to really have that direct impact on patient outcomes. 
Dr. Mike Patrick: Yeah, I love that analogy. And I will say Nationwide Children's Hospital really has been at the forefront, because as I think back to 2011, I remember even then Jeff Hoffman was coming on board and Kathy Nuss was getting involved. So even a decade ago, as clinical informatics was taking off across the country, it really was embedded here at Nationwide Children's Hospital pretty strongly. 
Dr. Laura Rust: We were lucky to be early adopters of the technology; back in 2006 we went live with our current electronic health record vendor. And the growth that we've seen, just on our campus alone, but also in the focus on informatics since I joined the team as a resident, has been really awesome to watch. 
Dr. Mike Patrick: And I'll put a link to the Clinical Informatics Program at Nationwide Children's Hospital, in addition to the fellowship, and actually a link to the work that you all do, on the website, so folks can find that along with those other links and resources that we've been talking about. You can find all of those in the show notes for this episode, 541, over at 
So, once again, Dr. Laura Rust, with Emergency Medicine and Clinical Informatics at Nationwide Children's Hospital and Dr. Tyler Gorham, data scientist at Nationwide Children's. Thank you both so much for stopping by today. 
Dr. Laura Rust: Thank you for having us.
Dr. Tyler Gorham: Yeah, thank you so much.
Dr. Mike Patrick: We are back with just enough time to say thanks once again to all of you for taking time out of your day and making PediaCast a part of it. Really do appreciate that. 
Also, thanks again to our guests this week, Dr. Laura Rust and Dr. Tyler Gorham, both with Nationwide Children's Hospital.
Don't forget, you can find us wherever podcasts are found, we're in the Apple and Google podcast apps, iHeartRadio, Spotify, SoundCloud, Amazon Music and most other podcast apps for iOS and Android. 
Our landing site is You'll find our entire archive of past programs there, along with show notes for each of the episodes, our Terms of Use agreement, and that handy contact page, if you would like to suggest a future topic for the program.
Reviews are helpful wherever you get your podcasts. We always appreciate when you share your thoughts about the show.
And we love connecting with you on social media. You'll find us on Facebook, Twitter, LinkedIn and Instagram. Simply search for PediaCast.
And there's this new thing out there you may have heard about called Threads. And so, we're going to be looking into that and most likely we'll hop on board because I have a feeling that it's going to take off, but we'll see. So, for now, we are on Facebook, Twitter, LinkedIn and Instagram, more to come. 
Also, I want to remind you about PediaCast CME, that is our sibling podcast. It's similar to this program. We do turn the science up a couple notches and offer free continuing medical education credit for those who listen. And we have joint accreditation, so it's not only for physicians. We also offer CEs for nurse practitioners, physician assistants, nurses, pharmacists, psychologists, social workers, and dentists. Of course, you want to be sure the content of the episode matches your scope of practice. 
Shows and details are available at the landing site for that program, You can also listen wherever podcasts are found, simply search for PediaCastCME. 
Thanks again for stopping by. And until next time, this is Dr. Mike saying stay safe, stay healthy, and stay involved with your kids. So long, everybody. 
Announcer 1: This program is a production of Nationwide Children's. Thanks for listening. We'll see you next time on PediaCast.

