Artificial Intelligence in Precision Medicine: State of the Art at Stanford

Emily Le:
Hi everyone. I am Emily Le, conference content producer from Cambridge Health Tech Institute. I'm really pleased to have the opportunity to speak with Dr. Matthew Lungren, associate director at the Stanford Center for Artificial Intelligence in Medicine and Imaging. He's also an assistant professor of radiology at Stanford University. Matthew, thank you for joining us.

Matt Lungren:
Yeah, thanks for having me.

Emily Le:
So tell us about your current role at Stanford University and what a typical day at work for you is like.

Matt Lungren:
Oh, well, okay. So my role is interesting in the sense that I kind of wear two different hats. On the one hand, I'm a clinician: I take care of patients, I interpret imaging, I perform procedures as an interventional radiologist at Stanford, so I have a busy clinical practice. But with my other hat, as you mentioned in the introduction, I'm now the co-director of the imaging center here at Stanford, which is a medical school-wide center that hopes to bring in researchers and faculty from both the clinical enterprise at Stanford and the engineering and computer science part of campus to work on clinically important problems. And as part of the leadership of the center, my job is to find those opportunities, secure resources, and form partnerships.

Emily Le:
That's wonderful. And how long has that center been around?

Matt Lungren:
It's funny, it feels like it's been around forever, but actually it's only been around about a year and a half. I mean, we were literally doing work on this before that, but it wasn't really a formal thing. It was very much like, "Hey, I have a friend in orthopedic surgery who has an idea," or "An anesthesiologist wanted to talk to a computer scientist, do you know anybody?" And we started to get enough of a groundswell of interest in these new AI and machine learning technologies around different problems and different clinical disciplines. And so we felt like it was time to step up and organize around it.

Emily Le:
I see. That's really cool that you have a whole center that is very interdisciplinary between all of the fields to make this happen. So can you give us some examples of some of your current projects?

Matt Lungren:
Absolutely. I think that one of our strengths, as you just touched on, is the fact that it's very multidisciplinary, and so we really try to serve as a resource, a source of knowledge and education, but certainly also to team up multidisciplinary groups. That includes statisticians, business students, law students, and those in computer science as well. Some of the things that we're really excited about are a lot of our clinical applications. We've just wrapped up a multicenter prospective clinical trial, which we're the first to do formally, at least as an academic institution, where we're really exploring how these technologies interact with human experts on the ground. There's been a lot of discussion generally that these tools are possible to build, and what I'm referring to, of course, are algorithms or "AI" that can cognitively automate some of the things that we do in medicine.

But a lot of the things that we think they can do in the lab, we haven't really yet shown to a great degree how they perform in real-world situations. So one algorithm we're working with involves chest x-rays, where we attempt to triage patients with physicians in the loop before a radiologist has to interpret the images themselves. That kind of approach potentially allows patients to be treated faster and cuts down on some of the frustrations and bottlenecks in various hospital systems.

We have another algorithm that can help hospitalized patients who are dealing with end-of-life diagnoses by helping to predict mortality. That algorithm made the news not too long ago, but the goal is to actually implement it so that for patients who are likely very close to end of life, palliative care and end-of-life discussions can happen early on, because we find that patients generally dislike being treated in the hospital in the last days of life. And so the goal is to try to understand how they would like to plan out the remaining days that they may or may not have. With that kind of work, as we're seeing in practice, we try to find applications where the use case really provides benefit to our patients.

Emily Le:
I see. So are any of the algorithms that you're working with supporting point-of-care diagnostics?

Matt Lungren:
Yeah, we have algorithms that are looking at how we can detect arrhythmia. You may know about the Apple Heart Study, which is obviously a large one, obviously sponsored by Apple, but led by researchers affiliated with our center. Those are the large population-health-level assistive diagnostics, where we're looking at whether the epidemiology really does match up with some of the technical results that we're getting in the lab.

Again, as history has shown us, even before the boom of AI algorithms, machine learning certainly did have stumbling blocks: it worked on a nice clean dataset in a laboratory environment, but in a very heterogeneous or larger population, biases and other things tend to crop up and can really cause detrimental performance. And so again, one of our big goals as we design projects is to provide guidance along those lines, where we look at the end result before we start a project: understanding how the data will be used, how the information the model produces will be used, what potential biases and problems could occur, and whether it's possible that the model is relying on correlations and not causation when it makes a decision. Because obviously that can cause problems down the line.

So these are the things that we've learned a lot about, and thanks to a lot of the biomedical data scientists and epidemiologists here that we partner with, we really feel like we're on good footing to evaluate these in a safe but effective way and really show they have benefit.

Emily Le:
Yeah, definitely. And I think that in today's world we really need collaboration across all kinds of fields and all kinds of companies, especially companies like Apple working in AI. So can you tell us a little bit about how that collaboration happened?

Matt Lungren:
Yeah, so you know, with groups like that, there are certainly lots of different industry partners, but many of them, I think, are wisely choosing to partner with academic institutions or domain experts, or they're hiring them directly, something you may have seen. And I think that really does reinforce the validation of the multidisciplinary approach, because what we have seen, or at least encountered here occasionally, is that there may be a group in computer science or engineering who has acquired a medical dataset and is working on a problem where, had they had a physician or practitioner in the loop at the beginning, that person might have advised them against the particular task they were working on, a task that wouldn't be medically or clinically useful, for example.

Or on the other hand, and there's been literature on this topic, if you were to deploy a model and not really understand the potential harms, a clinician who is in practice day to day can give you that context and prevent some of those errors. I think the end result is, number one, safety, which is of course important. We like to talk about the move-fast-break-things culture in Silicon Valley, but we also have the medical first-do-no-harm culture. Those two things are at odds, and I think in a good way: the tension between very conservative and very aggressive. And I think that partnership is critical. So when we have partnerships with Apple and other large technology companies who do have scale, we would like to leverage that in an intelligent way that allows us to do hypothesis-driven research but also have good outcomes that are safe and effective.

Emily Le:
That's very nice. So I was browsing through your LinkedIn a little bit before the podcast, and I saw that you got a master's degree in public health at the University of North Carolina, then went to medical school at the University of Michigan, followed by completing your residency at Duke University. So I see a lot of medical experience here, but I don't see or know where you got the data science and machine learning and AI inspiration from. So how did you get to where you are now, and can you tell us about your background and inspiration for your current work?

Matt Lungren:
Absolutely. It's definitely been an interesting course, I guess, based on the resume. My real interest came, I think, from looking at population-level problems. Early on, I was interested in trying to understand the relationships of practitioners with imaging equipment and ownership, something called self-referral. Throughout my training, and even in my public health training, it was important to take local hypotheses, maybe even studies that had been done in one hospital, and not take that as the truth, but try to expand it to a population level and think about why something may work at one institution and not another. And in order to do that, you need a lot of data. And anytime you start to work with large datasets, it becomes clear fast that if you don't have the skills to work with either Python or R, or a competent data science team and a statistician to partner with, you really won't get very far.

And so fairly early on in my training, those became skill sets that I worked very hard at getting competent with. And through that, coming to Stanford and trying to put together that same kind of dataset and work on large data, I had the opportunity to interact with Andrew Ng and his lab. That conversation sparked the very first project, which was to say, well, let's look at something like pneumonia on chest x-rays. They were telling me all about their vision models, which weren't working with structured data like I had, but were actually looking at images, like I do in my day job. And that was intriguing. So that's kind of how it started, and since then it's been a really fun exploration.

Emily Le:
Very, very nice journey. So as a radiologist working on AI and machine learning, can you share with us what aspect of the radiologist's work can be replaced by AI and what aspect can never be replaced?

Matt Lungren:
Yeah, again, for me this is obviously a topic that comes up a ton. I think it's a great question. For a lot of folks, there is a desire to really understand this augment-versus-replace narrative; it seems to be a debate, to some extent, of who believes what. I think for us, because we're in academia, we try to focus on what the interesting scientific question is, and whether it's right or wrong, we want to show whether it's right or wrong. And so we really try to focus on areas in medicine where we think we'll have the largest impact. And in radiology, three quarters of the world doesn't have access to a radiologist's expertise, sometimes not even an imaging exam.

And we feel like the three to four billion people across the world who live in countries like Liberia, which has two radiologists for a population of almost 10 million, that kind of situation gives us an opportunity to not focus as much on the replace-versus-augment narrative and really try to say, "How can we develop a tool that will help the clinicians serving those populations interpret imaging in a way that makes them more effective?" To give you an example, we partnered with a hospital in South Africa that runs a large clinic for HIV-positive patients who are suspected of having lung disease. There are only a few radiologists available to them, and it really is a problem because the clinicians don't believe they have the expertise to make proper diagnoses and help their patients. And so we worked with them to develop a model that was able to determine the presence or absence of active tuberculosis more accurately than their own radiologists.

And we didn't just add that in as a replacement. What we did was actually give it to the pulmonologists in an experiment to say, "Does this make you better than you are without the model? And can the combination of a pulmonologist, who does not have formal radiology training, with the model make you as good as or better than the radiologist?" Because if it does, then it relieves some of the pressure and burden on a system where that lack of expertise is still a big bottleneck. And what we found was that indeed it does, in this particular use case.

And so we think that pushing on that narrative and that experimental thought is what gets us out of bed and really informs what we work on, at least in some of the leading work in our lab, rather than focusing on the highest-yield automation or economic goal. But to your point, I think there are definitely areas. Humans are not awesome at things like measuring and comparing measurements; that's something a computer can certainly do better, and a human radiologist would welcome the opportunity to have that automated. Quantification of various types of imaging examinations certainly would be helpful too. And then, of course, the biggest use case for now is going to be the application of machine learning in creating the images: better and faster reconstruction, so you can get your MRI in minutes instead of an hour. All those kinds of applications, I think, will be the first on the actual scene when you go to visit your local radiologist, as opposed to some of the stuff that makes the headlines, which is that we can make diagnoses like radiologists in certain areas.

Emily Le:
Those are very exciting examples that you just mentioned. So there's a lot of AI technologies out there for the healthcare field. What are some of the challenges and constraints of implementing AI in healthcare versus other industry, and where are we at right now in healthcare?

Matt Lungren:
Yeah, this is obviously known as the last-mile problem, which has been articulated in a lot of different industries. We're certainly not the only ones to struggle with this, but we do have some additional complications that make it hard to take a tool or model that was developed at Stanford, or maybe at Google or wherever else, and apply it to a large population that's so heterogeneous. And I'm not just talking about heterogeneity in terms of diseases; I'm also talking about the different healthcare systems that have different electronic medical records, different imaging device manufacturers, different protocols. Each of those differences is a stumbling block for a one-size-fits-all solution. And so some of the challenges that I see, and probably some of the frustrations, come from the promise of the technology not meeting the reality of what we're seeing on the ground in practice; that gap is going to take a while to fill.

Now, those of us involved in the research can certainly see that there is a path forward, but it's not a quick path. And certainly there will be some who have the upper hand in terms of leveraging these tools and learning more about them, and some who will potentially be left behind. That, I think, is an important thing to discover and think about. Because what we already have in this country, and certainly in a global sense, is a disparity; we have a tiered healthcare system in a lot of ways. And what we don't want to see is only those fortunate enough to be at certain institutions or in certain groups having this leg up, this advantage, in healthcare. Because if we're implementing these tools and technologies considering only the groups that are able to do that, we miss, number one, the opportunity to include the populations that could potentially benefit from these technologies; but number two, we could also worsen the disparities that we're already seeing.

And so I think we're very cognizant of that. We spend a lot of time thinking, "Well, if we can make the common diagnoses here at Stanford, can we also make the common diagnoses in rural West Virginia or in Southern India?" Or wherever those other populations that may potentially use these algorithms reside. And that's critical: both a scientific question that we still haven't quite solved, though we're working on it, and a technical and infrastructure question. So the challenges, I think, are predominantly around, number one, infrastructure and the heterogeneity in the landscape. But the second is keeping in mind that there are populations that could potentially be left behind, and we really need to keep that in mind as we build these tools and deploy these technologies.

Emily Le:
Okay. So a lot of people are referring to AI as a black box because they do not feel comfortable with it yet. So if there is an algorithm that works very well for your purpose, but you don't know or understand it completely, would you still use it?

Matt Lungren:
Absolutely. You're right, you're hitting on a very interesting conversation that comes up a lot. I've heard folks say that you don't ever really know how a person makes a decision, so people are black boxes too. I've heard that before. I've also heard that in medicine there are many drugs that we suspect we know how they work, but we don't actually know, and yet they work; that argument has been made before as well. I think that if you're taking the black box argument at face value, saying you have absolutely no way to figure out whether this model is working on correlation or causation, and it could potentially cause harm, then of course we have to spend more time understanding where it's pulling its weight from and what in the data it's using to make a decision. That's true. I mean, no question.

However, if you have sound methodologic design, and you have done your due diligence in terms of testing and evaluating, and you're still not able to completely articulate the intricacies of the decisions, but it's effective and you're comfortable with how it's effective, I have absolutely no problem using that in a trial or prospective fashion. And I think that is probably the point of view of a lot of people doing this research.

Emily Le:
I see. So a lot of people are talking about precision medicine now and I wanted to get your opinion on how AI fits into the precision medicine landscape.

Matt Lungren:
Yeah, we talk a lot about precision health here, and that's kind of been a big distinction at Stanford between precision medicine and precision health. It sounds very much like a catchphrase, but it's quite a different philosophy. A lot of the research that's done here is looking at keeping people out of the hospital. In fact, one of our leaders in radiology, Sam Gambhir, quite articulately tells us that we should be celebrating when we close hospitals, not when we open them, and that we should be working towards those goals. I think that's a really elegant way of describing what precision health means.

And so, turning back to something like the Apple study, which is very much an early-stage example: if you can get to the place where wearables and peripherals are able to monitor, detect, and potentially help patients stay healthy, stay out of the hospital, catch things very early, and monitor disease, that seems to be an institutional priority, but certainly also a noble priority, to try to find ways to apply these technologies in that sense.

But certainly for medication discovery, for new oncologic targets, whether via genomics or other cell biology work, and in drug discovery, all of these areas are where precision health and AI have huge potential. And I think that not only academic institutions but certainly large pharmaceutical companies have big advantages in potentially achieving those gains with technology.

Emily Le:
Yeah. So what do you want to see more in the field of precision medicine?

Matt Lungren:
I mean, I personally would love to see opportunities for us to conduct more global-scale experiments, whether with data collection and deployment or with epidemiologic, population-health-level projects. As a field, just because of logistics and convenience, we've spent a lot of time working with the populations that we immediately have access to. But the transformation globally, with the adoption of things like cell phones and smartphones, gives us access to patients directly all over the world. And with the spread of knowledge and the understanding of the opportunity that some of these technologies might bring, I think we have a chance, despite a lot of the potential downsides, to really leverage large populations and understand a little bit more about health and disease generally, which will inform a lot of how we move as a field overall. And that's across the board in all different areas of disease.

Emily Le:
So I do have a bonus question for you, Matt. We've been talking a lot about interdisciplinary study, especially in AI and medicine. What major would you advise your children or your family members to get into, a data science driven major or go to medical school like you, or both?

Matt Lungren:
Well, I firmly believe medical school is still a calling. There is a lot of personal sacrifice that goes into pursuing medicine, so it has to be something that you're driven to do, for one. But I think that anyone, everybody actually, no matter what you're doing, whether you're an English major, an engineering major, or into art history, it makes a lot of sense to have a basic competency in AI and the fundamentals of the technologies. Not necessarily because you're going to pursue a research career or develop algorithms, but because, whether you want it to or not, it actually runs and conducts a lot of your life. That's whether you're interacting with your banking, your commerce, your email, your applications, for physicians, either to institutions or for jobs, if you become incarcerated, if you have brushes with the legal system. AI is everywhere, and it's in places where it would help to understand the fundamentals of where things go well and where they don't.

I think that some of the facial recognition research that has been done on flaws and faults would be fairly straightforward to understand if there were a general competency across all industries about how these tools and technologies do and do not work. And so I encourage everyone to look at the many online courses that are free. I really, really like AI for Everyone, which is a free course by Andrew Ng: no math, no coding, just a simple introduction to the concepts. I think everybody should take those kinds of things. It should be required, at least, for everyone to understand, because that's increasingly how our lives are being conducted, whether we notice it or not.

Emily Le:
That's wonderful. Matthew, thank you so much for your time and insights today.

Matt Lungren:
Yeah, thanks so much for having me. It's been a great conversation.

Emily Le:
That was Dr. Matthew Lungren, associate director at the Stanford Center for Artificial Intelligence in Medicine and Imaging, and he's also an assistant professor of radiology at Stanford University. I'm Emily Le. Thank you for listening.


Premier Sponsors

NeoGenomics

Seegene