Putting the AI in Wait: Is It Ready to Solve All Our Pathology Problems?
I'm Rory McCann, with Cambridge Healthtech Institute. I'm joined today by Dr. Richard Levenson, who is Professor and Vice Chair for Strategic Technologies, in the Department of Pathology and Laboratory Medicine, at UC Davis Health.
We are thrilled to have him participating at the Molecular Medicine Tri-Conference, in San Francisco next month, chairing a session on slide-free imaging, and giving a presentation in the session on business-related aspects of digital pathology. Dr. Levenson, thank you for joining me today.
Now, artificial intelligence and machine learning are creating a buzz in the industry. Across pharma, healthcare, finance, business, and many other industry sectors, the focus recently has been on AI. For the first time, we are starting to realize how transformative the power of computing can be for medicine and pathology. Can you give us an example of where this has had an impact? Or of a successful implementation, or a use case of AI in pathology?
Well, in contrast to the situation in radiology, which has had a few early successful attempts to get through the FDA, there is not yet a single FDA-approved implementation of AI for pathology. So these are very early days, and I suspect that the task will be more daunting for pathology than for radiology, for a number of reasons, including the fact that radiology is tasked with coming up with impressions, whereas pathology is tasked with coming up with definitive diagnoses. So the bar is a little bit higher. I imagine that this framing may be disputed by my colleagues across the aisle, but it seems that pathology has a little higher bar to get over.
Now, you say we're in the early days. Recently, you've said that the initial glow may be fading, and that we are starting to realize how hard it can be. Can you elaborate on this, and provide some examples of where this has been the case?
Well, it's clear that some manifestations of artificial intelligence are extremely eager to please. They will do whatever you ask them to do, and they will try to give you the answer you want. But I've found, and other people have found too, that some AI tools are pathological liars, pun intended.
The tools are exquisitely sensitive to the training and the images that you feed them. So, preliminary experiments can be extremely exciting, with wonderful ROC areas under the curve, but once these tools are tested on more general examples — examples from different cancer types, or from different institutions, or imaged with different instruments — the fragility of the methods becomes more apparent.
And again, I'm no expert in this area, but this was found with some AI tools in, I think it was lung cancer detection, where the quality of the output was exquisitely determined by the content of the training set. And so if the training set had lots of cancer examples in it, then the tendency for the AI tool was to call many normal things cancer, just because that was the world it was brought up in. So we have to be cautious about not only the methodology we use, but also be very critical about the results.
AI market valuations are through the roof, and you've already mentioned a cautionary tale. What are some potential unintended consequences, or unsuccessful implementations of AI? Do you see any avenues forward that might avoid some of these pitfalls?
Well, it really depends on what AI is actually being used for. And I think there was an initial blush of enthusiasm in which AI was going to become the definitive pathological diagnostic tool, and that has major implications for how pathologists work, and for the economics. But I think the tone nowadays is much more modest in terms of goals, and it's about logistics, pre-processing, and relieving pathologists of time-demanding, boring, and difficult-to-perform quantitative tasks. And so, depending on where it rolls out, it'll have very different effects on the economics, as well as the sort of sociology of diagnostic medicine.
One major potential consequence down the road ... If people, especially trainees, perceive that AI is beginning to gain a foothold, in image recognition tasks, like radiology, like pathology, what does that mean for recruiting new trainees into such programs? And I think there is some indication that radiology is beginning to see a fall off, in terms of people planning on entering that discipline, although I've seen several different reads on that. But I think anyone who is trying to decide which way to go with their careers will certainly have to weigh the potential impact of AI tools on what they would do.
Another major concern, especially with these sky-high valuations, is that somebody has to make money at the end of the day. You can't just throw in an AI tool for free. Because then, where did the $20 million that the founders put into the company go — that is, how do they get their tenfold return on their investment? At the same time, if they do charge amounts per case that would justify the financial calculus, how would you demonstrate that's actually saving money for the healthcare system as a whole? Because if you have a pathologist fully engaged with an analysis, and you have an AI tool also fully engaged, presumably both have to bill for their time. And now, have you perhaps doubled the costs?
So, there has to be some very careful understanding of how the economics fall out, and what the implications would be, either for the business founders, or for the practitioners. And this is very much an open question.
Now, if you had a crystal ball in this area, what is your modest expectation for the next five years?
Well, Monday, Wednesday, Friday, I'm an optimist, and the rest of the time I'm a pessimist, and Sunday I rest. So I think I'm relatively pessimistic, until the world changes, and the world could change once we have an FDA-approved AI-based test. In which case, people will begin to see how this might roll out.
And I think the question is also very much, what are the roles of AI? If they are to do high-profile diagnostics, that's one thing. If it's simply to triage out a lot of normal cases so pathologists don't have to look at them, and thereby reduce, rather than increase, the workload on the pathologists, and allow them to work with more rewarding and challenging cases, I think that could be a win. But that's sort of a modest goal, because there the task is to find the normal, whereas a lot of people think that AI will actually be able to diagnose the tough cases.
And, as they say in the legal profession, "Hard cases make bad law." I think tough cases are going to be a real danger for AI, because there aren't a lot of them. Very often, ground truth is hard to find. Then validating what the AI says about tough cases is also hard, because the cases are indeed tough. So I think it's a very complicated battle on the ground that will have to be fought case by case, and example by example.
Well, thank you so much for being part of Molecular Medicine Tri-Conference, and its digital pathology track. We're really excited to have you take part in this conversation. We're looking forward to seeing you in San Francisco. What are you planning to accomplish by attending and presenting at the meeting?
I hope to accomplish finding good sushi.
Well you'll have to let us know if you find any. Thank you so much.
You're very welcome.
You can learn more from Dr. Levenson on Wednesday, March 13th at 2:30. You can check out www.triconference.com. That's T-R-I-C-O-N-F-E-R-E-N-C-E, dot com, for more information. We hope you'll join us in San Francisco.