Podcast Episode Transcript: Sorting through a sea of data: AI in health care

With L. Gordon Moore, MD, V. “Juggy” Jagannathan, PhD

Gordon Moore: Welcome to 3M’s Inside Angle Podcast. This is your host, Gordon Moore. Today, I am speaking with Juggy Jagannathan, and he is an inventor. He is a researcher. He is a professor of artificial intelligence who works on understanding language and how machines can use language to do things for us that are beyond human capacity. Welcome, Juggy.

Juggy Jagannathan: Thank you.

Gordon: Thank you for agreeing to be on. The reason I thought to reach out and speak with you is that I’m fascinated by the work that you’re doing in the intersection of artificial intelligence and health care, and how, in a blog I read of yours, you’re talking about advances in technology and understanding and the capacity. And that makes me think about the evolution of health care and how we’re constantly improving. So I’d like to hear you talk some about that. What do you think?

Juggy: Well, it’s been a really crazy time with respect to the evolution of the technology and AI in particular. And I have been in the field of AI for four decades now, and it’s gone up and down. In the early 80s, there was an AI hype and then there was an AI winter. And this past decade, there’s been significant progress in machine learning and deep learning, a specific type of technology which tries to mimic human brains. And that has led to unprecedented evolution of different solutions across the spectrum. And it’s really an exciting time. And it’s been fascinating to see what’s happening on this front.

Gordon: So tell me, I mean, you’ve been into it for four decades. I’m thinking that’s almost back to the dawn of the computing age; of course, I know that’s not true. But I think that there must have been a rather remarkable evolution. And you talked about things going through a winter. Just tell me briefly what you meant by that.

Juggy: The earlier systems used to be called expert systems. And the way they worked was they basically looked at how to embody the expertise of a human into algorithms, and that’s how they all evolved. And those systems turned out to be fairly brittle. They didn’t live up to the hype; people thought they would solve everything. But now, we have this deep learning wave and it basically relies on lots and lots of data. So, for example, if you want to recognize a cat, you show a million images of a cat and it’ll recognize a cat in every aspect you can think of. And so with the explosion of data that is available and with the explosion of capacity in the technology, the solutions that you see are just progressing at a rapid clip.

I like to think of them as axes of evolution. On one hand, computing power is exploding rapidly. The capacity to do more and more in shorter and shorter time is expanding rapidly. The sensors are evolving rapidly, the hardware, augmented reality, and 3D printing. There are all kinds of evolution on the hardware axis. The algorithmic axis is what I like to think of as something which is progressing independently, which is also making lots of applications possible. You’re seeing applications in almost every aspect of health care. And later on, maybe I’ll go through the smorgasbord of applications that are out there. From an algorithm standpoint, I’d like to come back to it again to give you the nuances of how those things impact the solutions that we are now seeing with deep learning in health care.

The other aspect which is also evolving is the workflow aspect of it. The types of workflow that you’re going to see are going to be enabled by technology. You’re going to have coordination of care, you’re going to have telemedicine, you’re going to have care being provided at home. And these kinds of evolution on the technology side and the workflow side make the context of the solutions that are being provided more meaningful and more effective. And last but not least is the regulatory aspect that needs to evolve to support all of these technologies to actually benefit the patient at the center of our universe.

Gordon: So, tell me more about the last aspect.

Juggy: The regulatory aspect?

Gordon: Yeah. I’ll express the nature of my concern. I have this perception that technology and process improvement, quality improvement, are far ahead of the regulatory and payment policies that confine health care. So, for instance, you mentioned telemedicine. There I see a huge lag between what’s known to be good for patients and how we pay for telemedicine. And when I think about AI, I think maybe this is not a place where there’s that much regulation. Is that what you’re talking about?

Juggy: Well, there is some regulation. For instance, the FDA has approved a lot of the imaging modality solutions that are coming out there. I’ve seen a document saying the FDA is looking at how to accelerate new drug discovery that is coming through the AI pipelines. So there are those kinds of regulations, but then I do agree with you. I remember being involved in a telemedicine application in 1993, and even to this date, the payment systems haven’t caught up with it. But Medicare and Medicaid are actually getting to a point where they are supporting these kinds of regulatory reforms in the name of value-based care and capitated payments, or what have you.

But I do agree that people need to take a look at how to allow the AI solutions to flourish. But you have to keep in mind that almost 95 percent of the AI solutions out there have to be of the nature of augmenting a physician or augmenting a health care worker to do what they do, to help them and nudge them along in the direction of doing the right thing, at the right time, in the right place. So AI technology, at least in health care, should be largely viewed as assistive technology. It is not going to take over the role of a physician anytime soon. Even in areas like radiology, the physician’s role is to actually look at what the AI is suggesting and incorporate it into what they are doing.

And the reason why I’m saying that is, the solutions that AI currently has, and probably will have for some period of time, are point solutions. I mean, you develop a deep learning algorithm to very accurately diagnose, say, pneumonia, but the next disease, not so much. You might be able to selectively screen and get the right answer for a person with a particular disease, but that’s not a solution which will work in a real workflow. So a lot of the AI solutions tend to be very deep and very specific, so they need to be in the context of a workflow. And for the foreseeable future, they will basically be augmenting what the physician does.

Even people like radiologists don’t have to fear that their jobs are going away. The jobs will change, and what they do will change in both qualitative and quantitative ways. But from a regulatory perspective, going back to your question, regulators need to facilitate payment reform so that these technologies can be adopted fairly easily.

Gordon: Yeah. I want to now pivot into what you’re describing in terms of going very deep. It’s my impression, as I look into AI in health care at least, that the solutions are so focused and so discrete that it becomes challenging to think about bringing on a solution that does only one small thing well but leaves so much else on the table. So, if I’m a radiology department or a hospital and I bring on an AI solution that’s terrific at mammography, and mammography is the bulk of what I do, that’s terrific. But if it’s just one of many, many things, then I start to wonder if I now need another solution for looking at chest X-rays to diagnose pneumonia, or head scans to look for signs of stroke or hemorrhage.

And then I think the promise of AI seems to be so grand, maybe even grandiose, and the delivery is so narrow. And then I wonder if that has something to do with my experience years ago, looking at some of the voice recognition software that put out so many errors that I had to spend all this time editing and cleaning up. Are these issues linked?

Juggy: I guess they are. Your experience with speech recognition 10 years ago was that there were so many errors you were better off just typing it yourself. But look at speech recognition now: it is 99 percent correct for most people, right? And so the technologies have evolved. And the issue with radiology is fairly deep; it’s related to this notion that you need lots of data, right? If you want a machine learning, deep learning neural network to recognize a cat, you need to show it tens of thousands of cats. And if you look at it from a human standpoint, no human learns like that, right? You show a human a few cats and they immediately generalize. And this capacity to generalize is fairly human and fairly unique.

And this is the kind of technological innovation people are working on right now: how do you actually learn from less data than what we have? So from a training perspective, to dive deep into this whole notion: instead of showing 10,000 cats, can you show only a few hundred cats, and not only recognize the cat, but then transfer this idea of how you generalize to recognizing dogs? So there is this whole concept of transfer learning, which is now taking off on the deep learning, algorithmic side of evolution. And the idea there is to use unsupervised learning techniques.

So for instance, you have lots and lots of text floating around, right? You have lots of clinical documents, you have lots of Wikipedia articles, and you can actually train a neural network to consume these texts and basically say, “Look at the first two words and try to predict the next word. Read the first three words and predict the fourth word.” And you can actually develop a model, and this is called a language model. But more than a language model, what it is trying to learn is the representation underneath these words, these concepts. And once you have these, with less data, you can actually try to do more.
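The idea Juggy describes, a model that reads the words so far and predicts the next one, can be sketched in a toy, count-based form. This is only an illustration of the prediction objective; real language models use neural networks and learn far richer representations, and the corpus and function names here are invented for the example:

```python
from collections import Counter, defaultdict

def train_bigram_lm(text):
    """Toy 'language model': for each word, count which word follows it,
    so we can later predict the most likely next word."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model, word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# A tiny made-up "unlabeled" corpus standing in for clinical text.
corpus = (
    "the patient was admitted with pneumonia . "
    "the patient was discharged after treatment . "
    "the patient was admitted with chest pain ."
)
lm = train_bigram_lm(corpus)
print(predict_next(lm, "patient"))   # prints "was"
print(predict_next(lm, "admitted"))  # prints "with"
```

Note that no labels were required: the text itself supplies the training signal, which is the "very little supervision" point made below.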

The idea is: learn to predict different things with massively available data, with very little supervision, and then transfer the concepts you have learned to be able to deal with less data and predict more things. Coming back to the radiology side, what that means is: is there a way I can learn from lots of unlabeled data and then train on a smaller amount of labeled data to predict other things? So right now, the way radiology image recognition works may be like what speech recognition was 10 years ago, but the field is rapidly evolving.

And there are two different forces here. One, you have this notion of transfer learning, which will learn with less data. And then there is another, which is a field of experts, right? You can have an expert for X-ray images, an expert for mammograms, and so on. And just like a tumor board of different experts coalescing around how to treat a particular cancer patient, you could have a collection of experts, even though they may be narrow and deep, providing feedback to the physician: a field of experts converging on helping you diagnose different aspects of the disease.

Gordon: When you’re talking about the field of experts, you mean an artificial intelligence field?

Juggy: Yes, artificial intelligence experts. A whole collection of these deep networks working together and helping the physician. So each one of them may be good at doing just the mammogram or recognizing lesions, et cetera, but they can work collectively as the field evolves. In the area of dermatology, for example, somebody has created deep learning networks which will diagnose 2,000 different diseases. So even though the solutions being developed first tend to be deep, eventually they will get better and better at it.
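The "field of experts" idea, several narrow models each contributing findings that get merged into a single view for the physician, might be sketched like this. The expert functions, feature names, and confidence numbers are all invented for illustration; a real system would wrap trained networks, not hand-written rules:

```python
def combine_experts(image_features, experts):
    """Collect findings from a set of narrow 'expert' models and merge
    them into one report. Each expert only reports on conditions it
    was trained for; others contribute nothing."""
    findings = {}
    for expert in experts:
        for condition, confidence in expert(image_features).items():
            # If two experts overlap on a condition, keep the higher confidence.
            findings[condition] = max(confidence, findings.get(condition, 0.0))
    return findings

# Hypothetical narrow experts: each one is deep but knows a single task.
def pneumonia_expert(features):
    return {"pneumonia": 0.91} if features.get("opacity") else {}

def fracture_expert(features):
    return {"rib fracture": 0.12}

report = combine_experts({"opacity": True}, [pneumonia_expert, fracture_expert])
print(report)  # {'pneumonia': 0.91, 'rib fracture': 0.12}
```

The physician still interprets the merged report; the ensemble only assembles the narrow findings into one place, which is the assistive framing Juggy emphasizes.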

Gordon: And so it’s interesting as you talk about ways to overcome the size of the data necessary to train machines to do this, because I understand that that’s been one of the problems with some applications of AI in health care, where the cell sizes of the outcome become small enough that we can’t really trust the accuracy of the machine. And so it sounds like this knowledge-transference approach overcomes that. Is that theoretical or is that actually happening?

Juggy: No, it’s not theoretical. It’s an active area of research right now. Last year, in fact, there was a major breakthrough: Google released something called BERT, and over a few months it bested the state-of-the-art results on a whole variety of language comprehension tasks. Basically, they used unsupervised learning techniques to create a huge model, and applied transfer learning to a range of comprehension tasks. This time, I’m talking about advances in natural language understanding and comprehension and related tasks, which are at the center of a lot of different applications. So it’s not theoretical at all; it’s an ongoing, exciting development on the algorithmic side of the equation in AI right now.

Gordon: Are you aware of any forays into health care?

Juggy: I mean, everybody is into health care. Google is applying these things to health care, Microsoft is applying them to health care. Of course, we are in it as well. So health care is a huge focus for all of the language-understanding technologies we’re talking about. And all the big players are in it, as well as tons and tons of startups. I remember seeing a picture from CB Insights, and it said that in the last two years there have been a few hundred startups in health care, focused on almost every aspect of health care, from wearable technologies to data analytics and all kinds of things.

Gordon: It makes me wonder about the ability to tease apart hyperbole from reality when I think about the number of startups, which are really onto something versus they look shiny and wonderful but don’t really have much behind them.

Juggy: I remember this statistic very well: one in ten startups succeeds. So we have a few hundred of them; maybe a few of them will succeed. And which one is going to be the next Google or Twitter or Uber, who knows? But it is an exciting time in AI. One of the advantages of being in the field for four decades is that I remember the Internet bubble as well. In the early 2000s, God knows how many companies were out there, and they ended up in smoke. But this time, it feels different. It really feels different. At that time, we used to wonder how in the world any of these companies were going to survive and thrive. And sure enough, a lot of them failed.

I mean, clearly, if you have a few hundred companies, the majority of them are going to fail. But health care is a huge area. And all these people are going after niche products, right? So there was a conference on just wearables that happened a month ago. And with all these devices, with the Apple Watch and the monitoring of EKGs and monitoring of falls, there’s really a ton of applications just monitoring and getting a sense of how a person is in every aspect of life. But the important thing which has not been addressed in any significant fashion is: how do you coordinate all of these data points to actually effect a good outcome for the patient, or the person, at the center of this? So there are all these silos of information; we just have more and more of them right now.

Gordon: That, I’ve got to say, is very interesting to me. Because when I think about the devices that can stream real-time information around oxygen saturation, or blood glucose, or blood pressure, as a primary care physician, I think about that stream of data from one patient, and my hair stands on end. I can’t consume that. I can barely consume the number of e-mails I get just asking questions. And I think, “Well, how in the world am I going to deal with that as these devices come online and they start to push their way into my workflow?” I want to run and hide. What do we do about that?

Juggy: That is an excellent question. And the reason I pointed out this unaddressed, or under-addressed, area is that somebody needs to develop algorithms and solutions which summarize all of this information in a meaningful fashion, right? I mean, you’re not going to be able to look at all of these things. So all these data streams have to be condensed, summarized, and presented in a fashion which makes sense to whoever is looking at it. And there’s probably potential for a startup there.

Gordon: I hope so. I mean, when I think about the devices, it occurs to me that there needs to be some entity that consumes the raw streaming information and then creates a bunch of algorithms to say, “This is all within normal,” with an automated response system that deals with that in an escalation process that can eventually reach my eye as a clinician to say, “Hey, this person needs attention.” And I would think that seems so straightforward that it would already be part of the offering. Is that not the case as you look at the field?
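The escalation process Gordon describes, an automated filter that passes only out-of-range readings up to the clinician, could be sketched in a toy form like this. The metric names and normal ranges are purely illustrative, not clinical guidance, and a real system would handle trends, units, and patient-specific baselines:

```python
def triage(readings, normal_ranges):
    """Scan a stream of device readings and keep only the abnormal ones,
    so the clinician sees a short worklist instead of the raw stream."""
    alerts = []
    for reading in readings:
        low, high = normal_ranges[reading["metric"]]
        if not (low <= reading["value"] <= high):
            alerts.append(reading)
    return alerts

# Illustrative normal ranges and a small sample stream.
ranges = {"spo2": (92, 100), "glucose": (70, 140)}
stream = [
    {"metric": "spo2", "value": 97},      # within range, suppressed
    {"metric": "glucose", "value": 180},  # out of range, escalated
    {"metric": "spo2", "value": 88},      # out of range, escalated
]
print(triage(stream, ranges))  # the two out-of-range readings
```

As Juggy notes next, the hard part in practice is not this filtering step but getting the many device formats standardized and shared in the first place.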

Juggy: No. And actually, there are multiple problems there. One is the standardization of the information coming from all these multiple streams. If each sensor or wearable technology puts out its data in its own unique format, it’s not going to help. So there needs to be standardization around what is sent and how it is sent. And then there is this willingness to share, right? Of course, we have some regulation around information blocking. So we are talking about all these streams of data. There is just tons and tons of data, right? You have wearable data, you have data from all the different things happening around the patient in his home setting, you have data about his genetics, his immunology, his microbiome. And then you have the data from the EHR, his clinic visits.

So for all of this data, somebody needs to summarize it. Number one, it is not in the same format, so some standardization is needed. Number two, there is not always a willingness to share. You are aware of the fact that clinics and hospitals put all this information blocking in place and don’t share this information. And then there is this overarching privacy concern. And all of this makes this problem particularly hard to solve in the current climate.

Gordon: That is rather daunting as I think about it, and yet at the same time, there has to be some sort of solution because the promise is so great in terms of what can be done. When I think about the typical application of computer technology in the health care workflow, I think it’s a pretty commonly shared experience: I have dropdown fields I have to click, I have things I have to do as a clinician, most of which I think have little, if anything, to do with the care of the person in front of me, and it’s quite distracting, very frustrating. And so the promise of technology helping me out has been thwarted by its actual application in my work. And yet now and again, there’s some brilliance. Now and again, there are pieces that seem to be quite good.

I appreciate your comments about the voice recognition. It’s certainly my experience now today that I can speak to my phone and see a text appear that does a good job. And sometimes it even goes back, initially proposes a word, realizes in context that it’s inappropriate, and fixes it to what I had actually said. And it’s pretty impressive, and this is now done through my phone connecting to, I presume, servers somewhere else. So there’s a lot that can be done.

Juggy: I don’t want to sound pessimistic. The technology for how physicians document using speech and other solutions has advanced fairly significantly. So there are point solutions which have emerged, which help a lot, nudging a person to do the right thing, or providing clinical decision support and the like. In the areas which involve getting a holistic view of the patient and the synthesis of data streams across the board, again, there is progress. Recently, there has been a good bit of uptick in something called the HL7 FHIR standard. And, again, I will mention the regulatory aspect of it: the government and the ONC are pushing toward the adoption of those standards, which is a good thing.

So, the synthesis aspect and the summarization aspect have a ways to go. But the point solutions, giving clinicians the ability to document more effectively, and the various AI point solutions, what they ultimately do is augment the physician and give back some time. So the physician has the time to provide the necessary care. Like you were saying, being distracted by the EHR is one of the major causes of burnout right now. Those aspects can be mitigated as we move forward.

Gordon: Well, you’ve given me a ray of hope. That’s certainly what I think I would like my colleagues to experience in the field as they start to see that there’s something working to make their jobs a little bit easier, maybe filling a regulatory reporting requirement without requiring a clinician to use pull-down menus and click boxes. I think there’s been too much of that and too little of technology that actually makes my work a little bit easier. So I hear a ray of hope. I hear point solutions. I’m hoping that there are more of them. I hope somebody listening to the podcast says, “I can take on this aggregation of information and algorithmically serve up information that streams all this together into a discrete data element that makes my job easier and helps me deliver great care for the person in front of me at the point of care.” That would be terrific.

Before we end, do you have any last words or recommendations on where we ought to go next, where we need some breakthroughs?

Juggy: I remember Kurzweil wrote a book on the singularity a while ago, and he predicted that computers will become super intelligent; it’s called artificial general intelligence. At that point, computers will basically have figured out how to learn by themselves and will rapidly become superhuman. We are very far away from it. I don’t know if it is 50 years or 30 years or 100 years, but for the foreseeable future, we are going to have, at least in health care, AI systems helping physicians. The nature of the jobs which physicians and health care workers do is obviously going to evolve as the technology evolves and gives them more time to actually be empathetic and more focused on taking care of patients.

I am genuinely optimistic about the evolution of the technology. The algorithmic side of the equation is evolving rapidly. And it’s daunting to keep up with the technology advances as they are happening right now. I see papers posted almost every day, and it’s almost impossible to keep up, but at the same time, it foreshadows that you’re going to have real solutions in the health care area. Health care has traditionally been the slowest to adopt technology, with the finance industry being the very first. I hope that that part is changing, and that with the advent of all of these startups and technologies, you’re going to see more real use cases than just theoretical research papers.

Gordon: I look forward to that as well. Juggy, thank you so much for your time today.

Juggy: Thank you so much for having me.
