From 3M Health Information Systems
AI Talk: Bias, reimbursing for AI
Bias in AI
From time to time I have explored the topic of bias in AI. I came across another article on this front recently which has a nice summary of the challenges with AI in health care applications.
One of the recurring issues with AI algorithms is that they are treated as a “black box”: you feed in, say, radiology images, and out comes a diagnosis with no explanation. The article discusses a case in which a Google deep learning algorithm proved exceptionally good at diagnosing a set of chest X-rays. When researchers examined in detail how the algorithm arrived at its conclusions, they found that every one of the images had pen marks: radiologists had annotated the images with notes about what was wrong with the patient! The algorithm had simply learned to interpret the pen marks as a sign of an abnormal chest X-ray. No pen marks, no abnormality. Clearly, this is not going to work in practice, and it is just one reason researchers have been pushing for explainable AI solutions.
Another case discussed in the article, which we have explored before, involves care coordination. A widely used hospital algorithm determined which patients should receive more attention from care coordinators, who reach out periodically to make sure patients are doing well. The algorithm used resource utilization (who had been hospitalized more often, who had used emergency services, and so on) as a proxy for who was sicker, drawing on claims data because it is easy to access and process.
When this practice was analyzed in detail, it revealed a systemic bias. Minority communities, even when sick, tended to use the health care system less than other communities. Because the algorithm was cued to resource utilization, it essentially ignored these patients, and they did not receive the care coordination services that could have helped them. Clearly not the intended outcome.
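The mechanism behind this bias is easy to demonstrate. Here is a minimal toy sketch (not the actual hospital algorithm, and with entirely made-up numbers): two groups have identical underlying need, but one group uses the health care system half as much, so ranking patients by a utilization proxy systematically under-selects that group for care coordination.

```python
# Toy illustration of proxy bias: ranking by prior utilization (claims cost)
# instead of actual need. All names and numbers here are hypothetical.

def select_for_care_coordination(patients, top_n):
    """Pick the top_n patients with the highest prior utilization."""
    ranked = sorted(patients, key=lambda p: p["utilization"], reverse=True)
    return ranked[:top_n]

# Both groups have the same spread of underlying need (1..10), but group B
# uses the health care system at roughly half the rate for the same need.
patients = (
    [{"group": "A", "need": n, "utilization": n * 1.0} for n in range(1, 11)]
    + [{"group": "B", "need": n, "utilization": n * 0.5} for n in range(1, 11)]
)

selected = select_for_care_coordination(patients, top_n=10)
share_b = sum(1 for p in selected if p["group"] == "B") / len(selected)
print(f"Group B share of care coordination slots: {share_b:.0%}")
```

Even though group B accounts for half of the genuine need, it receives far fewer than half of the slots; the proxy, not the need, drives the selection.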
The silver lining in all of these case studies is the observation by Sendhil Mullainathan, a researcher at the University of Chicago’s Booth School of Business: “…so actually, algorithms are a remarkable remedy for ourselves… When people talk about algorithm bias, they’re looking at the algorithm and the creators of the algorithm, but they’re forgetting that in many cases that algorithm is a substitute for, or an aid to, a human. A much older literature is on human bias, and human bias is much bigger, much more intractable and is very hard… The nice thing about algorithms is that they sit in a box and we can look at their behavior, we can tweak them… They actually offer this amazing opportunity for us that, if we’re careful, we actually can do a lot more good things with them.”
$1,000 CT scan read by AI
I recently came across a surprising news item: CMS approved paying hospitals $1,000 for using augmented AI to read CT scans. What was the use case? It turns out Viz.ai received FDA approval for software that detects stroke in CT scans. The path to reimbursement didn’t rely solely on the detection technology, though. It relied on a complete reworking of the workflow for treating stroke patients.
Using the stroke detection technology (built with deep learning techniques), the software directly contacts specialists who can address the problem, bypassing the usual radiologist review. With stroke victims, it is a well-known mantra that time is of the essence: the sooner the intervention happens, the better the outcomes. Viz.ai conducted a study showing the improved outcomes that come from the early intervention its solution enables.
Improved outcomes also imply an overall reduction in the cost of care. Those results convinced CMS to reimburse for the technology: $1,000 every time it is used. Hospitals lease the software from Viz.ai for $25,000 annually, so they break even if they use it 25 times a year and come out ahead with more frequent use. But, as the article points out, reimbursement and alignment of incentives are the only way to get better outcomes. This is a win-win-win for the technology startup, the hospital and, of course, the patient!
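The break-even arithmetic from the figures in the article is straightforward; here is a quick back-of-the-envelope sketch (the `annual_margin` helper is just for illustration and ignores any other costs):

```python
# Figures from the article: $25,000 annual lease, $1,000 CMS reimbursement per use.
annual_lease = 25_000
reimbursement_per_use = 1_000

# Number of uses per year needed just to cover the lease.
break_even_uses = annual_lease // reimbursement_per_use
print(break_even_uses)  # 25

# Hypothetical helper: gross margin at a given usage level, lease cost only.
def annual_margin(uses_per_year):
    return uses_per_year * reimbursement_per_use - annual_lease

print(annual_margin(100))  # at 100 uses a year, $75,000 over the lease cost
```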
I am, once again, indebted to my classmate and friend, Chandy, for both of the stories above.
I am always looking for feedback, and if you would like me to cover a story, please let me know. “See something, say something!” Leave me a comment below or ask a question on my blogger profile page.
V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.