From 3M Health Information Systems
AI Talk: MRI, selfies and untaken test results
This week we focus on algorithms: two applications that are well suited to automation and one that is not.
Algorithm to speed up MRI scans
FastMRI is a joint research initiative between the Facebook AI Research (FAIR) team and NYU Langone Health. The goal? Learn how to reduce the time it takes to do MRI scans by a factor of 10. Capturing an MRI is a tedious process in which a patient lies supine inside a chamber for up to an hour! That is a painful ordeal, particularly for those who tend to be claustrophobic. The project was set up as a research challenge: teams can download anonymized datasets and try their hand at speeding up the process.
So, how exactly can you speed up the process of capturing an MRI? The basic idea is to undersample the image and recreate the full image from the undersampled data. The latest news from this group is that they managed to speed up the process by a factor of four: instead of taking an hour, the MRI is sampled for 15 minutes. Now, how can you be sure this works? The undersampled data is used to predict what the image would look like if it were fully sampled, using a deep learning model trained on the released dataset. The predicted, or rather fully reconstructed, MRI image is presented to radiologists. If their conclusions from the reconstructed image are identical to those they reach from the original fully sampled image, then the reconstruction is good. That is what the team has announced. Of course, going from research results to actual deployment will take significant time (it needs to clear the FDA hurdle).
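To make the undersampling idea concrete, here is a toy sketch in Python. MRI scanners acquire data in k-space (the Fourier domain of the image); keeping only a quarter of the k-space rows corresponds roughly to a 4x faster scan. The real fastMRI work fills in the missing data with a trained deep learning model; in this illustration a naive zero-filled inverse FFT stands in for that learned reconstruction, and the "image" is random data, not an actual MRI slice.

```python
import numpy as np

# Toy illustration of 4x k-space undersampling, the core idea behind
# speeding up MRI capture. The random array below stands in for a
# fully sampled image slice.
rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))

# Scanners acquire data in k-space (the 2D Fourier domain).
kspace = np.fft.fft2(image)

# Keep every 4th row of k-space -> roughly 1/4 the acquisition time.
mask = np.zeros(64, dtype=bool)
mask[::4] = True
undersampled = np.where(mask[:, None], kspace, 0)

# Naive reconstruction: inverse FFT of the zero-filled k-space.
# In fastMRI, a trained deep learning model replaces this step and
# predicts the missing rows instead of leaving them at zero.
recon = np.fft.ifft2(undersampled).real

sampled_fraction = mask.mean()  # fraction of k-space actually acquired
```

The interesting question, as the post notes, is not numerical error but whether radiologists reach the same conclusions from the reconstructed image as from the fully sampled one.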
Selfies to the rescue
I came across new research results from the Chinese National Center for Cardiovascular Disease. Their researchers developed an algorithm to detect coronary artery disease (CAD) from camera shots of a person's face. Well, four shots to be exact: one from the front, two from the sides and one from the top. The study involved roughly 6,000 patients from eight hospitals over a two-year period. In addition to the facial images, the data collected included various imaging studies and other information needed to assess the progression of cardiac disease in this cohort. The deep learning algorithm performed better than existing models used to predict CAD. How is the algorithm able to do this? It turns out various facial markers have been associated with CAD, like thinning hair, wrinkles, earlobe creases and deposits around the eyelids. These subtle changes are hard for the human eye to distinguish, but with enough training data, an algorithm can pick them up! This type of system makes for an inexpensive way to screen a population for potential CAD, enabling early intervention.
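Conceptually, the model is combining several weak facial signals into one risk score. The actual study trained a deep network directly on the raw photos, but a minimal sketch of the idea, with entirely made-up marker scores and weights, might look like this logistic combination:

```python
import math

# Hypothetical sketch only: the real system is a deep network trained
# on raw facial photos. Here, per-marker scores in [0, 1] and the
# weights are invented to illustrate how weak signals (thinning hair,
# earlobe crease, eyelid deposits, wrinkles) could combine into one
# risk score.

def cad_risk(markers, weights, bias=-2.0):
    """Logistic combination of per-marker scores into a risk in (0, 1)."""
    z = bias + sum(weights[name] * score for name, score in markers.items())
    return 1.0 / (1.0 + math.exp(-z))

weights = {"thinning_hair": 1.2, "earlobe_crease": 1.5,
           "eyelid_deposits": 1.8, "wrinkles": 0.7}

low = cad_risk({"thinning_hair": 0.1, "earlobe_crease": 0.0,
                "eyelid_deposits": 0.0, "wrinkles": 0.2}, weights)
high = cad_risk({"thinning_hair": 0.9, "earlobe_crease": 1.0,
                 "eyelid_deposits": 0.8, "wrinkles": 0.7}, weights)
```

The advantage of the deep learning approach is that it learns these features and their weighting directly from the images, including signals too subtle for a hand-built checklist like this one.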
Algorithm to predict untaken test results
I saw an article in MIT Technology Review last week about how the UK cancelled student exams due to COVID-19 and used an algorithm to predict student scores instead. Really? In my opinion, this makes no sense. I decided to dig deeper to see what went on here.
It turns out the UK decided to cancel the spring exams taken by high school seniors and charged the Office of Qualifications and Examinations Regulation (Ofqual) with coming up with an alternative to actual exams. Ofqual came up with a statistical model to predict student scores based on teachers' predictions, each student's past scores, the school's overall performance and a variety of other factors. To be sure, they developed a fairly extensive model. The model was validated with data from 2019 and, according to their report, "51 of the 55 A levels tested had accurate predictions for more than 90% of students within plus or minus one grade." This implies roughly 10 percent of students' grades were wrong, and that is if you ignore the "plus or minus one grade" caveat.
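One widely reported ingredient of the model was moderation against history: teachers ranked their students, and grades were allocated so the class distribution matched the school's historical results. A minimal sketch of that allocation step, with invented student names and grade shares, shows how it works and why a strong student at a historically weak school can be pulled down:

```python
# Hedged sketch of the moderation step attributed to Ofqual's model:
# grades are allocated down the teacher's rank order so the class
# matches the school's historical grade distribution. All data below
# are invented for illustration.

def allocate_grades(ranked_students, historical_shares, grades):
    """Assign grades to students (best rank first) so that the class
    distribution matches historical proportions."""
    n = len(ranked_students)
    result = {}
    i = 0
    for grade, share in zip(grades, historical_shares):
        count = round(share * n)
        for student in ranked_students[i:i + count]:
            result[student] = grade
        i += count
    for student in ranked_students[i:]:  # rounding leftovers
        result[student] = grades[-1]
    return result

students = ["Asha", "Ben", "Carla", "Dev"]  # teacher's rank order
shares = [0.25, 0.50, 0.25]                 # school's past A/B/C split
grades = ["A", "B", "C"]
assigned = allocate_grades(students, shares, grades)
```

Under this scheme, no matter how good this year's cohort actually is, only 25 percent of it can receive an A, which is one way historical school performance can override an individual student's merit.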
Translated to the real world, however, that means the grades of 10 percent of 4.6 million students were negatively impacted. This is a colossal injustice to the students affected: they may not be able to pursue the career or the school of their choice because of these results. Their futures have been impacted, but that is not the only problem. About 40 percent of students' grades were downgraded from their teachers' predictions, and students from disadvantaged communities were disproportionately impacted. This is a story we have heard in other contexts: bias in the algorithm. There was a major uproar, and Ofqual reversed its decision on algorithm-based scores and reverted to simply using the teachers' predictions. One cannot help but wonder what possessed the regulators to even consider replacing a test with an algorithm. Perhaps this will make it into a case study roster of when not to develop an algorithm to predict something.
The story about the MRI was pointed out to me by my longtime friend and classmate, K. Chandrasekhar.
I am always looking for feedback, and if you would like me to cover a story, please let me know. "See something, say something!" Leave me a comment below or ask a question on my blogger profile page.
V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.