AI Talk: Future of health, voice navigation of apps and gene therapy

February 19th, 2021 / By V. “Juggy” Jagannathan, PhD

This week’s AI Talk…

Future of health

I came across a press release from the UCI Institute for Future Health that intrigued me. The institute's research mission is to reimagine health care. Their vision? To harness the power of technology to build a personal health navigator that perpetually guides and nudges everyone toward a healthy lifestyle. Essentially, this means taking research and seamlessly translating it into practice. The idea is to integrate all kinds of data from wearable sensors, genomics, proteomics, EHRs and other sources, using AI and knowledge-based systems to personalize care. It is a bold vision, and a much-needed push to promote personal responsibility and markedly reduce dependence on the current health care system.

Navigating apps with voice

I saw a post on the Google AI Blog that discusses navigating the apps on your phone with your voice. I use Siri for a limited range of tasks, like saying "play NPR" to start my NPR station. The work described in the Google blog, called IconNet, is much more comprehensive: it attempts to let you navigate every app on your phone by voice. This is a complex task, because the program must recognize the app being referred to and then navigate the various objects and elements within that application as well.

IconNet is a vision-based object detection model that runs on Android phones with a small footprint and a reasonable response time (usually sub-second). The model detects icons on app screens and can respond to commands like "tap camera" or "go home." When a command is ambiguous, for instance when you say "play video" and there are several videos that could be played, the app numbers them on the screen and you can say, "play number one."
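The match-then-disambiguate behavior described above can be sketched in a few lines. This is purely an illustrative model of the interaction flow, assuming a hypothetical `resolve_command` helper; none of these names come from IconNet itself, whose actual pipeline is a trained object detector.

```python
# Hypothetical sketch of voice-command ambiguity resolution, loosely
# modeled on the behavior described above (not Google's implementation).

def resolve_command(command, detected_icons):
    """Match a spoken command against icons detected on screen.

    detected_icons: list of (label, position) pairs produced by an
    on-screen icon detector.
    """
    matches = [icon for icon in detected_icons if icon[0] in command]
    if len(matches) == 1:
        return ("tap", matches[0])  # unambiguous: act on it directly
    if len(matches) > 1:
        # Ambiguous: number the candidates on screen and wait for a
        # follow-up like "play number one".
        return ("disambiguate", {i + 1: m for i, m in enumerate(matches)})
    return ("no_match", None)

icons = [("video", (10, 40)), ("video", (10, 120)), ("camera", (200, 40))]
action, payload = resolve_command("play video", icons)
# Two icons match "video", so the sketch asks the user to pick a number.
```

The real system, of course, does the hard part this sketch takes for granted: producing the `(label, position)` list from raw screen pixels.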

You can read about all the deep learning and related technology underlying this solution on the Google AI Blog, from neural architecture search to the use of the CenterNet architecture for mapping image pixels to key points in the image. It is an interesting solution, but it remains to be seen whether this becomes a dominant way of interacting with your phone.

Future of gene therapy

There is a quiet revolution brewing in the design of therapeutics, driven by advances in gene technology. The promise: potentially, any genetic disorder can be cured! The FDA has already approved three such therapies. The FDA website also provides a layman's description (a YouTube video) explaining the basics of the approach.

Essentially, the trick is to use a virus as a vehicle to deliver a good gene that replaces or corrects a defective one. Viruses can attach to cells and deliver their payloads, but first the virus has to be modified so it doesn't end up infecting the person and making them sick. One other complicating factor is the body's immune response: the moment it detects a new virus, it kicks in and attempts to destroy the intruder. This is normally a good thing. However, when the virus is actually carrying a fix for a broken gene, the immune response can undermine the therapy's effectiveness.

So, what's the solution? Engineer a virus that can still attach to cells in the human body but fools the immune response, literally flying below the radar to deliver the good gene in place of the bad one. This is exactly the problem researchers are focused on: how best to engineer viruses that behave this way.

The Wyss Institute at Harvard has teamed up with Google Research to bring AI and machine learning to bear on this problem. They have focused on adeno-associated viruses (AAV), which have been shown to be effective carriers (their protein shells are called capsids) for delivering gene therapy. Searching for AAV variants (also called serotypes) has been cast as a neural network search problem: to fool the immune response, a variant has to be significantly different from the wild type while still maintaining the characteristics of AAV. The research team used three different neural network architectures to generate more than 100,000 possible variants, of which they judged 57,000 to be diverse. I am not sure whether this means they have found 57,000 ways to fool our immune response, but presumably they are all candidates for delivering the correct gene payload! Whether this approach actually works and ends up starting a true gene-therapy revolution, only time will tell.
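To make the "significantly different yet still AAV-like" criterion concrete, here is a toy sketch that scores candidate capsid sequences by how far they diverge from a wild-type sequence. Everything here is a labeled assumption: the sequences are made-up placeholders (not real AAV sequences), the threshold is arbitrary, and the actual research uses trained neural network models rather than a simple edit-distance filter.

```python
# Illustrative sketch only: a crude distance-based proxy for the
# "different enough to evade immunity" criterion described above.
# WILD_TYPE is a placeholder string, NOT a real AAV capsid sequence.

WILD_TYPE = "NGRAVDQSPQS"

def hamming(a, b):
    """Number of positions at which two equal-length sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def diverse_candidates(variants, min_distance=3):
    """Keep variants that differ from the wild type in at least
    min_distance positions (a stand-in for the models' diversity test)."""
    return [v for v in variants if hamming(v, WILD_TYPE) >= min_distance]

variants = ["NGRAVDQSPQS", "NGRKVDQSPQA", "TGRKVAQSPQA"]
survivors = diverse_candidates(variants)
# Keeps only the third, most-divergent variant (4 substitutions).
```

The hard part the sketch ignores is the other half of the criterion: checking that a divergent variant still folds into a functional capsid, which is exactly what the neural network models are trained to predict.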

I am always looking for feedback and if you would like me to cover a story, please let me know. “See something, say something!” Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.


Listen to Juggy Jagannathan talk AI on the ACDIS podcast.