AI Talk: Past decade, next decade

January 3rd, 2020 / By V. “Juggy” Jagannathan, PhD

This week’s AI Talk…

“It was the best of times, it was the worst of times” – this timeless quote from Charles Dickens’s A Tale of Two Cities is surprisingly apropos a century and a half later. We are at the dawn of a new decade, and it seems like a review of the past decade and a look ahead to the next is in order, particularly from the standpoint of tech and AI.

Arguably, the last decade has seen an explosion of new tech. There are plenty of articles reviewing the past (and the future), like this CNET article, one from ZDNet and others. To me, there has been a massive uptick in the role and use of AI technology this past decade, perhaps spurred by the evolution of hardware and infrastructure technologies like GPUs, 4G and cloud computing! An early inkling of the scope of change came in 2016, when Google’s translate engine transitioned from a statistical engine built with the help of scores of linguists to a model developed entirely using deep learning. The same deep learning techniques led to even better speech recognition systems, allowing Amazon Alexa to storm into our homes and pretty much making speech a standard form of input to most things. Google Home and Apple Siri are not far behind. We even have automation in cars that is steadily getting better. Speech and language understanding have given rise to a host of practical applications across all sectors and industries—including health care. Every device has become a source of data, and the availability of data at scale, facilitated by ever more powerful computers, has led to massive analytics-driven actionable intelligence. The number of AI companies across all industries has exploded, with many carving out different niches with big data-driven solutions.

If all of this happened in the past decade, what will the next decade look like? Undoubtedly, one can expect the pace of innovation to accelerate! But what about the impact of all this technology on us? Does it make our lives more interesting and better? One possible answer to this question comes from an unlikely source: Andrew Yang, one of the Democratic presidential contenders. His book that came out last year, The War on Normal People, raises some troubling questions. Whether you are a Democrat, a Republican or an Independent, this book is an interesting read. The book discusses each sector of our economy—office administration, sales and retail, food preparation and serving, transportation and material moving, manufacturing, white-collar jobs and others—and describes how machine learning and AI solutions have the potential to displace workers. Unlike past automation revolutions, this one is multi-faceted and can simultaneously impact all sectors of employment. Any job that can be classified as routine is a candidate for automation. Whether or not you subscribe to Yang’s proposed solution of providing a safety net to everyone with a “Universal Basic Income,” the themes of Yang’s book will be debated this year and for many more to come. We should pay heed to the warning bells and take corrective action, whatever that may be!

Health care, on the other hand, is in a relatively good place this coming decade. Doctors and health care workers need not fear wholesale replacement, but can look forward to technology really helping them take care of people. A recent book on this topic, AI in Health Care: The Hope, the Hype, the Promise, the Peril, by the National Academy of Medicine, takes a broad look at the impact of AI on the health care sector. On the hope and promise side of the spectrum, the book lists a litany of AI-enabled solutions: conversational agents, health monitoring and risk prediction, personalized interventions, assistance for cognitive disabilities, diagnostic decision support, surgery support, population and public health management, automated coding, etc. The technology will be largely of an “augmenting” nature, enhancing human intelligence and essentially freeing the clinical workforce from routine drudgery. On the peril side of the equation, the authors caution that solutions need to be developed on the assumption that malicious actors may attempt to subvert them. An interesting example of this is how one can take a normal retinal image and add some carefully crafted noise to it, so that the resulting image is classified as indicative of diabetic retinopathy! One can prevent such intrusions by attaching digital signatures to every image. Nevertheless, one needs to exercise care in creating models, avoiding bias, evaluating them and keeping them updated!
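The digital-signature safeguard mentioned above can be sketched in a few lines. This is a minimal illustration, not anything prescribed by the book: the shared key, the HMAC-SHA256 scheme, and the byte strings standing in for a retinal image are all assumptions made for the example.

```python
import hmac
import hashlib

def sign_image(image_bytes: bytes, key: bytes) -> str:
    """Compute an HMAC-SHA256 signature over the raw image bytes."""
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, key: bytes, signature: str) -> bool:
    """Return True only if the image bytes match the stored signature."""
    expected = sign_image(image_bytes, key)
    return hmac.compare_digest(expected, signature)

# Hypothetical retinal image, stood in for by raw bytes.
key = b"shared-secret-key"
original = b"...retinal-image-pixels..."
sig = sign_image(original, key)

# A perturbed copy (a few bytes changed) fails verification.
tampered = b"...retinal-image-pixelz..."
assert verify_image(original, key, sig)       # unchanged image verifies
assert not verify_image(tampered, key, sig)   # altered image is flagged
```

The point of the sketch is simply that any change to the pixels, however visually imperceptible, changes the signature, so a downstream system can refuse to score an image that does not verify.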

I would like to conclude this edition of AI Talk by wishing everyone a joyous 2020 and beyond!

I am always looking for feedback, and if you would like me to cover a story, please let me know. “See something, say something”! Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.