AI Talk: War, minds and tech

March 15th, 2019 / By V. “Juggy” Jagannathan, PhD

This week’s AI Talk…

War and Possible Minds

On almost every front we are now bombarded by the potential for good, evil and stupidity (bugs) of emerging AI technology. If any of you saw the recent release of Captain Marvel, AI and superintelligence are central to its story line. Of course, Hollywood and the science fiction genre have explored this territory for a long time. I recently read two books that underscore this preoccupation with AI. The first was a recommendation from Bill Gates: Paul Scharre's Army of None. The second was Possible Minds, a volume edited by John Brockman in which 25 AI luminaries speculate about the future of the field.

Paul Scharre traces the evolution of munitions over the years—from laser-guided bombs and precision (smart) bombs to swarm drones. His overriding message, however, concerns the military use of AI: such use must be circumspect, and machines cannot be given too much autonomy. Autonomous weapons need human oversight to avoid potentially catastrophic consequences. Thankfully, U.S. military commanders appear to agree. It is not hard, however, to imagine rogue actors abusing autonomous weapons.

In Possible Minds, John Brockman sets out to give voice to different perspectives on AI. For a book focused on the future of AI, it looks back a great deal, going all the way to Norbert Wiener's work from 1948 (he warned about the danger of autonomous machines). The book is a trip down memory lane, tracing the ups and downs of the field of AI over the past 60 years through essays. The overriding concern is Artificial General Intelligence (AGI)—often equated with superintelligence. Predictions about when we will achieve AGI vary widely, but there is a rough consensus that it will happen; whether it takes 30 years or 300, systems need to be designed with the right goals—essentially a safety engineering issue. Some essays refer to the Asilomar AI Principles, which provide a roadmap for conducting AI research and giving AI systems goals aligned with the common good. Other essays explore the intersections of genetics and art with AI. One argues that the Turing test is no longer relevant for testing AGI. Another states the obvious: a four-year-old has better learning skills than today's AI systems. All in all, an interesting collection of perspectives from the old guard. It would have been good to see a few perspectives from younger researchers. For that, look at the next article!

SXSW Conference

Every year in Austin, TX, an eclectic group of professionals gathers to celebrate music, art and technology. Just take a look at the 25 sessions that happened this week at the SXSW Conference. An impressive collection of topics. Without actually attending the conference, we will have to rely on third-party reporting. I did find a collection of blogs covering various aspects of SXSW, including the technology-related discussions. A number of interesting applications were discussed: an AI-powered coffee shop that can interpret an order given in sign language; Bose's new glasses (called Frames) that double as wireless headsets and can talk to Siri or Google Assistant; an AI-powered robot pet that can be trained to recognize its owner and learn a few tricks; and lots of discussion around the future of news, Facebook and Quibi. What is Quibi? It is short for Quick Bites—a brief video story. Land O'Lakes (which I typically associate with butter) also made a splash at SXSW talking about biodiversity. Finally, this blog covers health apps and autonomous cars.

I am always looking for feedback, and if you would like me to cover a story, please let me know. "See something, say something!" Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Vice President of Research for M*Modal, with four decades of experience in AI and Computer Science research.