From 3M Health Information Systems
AI Talk: Bingo, Dungeon and transparency
This week’s AI Talk…
Bingo for kids
The word bingo conjures up images of senior citizens congregating in a community center with numbered cards and daubers, but this bingo, discussed in the latest issue of MIT Technology Review, is designed for kids ages 9 to 14. It is AI Bingo, a game designed to stir up kids’ curiosity about AI. The cells in the cards’ grid of rows and columns, however, contain features of AI systems like “search for something in Google,” “have an email labeled as ‘important,’” or “get a forecast from a weather map.” The cards also have two blanks for each feature: one for a dataset and another for a prediction. As the announcer reads out a series of datasets and predictions, the kids have to fill out their bingo cards, matching each dataset and prediction with the right AI-enabled feature! Any kid who gets a row, column or diagonal correct wins. It’s a novel way to introduce kids to how AI algorithms work, forcing them to think about how AI systems predict and what datasets they use for that purpose.
Well, we don’t want to leave out adults! It turns out you can have fun playing AI Dungeon, a text-based game that uses AI. Built on top of OpenAI’s text-generation system, it lets you co-write a story with the computer! You start by choosing the genre of story you want to write: adventure, science fiction, fantasy, etc. The computer starts off the story and you add to it; it then continues where you left off and adds a few lines. In this way, you and the computer alternate in generating the storyline. The computer keeps track of characters and is impressive in the coherence of the storyline it produces! It reminds me of the Eliza system from some fifty years ago, which behaved like your shrink, but that was based on simple tricks. This one probably uses some tricks as well, but relies mainly on the text-generation capabilities of OpenAI’s language models. If you want to try your hand at co-writing a story, here is the link.
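For the curious, the turn-taking described above can be sketched in a few lines of Python. This is only a toy illustration, not AI Dungeon’s actual code: the `model_continue` function below is a hypothetical stand-in that returns a canned line, where a real implementation would call a text-generation language model on the story so far.

```python
def model_continue(story_so_far):
    """Hypothetical stand-in for a language model.

    A real system would feed the accumulated story to a text generator
    and return its continuation; here we just return a canned line.
    """
    return "The cave mouth yawned ahead, darker than the night behind you."

def play(opening, player_turns):
    """Alternate between player input and model continuations."""
    story = [opening]                    # the computer starts the story
    for player_line in player_turns:     # the player adds to it...
        story.append(player_line)
        # ...and the model continues where the player left off
        story.append(model_continue(" ".join(story)))
    return "\n".join(story)

transcript = play(
    "You stand at the entrance of an ancient cave.",
    ["I light a torch and step inside."],
)
print(transcript)
```

The interesting engineering, of course, is hidden inside the model call: keeping characters and plot coherent across many turns is what makes AI Dungeon impressive.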
Case for AI transparency
The Brookings Institution is a nonprofit public policy organization. It has an initiative focused on AI and emerging technology (AIET) that explores policy issues related to the governance of AI technologies. Its latest article, published last week, addresses the need for transparency in AI. The policy prescription advanced by the article is that the bots we interact with should declare that they are bots – i.e., be transparent. Of course, if you interact with a customer assistant bot that drones on and says, “say ‘Yes’ or press 1,” you know you are dealing with an automated system. But those systems may soon be a thing of the past: as bots become increasingly sophisticated, a casual user may not necessarily discern the difference. In this coming age of what the author, Alex Engler, calls “Anonymous AI,” bots are likely to be ubiquitous in all parts of life – from commerce to politics. He refers to a study in which chatbots’ sales performance dropped by 80 percent when they disclosed their true nature, compared to bots that did not disclose. Talk about incentives for not being transparent! In the social media domain, it has been estimated that about 13 percent of Twitter accounts are automated bots. Last month, Facebook announced that it will remove deepfake videos, and this week Twitter did the same. Regardless, the author argues, Congress should pass legislation to make all bots transparent – i.e., when you interact with one over the phone, online or through your favorite social media platform, there should be a declaration that the content was automatically generated by a bot. Will that happen? Good question.
The AI Dungeon link was sent to me by my previous boss, Detlef Koll. My thanks to Mary Zeigle for pointing me to the work done by the Brookings Institution.
I am always looking for feedback, and if you would like me to cover a story, please let me know. “See something, say something!” Leave me a comment below or ask a question on my blogger profile page.
V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.