AI Talk: Emotional AI, fake news, stereotypes and vacations

September 6th, 2019 / By V. “Juggy” Jagannathan, PhD

Emotional AI

Fortune featured an interesting video interview this week with the CEO of Affectiva, Rana el Kaliouby. Affectiva is a spin-off from the MIT Media Lab. What does the company do? It studies human expressions. During the interview, the interviewer, Jeremy Kahn of Fortune, makes various faces and the corresponding emoji shows up on screen. He smiles and a smiling emoji appears; he frowns and another one shows up. Now, what is the big deal about such a capability? It turns out it has a lot of applications. For example, it can help with negotiations, says Professor Curhan of MIT. Ms. el Kaliouby mentions a slew of applications in the video interview: communicating the effects of bullying, identifying depression, communicating with caregivers, and so on. Because such technology can be so invasive, privacy concerns have been raised. To address these concerns, Affectiva uses an opt-in policy for its users, and the company claims to have turned down money from companies in the security industry. The interview is worth watching.

Fake-news generator

In February, OpenAI, a San Francisco-based AI research lab, released only a fraction of the large language model it had trained. The reason they released only a fraction? They were worried about the model's potential to generate fake news. Fast forward six months: they have now released a model roughly half the size of the full model, which has 1.5 billion trainable parameters. Along with the model, they released a report that argues for the staged release of such models and explores the social impacts of their decision. Essentially, the argument is that caution should be exercised when releasing a model, in order to observe its impact and take appropriate action. Caution is warranted, for sure; however, it is unclear how that helps with this particular type of model. As one commentator, Richard Socher of Salesforce, quipped: “You don’t need AI to create fake news! People can easily do it 😊.” Efforts are underway on the detection front to identify machine-generated fake news. I think that focus is misplaced: one needs AI that can detect fake news, period. It does not matter how it was generated!
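For the curious, here is what sampling from the released checkpoint looks like in practice. A minimal sketch, assuming the Hugging Face transformers library and its "gpt2-large" model name (the roughly-half-size release discussed above); the post itself does not name any tooling.

```python
# A minimal sketch, not OpenAI's own tooling. Assumes the Hugging Face
# "transformers" library and its "gpt2-large" checkpoint (the roughly
# half-size release discussed above).
from transformers import pipeline

# Build a text-generation pipeline around the released checkpoint.
generator = pipeline("text-generation", model="gpt2-large")

# Give the model a news-style prompt and sample two continuations.
prompt = "Scientists announced today that"
outputs = generator(
    prompt,
    max_length=60,
    num_return_sequences=2,
    do_sample=True,  # sampling is required to get multiple distinct outputs
)

for out in outputs:
    print(out["generated_text"])
    print("-" * 40)
```

Even a short prompt like this yields fluent, plausible-sounding continuations, which is precisely the property that fueled the staged-release debate.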

Reinforcing stereotypes

Last week, Futurity had an article tabulating the top 11 adjectives (why 11, and not 15 or 20 or 5? Beats me!) most commonly applied to men and women in books published from 1900 to 2008. It found that adjectives directed at women tend to describe physical appearance, while those for men typically refer to behavior. The analysis drew on 3.5 million books. While the study itself just confirms the obvious bias exhibited by people, what I found interesting was the source of the data: Google’s ngram corpus. Go check out their ngram viewer: it will chart how often your words have appeared from the year 1800 onward, purely by mining data from books. You can even query it programmatically, as the sketch below shows.
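A minimal sketch, assuming the undocumented JSON endpoint that the Ngram Viewer's own web page calls behind the scenes; this is not an official Google API, and the corpus id below is my assumption.

```python
# A minimal sketch using the undocumented JSON endpoint behind the Ngram
# Viewer (not an official Google API; the corpus id is an assumption).
import requests

def ngram_frequencies(phrases, year_start=1800, year_end=2008):
    """Return {ngram: [per-year relative frequencies]} for comma-separated phrases."""
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": phrases,       # e.g. "beautiful,rational"
            "year_start": year_start,
            "year_end": year_end,
            "corpus": 15,             # assumed id for the English (2012) corpus
            "smoothing": 0,           # raw yearly values, no moving average
        },
        timeout=30,
    )
    resp.raise_for_status()
    return {entry["ngram"]: entry["timeseries"] for entry in resp.json()}

# Compare an appearance adjective with a behavior adjective, echoing the study.
freqs = ngram_frequencies("beautiful,rational")
for word, series in freqs.items():
    print(word, "peak relative frequency:", max(series))
```

The same per-year frequency data underlies the Futurity study's adjective counts, just scaled up to millions of books.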

Vacation science

For those of you who are total workaholics, here is a bit of advice from a perspective shared in JAMA: “Using vacation time instead of forfeiting it could lessen the chances of developing metabolic syndrome, which raises the risk of heart disease, stroke, and diabetes.” This advice comes from a study with a small cohort of just 63 participants. Perhaps the study needs to scale up by an order of magnitude before drawing firm conclusions, but the advice is pretty much what conventional wisdom dictates!

I am always looking for feedback, and if you would like me to cover a story, please let me know. “See something, say something!” Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is an AI Evangelist with four decades of experience in AI and Computer Science research.