From 3M Health Information Systems
AI Talk: Face recognition and flipping burgers
This week’s AI Talk…
Furor over face recognition
Last week, Amazon surprised many by announcing a one-year moratorium on police use of its facial recognition technology. Earlier, IBM announced that it is exiting the face-recognition software business entirely. Microsoft announced it will stop selling its face recognition solution to police departments until there is clarity on federal regulations governing the technology. What prompted all these stalwarts of technology to halt sales of this technology to law enforcement?
The catalyst, of course, was the murder of George Floyd by police and the resulting national and international protests and calls for massive reform. But this story has deeper roots and began a few years ago. This well-written article by Karen Hao in MIT Technology Review traces the history of how it happened.
It began when Joy Buolamwini of the MIT Media Lab, along with Microsoft researcher Timnit Gebru, identified gender and race bias in face recognition software. They clearly demonstrated that such software performed poorly on non-white faces; the specific experiments involved recognition of gender. For example, Oprah Winfrey was identified as male with fairly high probability. The results were so alarming that Joy founded a movement, the Algorithmic Justice League, to spotlight the problems inherent in training machine learning algorithms on biased data.
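The core of that kind of audit is simple: instead of reporting one overall error rate, break the classifier's errors down by demographic subgroup and compare. Here is a toy sketch of that idea; all the data and labels below are fabricated for illustration and are not from the actual study.

```python
# Toy sketch: surfacing demographic bias by computing a classifier's
# error rate per subgroup instead of one aggregate number.
# Records are (predicted_gender, true_gender, subgroup) -- fabricated data.
from collections import defaultdict

predictions = [
    ("male", "male", "lighter"),
    ("female", "female", "lighter"),
    ("male", "female", "darker"),   # misclassification
    ("female", "female", "darker"),
    ("male", "male", "darker"),
    ("male", "female", "darker"),   # misclassification
    ("female", "female", "lighter"),
    ("male", "male", "lighter"),
]

def error_rate_by_group(records):
    """Return the fraction of wrong predictions within each subgroup."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for predicted, actual, group in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

rates = error_rate_by_group(predictions)
print(rates)  # a large gap between subgroups is the red flag
```

The aggregate error rate here is 25 percent, which sounds tolerable; the per-group breakdown (0 percent on one subgroup, 50 percent on the other) is what reveals the bias, and that disaggregation was the methodological point of the research described above.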
In June 2018, more than seventy civil rights and research organizations wrote to Amazon warning that selling facial recognition software to police departments without any safeguards could open the door to abuse, particularly in heavily policed, low-income neighborhoods. Despite many petitions and the company’s own employees urging caution, Amazon did not take action.
Bias in face-recognition training did get a lot of attention in the research community, and some progress has been made to address it. Fundamentally, however, the technology can still be abused. It is one thing to unlock your phone by having it recognize your face, but an entirely different matter if the technology is used for surveillance, resulting in the apprehension of wrongly identified individuals. Fast forward to this past week: the protest movement in the aftermath of George Floyd’s murder prompted lawmakers to consider police reform bills, and Amazon, IBM and Microsoft decided to pause the deployment of flawed technology. These are indeed interesting and troubling times.
“Cobots” and flipping burgers
On the lighter side of tech news, this article in Futurism talks about burger-flipping robots. Increasingly, when we talk about robots, there is an undercurrent of anxiety about job loss, but this article paints a more positive picture. The burger-flipping robot automates very specific, boring and tedious work, which is nonetheless important to the consumer of the burger. The robot, appropriately named Flippy, works hand in hand with a human coworker on an assembly line. The company now also has a robot arm that will fry food and clean grills. These robots are being dubbed “cobots” because they cooperate with humans. It appears the company is seeking investment from everyday investors. If you are interested, you can click on “Invest” on their home page and watch their slick marketing video for the “World’s first autonomous robotic kitchen assistant.” Please don’t take this blog as a recommendation for this company—it is simply an interesting story.
I am always looking for feedback, and if you would like me to cover a story, please let me know. “See something, say something!” Leave me a comment below or ask a question on my blogger profile page.
V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.