From 3M Health Information Systems
AI Talk: A teacher, a toothbrush and driverless cars
This week’s AI Talk…
Driverless cars and wearable AI
I saw a link in the Algorithm blog to this talk at MIT last month. It is a recording of a seminar by entrepreneur and professor Amnon Shashua, President and CEO of Mobileye, an Intel company that is building technology for driverless cars and wearable AI. If you skip the first 13:53 of the talk, a semi-technical look at statistics and deep learning, the rest is quite accessible.

First, he walks through the challenges of building an autonomous car. You start with perception: the ability to understand one's environment using a combination of cameras, sonar, lidar and so on. Next is the decision-making question: how will the car decide what to do in any given situation? Is it appropriate to be aggressive, to be slow, to be fast? What does it mean to be "careful?" The third factor is cost, specifically how to make all this technology cost-effective for the average consumer.

His company is getting ready to deploy a robo-taxi service in Tel Aviv next year, with a tele-operator assisting ten driverless vehicles when they get stuck. He refers to this as Level 4 automation on the SAE scale, which runs from 0 to 5. I was a bit disappointed by his projection that a fully autonomous, self-driving car for consumers is still a decade away; Elon Musk is more aggressive in his projections. But the robo-taxi service? It will be ready next year!
Dr. Shashua also discussed a few wearable AI applications his company is deploying to assist disabled people. One helps visually impaired people navigate their surroundings using a gadget similar in appearance to Google Glass. Another helps hearing-impaired people make sense of audio signals in a crowded room. A third could potentially help Alzheimer's patients recognize people! Fascinating talk.
It is worth your time to listen to the entire talk, perhaps at 1.5x or 2x speed! A transcript helps, but the talk has video clips that are worth watching.
Teaching machines to learn
I saw this article on Microsoft’s AI Blog. It asks the question, “How does one teach machines to learn?” The argument is that machine learning (ML) works reasonably well if you have a lot of labeled data, i.e., training examples with inputs and correct output labels. But what if there is not that much labeled data? Can we use the knowledge of experts to teach the machines? That is the basic theme pursued here, explained in greater detail in the non-technical article “Machine Teaching: A New Paradigm for Building Machine Learning Systems.”

The argument itself is not new: deep learning with a lot of labeled data reduces or eliminates the need for feature engineering, the discipline of hand-crafting heuristic knowledge into a program. The argument here is that the absence of abundant labeled data brings feature engineering, problem decomposition, heuristics and the like back into the picture, and the idea is to combine these with traditional ML. A related argument for this approach is that there are millions of domain experts but only tens of thousands of ML scientists. By providing a teaching tool these experts can use, one could enable those millions to produce useful solutions for the masses. Whether this vision will be realized remains to be seen, but one can never underestimate Microsoft!
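To make the idea concrete, here is a minimal sketch of one way expert knowledge can substitute for hand-labeled data: experts write simple labeling rules, those rules weakly label otherwise-unlabeled examples, and a conventional ML model is then trained on the resulting labels. This is my own toy illustration, not Microsoft's actual tooling; the data, rule names and the tiny word-count classifier are all hypothetical.

```python
# Sketch: expert-written rules produce "weak" labels for unlabeled text,
# then a simple ML-style classifier is trained on those labels.
from collections import Counter, defaultdict

# Unlabeled support tickets (hypothetical data).
tickets = [
    "cannot log in to my account",
    "billing charged twice this month",
    "password reset link broken",
    "refund for duplicate charge please",
]

# Expert-written labeling rules: each returns a label or None.
def rule_access(text):
    return "access" if any(w in text for w in ("log in", "password")) else None

def rule_billing(text):
    return "billing" if any(w in text for w in ("charge", "billing", "refund")) else None

RULES = [rule_access, rule_billing]

def weak_label(text):
    """Combine the experts' rules by majority vote; None if no rule fires."""
    votes = [lbl for rule in RULES if (lbl := rule(text)) is not None]
    return Counter(votes).most_common(1)[0][0] if votes else None

# Build a training set with no hand labeling at all.
train = [(t, lbl) for t in tickets if (lbl := weak_label(t)) is not None]

# Train a tiny word-count classifier (a crude stand-in for "traditional ML").
word_counts = defaultdict(Counter)
for text, label in train:
    word_counts[label].update(text.split())

def predict(text):
    """Score each label by how many of its training words appear in the text."""
    scores = {lbl: sum(cnt[w] for w in text.split()) for lbl, cnt in word_counts.items()}
    return max(scores, key=scores.get)

print(predict("need a refund"))        # picks up billing-related vocabulary
print(predict("forgot my password"))   # picks up access-related vocabulary
```

The point of the sketch is the division of labor: the domain expert only writes the two small rules, and the statistical model generalizes beyond them (e.g., "need a refund" never appeared in a rule or ticket verbatim).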
Recently, I was chatting with my brother-in-law, and he asked me what AI could possibly do for a toothbrush. I was flummoxed. He mentioned that he had recently seen an Oral-B toothbrush with AI! I ended up googling it. Sure enough, Oral-B recently announced a smart toothbrush, its “Genius” series. OK, so what in the world does it really do? It turns out it has sensors that can map where you are brushing and how much time you are spending on each region of your teeth. The data is downloaded to a smartphone app, which gives you advice on where you need to brush more! You will be out a mere $279 to garner this piece of advice.
I am always looking for feedback, and if you would like me to cover a story, please let me know. “See something, say something”! Leave me a comment below or ask a question on my blogger profile page.
V. “Juggy” Jagannathan, PhD, is Vice President of Research for M*Modal, with four decades of experience in AI and Computer Science research.