SPEAKER

DAY 1


Dr. Larry Heck | CEO, Viv Labs | Sr. VP, Samsung, and Head of Bixby North America

Dr. Larry Heck is the CEO of Viv Labs and an SVP at Samsung Electronics. Previously, he was a Principal Scientist at Google Research, a Distinguished Engineer and Chief Scientist of Microsoft Speech, VP of Search and Advertising Quality at Yahoo!, and VP of R&D at Nuance Communications. He began his career as a researcher at the Stanford Research Institute (SRI). Funded by the US NSA and DARPA, his team at SRI was the first, in 1998, to successfully build large-scale deep neural network (DNN) technology for speech processing, and the first to deploy a major industrial application of deep learning. In 2009, he co-founded the Microsoft Cortana digital assistant effort. Dr. Heck received a Ph.D. in Electrical Engineering from the Georgia Institute of Technology. He is an IEEE Fellow, has published numerous scientific papers, and holds 50+ US patents. He was inducted into the Academy of Distinguished Engineering Alumni at Georgia Tech and received the Distinguished Engineer Award from the Texas Tech University Whitacre College of Engineering.

Designing Digital Assistants with Machines-Teaching-Machines and Master-Apprentice Learning

I will present my recent research on automatically designing digital assistants. The new approach uses a “Machines Teaching Machines” (MTM) approach inspired by DeepMind’s AlphaGo but applied to the collaborative “game” of goal-directed conversations.

Digital assistants learn from other digital assistants, with each assistant initially trained through human interaction in the style of a master and apprentice. For example, when a digital assistant does not know how to complete a requested task, rather than responding “I do not know how to do this yet,” it responds with an invitation to the human: “Can you teach me?” Apprentice-style learning is powered by a combination of modalities: natural language conversation; non-verbal modalities including gesture, touch, robot manipulation and motion, and gaze; images and video; and speech prosody.
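As a toy illustration of this invitation-to-teach loop (the class and method names below are my own assumptions for the sketch, not the speaker's system), an apprentice assistant can record a human demonstration for an unknown task and later pass the learned skill to another assistant, machines-teaching-machines style:

```python
# Minimal sketch of the Master-Apprentice loop described above.
# All names here are illustrative assumptions, not Viv Labs / Bixby APIs.

class ApprenticeAssistant:
    def __init__(self):
        self.skills = {}  # task name -> list of demonstrated steps

    def handle(self, task, teach_fn=None):
        """Perform a known task, or ask to be taught instead of giving up."""
        if task in self.skills:
            return f"Performing '{task}': {self.skills[task]}"
        # Instead of "I do not know how to do this yet", invite teaching.
        if teach_fn is None:
            return f"I don't know '{task}' yet. Can you teach me?"
        self.skills[task] = teach_fn()  # record the human demonstration
        return f"Learned '{task}' from your demonstration."

    def share_skills(self, other):
        """Machines teaching machines: transfer learned skills to a peer."""
        other.skills.update(self.skills)


# A human teaches the "master" assistant one task...
master = ApprenticeAssistant()
master.handle("book a table", teach_fn=lambda: ["open app", "pick time", "confirm"])

# ...and the master then teaches a second assistant directly.
apprentice = ApprenticeAssistant()
master.share_skills(apprentice)
print(apprentice.handle("book a table"))
```

In this sketch each human demonstration propagates to every assistant a skill is shared with, which is the mechanism behind the collective-growth claim in the abstract.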

The new apprentice learning model is always helpful and always learning in an open world, in contrast to current commercial digital assistants, which are only sometimes helpful, are trained exclusively offline, and operate over a closed world of “walled garden” knowledge. Combining MTM with Master-Apprentice learning has the potential to yield exponential growth in the collective intelligence of digital assistants.