SPEAKER

DAY 2


Joel Lehman, OpenAI | ML Research Scientist

Joel Lehman is a research scientist at OpenAI, and was previously a founding member of Uber AI Labs and an assistant professor at the IT University of Copenhagen. His research focuses on open-endedness, reinforcement learning, and AI safety. His PhD dissertation introduced the novelty search algorithm, which inspired “Why Greatness Cannot Be Planned,” a popular science book co-written with Ken Stanley on what search algorithms imply for individual and societal objectives.

Towards Safe, Interpretable and Moral Reinforcement Learning Agents

Reinforcement learning is a powerful paradigm of machine learning, in which agents (such as robots) are trained through rewards to perform tasks. While this approach has proven successful in solving closed-world video games, it is difficult to apply in the real world. One reason is the challenge of creating safe agents, i.e., agents that do not unintentionally damage themselves or their environment.

This talk describes the challenges of AI safety and reviews three research projects that take steps towards agents that are safer, more interpretable, and respectful of moral rules.