MAE -- Sam Green

Date: Wednesday, November 29, 2017 - 1:30pm
Location: HFH 1132
Title: Deep Reinforcement Learning Acceleration: Opportunities and Challenges
Speaker: Sam Green
Committee: Cetin Koc (Chair), Omer Egecioglu, Tim Sherwood, Craig Vineyard

Reinforcement learning is a third branch of machine learning, alongside supervised and unsupervised learning. Reinforcement learning methods use knowledge gained from interaction with an environment to develop strategies that maximize the collection of "rewards". The basic concepts of reinforcement learning overlap with psychology, neuroscience, control theory, and optimization. There have been impressive reinforcement learning results in the past, including Pieter Abbeel's 2006 work on advanced aerobatic helicopter flight. Deep neural networks have recently been incorporated into new "deep" reinforcement learning algorithms to produce surprising results. A few notable exemplars include DeepMind's and OpenAI's superhuman performance at the strategy games of Go and Dota 2, and Chelsea Finn and Sergey Levine's work on visuomotor control based on visualization of action outcomes.
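
As a standard sketch of the objective such methods optimize (general background, not material specific to this talk): at time step $t$ the agent observes a state, selects an action, and receives a reward $r_{t+1}$; it seeks a policy that maximizes the expected discounted return

$$G_t = \sum_{k=0}^{\infty} \gamma^k \, r_{t+k+1}, \qquad 0 \le \gamma < 1,$$

where the discount factor $\gamma$ weights near-term rewards more heavily than distant ones.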

During the past three years, research on quantization and compression has reduced the algorithmic, space, and energy requirements of deep neural network execution. The computer architecture community is now exploring hardware accelerator designs for deep neural networks. Given recent reinforcement learning results, we expect hardware designers to begin considering architectures tailored to accelerating these algorithms.
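As a minimal illustration of what quantization buys (a generic sketch, not a technique from the talk; the function names below are hypothetical): mapping 32-bit floating-point weights onto 8-bit integers shrinks storage and memory traffic roughly fourfold, at the cost of a small, bounded rounding error.

import numpy as np

def quantize_uint8(w):
    # Uniform (affine) quantization: map floats onto 256 evenly spaced levels.
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0   # guard against constant tensors
    q = np.round((w - w_min) / scale).astype(np.uint8)   # 8 bits per weight
    return q, scale, w_min

def dequantize(q, scale, w_min):
    # Recover an approximation of the original weights for computation.
    return q.astype(np.float32) * scale + w_min

weights = np.random.randn(64, 64).astype(np.float32)
q, scale, w_min = quantize_uint8(weights)
error = np.abs(weights - dequantize(q, scale, w_min)).max()
print(f"4x smaller storage, max reconstruction error {error:.4f}")

In practice, accelerator-oriented quantization schemes also handle activations, per-channel scales, and retraining to recover lost accuracy.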

This talk introduces reinforcement learning and its various applications, surveys hardware accelerators for deep neural networks, and discusses how quantization and compression affect deep neural network performance. The goal of this survey is to understand the architectural and mathematical requirements for building high-performance or low-power reinforcement learning hardware accelerators. I conclude with a discussion of how neuromorphic engineering will improve reinforcement learning, as well as other opportunities for future research.

 

Everyone welcome!