Learning methods have had an immense impact on the practical implementation of robotics and automation systems, particularly in settings where models of the agent and the environment are unknown. As the use of machine learning techniques for control gains momentum, the complementary question of what role control should play in learning becomes relevant. I will talk about recent progress in control for learning, focusing on hybrid control of a reinforcement learning process. We treat the problem of combining actions based on learned models with experience-based state-action policy mappings as a hybrid scheduling problem, alternating between model-based and model-free approaches to learning. The approach efficiently learns motor skills and improves the performance of both model-based control and experience-based policies. I will illustrate the hybrid control method on a variety of robot reinforcement learning benchmark tasks in simulation, as well as on hardware manipulation tasks. I will end the talk with a brief discussion of the roles control analysis and synthesis can play in learning systems.
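To make the scheduling idea concrete, here is a minimal toy sketch of alternating between a learned-model controller and an experience-based policy. All names, the linear toy dynamics, and the error-based switching rule are illustrative assumptions for exposition, not the speaker's actual algorithm.

```python
# Toy hybrid scheduling between model-based and model-free control.
# Assumptions (not from the talk): scalar linear dynamics, a proportional
# "experience-based" policy, and switching on one-step model prediction error.

def model_based_action(state, model):
    """Greedy one-step action under the learned model x' = a*x + b*u."""
    a, b = model
    # Pick the discrete action minimizing the predicted next-state magnitude.
    return min((-1.0, 0.0, 1.0), key=lambda u: abs(a * state + b * u))

def policy_action(state, gain):
    """Model-free stand-in: a simple proportional feedback policy."""
    return -gain * state

def hybrid_step(state, model, gain, model_error, tol=0.05):
    """Trust the learned model only when its recent prediction error is small."""
    if model_error < tol:
        return model_based_action(state, model)
    return policy_action(state, gain)

def rollout(x0=1.0, steps=20):
    true_a, true_b = 0.9, 0.5   # "unknown" true dynamics
    model = [0.5, 0.5]          # crude initial model estimate
    gain, x, err = 0.8, x0, 1.0
    for _ in range(steps):
        u = hybrid_step(x, model, gain, err)
        x_next = true_a * x + true_b * u
        pred = model[0] * x + model[1] * u
        err = abs(pred - x_next)
        # Gradient-style model update toward the observed transition.
        model[0] += 0.1 * (x_next - pred) * x
        model[1] += 0.1 * (x_next - pred) * u
        x = x_next
    return abs(x)

print(rollout())  # final distance from the origin after 20 hybrid steps
```

In this sketch the model-free policy drives the early steps while the model is inaccurate; once prediction error drops below the tolerance, control hands off to the model-based action, mirroring the alternation described above.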
Todd Murphey is a Professor of Mechanical Engineering in the McCormick School of Engineering and of Physical Therapy and Human Movement Sciences in the Feinberg School of Medicine, both at Northwestern University. He received his Ph.D. in Control and Dynamical Systems from the California Institute of Technology. His laboratory is part of the Center for Robotics and Biosystems, and his research interests include robotics, control, active learning in automation, and emergent behavior in dynamical systems.