Bayesian Learning for Autonomous Decision-Making

December 11, 2020, Zoom

Alec E. Koppel

U.S. Army Research Laboratory, Statistics

Abstract

Autonomous systems are driven by dynamics, necessitating the adaptation of learned models to new information. Linear models adapt readily, but deep models are essential to modern speech and perception. Thus, fundamental tradeoffs between model complexity and statistical accuracy must be understood to facilitate autonomous adaptation. Motivated by this spectrum of possibility, and by the fact that an infinite-node one-layer Gaussian mixture model may identify any deep neural network, we characterize architecture/accuracy tradeoffs in nonparametric models. We first note that the update rules for kernel regression, Gaussian processes, and importance (Monte Carlo) sampling imply the curse of dimensionality: each new sample enters the model with an associated weight. We then propose a compression framework that sparsifies both the algorithm's sample path and its limit, providing an explicit tradeoff between memory and the radius of convergence: more accurate convergence requires greater memory. We demonstrate that this approach facilitates stable and efficient training of nonlinear statistical models that outperform alternative memory-reduction techniques, and show how these methods may be applied to problems arising in autonomous systems: incremental localization, mapping, and nonlinear system identification.
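To make the memory issue concrete, the following minimal Python sketch (hypothetical, not the speaker's code) shows online kernel regression in which every incoming sample adds a weighted kernel center to the model, followed by a crude compression step that drops centers whose weights fall below a tolerance. The class name, parameters, and the magnitude-based pruning rule are illustrative assumptions; the compression framework described in the abstract uses a more principled sparsification criterion.

# Hypothetical sketch: functional stochastic gradient descent for kernel
# regression. The dictionary grows by one center per sample; a simple
# pruning step bounds memory at the cost of a small approximation error.
import numpy as np

def gaussian_kernel(x, centers, bandwidth=1.0):
    """RBF kernel between a point x (shape (d,)) and an array of centers (m, d)."""
    diffs = centers - x
    return np.exp(-np.sum(diffs**2, axis=1) / (2.0 * bandwidth**2))

class OnlineKernelRegressor:
    def __init__(self, step=0.1, reg=1e-3, bandwidth=1.0, tol=1e-3):
        self.step, self.reg, self.bw, self.tol = step, reg, bandwidth, tol
        self.centers = np.empty((0, 1))   # kernel dictionary (one row per retained sample)
        self.weights = np.empty(0)        # one weight per retained sample

    def predict(self, x):
        if self.weights.size == 0:
            return 0.0
        return float(self.weights @ gaussian_kernel(x, self.centers, self.bw))

    def update(self, x, y):
        # Each new sample enters the model as a kernel center with its own weight.
        err = self.predict(x) - y
        if self.weights.size == 0:
            self.centers = x[None, :]
            self.weights = np.array([-self.step * err])
        else:
            self.weights *= (1.0 - self.step * self.reg)   # shrink old weights (regularization)
            self.centers = np.vstack([self.centers, x])
            self.weights = np.append(self.weights, -self.step * err)
        self._compress()

    def _compress(self):
        # Crude stand-in for dictionary sparsification: discard centers whose
        # weight magnitude is below tol, trading accuracy for memory.
        keep = np.abs(self.weights) > self.tol
        self.centers, self.weights = self.centers[keep], self.weights[keep]

# Usage: stream noisy samples of y = sin(x); the retained dictionary stays
# far smaller than the number of samples processed.
rng = np.random.default_rng(0)
model = OnlineKernelRegressor(step=0.3, bandwidth=0.5, tol=1e-2)
for _ in range(500):
    x = rng.uniform(-3, 3, size=1)
    model.update(x, np.sin(x[0]) + 0.05 * rng.standard_normal())
print("retained centers:", model.weights.size)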

Speaker's Bio

Alec Koppel has been a Research Scientist at the U.S. Army Research Laboratory in the Computational and Information Sciences Directorate since September 2017. He completed his Master's degree in Statistics and Doctorate in Electrical and Systems Engineering, both at the University of Pennsylvania (Penn), in August 2017. Before coming to Penn, he completed his Master's degree in Systems Science and Mathematics and Bachelor's degree in Mathematics, both at Washington University in St. Louis (WashU), Missouri. He is a recipient of the 2016 UPenn ESE Department Award for Exceptional Service, an awardee of the Science, Mathematics, and Research for Transformation (SMART) Scholarship, a co-author of a Best Paper Finalist at the 2017 IEEE Asilomar Conference on Signals, Systems, and Computers, and a finalist for the 2019 ARL Honorable Scientist Award. His research interests are in optimization and machine learning. Currently, he focuses on scalable Bayesian learning, reinforcement learning, and decentralized optimization, with an emphasis on problems arising in robotics and autonomy.