Challenges in Learning Behavior Certificates from Data

October 17, 2025, Webb Hall 1100

Stephen Tu

Abstract

A major challenge in deploying autonomous systems in the real world is ensuring that they operate safely and predictably, even in novel environments not encountered during development. One promising direction is to certify system behavior using behavior certificates, which provide sufficient conditions for guaranteeing performance and safety. Recently, learning-based statistical methods have gained traction as a tractable means of constructing such certificates. In this talk, we will first explore the fundamental limitations of these learning-based approaches. By drawing connections to lower bounds in derivative-free optimization, we show that a computational curse of dimensionality is unavoidable for establishing the de-randomized, almost-sure guarantees typical in the analysis of feedback control systems. Accepting that verification inherently scales poorly with system dimension, we will next highlight recent work that uses lower-dimensional latent dynamics models to overcome these limitations. We first introduce a set of forward and backward conjugacy conditions that quantify the fidelity with which a latent dynamics model reconstructs the original dynamics. Using these conditions, we show how Lyapunov stability and set invariance properties established for the latent system can be systematically transferred back to the original dynamics. Moreover, the conjugacy conditions naturally serve as loss functions, enabling latent representations and dynamics models to be learned directly from data. We conclude by discussing ongoing efforts to further extend our latent-space methods.
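
To make the "conjugacy conditions as loss functions" idea concrete, the following is a minimal, hypothetical sketch of how forward and backward conjugacy-style residuals could be turned into training losses for an encoder, decoder, and latent dynamics model. The architecture, names (encode, decode, latent_f), and loss weighting are illustrative assumptions, not the formulation presented in the talk.

```python
# Hypothetical sketch: conjugacy-style losses for learning a latent dynamics model.
# All module names and the equal weighting of the two losses are assumptions.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, d_in, d_out, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, width), nn.ReLU(),
            nn.Linear(width, d_out),
        )
    def forward(self, x):
        return self.net(x)

d_x, d_z = 10, 3            # original and latent state dimensions (assumed)
encode = MLP(d_x, d_z)      # E: x -> z
decode = MLP(d_z, d_x)      # D: z -> x
latent_f = MLP(d_z, d_z)    # latent dynamics: z_t -> z_{t+1}

def conjugacy_losses(x_t, x_next):
    """Forward/backward conjugacy-style residuals on observed transitions (x_t, x_next).

    forward:  latent_f(encode(x_t)) should match encode(x_next)
    backward: decode(latent_f(encode(x_t))) should match x_next
    """
    z_pred = latent_f(encode(x_t))
    forward_loss = nn.functional.mse_loss(z_pred, encode(x_next))
    backward_loss = nn.functional.mse_loss(decode(z_pred), x_next)
    return forward_loss, backward_loss

# One illustrative gradient step on a batch of placeholder transitions.
opt = torch.optim.Adam(
    list(encode.parameters()) + list(decode.parameters()) + list(latent_f.parameters()),
    lr=1e-3,
)
x_t, x_next = torch.randn(32, d_x), torch.randn(32, d_x)  # placeholder data
fwd, bwd = conjugacy_losses(x_t, x_next)
loss = fwd + bwd
opt.zero_grad()
loss.backward()
opt.step()
```

In this sketch, the forward residual measures how well the latent model tracks the encoded true dynamics, while the backward residual measures how well predictions decode back to the original state space; driving both small is what would let properties certified in the latent space transfer back to the original system.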

Speaker's Bio

Stephen Tu is an assistant professor in the Department of Electrical and Computer Engineering at the University of Southern California. His research interests span statistical learning theory, safe and optimal control, and generative modeling. Specifically, his work focuses on non-asymptotic guarantees for learning dynamical systems, rigorous analysis of distribution shift in feedback settings, safe control synthesis, and, more recently, the foundations of generative modeling. Stephen earned his Ph.D. in Electrical Engineering and Computer Sciences (EECS) from the University of California, Berkeley. Prior to joining USC, Stephen was a research scientist at Google DeepMind Robotics, where he focused on combining learning and control-theoretic approaches for robotics.

Video URL: