Distributed optimization algorithms allow collections of agents to cooperatively minimize joint cost functions without requiring them to share their local data. This can be useful when the data sets are large (as in some machine learning applications) or when the agents wish to keep their data private. In this talk I will examine some existing algorithms for distributed convex optimization and interpret them as nonlinear feedback loops in discrete time. In doing so I will show how simple rearrangements of the blocks in the feedback loop can lead to new algorithms that exhibit different (and often advantageous) performance trade-offs. In particular, I will show how to make some existing algorithms "hot-pluggable" and robust to inter-agent communication errors while preserving their convergence properties. Throughout, I will focus on algorithms that work over directed (and not necessarily symmetric) communication graphs.
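To make the setting concrete, here is a minimal sketch of decentralized gradient descent (DGD), one classic algorithm of the kind discussed in the talk. The specific costs, graph, weights, and step size below are illustrative assumptions, not drawn from the talk itself: each agent holds a private quadratic cost f_i(x) = (x - a_i)^2, so the joint minimizer is the mean of the a_i, yet no agent ever shares its a_i.

```python
# Sketch of decentralized gradient descent (DGD) on private quadratic
# costs f_i(x) = (x - a_i)^2. The three-agent graph, mixing weights,
# and step size are hypothetical choices for illustration only.

def dgd(a, weights, alpha=0.05, iters=2000):
    """Each agent averages its estimate with its neighbors' estimates,
    then takes a gradient step on its own private cost."""
    n = len(a)
    x = [0.0] * n  # each agent's local estimate of the minimizer
    for _ in range(iters):
        # consensus step: mix estimates using row-stochastic weights
        mixed = [sum(weights[i][j] * x[j] for j in range(n)) for i in range(n)]
        # local gradient step: grad f_i(x) = 2 * (x - a_i)
        x = [mixed[i] - alpha * 2.0 * (mixed[i] - a[i]) for i in range(n)]
    return x

# three agents with private data a_i; doubly stochastic mixing weights
a = [1.0, 4.0, 7.0]
W = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
estimates = dgd(a, W)
# all agents settle near the global minimizer mean(a) = 4.0; a constant
# step size leaves a small steady-state disagreement between agents
```

With a constant step size, DGD converges only to a neighborhood of the optimum; the feedback-loop viewpoint in the talk is one way to see where such biases come from and how rearranged algorithms can remove them. Note also that the symmetric weight matrix here sidesteps the directed-graph case the talk emphasizes, where doubly stochastic weights are generally unavailable.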
Randy Freeman received a Ph.D. in Electrical Engineering from the University of California at Santa Barbara in 1995, after having received B.S. and M.S. degrees in EE from Cornell University and the University of Illinois at Urbana-Champaign, respectively. Since then he has been a faculty member at Northwestern University (Evanston, Illinois), where he is currently a Professor of Electrical and Computer Engineering. He received the National Science Foundation CAREER Award in 1997. His research interests include nonlinear system theory, nonlinear control, robust control, optimal control, and distributed control and estimation.