Distributed Optimization via the Alternating Direction Method of Multipliers (joint work with N. Parikh, E. Chu, B. Peleato, and J. Eckstein)

June 09, 2011, Elings Hall 1605

Stephen Boyd

Stanford University, Electrical Engineering

Abstract

We describe the alternating direction method of multipliers (ADMM), and we argue that it is ideal for distributed convex optimization, in which many processors or processing elements coordinate to solve very large problems, such as those arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas-Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l_1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a variety of statistical and machine learning problems, as well as various generic distributed problems, such as consensus and exchange.
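
As a concrete illustration of the algorithm discussed in the talk, the sketch below applies ADMM to the lasso, one of the l_1 problems alluded to above, splitting the smooth least-squares term from the l_1 penalty. This is a minimal sketch, not material from the talk; the problem data and parameter names (A, b, lam, rho) are illustrative assumptions.

    # Minimal ADMM sketch (illustrative, not from the talk) for the lasso:
    #   minimize (1/2)||A x - b||_2^2 + lam ||x||_1
    # using the splitting x - z = 0; A, b, lam, rho are assumed example names.
    import numpy as np

    def soft_threshold(v, k):
        # Proximal operator of k*||.||_1 (elementwise soft thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

    def lasso_admm(A, b, lam, rho=1.0, iters=200):
        m, n = A.shape
        x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
        AtA = A.T @ A + rho * np.eye(n)   # formed once, reused in every x-update
        Atb = A.T @ b
        for _ in range(iters):
            x = np.linalg.solve(AtA, Atb + rho * (z - u))  # x-update: ridge-type solve
            z = soft_threshold(x + u, lam / rho)           # z-update: prox of the l_1 term
            u = u + x - z                                  # scaled dual update
        return z

    # Usage on small synthetic data:
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 10))
    x_true = np.zeros(10); x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(30)
    print(lasso_admm(A, b, lam=0.1))

The three update steps mirror the general ADMM pattern: a local minimization over x, a local minimization over z, and a dual variable update; in the distributed and consensus settings mentioned in the abstract, the x-update splits across processors.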

Speaker's Bio

Stephen P. Boyd is the Samsung Professor of Engineering and Professor of Electrical Engineering in the Information Systems Laboratory at Stanford University. He also has a courtesy appointment in the Department of Management Science and Engineering, and is a member of the Institute for Computational and Mathematical Engineering. His current research focus is on convex optimization applications in control, signal processing, and circuit design.
