Distributed Alternating Direction Method of Multipliers for Multi-agent Optimization

Asuman Ozdaglar
Massachusetts Institute of Technology

We consider a network of agents solving a global optimization problem, where the objective function is the sum of privately known local objective functions of the agents and the decision variables are coupled via linear constraints. Recent literature has focused on special cases of this formulation and studied their distributed solution through subgradient-based methods with an $O(1/\sqrt{k})$ rate of convergence (where $k$ is the iteration number). In this talk, we present distributed Alternating Direction Method of Multipliers (ADMM) based methods for solving this problem. We study both synchronous and asynchronous implementations. We present convergence rate estimates, which show that these methods converge at the faster rate $O(1/k)$, and highlight the dependence on network structure.
This is joint work with Ermin Wei and partly with Shimrit Shtern.
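To illustrate the flavor of such methods, here is a minimal sketch of textbook consensus ADMM on a toy problem, not the specific algorithms of the talk: each agent $i$ holds a private quadratic $f_i(x) = \tfrac{1}{2}(x - a_i)^2$, and the agents must agree on a common scalar $x$ (the consensus constraint plays the role of the linear coupling). The data `a`, the penalty `rho`, and the iteration count are illustrative choices.

```python
def consensus_admm(a, rho=1.0, iters=200):
    """Toy consensus ADMM: minimize sum_i 0.5*(x - a_i)^2 over a common x."""
    n = len(a)
    z = 0.0            # global (consensus) variable
    u = [0.0] * n      # scaled dual variables, one per agent
    for _ in range(iters):
        # Local x-updates: each agent minimizes its own f_i plus the
        # quadratic penalty; closed form for a quadratic f_i.
        x = [(a_i + rho * (z - u_i)) / (1.0 + rho)
             for a_i, u_i in zip(a, u)]
        # Averaging step that enforces the consensus constraint.
        z = sum(x_i + u_i for x_i, u_i in zip(x, u)) / n
        # Dual updates penalizing each agent's disagreement with z.
        u = [u_i + x_i - z for u_i, x_i in zip(u, x)]
    return z

# The minimizer of the sum is the average of the a_i.
print(consensus_admm([1.0, 2.0, 6.0]))  # ≈ 3.0
```

Each agent's update uses only its private data and the shared variable $z$, which is what makes the scheme distributed; the synchronous and asynchronous variants in the talk differ in how and when these updates are coordinated.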
