We study distributed subgradient algorithms for solving convex optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only its own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update its information state. We assume a state-dependent communication model over this topology:
communication is Markovian with respect to the states of the agents and the probability with which the links are available depends on the states of the agents.
We first show that the multi-agent subgradient algorithm, when used with a constant stepsize, may cause the agent states to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a "disagreement metric" between the agent states. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that the agent states almost surely reach a consensus and converge to a common optimal solution of the global optimization problem, under different assumptions on the local constraint sets and the stepsize sequence.
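As a rough illustration of the kind of algorithm described above (not the authors' exact method or communication model), the following sketch runs a distributed projected subgradient iteration: each agent averages its neighbors' states with a fixed doubly stochastic mixing matrix, then takes a subgradient step on its local objective and projects onto its local constraint set. The local objectives, box constraints, ring topology, and diminishing stepsize are all hypothetical choices for this example; the abstract instead studies randomly varying, state-dependent links.

```python
import numpy as np

# Hypothetical setup: agent i minimizes f_i(x) = ||x - c_i||^2 subject to a
# common box constraint X_i = [lo, hi]^dim. The intersection of the X_i is
# then the box itself, and the global minimizer is clip(mean(c), lo, hi).
rng = np.random.default_rng(0)
n_agents, dim, n_iter = 4, 2, 2000

c = rng.normal(size=(n_agents, dim))      # centers of the local objectives
lo, hi = -0.5, 0.5                        # common box constraint

# Fixed doubly stochastic mixing matrix over a ring (a deterministic
# stand-in for the randomly varying topology in the abstract).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % n_agents] = 0.25
    W[i, (i - 1) % n_agents] = 0.25

x = rng.normal(size=(n_agents, dim))      # initial agent states
for k in range(1, n_iter + 1):
    alpha = 1.0 / k                       # diminishing stepsize
    v = W @ x                             # consensus (averaging) step
    grad = 2 * (v - c)                    # gradient of f_i at v_i
    x = np.clip(v - alpha * grad, lo, hi) # projected subgradient step

# Disagreement between agent states, and distance of their average from
# the constrained minimizer of the sum of local objectives.
x_star = np.clip(c.mean(axis=0), lo, hi)
disagreement = np.max(np.abs(x - x.mean(axis=0)))
err = np.max(np.abs(x.mean(axis=0) - x_star))
```

With a constant stepsize instead of `alpha = 1.0 / k`, the disagreement between agent states generally does not vanish, which is consistent with the divergence phenomenon the abstract highlights.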
This is joint work with Ilan Lobel and Diego Feijer.
Workshop V: Applications of Optimization in Science and Engineering