The convergence rate of Newton-Raphson consensus optimization for quadratic cost functions
Zanella, Filippo; Varagnolo, Damiano; Cenedese, Angelo; Pillonetto, Gianluigi; Schenato, Luca
2012
Abstract
We consider the convergence rates of two convex optimization strategies in the context of multi-agent systems, namely Newton-Raphson consensus optimization and a distributed gradient descent suitably derived from the former. To allow analytical derivations, the convergence analyses are performed under the simplifying assumption of quadratic local cost functions. In this framework we derive sufficient conditions that guarantee the convergence of the algorithms. From these conditions we then obtain closed-form expressions that can be used to tune the parameters so as to maximize the rate of convergence. Although these formulae are derived under the assumption of quadratic local cost functions, they can be used as rules of thumb for tuning the parameters of the algorithms in more general situations.
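To illustrate the setting the abstract describes, the following is a minimal, hypothetical sketch of a consensus-based distributed minimization of quadratic local costs. It is not the paper's algorithm (whose details are not given in this record): it assumes scalar local costs f_i(x) = ½·a_i·(x − b_i)², a doubly stochastic averaging matrix over a 4-node ring, and the fact that for quadratics the quantities a_i·b_i and a_i are constant, so average consensus on them recovers the global minimizer Σa_i·b_i / Σa_i. The step size `eps` plays the role of the tuning parameter whose choice governs the convergence rate.

```python
import numpy as np

# Hypothetical setup (not from the paper): 4 agents on a ring, each with a
# local quadratic cost f_i(x) = 0.5 * a_i * (x - b_i)**2.
a = np.array([1.0, 2.0, 0.5, 3.0])
b = np.array([4.0, -1.0, 2.0, 0.0])
x_star = (a * b).sum() / a.sum()   # global minimizer of sum_i f_i

# Doubly stochastic consensus matrix for the 4-node ring.
P = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

# For quadratic costs, f_i''(x) = a_i and f_i''(x)*x - f_i'(x) = a_i * b_i
# are constants, so the consensus variables just average these quantities.
y = a * b          # converges to the average of a_i * b_i
z = a.copy()       # converges to the average of a_i
x = np.zeros(4)    # each agent's local estimate of the minimizer
eps = 0.2          # step size: smaller is more stable but slower

for _ in range(200):
    y, z = P @ y, P @ z                 # one average-consensus round
    x = (1 - eps) * x + eps * (y / z)   # filtered Newton-like update

# All agents agree on the global minimizer.
assert np.allclose(x, x_star, atol=1e-6)
```

Because y/z converges to Σa_i·b_i / Σa_i at every node, each local estimate x_i is driven to the global minimizer; how fast depends on both the spectral gap of P and the choice of `eps`, which is the kind of trade-off the paper's closed-form tuning expressions address.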
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.