Convergence rates for distributed stochastic optimization over random networks

Bibliographic Details
Title: Convergence rates for distributed stochastic optimization over random networks
Authors: Jakovetic, Dusan, Bajovic, Dragana, Sahu, Anit Kumar, Kar, Soummya
Publication Year: 2018
Collection: Mathematics
Subject Terms: Mathematics - Optimization and Control
Description: We establish the O($\frac{1}{k}$) convergence rate for distributed stochastic gradient methods that operate over strongly convex costs and random networks. The considered class of methods is standard: each node performs a weighted average of its own and its neighbors' solution estimates (consensus), and takes a negative step with respect to a noisy version of its local function's gradient (innovation). The underlying communication network is modeled through a sequence of temporally independent identically distributed (i.i.d.) Laplacian matrices that are connected on average, while the local gradient noises are also i.i.d. in time, have finite second moment, and possibly unbounded support. We show that, after a careful setting of the consensus and innovation potentials (weights), the distributed stochastic gradient method achieves an (order-optimal) O($\frac{1}{k}$) convergence rate in the mean square distance from the solution. This is the first order-optimal convergence rate result on distributed strongly convex stochastic optimization when the network is random and/or the gradient noises have unbounded support. Simulation examples confirm the theoretical findings.
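The consensus-plus-innovation iteration summarized in the abstract can be sketched numerically. The following is a minimal, hypothetical Python illustration (not the authors' code): it assumes simple quadratic local costs, a random communication graph drawn i.i.d. at every iteration, and step-size decays of roughly $1/k^{0.6}$ for the consensus weight and $1/k$ for the gradient step; all names, constants, and decay exponents are illustrative assumptions rather than the paper's exact setting.

```python
import numpy as np

# Hypothetical sketch of a consensus + innovation distributed SGD iteration:
# each node averages its estimate with its neighbors' estimates (consensus)
# and steps against a noisy gradient of its local cost (innovation).
# Local costs, noise level, and step-size constants are assumptions.

rng = np.random.default_rng(0)
n_nodes, dim, n_iters = 10, 3, 2000

# Illustrative strongly convex local costs f_i(x) = 0.5 * ||x - b_i||^2,
# so grad f_i(x) = x - b_i and the global optimum is the mean of the b_i.
b = rng.normal(size=(n_nodes, dim))
x_opt = b.mean(axis=0)

x = np.zeros((n_nodes, dim))           # row i is node i's current estimate
for k in range(1, n_iters + 1):
    # Random graph: each undirected edge is active with probability p, i.i.d. in time.
    p = 0.5
    A = (rng.random((n_nodes, n_nodes)) < p).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                        # symmetric adjacency, no self-loops
    L = np.diag(A.sum(axis=1)) - A     # graph Laplacian for this iteration

    beta = 1.0 / (n_nodes * k ** 0.6)  # consensus weight (normalized for stability; assumed decay)
    alpha = 1.0 / k                    # innovation (gradient) step size ~ 1/k
    noisy_grad = (x - b) + rng.normal(scale=0.5, size=x.shape)  # i.i.d. gradient noise

    # Consensus + innovation update: x <- (I - beta*L) x - alpha * noisy gradient
    x = x - beta * (L @ x) - alpha * noisy_grad

mse = np.mean(np.sum((x - x_opt) ** 2, axis=1))
print(f"Mean square distance from the solution after {n_iters} iterations: {mse:.4f}")
```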
Comment: Submitted to CDC 2018
Document Type: Working Paper
Access URL: http://arxiv.org/abs/1803.07836
Accession Number: edsarx.1803.07836
Database: arXiv