Multi-agent learning: Implicit regularization and order-optimal gossip

Presented by: Patrick Rebeschini (University of Oxford)
Date: Thursday 14th June 2018 - 10:00 to 11:00
Venue: INI Seminar Room 2
Abstract: 
In distributed machine learning, data are stored and processed in multiple locations by different agents. Each agent is represented by a node in a graph, and communication is allowed between neighbours. In the decentralised setting typical of peer-to-peer networks, there is no central authority that can aggregate information from all the nodes. Instead, agents cooperate with their peers to learn models that perform better on new, unseen data. In this talk, we present the first results on the generalisation capabilities of distributed stochastic gradient descent methods. Using algorithmic stability, we derive upper bounds for the test error and provide a principled approach to implicit regularization, tuning the learning rate and the stopping time as a function of the graph topology. We also present a new Gossip protocol for the aggregation step in distributed methods that can yield order-optimal communication complexity. Based on non-reversible Markov chains, our protocol is local and does not require global routing, improving on existing methods. (Joint work with Dominic Richards)
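To illustrate the setting, the sketch below shows a generic decentralised SGD loop in Python: each agent holds local data, takes a local gradient step, and then averages its model with its graph neighbours through a doubly-stochastic gossip matrix W. This is only an illustrative assumption-laden sketch of the standard gossip-averaging template; it does not reproduce the talk's non-reversible gossip protocol or the topology-dependent choice of learning rate and stopping time, and all names and parameters are hypothetical.

```python
import numpy as np

def decentralised_sgd(X_local, y_local, W, lr=0.1, n_iters=50):
    """Generic decentralised SGD sketch (least-squares objective).

    X_local[i], y_local[i] : data held by agent i
    W                      : doubly-stochastic gossip matrix (n_agents x n_agents),
                             nonzero only between graph neighbours
    """
    n_agents = W.shape[0]
    d = X_local[0].shape[1]
    theta = np.zeros((n_agents, d))  # one model per agent

    for _ in range(n_iters):
        grads = np.zeros_like(theta)
        for i in range(n_agents):
            # Local least-squares gradient on agent i's own data only
            residual = X_local[i] @ theta[i] - y_local[i]
            grads[i] = X_local[i].T @ residual / len(y_local[i])
        # Local gradient step, then gossip averaging with neighbours
        theta = W @ (theta - lr * grads)

    return theta
```

In this template, the choice of W (hence the graph topology), the learning rate, and the number of iterations govern how quickly the agents' models agree and how well they generalise, which is the interplay the talk analyses via algorithmic stability.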



