Optimal and efficient learning with random features

Presented by: 
Lorenzo Rosasco, Massachusetts Institute of Technology, Università degli Studi di Genova, Istituto Italiano di Tecnologia (IIT)
Date: 
Wednesday 17th January 2018 - 09:45 to 10:30
Venue: 
INI Seminar Room 1
Abstract: 
Random feature approaches correspond to one-hidden-layer neural networks with random hidden units, and can be seen as approximate kernel methods. We study the statistical and computational properties of random features within a ridge regression scheme. We prove for the first time that a number of random features much smaller than the number of data points suffices for optimal statistical error, with a corresponding huge computational gain. We further analyze faster rates under refined conditions and the potential benefit of random features chosen according to adaptive sampling schemes.
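To make the setup concrete, here is a minimal sketch of ridge regression on random Fourier features (the classical Rahimi–Recht feature map for the Gaussian kernel), the kind of scheme the abstract analyzes. The specific choices below (the kernel width gamma, the feature count M, the regularization lam, and the toy data) are illustrative assumptions, not details from the talk; the point is only that the linear system solved has size M x M rather than n x n, so taking M much smaller than n yields the computational gain described above.

```python
import numpy as np

def fit_rff_ridge(X, y, M, gamma, lam, rng):
    """Ridge regression on M random Fourier features approximating
    the Gaussian kernel k(x, x') = exp(-gamma * ||x - x'||^2)."""
    n, d = X.shape
    # Sample random frequencies and phases for the feature map
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, M))
    b = rng.uniform(0.0, 2.0 * np.pi, size=M)
    Z = np.sqrt(2.0 / M) * np.cos(X @ W + b)  # n x M feature matrix
    # Solve the M x M regularized normal equations
    # (Z^T Z + n*lam*I) w = Z^T y  instead of an n x n kernel system
    w = np.linalg.solve(Z.T @ Z + n * lam * np.eye(M), Z.T @ y)
    # Return a predictor that maps new inputs through the same features
    return lambda Xnew: np.sqrt(2.0 / M) * np.cos(Xnew @ W + b) @ w

# Toy usage: n = 1000 samples but only M = 100 random features (M << n)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=1000)
predict = fit_rff_ridge(X, y, M=100, gamma=5.0, lam=1e-3, rng=rng)
print(predict(X[:5]))
```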