Scalable inference; statistical, algorithmic, computational aspects

Participation in INI programmes is by invitation only. Anyone wishing to apply to participate in the associated workshop(s) should use the relevant workshop application form.

Programme
3rd July 2017 to 28th July 2017

Organisers: Gareth Roberts (Warwick), Christophe Andrieu (Bristol), Paul Fearnhead (Lancaster), David Firth (Warwick), Chris Holmes (Oxford)

Programme Theme

The complexity and sheer size of modern data sets, of which increasingly demanding questions are asked, give rise to major challenges and opportunities for modern statistics. While likelihood-based methods still provide the gold standard for statistical methodology, the applicability of existing likelihood methods to the most demanding of modern problems is currently limited. Traditional methodologies for numerical optimisation of likelihoods, and for simulating from complicated posterior distributions, such as Markov chain Monte Carlo and sequential Monte Carlo algorithms, often scale poorly with data size and model complexity, and thus fail for the most complex of modern problems.
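
As a minimal illustration of this scaling issue, the sketch below assumes a unit-variance Gaussian model with a flat prior on its mean (the model, function names and settings are illustrative only, not part of the programme): a standard random-walk Metropolis sampler must re-evaluate the full-data log-likelihood at every proposal, so its cost grows with the product of chain length and data size.

```python
import numpy as np

def log_likelihood(theta, y):
    # Full-data Gaussian log-likelihood (unit variance, up to a constant):
    # a single evaluation touches every one of the n observations.
    return -0.5 * np.sum((y - theta) ** 2)

def random_walk_metropolis(y, n_iters=1000, step=0.1, seed=0):
    # Random-walk Metropolis with a flat prior on theta: each iteration
    # re-evaluates the full-data log-likelihood, so cost is O(n_iters * n).
    rng = np.random.default_rng(seed)
    theta = 0.0
    current_ll = log_likelihood(theta, y)          # O(n)
    samples = np.empty(n_iters)
    for i in range(n_iters):
        proposal = theta + step * rng.normal()
        proposal_ll = log_likelihood(proposal, y)  # O(n) again, every iteration
        if np.log(rng.uniform()) < proposal_ll - current_ll:
            theta, current_ll = proposal, proposal_ll
        samples[i] = theta
    return samples

y = np.random.default_rng(1).normal(loc=2.0, size=10**6)
draws = random_walk_metropolis(y)  # 1000 iterations, each a full pass over 10^6 points
```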

The area of computational statistics is currently developing extremely rapidly, motivated by the challenges of the recent big data revolution, and enriched by new ideas from machine learning, multi-processor computing, probability and applied mathematical analysis. Motivation for this development comes from across the physical, biological and social sciences, including physics, chemistry, astronomy, epidemiology, medicine, genetics, sociology and economics; indeed, it is hard to find problems not enriched by big data and the associated statistical challenges.

This programme will focus on methods associated with the likelihood, its variants and approximations, taking advantage of, and creating, new advances in statistical methodology. These advances have the potential to impact all aspects of science and industry that rely on probabilistic models for learning from observational or experimental data.

Intractable likelihood problems are defined loosely as ones where the repeated evaluation of the likelihood function (as required in standard algorithms for likelihood-based inference) is impossible or too computationally expensive to carry out. Scalable methods for carrying out statistical inference are loosely defined as methods whose computational cost and statistical validity scale well with both model complexity and data size.
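
As a rough illustration of the cost gap these definitions point to, the sketch below (again assuming a unit-variance Gaussian model; the function names are purely illustrative) contrasts an exact i.i.d. log-likelihood, which costs O(n) per evaluation, with an unbiased subsample-based estimate that costs O(m) for a subsample of size m much smaller than n. Whether such cheap estimates can be plugged into standard algorithms without sacrificing statistical validity is exactly the kind of question the definitions above raise.

```python
import numpy as np

def exact_loglik(theta, y):
    # Exact i.i.d. Gaussian log-likelihood (unit variance, up to a constant):
    # cost grows linearly with the number of observations n.
    return -0.5 * np.sum((y - theta) ** 2)

def subsampled_loglik(theta, y, m, rng):
    # Unbiased estimate of the same quantity from a uniform random subsample
    # of size m; each evaluation costs O(m) rather than O(n).
    idx = rng.choice(len(y), size=m, replace=False)
    return (len(y) / m) * (-0.5) * np.sum((y[idx] - theta) ** 2)

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, size=10**6)
print(exact_loglik(1.0, y))                  # touches all 10^6 observations
print(subsampled_loglik(1.0, y, 1000, rng))  # touches only 1,000 of them
```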

Understanding and developing scalable methods for intractable likelihood problems requires expertise across statistics, computer science, probability and numerical analysis. It is therefore imperative that the programme be broad, covering statistical, algorithmic and computational aspects of inference. The programme will cut across the traditional boundary between frequentist and Bayesian inference, and will incorporate both statistical and machine-learning approaches to inference. Central to its focus will be the close integration of algorithm optimisation with the opportunities offered by, and constraints imposed by, modern multi-core technologies such as GPUs.

The first week of the programme will feature a broadly focused workshop, with more application-specific activities taking place later.
