Timetable (UNQW02)

Surrogate models for UQ in complex systems

Monday 5th February 2018 to Friday 9th February 2018

Monday 5th February 2018
10:50 to 11:20 Registration & Morning Coffee
11:20 to 11:30 Welcome from David Abrahams (INI Director)
11:30 to 12:30 Catherine Powell (University of Manchester)
Adaptive Stochastic Galerkin Finite Element Approximation for Elliptic PDEs with Random Coefficients
Co-author: Adam Crowder (University of Manchester)

We consider a standard elliptic PDE model with uncertain coefficients. Such models are simple but well understood theoretically, and so serve as a canonical class of problems on which to compare different numerical schemes (computer models).

Approximations which take the form of polynomial chaos (PC) expansions have been widely used in applied mathematics and can be used as surrogate models in UQ studies. When the coefficients of the approximation are computed using a Galerkin method, we use the term ‘Stochastic Galerkin approximation’. In statistics, the term ‘intrusive PC approximation’ is also often used. In the Galerkin approach, the resulting PC approximation is optimal in that the energy norm of the error between the true model solution and the PC approximation is minimised. This talk will focus on how to build the approximation space (in a computer code) in a computationally efficient way while also guaranteeing accuracy.

In the stochastic Galerkin finite element (SGFEM) approach, an approximation is sought in a space which is defined through a chosen set of spatial finite element basis functions and a set of orthogonal polynomials in the parameters that define the uncertain PDE coefficients. When the number of parameters is too high, the dimension of this space becomes unmanageable. One remedy is to use ‘adaptivity’. First, we generate an approximation in a low-dimensional approximation space (which is cheap) and then use a computable a posteriori error estimator to decide whether the current approximation is accurate enough or not. If not, we enrich the approximation space, estimate the error again, and so on, until the final approximation is accurate enough. This allows us to design problem-specific polynomial approximations. We describe an error estimation procedure, outline the computational costs, and illustrate its use through numerical results. An improved multilevel implementation will be outlined in a poster given by Adam Crowder.
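The solve-estimate-enrich cycle just described is compact enough to sketch in code. The toy Python fragment below is illustrative only: the stand-in solver and error estimator simply read off an assumed coefficient decay, where a real SGFEM implementation would solve the Galerkin system and evaluate a rigorous a posteriori estimator.

    import numpy as np

    def solve(index_set):
        # Stand-in for the stochastic Galerkin solve on the current space:
        # one "coefficient" per multi-index, with an assumed 2^(-|idx|) decay.
        return {idx: 2.0 ** (-sum(idx)) for idx in index_set}

    def estimate_error(index_set):
        # Toy a posteriori estimator: probe neighbouring multi-indices and
        # use the size of their would-be coefficients as error indicators.
        neighbours = set()
        for idx in index_set:
            for d in range(len(idx)):
                nb = list(idx)
                nb[d] += 1
                neighbours.add(tuple(nb))
        neighbours -= set(index_set)
        return {nb: 2.0 ** (-sum(nb)) for nb in neighbours}

    index_set = {(0, 0)}          # start from a cheap, low-dimensional space
    tol = 1e-3
    while True:
        coeffs = solve(index_set)
        indicators = estimate_error(index_set)
        total = np.sqrt(sum(v ** 2 for v in indicators.values()))
        if total < tol:
            break                 # current approximation is accurate enough
        # enrich: add the neighbour with the largest error indicator
        index_set.add(max(indicators, key=indicators.get))
    print(len(index_set), "multi-indices retained")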
INI 1
12:30 to 13:30 Lunch @ Churchill College
13:30 to 14:30 Michael Goldstein (Durham University)
Emulation for model discrepancy
Careful assessment of model discrepancy is a crucial aspect of uncertainty quantification. We will discuss the different ways in which emulation may be used to support such assessment, illustrating with practical examples.
INI 1
14:30 to 15:30 Ralph Smith (North Carolina State University)
Active Subspace Techniques to Construct Surrogate Models for Complex Physical and Biological Models
For many complex physical and biological models, the computational cost of high-fidelity simulation codes precludes their direct use for Bayesian model calibration and uncertainty propagation. For example, the neutronics and nuclear thermal hydraulics codes considered here can take hours to days for a single run. Furthermore, the models often have tens to thousands of inputs, comprising parameters, initial conditions, or boundary conditions, many of which are unidentifiable in the sense that they cannot be uniquely determined using measured responses. In this presentation, we will discuss techniques to isolate influential inputs for subsequent surrogate model construction for Bayesian inference and uncertainty propagation. For input selection, we will discuss advantages and shortcomings of global sensitivity analysis to isolate influential inputs, and the use of active subspace construction to determine low-dimensional input manifolds. We will also discuss the manner in which Bayesian calibration on active subspaces can be used to quantify uncertainties in physical parameters. These techniques will be illustrated for models arising in nuclear power plant design, quantum-informed material characterization, and HIV modeling and treatment.
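As a concrete illustration of the active subspace construction mentioned above, the toy sketch below (the model f and all names are assumptions of the example, not the speaker's code) estimates the gradient covariance matrix C = E[grad f grad f^T] by Monte Carlo and extracts its dominant eigenvectors as the reduced input directions.

    import numpy as np

    rng = np.random.default_rng(0)
    m = 10                                  # nominal input dimension

    def grad_f(x):
        # Toy model f(x) = exp(a^T x): its gradient a*f(x) varies along the
        # single direction a, so f has a one-dimensional active subspace.
        a = np.linspace(1.0, 0.1, m)
        return a * np.exp(a @ x)

    X = rng.uniform(-1, 1, size=(500, m))   # input samples
    G = np.array([grad_f(x) for x in X])    # gradient samples
    C = G.T @ G / len(G)                    # Monte Carlo estimate of C
    eigval, eigvec = np.linalg.eigh(C)      # ascending eigenvalues
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]

    W1 = eigvec[:, :1]                      # active subspace basis (one direction)
    Y = X @ W1                              # reduced inputs for surrogate building
    print("leading eigenvalues:", eigval[:3])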
INI 1
15:30 to 16:00 Afternoon Tea
16:00 to 17:00 Christoph Schwab (ETH Zürich)
Domain Uncertainty Quantification
We address the numerical analysis of domain uncertainty in UQ for partial differential and integral equations. For small-amplitude shape variation, a first-order, kth-moment perturbation analysis and sparse tensor discretization produce approximate k-point correlations at near-optimal order: work and memory scale log-linearly with respect to N, the number of degrees of freedom for approximating one instance of the nominal (mean-field) problem [1,3]. For large domain variations, the notion of shape holomorphy of the solution is introduced. It implies (the 'usual') sparsity and dimension-independent convergence rates of gpc approximations (e.g., anisotropic stochastic collocation, least squares, compressed sensing, ...) of parametric domain-to-solution maps in forward UQ. This property holds for a broad class of smooth elliptic and parabolic boundary value problems. Shape holomorphy also implies sparsity of gpc expansions of certain posteriors in Bayesian inverse UQ [7] [->WS4]. We discuss consequences of gpc sparsity for surrogate forward models, to be used e.g. in optimization under domain uncertainty [8,9]. We also report on dimension-independent convergence rates of Smolyak and higher-order Quasi-Monte Carlo integration [5,6,7]. Examples include the usual (anisotropic) diffusion problems, Navier-Stokes [2] and time-harmonic Maxwell PDEs [4], and forward UQ for fractional PDEs.

Joint work with Jakob Zech (ETH), Albert Cohen (Univ. P. et M. Curie) and Carlos Jerez-Hanckes (PUC, Santiago, Chile). Work supported in part by the Swiss National Science Foundation.

References:
[1] A. Chernov and Ch. Schwab: First order k-th moment finite element analysis of nonlinear operator equations with stochastic data, Mathematics of Computation, 82 (2013), pp. 1859-1888.
[2] A. Cohen, Ch. Schwab and J. Zech: Shape holomorphy of the stationary Navier-Stokes equations, accepted (2018), SIAM J. Math. Analysis, SAM Report 2016-45.
[3] H. Harbrecht, R. Schneider and Ch. Schwab: Sparse second moment analysis for elliptic problems in stochastic domains, Numerische Mathematik, 109/3 (2008), pp. 385-414.
[4] C. Jerez-Hanckes, Ch. Schwab and J. Zech: Electromagnetic wave scattering by random surfaces: shape holomorphy, Math. Mod. Meth. Appl. Sci., 27/12 (2017), pp. 2229-2259.
[5] J. Dick, Q. T. Le Gia and Ch. Schwab: Higher order quasi-Monte Carlo integration for holomorphic, parametric operator equations, SIAM/ASA Journ. Uncertainty Quantification, 4/1 (2016), pp. 48-79.
[6] J. Zech and Ch. Schwab: Convergence rates of high dimensional Smolyak quadrature, in review, SAM Report 2017-27.
[7] J. Dick, R. N. Gantner, Q. T. Le Gia and Ch. Schwab: Multilevel higher-order quasi-Monte Carlo Bayesian estimation, Math. Mod. Meth. Appl. Sci., 27/5 (2017), pp. 953-995.
[8] P. Chen and Ch. Schwab: Sparse-grid, reduced-basis Bayesian inversion: nonaffine-parametric nonlinear equations, Journal of Computational Physics, 316 (2016), pp. 470-503.
[9] Ch. Schwab and J. Zech: Deep learning in high dimension, in review, SAM Report 2017-57.
INI 1
17:00 to 18:00 Welcome Wine Reception at INI
Tuesday 6th February 2018
09:00 to 10:00 Hoang Tran (Oak Ridge National Laboratory)
Recovery conditions of compressed sensing approach to uncertainty quantification
Co-author: Clayton Webster (UTK/ORNL)

This talk is concerned with the compressed sensing approach to the reconstruction of high-dimensional functions from a limited amount of data. In this approach, the uniform bounds of the underlying global polynomial bases have often been relied on for complexity analysis and algorithm development. We prove a new, improved recovery condition that does not use this uniform boundedness assumption and applies to multidimensional Legendre approximations. Specifically, our sample complexity is established using the unbounded envelope of all polynomials, and is thus independent of the polynomial subspace. Some consequent, simple criteria for choosing good random sample sets will also be discussed.

In the second part, I will discuss the recovery guarantees of nonconvex optimizations. These minimizations are generally closer to the l_0 penalty than the l_1 norm, so it is widely accepted (and has been demonstrated computationally in UQ) that they are able to enhance the sparsity and accuracy of the approximations. However, theory showing that nonconvex penalties are as good as or better than l_1 minimization in sparse reconstruction has not been available beyond a few specific cases. We aim to fill this gap by establishing new recovery guarantees through unified null space properties that encompass most of the nonconvex functionals currently proposed in the literature, verifying that they are truly superior to l_1.
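To make the setting concrete, here is a small self-contained sketch (the sparse target, sample size and solver choice are all illustrative assumptions) that recovers sparse Legendre coefficients from a few random samples by l_1-regularised least squares via iterative soft thresholding; whether such recovery succeeds is exactly what the recovery conditions discussed in the talk govern.

    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(1)
    N, n = 64, 20                        # 64 Legendre basis functions, 20 samples
    c_true = np.zeros(N)
    c_true[[2, 7, 15]] = [1.0, -0.5, 0.25]          # sparse coefficient vector

    y_pts = rng.uniform(-1, 1, n)        # random sample locations
    A = np.stack([legendre.legval(y_pts, np.eye(N)[j]) for j in range(N)], axis=1)
    b = A @ c_true                       # noiseless data

    # ISTA for min_c ||A c - b||^2 / 2 + lam * ||c||_1
    lam, step = 1e-3, 1.0 / np.linalg.norm(A, 2) ** 2
    c = np.zeros(N)
    for _ in range(5000):
        r = c - step * (A.T @ (A @ c - b))
        c = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)
    print("recovery error:", np.linalg.norm(c - c_true))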
INI 1
10:00 to 11:00 Maurizio Filippone (EURECOM)
Random Feature Expansions for Deep Gaussian Processes
Drawing meaningful conclusions on the way complex real-life phenomena work and being able to predict the behavior of systems of interest require developing accurate and highly interpretable mathematical models whose parameters need to be estimated from observations. In modern applications, however, we are often challenged with the lack of such models, and even when these are available they are too computationally demanding to be suitable for standard parameter optimization/inference methods. While probabilistic models based on Deep Gaussian Processes (DGPs) offer attractive tools to tackle these challenges in a principled way and to allow for a sound quantification of uncertainty, carrying out inference for these models poses huge computational challenges that arguably hinder their wide adoption. In this talk, I will present our contribution to the development of practical and scalable inference for DGPs, which can exploit distributed and GPU computing. In particular, I will introduce a formulation of DGPs based on random features that we infer using stochastic variational inference. Through a series of experiments, I will illustrate how our proposal enables scalable deep probabilistic nonparametric modeling and significantly advances the state-of-the-art on inference methods for DGPs.
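The core construction can be sketched in a few lines: each GP layer is replaced by trigonometric random features (in the Rahimi-Recht sense), so a draw from the composed prior becomes a cascade of feature maps and Gaussian weights. The sketch below shows a prior sample only; all sizes and names are illustrative, and the talk's contribution lies in inferring the layer weights with stochastic variational inference.

    import numpy as np

    rng = np.random.default_rng(2)

    def rff(X, D, lengthscale):
        # Random Fourier features approximating an RBF-kernel GP layer
        d = X.shape[1]
        W = rng.normal(0.0, 1.0 / lengthscale, size=(d, D))
        b = rng.uniform(0.0, 2 * np.pi, D)
        return np.sqrt(2.0 / D) * np.cos(X @ W + b)

    def dgp_prior_sample(X, D=200, widths=(3, 1)):
        # One prior draw from a two-layer deep GP approximated by random
        # features; inference would treat the layer weights as unknowns.
        H = X
        for width in widths:
            Phi = rff(H, D, lengthscale=1.0)
            weights = rng.normal(size=(D, width))   # Gaussian layer weights
            H = Phi @ weights
        return H

    X = np.linspace(-3, 3, 100)[:, None]
    print(dgp_prior_sample(X).shape)   # (100, 1): one deep-GP sample at the inputs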
INI 1
11:00 to 11:30 Morning Coffee
11:30 to 12:30 Lorenzo Tamellini (Università degli Studi di Pavia)
Multi-Index Stochastic Collocation (MISC) for Elliptic PDEs with random data
Co-authors: Joakim Beck (KAUST), Abdul-Lateef Haji-Ali (Oxford University), Fabio Nobile (EPFL), Raul Tempone (KAUST)

In this talk we describe the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of an elliptic PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods available in the literature. We provide a complexity analysis for both a finite and an infinite number of random variables, showing that in the optimal case the convergence rate of MISC is dictated only by the convergence of the deterministic solver applied to a one-dimensional problem. We show the effectiveness of MISC with some computational tests, and in particular we discuss how MISC can be efficiently combined with an isogeometric PDE solver.
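The mixed differences at the heart of MISC are easy to state in code. In the toy sketch below (the quantity Q and its error decay are invented for illustration), Q(alpha, beta) stands for "statistic of the PDE solution at spatial level alpha and stochastic level beta", and the estimator telescopes first-order mixed differences over a downward-closed index set; choosing that set optimally is the optimization step described in the talk.

    import numpy as np
    from itertools import product

    def Q(alpha, beta):
        # Toy discretized quantity converging to 1: the two terms mimic
        # the spatial and stochastic discretization errors.
        return 1.0 - 2.0 ** (-2.0 * alpha) - 3.0 ** (-beta)

    def mixed_diff(alpha, beta):
        # First-order mixed difference of Q in both level parameters,
        # with the convention Q = 0 for negative levels.
        total = 0.0
        for i, j in product((0, 1), repeat=2):
            if alpha - i >= 0 and beta - j >= 0:
                total += (-1) ** (i + j) * Q(alpha - i, beta - j)
        return total

    L = 6   # simplex index set of level L; MISC would optimize this choice
    I = [(a, b) for a in range(L + 1) for b in range(L + 1) if a + b <= L]
    estimate = sum(mixed_diff(a, b) for a, b in I)
    print("combination estimate:", estimate, "full tensor:", Q(L, L))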
INI 1
12:30 to 13:30 Lunch @ Churchill College
13:30 to 14:30 John Paul Gosling (University of Leeds)
Modelling discontinuities in simulator output using Voronoi tessellations
Co-authors: Chris Pope (University of Leeds), Jill Johnson (University of Leeds), Stuart Barber (University of Leeds), Paul Blackwell (University of Sheffield)

Computationally expensive, complex computer programs are often used to model and predict real-world phenomena. The standard Gaussian process model has a drawback in that the computer code output is assumed to be homogeneous over the input space, yet computer codes can behave very differently in various regions of the input space. Here, we introduce a piecewise Gaussian process model to deal with this problem, in which the input space is divided into separate regions using Voronoi tessellations (also known as Dirichlet tessellations, Thiessen polygons or the dual of the Delaunay triangulation). We demonstrate our method’s utility with an application in climate science.
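A minimal version of the piecewise construction, with the tessellation fixed rather than inferred (the talk treats the tessellation itself as part of the model), might look as follows; the test function, the two Voronoi centres and the use of scikit-learn's GP are assumptions of this example.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(3)
    f = lambda x: np.where(x[:, 0] < 0.5, np.sin(8 * x[:, 0]), 2.0 + x[:, 0])

    X = rng.uniform(0, 1, size=(80, 1))                 # design points
    y = f(X)
    centres = np.array([[0.25], [0.75]])                # Voronoi generators
    cell = np.argmin(np.linalg.norm(X[:, None] - centres[None], axis=2), axis=1)

    # Fit an independent GP inside each Voronoi cell
    models = [GaussianProcessRegressor().fit(X[cell == k], y[cell == k])
              for k in range(len(centres))]

    Xs = np.linspace(0, 1, 200)[:, None]
    ks = np.argmin(np.linalg.norm(Xs[:, None] - centres[None], axis=2), axis=1)
    pred = np.concatenate([models[k].predict(Xs[ks == k])
                           for k in range(len(centres))])
    print(pred.shape)   # piecewise predictions; a discontinuity is allowed at 0.5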
INI 1
14:30 to 15:30 Guannan Zhang (Oak Ridge National Laboratory)
A domain-decomposition-based model reduction method for convection-diffusion equations with random coefficients
We focus on linear steady-state convection-diffusion equations with random-field coefficients. Of particular interest are two types of partial differential equations (PDEs): diffusion equations with random diffusivities, and convection-dominated transport equations with random velocity fields. For each of them, we investigate two types of random fields: colored noise and discrete white noise. We develop a new domain-decomposition-based model reduction (DDMR) method, which can exploit the low-dimensional structure of local solutions from various perspectives. We divide the physical domain into a set of non-overlapping sub-domains, generate local random fields and establish the correlation structure among local fields. We generate a set of reduced bases for the PDE solution within sub-domains and on interfaces, then define reduced local stiffness matrices by multiplying each reduced basis by the corresponding blocks of the local stiffness matrix. After that, we establish sparse approximations of the entries of the reduced local stiffness matrices in low-dimensional subspaces, which completes the offline procedure. In the online phase, when a new realization of the global random field is generated, we map the global random variables to local ones, evaluate the sparse approximations of the reduced local stiffness matrices, assemble the reduced global Schur complement matrix and solve for the coefficients of the reduced bases on the interfaces, and then assemble the reduced local Schur complement matrices and solve for the coefficients of the reduced bases in the interiors of the sub-domains.

The advantages of our method lie in three aspects. First, the DDMR method has an offline-online decomposition: the online computational cost is independent of the finite element mesh size. Second, it can handle PDEs with non-affine high-dimensional random coefficients; the challenge posed by non-affine coefficients is resolved by approximating the entries of the reduced stiffness matrices, while the high dimensionality is handled by the DD strategy. Third, it avoids building polynomial sparse approximations to local PDE solutions, which is useful for the convection-dominated PDE, whose solution has a sharp transition caused by the boundary condition. We demonstrate the performance of the method on the diffusion equation and the convection-dominated equation with colored noises and discrete white noises.
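To fix ideas, here is a much-simplified 1-D illustration of the domain-decomposition solve that DDMR compresses: assemble local stiffness matrices for two sub-domains, eliminate the interiors via Schur complements onto the single interface node, solve there, then recover the interiors. The reduced bases and the sparse approximation of the reduced matrices, which are the substance of DDMR, are omitted, and all sizes and coefficients are toy choices.

    import numpy as np

    n = 21                                  # nodes per sub-domain (1-D Poisson)
    h = 0.5 / (n - 1)                       # two sub-domains of length 0.5 each

    def local_stiffness(kappa):
        # P1 finite element stiffness of -(kappa u')' on one sub-domain
        main = 2.0 * kappa / h * np.ones(n)
        main[0] = main[-1] = kappa / h
        off = -kappa / h * np.ones(n - 1)
        return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    def load():
        f = np.full(n, h)                   # load vector for f(x) = 1
        f[0] = f[-1] = h / 2
        return f

    kappas = np.exp(np.random.default_rng(10).normal(size=2))   # lognormal draws
    subs = ((local_stiffness(kappas[0]), load(), n - 1),  # interface = last node
            (local_stiffness(kappas[1]), load(), 0))      # interface = first node

    # Eliminate interiors onto the shared interface degree of freedom
    S, g, interiors = 0.0, 0.0, []
    for A, f, idx in subs:
        keep = [i for i in range(1, n - 1) if i != idx]   # Dirichlet outer ends
        Aii, Aig = A[np.ix_(keep, keep)], A[keep, idx]
        S += A[idx, idx] - Aig @ np.linalg.solve(Aii, Aig)   # Schur complement
        g += f[idx] - Aig @ np.linalg.solve(Aii, f[keep])
        interiors.append((Aii, Aig, f, keep))

    u_interface = g / S                                   # interface solve
    for Aii, Aig, f, keep in interiors:
        u_int = np.linalg.solve(Aii, f[keep] - Aig * u_interface)
        print("sub-domain max:", u_int.max())
    print("interface value:", u_interface)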
INI 1
15:30 to 16:00 Afternoon Tea
16:00 to 17:00 Poster Session
Wednesday 7th February 2018
09:00 to 10:00 Martin Eigel (Weierstraß-Institut für Angewandte Analysis und Stochastik)
Aspects of adaptive Galerkin FE for stochastic direct and inverse problems
Co-authors: Max Pfeffer (MPI MIS Leipzig), Manuel Marschall (WIAS Berlin), Reinhold Schneider (TU Berlin)

The Stochastic Galerkin Finite Element Method (SGFEM) is a common approach to numerically solve random PDEs with the aim to obtain a functional representation of the stochastic solution. As with any spectral method, the curse of dimensionality renders the approach challenging when the randomness depends on a large or countable infinite set of parameters. This makes function space adaptation and model reduction strategies a necessity. We review adaptive SGFEM based on reliable a posteriori error estimators for affine and non-affine parametric representations. Based on this, an adaptive explicit sampling-free Bayesian inversion in hierarchical tensor formats can be derived. As an outlook onto current research, a statistical learning viewpoint is presented, which connects concepts of UQ and machine learning from a Variational Monte Carlo perspective.
INI 1
10:00 to 11:00 Elaine Spiller (Marquette University)
Emulators for forecasting and UQ of natural hazards
Geophysical hazards – landslides, tsunamis, volcanic avalanches, etc. – which lead to catastrophic inundation are rare yet devastating events for surrounding communities. The rarity of these events poses two significant challenges. First, there are limited data to inform aleatoric scenario models: how frequent, how big, where? Second, such hazards often follow heavy-tailed distributions, resulting in a significant probability that a larger-than-recorded catastrophe might occur. To overcome this second challenge, we must rely on physical models of these hazards to “probe” the tail for these catastrophic events. We will present an emulator-based strategy that allows great speed-up of the Monte Carlo simulations used to create probabilistic hazard forecast maps. This approach offers the flexibility to explore both the impact of epistemic uncertainties on hazard forecasts and that of non-stationary scenario modeling on short-term forecasts.
Collaborators: Jim Berger (Duke), Eliza Calder (Edinburgh), Abani Patra (Buffalo), Bruce Pitman (Buffalo), Regis Rutarindwa (Marquette), Robert Wolpert (Duke)
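The emulator-accelerated Monte Carlo loop behind such hazard maps fits in a few lines. In the sketch below everything is a placeholder (the "emulator" is an invented closed-form stand-in for a fitted GP emulator of the flow code, and the scenario distributions are illustrative), but the structure is the one described above: sample scenarios, predict inundation cheaply, report exceedance probabilities.

    import numpy as np

    rng = np.random.default_rng(4)

    def emulator_depth(volume, angle, sites):
        # Stand-in for a GP emulator of the geophysical flow simulator:
        # predicted flow depth at each site for one scenario.
        return volume * np.exp(-3.0 * sites) * (1.0 + 0.1 * angle)

    sites = np.linspace(0.0, 1.0, 50)          # locations along a map transect
    volumes = rng.pareto(1.5, size=100_000)    # heavy-tailed scenario sizes
    angles = rng.normal(0.0, 1.0, size=100_000)

    depths = emulator_depth(volumes[:, None], angles[:, None], sites[None, :])
    p_exceed = (depths > 1.0).mean(axis=0)     # P(depth > threshold) per site
    print(p_exceed[:5])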
INI 1
11:00 to 11:30 Morning Coffee
11:30 to 12:30 Panel comparisons: Challenor, Ginsbourger, Nobile, Teckentrup and Beck
INI 1
12:30 to 13:30 Lunch @ Churchill College
13:30 to 17:00 Free Afternoon
19:30 to 22:00 Formal Dinner at Trinity College
Thursday 8th February 2018
09:00 to 10:00 Ben Adcock (Simon Fraser University)
Polynomial approximation of high-dimensional functions on irregular domains
Co-author: Daan Huybrechs (KU Leuven)

Smooth, multivariate functions defined on tensor domains can be approximated using orthonormal bases formed as tensor products of one-dimensional orthogonal polynomials. On the other hand, constructing orthogonal polynomials in irregular domains is difficult and computationally intensive. Yet irregular domains arise in many applications, including uncertainty quantification, model-order reduction, optimal control and numerical PDEs. In this talk I will introduce a framework for approximating smooth, multivariate functions on irregular domains, known as polynomial frame approximation. Importantly, this approach corresponds to approximation in a frame, rather than a basis, a fact which leads to several key differences, both theoretical and numerical in nature. However, this approach requires no orthogonalization or parametrization of the domain boundary, thus making it suitable for very general domains, including a priori unknown domains. I will discuss theoretical results for the approximation error, stability and sample complexity of this approach, and show its suitability for high-dimensional approximation through independence (or weak dependence) of the guarantees on the ambient dimension d. I will also present several numerical results, and highlight some open problems and challenges.
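The computational kernel of polynomial frame approximation is a regularised least squares problem: evaluate a tensor-product Legendre basis of a bounding box at samples drawn only from the irregular domain, then solve by truncated SVD, the truncation supplying the regularisation that the redundancy of the frame makes necessary. A small two-dimensional sketch, with the unit disc, the target function and the tolerances chosen purely for illustration:

    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(5)
    deg = 10
    in_domain = lambda x: x[:, 0] ** 2 + x[:, 1] ** 2 <= 1.0   # unit disc

    # Oversampled random points in the bounding box, kept if inside the disc
    P = rng.uniform(-1, 1, size=(4000, 2))
    P = P[in_domain(P)]
    f = np.exp(P[:, 0] * P[:, 1])                              # target function

    def tensor_legendre(P, deg):
        cols = []
        for i in range(deg + 1):
            for j in range(deg + 1):
                ci, cj = np.eye(deg + 1)[i], np.eye(deg + 1)[j]
                cols.append(legendre.legval(P[:, 0], ci)
                            * legendre.legval(P[:, 1], cj))
        return np.stack(cols, axis=1)

    A = tensor_legendre(P, deg)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > 1e-8 * s[0]                                     # SVD truncation
    coef = Vt.T[:, keep] @ ((U[:, keep].T @ f) / s[keep])
    print("residual:", np.linalg.norm(A @ coef - f) / np.linalg.norm(f))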
INI 1
10:00 to 11:00 Christine Shoemaker (National University of Singapore)
Deterministic RBF Surrogate Methods for Uncertainty Quantification, Global Optimization and Parallel HPC Applications
Co-author: Antoine Espinet (Cornell University)

This talk will describe general-purpose algorithms for global optimization. These algorithms can be used to estimate model parameters to fit complex simulation models to data, to select among alternative options for design or management, or to quantify model uncertainty. In general, the numerical results indicate these algorithms do very well in comparison to alternatives, including Gaussian-process-based approaches. Prof. Shoemaker’s group has developed the open-source (free) PySOT optimization software that is available online (18,000 downloads). The algorithms can be run in serial or parallel.

The focus of the talk will be on SOARS, an uncertainty quantification method that uses optimization-based sampling to build a surrogate likelihood function, followed by additional sampling. The algorithm builds a surrogate approximation of the likelihood function based on simulations done during the optimization search. Then MCMC is performed by evaluating the surrogate likelihood function rather than the original expensive-to-evaluate function. Numerical results indicate the SOARS algorithm is very accurate when compared to the posterior densities computed using the expensive exact likelihood function. I will also discuss an application to a model of the underground movement of a plume of geologically sequestered carbon dioxide. The uncertainty in the parameter values obtained from the MCMC analysis on the surrogate likelihood function can be used to assess alternative strategies for identifying a cost-effective plan that will most efficiently give a reliable forecast of the underground carbon dioxide plume. This includes joint work with David Ruppert, Antoine Espinet, Nikolay Bliznyuk, and Yilun Wang.
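A stripped-down version of the SOARS pattern (surrogate likelihood from points visited during a search, then MCMC on the surrogate) can be sketched as follows; the quadratic "expensive" log-likelihood, the random probe points standing in for an optimization trace, and the SciPy RBF interpolant are all assumptions of this example, not the PySOT implementation.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(6)

    def expensive_loglik(theta):
        # Stand-in for a log-likelihood requiring a costly simulation run
        return -0.5 * np.sum((theta - 0.3) ** 2) / 0.05

    # Points an optimization search might have visited (here: random probes)
    T = rng.uniform(-1, 1, size=(200, 2))
    L = np.array([expensive_loglik(t) for t in T])
    surrogate = RBFInterpolator(T, L)            # surrogate log-likelihood

    # Random-walk Metropolis evaluating only the cheap surrogate
    theta, ll = np.zeros(2), surrogate(np.zeros((1, 2)))[0]
    chain = []
    for _ in range(5000):
        prop = theta + 0.1 * rng.normal(size=2)
        ll_prop = surrogate(prop[None])[0]
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain.append(theta)
    print(np.mean(chain, axis=0))                # should be near (0.3, 0.3)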

INI 1
11:00 to 11:30 Morning Coffee
11:30 to 12:30 Aretha Teckentrup (University of Edinburgh)
Surrogate models in Bayesian Inverse Problems
Co-authors: Andrew Stuart (Caltech), Han Cheng Lie and Tim Sullivan (Free University Berlin)

We are interested in the inverse problem of estimating unknown parameters in a mathematical model from observed data. We follow the Bayesian approach, in which the solution to the inverse problem is the probability distribution of the unknown parameters conditioned on the observed data, the so-called posterior distribution. We are particularly interested in the case where the mathematical model is non-linear and expensive to simulate, for example given by a partial differential equation. We consider the use of surrogate models to approximate the Bayesian posterior distribution. We present a general framework for the analysis of the error introduced in the posterior distribution, and discuss particular examples of surrogate models such as Gaussian process emulators and randomised misfit approaches.
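The following one-dimensional toy (forward maps, datum and noise level all invented) shows the object such an error analysis controls: swap a surrogate for the exact forward map inside the likelihood and measure the induced perturbation of the posterior, here in Hellinger distance.

    import numpy as np

    theta = np.linspace(-3, 3, 2001)
    G = lambda t: np.sin(t) + 0.1 * t ** 3       # "expensive" forward map
    G_hat = lambda t: np.sin(t) + 0.1 * t ** 3 + 0.02 * np.cos(5 * t)  # surrogate

    y, sigma = 0.5, 0.2                          # observed datum, noise level
    prior = np.exp(-0.5 * theta ** 2)            # standard normal prior (unnormalised)

    def posterior(forward):
        dens = prior * np.exp(-0.5 * ((y - forward(theta)) / sigma) ** 2)
        return dens / np.trapz(dens, theta)

    p, p_hat = posterior(G), posterior(G_hat)
    hell = np.sqrt(0.5 * np.trapz((np.sqrt(p) - np.sqrt(p_hat)) ** 2, theta))
    print("Hellinger distance between posteriors:", hell)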
INI 1
12:30 to 13:30 Lunch @ Churchill College
13:30 to 14:30 David Ginsbourger (Universität Bern)
Positive definite kernels for deterministic and stochastic approximations of (invariant) functions
INI 1
14:30 to 15:30 Raul Fidel Tempone (King Abdullah University of Science and Technology (KAUST))
Uncertainty Quantification with Multi-Level and Multi-Index methods
We start by recalling the Monte Carlo and Multi-level Monte Carlo (MLMC) methods for computing statistics of the solution of a partial differential equation with random data. Then, we present the Multi-Index Monte Carlo (MIMC) and Multi-Index Stochastic Collocation (MISC) methods. MIMC is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the MLMC method first described by Heinrich and Giles. Instead of using first-order differences as in MLMC, MIMC uses mixed differences to reduce the variance of the hierarchical differences dramatically, thus yielding improved convergence rates. MISC is a deterministic combination technique that also uses mixed differences to achieve better complexity than MIMC, provided enough regularity. During the presentation, we will showcase the behavior of the numerical methods in applications, some of them arising in the context of regression-based surrogates and optimal experimental design.

Coauthors: J. Beck, L. Espath (KAUST), A.-L. Haji-Ali (Oxford), Q. Long (UT), F. Nobile (EPFL), M. Scavino (UdelaR), L. Tamellini (IMATI), S. Wolfers (KAUST)

Webpages:
https://stochastic_numerics.kaust.edu.sa
https://sri-uq.kaust.edu.sa
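For orientation, here is the plain MLMC estimator that MIMC and MISC generalize, written for a toy scalar quantity with an assumed 2^(-2l) discretization error; the telescoping sum puts most samples on cheap coarse levels and couples consecutive levels through common random inputs.

    import numpy as np

    rng = np.random.default_rng(7)

    def Q(level, xi):
        # Toy level-l approximation of a quantity of interest driven by a
        # random input xi; discretization error decays like 2^(-2*level).
        return np.sin(xi) + 2.0 ** (-2 * level) * np.cos(xi)

    L = 5
    N = [100_000 // 4 ** l + 10 for l in range(L + 1)]  # geometric sample decay
    estimate = 0.0
    for l in range(L + 1):
        xi = rng.normal(size=N[l])          # same xi couples the two levels
        diff = Q(l, xi) - (Q(l - 1, xi) if l > 0 else 0.0)
        estimate += diff.mean()
    print("MLMC estimate:", estimate)       # approximates E[sin(xi)] = 0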
INI 1
15:30 to 16:00 Afternoon Tea
16:00 to 17:00 Maria Adamou (University of Southampton)
Bayesian optimal design for Gaussian process models
Co-author: Dave Woods (University of Southampton)

Data collected from correlated processes arise in many diverse application areas including both computer and physical experiments, and studies in environmental science. Often, such data are used for prediction and optimisation of the process under study. For example, we may wish to construct an emulator of a computationally expensive computer model, or simulator, and then use this emulator to find settings of the controllable variables that maximise the predicted response. The design of the experiment from which the data are collected may strongly influence the quality of the model fit and hence the precision and accuracy of subsequent predictions and decisions. We consider Gaussian process models that are typically defined by a correlation structure that may depend upon unknown parameters. This parametric uncertainty may affect the choice of design points, and ideally should be taken into account when choosing a design. We consider decision-theoretic Bayesian design for Gaussian process models, which is usually computationally challenging as it requires the optimisation of an analytically intractable expected loss function over a high-dimensional design space. We use a new approximation to the expected loss to find decision-theoretic optimal designs. The resulting designs are illustrated through a number of simple examples.
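A brute-force rendition of the decision-theoretic calculation, with an invented loss (integrated GP prediction variance), a uniform prior on an unknown correlation lengthscale, and random candidate designs, illustrates why approximations of the expected loss matter: even this crude version costs a Monte Carlo average per candidate design.

    import numpy as np

    rng = np.random.default_rng(8)
    test = np.linspace(0, 1, 101)[:, None]       # prediction grid
    ells = rng.uniform(0.05, 0.5, size=50)       # prior draws, shared by all designs

    def k(A, B, ell):
        return np.exp(-0.5 * ((A - B.T) / ell) ** 2)

    def expected_loss(design):
        # Expected loss = prior average of the integrated prediction variance
        losses = []
        for ell in ells:
            Kdd = k(design, design, ell) + 1e-8 * np.eye(len(design))
            Ktd = k(test, design, ell)
            var = 1.0 - np.sum(Ktd @ np.linalg.inv(Kdd) * Ktd, axis=1)
            losses.append(var.mean())
        return np.mean(losses)

    candidates = [rng.uniform(0, 1, size=(5, 1)) for _ in range(200)]
    best = min(candidates, key=expected_loss)
    print("chosen design points:", np.sort(best.ravel()))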
INI 1
Friday 9th February 2018
09:00 to 10:00 Olivier Roustant (Mines Saint-Étienne)
Group covariance functions for Gaussian process metamodels with categorical inputs
Co-authors: E. Padonou (Mines Saint-Étienne), Y. Deville (AlpeStat), A. Clément (CEA), G. Perrin (CEA), J. Giorla (CEA) and H. Wynn (LSE).

Gaussian processes (GPs) are widely used as metamodels for emulating time-consuming computer codes. We focus on problems involving categorical inputs, with a potentially large number of levels (typically several tens), partitioned into groups of various sizes. Parsimonious group covariance functions can then be defined by block covariance matrices with constant correlations between pairs of blocks and within blocks.

In this talk, we first present a formulation of GP models with categorical inputs, which synthesises existing ones and extends the usual homoscedastic and tensor-product frameworks. Then, we give a parameterization of the block covariance matrix described above, based on a hierarchical Gaussian model. The same model can be used when the constant-correlation assumption within blocks is relaxed, giving a flexible parametric family of valid covariance matrices with constant correlations between pairs of blocks.
We illustrate with an application in nuclear engineering, where one of the categorical inputs is the atomic number in Mendeleev's periodic table and has more than 90 levels.
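The block structure is simple to assemble and check numerically. The sketch below (group sizes and correlation values invented for illustration) builds a correlation matrix with one constant correlation inside each block and one per pair of blocks, then verifies positive definiteness:

    import numpy as np

    groups = [3, 2, 4]                      # group sizes (9 levels, 3 groups)
    within = [0.7, 0.8, 0.6]                # constant correlation inside each block
    between = np.array([[1.0, 0.2, 0.1],
                        [0.2, 1.0, 0.3],
                        [0.1, 0.3, 1.0]])   # constant correlation between blocks

    n = sum(groups)
    C = np.zeros((n, n))
    starts = np.cumsum([0] + groups)
    for g, sg in enumerate(groups):
        for h, sh in enumerate(groups):
            val = within[g] if g == h else between[g, h]
            C[starts[g]:starts[g] + sg, starts[h]:starts[h] + sh] = val
    np.fill_diagonal(C, 1.0)

    print("valid covariance:", np.all(np.linalg.eigvalsh(C) > 0))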
INI 1
10:00 to 11:00 Daniel Williamson (University of Exeter)
Nonstationary Gaussian process emulators with covariance mixtures
Routine diagnostic checking of stationary Gaussian processes fitted to the output of complex computer codes often reveals nonstationary behaviour. There have been a number of approaches, both traditional and more recent, to modelling or accounting for this nonstationarity via the fitted process. These have included the fitting of complex mean functions to attempt to leave a stationary residual process (an idea that is often very difficult to get right in practice), using regression trees or other techniques to partition the input space into regions where different stationary processes are fitted (leading to arbitrary discontinuities in the modelling of the overall process), and other approaches which can be considered to live in one of these camps. In this work we allow the fitted process to be continuous by modelling the covariance kernel as a finite mixture of stationary covariance kernels and allowing the mixture weights to vary appropriately across parameter space. We introduce our method and compare its performance with the leading approaches in the literature for a variety of standard test functions and the cloud parameterisation of the French climate model. This is work led by my final-year PhD student, Victoria Volodina.
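A minimal sketch of a covariance mixture of the kind described (the weight functions, the two lengthscales and the softmax parameterisation are illustrative choices): k(x, x') = sum_i w_i(x) w_i(x') k_i(x - x') is a valid covariance whenever each k_i is, since each summand is a positive semi-definite kernel rescaled by the weights.

    import numpy as np

    def k_rbf(r, ell):
        return np.exp(-0.5 * (r / ell) ** 2)

    def w(x):
        # Smooth mixture weights across the input space (softmax of two scores)
        s = np.stack([-x, x], axis=-1)
        e = np.exp(s - s.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def k_mix(x, xp, ells=(0.1, 1.0)):
        r = x[:, None] - xp[None, :]
        W, Wp = w(x), w(xp)
        return sum(W[:, None, i] * Wp[None, :, i] * k_rbf(r, ells[i])
                   for i in range(len(ells)))

    x = np.linspace(-2, 2, 200)
    K = k_mix(x, x) + 1e-10 * np.eye(200)
    print("PSD:", np.all(np.linalg.eigvalsh(K) > -1e-8))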
INI 1
11:00 to 11:30 Morning Coffee
11:30 to 12:30 Oliver Ernst (Technische Universität Chemnitz)
High-Dimensional Collocation for Lognormal Diffusion Problems
Co-authors: Björn Sprungk (Universität Mannheim), Lorenzo Tamellini (IMATI-CNR Pavia)

Many UQ models consist of random differential equations in which one or more data components are uncertain and modeled as random variables. When the latter take values in a separable function space, their representation typically requires a large or countably infinite number of random coordinates. Numerical approximation methods for such functions of an infinite number of parameters based on best N-term approximation have recently been proposed and shown to converge at an algebraic rate. Collocation methods have a number of computational advantages over best N-term approximation, and we show how ideas successful there can be used to prove a similar convergence rate for sparse collocation of Hilbert-space-valued functions depending on countably many Gaussian random variables. Such functions appear as solutions of elliptic PDEs with a lognormal diffusion coefficient. We outline a general L2-convergence theory based on previous work by Bachmayr et al. and Chen and establish an algebraic convergence rate for sufficiently smooth functions, assuming a mild growth bound for the univariate hierarchical surpluses of the interpolation scheme applied to Hermite polynomials. We verify specifically for Gauss-Hermite nodes that this assumption holds and also show algebraic convergence with respect to the resulting number of sparse grid points for this case. Numerical experiments illustrate the dimension-independent convergence rate.
INI 1
12:30 to 13:30 Lunch @ Churchill College
13:30 to 14:30 Robert Gramacy (Virginia Polytechnic Institute and State University)
Replication or exploration? Sequential design for stochastic simulation experiments
We investigate the merits of replication, and provide methods that search for optimal designs (including replicates), in the context of noisy computer simulation experiments. We first show that replication offers the potential to be beneficial from both design and computational perspectives, in the context of Gaussian process surrogate modeling. We then develop a lookahead based sequential design scheme that can determine if a new run should be at an existing input location (i.e., replicate) or at a new one (explore). When paired with a newly developed heteroskedastic Gaussian process model, our dynamic design scheme facilitates learning of signal and noise relationships which can vary throughout the input space. We show that it does so efficiently, on both computational and statistical grounds. In addition to illustrative synthetic examples, we demonstrate performance on two challenging real-data simulation experiments, from inventory management and epidemiology.
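A one-step caricature of the replicate-or-explore decision (known kernel, a fixed input-dependent noise function standing in for the learned heteroskedastic model, and integrated predictive variance as the criterion; the talk's scheme uses a proper lookahead and infers the noise) might look like this:

    import numpy as np

    rng = np.random.default_rng(9)
    k = lambda A, B, ell=0.2: np.exp(-0.5 * ((A - B.T) / ell) ** 2)
    noise = lambda x: 0.05 + 0.5 * x.ravel()   # input-dependent noise variance

    X = rng.uniform(0, 1, size=(8, 1))         # current (unique) design sites
    grid = np.linspace(0, 1, 101)[:, None]     # integration + candidate grid

    def imse(Xd):
        # Integrated predictive variance of a GP fitted at design Xd
        Kdd = k(Xd, Xd) + np.diag(noise(Xd))
        Kgd = k(grid, Xd)
        return np.mean(1.0 - np.sum(Kgd @ np.linalg.inv(Kdd) * Kgd, axis=1))

    candidates = np.vstack([X, grid])          # replicates first, then new sites
    scores = [imse(np.vstack([X, c[None]])) for c in candidates]
    best = int(np.argmin(scores))
    print("replicate" if best < len(X) else "explore",
          "at x =", candidates[best].ravel())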
INI 1
14:30 to 15:30 Future directions panel
INI 1