Likelihood-based methods for low frequency diffusion data
Sven Wang1, Matteo Giordano2
1Humboldt-Universität zu Berlin, Germany; 2University of Turin, Italy
We will consider the problem of nonparametric inference in multi-dimensional diffusion models from low-frequency data. Due to the computational intractability of the likelihood, implementation of likelihood-based procedures in such settings is a notoriously difficult task. Exploiting the underlying (parabolic) PDE structure of the transition densities, we derive computable formulas for the likelihood function and its gradients. We then construct a Metropolis-Hastings Crank-Nicolson-type algorithm for Bayesian inference with Gaussian priors, as well as gradient-based methods for computing the MLE and Langevin-type MCMC. The performance of the algorithms is illustrated via numerical experiments.
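For orientation, the Crank-Nicolson-type proposal underlying such algorithms is reversible with respect to the Gaussian prior, so only the likelihood enters the acceptance ratio. Below is a minimal sketch of one pCN step, assuming a prior sampler `sample_prior` and a computable log-likelihood `log_lik` (both hypothetical placeholders; in the setting of the talk, the likelihood evaluations would come from the PDE-based formulas).

```python
import numpy as np

def pcn_step(theta, log_lik, sample_prior, beta=0.2, rng=None):
    """One preconditioned Crank-Nicolson (pCN) Metropolis-Hastings step.

    The proposal is reversible with respect to the centred Gaussian prior,
    so the acceptance ratio reduces to a likelihood ratio.
    """
    rng = np.random.default_rng() if rng is None else rng
    xi = sample_prior(rng)                          # xi ~ N(0, C), a draw from the Gaussian prior
    proposal = np.sqrt(1.0 - beta ** 2) * theta + beta * xi
    log_alpha = log_lik(proposal) - log_lik(theta)  # prior terms cancel for pCN
    if np.log(rng.uniform()) < log_alpha:
        return proposal, True                       # accepted
    return theta, False                             # rejected
```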
2:00pm - 2:25pm
Statistical guarantees for stochastic Metropolis-Hastings
Sebastian Bieringer1, Gregor Kasieczka1, Maximilian F. Steffen2, Mathias Trabs2
1Universität Hamburg, Germany; 2Karlsruhe Institute of Technology, Germany
Uncertainty quantification is a key issue for the application of deep neural network methods in science and engineering. To this end, numerous Bayesian neural network approaches have been introduced. The main challenge is to construct an algorithm that, on the one hand, scales to the large sample sizes and parameter dimensions of modern applications and, on the other hand, admits statistical guarantees. A stochastic Metropolis-Hastings step saves computational cost by calculating the acceptance probabilities only on random (mini-)batches, but it reduces the effective sample size, leading to less accurate estimates. We demonstrate that this drawback can be fixed with a simple correction term. Focusing on deep neural network regression, we prove a PAC-Bayes oracle inequality which yields optimal contraction rates, and we analyze the diameter of the resulting credible sets and show that they have high coverage probability. The method is illustrated with a simulation example.
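To illustrate the mechanism, here is a schematic sketch of a stochastic Metropolis-Hastings step that evaluates the acceptance probability on a random mini-batch. The batch log-likelihood is rescaled by n/m as a naive stand-in for the full-data log-likelihood; the specific correction term of the talk is deliberately not reproduced. All names (`proposal_fn`, `batch_loglik`, `log_prior`) are hypothetical placeholders.

```python
import numpy as np

def stochastic_mh_step(theta, proposal_fn, batch_loglik, log_prior,
                       data, batch_size, rng=None):
    """Schematic stochastic Metropolis-Hastings step on a random mini-batch.

    The batch log-likelihood is rescaled by n / m as a naive surrogate for
    the full-data log-likelihood; the correction term analysed in the talk
    is NOT reproduced here.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(data)                                   # data: numpy array of observations
    batch = data[rng.choice(n, size=batch_size, replace=False)]
    proposal = proposal_fn(theta, rng)
    scale = n / batch_size
    log_alpha = (scale * (batch_loglik(proposal, batch) - batch_loglik(theta, batch))
                 + log_prior(proposal) - log_prior(theta))
    if np.log(rng.uniform()) < log_alpha:
        return proposal
    return theta
```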
2:25pm - 2:50pm
The Bernstein-von Mises theorem for semiparametric mixtures
Stefan Franssen1, Jeanne Nguyen2, Aad van der Vaart3
1University of Oxford, United Kingdom; 2INRIA, France; 3TU Delft, The Netherlands
Mixture models are a flexible tool for modelling complex data. They consist of three components: a finite-dimensional parameter, the mixing distribution, and the kernel. As Bayesians, we put priors on the finite-dimensional parameter and the mixing distribution; a natural choice of prior for the mixing distribution is a species sampling process. We study the frequentist properties of such priors when the parameter of interest is the finite-dimensional parameter. Three fundamental questions arise: 1) Does the posterior concentrate near the truth? 2) How fast does the posterior contract to the truth? 3) Can we use credible sets as valid confidence sets? Answering these questions yields insight into designing priors that are optimal for the learning problem at hand. To answer them, we prove a semiparametric Bernstein-von Mises theorem and provide the tools to verify its assumptions; in particular, the new tools allow us to check the change-of-measure condition of semiparametric BvM theorems. We then use these tools to prove the Bernstein-von Mises theorem for mixtures of finite mixtures and Dirichlet process mixtures in the frailty and errors-in-variables models.
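For context, a semiparametric Bernstein-von Mises theorem of this kind asserts, in its standard formulation, that the marginal posterior for the finite-dimensional parameter $\theta$ is asymptotically Gaussian:

\[
\bigl\| \Pi\bigl(\sqrt{n}(\theta - \hat\theta_n) \in \cdot \mid X_1,\dots,X_n\bigr) - N\bigl(0, \tilde I_{\theta_0}^{-1}\bigr) \bigr\|_{TV} \longrightarrow 0
\]

in $P_{\theta_0}$-probability, where $\hat\theta_n$ is an efficient estimator and $\tilde I_{\theta_0}$ the efficient Fisher information at the true parameter $\theta_0$. This answers questions 2) and 3) at once: the posterior contracts at the parametric rate $1/\sqrt{n}$, and credible sets for $\theta$ are asymptotically valid confidence sets.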