
Chance Encounter: Likelihoods and Model Fit

Workshop organized by the VIDI project "What are the chances? An explication of single-case chance". Abstracts below:

Learning the learning rate: how to repair Bayes when the model is wrong

Peter Grünwald, CWI and Leiden University

Bayesian inference can behave badly if the model under consideration is wrong yet useful: the posterior may fail to concentrate even for large samples, leading to extreme overfitting in practice. We demonstrate this on a simple regression problem. The problem goes away if we make the so-called learning rate small enough, which essentially amounts to making the prior more and the data less important. Standard Bayes sets the learning rate to 1, which can be too high under model misspecification; in the exponential weights algorithm and PAC-Bayesian inference, close cousins of Bayes popular in the learning-theory community, one often sets the learning rate to 1/√(sample size), which is too low if the setting is not adversarial. We introduce the safe Bayesian estimator, which learns the learning rate from the data. It behaves essentially as well as standard Bayes if the model is correct, but continues to achieve minimax-optimal rates when the model is wrong.
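
As a rough illustration of the learning rate mentioned in the abstract, the Python sketch below computes a generalized ("tempered") Bayesian posterior in which the likelihood is raised to a learning rate eta: eta = 1 recovers standard Bayes, while smaller values make the prior more and the data less important. This is an illustrative sketch only, not the safe Bayesian estimator itself; the Bernoulli toy data and the function name tempered_posterior are assumptions introduced here.

```python
# Illustrative sketch (not code from the talk): a generalized, "tempered"
# Bayesian posterior proportional to prior * likelihood**eta.
# eta = 1 is standard Bayes; smaller eta downweights the data relative
# to the prior, as described in the abstract.
import numpy as np

def tempered_posterior(log_prior, log_lik, eta):
    """Normalized posterior proportional to prior * likelihood**eta (log space)."""
    log_post = log_prior + eta * log_lik
    log_post -= np.logaddexp.reduce(log_post)   # normalize
    return np.exp(log_post)

# Toy example: Bernoulli data with a discrete grid of candidate parameters.
theta = np.linspace(0.01, 0.99, 99)             # candidate success probabilities
data = np.array([1, 1, 0, 1, 1, 1, 0, 1])       # hypothetical observations
log_prior = np.full_like(theta, -np.log(theta.size))   # uniform prior on the grid
log_lik = data.sum() * np.log(theta) + (len(data) - data.sum()) * np.log(1 - theta)

# Compare standard Bayes, a smaller learning rate, and the 1/sqrt(n) choice.
for eta in (1.0, 0.5, 1.0 / np.sqrt(len(data))):
    post = tempered_posterior(log_prior, log_lik, eta)
    print(f"eta = {eta:.3f}  posterior mean = {np.sum(theta * post):.3f}")
```

The safe Bayesian estimator goes further by choosing eta from the data; the sketch only shows how eta shifts the balance between prior and data.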

Model Verification and the Likelihood Principle

Sam Fletcher, MCMP Munich and Minnesota

The likelihood principle (LP) is typically understood as a constraint on any measure of evidence applied to a statistical experiment. It is not sufficiently often noted, however, that the LP assumes that the probability model giving rise to a particular concrete data set is statistically adequate: it must "fit" the data sufficiently well. In practice, modeling assumptions are often necessary, but their adequacy can nevertheless be verified using statistical tests. My present concern is to consider whether the LP applies to these techniques of model verification. If one does view model verification as part of the inferential procedures that the LP intends to constrain, then there are certain crucial tests of model verification that no known method satisfying the LP can perform. But if one does not, then the degree to which these assumptions have been verified is bracketed off from the evidential evaluation under the LP. Although I conclude from this that the LP cannot be a universal constraint on any measure of evidence, proponents of the LP may hold out for a weaker version, either as a kind of idealization or as defining one among many different forms of evidence.
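
For readers unfamiliar with model verification in practice, the sketch below shows one routine adequacy check of the kind the abstract alludes to: a Kolmogorov-Smirnov goodness-of-fit test, whose p-value depends on data that could have been observed but were not, which is the sort of information LP-respecting measures of evidence set aside. The simulated data and the use of scipy are illustrative assumptions, not part of the talk.

```python
# Illustrative sketch (not Fletcher's example): checking statistical adequacy
# of a normality assumption before running a likelihood-based analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=200)    # hypothetical data set

# Kolmogorov-Smirnov test of the assumed standard-normal model.
statistic, p_value = stats.kstest(sample, "norm")
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3f}")
# A small p-value would flag the model as statistically inadequate ("misfit").
```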

Programme

11:15 Peter Grünwald "Learning the learning rate: how to repair Bayes when the model is wrong"
12:30 Lunch
13:30 Sam Fletcher "Model Verification and the Likelihood Principle"
14:45 Discussion
16:00 Drinks

Admission is free, but space is limited, so please register by sending an email to Prof. J.W. Romeijn (leader of the research project Chance), indicating whether you would like to join for lunch (for a small donation).

When & Where?

Monday, March 9, from 11:15 until 16:00
Academiegebouw in Groningen, Broerstraat 5, Faculty Room of Law (1st floor)
