Information

📅: Monday, 23 January 2023, 10:00-17:00
📌: Blå Hallen, Ekologihuset, Lund University
Main Organizers: Rasmus Bååth, Dmytro Perepolkin, and Ullrika Sahlin
Invited speakers: Aubrey Clayton and Mine Dogucu

Program

💡: Keynote, 🛠️: Tutorial

Time Title Speaker
10:00 Welcome
  Judgments
10:10 💡 One Probability to Rule them All? Aubrey Clayton (on Zoom)
11:00 Modeling Legal Evidence with Bayesian Networks Christian Dahlman
11:20 Coffee Break
  Prior specification
11:40 🛠️ Structured expert judgement to assess uncertainty Ullrika Sahlin
12:00 Flexible prior elicitation via the prior predictive distribution Marcelo Hartmann
12:20 🛠️ Prior sensitivity analysis with priorsense Noa Kallioinen
12:40 Lunch
  Estimation and model selection
13:30 🛠️ CUQIpy: A new Python platform for computational uncertainty quantification of inverse problems Jakob Sauer Jørgensen
13:50 Practical model-specific automatic reparametrizations for Bayesian inference Nikolas Siccha
14:10 🛠️ Projection predictive variable selection with projpred Frank Weber (on Zoom)
14:30 Coffee Break
  Use of Bayesian methods
14:50 💡 Teaching Bayesian modeling with Bayes Rules! Mine Dogucu (on Zoom)
15:40 Identifying Data Requirements using Bayesian Decision Theory: Guidance for Engineers Domenic DiFrancesco
16:00 Bayesian Integration of Biological Data for Life Science Applications Nikolay Oskolkov
16:20 Quantile-based Bayesian Inference Dmytro Perepolkin
  Lightning talks
16:40 Can AI save us from the perils of P-values? Rasmus Bååth
16:50 End

Abstracts

One Probability to Rule them All? | Aubrey Clayton

Statistics practitioners (even Bayesians!) have long struggled with the fact that no traditional definition of probability is adequate for all purposes. Sometimes probability more naturally means frequency while at other times it lends itself to an interpretation as a degree of belief. However, if the latter, then why should the numerical rules of probability hold? How do we know when a probability is wrong? In this talk I’ll survey the acrimonious history of these foundational questions and propose a new approach to teaching probability-as-logic based on E.T. Jaynes’s principle of transformation groups.

Modeling Legal Evidence with Bayesian Networks | Christian Dahlman

Legal cases often involve a complex set of hypotheses and evidence. In a criminal case there may, for example, be several competing hypotheses about the identity of the perpetrator and various pieces of evidence that support or speak against each of these hypotheses. Research shows that legal decision-makers are prone to serious errors when they use their intuition to assess probabilities in complex cases. In my talk, I will show how Bayesian Networks can be used by legal decision-makers to improve evidence assessment.
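
A minimal sketch of the kind of reasoning such a network formalises: two competing perpetrator hypotheses and two pieces of evidence assumed conditionally independent given the hypothesis. All probabilities below are invented for illustration, not from the talk.

```python
import math

# Prior probabilities for two competing hypotheses about the perpetrator
priors = {"H1": 0.5, "H2": 0.5}

# Likelihoods P(evidence | hypothesis) for two pieces of evidence,
# assumed conditionally independent given the hypothesis
likelihoods = {
    "H1": {"E1": 0.8, "E2": 0.3},
    "H2": {"E1": 0.2, "E2": 0.6},
}

def posterior(priors, likelihoods, evidence):
    """Update the prior over hypotheses given the observed evidence."""
    unnorm = {
        h: p * math.prod(likelihoods[h][e] for e in evidence)
        for h, p in priors.items()
    }
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

post = posterior(priors, likelihoods, ["E1", "E2"])  # P(H1 | E1, E2) = 2/3
```

Real forensic networks add many more nodes and dependencies, but the update rule is the same chain of multiplications and a normalisation.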

Structured expert judgement to assess uncertainty | Ullrika Sahlin

An honest communication of the impact of limitations in knowledge is an essential part of scientific advice. One approach is to characterise scientific experts’ uncertainty quantitatively using subjective probability. Structured judgement can help to minimise experts’ biases and make the process transparent and reliable. Different methods have been developed on how to aggregate judgements from multiple experts. I reflect on the role of Bayesian modelling when quantifying uncertainty in scientific assessments, using food safety as an example.
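
One common aggregation rule is the linear opinion pool, a performance-weighted average of the experts' probability distributions. The weights and judgements below are invented for illustration:

```python
import numpy as np

# Three experts' subjective probabilities for the same three-outcome event
expert_probs = np.array([
    [0.7, 0.2, 0.1],
    [0.5, 0.3, 0.2],
    [0.6, 0.3, 0.1],
])

# Performance-based weights (e.g. from calibration questions), summing to 1
weights = np.array([0.5, 0.2, 0.3])

# Linear opinion pool: the weighted average of the experts' distributions
pooled = weights @ expert_probs
```

Structured protocols differ mainly in how the weights are obtained and how biases are controlled before pooling.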

Flexible prior elicitation via the prior predictive distribution | Marcelo Hartmann

The prior distribution for the unknown model parameters plays a crucial role in the process of statistical inference based on Bayesian methods. However, specifying suitable priors is often difficult even when detailed prior knowledge is available in principle. The challenge is to express quantitative information in the form of a probability distribution. Prior elicitation addresses this question by extracting subjective information from an expert and transforming it into a valid prior. Most existing methods, however, require information to be provided on the unobservable parameters, whose effect on the data-generating process is often complicated and hard to understand. We propose an alternative approach that only requires knowledge about the observable outcomes – knowledge which is often much easier for experts to provide. Building upon a principled statistical framework, our approach utilizes the prior predictive distribution implied by the model to automatically transform expert judgements about plausible outcome values into suitable priors on the parameters. We also provide computational strategies to perform inference and guidelines to facilitate practical use.
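
The basic idea can be illustrated with a deliberately simple sketch (not the authors' algorithm): choose a prior hyperparameter so that the implied prior predictive distribution matches an expert statement about plausible outcomes, here "95% of observations should fall within ±10 of zero".

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_predictive_interval(prior_sd, sigma=1.0, n=100_000):
    """Simulate the 95% prior predictive interval of y for a normal-normal model."""
    theta = rng.normal(0.0, prior_sd, size=n)   # draw parameters from the prior
    y = rng.normal(theta, sigma)                # push them through the likelihood
    return np.quantile(y, [0.025, 0.975])

# Grid search over candidate prior scales for the best match to the
# expert's stated upper bound on plausible outcomes
target_upper = 10.0
candidates = np.linspace(0.5, 10.0, 20)
best = min(candidates,
           key=lambda s: abs(prior_predictive_interval(s)[1] - target_upper))
```

The elicitation thus happens entirely on the outcome scale; the method in the talk replaces this crude grid search with a principled fitting procedure.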

Prior sensitivity analysis with priorsense | Noa Kallioinen

priorsense is an R package that provides tools for prior diagnostics and sensitivity analysis. It currently includes functions for performing power-scaling sensitivity analysis on Stan models, a way to check how sensitive a posterior is to perturbations of the prior and likelihood and to diagnose the cause of any sensitivity. For efficient computation, priorsense uses Pareto smoothed importance sampling and importance weighted moment matching. This tutorial will show the general workflow for using the package, along with some more advanced features for more complex models.
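
Conceptually (this is a numpy sketch of the idea, not the priorsense API), power-scaling replaces the prior p(θ) with p(θ)^α and reweights existing posterior draws by p(θ)^(α−1) via importance sampling, so the perturbed posterior can be compared to the original without refitting:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_prior(theta, sd=1.0):
    """Log density (up to a constant) of a Normal(0, sd) prior."""
    return -0.5 * (theta / sd) ** 2

# Pretend these are posterior draws from an MCMC run under the prior above
draws = rng.normal(0.8, 0.3, size=50_000)

def power_scaled_mean(draws, alpha):
    """Posterior mean after scaling the prior to p(theta)^alpha."""
    logw = (alpha - 1.0) * log_prior(draws)
    w = np.exp(logw - logw.max())          # stabilised importance weights
    return np.sum(w * draws) / np.sum(w)

base = draws.mean()
shifted = power_scaled_mean(draws, alpha=1.5)  # strengthen the prior: mean shrinks toward 0
```

A posterior whose summaries move noticeably under small α perturbations is flagged as prior-sensitive; priorsense adds the Pareto smoothing and moment matching that make this reliable in practice.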

CUQIpy: A new Python platform for computational uncertainty quantification of inverse problems | Jakob Sauer Jørgensen

We present CUQIpy (“cookie pie”), a new Python package for uncertainty quantification (UQ) of inverse problems. In inverse problems, such as an X-ray CT scan, an inaccessible quantity, such as a cross-section image of the human head, is inferred from indirect measurements, such as X-ray attenuation. Inverse problems pose a challenge to Bayesian methods due to their often large-scale and ill-posed nature. CUQIpy provides modelling and sampling tools that allow both experts and non-experts to perform UQ for their inverse problem. CUQIpy is developed in the CUQI project at the Technical University of Denmark and is available at https://github.com/CUQI-DTU/CUQIpy. (This is joint work with Amal Alghamdi and Nicolai Riis, both DTU.)
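
A generic sketch of what Bayesian UQ for a linear inverse problem computes (illustrating the concept only, not the CUQIpy API): infer x from noisy indirect data y = A·x + noise, where with Gaussian prior and noise the posterior is Gaussian in closed form.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 20
A = np.tril(np.ones((n, n)))              # forward operator (a cumulative-sum "blur")
x_true = np.sin(np.linspace(0, np.pi, n))
sigma, tau = 0.1, 1.0                     # noise sd, prior sd
y = A @ x_true + rng.normal(0, sigma, size=n)

# Closed-form Gaussian posterior for x | y
prec = A.T @ A / sigma**2 + np.eye(n) / tau**2
cov = np.linalg.inv(prec)
mean = cov @ (A.T @ y) / sigma**2
post_sd = np.sqrt(np.diag(cov))           # pointwise uncertainty on the reconstruction
```

The point of a package like CUQIpy is to deliver this kind of posterior (via sampling, at scale, and for non-Gaussian models) without the user deriving the algebra by hand.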

Practical model-specific automatic reparametrizations for Bayesian inference | Nikolas Siccha

Probabilistic programming languages and packages such as Stan, PyMC, Turing.jl and brms have done a lot to make Bayesian inference more accessible to applied researchers. However, there are still several roadblocks to more “automatic” reliable Bayesian inference for general models, such as multilevel hierarchical models or discretized Gaussian process models. We aim to remove one of these roadblocks by integrating automatic, model-specific nonlinear reparametrizations for a subset of generalized non-linear multivariate multilevel models into the popular brms package. We will present applied examples that benefit from our method, including epidemiological time series analysis and discretized Gaussian process regression.
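
The best-known example of such a transformation, and a simpler relative of the model-specific ones discussed in the talk, is the non-centered parametrization of a hierarchical model: instead of sampling group effects θⱼ ~ Normal(μ, τ) directly, sample standardized effects and rescale, which decorrelates θ from τ and removes the funnel geometry that troubles samplers.

```python
import numpy as np

rng = np.random.default_rng(3)

mu, tau = 1.0, 0.5
n_groups = 8

# Centered parametrization: sample the group effects directly
theta_centered = rng.normal(mu, tau, size=n_groups)

# Non-centered parametrization: theta_j = mu + tau * z_j with z_j ~ Normal(0, 1);
# the sampler explores the well-behaved z-space instead of theta-space
z = rng.normal(0.0, 1.0, size=n_groups)
theta_noncentered = mu + tau * z
```

Both forms define the same distribution for θ; which one samples well depends on the data, which is why automating the choice per model is valuable.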

Projection predictive variable selection with projpred | Frank Weber

The R package projpred implements projection predictive variable selection for Bayesian regression models, typically fitted with rstanarm or brms. Although projection predictive variable selection has been shown to possess excellent properties in terms of the trade-off between predictive performance and sparsity, projpred still seems to be used by only a few applied researchers. The aim of this short tutorial is therefore to raise awareness of projpred with the help of an example variable selection problem.
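
The core projection step can be sketched as follows (a conceptual numpy illustration, not the projpred API): each posterior draw of the full model's coefficients is projected onto a candidate submodel by finding submodel coefficients whose fit is as close as possible to the full model's fit, which for a Gaussian model is a least-squares problem. Data and "posterior draws" are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

n, p = 200, 4
X = rng.normal(size=(n, p))

# Fake posterior draws for the full model: only the first two predictors matter
beta_draws = rng.normal([2.0, 1.0, 0.0, 0.0], 0.1, size=(100, p))

# Project every draw onto the submodel using predictors 0 and 1:
# minimise || X @ beta_draw - X_S @ b ||^2 over submodel coefficients b
S = [0, 1]
XS = X[:, S]
proj = np.linalg.lstsq(XS, X @ beta_draws.T, rcond=None)[0]  # shape (len(S), n_draws)
```

Submodels are then ranked by how little predictive performance is lost under this projection, which is what projpred automates (including the cross-validated search).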

Teaching Bayesian modeling with Bayes Rules! | Mine Dogucu

Bayesian statistics is becoming more popular in data science, yet data scientists are often not trained in it, and when they are, it is usually as part of their graduate training. In this talk, we will introduce an introductory course in Bayesian statistics for learners at the undergraduate level and comparably trained practitioners. We will share tools for teaching (and learning) a first course in Bayesian statistics, specifically the bayesrules package that accompanies the open-access book Bayes Rules! An Introduction to Bayesian Modeling with R. We will provide an outline of the curriculum and examples for novice learners and their instructors.

Identifying Data Requirements using Bayesian Decision Theory: Guidance for Engineers | Domenic DiFrancesco

New methods of collecting and analysing data that are available to engineers can contribute to improvements in safety and efficiency of the built environment. However, understanding the required quantity and quality of data remains a challenge for engineers. This presentation will detail selected case studies (including inspecting a structure for damage, and measuring building occupancy to mitigate the risk of airborne infections) in which the expected value of data collection is quantified using Bayesian decision analysis.
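
A toy preposterior analysis shows the shape of the calculation (all numbers invented): a structure is damaged with prior probability 0.1, repairing costs 1, unrepaired damage costs 10, and an imperfect inspection detects damage with probability 0.9 but false-alarms with probability 0.2. The expected value of the inspection is the drop in expected cost it enables.

```python
p_damage = 0.1
cost_repair, cost_failure = 1.0, 10.0
p_detect, p_false_alarm = 0.9, 0.2

# Expected cost without data: act on the prior alone
cost_no_data = min(cost_repair,             # always repair
                   p_damage * cost_failure)  # never repair

# Expected cost with the inspection: pick the best action per outcome
p_positive = p_damage * p_detect + (1 - p_damage) * p_false_alarm
p_damage_pos = p_damage * p_detect / p_positive
p_damage_neg = p_damage * (1 - p_detect) / (1 - p_positive)
cost_with_data = (p_positive * min(cost_repair, p_damage_pos * cost_failure)
                  + (1 - p_positive) * min(cost_repair, p_damage_neg * cost_failure))

# Worth collecting the data whenever this exceeds the inspection's own cost
expected_value_of_inspection = cost_no_data - cost_with_data
```

The case studies in the talk apply this same logic with engineering models in place of these scalar probabilities.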

Bayesian Integration of Biological Data for Life Science Applications | Nikolay Oskolkov

Recent advances in next-generation sequencing technologies have allowed various types of biological data to be considered and analysed in the context of each other. In this presentation, I plan to give an overview of available methodology for biological data integration analysis, and to concentrate on Bayesian learning as a promising way to explore and combine heterogeneous data in the Life Sciences. I will demonstrate how Bayesian matrix factorization techniques can be successfully used for studying heterogeneity among biological cells and discovering novel cell types for biomedical applications.

Quantile-based Bayesian Inference | Dmytro Perepolkin

Bayesian inference can be extended to probability distributions defined in terms of their quantile functions. We describe a quantile-based likelihood method for Bayesian models whose sampling distributions lack an explicit probability density function. Quantile-based inference can be used in univariate settings, as well as in parametric quantile regression for describing and updating the parameters of the error term. We also extend the method to propose the definition of a quantile-based prior and discuss the situations where it can be useful.
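
The key computational trick can be illustrated generically (this sketch is not the talk's full method): for a distribution specified only by its quantile function Q(u), the density at x is 1/Q'(u*) where u* solves Q(u*) = x, so a likelihood can be evaluated even without a closed-form density. Here Q is the exponential quantile function, so the result can be checked against the known density e^(−x).

```python
import math

def Q(u, rate=1.0):
    """Exponential quantile function (has a known density, for checking)."""
    return -math.log1p(-u) / rate

def density_from_quantile(x, Q, eps=1e-8):
    """Evaluate the density at x using only the quantile function Q."""
    # Invert Q by bisection to find u* with Q(u*) = x
    lo, hi = 0.0, 1.0 - 1e-12
    while hi - lo > 1e-12:
        mid = (lo + hi) / 2
        if Q(mid) < x:
            lo = mid
        else:
            hi = mid
    u = (lo + hi) / 2
    # f(x) = 1 / Q'(u*), with Q' approximated by a central difference
    dQ = (Q(u + eps) - Q(u - eps)) / (2 * eps)
    return 1.0 / dQ

f1 = density_from_quantile(1.0, Q)   # close to exp(-1)
```

With this building block, the likelihood of each observation, and hence the posterior, can be computed for quantile-defined families such as the generalized lambda distribution.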