The program of the conference is now complete!

Invited talks

Nicky Welton

Nicky Welton is Professor of Statistical and Health Economic Modelling at the University of Bristol, working on methods for evidence synthesis in healthcare decision-making. She leads the Multi-Parameter Evidence Synthesis research group, is co-Director of the Guidelines Technical Support Unit for the National Institute for Health and Care Excellence (NICE) and co-Director of the Bristol Technology Assessment Group, and was a long-standing member of the NICE Technology Appraisal Committee.

Finn Lindgren

Finn Lindgren is a Professor of Statistics in the School of Mathematics at the University of Edinburgh. He received a PhD in Engineering in Mathematical Statistics from Lund University (2003) and has since worked as a lecturer and research fellow at Lund University and the Norwegian University of Science and Technology in Trondheim, followed by four years as Reader at the University of Bath, before joining the growing Statistics group in Edinburgh in 2016. He has served as Associate Editor of the Annals of Applied Statistics and as a member of the Royal Statistical Society Research Committee, and is an Elected Member of the ISI. His research covers spatial and spatio-temporal modelling and computational Bayesian methods. In particular, his development of stochastic partial differential equation methods, which enable the use of computationally efficient techniques for sparse matrices and Markov random fields, led to an RSS read paper in 2011. The subsequent software development, including the MCMC-free Bayesian statistics R packages INLA, inlabru, and excursions, has led to involvement in a broad range of applications, including large-scale modelling for climate science, point process models for animal abundance in ecology and for earthquake forecasting in geoscience, as well as animal movement models, finance, genetics, and epidemiology.

Program

💡: Keynote, 🛠️: Tutorial

| Time | Title | Speaker |
|------|-------|---------|
| 10:00 | Welcome: 10 years of Bayes@Lund | Dmytro Perepolkin, Rasmus Bååth |
| | *Meta-analysis* | |
| 10:10 | 💡 Bayesian Network Meta-Analysis for Healthcare Decision Making | Nicky Welton |
| 11:00 | Do beta-blockers reduce negative intrusive thoughts and anxiety in cancer survivors? – A Bayesian analysis of emulated trials | David Bock et al |
| 11:20 | Coffee Break | |
| | *Prior specification* | |
| 11:40 | Simulation-Based Prior Knowledge Elicitation for Parametric Bayesian Models | Florence Bockting et al |
| 12:00 | Translating predictive distributions into informative priors | Robert Goudie, Andrew Manderson |
| 12:20 | Pushing for Bayesian Methods in Empirical Software Engineering | Noric Couderc et al |
| 12:40 | Lunch | |
| | *Spatial Bayesian models* | |
| 13:30 | 💡 inlabru: Bayesian spatial and spatio-temporal modelling in R | Finn Lindgren |
| 14:20 | POLLENOMICS: Decoding the Farming History of Europe Using a Bayesian Approach Combining Compositional Data with a Point Process | Behnaz Pirzamanbein et al |
| 14:40 | Computationally Efficient Hierarchical Gaussian Process Regression for Functional Data | Adam Gorm Hoffmann et al |
| 15:00 | Coffee Break | |
| | *Use of Bayesian methods* | |
| 15:10 | Using a Bayesian model for clinical study design and blinded data review | Erik Werner |
| 15:30 | Conceptualizing Neuronal Networks as Vector Fields: A Bayesian Perspective on Brain Function | Szilvia Szeier, Henrik Jörntell |
| 15:50 | The Bayesian Approach to Numerical Analysis | Filip Tronarp |
| 16:10 | Bottom-up mechanistic ecosystem models framed Bayesian | Wolfgang Traylor |
| | *Future outlook* | |
| 16:30 | Closing remarks: My wish list for Bayes@Lund in the future | Ullrika Sahlin |
| 16:40 | End | |

Please register for the conference by clicking the button below (external link). The conference and the workshop are FREE to attend!

Abstracts

Bayesian Network Meta-Analysis for Healthcare Decision Making | Nicky Welton

National healthcare organisations issue guidance on which treatments are recommended as effective and cost-effective options. This requires an assessment of the relative efficacy of multiple treatment options based on all available relevant evidence. Network Meta-Analysis (NMA) is a method to synthesise evidence from studies that form a connected network of treatment comparisons, delivering a set of treatment effect estimates between all treatments in the evidence network, even if they have not been directly compared in a study. The advantages of Bayesian methods for NMA will be discussed, including: (i) providing the inputs needed for probabilistic decision models; (ii) flexibility to combine evidence from studies reporting results in different formats, using shared-parameter models; (iii) evidence-based priors for variance parameters; (iv) bias-adjustment models; (v) population adjustment; and (vi) incorporation of external data. Some future challenges for NMA will be outlined, including disconnected evidence networks.
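
To make the core NMA idea concrete, here is a minimal sketch in R using the gemtc package, one of several Bayesian NMA tools and not necessarily the software used in the talk. The star-shaped network and counts are invented: treatments B and C are each compared with A in trials, and the model returns an indirect B-versus-C estimate.

```r
library(gemtc)  # Bayesian NMA via JAGS; all data below are invented

# Arm-level binary outcome data for four two-arm trials
arm_data <- data.frame(
  study      = c("s1", "s1", "s2", "s2", "s3", "s3", "s4", "s4"),
  treatment  = c("A",  "B",  "A",  "B",  "A",  "C",  "A",  "C"),
  responders = c(20,   28,   18,   25,   22,   30,   17,   26),
  sampleSize = rep(100, 8)
)

network <- mtc.network(data.ab = arm_data)
model <- mtc.model(network,
                   type = "consistency",
                   likelihood = "binom", link = "logit",
                   linearModel = "random")
fit <- mtc.run(model)

# Indirect comparison of B vs C, never compared head-to-head above
summary(relative.effect(fit, t1 = "B", t2 = "C"))
```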

Do beta-blockers reduce negative intrusive thoughts and anxiety in cancer survivors? – A Bayesian analysis of emulated trials | David Bock et al

The aim of this study is to investigate whether beta-blocker therapy reduces psychological distress in cancer survivors, using Bayesian analysis of emulated randomized controlled trials that combine cohort study data with registry data. Questionnaire data from three cohort studies of Swedish patients diagnosed with colon, prostate or rectal cancer were combined with data on beta-blocker prescriptions from the Swedish Prescribed Drug Register. Randomized controlled trials were emulated and analysed using Bayesian weakly regularizing ordered logistic regression. No differences in negative intrusive thoughts were found. Depressed mood, impaired quality of life and anxiety were higher in the Active group.
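
For readers unfamiliar with the method named here, this is a minimal sketch of a weakly regularizing Bayesian ordered logistic regression in R with brms. The variable names and data are hypothetical; this is not the authors' analysis code.

```r
library(brms)

# Hypothetical emulated-trial data: `distress` is an ordered outcome
# (e.g. a 1-5 questionnaire item), `arm` indicates beta-blocker use
d <- data.frame(
  distress = factor(sample(1:5, 200, replace = TRUE), ordered = TRUE),
  arm = rep(c("Active", "Control"), each = 100)
)

# Ordered (cumulative) logistic regression with a weakly
# regularizing normal prior on the treatment effect
fit <- brm(
  distress ~ arm,
  data = d,
  family = cumulative("logit"),
  prior = prior(normal(0, 1), class = "b")
)

summary(fit)  # posterior for the Active-vs-Control log-odds
```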

Simulation-Based Prior Knowledge Elicitation for Parametric Bayesian Models | Florence Bockting et al

A central characteristic of Bayesian statistics is the ability to consistently incorporate prior knowledge into various modeling processes. In our work, we focus on translating domain expert knowledge into corresponding prior distributions over model parameters, a process known as prior elicitation. Expert knowledge can manifest itself in diverse formats, including information about raw data, summary statistics, or model parameters. A major challenge for existing elicitation methods is how to effectively utilize all of these different formats in order to formulate prior distributions that align with the expert’s expectations, regardless of the model structure. To address these challenges, we develop a simulation-based elicitation method that can learn the hyperparameters of potentially any parametric prior distribution from a wide spectrum of expert knowledge using stochastic gradient descent. We validate the effectiveness and robustness of our elicitation method in representative case studies ranging from (generalized) linear models to hierarchical models. Our method is largely independent of the underlying model structure and adaptable to various elicitation techniques, including quantile-based, moment-based, and histogram-based methods.
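
The authors' method learns prior hyperparameters from simulations via stochastic gradient descent; the following much-simplified R sketch conveys only the underlying idea of fitting a parametric prior to expert-stated quantiles, using ordinary numerical optimization and invented numbers.

```r
# Simplified prior elicitation illustration (not the authors' method):
# find hyperparameters of a Gamma prior that reproduce three quantiles
# a hypothetical expert has stated
elicited <- c(q10 = 0.5, q50 = 2, q90 = 6)
probs <- c(0.10, 0.50, 0.90)

loss <- function(log_par) {
  par <- exp(log_par)  # ensures shape, rate > 0
  q <- qgamma(probs, shape = par[1], rate = par[2])
  sum((log(q) - log(elicited))^2)
}

opt <- optim(c(0, 0), loss)
exp(opt$par)                                     # fitted shape and rate
qgamma(probs, exp(opt$par)[1], exp(opt$par)[2])  # check against elicited
```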

Translating predictive distributions into informative priors | Robert Goudie, Andrew Manderson

Prior information is often easier to obtain on observable quantities in a model (or other low-dimensional marginals). However, identifying the appropriate informative prior that matches this information is often difficult, particularly for complex models. I will discuss an approach and associated algorithm for “translating” such information into priors on parameters, making it easier for applied Bayesian researchers to specify sensible priors.

Pushing for Bayesian Methods in Empirical Software Engineering | Noric Couderc et al

The evaluation of software performance requires the analysis of empirical data using statistical techniques. However, software engineering researchers typically have no formal training in empirical data analysis; they tend to use frequentist approaches, but without a clear methodology. In this presentation, I will present a Bayesian statistical model tailored to the analysis of software performance experiments, which we presented at a major conference in the field. I will reflect on the advantages and the drawbacks of the Bayesian approach and conclude with some thoughts on the state of statistical methods used in the software engineering community.

inlabru: Bayesian spatial and spatio-temporal modelling in R | Finn Lindgren

The Integrated Nested Laplace Approximation (INLA) method was developed to handle latent Gaussian additive regression models. Combined with the stochastic partial differential equation method for constructing computationally efficient representations of Gaussian random fields, this has enabled fast Bayesian analysis of a wide range of models, in particular in environmental sciences requiring spatial and spatio-temporal random field models. The inlabru package extends this to a more general model class that allows more non-linearity, and a more user-friendly interface for specifying complex models, such as point process models and joint models for multiple response variables and spatial covariates. By using an iterated INLA approach, the computational power of the R-INLA implementation is extended to a wider range of models, allowing both easier access to complex spatial model specification, and new applications in ecology, epidemiology, and geosciences.
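
As a flavour of the interface, here is a minimal inlabru sketch for a Gaussian response with an SPDE-based Matérn field. The data are simulated, and the component syntax follows the sp-based interface; details may differ across inlabru versions.

```r
library(INLA)    # model-fitting backend
library(inlabru)
library(sp)

# Simulated point-referenced data (made up for illustration)
set.seed(1)
df <- data.frame(x = runif(200, 0, 10), y = runif(200, 0, 10))
df$obs <- sin(df$x / 2) + rnorm(200, sd = 0.3)
coordinates(df) <- ~ x + y

# SPDE representation of a Matern Gaussian random field
mesh <- inla.mesh.2d(loc = coordinates(df), max.edge = c(1, 4))
spde <- inla.spde2.pcmatern(mesh,
                            prior.range = c(2, 0.5),  # P(range < 2) = 0.5
                            prior.sigma = c(1, 0.5))  # P(sigma > 1) = 0.5

# inlabru component syntax: intercept + spatial field
cmp <- obs ~ Intercept(1) + field(coordinates, model = spde)

fit <- bru(cmp, data = df, family = "gaussian")
summary(fit)
```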

POLLENOMICS: Decoding the Farming History of Europe Using a Bayesian Approach Combining Compositional Data with a Point Process | Behnaz Pirzamanbein et al

This study uniquely combines advanced continental-scale data from two distinct sources: pollen-based land cover (PbLC) and ancient DNA (aDNA), developing a novel statistical model for spatiotemporal reconstructions of past land use across Europe. The aDNA data serves as a proxy for human habitation, differentiating anthropogenic and natural land cover from PbLC reconstruction. This will be accomplished using a Bayesian hierarchical model that combines compositional data, Gaussian Markov random fields and point process models. This groundbreaking approach gives insights into the environmental impacts of Holocene human migration and subsistence practices, and marks a major advancement in understanding human-environmental dynamics over millennia.

Computationally Efficient Hierarchical Gaussian Process Regression for Functional Data | Adam Gorm Hoffmann et al

Gaussian process regression is a flexible, probabilistic approach to non-linear regression modeling. We consider a hierarchical Gaussian process regression model for functional data (e.g., from wearables) where a common mean function and individual subject-specific deviations are modeled simultaneously as latent Gaussian processes. We derive exact analytic and computationally efficient expressions for the log-likelihood function and the posterior distributions when the observations are sampled on a completely or partially regular grid. We provide Stan implementations that are 1,000-100,000 times faster than standard implementations, thus enabling fitting the model to previously infeasible data sets.
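
The model structure can be written down in a few lines of R. This naive sketch (invented hyperparameters, cubic cost in the total number of observations) shows only the hierarchical covariance, not the fast grid-based computations the talk derives.

```r
library(mvtnorm)

# Hierarchical GP for functional data: y_i(t) = f(t) + g_i(t) + noise,
# with f ~ GP(0, K_f) shared across subjects and g_i ~ GP(0, K_g)
sqexp <- function(t, s, sigma2, ell) {
  sigma2 * exp(-outer(t, s, "-")^2 / (2 * ell^2))
}

t_grid <- seq(0, 1, length.out = 20)  # common regular grid
n_subj <- 5
m <- length(t_grid)

K_f <- sqexp(t_grid, t_grid, 1.0, 0.3)  # shared mean process
K_g <- sqexp(t_grid, t_grid, 0.5, 0.1)  # subject-specific deviations
noise <- 0.1

# Joint covariance of all subjects' observations stacked:
# cov(y_i, y_j) = K_f + [i == j] * (K_g + noise * I)
K <- kronecker(matrix(1, n_subj, n_subj), K_f) +
  kronecker(diag(n_subj), K_g + noise * diag(m))

y <- drop(rmvnorm(1, sigma = K))             # simulate stacked data
logLik <- dmvnorm(y, sigma = K, log = TRUE)  # naive log-likelihood
```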

Using a Bayesian model for clinical study design and blinded data review | Erik Werner

At IRLAB we develop novel treatments for Parkinson’s disease. A central part of this work is the planning and analysis of clinical trials. A key benefit of using a Bayesian approach for this is that the models used for planning trials can be continuously refined as blinded data from the trials become available. This allows simulation of a range of plausible study outcomes that are consistent with the data observed, which is helpful for optimizing both study size and the plan for how to analyze results at the end of the trial.

Conceptualizing Neuronal Networks as Vector Fields: A Bayesian Perspective on Brain Function | Szilvia Szeier, Henrik Jörntell

The notion that the brain operates as a probabilistic inference machine, integrating prior knowledge with incoming sensory information to form beliefs about the world, has received considerable interest in the field of neuroscience. While this hypothesis is popular among researchers, defining the brain mechanisms that could practically implement the idea is not a trivial task. Here we first introduce a new approach of representing neuronal networks as vector fields, which enables us to retain network complexity. Then we illustrate through a simulated experiment how Bayesian inference could be applied and interpreted in a biologically inspired network.

The Bayesian Approach to Numerical Analysis | Filip Tronarp

A function given by a formula is completely specified. However, given the formula, how do we compute its integral in general? Typically, this question is answered by interpolating the function at a finite set of points with, say, a polynomial, which can then be integrated. But once we have accepted that we don’t know everything about our function and that we are only allowed finite data, should we not be Bayesian about it? In this talk I will give a brief introduction to the Bayesian approach to numerical analysis and highlight some of my own contributions to the field.
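
A standard first example of Bayesian numerical analysis is Bayesian quadrature: a Gaussian process prior on the integrand induces a Gaussian posterior on its integral. Below is a minimal R sketch (illustrative, not material from the talk), using the closed-form kernel mean of the squared-exponential kernel over [0, 1].

```r
# Bayesian quadrature: GP prior on f, observe f at a few points,
# read off the posterior mean of its integral over [0, 1]
f <- function(x) sin(3 * x) + x^2  # integrand; pretend it is opaque
ell <- 0.3                         # kernel length scale
k <- function(x, y) exp(-outer(x, y, "-")^2 / (2 * ell^2))

x <- seq(0, 1, length.out = 8)     # evaluation points ("data")
y <- f(x)

# Kernel mean z_i = integral of k(x, x_i) over [0, 1],
# in closed form via the normal CDF
z <- sqrt(2 * pi) * ell * (pnorm((1 - x) / ell) - pnorm((0 - x) / ell))

K <- k(x, x) + 1e-10 * diag(length(x))  # jitter for numerical stability
post_mean <- sum(z * solve(K, y))       # posterior mean of the integral

post_mean
integrate(f, 0, 1)$value                # reference answer
```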

Bottom-up mechanistic ecosystem models framed Bayesian | Wolfgang Traylor

Highly mechanistic dynamic ecosystem models are typically complex and parameter-rich, but because they are constrained by ecological mechanisms they are able to predict properties of past or future ecosystems with no contemporary analogs. However, they usually rely on manual tuning and lack quantitative uncertainty analyses. Probabilistic models, on the other hand, are typically simpler but less reliable outside their training domain. How can we combine the strengths of mechanistic detail and probabilistic predictions? I will present a case study of using a dynamic vegetation–herbivore model to predict woolly mammoth population densities in the long-vanished glacial steppe of Siberia. The model is bottom-up: population densities are an emergent property, and most parameters are observable quantities. A single model run is by itself a possibilistic prediction, but a customized likelihood function—based on mammoth occurrence only—allowed me to derive posterior probability distributions via Bayesian updating. The approach demonstrates how mechanistic detail, bottom-up model development, Bayesian uncertainty propagation, and preregistration form the cornerstones of open-ended prediction under no-analog conditions.