Approximate Bayesian Inference
A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".
Deadline for manuscript submissions: closed (22 June 2021)
Special Issue Editor
Dr. Pierre Alquier
Interests: statistical learning theory; mathematical statistics; Bayesian statistics; aggregation of estimators; approximate posterior inference
Special Issue Information
Already extremely popular in statistical inference, Bayesian methods are also becoming popular in machine learning and AI, where it is important for a system not only to predict well, but also to quantify the uncertainty of its predictions.
Traditionally, Bayesian estimators were implemented using Monte Carlo methods, such as the Metropolis–Hastings algorithm or the Gibbs sampler. These algorithms target the exact posterior distribution. However, many modern models in statistics are simply too complex to use such methodologies. In machine learning, the volume of data used in practice makes Monte Carlo methods too slow to be useful.
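To make this concrete, here is a minimal random-walk Metropolis–Hastings sketch in Python; the function name, the proposal scale step, and the toy Gaussian target are illustrative assumptions for this sketch, not code from any particular package:

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=10_000, step=0.5, seed=None):
    """Random-walk Metropolis-Hastings: a Markov chain targeting the exact
    posterior whose log-density (up to an additive constant) is log_post."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior(prop) / posterior(theta)).
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples[i] = theta
    return samples

# Toy example: the "posterior" is a standard 2-D Gaussian.
draws = metropolis_hastings(lambda t: -0.5 * np.sum(t**2), np.zeros(2))
```

Note that every iteration requires a full evaluation of the posterior density, which is precisely what becomes prohibitive for large datasets.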
Motivated by these applications, many faster algorithms have recently been proposed that target an approximation of the posterior.
1) A first family of methods still relies on Monte Carlo simulations but targets an approximation of the posterior. For example, approximate versions of Metropolis–Hastings based on subsampling, or Langevin Monte Carlo methods, are extremely useful when the sample size or the dimension of the data is too large for exact methods. The approximate Bayesian computation (ABC) algorithm is useful when the model is generative, in the sense that it is simple to sample from it, even though its likelihood may be intractable (a minimal ABC sketch is given after this list).
2) Another interesting class of methods relies on optimization to approximate the posterior by a member of a tractable family of probability distributions: for example, variational approximations, Laplace approximations, the expectation propagation (EP) algorithm, etc. (a Laplace sketch is given below).
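As an illustration of the first family, here is a minimal ABC rejection sampler in Python; the simulator, prior, summary-statistic distance, and tolerance eps of the toy example are all assumptions made for this sketch:

```python
import numpy as np

def abc_rejection(simulate, sample_prior, observed, distance, eps, n_keep, seed=None):
    """Likelihood-free rejection ABC: keep prior draws whose simulated
    data land within distance eps of the observed data."""
    rng = np.random.default_rng(seed)
    kept = []
    while len(kept) < n_keep:
        theta = sample_prior(rng)
        if distance(simulate(theta, rng), observed) < eps:
            kept.append(theta)
    return np.array(kept)

# Toy example: infer the mean of a Gaussian with known unit variance.
rng = np.random.default_rng(0)
obs = rng.normal(1.5, 1.0, size=50)
post = abc_rejection(
    simulate=lambda th, r: r.normal(th, 1.0, size=50),
    sample_prior=lambda r: r.normal(0.0, 5.0),
    observed=obs,
    distance=lambda x, y: abs(x.mean() - y.mean()),  # compare summary statistics
    eps=0.05,
    n_keep=200,
)
```

The accepted draws follow an approximation of the posterior that becomes exact only as the tolerance eps tends to zero.

For the second family, the Laplace approximation is the simplest to sketch: the posterior is replaced by a Gaussian centered at its mode, with covariance given by the inverse Hessian of the negative log-posterior at the mode. A minimal sketch under that definition, using BFGS's inverse-Hessian estimate as a stand-in for an exact Hessian:

```python
import numpy as np
from scipy.optimize import minimize

def laplace_approximation(neg_log_post, theta0):
    """Gaussian approximation N(mode, H^{-1}) of the posterior, where H is
    the Hessian of the negative log-posterior at its minimum (the mode)."""
    res = minimize(neg_log_post, theta0, method="BFGS")
    # res.hess_inv is BFGS's running estimate of the inverse Hessian;
    # substitute an exact or numerical Hessian when one is available.
    return res.x, res.hess_inv

# Toy example: a slightly non-Gaussian 1-D posterior.
mode, cov = laplace_approximation(lambda t: 0.5 * t[0]**2 + 0.1 * t[0]**4, np.array([1.0]))
```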
Of course, even though these algorithms are much faster than exact methods, it is extremely important to quantify what is lost in accuracy with respect to the exact posterior. For some of the previous methods, such results are still only partially available. Recent work established the very good scaling properties of Langevin Monte Carlo with respect to the dimension of the data. Another series of papers connected the question of the accuracy of variational approximations to the PAC–Bayes literature in machine learning and obtained convergence results.
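For concreteness, the Langevin method referred to above, in its simplest unadjusted form, discretizes a diffusion whose stationary law is the posterior; the following sketch assumes only access to a gradient of the log-posterior, with the step size h chosen arbitrarily for the toy target:

```python
import numpy as np

def unadjusted_langevin(grad_log_post, theta0, n_iter=10_000, h=1e-2, seed=None):
    """Unadjusted Langevin algorithm (ULA):
    theta <- theta + (h/2) * grad log pi(theta) + sqrt(h) * N(0, I).
    Targets an approximation of the posterior (no Metropolis correction)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    samples = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        noise = rng.standard_normal(theta.size)
        theta = theta + 0.5 * h * grad_log_post(theta) + np.sqrt(h) * noise
        samples[i] = theta
    return samples

# Toy example: standard Gaussian target, grad log pi(theta) = -theta.
draws = unadjusted_langevin(lambda t: -t, np.zeros(2))
```

The discretization bias introduced by the step size h is exactly the kind of approximation error that such scaling results quantify.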
The objective of this Special Issue is to present the latest advances in approximate Monte Carlo methods and in approximations of the posterior: the design of efficient algorithms, the study of the statistical properties of these algorithms, and challenging applications.
Dr. Pierre Alquier
Guest Editor
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- Bayesian statistics
- variational approximations
- EP algorithm
- Langevin Monte Carlo
- Laplace approximations
- Approximate Bayesian Computation (ABC)
- Markov chain Monte Carlo (MCMC)
- PAC–Bayes
Benefits of Publishing in a Special Issue
- Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
- Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
- Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
- External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
- e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.
Further information on MDPI's Special Issue policies can be found here.