A Learned Harmonic Mean Estimator for Efficient Bayesian Model Selection
The article presents a machine‑learning‑assisted harmonic mean estimator that computes Bayesian model evidence without dependence on sampling strategies, explains its theoretical basis, compares it to the original estimator, and demonstrates its accuracy on Rosenbrock and Normal‑Gamma benchmarks.
This article introduces a machine‑learning‑assisted approach to compute Bayesian model evidence, a key quantity for Bayesian model selection, using a learned harmonic mean estimator that is independent of the underlying sampling method.
Bayesian Model Selection
Bayesian model comparison provides a principled statistical framework that balances model complexity against fit to observed data. The model evidence (marginal likelihood) appears in the denominator of Bayes' theorem and is essential for computing Bayes factors, but evaluating the high‑dimensional integral is computationally challenging.
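Concretely, writing L(θ) = p(d | θ, M) for the likelihood of data d under model M and π(θ) = p(θ | M) for the prior, the evidence z is the normalizing constant of the posterior, and the Bayes factor comparing two models follows directly (standard definitions, restated here for reference):

```latex
z = p(d \mid M) = \int \mathcal{L}(\theta)\,\pi(\theta)\,\mathrm{d}\theta,
\qquad
p(\theta \mid d, M) = \frac{\mathcal{L}(\theta)\,\pi(\theta)}{z},
\qquad
B_{12} = \frac{p(d \mid M_1)}{p(d \mid M_2)}.
```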
Original Harmonic Mean Estimator
Newton & Raftery (1994) introduced the original harmonic mean estimator, which estimates the reciprocal of the evidence from posterior samples 𝜃_i generated by MCMC. The estimator requires only posterior draws, making it attractive compared with methods tightly coupled to specific sampling schemes.
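In symbols, the estimator rests on the identity that the posterior expectation of the reciprocal likelihood equals the reciprocal evidence:

```latex
\rho
= \mathbb{E}_{p(\theta \mid d)}\!\left[\frac{1}{\mathcal{L}(\theta)}\right]
= \int \frac{1}{\mathcal{L}(\theta)}\,\frac{\mathcal{L}(\theta)\,\pi(\theta)}{z}\,\mathrm{d}\theta
= \frac{1}{z},
\qquad
\hat{\rho} = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{\mathcal{L}(\theta_i)},
\quad \theta_i \sim p(\theta \mid d).
```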
However, the original estimator can fail catastrophically. It is equivalent to importance sampling in which the posterior serves as the sampling density and the prior as the target; because the prior is typically much broader than the posterior, posterior samples cannot cover the prior's tails, and the estimator's variance can be extremely large or even infinite (Neal 2008).
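Spelled out, the underlying importance-sampling identity is

```latex
1 = \int \pi(\theta)\,\mathrm{d}\theta
  = \int \frac{\pi(\theta)}{p(\theta \mid d)}\,p(\theta \mid d)\,\mathrm{d}\theta,
\qquad
\frac{\pi(\theta)}{p(\theta \mid d)} = \frac{z}{\mathcal{L}(\theta)},
```

so the variance is governed by the ratio π(𝜃)/p(𝜃 | d), which is unbounded wherever the prior's tails exceed the posterior's.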
Re‑weighted Harmonic Mean Estimator
Gelfand & Dey (1994) proposed a re‑weighted version that introduces a new target distribution ϕ(𝜃) to avoid the problematic configurations of the original estimator. Various choices for ϕ have been explored, such as multivariate Gaussian tails and indicator functions, each with trade‑offs in variance and efficiency.
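For any normalized target ϕ(𝜃), the posterior expectation of ϕ/(Lπ) again equals the reciprocal evidence, giving the re-weighted estimator

```latex
\rho
= \mathbb{E}_{p(\theta \mid d)}\!\left[\frac{\phi(\theta)}{\mathcal{L}(\theta)\,\pi(\theta)}\right]
= \frac{1}{z},
\qquad
\hat{\rho} = \frac{1}{N}\sum_{i=1}^{N}\frac{\phi(\theta_i)}{\mathcal{L}(\theta_i)\,\pi(\theta_i)},
\quad \theta_i \sim p(\theta \mid d),
```

whose variance is controlled by how well ϕ is contained within the posterior rather than by the breadth of the prior.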
Learning the Target Distribution
The article proposes learning an approximation to the normalized posterior (the optimal target) using machine‑learning techniques. Although the exact normalized posterior is unavailable, a learned approximation that does not have heavier tails than the true posterior can be constructed from posterior samples. This yields the learned harmonic mean estimator (McEwen et al. 2021).
The learned estimator retains the original’s independence from the sampling algorithm while dramatically reducing variance.
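As a minimal illustration of the idea (not the authors' models: McEwen et al. 2021 learn targets such as hyperspheres, kernel density estimates, and Gaussian mixtures with variance-penalized training), the sketch below fits a Gaussian with deliberately shrunken covariance to half of the posterior samples, so the learned target has thinner tails than the posterior, and evaluates the re-weighted estimator on the held-out half; all names are illustrative:

```python
import numpy as np

def learned_harmonic_mean(samples, ln_likelihood, ln_prior, shrink=0.5):
    """Minimal sketch of a learned harmonic mean estimator.

    samples       : (N, ndim) posterior samples from any MCMC sampler
    ln_likelihood : (N,) log-likelihood values at the samples
    ln_prior      : (N,) log-prior values at the samples
    shrink        : covariance shrinkage factor (< 1) so the learned
                    target phi has thinner tails than the posterior
    Returns an estimate of ln(evidence).
    """
    n = samples.shape[0] // 2
    train, test = samples[:n], samples[n:]

    # "Learn" phi: a Gaussian fit to the training half, with its
    # covariance shrunk so phi is contained within the posterior.
    mu = train.mean(axis=0)
    cov = shrink * np.cov(train, rowvar=False)
    inv_cov = np.linalg.inv(cov)
    _, ln_det = np.linalg.slogdet(cov)
    d = mu.size

    # log phi(theta) on the held-out half (normalized Gaussian density).
    diff = test - mu
    ln_phi = -0.5 * (np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
                     + d * np.log(2.0 * np.pi) + ln_det)

    # Re-weighted harmonic mean: 1/z = E_post[ phi / (L * pi) ].
    ln_ratio = ln_phi - ln_likelihood[n:] - ln_prior[n:]
    ln_rho = np.logaddexp.reduce(ln_ratio) - np.log(ln_ratio.size)
    return -ln_rho  # ln(z) = -ln(rho)
```

Shrinking the covariance is a crude way to enforce the thinner-tails requirement; the learned models in the paper achieve the same containment while tracking the posterior far more closely, which is what drives the variance reduction.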
Experiments
Extensive numerical experiments compare the learned estimator's output against ground-truth evidence values on two benchmark problems:
• Rosenbrock function: across 100 independent runs, the learned estimator recovers the evidence accurately and provides reliable estimates of its own variance.
• Normal‑Gamma model: the learned estimator improves accuracy in log space by roughly four orders of magnitude over the original harmonic mean estimator.
Implementation
The learned estimator is implemented in the open‑source harmonic package (https://github.com/astro-informatics/harmonic.git). Because it only requires posterior samples, it integrates naturally with any MCMC sampler, such as the affine‑invariant ensemble sampler emcee (Foreman‑Mackey et al. 2013).
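A sketch of the end-to-end workflow on a Rosenbrock-style toy posterior is shown below. The harmonic class and method names follow the package's documented examples at the time of writing and should be treated as assumptions; consult the package documentation for the current API:

```python
import numpy as np
import emcee
import harmonic as hm

# Toy 2D Rosenbrock-shaped log-posterior (flat prior absorbed, so this
# equals log(likelihood * prior) up to the unknown evidence).
def ln_posterior(theta):
    x, y = theta
    return -(100.0 * (y - x**2)**2 + (1.0 - x)**2) / 20.0

ndim, nwalkers, nsteps = 2, 32, 2000
sampler = emcee.EnsembleSampler(nwalkers, ndim, ln_posterior)
sampler.run_mcmc(0.1 * np.random.randn(nwalkers, ndim), nsteps)

# harmonic expects samples of shape (nchains, nsamples, ndim).
samples = np.ascontiguousarray(sampler.get_chain().swapaxes(0, 1))
lnprob = np.ascontiguousarray(sampler.get_log_prob().T)

chains = hm.Chains(ndim)
chains.add_chains_3d(samples, lnprob)
chains_train, chains_test = hm.utils.split_data(chains,
                                                training_proportion=0.5)

# Fit the internal target model on the training half (model class and
# hyper-parameters are assumptions; newer releases also provide
# normalizing-flow models).
domains = [np.array([1e-1, 1e1])]
model = hm.model.KernelDensityEstimate(ndim, domains, hyper_parameters=[0.1])
model.fit(chains_train.samples, chains_train.ln_posterior)

# Estimate the evidence on the held-out half.
ev = hm.Evidence(chains_test.nchains, model)
ev.add_chains(chains_test)
ln_evidence, ln_evidence_std = ev.compute_ln_evidence()
print(ln_evidence, ln_evidence_std)
```

The train/test split mirrors the estimator's requirement that the target ϕ be learned on samples independent of those used to evaluate the harmonic mean, which keeps the evidence estimate unbiased.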
Conclusion
Bayesian model comparison remains computationally demanding because it requires evaluating the model evidence. The learned harmonic mean estimator offers a flexible, sampling-agnostic solution that dramatically reduces variance relative to the original estimator, though scaling to very high-dimensional problems may require more sophisticated machine-learning models.
References
• Newton, M. A. & Raftery, A. E. (1994), "Approximate Bayesian inference with the weighted likelihood bootstrap", Journal of the Royal Statistical Society: Series B, 56(1), 3–48.
• Gelfand, A. E. & Dey, D. K. (1994), "Bayesian model choice: asymptotics and exact calculations", Journal of the Royal Statistical Society: Series B, 56(3), 501–514.
• Neal, R. M. (2008), "The harmonic mean of the likelihood: worst Monte Carlo method ever", blog post.
• McEwen, J. D., Wallis, C. G. R., Price, M. A. & Spurio Mancini, A. (2021), "Machine learning assisted Bayesian model comparison: learnt harmonic mean estimator", arXiv:2111.12720.
• Foreman-Mackey, D., Hogg, D. W., Lang, D. & Goodman, J. (2013), "emcee: The MCMC Hammer", Publications of the Astronomical Society of the Pacific, 125(925), 306–312.
• Additional works on nested sampling, MultiNest, PolyChord, and related methods are cited in the original article.
