
Unlocking Monte Carlo Sampling: From Basics to Acceptance‑Rejection in AI

Monte Carlo methods, originally a gambling-inspired random simulation technique, offer a versatile way to approximate integrals and sums. Combined with acceptance‑rejection sampling, they also make it possible to draw samples from complex probability distributions, a key step toward Markov Chain Monte Carlo (MCMC) algorithms in machine learning and AI.


MCMC Overview

From its name, MCMC combines Monte Carlo (MC) simulation and Markov Chain (also MC). To understand MCMC we first need to grasp Monte Carlo methods and Markov chains. This series will cover MCMC in three parts; this article focuses on Monte Carlo.

Introduction to Monte Carlo Method

Monte Carlo was named after the casino because it simulates random processes like dice rolls. Historically it was used to approximate difficult sums or integrals. For an integral where the antiderivative is hard to obtain, Monte Carlo can estimate its value by random sampling.

For example, to estimate the integral of a function f(x) over [a, b], we repeatedly draw a point x uniformly from [a, b] and evaluate f(x). By the law of large numbers, the sample average of f converges to E[f(X)], so (b − a) times that average approximates the integral: ∫ₐᵇ f(x) dx ≈ (b − a) · (1/n) Σᵢ f(xᵢ).

In practice we draw many samples, compute their mean, and use it to estimate the integral. This assumes the points are uniformly distributed over the interval, an assumption that fails for many problems.
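The uniform estimator above can be sketched in a few lines of NumPy (the function name and parameters here are illustrative, not from the article):

```python
import numpy as np

def mc_integrate_uniform(f, a, b, n=100_000, seed=0):
    """Estimate the integral of f over [a, b] by uniform sampling."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(a, b, size=n)   # n uniform samples on [a, b]
    return (b - a) * f(x).mean()    # (b - a) * sample mean of f

# Example: integrate x^2 over [0, 1]; the exact value is 1/3.
estimate = mc_integrate_uniform(lambda x: x**2, 0.0, 1.0)
```

With 100,000 samples the estimate lands close to 1/3; the error shrinks like 1/√n regardless of dimension, which is what makes Monte Carlo attractive for hard integrals.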

If the true probability density p(x) is known, we can rewrite the integral as an expectation under p(x): ∫ f(x) dx = ∫ [f(x)/p(x)] p(x) dx = E_p[f(x)/p(x)]. This yields a more general Monte Carlo formula that works for both continuous and discrete cases.
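In this general form, Monte Carlo reduces any expectation E_p[g(X)] to a sample mean over draws from p. A minimal sketch, assuming we can sample from p directly (names are illustrative):

```python
import numpy as np

def mc_expectation(g, p_sampler, n=100_000, seed=0):
    """Estimate E_p[g(X)] as the mean of g over samples drawn from p."""
    rng = np.random.default_rng(seed)
    x = p_sampler(rng, n)   # n samples from the target density p
    return g(x).mean()

# Example: for X ~ N(0, 1), E[X^2] is the variance, i.e. 1.
est = mc_expectation(lambda x: x**2,
                     lambda rng, n: rng.standard_normal(n))
```

The whole method therefore hinges on one thing: being able to draw samples from p(x) in the first place, which is the topic of the next section.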

Probability Distribution Sampling

The key to Monte Carlo is obtaining samples from the target distribution p(x). For common distributions (uniform, normal, t, F, Beta, Gamma) we can transform uniform random numbers or use library functions (e.g., NumPy, scikit‑learn). However, for uncommon or complex distributions we often cannot sample directly.
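For the common cases, NumPy's random generator covers sampling directly, and the classic inverse-transform trick turns uniforms into other distributions (the exponential example below is a standard illustration, not from the article):

```python
import numpy as np

rng = np.random.default_rng(42)

u = rng.uniform(0, 1, size=5)                 # Uniform(0, 1)
z = rng.standard_normal(5)                    # Normal(0, 1)
g = rng.gamma(shape=2.0, scale=1.0, size=5)   # Gamma(2, 1)
b = rng.beta(a=2.0, b=5.0, size=5)            # Beta(2, 5)

# Inverse-transform sampling: if U ~ Uniform(0, 1), then
# -log(1 - U) follows Exponential(1).
e = -np.log(1.0 - rng.uniform(0, 1, size=5))
```

When no closed-form transform or library routine exists for the target distribution, we need a more general tool, which is where acceptance‑rejection comes in.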

Acceptance‑Rejection Sampling

When the target distribution is difficult to sample, acceptance‑rejection provides a solution. We choose a proposal distribution q(x) that is easy to sample (e.g., a Gaussian) and a constant M such that p(x) ≤ M·q(x) for all x. We then:

Draw a sample x from q(x).

Draw u from Uniform(0,1).

Accept x if u ≤ p(x)/(M·q(x)); otherwise reject it.

Repeating these three steps yields a set of accepted samples distributed according to p(x).
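The three steps above can be sketched as follows. As a concrete (illustrative) target we use the Beta(2, 2) density p(x) = 6x(1 − x) on [0, 1] with a Uniform(0, 1) proposal, so q(x) = 1 and M = 1.5 bounds p everywhere:

```python
import numpy as np

def accept_reject(p, q_sampler, q_pdf, M, n_target, rng):
    """Acceptance-rejection: propose x ~ q, accept with prob p(x)/(M q(x))."""
    samples = []
    while len(samples) < n_target:
        x = q_sampler(rng)              # step 1: draw a proposal from q
        u = rng.uniform(0.0, 1.0)       # step 2: draw u ~ Uniform(0, 1)
        if u <= p(x) / (M * q_pdf(x)):  # step 3: accept or reject
            samples.append(x)
    return np.array(samples)

# Target: Beta(2, 2), p(x) = 6 x (1 - x) on [0, 1].
# Proposal: Uniform(0, 1); p peaks at 1.5, so M = 1.5 satisfies p <= M q.
rng = np.random.default_rng(0)
p = lambda x: 6.0 * x * (1.0 - x)
xs = accept_reject(p, lambda r: r.uniform(0, 1), lambda x: 1.0,
                   M=1.5, n_target=10_000, rng=rng)
```

The expected acceptance rate is 1/M (here 2/3), which is why choosing M as small as possible, i.e. a proposal q that hugs p tightly, matters for efficiency.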

Monte Carlo Method Summary

Acceptance‑rejection sampling enables Monte Carlo integration for distributions that cannot be sampled directly, but it remains difficult in high‑dimensional or complex cases where a suitable proposal distribution is hard to find. The next article will introduce Markov chains, which come to the rescue for generating samples from such complex distributions.

Tags: machine learning, sampling, Monte Carlo, MCMC, probability distribution, acceptance-rejection
Written by

Model Perspective

Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".
