
Why Monte Carlo Converges Slowly: Law of Large Numbers & Central Limit Theorem Explained

This article explains how the law of large numbers and the central limit theorem underpin Monte Carlo methods, illustrating their convergence rate, the role of variance reduction, and the practical steps for applying Monte Carlo to both stochastic and deterministic problems.


Law of Large Numbers and Central Limit Theorem

The foundation of Monte Carlo methods lies in probability theory, specifically the law of large numbers and the central limit theorem. To illustrate the accuracy of Monte Carlo, we present the central limit theorem.

Theorem 1 (Central Limit Theorem). Let {X_i} be a sequence of independent and identically distributed random variables with expected value μ and finite variance σ² > 0. Then, as n → ∞, the standardized sum (∑_{i=1}^n X_i − nμ) / (σ√n) converges in distribution to a standard normal random variable.

Consequently, when n is large, the sample mean X̄_n = (1/n)∑_{i=1}^n X_i deviates from μ by more than ε = z_{α/2} σ/√n with probability approximately α; equivalently, the interval X̄_n ± z_{α/2} σ/√n covers μ with confidence level 1 − α. Here α is the significance level, and the critical value z_{α/2} of the standard normal distribution can be read from normal tables.
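As a minimal sketch of this confidence-interval recipe (the function name `mc_confidence_interval` and the Uniform(0,1) test data are illustrative choices, not from the article), the critical value z_{α/2} can be obtained from the standard normal inverse CDF instead of a table:

```python
import random
from statistics import NormalDist, mean, stdev

def mc_confidence_interval(samples, alpha=0.05):
    """Return (estimate, half_width) for a (1 - alpha) confidence
    interval on the mean, using the CLT normal approximation."""
    n = len(samples)
    z = NormalDist().inv_cdf(1 - alpha / 2)   # critical value z_{alpha/2}
    m = mean(samples)
    half_width = z * stdev(samples) / n ** 0.5
    return m, half_width

random.seed(0)
xs = [random.uniform(0, 1) for _ in range(10_000)]  # true mean is 0.5
est, hw = mc_confidence_interval(xs)
print(f"mean ~ {est:.4f} +/- {hw:.4f}")
```

With n = 10,000 samples the half-width is on the order of z_{α/2}·σ/100, illustrating the 1/√n shrinkage discussed next.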

From this result we see that the arithmetic mean of the random variables converges to μ at a rate of O(1/√n); the resulting error is probabilistic rather than a deterministic bound. Hence Monte Carlo converges slowly: to gain one additional decimal digit of accuracy, the number of trials must increase by a factor of 100. Conversely, reducing the standard deviation σ by a factor of 10 cuts the number of samples needed for a given accuracy by a factor of 100. Therefore variance reduction is crucial for efficient Monte Carlo simulations.
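The O(1/√n) rate can be checked empirically. The sketch below (a hypothetical experiment, assuming Uniform(0,1) draws with true mean 0.5) compares the average error of the sample mean at n and at 100n; the ratio should sit near 10:

```python
import random
from statistics import mean

random.seed(42)

def mc_mean_error(n):
    """Absolute error of the sample mean of n Uniform(0,1) draws."""
    return abs(mean(random.random() for _ in range(n)) - 0.5)

def avg_error(n, reps=200):
    """Average the error over several repetitions to smooth the noise."""
    return mean(mc_mean_error(n) for _ in range(reps))

e1 = avg_error(100)
e2 = avg_error(10_000)   # 100x the samples
print(e1 / e2)           # ratio near 10 reflects the O(1/sqrt(n)) rate
```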

Basic Idea of Monte Carlo Methods

Monte Carlo problems can be divided into two categories.

Stochastic problems: For these, a probability model (random vector or process) is built from the real‑world problem, then computer sampling is used to generate observations of the target random variable. If the variable Y is a function of m independent random variables X₁,…,X_m with known probability density functions, the Monte Carlo steps are:

Sample from the distribution of each X_i to obtain a value.

Combine the sampled values to compute a single realization of Y.

Repeat the sampling N times to obtain N samples of Y.

Use the empirical distribution of these samples to approximate the true distribution and compute statistical quantities.
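The four steps above can be sketched on a toy stochastic problem. The model here is illustrative, not from the article: Y = X₁ + X₂ with X₁ ~ Exponential(rate 1) and X₂ ~ Uniform(0, 2), both with known densities:

```python
import random
from statistics import mean

random.seed(1)

def sample_y():
    x1 = random.expovariate(1.0)     # step 1: sample each X_i
    x2 = random.uniform(0.0, 2.0)
    return x1 + x2                   # step 2: one realization of Y

N = 100_000
ys = [sample_y() for _ in range(N)]  # step 3: repeat N times

# Step 4: empirical quantities approximate the true ones.
est_mean = mean(ys)                  # true E[Y] = 1 + 1 = 2
p_tail = sum(y > 3 for y in ys) / N  # empirical estimate of P(Y > 3)
print(est_mean, p_tail)
```

Any statistic of Y (moments, tail probabilities, quantiles) can be read off the empirical sample in the same way.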

Deterministic problems: Here a probabilistic statistical model is first constructed so that the desired solution is the model’s probability distribution or expected value. Random sampling of this model yields an arithmetic mean that serves as an approximation of the solution. As discussed earlier, variance reduction and model improvement are essential to lower computational cost and increase efficiency.
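A classic instance of this recasting (the specific integrand is my illustrative choice) is a definite integral: I = ∫₀¹ e^(−x²) dx equals E[f(U)] for U ~ Uniform(0,1), so the arithmetic mean of f over uniform samples approximates the deterministic answer:

```python
import math
import random
from statistics import mean

random.seed(7)

def f(x):
    return math.exp(-x * x)

# The integral over [0, 1] is the expected value of f(U), U ~ Uniform(0, 1),
# so a sample mean of f at uniform points estimates it.
N = 200_000
estimate = mean(f(random.random()) for _ in range(N))
print(estimate)   # true value is (sqrt(pi)/2) * erf(1) ~ 0.746824
```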

Tags: Variance Reduction, Central Limit Theorem, Monte Carlo, Probability Theory, Law of Large Numbers
Written by

Model Perspective

Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".
