Why Monte Carlo Converges Slowly: Insights from the Law of Large Numbers and Central Limit Theorem
This article explains how the law of large numbers and the central limit theorem underpin Monte Carlo methods, revealing why their convergence rate is low, how significance and confidence levels are defined, and why variance reduction is crucial for efficient simulations.
Law of Large Numbers and Central Limit Theorem
The foundation of Monte Carlo methods lies in probability theory, specifically the law of large numbers and the central limit theorem, which are used to assess the accuracy of Monte Carlo simulations.
Theorem 1 (Central Limit Theorem): Let \(X_1, X_2, \dots, X_n\) be an independent and identically distributed sequence of random variables with expectation \(\mu\) and variance \(\sigma^2\). Then, as \(n\) becomes large, the standardized sum \(\frac{\sum_{i=1}^n X_i - n\mu}{\sigma\sqrt{n}}\) converges in distribution to a standard normal variable.
Consequently, when the sample size is large, the probability that the sample mean deviates from the true mean by a given amount can be approximated using the normal distribution: \(P\left(\left|\frac{1}{n}\sum_{i=1}^n X_i - \mu\right| < \frac{z_{\alpha/2}\,\sigma}{\sqrt{n}}\right) \approx 1-\alpha\). Here \(\alpha\) is called the significance level, and the corresponding \(1-\alpha\) is the confidence level. The quantile \(z_{\alpha/2}\) of the standard normal distribution can be read from normal tables.
From this result we see that the arithmetic mean of the random variables converges in probability to the true mean at a rate proportional to \(1/\sqrt{n}\). For large \(n\), the quantity \(\epsilon = z_{\alpha/2}\,\sigma/\sqrt{n}\) is called the probabilistic error. This shows that Monte Carlo methods have a relatively low convergence order and converge slowly, and that the error depends on the variance of the underlying random variable. With the variance fixed, improving the precision by one decimal place requires increasing the number of trials by a factor of 100; conversely, reducing the standard deviation by a factor of 10 cuts the number of trials needed for the same precision, and hence the workload, by a factor of 100. Therefore, variance reduction is a key technique in practical Monte Carlo applications.
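The \(1/\sqrt{n}\) behavior is easy to see numerically. The following is a minimal sketch, assuming Python with only the standard library; the Uniform(0, 1) target and the 95% confidence level (\(z_{\alpha/2} \approx 1.96\)) are illustrative choices, not part of the original text.

```python
import math
import random

def mc_mean(sample, n, seed=0):
    """Estimate E[X] by averaging n draws produced by `sample(rng)`."""
    rng = random.Random(seed)
    return sum(sample(rng) for _ in range(n)) / n

# Illustrative choice: X ~ Uniform(0, 1), so mu = 0.5 and sigma^2 = 1/12.
uniform = lambda rng: rng.random()
mu, sigma = 0.5, math.sqrt(1 / 12)

for n in (100, 10_000):
    est = mc_mean(uniform, n, seed=42)
    # CLT-based probabilistic error at confidence 1 - alpha = 95%,
    # using the normal quantile z_{alpha/2} ~= 1.96:
    eps = 1.96 * sigma / math.sqrt(n)
    print(f"n={n:6d}  estimate={est:.4f}  95% probabilistic error={eps:.4f}")
```

Note how multiplying \(n\) by 100 shrinks the error bound \(\epsilon\) by only a factor of 10, which is exactly the slow convergence described above.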
Basic Idea of Monte Carlo Methods
Monte Carlo problems can be divided into two categories.
1. Stochastic problems: For these, a direct simulation approach is used. First, a probabilistic model (random vector or stochastic process) is built from the real‑world problem. Then computer‑based random sampling generates values of the random variable of interest, whose distribution is approximated by the empirical distribution of the generated samples. If the random variable \(Y\) is a function of \(k\) independent random variables \(X_1,\dots,X_k\) with known probability density functions, the Monte Carlo procedure is:
Sample repeatedly from the distributions of \(X_1,\dots,X_k\) to obtain a value of \(Y\); repeat this \(N\) times to obtain \(N\) sample values of \(Y\); use the sample distribution to approximate the true distribution of \(Y\) and compute the desired statistical quantities.
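The steps above can be sketched as follows; this is a toy instance, with the choice \(Y = X_1 + X_2\) for independent \(X_1, X_2 \sim \mathrm{Exp}(1)\) (so the true mean is 2) being an illustrative assumption, not the article's example.

```python
import random

def simulate_Y(rng):
    """One draw of Y = X1 + X2, with X1, X2 ~ Exp(1) independent.
    (The function of the X_i and their densities are illustrative choices.)"""
    x1 = rng.expovariate(1.0)
    x2 = rng.expovariate(1.0)
    return x1 + x2

rng = random.Random(7)
N = 50_000
samples = [simulate_Y(rng) for _ in range(N)]

# The empirical distribution of the N samples approximates the true
# distribution of Y, so any statistical quantity can be estimated from it:
mean_est = sum(samples) / N              # true E[Y] = 2
p_est = sum(s > 3 for s in samples) / N  # empirical estimate of P(Y > 3)
print(f"E[Y] ~= {mean_est:.3f},  P(Y > 3) ~= {p_est:.3f}")
```

Any other statistic of \(Y\) (quantiles, variance, tail probabilities) is computed the same way from the sample list.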
2. Deterministic problems: Here a probabilistic statistical model is first constructed so that the solution of interest becomes the model's probability distribution or expected value. Random sampling is then performed on this model, and the arithmetic mean of the sampled values serves as an approximation of the solution. As discussed earlier, improving the model to reduce variance and computational cost is essential for efficient Monte Carlo computation.
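A standard instance of this recast is numerical integration: writing \(I = \int_0^1 g(x)\,dx\) as \(E[g(U)]\) for \(U \sim \mathrm{Uniform}(0,1)\) and averaging samples. The sketch below assumes Python with only the standard library; the integrand \(e^{-x^2}\) is an illustrative choice.

```python
import math
import random

def mc_integral(g, n, seed=0):
    """Approximate the deterministic quantity I = integral of g over [0, 1]
    by recasting it as the expectation E[g(U)], U ~ Uniform(0, 1), and
    taking the arithmetic mean of n sampled values."""
    rng = random.Random(seed)
    return sum(g(rng.random()) for _ in range(n)) / n

# Example: I = \int_0^1 e^{-x^2} dx ~ 0.7468 (no elementary antiderivative)
est = mc_integral(lambda x: math.exp(-x * x), 100_000, seed=1)
print(f"estimate = {est:.4f}")
```

Because the error is governed by \(\mathrm{Var}(g(U))/n\), the variance-reduction techniques mentioned above (for instance, choosing a sampling density closer in shape to \(g\)) directly reduce the number of samples needed.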
Model Perspective
Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".