
Mastering the Method of Moments: Theory and Python Example

This article explains the method of moments for estimating population parameters, outlines its step-by-step procedure, and demonstrates a Python implementation that estimates a basketball player's shooting odds from binary outcome data.


According to the law of large numbers, if a population’s k‑th moment exists, the sample’s k‑th moment converges in probability to the population’s k‑th moment, and continuous functions of the moments also converge. This justifies using sample moments as estimators of population moments, a technique known as the method of moments.
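The convergence described above is easy to see empirically. The following sketch (an illustration added here, not from the original article) draws uniform(0, 1) samples of increasing size and shows the first sample moment approaching the population mean 0.5:

```python
import numpy as np

def first_sample_moment(n, seed=0):
    """Sample mean (first sample raw moment) of n uniform(0, 1) draws."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 1.0, size=n).mean()

# The gap to the population mean 0.5 shrinks as the sample size grows,
# as the law of large numbers guarantees.
for n in (100, 10_000, 1_000_000):
    print(n, abs(first_sample_moment(n) - 0.5))
```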

Let X₁,…,Xₙ be a random sample from a population. The sample k‑th raw moment is the average of the k‑th powers of the observations. If the population’s k‑th raw moment exists, we estimate it by the corresponding sample moment.
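As a minimal sketch of this definition (the helper function name is my own), the k-th sample raw moment is just the mean of the k-th powers of the observations:

```python
import numpy as np

def sample_raw_moment(x, k):
    """k-th sample raw moment: the mean of the k-th powers of the data."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** k)

data = [1.0, 2.0, 3.0, 4.0]
print(sample_raw_moment(data, 1))  # first moment: the sample mean, 2.5
print(sample_raw_moment(data, 2))  # second moment: (1 + 4 + 9 + 16) / 4 = 7.5
```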

Assume the population distribution depends on r unknown parameters and that the first r moments exist and can be expressed as functions of these parameters. The method‑of‑moments estimator is obtained by:

Compute the first r sample moments and set them equal to the corresponding theoretical moments, treating the parameters as unknowns.

Solve the resulting system of equations for the parameters.

Substitute the solutions back to obtain the estimators of the parameters.
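The three steps above can be sketched for a two-parameter case (this example is my own illustration, not from the article). For a normal population, E[X] = μ and E[X²] = σ² + μ²; setting these equal to the first two sample moments m₁ and m₂ and solving gives μ̂ = m₁ and σ̂² = m₂ − m₁²:

```python
import numpy as np

def normal_mom(x):
    """Method-of-moments estimates (mu_hat, sigma2_hat) for a normal sample."""
    x = np.asarray(x, dtype=float)
    m1 = np.mean(x)           # first sample moment
    m2 = np.mean(x ** 2)      # second sample raw moment
    return m1, m2 - m1 ** 2   # solve E[X] = mu, E[X^2] = sigma^2 + mu^2

rng = np.random.default_rng(1)
mu_hat, sigma2_hat = normal_mom(rng.normal(loc=2.0, scale=3.0, size=100_000))
print(mu_hat, sigma2_hat)  # close to the true values 2 and 9
```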

When actual sample observations are available, substituting them into the derived formulas yields numerical estimates. Because the analytical solution may be difficult for complex distributions, numerical or iterative algorithms are often required, and a custom program must be written for each specific problem.
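A hedged sketch of the numerical route mentioned above (the gamma distribution and the solver choice are my assumptions, not from the article): for a gamma population with shape a and scale s, E[X] = a·s and E[X²] = a(a + 1)s², and the two moment equations can be solved iteratively with SciPy's `fsolve` instead of by hand:

```python
import numpy as np
from scipy.optimize import fsolve

# Simulated gamma data with known shape 2.0 and scale 1.5 (an assumption
# for this illustration), and its first two sample raw moments.
rng = np.random.default_rng(2)
x = rng.gamma(shape=2.0, scale=1.5, size=200_000)
m1, m2 = np.mean(x), np.mean(x ** 2)

def moment_equations(params):
    """Residuals of the two moment equations for gamma(a, s)."""
    a, s = params
    return [a * s - m1, a * (a + 1) * s ** 2 - m2]

# Solve the system numerically from a rough starting point.
a_hat, s_hat = fsolve(moment_equations, x0=[1.0, 1.0])
print(a_hat, s_hat)  # close to the true values 2 and 1.5
```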

For a Bernoulli population with success probability p, the population mean equals p. By the method of moments, the estimator of p is the sample mean X̄.
Consider a basketball player’s shot outcomes (1 for hit, 0 for miss). The following Python code estimates the odds of a successful shot using the method of moments.
<code>import numpy as np

# Shot outcomes: 1 = hit, 0 = miss
x = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1,
              0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1])

# Method-of-moments estimate of the hit probability p: the sample mean
p_hat = x.mean()

# Odds of a successful shot: p / (1 - p)
h = p_hat / (1 - p_hat)
print('h =', h)
</code>

The program outputs h = 1.2857142857142858, the estimated odds of a successful shot (18 hits against 14 misses).

Reference: Zhu Shunquan, “Economic and Financial Data Analysis and Its Python Application”.

Tags: Python, statistics, data analysis, probability, parameter estimation, method of moments
Written by Model Perspective

Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".
