How to Build and Apply the GM(2,1) Grey Model for Accurate Forecasting
This article introduces the GM(2,1) grey model: its definitions and main theorem, a step‑by‑step case study covering data preparation, parameter estimation, and solution of the whitening differential equation, and a complete Python implementation with code examples and error analysis.
GM(2,1) Model
GM(1,1) is suited to sequences that closely follow an exponential law and can only describe monotonic change; for non‑monotonic oscillating sequences or saturated S‑shaped sequences, the GM(2,1), DGM, and Verhulst models are used instead. Below we introduce the GM(2,1) model.
Definition 1: Let the original sequence be \(x^{(0)} = (x^{(0)}(1), x^{(0)}(2), \ldots, x^{(0)}(n))\). Its first‑order accumulated generating sequence (1‑AGO) is \(x^{(1)}\), with \(x^{(1)}(k) = \sum_{i=1}^{k} x^{(0)}(i)\), and its first‑order inverse accumulated generating sequence (1‑IAGO) is \(x^{(-1)}\), with \(x^{(-1)}(k) = x^{(0)}(k) - x^{(0)}(k-1)\). The mean‑generated sequence is \(z^{(1)}\), with \(z^{(1)}(k) = \tfrac{1}{2}\bigl(x^{(1)}(k) + x^{(1)}(k-1)\bigr)\). The GM(2,1) grey differential equation is
\[
x^{(-1)}(k) + a_1 x^{(0)}(k) + a_2 z^{(1)}(k) = b, \qquad k = 2, 3, \ldots, n.
\]
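As a quick numerical illustration of Definition 1 (using the case‑study data from the next section), the generating sequences can be computed directly with NumPy:
<code>import numpy as np

x0 = np.array([41, 49, 61, 78, 96, 104])  # original sequence
x1 = np.cumsum(x0)                        # 1-AGO: cumulative sums
iago = np.diff(x0)                        # 1-IAGO: first differences
z = 0.5 * (x1[1:] + x1[:-1])              # mean-generated sequence of x1

print(x1)    # -> 41, 90, 151, 229, 325, 429
print(iago)  # -> 8, 12, 17, 18, 8
print(z)     # -> 65.5, 120.5, 190.0, 277.0, 377.0
</code>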
Definition 2: The whitening equation of the GM(2,1) model is
\[
\frac{d^2 x^{(1)}}{dt^2} + a_1 \frac{d x^{(1)}}{dt} + a_2 x^{(1)} = b.
\]
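When the characteristic equation \(r^2 + a_1 r + a_2 = 0\) of the whitening equation has two distinct roots \(r_1, r_2\) and \(a_2 \neq 0\) (the case assumed by the implementation below), standard linear‑ODE theory gives the general solution
\[
x^{(1)}(t) = C_1 e^{r_1 t} + C_2 e^{r_2 t} + \frac{b}{a_2},
\]
where the constants \(C_1, C_2\) are fixed by two boundary conditions, e.g. the values of \(x^{(1)}\) at \(t = 0\) and \(t = n-1\).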
Theorem 1: Under the definitions above, let
\[
Y = \begin{pmatrix} x^{(-1)}(2) \\ x^{(-1)}(3) \\ \vdots \\ x^{(-1)}(n) \end{pmatrix}, \qquad
B = \begin{pmatrix} -x^{(0)}(2) & -z^{(1)}(2) & 1 \\ -x^{(0)}(3) & -z^{(1)}(3) & 1 \\ \vdots & \vdots & \vdots \\ -x^{(0)}(n) & -z^{(1)}(n) & 1 \end{pmatrix}.
\]
Then the least‑squares estimate of the parameter vector \(\theta = (a_1, a_2, b)^T\) is \(\hat{\theta} = (B^T B)^{-1} B^T Y\).
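A minimal numerical sketch of this least‑squares step on the case‑study data, with \(B\) stacking the columns \(-x^{(0)}(k)\), \(-z^{(1)}(k)\), and ones, and \(Y\) holding the 1‑IAGO values (`np.linalg.lstsq` is used rather than forming \((B^TB)^{-1}\) explicitly):
<code>import numpy as np

x0 = np.array([41, 49, 61, 78, 96, 104])
x1 = np.cumsum(x0)                 # 1-AGO
z = 0.5 * (x1[1:] + x1[:-1])       # mean-generated sequence
Y = np.diff(x0)                    # 1-IAGO values for k = 2..n
B = np.column_stack([-x0[1:], -z, np.ones(len(x0) - 1)])

theta, *_ = np.linalg.lstsq(B, Y, rcond=None)  # theta = (a1, a2, b)
a1, a2, b = theta
print('a1 =', a1, 'a2 =', a2, 'b =', b)
</code>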
Case Study
Problem
Given the data series \(\{41, 49, 61, 78, 96, 104\}\), establish a GM(2,1) model.
Modeling
The 1‑AGO sequence of the data and the corresponding 1‑IAGO sequence are computed, the mean‑generated sequence \(z^{(1)}\) is formed, and the whitening equation is constructed. Using the boundary conditions \(x^{(1)}(0)\) and \(x^{(1)}(n-1)\), the time‑response function is derived, and applying the 1‑IAGO transformation to it yields the predicted values of the original series.
The prediction results are summarized (original data, predicted data, residuals, and relative errors). The relative errors are all below 5%, indicating good forecasting performance.
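The residuals and relative errors in that summary follow a simple rule: residual = actual − predicted, relative error = |residual| / actual. This can be packaged as a small helper (the inputs below are illustrative numbers, not the case‑study results):
<code>import numpy as np

def error_report(actual, predicted):
    # residuals and relative errors of a forecast
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residual = actual - predicted
    relative = np.abs(residual) / actual
    return residual, relative

res, rel = error_report([100.0, 50.0], [98.0, 51.0])  # illustrative data only
print(res)  # -> 2.0, -1.0
print(rel)  # -> 0.02, 0.02
</code>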
Python Implementation
<code>import numpy as np
from sympy import Function, diff, dsolve, symbols, solve, exp
x0=np.array([41, 49, 61, 78, 96, 104])
n=len(x0); x1=np.cumsum(x0) # 1‑AGO sequence
ax0=np.diff(x0) # 1‑IAGO sequence
z=0.5*(x1[1:]+x1[:-1]) # mean‑generated sequence (must be built before B)
B=np.c_[-x0[1:], -z, np.ones(n-1)]
u=np.linalg.pinv(B).dot(ax0) # least‑squares estimate of (a1, a2, b)
# construct characteristic polynomial r^2 + a1*r + a2 and solve for roots
p=np.r_[1, u[:-1]]
r=np.roots(p)
# particular solution of the differential equation
xts=u[2]/u[1]
c1,c2,t=symbols('c1,c2,t')
eq1=c1+c2+xts-x1[0] # boundary condition x1(0)=41
eq2=c1*exp((n-1)*r[0])+c2*exp((n-1)*r[1])+xts-x1[-1] # boundary condition x1(5)=429
c=solve([eq1,eq2],[c1,c2])
s=c[c1]*exp(r[0]*t)+c[c2]*exp(r[1]*t)+xts # time‑response function for x1
# generate predicted 1‑AGO series, then recover the original series by IAGO
xt1=[s.subs({t:i}) for i in range(n)]
xh0=np.r_[xt1[0], np.diff(xt1)]
cha=x0-xh0 # residuals
delta=np.abs(cha)/x0 # relative errors
print(xt1, '\n------------\n', xh0, '\n------------\n', cha, '\n--------------\n', delta)
</code>
An alternative implementation lets sympy's dsolve handle the whitening equation and boundary conditions directly, after first checking the level ratios of the data:
<code>import numpy as np
import sympy as sp
x0=np.array([41,49,61,78,96,104])
n=len(x0)
lamda=x0[:-1]/x0[1:] # level ratios
rang=[lamda.min(), lamda.max()] # observed range of level ratios
theta=[np.exp(-2/(n+1)), np.exp(2/(n+1))] # admissible range for grey modeling
print('level-ratio range:', rang, 'admissible range:', theta)
x1=np.cumsum(x0)
z=0.5*(x1[1:]+x1[:-1])
B=np.vstack([-x0[1:], -z, np.ones(n-1)]).T
u=np.linalg.pinv(B)@np.diff(x0) # least‑squares parameters
print('Parameters u:', u)
sp.var('t'); sp.var('x', cls=sp.Function)
eq=x(t).diff(t,2)+u[0]*x(t).diff(t)+u[1]*x(t)-u[2] # whitening equation
s=sp.dsolve(eq, ics={x(0): x0[0], x(n-1): x1[-1]}) # boundary conditions x1(0), x1(n-1)
xt=s.args[1] # right-hand side: time-response function
print('xt =', xt)
# numerical prediction
fxt=sp.lambdify(t, xt, 'numpy')
yuce1=fxt(np.arange(n))
yuce=np.hstack([x0[0], np.diff(yuce1)])
epsilon=x0-yuce[:n]
delta=abs(epsilon/x0)
print('Relative error:', np.round(delta*100,2))
</code>
Reference
Shi Shou‑kui, Sun Xi‑jing. Python Mathematics Experiment and Modeling.
Model Perspective
Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".