
Unlocking Grey System Theory: Modeling Uncertain Systems with Minimal Data

This article introduces Grey System Theory, explains its origins, core concepts of partial information, advantages over black‑box models, data accumulation/reduction techniques, and demonstrates a Python case study that improves forecasting accuracy for short‑term exponential trends.

1 Grey System Theory

Grey system theory studies analysis, modeling, prediction, decision and control of systems with partially known and partially unknown information. It extends general systems theory, information theory and control theory to social, economic and ecological abstract systems, using classical mathematics to handle incomplete information.

A grey system is an information-incomplete system: some of its data are known and some are unknown.

The theory originated in the early 1980s. Professor Deng Julong first used the term “grey system” in 1981 and published foundational papers in 1982, gaining international attention, notably from Harvard professor R. W. Brockett.

The concept evolved from W. R. Ashby’s “black box” idea. While the black‑box approach studies systems solely from external input‑output relations, grey‑system theory investigates internal structures and parameters to make better use of known information.

Grey systems are characterized by partial knowledge; the theory seeks to predict unknown aspects from known data. Together with probability theory and fuzzy mathematics, it forms one of three main methods for dealing with uncertainty, especially effective with limited data.

Grey prediction models use generated data sequences (e.g., accumulated series) rather than raw data, revealing approximate exponential laws for modeling.

Advantages: only a few data points (often four) are needed; differential equations can capture system essence with high accuracy; transformed sequences become more regular, simplifying computation and verification.

Disadvantages: suitable mainly for short‑ to medium‑term forecasts and for systems exhibiting exponential growth.
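The canonical grey forecasting model is GM(1,1): the series is accumulated once and fitted to the whitened differential equation dx⁽¹⁾/dt + a·x⁽¹⁾ = b, with the parameters a and b estimated by least squares over the background (mean) values of the accumulated series. The case study below takes a simpler route and fits an exponential with curve_fit, but a minimal GM(1,1) sketch (the function name is illustrative) looks like this:

```python
import numpy as np

def gm11(x0):
    """Fit a GM(1,1) grey model to a short positive series and return
    the reduced (original-scale) fitted values plus parameters a, b."""
    x1 = np.cumsum(x0)                               # 1-AGO
    z1 = 0.5 * (x1[1:] + x1[:-1])                    # background (mean) values
    B = np.column_stack((-z1, np.ones(len(z1))))
    y = x0[1:]
    a, b = np.linalg.lstsq(B, y, rcond=None)[0]      # grey parameters by least squares
    k = np.arange(len(x0))
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x0_hat = np.r_[x1_hat[0], np.diff(x1_hat)]       # reduce back to original scale
    return x0_hat, a, b

# Four points suffice in principle; the six sales figures from the case study are used here
x0 = np.array([5.081, 4.611, 5.1177, 9.3775, 11.0574, 11.0524])
x0_hat, a, b = gm11(x0)
print("development coefficient a =", a, " grey input b =", b)
```

A negative development coefficient a corresponds to a growing series, which is what the sales data exhibit.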

2 Data Accumulation and Reduction

In practice, random disturbances cause high volatility in data. To address this, the concepts of data accumulation (cumulative generation) and reduction are introduced. An original data series is transformed into a first‑order accumulated series, and higher‑order accumulations can be defined similarly. The inverse operation, reduction, restores the original series.
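Concretely, first-order accumulation is a cumulative sum and reduction is the first difference; applied to the sales series from the case study below, the round trip recovers the original data:

```python
import numpy as np

# Original (volatile) series -- the sales data from the case study below
x0 = np.array([5.081, 4.611, 5.1177, 9.3775, 11.0574, 11.0524])

# First-order accumulated series (1-AGO): x1[k] = x0[0] + ... + x0[k]
x1 = np.cumsum(x0)

# Reduction (inverse accumulation): first differences restore the original
x0_back = np.r_[x1[0], np.diff(x1)]
print(np.allclose(x0_back, x0))  # prints True
```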

3 Case Study

3.1 Problem

We are given six years of annual sales data for a product: 5.081, 4.611, 5.1177, 9.3775, 11.0574, 11.0524. The task is to model and forecast the series.

3.2 Analysis

Applying ordinary least squares linear fitting to the raw series yields a maximum relative error of 35.85%, indicating a poor fit.

3.3 Modeling

The original series is cumulatively summed once, then fitted using an exponential model. The fitted parameters are obtained and the predicted accumulated series is back‑converted (reduced) to the original scale.

<code>import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

t0 = np.arange(1, 7)
x0 = np.array([5.081, 4.611, 5.1177, 9.3775, 11.0574, 11.0524])

# Direct linear least-squares fit on the original series
p = np.polyfit(t0, x0, 1)
xh1 = np.polyval(p, t0)                   # linear prediction
delta1 = np.abs(xh1 - x0) / x0            # relative error
print(f"Max relative error, linear fit: {delta1.max():.2%}")

# Grey approach: accumulate once, fit an exponential, then reduce
x1 = np.cumsum(x0)                        # first-order accumulated series
f = lambda t, a, b, c: a * np.exp(b * t) + c
para, cov = curve_fit(f, t0, x1)
xh21 = f(t0, *para)                       # prediction of the accumulated series
xh22 = np.r_[xh21[0], np.diff(xh21)]      # reduced back to the original scale
delta2 = np.abs(xh22 - x0) / x0           # relative error
print("Fitted parameters:", para)
print(f"Max relative error, grey fit: {delta2.max():.2%}")

plt.rc('font', size=15)
plt.subplot(121)
plt.plot(t0, x0, 's', label='Original data')
plt.plot(t0, xh1, '*-', label='Linear fit')
plt.legend(loc='upper left')
plt.subplot(122)
plt.plot(t0, x1, 'o', label='Accumulated data')
plt.plot(t0, xh21, 'p-', label='Fit after accumulation')
plt.legend()
plt.show()
</code>

3.4 Verification

The fitted curve after accumulation matches the accumulated data closely, with a maximum relative error of 24.15%, a significant improvement over direct linear fitting.

Fitted parameters: [ 15.3914543    0.23111521 -14.76199989]

References

Si Shougui, Sun Xijing. Python Mathematics Experiment and Modeling.

Tags: Python, modeling, forecasting, data-accumulation, grey-system
Written by

Model Perspective

Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".