Fundamentals · 9 min read

Why We Pretend to Win: The Hidden Math Behind Evaluation Bias

This article explores how people game evaluation systems by redefining variables, adjusting weights, and shifting reference frames until losses read as wins. It traces the psychological and statistical biases behind the illusion and argues for honest, multi-dimensional, transparent modeling as the basis of genuine assessment.


“Winning Study” – An Illusory Evaluation Phenomenon

“Winning” is a seemingly absurd yet highly realistic cognitive phenomenon. You may have seen news about a country celebrating ten days of victory after a disastrous conflict, or a government claiming twenty wins in twenty days of a president’s term, and similar claims by companies or individuals.

Absurd as it appears, the phenomenon follows a consistent logic: by manipulating or mismatching the evaluation system, using different metrics and different weights, one manufactures the illusion of "winning".

After the laughter, we must confront the question: what does “winning” really mean, what criteria are used for evaluation, and do we truly understand the success or failure we pursue?

This is not only a psychological issue but also a mathematical one—a problem of evaluation models.

Evaluation as Modeling

Every day we evaluate projects, negotiations, policies, or people. Evaluation compresses complex reality into a comparable indicator, which in mathematics is called modeling.

A typical multi-metric evaluation model is a weighted sum, S = w₁x₁ + w₂x₂ + … + wₙxₙ, where each variable xᵢ measures performance on one dimension and each weight wᵢ reflects how much that dimension matters.
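The weighted-sum model above can be sketched in a few lines. The metric names and numbers here are hypothetical, chosen only to illustrate the mechanics:

```python
# A minimal weighted-sum evaluation: S = sum(w_i * x_i).
# Metric names and values are illustrative assumptions, not real data.
def evaluate(metrics: dict, weights: dict) -> float:
    """Compress several dimensions into one comparable score."""
    assert set(metrics) == set(weights), "every variable needs a weight"
    return sum(weights[k] * metrics[k] for k in metrics)

outcome = {"territory": 0.1, "economy": 0.2, "morale": 0.9}
weights = {"territory": 0.4, "economy": 0.4, "morale": 0.2}
print(round(evaluate(outcome, weights), 2))  # 0.3
```

Everything interesting happens before this function is ever called: in choosing which keys appear in `metrics` and what numbers go into `weights`.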

Who decides which variables to include?

Who assigns the magnitude of each weight?

Who defines the standards and reference groups?

These three questions determine everything.

Variable and Weight Selection

The first step of "winning study" is redefining variables. In a war with no territorial gain, one claims to have won "dignity"; after a business loss, to have won "brand influence"; after a personal setback, to have won "perspective". These statements may be literally true, but they selectively emphasize favorable dimensions while ignoring the unfavorable ones.

Next, weights are deliberately controlled. Even if some variables are unfavorable, their impact can be diluted by assigning low weights, allowing a claim of “spiritual victory” even when the economy collapses.
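The dilution trick can be shown numerically. Here the same hypothetical outcome is scored under two weightings; nothing about the facts changes, yet the verdict flips:

```python
# One hypothetical outcome, two weightings: diluting the unfavorable
# variables flips a loss into a "win" without changing a single fact.
outcome = {"territory": 0.1, "economy": 0.2, "dignity": 0.9}

honest = {"territory": 0.50, "economy": 0.40, "dignity": 0.10}
spun   = {"territory": 0.05, "economy": 0.05, "dignity": 0.90}

def score(weights: dict) -> float:
    return sum(weights[k] * outcome[k] for k in outcome)

print(round(score(honest), 3))  # 0.22  -> reads as a loss
print(round(score(spun), 3))    # 0.825 -> a "spiritual victory"
```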

Perspective Differences

Evaluation also depends on perspective. From a domestic view, an event may be “consolidating internal unity”; from an international view, it may be “isolation and loss”; historically it may be “a major setback”; from a propaganda angle, it may be “strategic shift”. Each perspective selects a different reference frame, akin to choosing coordinate systems in mathematical modeling.

For instance, a country’s 0.2% GDP growth looks ordinary against the global average, but compared to a previous –3% it appears “remarkable”. Similarly, a student’s score rising from 40 to 50 is still low, yet claiming a “25% improvement” feels like progress.
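The article's two cases can be computed directly; the numbers are identical, only the reference frame changes:

```python
# Same facts, different reference frames (numbers from the article).
score_before, score_after = 40, 50
print(score_after - score_before)  # 10 points gained, still far from passing
print(f"{(score_after - score_before) / score_before:.0%}")  # 25%: sounds like progress

gdp, global_avg, own_past = 0.002, 0.030, -0.030
print(f"{gdp - global_avg:+.3f}")  # -0.028: ordinary against the world
print(f"{gdp - own_past:+.3f}")    # +0.032: "remarkable" against its own past
```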

Sample Bias and Information Manipulation

Statistics warns of “selection bias”: choosing only favorable data to alter conclusions. Examples include promoting success stories while burying failures, substituting averages for medians to mask unfairness, using vague metrics like “satisfaction index”, or replacing real experiences with model indices.

An entrepreneur may claim a three‑fold increase in “active users”, but the definition of “active” may have shifted from “used three consecutive days” to “logged in at least once”. The model itself does not lie; the definition can deceive.
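Both tricks, the mean-for-median swap and the definition shift, are easy to reproduce. The incomes and login counts below are hypothetical:

```python
from statistics import mean, median

# Mean vs median on hypothetical incomes: one outlier inflates the average.
incomes = [30, 32, 35, 38, 40, 45, 400]
print(round(mean(incomes), 1), median(incomes))  # 88.6 38

# "Active users" under two definitions of "active" (hypothetical logins).
logins_per_user = {"a": 5, "b": 1, "c": 0, "d": 1, "e": 3, "f": 1}
strict = sum(n >= 3 for n in logins_per_user.values())  # e.g. frequent use
loose  = sum(n >= 1 for n in logins_per_user.values())  # logged in at least once
print(strict, loose)  # 2 5
```

The code computes both numbers correctly; the deception lives entirely in which definition is quoted in the press release.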

Goal Shift

When the original goal is unattainable, “winning study” triggers goal substitution. War aims shift from “recapturing territory” to “deterring the enemy”; businesses shift from “increasing profit” to “growing active users”; individuals shift from “improving ability” to “appearing impressive”. The shifted goal is often vaguer and harder to refute, making it easier to claim a win.

Mathematically, this is akin to changing the objective function of an optimization problem without disclosing the simplification’s cost.
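A toy illustration of this objective-function swap, with made-up strategies: changing what `max` optimizes silently changes which strategy counts as the "win".

```python
# Two objective functions over the same hypothetical strategies.
strategies = [
    {"name": "A", "profit": 10, "active_users": 100},
    {"name": "B", "profit": -5, "active_users": 900},
]
winner_by_profit = max(strategies, key=lambda s: s["profit"])["name"]
winner_by_users  = max(strategies, key=lambda s: s["active_users"])["name"]
print(winner_by_profit, winner_by_users)  # A B
```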

Why We Fall into “Winning Study”

Evaluation is painful; admitting failure requires strong psychological resilience. Self‑comfort and distorted victories are low‑cost cognitive defenses. Cognitive biases such as confirmation bias, hindsight bias, optimism bias, and collective‑identity bias reinforce the phenomenon, encouraging group participation and the construction of an illusion of winning.

What Kind of Evaluation Should We Pursue?

To counter “winning study”, we need more honest modeling. A good evaluation system should have:

Multi‑dimensional perspectives: short‑term and long‑term, subjective and objective.

Transparent weights: based on traceable rationale, not whims.

Clear reference groups: who we compare with and why.

Explicit variable definitions: no arbitrary redefinition.

Allowance for negative feedback: accepting “no win” or “failure” as growth foundations.
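The checklist above can be made concrete by forcing definitions, weights, and the reference group to travel with the score instead of staying implicit. This is a sketch under assumed field names, not a prescribed design:

```python
from dataclasses import dataclass

# A sketch of an "honest" evaluation record: definitions, weight rationale,
# and the baseline are explicit and checked. All field names are illustrative.
@dataclass
class Evaluation:
    definitions: dict  # variable -> what it means, fixed up front
    weights: dict      # variable -> weight, with a traceable rationale
    baseline: str      # the reference group we compare against, and why
    metrics: dict      # variable -> measured value

    def score(self) -> float:
        assert set(self.metrics) == set(self.weights) == set(self.definitions)
        assert abs(sum(self.weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
        return sum(self.weights[k] * self.metrics[k] for k in self.metrics)

ev = Evaluation(
    definitions={"profit": "net income, audited", "users": "logged in 3+ days/week"},
    weights={"profit": 0.6, "users": 0.4},
    baseline="same-size competitors, trailing 12 months",
    metrics={"profit": 0.3, "users": 0.5},
)
print(round(ev.score(), 2))  # 0.38
```

Because redefining a variable or reweighting it now requires editing a visible record, the "winning study" moves described earlier leave a paper trail.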

In other words, effective evaluation is honest modeling—not propaganda or self‑delusion—used to improve reality, face truth, and clarify problems.

Written by Model Perspective

Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".
