
Designing, Building, and Evaluating an Action Model for Decision Optimization

This article explains how to design, construct, and evaluate an Action Model that quantifies the impact of business decisions on long‑term profit, covering variable selection, model assumptions, algorithm choices, data challenges, and practical evaluation methods such as offline metrics and A/B testing.

JD Tech Talk

In the previous article, "I Predicted 14 Million Futures, the Optimal Decision Is Unique (Part 1)", we introduced the Action Model, a framework that quantifies the impact of decisions and explained how it yields the optimal decision. This second part focuses on how to actually construct the Action Model, a task that requires both business insight and algorithmic experience.

Model Design

During the design phase we must decide how to instantiate the variables that appeared in the abstract formulas of Part 1. The optimization problem comprises a long‑term profit formula, the Action variables we can control, the feasible domain of those actions, the sub‑goals the model must predict, and the invariant factors (e.g., operating costs) that affect profit but do not change with the Action.
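To make these pieces concrete, here is a minimal sketch of the decision loop they imply: predict the sub‑goals for each feasible Action, plug them into the profit formula, and pick the argmax. Every function body and number below is a hypothetical stand‑in, not the article's actual model.

```python
# Sketch of the Action Model decision loop. The predictor, profit
# coefficients, and action grid are all illustrative assumptions.

def predict_subgoals(user, action):
    """Hypothetical sub-goal predictor: returns (response rate, default rate)."""
    limit, pricing = action
    response = max(0.0, min(1.0, 0.3 + 0.00001 * limit - 0.5 * pricing))
    default = max(0.0, min(1.0, 0.02 + 0.000001 * limit))
    return response, default

def long_term_profit(user, action):
    """Combine predicted sub-goals with invariant factors into profit."""
    response, default = predict_subgoals(user, action)
    limit, pricing = action
    funding_cost, loss_rate, operating_cost = 0.04, 0.6, 5.0  # invariants
    utilization = 0.5  # assumed constant here for simplicity
    revenue = response * limit * utilization * (pricing - funding_cost)
    loss = default * limit * loss_rate
    return revenue - loss - operating_cost

def optimal_action(user, feasible_actions):
    """Enumerate the feasible domain and pick the profit-maximizing Action."""
    return max(feasible_actions, key=lambda a: long_term_profit(user, a))

feasible = [(limit, pricing) for limit in (5000, 10000, 20000)
            for pricing in (0.08, 0.12, 0.18)]
best = optimal_action({"user_id": 1}, feasible)
```

Enumerating a discretized feasible domain like this is what makes the "predict many futures, pick one" framing tractable.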

Model Assumptions

The Action Model rests on two assumptions:

1. Decisions have an objective, regular influence on the performance we aim to predict.
2. That influence varies across individuals, which is what makes individual-level prediction worthwhile.

Formula & Variable Definition

The long‑term profit formula must be defined according to the specific business; in complex scenarios it can become very intricate, so simplifications are often necessary. For example, in a credit‑lending context the profit can be approximated as:

Profit = ResponseRate * Limit * Utilization * (Pricing - FundingCost) - DefaultRate * DefaultBalance * LossRate - OperatingCost
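Translated directly into code, the approximation reads as follows; the input values are illustrative, not real business figures.

```python
# Direct translation of the approximate credit-lending profit formula.

def lending_profit(response_rate, limit, utilization, pricing,
                   funding_cost, default_rate, default_balance,
                   loss_rate, operating_cost):
    # Interest margin earned on drawn balances of responding users.
    interest_margin = response_rate * limit * utilization * (pricing - funding_cost)
    # Expected credit loss from defaulting users.
    expected_loss = default_rate * default_balance * loss_rate
    return interest_margin - expected_loss - operating_cost

profit = lending_profit(response_rate=0.4, limit=10000, utilization=0.6,
                        pricing=0.15, funding_cost=0.05, default_rate=0.03,
                        default_balance=6000, loss_rate=0.7, operating_cost=20)
# 2400 * 0.10 = 240 margin, 126 expected loss, 20 operating cost -> 94
```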

Action variables must be actionable and satisfy the two assumptions above (e.g., credit limit and pricing). Continuous variables like limit can be discretized into bins based on business logic, ensuring each bin has sufficient sample size.
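Discretizing a continuous Action such as the limit can be as simple as mapping each value to a business-defined bin and checking that no bin is too sparse. The bin edges and sample-size threshold below are hypothetical.

```python
# Discretize a continuous Action (credit limit) into right-open bins and
# verify each bin retains a sufficient sample size. Edges are illustrative.

import random
from bisect import bisect_right
from collections import Counter

random.seed(0)
limits = [random.choice(range(1000, 50001, 1000)) for _ in range(5000)]

bin_edges = [5000, 10000, 20000, 35000]  # chosen by business logic

def limit_bin(limit):
    """Index of the bin containing `limit` (0 .. len(bin_edges))."""
    return bisect_right(bin_edges, limit)

counts = Counter(limit_bin(v) for v in limits)
min_samples = 200
sparse_bins = [b for b, n in counts.items() if n < min_samples]
```

Sparse bins would be merged with neighbors before training, so that every Action level has enough observations to estimate its effect.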

Analysis Pitfalls

Random testing in credit is costly, and without it the data may be biased. For instance, higher limits often correlate with lower default rates because high‑limit users tend to be higher‑quality. This creates a spurious monotonic relationship that can mislead models. The following figures illustrate the ideal unbiased data versus the biased real‑world data.

When random experiments are unavailable, one can seek homogeneous groups where users are similar but receive different limits, or apply domain‑adaptation techniques to mitigate bias.

Model Building

After defining the variables, we train a model that takes both user features and Action as inputs and predicts the sub‑goals. Suitable algorithms should heavily incorporate Action information and allow cross‑features. Simple models can use Factorization Machines (with first‑order terms removed); more complex models may use neural networks. Tree‑based models require modifications to ensure Action variables are selected and properly binned.
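As a sketch of the FM variant mentioned above, here is the standard second-order Factorization Machine forward pass with first-order terms removed, so the score comes entirely from pairwise crosses, including crosses between user features and the one-hot Action bin. The latent weights and feature vector are illustrative, not learned.

```python
# Second-order FM forward pass (first-order terms removed), using the
# O(k*n) reformulation: sum_{i<j} <v_i, v_j> x_i x_j
#   = 0.5 * sum_f [ (sum_i x_i v_if)^2 - sum_i (x_i v_if)^2 ]

def fm_second_order(x, v):
    """x: feature vector; v: one latent vector per feature."""
    k = len(v[0])
    total = 0.0
    for f in range(k):
        s = sum(x[i] * v[i][f] for i in range(len(x)))
        sq = sum((x[i] * v[i][f]) ** 2 for i in range(len(x)))
        total += 0.5 * (s * s - sq)
    return total

# Two user features followed by a two-bin one-hot Action (bin 1 active).
x = [1.0, 0.5, 0.0, 1.0]
v = [[0.1, 0.2], [0.3, -0.1], [0.2, 0.0], [-0.2, 0.4]]
score = fm_second_order(x, v)
```

Because the Action enters only through crosses, changing the active Action bin changes the score through every user-feature interaction, which is exactly the heterogeneous effect the model needs to capture.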

It is crucial that the model captures the heterogeneous effect of Action across users; otherwise the model will underestimate the impact of Action and become ineffective. Regularization (norm constraints, dropout, priors) can help prevent over‑fitting to a single label per user.

Model Evaluation

Once training finishes, offline metrics such as RMSE, MSE, AUC, or F1 can be computed, but they may not reflect real business value. If random test data exist, compare the predicted and actual profit trends across different Actions. Without random data, construct quasi‑experimental groups as described earlier. Additional evaluation dimensions include model discrimination, stability, and interpretability of the predicted trends.
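One simple trend check on random-test data: group records by Action, compare the mean predicted and mean observed outcome per Action, and verify the two trends move in the same direction between consecutive Actions. The records below are synthetic.

```python
# Offline trend check: does the predicted outcome move with the observed
# outcome as the Action changes? (records: action_bin, actual, predicted)

from collections import defaultdict

records = [
    (0, 0.10, 0.12), (0, 0.14, 0.11), (1, 0.20, 0.19),
    (1, 0.24, 0.22), (2, 0.33, 0.30), (2, 0.29, 0.31),
]

actual, pred = defaultdict(list), defaultdict(list)
for a, y, p in records:
    actual[a].append(y)
    pred[a].append(p)

mean = lambda xs: sum(xs) / len(xs)
actions = sorted(actual)
actual_trend = [mean(actual[a]) for a in actions]
pred_trend = [mean(pred[a]) for a in actions]

# True iff every consecutive Action step moves both trends the same way.
same_direction = all(
    (a2 - a1) * (p2 - p1) > 0
    for (a1, a2), (p1, p2) in zip(zip(actual_trend, actual_trend[1:]),
                                  zip(pred_trend, pred_trend[1:]))
)
```

A model can score well on RMSE yet fail this check, which is why trend agreement across Actions is a better proxy for business value than pointwise error alone.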

Ultimately, the most reliable assessment comes from online A/B testing: randomly split users into a control group (existing decision policy) and a treatment group (optimal Action suggested by the model) and compare long‑term profit.
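The A/B readout itself reduces to a randomized split and a difference of mean profits. Everything below, including the simulated lift, is synthetic; a real analysis would also attach a significance test to the difference.

```python
# Minimal A/B readout: random split, then mean long-term profit in the
# treatment group (model's Action) minus the control group (current policy).

import random
random.seed(42)

user_ids = list(range(1000))
random.shuffle(user_ids)
control_ids = set(user_ids[:500])

def observed_profit(uid, treated):
    """Hypothetical per-user profit after the experiment window."""
    base = random.gauss(100, 20)
    return base + (8 if treated else 0)  # assume the model adds profit

profits = {u: observed_profit(u, u not in control_ids) for u in user_ids}
treat_ids = [u for u in user_ids if u not in control_ids]
control_mean = sum(profits[u] for u in control_ids) / len(control_ids)
treatment_mean = sum(profits[u] for u in treat_ids) / len(treat_ids)
lift = treatment_mean - control_mean
```

Because the split is random, any persistent lift can be attributed to the model's Actions rather than to differences in user quality, which is exactly what the biased observational data could not provide.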

Final Remarks

The Action Model quantifies how strategies affect user behavior, and solving the counterfactual inference problem is its core challenge. While the model has broad applications, it demands high‑quality data and careful algorithmic design, from hypothesis validation and variable selection to model construction.

Tags: machine learning, data analysis, predictive modeling, decision optimization, Action Model
Written by

JD Tech Talk

Official JD Tech public account delivering best practices and technology innovation.
