Turning a General LLM into an E‑commerce Risk‑Detection Expert: A Step‑by‑Step Prompt Engineering Guide

The article recounts how a risk‑control algorithm engineer transformed a generic large language model into a specialized e‑commerce fraud detector by iteratively designing prompts, injecting business rules, structuring I/O, and introducing a dual‑hypothesis decision framework to achieve accurate, automated risk analysis.


Introduction: When an Algorithm Engineer Meets an Unpredictable AI

As a risk‑control algorithm engineer, I spend my days sifting through massive data and complex models in search of hidden risk signals. When I recently introduced a large language model (LLM) into my workflow, I hit many challenges and a few breakthroughs. This post retraces the step‑by‑step "prompt engineering mindset" that turned a generic LLM into an AI expert capable of precisely identifying complex e‑commerce risk patterns.

Stage 1 – From 0 to 1: Giving the AI an Operations Manual

To make the LLM useful, I first had to convey my implicit risk‑control knowledge explicitly. The key actions were:

Role‑Playing: Begin the prompt with a statement such as "You are a senior e‑commerce risk‑control expert..." to set the AI’s identity.

Defining Analysis Dimensions: List the factors I normally examine, e.g., recipient information, delivery address, product combination and value.

Structured I/O: Use CSV for input to pack many orders efficiently, and require the AI to return results in strict JSON format for easy downstream parsing.
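The three V1 ingredients above can be sketched as a single prompt builder. This is a minimal illustration, not the production prompt: the CSV field names, the wording, and the JSON schema are all assumptions standing in for the real ones.

```python
import csv
import io

def build_v1_prompt(orders):
    """Assemble the V1 prompt: expert role, analysis dimensions,
    CSV-packed orders, and a strict JSON output contract.
    Field names and wording are illustrative only."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["order_id", "recipient", "address", "items", "total", "discount"]
    )
    writer.writeheader()
    writer.writerows(orders)

    return (
        "You are a senior e-commerce risk-control expert.\n"
        "Analyze each order along these dimensions: recipient information, "
        "delivery address, product combination and value.\n"
        "Input orders (CSV):\n"
        f"{buf.getvalue()}\n"
        "Return ONLY a JSON array, one object per order: "
        '{"order_id": str, "risk": "high"|"low", "reason": str}'
    )

orders = [{"order_id": "A1", "recipient": "Zhang San", "address": "Bldg 3, ...",
           "items": "phone x1", "total": "4999", "discount": "0.5"}]
prompt = build_v1_prompt(orders)
```

Packing many orders into one CSV block amortizes the instruction overhead across the batch, and the strict JSON contract lets downstream code parse the response without fragile regexes.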

This first version (V1) automated the workflow but still behaved like a novice analyst, producing many false positives.

Stage 2 – Injecting Business Knowledge: Specific‑Problem‑Specific‑Analysis

To reduce false alarms, I added exemption rules and background knowledge:

Challenge 1 – High Discount ≠ Risk: Clarify that new‑user first‑order subsidies make high discounts normal.

Challenge 2 – Random Strings ≠ Fake Names: Explain that system‑generated user IDs are harmless; focus on the actual recipient name.

Challenge 3 – Zero‑Price Items, Nicknames, Benefit Products: Add rules stating that free gifts, nicknames, and promotional items are not risk indicators.
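One way to layer these exemptions onto the V1 prompt is to append them as an explicit "do not flag" section. The rule wording below is my paraphrase of the three challenges, not the exact production text.

```python
# Exemption rules distilled from the three challenges above (wording is illustrative).
EXEMPTION_RULES = [
    "A high discount alone is NOT a risk signal: new-user first-order "
    "subsidies legitimately produce steep discounts.",
    "Random-looking strings in user IDs are system-generated and harmless; "
    "judge only the actual recipient name.",
    "Zero-price free gifts, nicknames, and promotional benefit products "
    "are NOT risk indicators by themselves.",
]

def with_exemptions(base_prompt: str) -> str:
    """Append the exemption section to an existing prompt (V1 -> V2)."""
    rules = "\n".join(f"- {r}" for r in EXEMPTION_RULES)
    return f"{base_prompt}\n\nExemption rules (do NOT flag these as risk):\n{rules}"
```

Keeping the exemptions in a separate list makes them easy to review and extend as new false-positive patterns surface.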

These additions dramatically lowered the false‑positive rate, turning the AI into a competent mid‑level analyst.

Stage 3 – Deepening Analysis: Teaching the AI to Think Like a Detective

After solving false‑positive issues, I aimed to improve the AI’s insight:

Bottleneck 1 – Ignoring Low‑Value Risks: Instruct the model that large volumes of low‑value, fast‑moving goods (e.g., hundreds of beverage cases) can signal small‑merchant stock‑arbitrage.

Bottleneck 2 – Lack of Consistency View: Introduce the concept of "shopping‑cart consistency"—identical or highly similar carts across accounts suggest scripted or organized behavior.
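The "shopping‑cart consistency" signal can also be pre-computed in code before the LLM ever sees the orders, for example as pairwise Jaccard similarity over cart contents. The 0.8 threshold here is an assumption for illustration.

```python
from itertools import combinations

def jaccard(cart_a: set, cart_b: set) -> float:
    """Overlap between two shopping carts as a score in [0, 1]."""
    return len(cart_a & cart_b) / len(cart_a | cart_b)

def consistent_pairs(carts: dict, threshold: float = 0.8):
    """Flag account pairs whose carts are identical or highly similar --
    a cheap pre-filter before asking the LLM to reason about group behavior."""
    return [
        (a, b)
        for a, b in combinations(carts, 2)
        if jaccard(carts[a], carts[b]) >= threshold
    ]

carts = {
    "acct_1": {"sku_100", "sku_200", "sku_300"},
    "acct_2": {"sku_100", "sku_200", "sku_300"},   # identical cart
    "acct_3": {"sku_900"},
}
print(consistent_pairs(carts))  # [('acct_1', 'acct_2')]
```

Surfacing these pairs in the prompt gives the model concrete cross-account evidence instead of forcing it to spot the pattern from raw rows.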

These upgrades elevated the AI to a senior analyst capable of detecting group‑level fraud patterns.

Stage 4 – Ultimate Evolution: Enabling the AI to Make Judge‑Like Decisions

The final challenge was distinguishing genuine fraud rings from benign clusters caused by marketing campaigns. I introduced a "dual‑hypothesis decision framework":

Hypothesis A: Coordinated risk ring.

Hypothesis B: Benign feature group.

The AI must first search for "hard links"—definitive evidence such as identical non‑public delivery addresses. If hard links exist, classify as a risk ring; otherwise, evaluate whether behavior can be explained by legitimate marketing.

Few‑shot examples for both scenarios were provided to guide the AI’s judgment.
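The decision logic of the dual‑hypothesis framework can be mirrored in code: first search for hard links, then branch. The sketch below uses an identical normalized delivery address as the hard link; the naive normalization and the label strings are assumptions for illustration.

```python
from collections import defaultdict

def hard_link_groups(orders):
    """Group orders by a normalized delivery address -- a 'hard link' in the
    dual-hypothesis framework. Normalization here is deliberately naive."""
    groups = defaultdict(list)
    for o in orders:
        key = o["address"].lower().replace(" ", "")
        groups[key].append(o["order_id"])
    return {k: v for k, v in groups.items() if len(v) > 1}

def classify(orders):
    """Hypothesis A (risk ring) if any hard link exists; otherwise defer to
    Hypothesis B and let the LLM weigh legitimate marketing explanations."""
    if hard_link_groups(orders):
        return "hypothesis_A_risk_ring"
    return "hypothesis_B_check_marketing_context"
```

In the prompt itself, the same branch order matters: instructing the model to look for hard links before entertaining benign explanations keeps it from rationalizing away definitive evidence.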

Summary of Prompt Engineering Principles

Key takeaways:

Start by mimicking expert thinking, then abstract scattered rules into a reusable framework.

Rules form the skeleton; business context and common sense are the flesh that gives the AI real intelligence.

Negative examples (what is NOT risk) are as important as positive ones for reducing false alarms.

Move from simple instructions to teaching a thinking model, such as the dual‑hypothesis framework, so the AI can independently analyze and decide.

Prompt engineering thus becomes a bridge between domain expertise and general AI, enabling engineers to empower LLMs to solve previously intractable problems.

Tags: Artificial Intelligence, Machine Learning, LLM, Prompt Engineering, Risk Detection
Written by JD Retail Technology

Official platform of JD Retail Technology, delivering insightful R&D news and a deep look into the lives and work of technologists.