
Prompt Engineering: Concepts, Evolution, Techniques, and JD Logistics Application

This article explains what prompt engineering is, traces its evolution from early command-based NLP to modern adaptive and multimodal prompting, surveys strategies such as zero-shot, few-shot, Chain-of-Thought, and Auto-CoT prompting, and presents a JD Logistics case study that applies these methods to product-type classification, with code examples.

JD Tech

What is Prompt Engineering

Prompt Engineering is the design and optimization of prompts or instructions that guide large language models (LLMs) to produce accurate and useful responses. Clear, specific prompts enable the model to understand user intent and generate relevant output.

Key Aspects of Prompt Engineering

1. Define the goal: decide what task the model should accomplish.

2. Design concise, informative prompts that contain all necessary information.

3. Iterate, test, and refine prompts to improve performance.

4. Anticipate unexpected model behavior and prepare fallback strategies.

How Prompt Engineering Emerged

The field evolved through several stages:

Pre‑2017: Simple command‑based NLP and template‑based Q&A.

2017‑2018: Introduction of Seq2Seq models and early pre‑trained models (e.g., GPT‑1).

2019‑2020: GPT‑2 and BERT sparked interest in prompt design.

2020‑2021: GPT‑3 and few‑shot/zero‑shot learning highlighted the importance of prompts.

2021‑2023: Prompt Tuning, automated tools, and domain‑specific prompts.

2024 onward: Adaptive prompt generation, multimodal prompts, and human‑model co‑optimization.

Technical Techniques

Zero‑shot and Few‑shot Prompting

Zero‑shot provides only the task description; few‑shot adds a few examples to guide the model.
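The difference can be sketched as a small helper that builds a chat-message list. `build_messages`, the sentiment task, and the labeled examples below are illustrative assumptions, not taken from the article:

```python
def build_messages(task, query, examples=None):
    """Build a chat prompt: zero-shot when examples is None,
    few-shot when (input, label) pairs are supplied."""
    messages = [{"role": "system", "content": task}]
    for example_input, example_label in examples or []:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_label})
    messages.append({"role": "user", "content": query})
    return messages

# Zero-shot: only the task description and the query.
zero_shot = build_messages(
    "Classify the sentiment as positive or negative.",
    "I love this phone.",
)

# Few-shot: the same task plus a few labeled examples.
few_shot = build_messages(
    "Classify the sentiment as positive or negative.",
    "I love this phone.",
    examples=[("Great battery life.", "positive"),
              ("Screen broke in a week.", "negative")],
)
```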

Chain‑of‑Thought (CoT) Prompting

CoT adds step‑by‑step reasoning in the prompt, encouraging the model to produce intermediate reasoning before the final answer.
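A minimal CoT prompt might look like the following; the logistics arithmetic example is invented for illustration, and the "Let's think step by step" cue follows the common zero-shot-CoT convention:

```python
# One worked exemplar with explicit intermediate reasoning,
# followed by the new question the model should answer the same way.
cot_prompt = (
    "Q: A warehouse has 15 boxes; each box holds 8 items. "
    "27 items are removed. How many items remain?\n"
    "A: Let's think step by step. 15 boxes * 8 items = 120 items. "
    "120 - 27 = 93. The answer is 93.\n\n"
    "Q: A truck carries 12 pallets of 40 parcels each. 55 parcels are "
    "damaged. How many parcels are usable?\n"
    "A: Let's think step by step."
)
```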

Automatic Chain‑of‑Thought (Auto‑CoT)

Auto‑CoT automatically generates reasoning examples via clustering and selection, reducing manual effort.
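A toy sketch of the Auto-CoT idea: partition the question pool into clusters, pick one representative per cluster, and turn each into a zero-shot-CoT demonstration. The crude length-based bucket below stands in for the embedding-based clustering used in practice; `auto_cot_demos` and its clustering key are illustrative assumptions:

```python
from collections import defaultdict

def auto_cot_demos(questions, n_clusters=2):
    """Toy Auto-CoT demo selection: group questions into clusters
    (here via a crude length-based key standing in for embedding
    clustering), then take one representative per cluster and prompt
    it to generate its own step-by-step rationale."""
    clusters = defaultdict(list)
    for q in questions:
        clusters[len(q) // 40 % n_clusters].append(q)
    # One representative per cluster, turned into a zero-shot-CoT demo.
    return [f"Q: {members[0]}\nA: Let's think step by step."
            for members in clusters.values()]
```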

Self‑Consistency

Multiple reasoning paths are generated; the most consistent answer across paths is selected.
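A minimal majority-vote sketch; `self_consistent_answer` and the sample reasoning paths are hypothetical:

```python
from collections import Counter

def self_consistent_answer(samples):
    """Extract the final answer from each sampled reasoning path and
    return the answer that the most paths agree on."""
    answers = [s.rsplit("The answer is", 1)[-1].strip(" .") for s in samples]
    return Counter(answers).most_common(1)[0][0]

# Three independently sampled reasoning paths for the same question.
paths = [
    "15 * 8 = 120; 120 - 27 = 93. The answer is 93.",
    "120 items minus 27 removed leaves 93. The answer is 93.",
    "15 * 8 = 120; 120 - 27 = 92. The answer is 92.",
]
print(self_consistent_answer(paths))  # prints "93"
```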

Logical Chain‑of‑Thought (LogiCoT)

LogiCoT incorporates symbolic logic and proof‑by‑contradiction to verify reasoning steps.

Chain‑of‑Code (CoC)

CoC translates a task into pseudo‑code, guiding the model to solve programming‑related problems.
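A sketch of what a CoC-style prompt might look like; the problem, pseudo-code, and trace are invented for illustration:

```python
# The prompt asks the model to first express the solution as
# pseudo-code, then simulate (trace) its execution to get the answer.
coc_prompt = (
    "Solve the problem by first writing pseudo-code, then tracing it.\n"
    "Problem: How many vowels are in 'logistics'?\n"
    "Pseudo-code:\n"
    "  count = 0\n"
    "  for ch in 'logistics': if ch in 'aeiou': count += 1\n"
    "  return count\n"
    "Trace: o, i, i -> count = 3. The answer is 3."
)
```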

Contrastive CoT (CCoT)

CCoT presents both correct and incorrect reasoning examples, helping the model learn from mistakes.
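A sketch of a contrastive exemplar pairing a valid and an invalid chain for the same question; both chains are invented for illustration:

```python
# One question shown with both a flawed and a correct reasoning chain,
# so the model can contrast them before answering a new question.
ccot_prompt = (
    "Q: A warehouse has 15 boxes of 8 items each; 27 items are removed. "
    "How many remain?\n"
    "Invalid reasoning: 15 + 8 = 23, so 23 - 27 = -4 items. "
    "(Wrong operation: boxes and items per box must be multiplied, not added.)\n"
    "Valid reasoning: 15 * 8 = 120 items, and 120 - 27 = 93. The answer is 93.\n\n"
    "Q: A truck carries 12 pallets of 40 parcels each; 55 are damaged. "
    "How many are usable?\n"
    "A:"
)
```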

EmotionPrompt

Appends emotional cues (e.g., "This is very important to my career") to prompts; such stimuli have been reported to improve the quality and depth of model responses.

Rephrase and Respond (RaR)

First rephrases the user query for clarity, then answers the clarified question.
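The two-step flow can be sketched as below; `rephrase_and_respond`, the `complete` callable, and the stub model are all hypothetical stand-ins for a real LLM API wrapper:

```python
def rephrase_and_respond(question, complete):
    """Two-step RaR: ask the model to rephrase the query for clarity,
    then answer the rephrased version. `complete` is any
    prompt -> text callable (e.g. a wrapper around an LLM API)."""
    rephrased = complete(f"Rephrase this question more precisely: {question}")
    return complete(f"Answer the question: {rephrased}")

# Stub model for demonstration only; a real deployment would call an LLM.
def fake_llm(prompt):
    if prompt.startswith("Rephrase"):
        return "How many days are in February 2024, a leap year?"
    return "29"

print(rephrase_and_respond("days in feb 2024?", fake_llm))  # prints "29"
```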

Case Study: JD Logistics Product Type Classification

In JD Logistics, accurate product‑type (件型) classification is critical for large‑item handling. A set of rules based on product attributes (weight, model code, category) was combined with LLM prompting techniques to improve classification accuracy.

Baseline Prompt (Zero‑shot)

import os
from openai import OpenAI

def classify_product(row, rules_text):
    try:
        client = OpenAI(api_key=os.environ["OPENAI_API_KEY"],
                        base_url=os.environ["OPENAI_API_BASE"])
        # Product description: code, name, weight
        # (note: the source data column is spelled "weigth").
        description = f"商品编码:{row['goods_code']},描述:{row['goods_name']},重量:{row['weigth']}。"
        # System: "You are a logistics-industry expert; based on the rules and
        # the product description, output only the product type (件型) and
        # nothing else."
        system_message = "你是物流行业的一位专家,请基于规则和商品描述,仅输出该商品的件型,不要输出其他任何信息。"
        # User: the classification rules followed by the product description.
        user_message = f"规则:\n{rules_text}\n商品描述:{description}\n"
        response = client.chat.completions.create(
            model="gpt-4-1106-preview",
            messages=[{"role": "system", "content": system_message},
                      {"role": "user", "content": user_message}],
            temperature=0,
            max_tokens=6,
            top_p=0.1,
            n=1,
        )
        return response.choices[0].message.content.strip()
    except Exception as e:
        return str(e)

Accuracy: 44.44%.

Few‑shot Prompt

Added two labeled examples to the prompt, raising accuracy to 55.56%.
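The article does not show the two examples themselves. A sketch of how labeled examples might be appended to the baseline user message follows; `build_user_message` and the example products are hypothetical:

```python
def build_user_message(rules_text, description, examples=None):
    """Assemble the user message: zero-shot when examples is None,
    few-shot when (description, label) pairs are supplied."""
    parts = [f"规则:\n{rules_text}"]  # the classification rules
    for i, (desc, label) in enumerate(examples or [], 1):
        # 示例 = "example"; 件型 = "product type"
        parts.append(f"示例{i}:商品描述:{desc} -> 件型:{label}")
    parts.append(f"商品描述:{description}")  # the item to classify
    return "\n".join(parts)
```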

Chain‑of‑Thought Prompt

Provided step‑by‑step reasoning for each example, achieving 66.67% accuracy.

Automatic Chain‑of‑Thought (Auto‑CoT) Prompt

Specified the reasoning order (category → source → type) in the system message, reaching 77.78% accuracy.
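A hypothetical system message illustrating how that reasoning order could be pinned down; the wording is assumed, not quoted from the article:

```python
# "You are a logistics-industry expert. First determine the product
# category (品类), then the source (货源), and finally derive the product
# type (件型) from the rules; output only the final product type."
system_message = (
    "你是物流行业的一位专家。请先判断商品品类,再判断商品货源,"
    "最后基于规则给出件型,仅输出最终的件型。"
)
```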

Self‑Consistency Prompt

Generated multiple outputs (n=5) and selected the most frequent result; accuracy fell back to 66.67%.
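Selecting the most frequent of the five completions can be sketched as follows; the helper name and sample labels are illustrative:

```python
from collections import Counter

def most_frequent_choice(contents):
    """Self-consistency at the label level: majority vote over the
    n sampled completions (e.g. the texts from response.choices)."""
    return Counter(c.strip() for c in contents).most_common(1)[0][0]

# Usage against an API response created with n=5 (illustrative):
# most_frequent_choice(choice.message.content for choice in response.choices)
```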

Conclusion

Prompt Engineering techniques—especially structured prompting and automated reasoning chains—significantly improve LLM performance on domain‑specific classification tasks such as JD Logistics product‑type identification.

Written by

JD Tech

Official JD technology sharing platform. All the cutting‑edge JD tech, innovative insights, and open‑source solutions you’re looking for, all in one place.
