A Structured Prompt Engineering Guide to Make LLMs Obey

Learn how to craft effective prompts for large language models by using a systematic structure—role and task, core principles, context handling, chain‑of‑thought, output specifications, and few‑shot examples—and discover techniques for generating and iteratively refining prompts with the model itself.

Tencent Technical Engineering
Making a model obey instructions hinges on a well‑designed prompt.

Preface

When writing prompts, many engineers run into models that ignore instructions or produce overly verbose answers. Even when the instructions seem clear, the model may answer incorrectly or drift off task, a common frustration for prompt engineers who need reliable, repeatable reasoning.

Structure

A robust prompt for complex, high‑precision tasks should follow this order:

Role/Task + Core Principles + Context Handling + CoT (Chain of Thought) + Output Specification + Few-Shot

Additional constraints can be added as needed.
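As a concrete sketch, a skeleton for a hypothetical SQL-generation task might arrange these modules as follows (every rule and name below is an illustrative placeholder, not from the original article):

```markdown
# Role & Task
You are a senior data analyst. Generate one SQL query that answers the user's question.

# Core Principles
1. Prefer explicit column lists over SELECT *.
2. Queries must be read-only; never modify data.
3. If the question is ambiguous, ask one clarifying question instead of guessing.

# Chain of Thought
Before writing SQL, reason step by step: identify the tables involved,
then the join keys, then the filters, then the aggregation.

# Output Specification
Return only the SQL statement. Do not add explanations before or after it.

# Few-Shot
Q: How many orders were placed in March 2024?
A: SELECT COUNT(*) FROM orders
   WHERE order_date BETWEEN '2024-03-01' AND '2024-03-31';

# Context
(table schemas appended here, at the end of the prompt)
```

Note that the context block sits last, which matches the context-handling advice later in this guide.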

Generating an Initial Prompt with the Model

1. Prepare 30 example queries and their expected outputs.

2. Prepare contextual information and a description of the text structure.

3. Clearly describe the model's goal and the prompt framework.

Feeding these items to the model yields a solid first‑draft prompt, often more effective than writing it from scratch.
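The three inputs above can be stitched into a single meta-prompt before being fed to the model. A minimal Python sketch of the assembly step (the wording, variable names, and example data are illustrative assumptions, not from the article):

```python
# Assemble a meta-prompt that asks a model to draft a first-version prompt.
# The examples, context description, and goal below are placeholders.

examples = [
    {"query": "Total revenue last quarter?",
     "expected": "SELECT SUM(amount) FROM orders WHERE ..."},
    # ... ideally ~30 query/output pairs, per the guide
]

context_description = "Tables: orders(id, amount, order_date); schema is appended verbatim."
goal = "Generate one read-only SQL query per user question."
framework = ("Role/Task, Core Principles, Context Handling, "
             "Chain of Thought, Output Specification, Few-Shot")

example_lines = "\n".join(
    f"Q: {e['query']}\nA: {e['expected']}" for e in examples
)

meta_prompt = (
    f"Write a prompt for the following task: {goal}\n"
    f"Structure the prompt using these modules, in order: {framework}\n"
    f"Context available to the prompt:\n{context_description}\n"
    f"Example queries with expected outputs:\n{example_lines}\n"
)
print(meta_prompt)
```

The model's answer to this meta-prompt becomes the first draft, which the next section refines.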

Optimizing the Prompt with the Model

1. Prepare a test set and the current prompt's generated results.

2. Add the correct results, with notes explaining why each generated output is wrong.

Model‑assisted refinement helps solve basic issues, but final optimization still requires human insight.
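A hypothetical annotated test case handed back to the model for refinement might look like this (the query, outputs, and note are invented for illustration):

```markdown
## Case 12
Input: "Average order value in Q1, by region"
Current output: SELECT AVG(amount) FROM orders;        <- wrong
Correct output: SELECT region, AVG(amount) FROM orders
                WHERE order_date BETWEEN '2024-01-01' AND '2024-03-31'
                GROUP BY region;
Note: the model ignored both the date filter and the GROUP BY dimension.
```

The explanatory note is what lets the model propose a targeted prompt change rather than a blind rewrite.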

Prompt Format

Markdown (MD) is preferred for its readability, clear structure, and extensibility. JSON, while structured, is less flexible and can become cumbersome for long prompts.

Prompt Modules

Role & Task

The role defines the model’s domain expertise (e.g., data analyst, dentist). The task succinctly states what the model should do (e.g., generate SQL, produce a report).

Core Principles

Limit to three high‑level rules that guide the model’s behavior; too many principles reduce effectiveness.

Context Handling

Place lengthy context at the end of the prompt so it does not interrupt the main instructions. Clearly describe the context's structure and its role; long contexts consume tokens and can degrade the model's instruction-following, so include only what the task needs.
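One way to follow this rule mechanically is to assemble the prompt so the raw context is always appended last, after a short description of its structure. A small sketch (function and field names are assumptions for illustration):

```python
def build_prompt(instruction: str, context: str, context_schema: str) -> str:
    """Keep instructions first; describe the context's structure; append the raw context last."""
    return (
        f"{instruction}\n\n"
        f"## Context Handling\n"
        f"The context below is structured as: {context_schema}\n"
        f"It is reference material only; follow the instructions above.\n\n"
        f"## Context\n{context}"
    )

prompt = build_prompt(
    instruction="Summarize the monthly report in three bullet points.",
    context="(long report text here)",
    context_schema="plain-text monthly report, sections separated by headings",
)
print(prompt)
```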

CoT (Chain of Thought)

CoT guides the model to reason step‑by‑step, improving accuracy for logical tasks. Example: solving a fruit‑exchange puzzle by breaking it into incremental calculations.
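A concrete instance (the numbers here are a hypothetical variant, not from the article): suppose you can buy 10 fruits and every 3 peels can be exchanged for 1 more fruit. CoT has the model count each round of eating and exchanging rather than guessing the total, which the following sketch mirrors:

```python
def total_fruits(initial: int, peels_per_fruit: int = 3) -> int:
    """Count fruits eaten when every `peels_per_fruit` peels buy one more fruit."""
    fruits, peels, eaten = initial, 0, 0
    while fruits > 0:
        eaten += fruits   # eat everything on hand
        peels += fruits   # each fruit leaves one peel
        # exchange peels for new fruits, keeping the remainder
        fruits, peels = peels // peels_per_fruit, peels % peels_per_fruit
    return eaten

print(total_fruits(10))  # 10 eaten -> exchange for 3 -> then 1 more: 14 in total
```

Each loop iteration corresponds to one incremental CoT step the prompt would ask the model to write out.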

Requirements & Constraints

Specify special handling or logical rules, optionally as a separate module, to ensure the model respects critical conditions.

Special Logic Expression

When natural language is insufficient, use pseudo‑code to convey precise logic, such as extracting the latest month‑end date from a report.

Output Specification

Define both the desired output format and explicitly forbid unwanted content. Structured output can be enforced through clear specifications.
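When the specification demands structured output, it also pays to validate what comes back. A minimal sketch that checks a hypothetical JSON answer against an illustrative schema (the required keys are placeholders):

```python
import json

REQUIRED_KEYS = {"answer", "confidence"}  # illustrative schema, not from the article

def validate_output(raw: str) -> dict:
    """Reject anything that is not bare JSON with exactly the required keys."""
    data = json.loads(raw)  # raises if the model wrapped the JSON in prose
    if set(data) != REQUIRED_KEYS:
        raise ValueError(f"unexpected keys: {set(data) ^ REQUIRED_KEYS}")
    return data

print(validate_output('{"answer": "42", "confidence": 0.9}'))
```

A check like this turns "explicitly forbid unwanted content" from a hope into a testable gate.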

Few‑Shot Examples

Providing one or two illustrative examples aligned with the CoT steps dramatically boosts the model’s ability to follow the prompt.
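A hypothetical few-shot entry that pairs the reasoning with the final answer (all content below is an invented placeholder) might read:

```markdown
## Few-Shot
Q: How many customers placed an order in March 2024?
Reasoning: the question asks for a count of distinct customers; filter
orders by order_date, then COUNT(DISTINCT customer_id).
A: SELECT COUNT(DISTINCT customer_id) FROM orders
   WHERE order_date BETWEEN '2024-03-01' AND '2024-03-31';
```

Showing the reasoning line, not just the answer, encourages the model to imitate the CoT steps rather than only the output shape.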

Conclusion

While details may vary across models and scenarios, the overarching framework remains consistent: by defining role, task, principles, context, reasoning steps, output rules, and few‑shot examples, anyone can craft prompts that reliably guide LLMs to produce the desired results.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: chain of thought, few-shot learning, AI prompting, context handling