Mastering Prompt Engineering: Advanced Techniques from OpenAI, Anthropic, and Google

This article provides a comprehensive guide to modern prompt engineering, covering foundational principles, detailed techniques such as role‑playing, delimiters, step‑by‑step instructions, and advanced strategies like chain‑of‑thought, reflection, and external tool integration, with real‑world examples from major AI providers and a practical Img2Code case study.


Background

With the emergence of next‑generation models like GPT‑4 and Gemini, prompt length and complexity limits have expanded. This article explores cutting‑edge prompt‑engineering solutions, drawing on resources from OpenAI, Anthropic, Google, and the open‑source community to provide a thorough guide for advanced interaction with large models.

Prompt Principles & Techniques

2.1 Clear, Detailed Instructions

OpenAI's official prompt-engineering guide lists six strategies for writing effective prompts. The examples below show how to turn vague queries into precise ones:

Bad prompt: Who is the president?
Better prompt: Who was the president of Mexico in 2021 and how often are elections held?
Bad prompt: Summarize meeting notes.
Better prompt: Summarize the meeting notes in one paragraph, then list the speakers' key points and any recommended next steps.

2.1.2 Role‑Playing

Assign the model a specific persona to shape its tone and expertise. The following example casts it as a mental-health consultant:

Better prompt: I want you to act as a mental‑health consultant. Using CBT, meditation, and mindfulness techniques, devise a personalized strategy for managing negative emotions. My first request is "How can I control negative emotions?"
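With a chat-style API, the persona typically goes in the system message so it governs every subsequent turn. A minimal sketch (the helper `make_persona_messages` is illustrative, not part of any SDK):

```python
# Put the persona in the system message so it shapes the whole conversation.
def make_persona_messages(persona: str, user_request: str) -> list:
    """Build a chat message list with a role-play system prompt."""
    return [
        {"role": "system", "content": f"I want you to act as {persona}."},
        {"role": "user", "content": user_request},
    ]

msgs = make_persona_messages(
    "a mental-health consultant using CBT, meditation, and mindfulness",
    "How can I control negative emotions?",
)
```

The same message list can then be passed to any chat-completion endpoint.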

2.1.3 Use Delimiters

Triple quotes, XML tags, or section headings help the model distinguish different parts of the input, reducing ambiguity for complex tasks.

"""Translate the ancient poem into modern Chinese.
"""

2.1.4 Specify Steps

Listing explicit steps makes it easier for the model to follow a procedure.

Step 1: Summarize the text inside triple quotes with the prefix "Summary:".
Step 2: Translate the summary into Spanish with the prefix "Translation:".
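Step lists like the one above can be generated from a template, which keeps the numbering consistent as steps are added. A small sketch (the `step_prompt` helper is an assumption, not from the source):

```python
def step_prompt(steps) -> str:
    """Number each step explicitly so the model follows them in order."""
    return "\n".join(f"Step {i}: {s}" for i, s in enumerate(steps, 1))

prompt = step_prompt([
    'Summarize the text inside triple quotes with the prefix "Summary:".',
    'Translate the summary into Spanish with the prefix "Translation:".',
])
```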

2.1.5 Provide Examples

Show the model examples of the desired output, especially for tasks that are hard to describe.

Prompt: You are a travel blogger. I will give you examples inside triple brackets; mimic the style to produce five answers.
Example: "Tell me about Shanghai."
Answer: Shanghai is a vibrant metropolis...
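In a chat API, few-shot examples are often supplied as prior user/assistant turns rather than pasted into one string. A sketch of that pattern (the Chengdu query is an illustrative stand-in for a real request):

```python
def few_shot_messages(system: str, examples, query: str) -> list:
    """Interleave example question/answer pairs as prior chat turns,
    then append the real query at the end."""
    msgs = [{"role": "system", "content": system}]
    for question, answer in examples:
        msgs.append({"role": "user", "content": question})
        msgs.append({"role": "assistant", "content": answer})
    msgs.append({"role": "user", "content": query})
    return msgs

msgs = few_shot_messages(
    "You are a travel blogger. Mimic the style of the examples.",
    [("Tell me about Shanghai.", "Shanghai is a vibrant metropolis...")],
    "Tell me about Chengdu.",
)
```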

2.1.6 Set Length Constraints

Ask the model to generate responses of a specific word count, paragraph number, or bullet‑point count.

Summarize the following text in about 50 words, formatted as two paragraphs with three bullet points.

2.2 Provide Reference Text

Supply trustworthy information or cite source documents so the model can ground its answers.

Use the following article to answer the question. If the answer cannot be found, reply "I cannot find an answer."
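The grounding instruction and the reference text can be combined in a single template, so every question gets the same explicit fallback. A minimal sketch:

```python
def grounded_prompt(article: str, question: str) -> str:
    """Ground the answer in the supplied article and give the model an
    explicit fallback instead of letting it guess."""
    return (
        "Use the following article, delimited by triple quotes, to answer "
        "the question. If the answer cannot be found, reply "
        '"I cannot find an answer."\n\n'
        f'"""{article}"""\n\nQuestion: {question}'
    )

prompt = grounded_prompt("The sky is blue.", "What color is the sky?")
```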

2.3 Break Down Complex Tasks

Decompose large problems into smaller, manageable steps and classify sub‑tasks to reduce error rates and cost.

Classify each support query into primary categories (billing, technical support, account management, general) and secondary categories (unsubscribe, upgrade, add payment method, etc.). Then provide detailed solutions per category.
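Once a query is classified, the follow-up prompt can be selected from a small routing table, so each category gets a narrow instruction set instead of one giant prompt. A sketch with a hypothetical two-level taxonomy (a real one would come from the support workflow):

```python
# Hypothetical primary -> secondary category taxonomy.
ROUTES = {
    "billing": {"unsubscribe", "upgrade", "add payment method"},
    "technical support": {"troubleshooting", "compatibility"},
}

def route_prompt(primary: str, secondary: str) -> str:
    """Pick a narrow per-category instruction instead of one giant prompt."""
    if secondary not in ROUTES.get(primary, set()):
        raise ValueError(f"unknown category: {primary}/{secondary}")
    return (
        f"You handle {primary} queries about '{secondary}'. "
        "Give step-by-step instructions for this case only."
    )
```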

2.4 Give the Model Time to “Think”

2.4.1 Generate Its Own Answer First

Ask the model to solve the problem independently before comparing it with a student's solution.

First, devise your own solution to the math problem, then compare it with the student's answer and evaluate correctness.

2.4.2 Hide Reasoning (Inner Monologue)

Perform internal reasoning without exposing it to the user, then present the final answer.

Step 1 – Find your own solution (keep it inside triple quotes). Step 2 – Compare with the student's answer...
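When the hidden reasoning is wrapped in a known delimiter, it can be stripped before the reply reaches the user. A minimal sketch, assuming triple quotes mark the inner monologue as in the prompt above:

```python
import re

def strip_monologue(raw: str) -> str:
    """Remove the hidden reasoning (everything inside triple quotes) and
    return only the text meant for the user."""
    return re.sub(r'"""[\s\S]*?"""', "", raw).strip()

raw = '"""My own solution: x = 4."""\nThe student\'s answer is incorrect.'
visible = strip_monologue(raw)
```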

2.4.3 Reflect on the Answer

Prompt the model to double‑check for missing information before responding.

Think carefully about the question "What significant paradigm shifts have occurred in the history of AI?" and ensure all relevant context is included.

2.5 Use External Tools

2.5.1 Embedding‑Based Retrieval

Leverage vector search (RAG) to provide up‑to‑date knowledge.

If the user asks about a specific movie, combine high‑quality movie data (actors, director, etc.) with the query.
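The retrieval step usually ranks documents by cosine similarity between the query embedding and each document embedding. A toy sketch with hand-made 3-dimensional vectors (a real system would call an embedding model and a vector index):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings"; movie metadata would be fetched from a curated source.
corpus = {
    "Inception (2010), dir. Christopher Nolan": [0.9, 0.1, 0.0],
    "Spirited Away (2001), dir. Hayao Miyazaki": [0.1, 0.9, 0.2],
}

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding,
    to be pasted into the prompt as grounding context."""
    ranked = sorted(corpus, key=lambda d: cosine(corpus[d], query_vec),
                    reverse=True)
    return ranked[:k]
```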

2.5.2 API Calls

Include code or API results in the prompt to improve precision.

Combine documentation and code examples, then wrap generated Python code or API output in triple backticks for further computation.
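If the model is asked to wrap generated code in triple backticks, the block can be extracted with a small regex before it is executed or post-processed. A sketch (the `extract_code` helper is illustrative):

```python
import re

FENCE = "`" * 3  # a literal triple backtick

def extract_code(reply: str, lang: str = "python") -> str:
    """Pull the first fenced code block out of a model reply so it can
    be executed or post-processed."""
    match = re.search(rf"{FENCE}{lang}\n(.*?){FENCE}", reply, re.DOTALL)
    if match is None:
        raise ValueError("no fenced code block found")
    return match.group(1).strip()

reply = "Here you go:\n" + FENCE + "python\nprint(1 + 1)\n" + FENCE
code = extract_code(reply)
```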

2.6 Systematic Testing

2.6.1 Metric Evaluation

For tasks such as img2code, define quantitative metrics that compare the layout, detail, and font fidelity of the generated code against the reference design.

2.6.2 Model Evaluation

Score model outputs on accuracy, comprehension, and generation quality.

Accuracy: 0‑2 (severe errors) … 9‑10 (perfect).
Comprehension: 0‑2 (no understanding) … 9‑10 (full understanding).
Generation: 0‑2 (incoherent) … 9‑10 (fluent and logical).
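Scores on such a rubric can be bucketed mechanically when aggregating results. A sketch — note that the 0–2 and 9–10 bands follow the rubric above, while the intermediate bands are an assumed interpolation, since the source shows only the extremes:

```python
# (low, high, label); middle bands are assumed, not from the source rubric.
BANDS = [(0, 2, "severe errors"), (3, 5, "partial"),
         (6, 8, "good"), (9, 10, "perfect")]

def band(score: int) -> str:
    """Map a 0-10 rubric score to its qualitative band."""
    for low, high, label in BANDS:
        if low <= score <= high:
            return label
    raise ValueError("score must be between 0 and 10")
```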

2.6.3 Human Evaluation

When tasks are subjective (e.g., aesthetics), involve human reviewers to assess against a baseline.

Large‑Company Prompt Strategies

3.1 Anthropic

Anthropic emphasizes rigorous evaluation, robust test cases, and a systematic workflow: define task and success criteria, develop test cases, design initial prompt, test, iterate, and finalize.

Clear, direct instructions

Break complex tasks into numbered steps

Provide examples

Role‑play

Tagging with XML

Chain‑of‑thought (CoT)

Self‑reflection

Separate data from instructions

Anthropic also offers a “MetaPrompt” tool that auto‑generates high‑quality prompts, though it works best for single‑turn dialogues and the Claude 3 Opus model.

3.2 Google

Google’s Gemini prompt guide is extensive (a 40‑page handbook). It stresses four pillars: Persona, Task, Context, and Format, illustrated with a concise example:

You are a Google Cloud program manager.
Draft an executive summary email to [persona] based on [program docs].
Limit to bullet points.
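The four pillars lend themselves to a reusable template, so every prompt carries all four parts. A minimal sketch (the `ptcf_prompt` helper is illustrative, not from Google's guide):

```python
def ptcf_prompt(persona: str, task: str, context: str, fmt: str) -> str:
    """Compose a prompt from the four pillars: Persona, Task, Context, Format."""
    return (
        f"You are {persona}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Format: {fmt}"
    )

prompt = ptcf_prompt(
    "a Google Cloud program manager",
    "draft an executive summary email",
    "based on the program docs",
    "limit to bullet points",
)
```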

3.3 Summary of Strategies

Persona (role‑playing)

XML delimiters

Prefill response

Provide concrete examples

Control length

Reflection and self‑feedback

Chain‑of‑thought

All should be coupled with a solid test set and evaluation metrics.

Practical Example: Img2Code

The task is to generate clean HTML/CSS (Tailwind) from a screenshot, similar to Microsoft’s Sketch2Code but improved with modern models.

An initial naive prompt to GPT‑4o produced poor results. After applying the advanced prompt principles (clear role, step‑by‑step guidelines, delimiters, CoT, reflection), the generated code matched the target mini‑app layout much more closely.

#CONTEXT#
You take screenshots of a reference miniapp page from the user, and then build single‑page apps using Tailwind, HTML and JS.
#OBJECTIVE#
Build a single‑page app using Tailwind, HTML, and JS based on a provided screenshot, ensuring it matches the design precisely. Include necessary images and text as specified.
Guidelines to follow:
- Ensure the app looks exactly like the provided screenshot.
... (additional guidelines omitted)
#RESPONSE#
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<script src="https://cdn.tailwindcss.com"></script>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.3/css/all.min.css">
</head>
<body>
... (generated code omitted)
</body>
</html>
Return only the full code in <html></html> tags.

Further enhancements with CoT and reflection yielded even better layout fidelity.

Further Resources

AI Short Prompt Generator

Awesome ChatGPT Prompts (Chinese)

PromptPerfect – automatic prompt optimization

Conclusion

The underlying model capability provides the foundation (roughly the first 0–60 points); prompt engineering delivers much of the remaining gain (60–90 points), turning a good model into a great one.


Tags: large language models, AI best practices, img2code, LLM development, prompt techniques
Written by Data Thinking Notes

Sharing insights on data architecture, governance, and data middle platforms, exploring AI in data, and linking data with business scenarios.