Make AI Obey: A Detailed Prompt Engineering Guide to Boost Large‑Model Logic
This tutorial shows how to strengthen the logical reasoning of large language models using four techniques: DeepSeek‑R1's deep‑thinking mode, few‑shot prompting, chain‑of‑thought (CoT) prompting, and zero‑shot CoT. It provides concrete examples, comparisons, and a step‑by‑step template for effective prompt design.
DeepSeek reasoning model (R1)
DeepSeek provides two models. DeepSeek‑V3 is trained on ordinary question‑answer data in the (instruction, input, output) format; it never sees a reasoning chain, so its logical inference ability is comparatively weak. DeepSeek‑R1 is trained on a Chain‑of‑Thought (CoT) dataset that adds a think field containing step‑by‑step reasoning, so the model learns to generate both the intermediate reasoning steps and the final answer, improving its inference capability.
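A record in such a CoT dataset might look like the sketch below. The field names mirror the (instruction, input, output) format described above, plus the added think field; the exact schema is an assumption for illustration, not DeepSeek's actual training format.

```python
# A hypothetical CoT training sample: compared with the plain
# (instruction, input, output) record, it adds a "think" field
# holding the step-by-step reasoning the model learns to emit.
plain_sample = {
    "instruction": "Answer the math question.",
    "input": "I bought 10 apples, gave away 4, bought 5 more and ate 1. "
             "How many are left?",
    "output": "10",
}

cot_sample = {
    **plain_sample,
    "think": (
        "Start with 10 apples. Give away 4: 10 - 4 = 6. "
        "Buy 5 more: 6 + 5 = 11. Eat 1: 11 - 1 = 10."
    ),
}

print(sorted(cot_sample.keys()))  # ['input', 'instruction', 'output', 'think']
```

Training on the extra think field is what lets R1 emit its reasoning before the answer instead of jumping straight to a (possibly wrong) conclusion.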
The DeepSeek web UI allows users to select the "Deep Thinking (R1)" option, which routes queries to the reasoning model.
Few‑shot prompting
Beyond the four essential elements of a prompt (instruction, context, input, output), adding a few worked examples supplies richer context. In a translation task, a single example was enough to correct ChatGPT's output, showing that even one illustrative sample can dramatically improve answer quality.
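As a minimal sketch, a few‑shot prompt can be assembled by placing worked examples between the instruction and the new query. The helper name and the Input/Output layout below are illustrative choices, not a fixed API:

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: the instruction, then worked
    examples as context, then the new input left for the model."""
    lines = [instruction, ""]
    for src, tgt in examples:
        lines.append(f"Input: {src}")
        lines.append(f"Output: {tgt}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage")],  # even a single example can steer the model
    "bread",
)
print(prompt)
```

The trailing "Output:" cue invites the model to complete the pattern the examples establish, which is what makes few‑shot prompting work.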
Chain‑of‑Thought (CoT) prompting
For more complex logical problems, few‑shot prompting may still fail. A math problem about counting apples ("I bought 10 apples, gave 2 to a neighbor and 2 to a repairman, then bought 5 more and ate 1") is answered incorrectly by standard prompting; the correct answer is 10. Adding a CoT prompt that includes the reasoning steps (the think field) yields the correct answer. This approach follows the method described in Google's 2022 paper Chain‑of‑Thought Prompting Elicits Reasoning in Large Language Models.
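The reasoning that a CoT exemplar spells out for the apple problem can be checked step by step:

```python
# Walk the apple problem the way the "think" text would spell it out,
# tracking the running total after each event.
apples = 10   # bought 10 apples
apples -= 2   # gave 2 to the neighbor   -> 8
apples -= 2   # gave 2 to the repairman  -> 6
apples += 5   # bought 5 more            -> 11
apples -= 1   # ate 1                    -> 10
print(apples)  # 10
```

Spelling out each intermediate total is exactly what the exemplar's reasoning steps give the model: a pattern to imitate instead of guessing the final number in one jump.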
Zero‑shot Chain‑of‑Thought
The same CoT principle can be applied without explicit examples. Inserting a magic phrase such as "Let’s think step by step" before the query triggers the model to generate a reasoning chain on its own, leading to the correct answer for the apple‑counting problem.
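A zero‑shot CoT prompt is just the question plus the trigger phrase. The helper below is a hypothetical sketch; it places the trigger after the answer cue, as in the original zero‑shot‑CoT paper, though putting it before the question also works:

```python
def zero_shot_cot(question, trigger="Let's think step by step."):
    """Turn a plain question into a zero-shot CoT prompt by
    adding the magic trigger phrase after the answer cue."""
    return f"Q: {question}\nA: {trigger}"

print(zero_shot_cot(
    "I bought 10 apples, gave 2 to a neighbor and 2 to a repairman, "
    "then bought 5 more and ate 1. How many apples do I have?"
))
```

No examples are needed: the phrase alone nudges the model into producing its own reasoning chain before the final answer.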
Universal prompt workflow
The four techniques—DeepSeek‑R1, few‑shot prompting, CoT prompting, and zero‑shot CoT—can be combined into a hierarchical template. The recommended sequence is:
1. Query the reasoning model (repeat if needed, choose the most frequent answer)
2. If ineffective, add few‑shot examples (1‑5 samples)
3. If still ineffective, try zero‑shot chain‑of‑thought
4. If still ineffective, add explicit chain‑of‑thought steps

This step‑by‑step workflow helps solve increasingly complex reasoning tasks with large language models.
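The first step of the workflow (repeat the query and keep the most frequent answer) can be sketched as a simple majority vote. Here `ask_model` stands in for a real API call to the reasoning model, and the stub answers are invented for the demo:

```python
from collections import Counter
from itertools import cycle

def majority_answer(ask_model, prompt, n=5):
    """Query the model n times and return the most frequent answer
    (self-consistency voting, step 1 of the workflow)."""
    answers = [ask_model(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stub standing in for a real DeepSeek-R1 call: it answers "10"
# most of the time but occasionally slips to "9".
fake_replies = cycle(["10", "10", "9", "10", "10"])
ask = lambda prompt: next(fake_replies)

print(majority_answer(ask, "How many apples are left?"))  # 10
```

Because sampled reasoning chains vary between runs, voting across several runs filters out the occasional wrong chain; only if the vote is still unreliable do you escalate to steps 2 through 4.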
Fun with Large Models
Master's graduate from Beijing Institute of Technology, published four top‑journal papers, previously worked as a developer at ByteDance and Alibaba. Currently researching large models at a major state‑owned enterprise. Committed to sharing concise, practical AI large‑model development experience, believing that AI large models will become as essential as PCs in the future. Let's start experimenting now!