Effective Prompt Engineering: Techniques, Prompt Injection Prevention, Hallucination Mitigation, and Advanced Prompting Strategies
This article explains how to craft effective prompts by combining clear instructions and questions, discusses prompt-injection risks and their mitigation with delimiters, addresses hallucinations, and introduces zero-shot, few-shot, and chain-of-thought prompting techniques for large language models.
As large language models advance rapidly, prompt engineering has become a crucial skill for guiding models to produce desired outputs. An effective prompt typically consists of an Instruction (often sent via the system role) and a Question (sent via the user role); in web interfaces that expose no roles, the two are simply concatenated.
The Instruction usually contains a context and a list of steps. A common context template is: "You are an agent/assistant for xxx. To achieve xxx, you should follow these steps:" Avoiding pronouns like "you" or "I" can reduce confusion, although many community examples still use them.
A typical step list is written in Markdown format, e.g.:

- step1
- step2
- step3
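The structure above can be sketched as a small helper that assembles the context and step list into the system/user message shape most chat-style LLM APIs accept. The names (`buildPrompt`, `ChatMessage`) are illustrative, not from the article:

```typescript
// Hypothetical helper: combine a context and a Markdown step list into an
// Instruction (system role), and send the Question via the user role.
type ChatMessage = { role: "system" | "user"; content: string };

function buildPrompt(context: string, steps: string[], question: string): ChatMessage[] {
  const instruction = `${context}\nTo do this, follow these steps:\n${steps
    .map((s) => `- ${s}`)
    .join("\n")}`;
  return [
    { role: "system", content: instruction },
    { role: "user", content: question },
  ];
}

const messages = buildPrompt(
  "You are an assistant for translating text.",
  ["Translate the input into English.", "Output only the translation."],
  "你好,世界"
);
```

The same two parts can be concatenated into a single string when the target interface has no notion of roles.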
Prompt Injection occurs when user input embedded in the prompt alters the model's behavior, analogous to SQL injection. Wrapping user input in a delimiter (e.g., triple backticks ```) and escaping any occurrence of the delimiter inside the input prevents the injection. Corrected code example:

```typescript
function generatePrompt(str: string) {
  // Escape backticks so user input cannot close the delimiter early.
  const safe = str.replaceAll("`", "\\`");
  return `As a translation tool, you should:
- Translate the Chinese text inside \`\`\` into English.
- Output only the translated result, without any unrelated content.

\`\`\`
${safe}
\`\`\``;
}
```
Hallucination (the model fabricating facts) can be mitigated by adding explicit steps such as "If you do not know, answer 'I don't know' and do not fabricate information." Asking the model to cite sources also helps, though the cited sources may themselves be unreliable.
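Such a guard can be appended to any existing instruction. A sketch, with illustrative wording (this reduces but does not eliminate hallucination):

```typescript
// Append anti-hallucination steps to an instruction's step list.
function withHallucinationGuard(instruction: string): string {
  return (
    instruction +
    "\n- If you do not know the answer, reply \"I don't know\" and do not fabricate information." +
    "\n- When possible, cite the source of each claim."
  );
}

const guarded = withHallucinationGuard(
  "You are a Q&A assistant. To answer questions, follow these steps:\n- Answer concisely."
);
```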
Zero-shot prompting uses only context + steps + question. It gives the model creative freedom, but the output is hard for downstream programs to parse and can be unstable. Few-shot prompting adds one or more examples (shots) to guide the model toward a desired output format, making parsing easier.
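Few-shot prompting can be sketched as prepending input/output pairs so the model imitates the demonstrated format. The helper and shot contents below are illustrative:

```typescript
// Sketch of few-shot prompting: examples (shots) pin down the output format.
type Shot = { input: string; output: string };

function fewShotPrompt(instruction: string, shots: Shot[], question: string): string {
  const examples = shots
    .map((s) => `Input: ${s.input}\nOutput: ${s.output}`)
    .join("\n\n");
  return `${instruction}\n\n${examples}\n\nInput: ${question}\nOutput:`;
}

const prompt = fewShotPrompt(
  "Extract the sentiment of the review as JSON.",
  [
    { input: "Great product!", output: '{"sentiment":"positive"}' },
    { input: "It broke after a day.", output: '{"sentiment":"negative"}' },
  ],
  "Works fine, nothing special."
);
```

Because the shots show JSON outputs, the model's reply is far more likely to be machine-parseable than with a zero-shot prompt.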
When mathematical or logical reasoning is required, a Chain of Thought prompt (e.g., “Let’s think step by step”) encourages the model to reason explicitly before answering.
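In its simplest (zero-shot) form, chain-of-thought prompting just appends the reasoning cue to the question, a sketch:

```typescript
// Zero-shot chain of thought: append a cue that makes the model lay out
// intermediate reasoning steps before the final answer.
function chainOfThought(question: string): string {
  return `${question}\n\nLet's think step by step.`;
}

const cot = chainOfThought(
  "A shop sells pens at 3 yuan each. How much do 7 pens cost?"
);
```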
Advanced workflows often embed these techniques in frameworks like LangChain, which provide structured instruction templates such as:

```typescript
const formatInstructions = (toolNames) => `Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [${toolNames}]
Action Input: the input to the action
Observation: the result of the action
... (repeat as needed)
Thought: I now know the final answer
Final Answer: the final answer to the original input question`;
```
Auto-prompt systems like AutoGPT go further and automate the entire prompt-creation pipeline, requiring only a high-level statement of intent.
In summary, the article presents several practical ideas for writing effective prompts, preventing prompt injection, reducing hallucinations, and leveraging zero‑shot, few‑shot, and chain‑of‑thought strategies, while hinting at future automation possibilities.
ByteFE
Cutting‑edge tech, article sharing, and practical insights from the ByteDance frontend team.