Master Prompt Engineering: CRIS, RAG, and Agent Strategies for Reliable LLM Outputs

This guide presents a comprehensive prompt engineering framework—including the CRIS four‑step template, RAG‑based prompt construction, and Agent‑oriented architectures—illustrated with practical examples and optimization tips for tasks such as code generation, data extraction, and customer support, helping developers achieve stable, accurate LLM results.


Prompt engineering is the key entry point for large language model (LLM) applications; well‑crafted prompts directly affect output stability, accuracy, and business usability. This article introduces a practical prompt engineering system that can be applied immediately in real‑world scenarios.

1. General Prompt Design Paradigm (CRIS)

The CRIS (Character, Request, Constraint, Example) four‑step structure provides a stable template for most use cases, reducing hallucinations and erratic, unstructured output.

Character: Define the model’s identity, domain expertise, and tone, limiting its behavior.

Request: State the specific task clearly to avoid vague instructions.

Constraint: Specify format, length, prohibitions, professional standards, and output form.

Example: Provide a few samples to guide the model’s style; few‑shot learning is highly effective.

You are a [professional role] proficient in [domain skill].
Please complete the following task: [task description].
Follow these rules:
[constraint 1]
[constraint 2]
Do not hallucinate; if uncertain, reply "unknown".
Output format: [format requirements]
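As a minimal sketch, the template above can also be assembled programmatically. The helper below is illustrative only (build_cris_prompt is not a library API); it simply fills the four CRIS slots:

def build_cris_prompt(character, request, constraints, examples=None):
    """Fill the four CRIS slots into a single prompt string."""
    lines = [
        f"You are {character}.",
        f"Please complete the following task: {request}",
        "Follow these rules:",
    ]
    lines.extend(constraints)
    lines.append('Do not hallucinate; if uncertain, reply "unknown".')
    if examples:  # few-shot samples guide the output style
        lines.append("Examples:")
        lines.extend(examples)
    return "\n".join(lines)

print(build_cris_prompt(
    character="a senior Python developer proficient in data processing",
    request="deduplicate a list while preserving order.",
    constraints=["Output format: a single code block with a usage example."],
))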

2. RAG‑Based Prompt Construction

Retrieval‑Augmented Generation (RAG) supplies retrieved context, and the prompt must constrain the model to answer solely from that context, sharply reducing fabricated knowledge. Prompt design therefore has to enforce citation and source constraints.

Core Logic

Require the model to use only the provided context.

If no relevant content exists, explicitly state that the answer is unavailable.

Mark information sources to improve traceability.

Ensure the answer aligns with the original text.

Context:
---
{context}
---
Question: {question}
Rules:
Use only information inside the context; external knowledge is prohibited.
If the context lacks an answer, reply "No relevant answer found in the provided material".
Provide concise, accurate answers while preserving key details.
Optionally cite the source for critical statements.
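As a sketch of how this template might be filled at runtime (the chunk list here is a stand‑in for whatever your retriever returns):

def build_rag_prompt(question, chunks):
    """Stitch retrieved chunks into the context slot and append the rules."""
    context = "\n".join(f"[source {i + 1}] {text}" for i, text in enumerate(chunks))
    return (
        f"Context:\n---\n{context}\n---\n"
        f"Question: {question}\n"
        "Rules:\n"
        "Use only information inside the context; external knowledge is prohibited.\n"
        'If the context lacks an answer, reply "No relevant answer found in the provided material".\n'
        "Provide concise, accurate answers while preserving key details.\n"
        "Cite the [source N] tag for critical statements."
    )

chunks = ["The warranty period is 12 months.", "Returns require the original receipt."]
print(build_rag_prompt("How long is the warranty?", chunks))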

3. Agent Prompt Architecture

Agent‑style prompts target autonomous agents that plan, invoke tools, reflect, and iterate. The structure is more system‑level and includes several modules.

System Identity & Goal: Define the agent’s purpose and overall objective.

Tool List: Enumerate available tools, their call syntax, and usage scenarios.

Thinking Process Constraint: Enforce a "Thought → Action → Observation" loop (the ReAct pattern).

Termination Condition: Specify when the task is considered complete or when to abort.

You are a task‑execution agent that must complete the user’s request step by step.
Process:
Thought: Analyze the current situation and decide the next step.
Action: Call a tool using the format Action: tool_name(parameters).
Observation: Record the tool’s output.
Repeat until the task is finished.
Available tools:
search(query) – retrieve information.
calculate(expr) – perform mathematical calculations.
finish(answer) – terminate and return the final answer.
Do not skip the thinking step or try to complete everything in a single action.
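A minimal sketch of the loop that drives such a prompt, assuming llm is any text‑completion callable and tools maps tool names to Python functions (both are placeholders, not a specific framework):

import re

def react_loop(llm, tools, task, max_steps=10):
    """Run a Thought -> Action -> Observation loop until finish() or the step budget."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = llm(transcript)  # model emits its Thought and one Action
        transcript += reply + "\n"
        match = re.search(r"Action:\s*(\w+)\((.*)\)", reply)
        if match is None:
            break  # malformed step: abort rather than guess
        name, arg = match.group(1), match.group(2)
        if name == "finish":  # termination condition from the prompt
            return arg
        observation = tools[name](arg)  # invoke the named tool
        transcript += f"Observation: {observation}\n"
    return None  # step budget exhausted without finishing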

4. Task‑Oriented Prompt Writing Tips

Different downstream tasks require tailored prompt adjustments.

Code Generation

Specify language, version, and framework.

State coding standards, comment requirements, and exception handling.

Provide input‑output examples.

Write a Python 3 function that does [task], follow PEP 8, add detailed comments, handle edge cases, and include a usage example.
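For reference, an answer satisfying such a prompt might look like the sketch below (the task, a moving average, is invented for illustration):

def moving_average(values, window):
    """Return the simple moving averages of `values` over a sliding `window`.

    Edge cases: raises ValueError for a non-positive window and returns an
    empty list when there are fewer values than the window size.
    """
    if window <= 0:
        raise ValueError("window must be positive")
    if len(values) < window:
        return []
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# Usage example
print(moving_average([1, 2, 3, 4], 2))  # [1.5, 2.5, 3.5]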

Information Extraction

Clearly list the fields to extract.

Force JSON output; use empty strings for missing values.

Extract name, phone, and address from the given text and output as JSON. If a field is absent, return an empty string.
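Even with a forced JSON format, the model's reply should still be validated before use. A minimal sketch, assuming the three fields above:

import json

REQUIRED_FIELDS = ("name", "phone", "address")

def parse_extraction(raw):
    """Parse the model's JSON reply, backfilling missing fields with ""."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        data = {}  # unparseable reply: treat every field as missing
    return {field: str(data.get(field, "")) for field in REQUIRED_FIELDS}

print(parse_extraction('{"name": "Alice", "phone": "555-0100"}'))
# {'name': 'Alice', 'phone': '555-0100', 'address': ''}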

Customer Service Dialogue

Define tone and dialogue guidelines.

Prohibit promises beyond the agent’s authority.

Guide the model to collect key information from the user.
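A prompt along these lines (the wording is illustrative, not a fixed formula) covers all three points:

You are a customer‑service assistant for [company]. Maintain a polite, patient, and professional tone.
Do not promise refunds, compensation, or policy exceptions beyond [authorized scope]; offer to escalate such requests instead.
If the user has not provided them, ask for the order number and a brief description of the issue before answering.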

Complex Task Decomposition

Use a "step‑by‑step thinking" instruction.

Make the model plan steps before execution.

Include a self‑check stage to verify each step.
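A hedged sketch of such an instruction, in the same template style as the earlier sections:

Think step by step. Before executing, list the sub‑steps required to complete the task.
Execute the sub‑steps one at a time; after each one, check that its output satisfies the requirements, and revise before continuing if it does not.
Produce the final answer only after every step passes the self‑check.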

Conclusion

Applying the CRIS template ensures baseline quality for generic scenarios. In RAG contexts, strict source constraints prevent hallucinations. For agent‑driven workflows, the ReAct architecture reinforces thoughtful tool usage. Tailoring prompts with clear formats, constraints, and examples dramatically improves LLM performance across vertical tasks.

Tags: prompt engineering, RAG, Agent, LLM applications, AI prompt design
Written by AI Architect Hub

Discuss AI and architecture; a ten-year veteran of major tech companies now transitioning to AI and continuing the journey.
