Unlocking Claude 4.0: How System Prompts Drive Smart Decision‑Making
This article explains the design principles, iterative rules, and core workflow of Claude 4.0’s system prompts, covering query‑complexity classification, tool‑selection logic, user preferences, style handling, safety checks, and how these components form a closed‑loop decision process for intelligent AI responses.
In previous articles we covered user‑prompt design, iteration principles, and specific application scenarios. For users, the Claude Sonnet 4.0 general system prompt, the "agent core manual," acts like a mysterious black box that determines the identity, capabilities, safety constraints, and rule set of Claude agents. All user input must be interpreted and guided by this core before the final response is produced.
This series will reveal a GitHub repository with over 19K stars that unintentionally leaked Anthropic’s Claude 4.0 system prompts. It shows how these prompts guide Claude 4.0’s intelligent decisions, including query‑complexity classification, tool selection, personalized user needs, artifact creation and management, visual artifact design principles, content safety, and response generation. The series is split into three parts for easier reading and practical implementation.
If you read the public Claude 4.0 system prompt core directly, understanding its internal design relationships and closed‑loop thinking is difficult (the DeepWiki documentation leans toward technical architecture). This is why the three‑article series exists.
Claude 4.0 core system prompts: https://github.com/asgeirtj/system_prompts_leaks/tree/main/Anthropic
Claude 4.0 DeepWiki (learn and verify through intelligent Q&A): https://deepwiki.com/asgeirtj/system_prompts_leaks
This article focuses on the core design workflow of the system prompts, covering:
Query analysis, intelligent decision tool selection
User personalization (preferences or style) and artifact creation/management, visualization
Design principles, content safety, and response generation
Understanding these will deepen your grasp of the prompt core's logic, give you the key to serious AI collaboration, and markedly improve your prompt design and application skills. You will even be able to infer the core‑prompt design of DeepSeek or other models by observing their inputs and outputs.
Below are the core questions the workflow addresses:
Query complexity classification: How does Claude 4.0 parse user questions for intelligent decisions?
Tool selection: How does Claude decide which tools to use and how often?
User personalization: When user preferences conflict with commands, how does the AI satisfy personalized needs?
Artifact types: What kinds of files can Claude 4.0 create and manage?
Visual artifact design principles: What is Claude’s default aesthetic when creating pages or modifying front‑end code?
Content safety: What specific safety constraints are enforced?
To aid understanding, we use a human problem‑solving workflow as the perspective (see the diagram). After receiving a request, you analyze the core of the problem, then evaluate (query analysis) whether you can solve it with existing knowledge.
If not, you may use internet tools to fetch relevant information (tool invocation) or consult a professional.
For simple tasks you might need only a single source; for complex tasks you may need multiple sources, invoking tools 2‑20 times. After gathering information, you format the solution according to user‑specified style and format, cite sources like a paper, and output in Word or Markdown.
Finally, you perform safety and quality checks to ensure the document is on‑topic, harmless, and useful, iterating until it meets standards before delivering the final answer.
The following comparison shows how a human’s workflow aligns perfectly with Claude 4.0’s core prompt design:
The core design consists of seven stages:
Stage 1 – Query Complexity Classification (query_complexity_categories)
Stage 2 – Tool Selection Logic
Stage 3 – User Preferences & Style
Stage 4 – Artifact Creation & Management (artifacts_info)
Stage 5 – Visual Artifact Design Principles
Stage 6 – Safety Check – Global Safety Design (copyright + harmful_content)
Stage 7 – Response Generation
Stage 1. Query Complexity Classification
Core: Process the user input and assess its stability, change frequency, and complexity to decide whether to answer directly, run a single search, or perform multi‑tool research.
Key points:
Stable information: Direct answer, no tool.
Known but possibly outdated: Answer then offer search.
Real‑time data needed: Execute one web_search.
Complex analysis required: Perform 2‑20 tool calls, then answer.
core_search_behaviors – Determines whether to use tools, which tools to use, and their priority. Categories:
never_search_category: Never search (e.g., basic facts, stable knowledge).
do_not_search_but_offer_category: Answer directly but offer a search option.
single_search_category: One web_search for real‑time or simple factual queries.
research_category: Multiple tool calls for deep research.
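The four categories above can be sketched as a simple router. This is an illustrative reconstruction, not the actual prompt logic: the category names mirror the leaked prompt, but the boolean inputs and the `route_query` function are assumptions made for the example.

```python
from enum import Enum

class SearchCategory(Enum):
    """Category names taken from the leaked system prompt."""
    NEVER_SEARCH = "never_search_category"
    OFFER_SEARCH = "do_not_search_but_offer_category"
    SINGLE_SEARCH = "single_search_category"
    RESEARCH = "research_category"

def route_query(is_stable: bool, may_be_outdated: bool,
                needs_realtime: bool, needs_deep_analysis: bool) -> SearchCategory:
    """Map the key points above onto the prompt's search categories.

    The checks run from most to least tool-intensive, so a query that
    needs deep analysis is routed to research even if parts of it are stable.
    """
    if needs_deep_analysis:
        return SearchCategory.RESEARCH       # 2-20 tool calls, then answer
    if needs_realtime:
        return SearchCategory.SINGLE_SEARCH  # exactly one web_search
    if may_be_outdated:
        return SearchCategory.OFFER_SEARCH   # answer directly, offer a search
    return SearchCategory.NEVER_SEARCH       # stable knowledge, no tool
```

For example, "what is 2 + 2" routes to `NEVER_SEARCH`, while "compare the last five quarters of two companies' earnings" would set `needs_deep_analysis` and route to `RESEARCH`.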
web_search() ×1 – Execute a single web search.
Stage 2. Tool Selection Logic
Core: After assessing complexity, the system selects appropriate tools from a large toolbox.
1. Web Tool Category
web_search – Performs a standard web search and returns results.
web_fetch – Retrieves the full content of a specific URL (usually after web_search finds the link).
2. Google Workspace Tools
google_drive_search – Search files in the user’s Drive.
google_drive_fetch – Read a specific Drive file.
read_gmail_profile – Get Gmail profile info.
search_gmail_messages – Search Gmail messages.
read_gmail_thread – Read a specific email thread.
list_gcal_calendars – List all calendars.
list_gcal_events – List events in a calendar.
find_free_time – Find free time slots in the calendar.
3. Analysis & Creation Tools
repl (analysis_tool) – A code interpreter (e.g., Python) for data analysis, calculations, chart generation, etc.
artifacts – Create and output files such as reports, images, and data files, not just plain text.
With these tools, Claude can decide whether to answer directly, invoke a single search, or launch a multi‑step research process.
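One way to picture this stage is a mapping from search category to an ordered tool plan. The tool names come from the article; the plan structure, the specific tool orderings, and the `plan_tools` helper are assumptions for illustration only.

```python
# Illustrative mapping from search category to an ordered tool plan.
# The concrete orderings here are assumed, not taken from the prompt.
TOOL_PLANS: dict[str, list[str]] = {
    "never_search_category": [],                 # answer from existing knowledge
    "do_not_search_but_offer_category": [],      # answer, then offer a search
    "single_search_category": ["web_search"],    # one search, then answer
    "research_category": ["web_search", "web_fetch",
                          "google_drive_search", "repl"],  # multi-step research
}

def plan_tools(category: str, max_calls: int = 20) -> list[str]:
    """Return the tool calls for a category, capped at the 20-call budget."""
    return TOOL_PLANS.get(category, [])[:max_calls]
```

For instance, `plan_tools("single_search_category")` yields a single `web_search` call, matching the single‑search behavior described above.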
Stage 3. User Preferences & Style
Core: Satisfy user personalization and style preferences while maintaining safety.
Preferences are divided into:
Behavioral Preferences: Output format, communication style, language, etc.
Contextual Preferences: Identity, interests, professional skills.
Style (userStyle): Writing tone, level of detail, vocabulary.
Behavioral preferences affect *how* Claude says something, while contextual preferences affect *what* it says.
When conflicts arise, the system follows a clear priority hierarchy:
1. Latest user instruction > 2. userStyle (selected style) > 3. userPreferences (saved preferences)
Safety validation (harmful_content_safety) runs before any response is shown, ensuring no hateful, violent, or illegal content is produced.
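The priority hierarchy above amounts to a layered lookup: check the latest instruction first, then the selected style, then saved preferences. A minimal sketch, assuming each layer is a plain dict of setting names to values (the function and layer representation are illustrative, not the actual implementation):

```python
def resolve_setting(key: str, latest_instruction: dict,
                    user_style: dict, user_preferences: dict):
    """Resolve one setting using the priority hierarchy:
    latest user instruction > userStyle > userPreferences.
    Returns None when no layer sets the key (i.e., fall back to defaults).
    """
    for layer in (latest_instruction, user_style, user_preferences):
        if key in layer:
            return layer[key]
    return None
```

So if a saved preference says "casual" but the selected style says "formal," the style wins; and if the user's latest message says "be playful," that overrides both.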
Stage 4‑7 (Brief Overview)
Stage 4 creates or manages artifacts (documents, images, data files). Stage 5 applies visual design principles when generating front‑end code or graphics. Stage 6 performs a global safety check, enforcing copyright limits and harmful‑content filters. Stage 7 assembles all gathered information and generates the final response.
References
https://kcn7nwtck8k3.feishu.cn/wiki/CRV7wEImHiyD7gkOseOciHlFnBf – Claude 4.0 decision‑making and prompt core design (Chinese/English)
https://deepwiki.com/asgeirtj/system_prompts_leaks – DeepWiki intelligent Q&A assistance
https://github.com/asgeirtj/system_prompts_leaks/tree/main/Anthropic – Claude 4.0 system prompt source files
https://support.anthropic.com/en/articles/10185728-understanding-claude-s-personalization-features – Anthropic documentation
https://docs.anthropic.com/zh-CN/release-notes/system-prompts – Release notes
https://docs.anthropic.com/zh-CN/resources/prompt-library/library – Prompt library
