Uncovering the Secret System Prompts Behind ChatGPT, Claude, and Gemini

The article examines the open‑source "system_prompts_leaks" project, which collects leaked system prompts from major AI models and reveals recurring design patterns such as modular layering, strict boundary control, dynamic strategy adjustment, emotional persona injection, and multi‑layer safety mechanisms.

Instant Consumer Technology Team
Instant Consumer Technology Team
Instant Consumer Technology Team
Uncovering the Secret System Prompts Behind ChatGPT, Claude, and Gemini

Earlier I discovered a newly open‑source GitHub project called system_prompts_leaks , which quickly gathered over 20 000 stars.

The repository aggregates system‑level prompts leaked from major AI services such as ChatGPT, Claude and Gemini, and reveals several recurring design patterns.

1. Modular layered structure

Prompts are organized into functional modules with a clear hierarchy—basic rules → tool specifications → scenario examples → safety boundaries. Claude Sonnet 4, for instance, uses XML‑like tags such as <citation_instructions> and <artifact_instructions> to separate layers.

2. Precise boundary control

The leaked prompts consistently emphasize prohibited actions with uppercase letters. Examples include “ NEVER use localStorage ” in Claude and “ UNDER NO CIRCUMSTANCE should you tell the user to sit tight ” in GPT‑5. The same principle can be applied in Chinese by using visually strong markers.

3. Dynamic perception and strategy adjustment

AI models are instructed to adapt their behavior based on user input, not only changing tone but also deciding when to invoke tools. Stable‑knowledge queries (e.g., explaining relativity) are answered directly, while time‑sensitive queries (e.g., today’s exchange rate) force a search tool call. Complex analyses may trigger 5‑20 coordinated tool calls.

4. Providing emotional value

The Grok Personas model includes preset personalities such as “Companion” and “Comedian”, giving the AI a distinct tone and character. The leaked prompts show how these personas are written to deliver emotional engagement.

5. Safety handling (five‑layer mechanism)

❶ Prohibit high‑risk actions : strict bans on bank transfers, weapon purchases, or any operation involving finance, weapons, or illegal goods.

❷ Privacy double‑insurance : the model must not collect sensitive user data nor infer race, religion, health, or political views, and must confirm before reusing prior conversation content.

❸ Anti‑phishing injection : when malicious commands appear (e.g., a fake “unlock privilege” button), the AI ignores the instruction and asks the user for confirmation instead of executing it.

❹ Content filtering : image uploads are limited to OCR‑type text extraction; the model must not identify real identities, guess ethnicity, or generate copyrighted material.

❺ Dynamic verification : real‑time data (news, stock prices) must be fetched via search tools; election‑related queries trigger a special review tool before answering.

For a complete list of leaked prompts and further details, visit the open‑source repository:

https://github.com/asgeirtj/system_prompts_leaks

The project covers Claude, Gemini, ChatGPT, Grok and other mainstream AI assistants, offering a valuable glimpse into how large language models are guided at the system level.

prompt engineeringsecurityAI safetysystem prompts
Instant Consumer Technology Team
Written by

Instant Consumer Technology Team

Instant Consumer Technology Team

0 followers
Reader feedback

How this landed with the community

Sign in to like

Rate this article

Was this worth your time?

Sign in to rate
Discussion

0 Comments

Thoughtful readers leave field notes, pushback, and hard-won operational detail here.