How ‘System Prompts Leaks’ Uncovers the Core Prompts of ChatGPT, Claude, Gemini

The open‑source ‘System Prompts Leaks’ project extracts and publishes the hidden system prompts of major LLMs such as ChatGPT, Claude, and Gemini as version‑specific Markdown files, letting developers and researchers compare the models’ underlying policies, safety rules, and prompt‑engineering constraints.


Purpose and significance

The System Prompts Leaks project reduces information asymmetry in large‑language‑model ecosystems by publishing the core system prompts that define a model’s identity, capability limits, safety rules, and response style. This lets developers and researchers compare the design philosophies of commercial models on an equal footing—for example, Anthropic’s “helpful, honest, harmless” constitution for Claude versus OpenAI’s tool‑calling logic for GPT.

“It’s like having the source code of all smartphone operating systems. Understanding the low‑level rules is a prerequisite for efficient and safe development.” – an AI engineer

Technical approach

The extraction does not involve cracking the model weights. Instead, it leverages the model’s own behavior through three main techniques:

Prompt injection: crafting inputs that cause the model to reveal its system instructions.

API metadata analysis: inspecting response headers and payloads returned by the provider’s API.

Behavioral comparison: issuing equivalent queries under different interaction patterns and observing divergences that expose hidden rules.
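The behavioral-comparison technique above can be sketched in a few lines: run the same query in two interaction modes, then diff the answers so that mode-specific divergences stand out. This is a minimal illustration, not the project's actual tooling; the two answer strings below are hypothetical stand-ins for real API responses.

```python
import difflib

def compare_behaviors(answer_plain: str, answer_tools: str) -> list[str]:
    """Unified diff between two model answers to the same query.

    Lines that differ between modes hint at hidden, mode-specific
    instructions in the underlying system prompt.
    """
    return list(difflib.unified_diff(
        answer_plain.splitlines(),
        answer_tools.splitlines(),
        fromfile="plain-mode",
        tofile="tool-mode",
        lineterm="",
    ))

# Hypothetical answers standing in for real API responses.
plain = "I can answer questions.\nI cannot browse the web."
tools = "I can answer questions.\nI can browse the web with my search tool."

for line in compare_behaviors(plain, tools):
    print(line)
```

In practice the interesting signal is not a single diff but a pattern of divergences across many paired queries, which is what makes the hidden rules visible.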

Key highlights

Multi‑model coverage: includes prompts from ChatGPT, Claude, Gemini, Grok, Perplexity and other major platforms.

Version granularity: distinguishes model families and specific releases (e.g., GPT‑5.3 → 5.4, Claude Opus 4.5 → 4.6) as well as tool‑enabled versus tool‑free variants.

Structured organization: each provider has its own directory; individual prompts are stored as Markdown (.md) files for easy browsing and batch processing.

Community driven: the repository accepts pull requests, allowing contributors to add new versions or correct existing entries.

Repository architecture

The project is a single GitHub repository. Every model’s system prompt resides in a separate .md file under a company‑named folder (e.g., OpenAI/gpt-5.4-thinking.md). This layout supports both manual inspection and automated scripts that iterate over the file tree.
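The layout described above can be traversed with a short script: walk the cloned repository, group every .md file under its company-named parent folder. This is a sketch assuming a local clone; the directory and file names are illustrative, following the OpenAI/gpt-5.4-thinking.md pattern the article mentions.

```python
from pathlib import Path

def collect_prompts(repo_root: str) -> dict[str, list[str]]:
    """Map each provider folder to the prompt files (.md) it contains.

    Note: root-level files such as README.md would be grouped under the
    repository directory's own name; filter those out if needed.
    """
    index: dict[str, list[str]] = {}
    for md in sorted(Path(repo_root).rglob("*.md")):
        provider = md.parent.name  # company-named folder, e.g. "OpenAI"
        index.setdefault(provider, []).append(md.name)
    return index

# Usage, assuming the repository was cloned into the current directory:
# collect_prompts("system_prompts_leaks")
```

Because the structure is just folders of Markdown, the same iteration works for batch jobs such as word counts or keyword searches across all providers at once.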

System Prompts Leaks project overview

Getting started

Browse directly: open the GitHub repository, navigate to the desired file (e.g., OpenAI/gpt-5.4-thinking.md) and read the full system prompt.

Clone locally for analysis:

git clone https://github.com/asgeirtj/system_prompts_leaks.git

Compare prompts: write a script that loads two Markdown files (for example Claude Opus 4.6 and GPT‑5.4) and diffs the wording of safety‑related instructions to reveal differing corporate values.
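The comparison step could look like the sketch below: filter each prompt file down to safety-related lines, then print a unified diff. The keyword list and file paths are illustrative assumptions, not part of the repository.

```python
import difflib
from pathlib import Path

# Illustrative filter; a real comparison would tune these keywords.
SAFETY_KEYWORDS = ("safety", "harm", "refuse", "must not")

def safety_lines(text: str) -> list[str]:
    """Keep only the lines that mention a safety-related keyword."""
    return [ln for ln in text.splitlines()
            if any(k in ln.lower() for k in SAFETY_KEYWORDS)]

def diff_prompts(path_a: str, path_b: str) -> str:
    """Unified diff of the safety-related lines of two prompt files."""
    a = safety_lines(Path(path_a).read_text(encoding="utf-8"))
    b = safety_lines(Path(path_b).read_text(encoding="utf-8"))
    return "\n".join(difflib.unified_diff(
        a, b, fromfile=path_a, tofile=path_b, lineterm=""))

# Usage with hypothetical file names inside a local clone:
# print(diff_prompts("Anthropic/claude-opus-4.6.md",
#                    "OpenAI/gpt-5.4-thinking.md"))
```

A keyword filter keeps the diff focused on policy wording rather than formatting noise, which is what makes the differing corporate values stand out.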

Intended users and scenarios

AI application developers: use the prompts to design compatible prompt‑engineering strategies and avoid conflicts with model constraints.

LLM researchers and learners: treat the collection as a real‑world case study of AI safety, alignment, and behavior design.

Technical evangelists and content creators: base deep technical analyses on concrete system‑prompt excerpts.

Enterprise decision makers: compare underlying safety designs when selecting an AI foundation.

Implications

The rapid growth of stars on the GitHub repository demonstrates strong community demand for model transparency and explainability. Although the extraction methods operate in a legal gray area, the project democratizes access to foundational model configurations and may encourage future disclosures of system instructions by AI providers.

GitHub stars growth
Tags: LLM, prompt engineering, GitHub, AI transparency, system prompts
Written by AI Explorer

Follow the blog to keep pace with the AI era.