86K‑Star Repo Turns Karpathy’s Coding Wisdom into Practical AI‑Coding Rules
The article shares four concrete principles distilled from Andrej Karpathy’s experience—captured in the 86.1k‑star "andrej‑karpathy‑skills" repository—to help developers steer large language models toward reliable, concise, and goal‑driven code changes, with installation tips for Claude Code and other AI assistants.
Developer PaperAgent introduces four practical principles for using large language models (LLMs) in code editing, drawn from the experience of Andrej Karpathy (OpenAI co‑founder) and packaged in the open‑source andrej‑karpathy‑skills repository, which has earned 86.1k stars on GitHub.
Principle 1 — Think Before You Code
This principle targets "wrong assumptions" and "hidden confusion": faced with an ambiguous request, LLMs tend to pick a single interpretation and proceed without asking for clarification. The rules instruct the model to:
State uncertain aspects explicitly instead of guessing.
Expose ambiguous points with multiple possible meanings.
Offer simpler alternatives when they exist.
Pause and ask questions when the problem is unclear.
Key mantra: "Ask more questions, change fewer lines."
Principle 2 — Keep It Simple, Avoid Over‑Engineering
This addresses the LLM’s tendency toward excessive complexity and bloated abstractions. The rule set includes:
Do not add functionality that wasn’t requested.
Avoid abstract layers for code used only once.
Skip "flexibility" or "configurability" that no one needs.
Do not handle errors that cannot occur.
If 200 lines can be reduced to 50, rewrite it (see the sketch below).
Verification: if a senior engineer would say “this is over‑engineered,” simplify the code.
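To make the contrast concrete, here is a minimal Python sketch; the names DiscountStrategyFactory and apply_discount are hypothetical, not taken from the repository. Both versions satisfy the same request, but only the second is what was actually asked for:

```python
# Over-engineered: a factory and config layer for logic used exactly once.
class DiscountStrategyFactory:
    """An abstraction nobody requested; only one strategy will ever exist."""
    def __init__(self, config: dict):
        self.config = config

    def create(self):
        rate = self.config.get("rate", 0.10)
        return lambda price: price * (1 - rate)

# What the request actually needed: one direct function, no indirection.
def apply_discount(price: float, rate: float = 0.10) -> float:
    """Apply a flat discount to a price."""
    return price * (1 - rate)
```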
Principle 3 — Modify Only What’s Required
Focused on preventing "collateral damage" and unnecessary refactoring. The checklist advises:
Do not "improve" adjacent code, comments, or formatting unless required.
Avoid refactoring code that already works.
Maintain existing style even if you disagree.
If you spot dead code, point it out instead of deleting it outright.
Exception: if your change leaves imports, variables, or functions orphaned, clean them up yourself (see the sketch below).
Verification: every line changed must trace back to a specific user request, turning diff review into a relevance check.
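A minimal Python sketch of that exception (is_positive_int and the regex version are hypothetical, for illustration only): simplifying the function orphans the re import, so removing the import belongs in the same change, while every other line in the file stays untouched.

```python
# Before: re is imported solely for this one check.
import re

def is_positive_int(s: str) -> bool:
    return re.fullmatch(r"[0-9]+", s) is not None

# After: the simplified check no longer needs re, so the now-orphaned
# import is removed as part of the same change -- in scope per the
# exception above. Nothing else in the file is modified.
def is_positive_int(s: str) -> bool:
    return s.isascii() and s.isdigit()
```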
Principle 4 — Declare Success Criteria Instead of Giving Step‑by‑Step Commands
LLMs excel at looping until a clear goal is met: instead of telling the model how to act, give it a success criterion and let it iterate.
The principle converts imperative prompts into declarative ones:
Replace "add validation" with "write tests that cover invalid inputs and ensure they pass".
Replace "fix bug" with "create a reproducible test and make it pass".
Replace "refactor X" with "ensure tests pass before and after refactoring".
This counter‑intuitive rule is the most effective of the four: LLMs need explicit, verifiable goals rather than vague instructions, as the sketch below illustrates.
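Here is a minimal sketch of what such a verifiable goal can look like; validate_age and the specific test cases are hypothetical, not from the repository. The prompt "write tests that cover invalid inputs and ensure they pass" hands the model a concrete target:

```python
import pytest

# Hypothetical function under test: the success criterion is not
# "add validation" but "make these tests pass".
def validate_age(value) -> int:
    if isinstance(value, bool) or not isinstance(value, int):
        raise TypeError("age must be an integer")
    if not 0 <= value <= 150:
        raise ValueError("age must be between 0 and 150")
    return value

def test_accepts_valid_age():
    assert validate_age(42) == 42

@pytest.mark.parametrize("bad", ["42", 4.2, None, True])
def test_rejects_wrong_types(bad):
    with pytest.raises(TypeError):
        validate_age(bad)

@pytest.mark.parametrize("bad", [-1, 151])
def test_rejects_out_of_range(bad):
    with pytest.raises(ValueError):
        validate_age(bad)
```

The model can run the suite, see what fails, and keep editing until everything passes, which is exactly the iteration loop this principle relies on.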
How to Tell It’s Working
No unrelated changes appear in the diff—only the requested modifications.
The first generated code is already concise; no repeated simplification requests are needed.
The model asks clarification questions before making changes.
Pull requests are clean, without "accidental refactors" or "side‑effect improvements".
Installation
Two recommended ways to enable the rules:
Claude Code plugin (recommended): run two commands inside Claude Code; works for any project.
Project‑level CLAUDE.md: a single curl command downloads a configuration file to the repository root, affecting only that project.
The rules are also available for Cursor via the file .cursor/rules/karpathy-guidelines.mdc, which activates automatically when the project is opened.
Although the examples use Claude Code, the same principles apply to Cursor, Copilot, or any AI coding assistant.
In essence, instead of blaming the AI for poor output, encode your expectations as explicit rules and let the model follow them.
https://github.com/forrestchang/andrej-karpathy-skills