From Repetitive Prompts to One‑Click Execution: A Complete Guide to Writing Claude Skills

Learn how to turn daily repetitive Claude Code prompts into reusable Skills: identify repeatable workflows, apply a four-step creation process (observation, extraction, structuring, validation), check for the five traits of a high-quality Skill, and keep improving through an iteration loop, all illustrated with a real code-review case study.

ArcThink

Identifying Worthwhile Workflows

Three signals indicate a workflow should be codified as a Skill:

Repetition: the same instruction appears in three or more separate sessions (e.g., always asking Claude to format Markdown, add spaces between Chinese and English, and label code blocks).

Correction: you repeatedly correct the same type of AI mistake (e.g., "do not use emoji", "align tables with vertical bars"). When a correction occurs more than twice, it becomes a reusable rule that belongs in lessons.md.

Collaboration: another person needs to follow the same process, or you need a consistent standard across sessions. When knowledge stays only in your head, its lifecycle ends with the session; externalizing it as a Skill preserves it.

All three signals converge on the same answer: encapsulate the workflow as a Skill.
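The "add spaces between Chinese and English" correction is a good example of an instruction worth codifying: it is deterministic enough to write down once. A minimal sketch of the rule as code (illustrative only; in a Skill this would simply be a sentence in a rules file):

```python
import re

# Simplification: "Chinese" here means the CJK Unified Ideographs block only.
CJK = r'[\u4e00-\u9fff]'
LATIN = r'[A-Za-z0-9]'

def pad_cjk_latin(text: str) -> str:
    """Insert a space at every CJK/Latin boundary, in both directions."""
    text = re.sub(f'({CJK})({LATIN})', r'\1 \2', text)
    text = re.sub(f'({LATIN})({CJK})', r'\1 \2', text)
    return text
```

Once a correction can be stated this precisely, it no longer belongs in your head or in chat history; it belongs in a rule file the Skill loads every time.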

Four‑Step Method to Build a Skill

Step 1 – Observation (Play the Product Manager)

Spend a week recording every prompt you give Claude. Log exact wording, fixed‑order steps, dependencies, optional parts, and frequent corrections in a temporary file such as CLAUDE.md. After each session, spend about 30 seconds noting the repeated commands.
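The log does not need structure yet; an append-only section is enough. A hypothetical entry (the headings and annotations here are illustrative, not a required format):

```markdown
## Prompt log (temporary — graduate repeated entries into a Skill)

- "Format this post: Markdown headings, spaces between Chinese and
  English, language labels on code blocks."  (3rd session in a row)
- Correction given: "Do not use emoji."  (2nd time — candidate for lessons.md)
- Fixed order observed: format Markdown → generate images → convert to HTML
```

After a week, the entries with tally marks next to them are your Skill's raw material.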

Step 2 – Extraction (Separate Steps from Rules)

Classify the recorded items into:

Steps: ordered actions (e.g., "format Markdown → generate images → convert to HTML").

Rules: constraints that apply to every step (e.g., "add a space between Chinese and English", "use WebP for images", "titles no longer than 20 characters").

Keep steps and rules in separate files; the Skill file references the rule files.
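In practice the split can look like this (file names are illustrative):

```markdown
<!-- SKILL.md -->
## Steps
1. Format the Markdown draft   → workflow/step1-format.md
2. Generate images             → workflow/step2-images.md
3. Convert to HTML             → workflow/step3-html.md

## Rules (apply to every step)
- rules/spacing.md — space between Chinese and English
- rules/images.md  — use WebP for images
- rules/titles.md  — titles no longer than 20 characters
```

The Skill file points at the rules; it never restates them, so updating a rule touches exactly one file.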

Step 3 – Structuring (Choose an Architecture)

Two common patterns are recommended:

Knowledge-base style: a hierarchy of Markdown files where each file is one knowledge point (e.g., knowledge/api-spec.md, knowledge/style-guide.md). The Skill file acts as an index.

Workflow style: a commander file (SKILL.md) that defines the step order and points to a workflow/ directory holding one file per step, plus a rules/ directory for constraints.

Most real‑world Skills blend both: a workflow that loads rule files only when needed, ensuring each file does one thing.
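A blended commander file can be only a few lines. A hypothetical sketch (the wording is illustrative; the point is that rule files are referenced conditionally, not inlined):

```markdown
<!-- SKILL.md — the map, not the encyclopedia -->
Follow the steps in workflow/ in numeric order.
Before any step that writes Markdown, load rules/formatting.md.
Before any step that touches images, load rules/images.md.
Do not load a rule file that the current step does not need.
```

Each line buys control without spending context: the detailed instructions stay on disk until a step actually needs them.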

Step 4 – Validation (Make the Skill Run)

Three validation techniques are described:

Cold-start test: invoke the Skill in a brand-new session with no context.

Boundary test: feed malformed or incomplete inputs and observe whether they are handled gracefully.

Third-party test: let someone unfamiliar with the design run the Skill and note missing explanations.

If the AI misinterprets the intent, simplify the structure rather than adding more prose.

Skill validation diagram

Five Characteristics of a High‑Quality Skill

Atomicity: each file has a single responsibility, making it independently replaceable.

Verifiability: define explicit "completion criteria" (e.g., output saved, lint passed, no missing links) so the AI knows when a step is truly done.

Progressive Disclosure: SKILL.md should be a lightweight map, not a full encyclopedia; the AI loads detailed step files only when it reaches them.

Fail-Safe Design: provide a "prohibited list" (e.g., "do not fabricate data", "do not reorder steps", "do not skip the checklist") to keep the AI within safe boundaries.

Self-Documentation: file names convey purpose (e.g., workflow/step1-research.md, rules/formatting.md), eliminating the need for separate docs.
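Verifiability and fail-safe design can both live in a few lines of the Skill itself. A hypothetical excerpt:

```markdown
## Completion criteria (a step is done only when all pass)
- [ ] Output file saved to the agreed path
- [ ] Lint passes with no errors
- [ ] No missing links in the output

## Prohibited
- Do not fabricate data
- Do not reorder steps
- Do not skip the checklist
```

The checklist gives the AI an objective stopping condition; the prohibited list fences in the failure modes you have already seen.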

Skill characteristics diagram

Iterative Improvement Loop

Four recurring cycles evolve a Skill from "usable" to "great":

Observe Bias: after each run, ask whether the AI deviated and where manual intervention was needed.

Collect Feedback: gather teammate input on error-prone steps, ambiguous commands, and missing or redundant parts.

Versioned Refinement: record every change with a concise git commit -m "reason for change".

Periodic Pruning: remove never-triggered rules, simplify steps the AI has internalised, and delete comments meant only for the author.

A Skill's ultimate goal is "as few words as possible to achieve as much control as possible." Every line must be useful, not merely correct.
Iteration loop diagram

Real‑World Case: Evolving a Code‑Review Skill

Stage 1 – Chaos

Initially the author typed a long natural‑language request for each PR (type safety, error handling, performance, security). Problems: missing items, vague security checks, and language‑specific prompts that didn’t transfer to teammates.

Stage 2 – Monolithic File

All dimensions were merged into a single SKILL.md (~500 lines). Issues emerged: token waste on tiny PRs, rule mixing (security advice appearing in performance section), and difficulty updating a single dimension without affecting others.

Stage 3 – Modularisation

The author split each dimension into its own markdown file under dimensions/ and added routing logic in the main SKILL.md to load only the relevant files based on changed file types. Example: a CSS change loads only style.md, while an API change loads security.md and error-handling.md.

code-review/
├── SKILL.md            # dispatcher
├── dimensions/
│   ├── type-safety.md
│   ├── error-handling.md
│   ├── performance.md
│   ├── security.md
│   └── style.md
├── rules/
│   ├── severity-levels.md
│   └── dont-do.md
└── templates/
    └── review-report.md

Stage 4 – Iterative Optimisation

Feedback collected over two weeks led to three concrete improvements:

Added a severity hierarchy (Critical/High/Medium/Low) in severity-levels.md, because the AI marked every finding as "severe".

Inserted a prohibition against overly complex fixes in dont-do.md.

Extended the output template with separate "blocking" and "suggestion" sections to guide junior developers.
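The severity file can stay tiny. A hypothetical rules/severity-levels.md:

```markdown
<!-- rules/severity-levels.md -->
- Critical: security hole or data loss — blocks the merge
- High: likely bug on a main path — blocks the merge
- Medium: correctness risk in an edge case — suggestion
- Low: style or readability — suggestion
Assign the lowest level that fits; never mark every finding as severe.
```

A few lines like these are enough to calibrate the AI's judgement across every future review.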

Each change was committed with a clear message, e.g., git commit -m "fix: add file-existence check in step3 to avoid overwriting images", creating a transparent evolution history.

The final comparison shows how the skill progressed from chaotic ad‑hoc prompts to a modular, feedback‑driven system that scales with team size.

A Skill is shared memory between you and the AI; it persists across sessions, devices, and personnel changes.

Reference: Claude Code official best practices – https://code.claude.com/docs/en/best-practices

Architectural Token Load Comparison

Single file (8,000 tokens): every step loads the full 8,000-token file; total content is 8,000 tokens.

Modular (3,000-token dispatcher + 5 × 1,000-token modules): each step loads about 4,000 tokens (the dispatcher plus one module); total content is still 8,000 tokens, but the per-step context is roughly halved.

With the same total information, the modular architecture reduces the per-step token burden.
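The arithmetic behind that claim, spelled out with the figures from the comparison above:

```python
MONOLITHIC = 8000  # single file, fully loaded at every step
DISPATCHER = 3000  # modular: shared commander file
MODULE = 1000      # modular: one of five step files
STEPS = 5

mono_per_step = MONOLITHIC                    # 8000 tokens, every step
modular_per_step = DISPATCHER + MODULE        # 4000 tokens, every step
mono_total = MONOLITHIC                       # information on disk: 8000
modular_total = DISPATCHER + STEPS * MODULE   # information on disk: 8000
```

The totals are identical; only the amount dragged into context at each step changes.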

Skill Evolution Stages Comparison

Chaos: each PR described verbally; dimensions omitted, nothing reusable.

Monolithic: a 500-line SKILL.md containing all dimensions; dimensions mixed, and small PRs load the whole file.

Modular: independent dimensions plus routing logic; functional but lacking a feedback loop.

Iterated: added severity levels, a prohibition list, and split output sections; continuous optimisation.

Each evolution step builds on the previous version by adding or removing functionality rather than rewriting from scratch.

Conclusion

Repeated commands, recurring corrections, and collaboration needs are signals that a workflow is ready to become a Skill. By observing, extracting steps vs. rules, structuring with a suitable architecture, and validating through cold‑start, boundary, and third‑party tests, you create a reusable, atomic, verifiable, and self‑documenting Skill. Iterative loops—bias observation, feedback collection, versioned refinement, and periodic pruning—turn a merely usable Skill into a high‑quality tool that scales with team size and persists across sessions.
