Turning Your Coding Habits into Claude-Ready Skills with Waza

Waza is a lightweight open-source framework that turns personal coding habits into reusable Claude Code skills. This guide covers its six-layer responsibility model, its slash commands for design, testing, debugging, and writing, context-engineering best practices, the execution loop, tool design principles, and quick-start installation steps.


Six‑Layer Responsibility Model

CLAUDE.md / rules / memory – long‑term context that tells Claude "what this is".

Tools / MCP – action capability that tells Claude "what I can do".

Skills – on‑demand methodology that tells Claude "how to do it".

Hooks – forced behavior that executes constraints without relying on Claude's judgment.

Subagents – context‑isolated workers that provide controlled autonomy.

Verifiers – verification loop that makes output testable, rollbackable, auditable.

Risk warning: Over‑reliance on any single layer can destabilise the system (e.g., overly long CLAUDE.md, tool overload, excessive subagents, or skipping verification).

Core Skills (Slash Commands)

/think – deep thinking before building anything; challenges the problem, stress-tests designs, validates architecture.

/design – UI design; generates unique UI, commits to a clear aesthetic, avoids generic defaults.

/check – code review after task completion; reviews diffs, auto-fixes security issues, blocks destructive commands, validates with evidence.

/hunt – systematic debugging of any bug or unexpected behaviour; confirms the root cause before applying fixes.

/write – writing/editing; rewrites articles into natural English or Chinese, removes stiff phrasing.

/learn – learning new domains; follows a six-stage workflow (collect, digest, outline, fill, refine, self-check).

/read – content ingestion; fetches clean Markdown from URLs or PDFs via proxy scripts; supports WeChat and Feishu processors.

/health – health check; audits CLAUDE.md, rules, skills, hooks, MCP, and behaviours, marking issues by severity.

Execution Model

Collect context → Take action → Verify result → [Finish or loop]
          ↑                ↓
   CLAUDE.md / Hooks / Permissions / Sandbox
          Skills / Tools / MCP
                Memory

Five Diagnostic Dimensions

Context – what should always be loaded vs. loaded on demand? (Artifacts: CLAUDE.md, rules, memory, skills)

Action – what actions can Claude currently take? (Artifacts: built‑in tools, MCP, plugins)

Control – which actions must be constrained, blocked, or audited? (Artifacts: permissions, sandbox, hooks)

Isolation – which tasks need context and permission isolation? (Artifacts: subagents, worktrees, branch sessions)

Verification – how do we know a task is complete and trustworthy? (Artifacts: tests, lint, screenshots, logs, CI)

Context Engineering: Cost Structure

200K total context
├── Fixed overhead (~15‑20K)
│   ├── System prompt: ~2K
│   ├── All enabled Skill descriptors: ~1‑5K
│   ├── MCP server tool definitions: ~10‑20K ← largest hidden cost
│   └── LSP state: ~2‑5K
├── Semi‑fixed (~5‑10K)
│   ├── CLAUDE.md: ~2‑5K
│   └── Memory: ~1‑2K
└── Dynamic usable (~160‑180K)
    ├── Conversation history
    ├── File contents
    └── Tool call results

A typical MCP server (e.g., GitHub) may expose 20-30 tool definitions, each ~200 tokens. Connecting to five such servers can consume ~25K tokens (5 servers × ~25 tools × ~200 tokens), roughly 12.5% of the 200K window.
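As a sanity check, the overhead above can be reproduced with quick shell arithmetic (the per-server and per-tool figures are the rough estimates from this section, not measured values):

```shell
# Rough MCP overhead estimate (assumed figures: 5 servers,
# ~25 tool definitions each, ~200 tokens per definition)
servers=5
tools_per_server=25
tokens_per_tool=200
window=200000

overhead=$((servers * tools_per_server * tokens_per_tool))

echo "overhead: ${overhead} tokens"           # overhead: 25000 tokens
echo "share: $((overhead * 100 / window))%"   # share: 12% (integer division)
```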

Recommended Context Layering

Resident – CLAUDE.md: project contract, build commands, prohibitions.

Path‑loaded – .claude/rules/: language‑, directory‑, file‑type‑specific conventions.

On‑demand – skills: workflows and domain knowledge.

Isolation‑loaded – subagents: heavy exploration or parallel research.

Never in context – hooks: deterministic scripts, auditing, blocking.
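Applied concretely, the resident layer might look like the sketch below. This is illustrative only: the file names follow Claude Code conventions, but the contents (build commands, prohibitions, rule paths) are hypothetical.

```markdown
# CLAUDE.md (resident: short, strict, actionable)

- Build: `cargo build`; test: `cargo test`
- Never edit files under `migrations/` by hand
- Rust-specific conventions are path-loaded from `.claude/rules/rust.md`
- Heavy research goes to a subagent, not this file
```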

Context Best Practices

Keep CLAUDE.md short, strict, actionable.

Split large reference documents into supporting skill files.

Use .claude/rules/ for path‑ and language‑specific rules.

Run /context to inspect token consumption.

Switch tasks with /clear; use /compact for new phases of the same task.

Tool Design: Good vs. Bad Tools

Name – good: jira_issue_get, sentry_errors_search; bad: generic query, fetch, do_action.

Parameters – good: specific fields like issue_key, project_id, response_format; bad: vague id, name, target.

Return – good: information directly relevant to the next decision; bad: raw UUIDs, internal fields, noise.

Scope – good: single purpose, clear boundaries; bad: mixed actions, opaque side‑effects.

Cost – good: default output controllable, truncatable; bad: default returns massive context.

Error handling – good: includes corrective suggestions; bad: only opaque error codes.

Tool Design Principles

Prefix names with system or resource layer (e.g., github_pr_, jira_issue_).

Support response_format: concise/detailed for large responses.

Errors should be corrective.

Combine high‑level tasks when possible; avoid exposing many low‑level fragments.

Avoid list_all_* patterns that force the model to filter results.
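Put together, a definition following these principles might look like the sketch below: a hypothetical MCP-style tool definition (the `inputSchema` layout follows the common JSON Schema shape for tool parameters; all names and fields are illustrative):

```json
{
  "name": "jira_issue_get",
  "description": "Fetch one Jira issue by key; returns summary, status, and assignee.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "issue_key": {
        "type": "string",
        "description": "Issue key, e.g. PROJ-123"
      },
      "response_format": {
        "type": "string",
        "enum": ["concise", "detailed"],
        "default": "concise"
      }
    },
    "required": ["issue_key"]
  }
}
```

Note the prefixed name, the specific `issue_key` parameter, and the `response_format` switch that keeps the default output small.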

Hooks: Custom Code Before/After Claude Actions

Hooks move work from the model’s immediate judgment to deterministic processes.

Suitable Hook Scenarios

Prevent modification of protected files.

Auto‑format/lint/light‑validate after edits.

Inject dynamic context (Git branch, env vars) after session start.

Push notifications after task completion.
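For the first scenario above, a PreToolUse guard can be a few lines of shell. The sketch below assumes the hook already has the target file path in hand; in practice Claude Code passes tool input as JSON on stdin, so you would extract the path first (e.g. with jq). The protected patterns are illustrative:

```shell
# Hypothetical guard for protected files. In Claude Code hooks,
# a blocking non-zero exit code (conventionally 2) stops the action;
# check the hooks docs for the exact convention in your version.
guard() {
  case "$1" in
    *.env|*/secrets/*)
      echo "Blocked: $1 is protected" >&2
      return 2
      ;;
  esac
  return 0
}

guard "src/main.rs" && echo "allowed"   # allowed
```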

Unsuitable Hook Scenarios

Complex semantic judgments requiring large context.

Long‑running business workflows.

Multi‑step reasoning and trade‑offs (use skills or subagents instead).

Hook Configuration Example

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit",
        "pattern": "*.rs",
        "hooks": [
          {
            "type": "command",
            "command": "cargo check 2>&1 | head -30",
            "statusMessage": "Running cargo check..."
          }
        ]
      }
    ],
    "Notification": [
      {
        "type": "command",
        "command": "osascript -e 'display notification \"Task completed\" with title \"Claude Code\"'"
      }
    ]
  }
}

Hooks provide early error detection, saving time.

Quick Start

Install All Skills

npx skills add tw93/Waza -a claude-code -g -y

Install a Single Skill

npx skills add tw93/Waza -a claude-code -s health -y

Replace health with any skill name.

Install Statusline

curl -sL https://raw.githubusercontent.com/tw93/Waza/main/scripts/setup-statusline.sh | bash

English Coach Feature

Collaborating with Claude in English tends to yield better results because most high-quality resources are written in English. The coach automatically flags grammatical errors and suggests corrections, e.g., marking "it is not good to be read" as unnatural phrasing and suggesting "it's hard to read".

Safety Configuration

To prevent destructive Git commands (e.g., git push -f, git checkout ., git clean -f), add them to the reject list in ~/.claude/settings.json.
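A sketch of such a reject list is below. This assumes your Claude Code version supports `permissions.deny` with `Bash(...)` patterns in ~/.claude/settings.json; check your version's settings reference for the exact key names and pattern syntax before copying:

```json
{
  "permissions": {
    "deny": [
      "Bash(git push -f:*)",
      "Bash(git push --force:*)",
      "Bash(git checkout .)",
      "Bash(git clean -f:*)"
    ]
  }
}
```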

References

- GitHub repository: https://github.com/tw93/waza
- Six‑layer architecture article: https://tw93.fun/en/2026-03-12/claude.html
Tags: AI agents, prompt engineering, Claude, context management, tool design, Waza
Written by AI Open-Source Efficiency Guide

With years of experience in cloud computing and DevOps, we daily recommend top open-source projects, use tools to boost coding efficiency, and apply AI to transform your programming workflow.
