Combining OpenSpec and Superpowers: A 4‑Step Workflow to Eliminate Luck in AI Coding
This article analyses how OpenSpec’s hard‑coded specification engine and Superpowers’ LLM‑driven execution loop complement each other, presenting a detailed four‑step workflow, concrete code snippets, and a side‑by‑side comparison that shows how the combined approach resolves both definition and execution quality issues in AI‑assisted programming.
1. The Soft‑Constraint Dilemma of Superpowers
Superpowers enforces a brainstorming → spec → plan → TDD → review pipeline via markdown skill files, but all rules are expressed as natural‑language prompts without any executable enforcement.
All iron rules are just natural language
The key directive in using-superpowers/SKILL.md reads:

> `<EXTREMELY-IMPORTANT>IF A SKILL APPLIES TO YOUR TASK, YOU DO NOT HAVE A CHOICE. YOU MUST USE IT.</EXTREMELY-IMPORTANT>`

This is a plain-text instruction; the framework provides 15 anti-rationalization items, but none of them is backed by executable code.
Consequently, compliance drops with weaker models or long contexts, and every rule, iron law, or hard gate remains a prompt rather than program logic.
Review loops can run indefinitely
Superpowers defines a two‑stage review: a spec‑reviewer prompt followed by a code‑quality reviewer prompt. The flow is:

```
Spec reviewer: Does code match spec?
  → No  → Implementer fixes → Re-run spec review
  → Yes → Dispatch code-quality reviewer
Code-quality reviewer: Is code approved?
  → No  → Implementer fixes → Re-run code-quality review
  → Yes → Mark task complete
```

The process has three explicit termination conditions (spec passes, code quality passes, manual approval) but lacks a maximum iteration count, timeout, or cost ceiling. For a plan with five tasks, the minimum number of sub‑agent dispatches is 16; a single failed review per round can inflate this to 26, making resource consumption unpredictable.
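The dispatch arithmetic can be sketched as a back-of-envelope model. This is illustrative accounting, not Superpowers code; it assumes one controller dispatch per plan, three dispatches per task (implementer, spec review, code-quality review), and two extra dispatches per failed review (a fix pass plus a re-review).

```typescript
// Back-of-envelope model of sub-agent dispatch counts (illustrative only,
// not Superpowers code). Assumptions: one controller dispatch per plan;
// three dispatches per task (implementer, spec review, code-quality review);
// each failed review adds two dispatches (a fix pass plus a re-review).
function dispatchCount(tasks: number, failedReviewsPerTask = 0): number {
  const perTask = 3 + 2 * failedReviewsPerTask;
  return 1 + tasks * perTask;
}

console.log(dispatchCount(5));    // 16: minimum for a five-task plan
console.log(dispatchCount(5, 1)); // 26: one failed review per task
```

The point of the model is that the growth is linear in failed reviews per task, with no upper bound anywhere in the framework itself.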
Each task starts with a fresh sub‑agent
Superpowers creates an independent sub‑agent for each task, passing a controller context but not inheriting the full session history. This leads to cross‑task consistency problems: changes made in Task 3 are invisible to Task 7, and the integrity of information transfer relies entirely on the LLM‑driven controller.
The Cost of Zero‑Dependency
Superpowers claims zero external dependencies, which means it cannot use AST parsers, linters, formatters, static analysis, or test‑coverage tools. All code‑quality checks depend on the LLM’s reading ability, limiting reliability to the LLM’s own trustworthiness.
2. OpenSpec’s Hard‑Constraint Solution
OpenSpec takes a different path: instead of relying on prompts, it uses structured specifications enforced by a Zod‑based validation engine.
Structural constraints: not suggestions, but gates
The validation engine (src/core/validation/) enforces hard rules such as:

- Spec files must contain name, overview, and requirements.
- Each requirement must include the keyword SHALL or MUST (RFC 2119).
- Every requirement must have at least one scenario.
- Change proposals must have a why field between 50 and 1000 characters.
- Delta specs may contain between 1 and 10 deltas.
These are enforced by Zod schemas, so non‑conforming files fail validation outright.
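The checks above can be approximated with a hand-rolled validator. This sketch mirrors the rules as listed in this article; OpenSpec's real checks are Zod schemas under src/core/validation/, and the field names and error strings here are not the actual schema definitions.

```typescript
// Hand-rolled sketch of the hard gates described above. OpenSpec's real
// checks are Zod schemas under src/core/validation/; the field names and
// error strings here follow the article, not the actual schema definitions.
interface Requirement {
  text: string;
  scenarios: string[];
}

interface Spec {
  name?: string;
  overview?: string;
  requirements?: Requirement[];
}

function validateSpec(spec: Spec): string[] {
  const errors: string[] = [];
  if (!spec.name) errors.push("missing name");
  if (!spec.overview) errors.push("missing overview");
  const reqs = spec.requirements ?? [];
  if (reqs.length === 0) errors.push("missing requirements");
  for (let i = 0; i < reqs.length; i++) {
    // RFC 2119: every requirement needs a normative keyword
    if (!/\b(SHALL|MUST)\b/.test(reqs[i].text)) {
      errors.push(`requirement ${i}: missing SHALL/MUST`);
    }
    if (reqs[i].scenarios.length === 0) {
      errors.push(`requirement ${i}: no scenario`);
    }
  }
  return errors;
}

// Change proposals additionally need a `why` of 50 to 1000 characters.
const whyOk = (why: string): boolean => why.length >= 50 && why.length <= 1000;
```

A spec missing its overview, or a requirement phrased without SHALL/MUST, fails outright rather than being left to the LLM's judgment.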
DAG dependency graph for ordered artifact generation
OpenSpec’s Artifact Graph Engine (src/core/artifact-graph/graph.ts) uses Kahn’s algorithm for topological sorting, ensuring that when a Change Proposal is created, artifacts are generated in a deterministic order. Completion is detected via file-system checks in state.ts (lines 14-29).
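A minimal sketch of Kahn's algorithm over a toy artifact graph follows. The dependency map and artifact names are illustrative; the real engine in src/core/artifact-graph/graph.ts also reports cycles and checks completion against the file system.

```typescript
// Minimal Kahn's-algorithm sketch over a toy artifact graph. The dependency
// map and artifact names are illustrative; the real engine also reports
// cycles and checks completion on disk.
function topoSort(deps: Record<string, string[]>): string[] {
  const indegree = new Map<string, number>();
  for (const node of Object.keys(deps)) indegree.set(node, 0);
  for (const targets of Object.values(deps)) {
    for (const t of targets) indegree.set(t, (indegree.get(t) ?? 0) + 1);
  }
  // Seed with dependency-free nodes, sorted for a deterministic result
  const queue = [...indegree.keys()].filter((n) => indegree.get(n) === 0).sort();
  const order: string[] = [];
  while (queue.length > 0) {
    const node = queue.shift()!;
    order.push(node);
    for (const t of deps[node] ?? []) {
      indegree.set(t, indegree.get(t)! - 1);
      if (indegree.get(t) === 0) queue.push(t);
    }
  }
  if (order.length !== indegree.size) throw new Error("cycle in artifact graph");
  return order;
}

// Hypothetical ordering: proposal before delta spec before task list.
console.log(topoSort({ proposal: ["delta"], delta: ["tasks"], tasks: [] }));
// → ["proposal", "delta", "tasks"]
```

Because the order is computed programmatically, two runs over the same graph always emit artifacts in the same sequence, which is exactly the determinism the prompt-only approach cannot guarantee.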
Three‑level schema resolution
Resolution logic (src/core/artifact-graph/resolver.ts, lines 63-91) prioritises schemas in this order:

1. Project-local .openspec/schemas/ (custom overrides).
2. User-global ~/.openspec/schemas/ (personal preferences).
3. Built-in defaults.
This lets teams extend constraints without modifying the framework code.
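The lookup order can be sketched as a simple fallback chain. `resolveSchemaDir` and its parameters are hypothetical names for illustration, not the real resolver's API.

```typescript
// Sketch of the three-level lookup order. resolveSchemaDir and its
// parameters are hypothetical names; the real logic lives in
// src/core/artifact-graph/resolver.ts (lines 63-91).
function resolveSchemaDir(
  exists: (path: string) => boolean,
  projectRoot: string,
  home: string
): string {
  const candidates = [
    `${projectRoot}/.openspec/schemas`, // 1. project-local override
    `${home}/.openspec/schemas`,        // 2. user-global preference
  ];
  for (const dir of candidates) {
    if (exists(dir)) return dir;
  }
  return "<built-in defaults>";         // 3. framework fallback
}
```

Injecting the `exists` check as a parameter keeps the sketch testable; the first matching level wins, so a project-local schema always shadows a user-global one.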
Delta Spec management for traceable changes
Delta Specs follow a fixed format (ADDED, MODIFIED, REMOVED, RENAMED) and are applied in a hard‑coded order (RENAMED → REMOVED → MODIFIED → ADDED), guaranteeing programmatic handling rather than LLM judgment.
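The fixed phase order can be sketched as a stable sort over delta entries. The `Delta` type and `sortDeltas` helper here are illustrative, not OpenSpec's actual implementation.

```typescript
// Sketch of the hard-coded delta application order. The Delta type and
// sortDeltas helper are illustrative, not OpenSpec's actual implementation.
type DeltaKind = "ADDED" | "MODIFIED" | "REMOVED" | "RENAMED";

interface Delta {
  kind: DeltaKind;
  target: string;
}

const APPLY_ORDER: DeltaKind[] = ["RENAMED", "REMOVED", "MODIFIED", "ADDED"];

function sortDeltas(deltas: Delta[]): Delta[] {
  // Stable sort by the fixed phase order keeps application deterministic
  return [...deltas].sort(
    (a, b) => APPLY_ORDER.indexOf(a.kind) - APPLY_ORDER.indexOf(b.kind)
  );
}
```

Applying renames first, then removals, avoids the classic failure mode where a MODIFIED entry targets a name that a later RENAMED entry would have changed; the order is program logic, not an LLM judgment call.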
OpenSpec’s limitations include a non‑automatic validation step (no pre‑commit hook) and a weak scenario format check; the Verify command remains AI‑driven.
3. Complementary Mechanism: Hard Constraints + Execution Self‑Test
Combining the tools yields a clear division of labour:

- OpenSpec defines *what* to build, with hard, programmatic constraints.
- Superpowers enforces *how* to build it, through a seven-stage, LLM-driven workflow and a two-stage review.
OpenSpec addresses four shortcomings of Superpowers:

- Soft constraints become hard constraints via Zod schemas.
- The missing termination condition is supplied by the archive step, a concrete finish line.
- Cross-task information gaps are bridged by the Delta Spec, which records changes for later lookup.
- Pure LLM review is backed by programmatic validation.

Conversely, Superpowers mitigates three OpenSpec gaps:

- Manual validation is replaced by an enforced seven-stage execution flow.
- The AI-only Verify command gains a detailed two-stage review.
- The missing execution loop is closed by TDD plus review.
4. Minimal Working Flow: Four Verified Steps
The following four‑step workflow has been tested in real projects.
Step 1 – Define specifications with OpenSpec
```bash
# Initialize OpenSpec
npx @fission-ai/openspec init

# Create a spec file
npx @fission-ai/openspec spec
```

The generated spec must contain name, overview, and requirements, with each requirement carrying a SHALL / MUST keyword and at least one scenario.
Step 2 – Create a Change Proposal
```bash
# Create a change proposal
npx @fission-ai/openspec change
```

OpenSpec orders artifact creation via its DAG, and the why field (50-1000 characters) forces clear reasoning. The output is a structured Change Proposal plus a Delta Spec (ADDED/MODIFIED/REMOVED/RENAMED).
Step 3 – Execute implementation with Superpowers
```bash
# Superpowers injects via its session-start hook
# Start work directly in Claude Code
```

Superpowers takes over, following the brainstorming → plan → TDD → review loop. Its spec review uses the Delta Spec from OpenSpec as the authoritative source.
Resource consumption can be bounded by manually setting maximum review rounds (e.g., three spec rounds, two code‑quality rounds) before the OpenSpec archive step.
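One way to impose such a cap is a small wrapper around the review dispatch. This is a hypothetical sketch, assuming a `review` callback that returns true on approval; Superpowers itself has no such bound built in.

```typescript
// Hypothetical wrapper that caps review rounds; Superpowers itself has no
// such bound. Assumes a review callback returning true on approval.
function boundedReview(review: () => boolean, maxRounds: number): boolean {
  for (let round = 1; round <= maxRounds; round++) {
    if (review()) return true; // passed within the round budget
  }
  return false; // budget exhausted: escalate to a human instead of looping
}
```

With three spec rounds and two code-quality rounds as suggested above, a task that cannot converge surfaces as an explicit failure rather than as unbounded cost.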
Step 4 – Validate and archive with OpenSpec
```bash
# Validate changes
npx @fission-ai/openspec validate

# Archive
npx @fission-ai/openspec archive
```

Archiving runs three checks (proposal validation, Delta Spec validation, and spec reconstruction verification) and acts as a quality gate. If Superpowers' review loop is still active, the archive step blocks non-compliant changes.
Configuration files are isolated:
```
project-root/
├── .openspec/
│   ├── config.yml        # OpenSpec config
│   ├── schemas/          # optional custom schemas
│   └── changes/          # change proposals
├── using-superpowers/
│   └── SKILL.md          # Superpowers skill injection
└── .claude/
    └── hooks/
        └── session-start # Superpowers hook
```

Conclusion
AI‑assisted coding quality hinges on two factors: clear definition and reliable execution. Superpowers excels at execution flow (seven stages, two‑stage review, sub‑agent isolation) but relies on prompt‑based rules. OpenSpec provides programmatic definition (Zod validation, DAG management, Delta Spec tracking) but lacks an execution loop.
By pairing OpenSpec’s hard constraints with Superpowers’ structured workflow, teams obtain a double‑layered quality gate: OpenSpec defines *what* to build, Superpowers ensures *how* to build it. The archive step supplies the missing termination condition, preventing infinite review cycles.
For teams already using AI coding assistants, adopting a “spec first, execute later” strategy can dramatically improve code stability and predictability.
Shuge Unlimited