OpenSpec + Superpowers Integration: 3 Connection Points Tested, 2 Failed – A Hands‑On Review
This article documents a complete hands‑on experiment linking OpenSpec and Superpowers, showing that while the initial spec proposal works, three critical integration points break—two fail outright and one never triggers—leaving the envisioned seamless, spec‑driven development pipeline unachievable.
1. Environment Setup
Install Node.js (>= 20.19.0) and the OpenSpec CLI:
# Check Node version
node --version
# Install OpenSpec globally
npm install -g @fission-ai/openspec@latest
# Verify installation
openspec --version
openspec --help
Install Superpowers according to your AI‑coding tool. For Claude Code:
# Install Superpowers plugin in Claude Code
/plugin install superpowers@claude-plugins-official
# Reload plugins
/reload-plugins
Other platforms (Cursor, Gemini CLI, OpenAI Codex) follow the instructions in the Superpowers GitHub README.
2. Step 1 – Define Spec
Run the OpenSpec propose command: /opsx:propose
Example change description:
Create a Todo REST API with Express + TypeScript, supporting CRUD operations on an in‑memory array. Each task has id, title, and completed fields.
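For concreteness, here is a minimal sketch of the in‑memory store that description implies. The function names are illustrative assumptions, not code that OpenSpec generates; the Express routing layer is omitted.

```typescript
// Minimal in-memory task store matching the proposed change:
// each task has id, title, and completed fields.
interface Task {
  id: number;
  title: string;
  completed: boolean;
}

const tasks: Task[] = [];
let nextId = 1;

// Create a task; completed defaults to false.
function createTask(title: string): Task {
  const task: Task = { id: nextId++, title, completed: false };
  tasks.push(task);
  return task;
}

// Patch title and/or completed; returns undefined for unknown ids.
function updateTask(
  id: number,
  patch: Partial<Omit<Task, "id">>
): Task | undefined {
  const task = tasks.find((t) => t.id === id);
  if (!task) return undefined;
  Object.assign(task, patch);
  return task;
}

// Remove a task; returns false if no task had that id.
function deleteTask(id: number): boolean {
  const index = tasks.findIndex((t) => t.id === id);
  if (index === -1) return false;
  tasks.splice(index, 1);
  return true;
}
```

In a real run, the spec proposal (not this sketch) is what drives the generated design and tasks.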
OpenSpec generates four artifacts:
proposal.md – intent, scope, method
specs/ – incremental specs using WHEN/THEN format (no GIVEN)
design.md – technical design, API definitions, open questions
tasks.md – coarse‑grained task list (4 groups, 11 subtasks)
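To illustrate the WHEN/THEN style used in specs/, an incremental spec entry might look roughly like the following. This is a hand-written sketch; the exact headings and wording of OpenSpec's generated output may differ.

```
### Requirement: Create Todo
#### Scenario: Create a task with a title
- WHEN a client POSTs { "title": "Buy milk" } to /todos
- THEN the API responds 201 with { id, title, completed: false }
```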
Validate the spec:
openspec validate todo-rest-api
# Output: Change 'todo-rest-api' is valid
3. Step 2 – Superpowers Closed‑Loop Development
3.1 Brainstorming (✅ works)
Trigger brainstorming in the AI‑coding assistant: brainstorming
Provide a prompt referencing the OpenSpec artifacts. The assistant reads the proposal and specs and asks clarifying questions (e.g., about timestamps and error‑handling middleware).
3.2 Writing Plans (❌ broken point 1)
After brainstorming, run: writing-plans
The generated plan contains 7 fine‑grained tasks and does **not** use tasks.md as its skeleton. This creates two independent task‑tracking systems that never sync.
3.3 Sub‑agent‑Driven Development (✅ flow runs)
Execute the full sub‑agent pipeline: subagent-driven-development
The process follows these steps:
Read the plan and extract all tasks.
Assign each task to an implementer sub‑agent.
After implementation, hand off to a spec‑compliance reviewer sub‑agent.
Then to a code‑quality reviewer sub‑agent.
If a review fails, the implementer fixes and the cycle repeats.
When all tasks are DONE, a final code‑review sub‑agent runs.
Status codes reported by sub‑agents include DONE, DONE_WITH_CONCERNS, NEEDS_CONTEXT, and BLOCKED.
3.4 Integration Points – Test Results
Point 1: The spec reviewer checks the generated plan file, not the OpenSpec specs/ directory, so the detailed WHEN/THEN scenarios are never used.
Point 2: Mapping of WHEN/THEN scenarios to TDD tests is theoretical; the sub‑agent pipeline never invokes the TDD skill, so no test generation occurs.
Point 3: The tasks.md artifact is ignored by writing‑plans; the plan’s tasks are unrelated, breaking the spec‑to‑implementation link.
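Because of point 2, mapping WHEN/THEN scenarios to tests currently has to be done by hand. A sketch of translating one hypothetical scenario into a TDD-style test (the helper names are assumptions, not output of either tool):

```typescript
// Hypothetical scenario from specs/:
// WHEN a task is created with a title,
// THEN it gets an id and completed defaults to false.
interface Task {
  id: number;
  title: string;
  completed: boolean;
}

let nextId = 1;
function createTask(title: string): Task {
  return { id: nextId++, title, completed: false };
}

// Hand-written test for the scenario above; throws on failure.
function testCreateTaskScenario(): void {
  const task = createTask("Buy milk");
  if (task.id !== 1) throw new Error("WHEN created THEN task has an id");
  if (task.completed !== false) throw new Error("THEN completed defaults to false");
}

testCreateTaskScenario();
```

Writing such tests before implementation is exactly what the TDD skill is supposed to automate; until it is triggered, the WHEN/THEN scenarios stay unused.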
4. Step 3 – Verification and Archiving
4.1 OpenSpec Verification
Run:
/opsx:apply
openspec status --json
# Output contains "isComplete": true, indicating the artifacts exist
Note: apply only checks artifact presence; it does **not** read tasks.md or verify individual task completion.
4.2 Superpowers Verification
After sub‑agent development, ensure the verification‑before‑completion skill runs (evidence before assertion). Code‑review is performed by the requesting‑code‑review skill.
4.3 Archiving
Archive a completed change:
openspec archive todo-rest-api
# Moves changes/todo-rest-api/ to changes/archive/
Archiving does **not** merge incremental specs into openspec/specs/; the main spec directory remains empty unless manually updated.
5. Troubleshooting
Pitfall 1 – Review Loop Deadlock
Symptoms: spec reviewer and code‑quality reviewer keep finding issues, causing endless cycles.
Root cause: ambiguous specifications.
Pause and check the reviewer’s baseline (plan vs. specs).
Make WHEN/THEN scenarios concrete.
If more than three review rounds occur, stop and refine the spec before retrying.
Pitfall 2 – Divergent Task Tracking Systems
Symptoms: OpenSpec tasks.md and Superpowers plan tasks differ; apply reports success while many tasks remain unchecked.
Root cause: each tool maintains its own task list.
After writing‑plans, manually compare with OpenSpec tasks.md.
If discrepancies are large, adjust the plan or tasks accordingly.
Do not rely on apply for true completion; verify manually.
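One way to make the manual comparison less error‑prone is a small script that counts checked and unchecked items in each file. This is a sketch that assumes both tasks.md and the plan use standard markdown checkboxes (- [ ] / - [x]):

```typescript
// Count markdown task checkboxes so OpenSpec's tasks.md and the
// Superpowers plan can be compared at a glance.
function countCheckboxes(markdown: string): { done: number; open: number } {
  const done = (markdown.match(/^\s*[-*] \[[xX]\]/gm) ?? []).length;
  const open = (markdown.match(/^\s*[-*] \[ \]/gm) ?? []).length;
  return { done, open };
}

// Example input shaped like a tasks.md fragment.
const tasksMd = `
- [x] 1.1 Scaffold Express app
- [ ] 1.2 Define Task model
- [ ] 2.1 Implement CRUD routes
`;

console.log(countCheckboxes(tasksMd)); // { done: 1, open: 2 }
```

Run it against both files; if the totals diverge significantly, reconcile the plan and tasks.md before trusting any completion report.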
Pitfall 3 – Sub‑agents Reporting NEEDS_CONTEXT
Symptoms: sub‑agents return NEEDS_CONTEXT or BLOCKED, halting progress.
Root cause: insufficient context supplied to sub‑agents.
Ensure design.md contains full technical details (interfaces, data structures, error handling, edge cases).
Confirm specs/ covers normal and exceptional flows.
Add missing project background to the context field in config.yaml and refresh the agent with openspec update.
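For example, the missing background could be supplied roughly like this. The field name follows the article's description of config.yaml; treat the exact schema as an assumption and verify it against your OpenSpec version.

```yaml
# openspec/config.yaml (sketch; verify the schema before relying on it)
context: |
  Express + TypeScript Todo REST API.
  Tasks live in an in-memory array; each task has id, title, completed.
  Errors are returned as JSON via a central error-handling middleware.
```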
6. Summary
OpenSpec’s propose stage reliably produces high‑quality artifacts (proposal, specs, design, tasks), and Superpowers’ sub‑agent‑driven development runs smoothly. However, the automatic end‑to‑end pipeline fails because:
writing‑plans ignores OpenSpec tasks.md, creating a separate task list.
The spec reviewer evaluates the generated plan instead of the OpenSpec specs/ scenarios.
apply only checks artifact existence and does not verify task completion.
Archiving does not merge incremental specs into the main openspec/specs/ directory.
Practical approach: use OpenSpec for requirement/spec definition and Superpowers for sub‑agent‑driven implementation, but align the artifacts manually between the two tools.
Shuge Unlimited
Formerly "Ops with Skill", now officially upgraded. Fully dedicated to AI, we share both the why (fundamental insights) and the how (practical implementation). From technical operations to breakthrough thinking, we help you understand AI's transformation and master the core abilities needed to shape the future. ShugeX: boundless exploration, skillful execution.