If you are just hearing about GitHub Spec Kit, the shortest explanation is this:
it gives your AI coding workflow structure.
Instead of jumping straight from a vague prompt to a giant code diff, you move through a sequence:
- project principles
- feature specification
- clarification
- technical plan
- task breakdown
- implementation
That sounds simple, but it changes the working rhythm for both engineers and PMs.
- PMs get something reviewable before code starts.
- Engineers get clearer requirements, better boundaries, and fewer “wait, what exactly are we building?” loops.
- AI tools get a much better contract than a single giant prompt.
On April 15, 2026, I rechecked this article against the current GitHub Spec Kit README and docs, the current Cursor product docs and changelog, and the current Claude Code docs. One important note up front:
- the command set and artifact flow are source-backed
- the practical operating model in this article is my synthesis for real teams using Cursor or Claude Code
TL;DR
- GitHub Spec Kit is a structured spec-first workflow for AI-assisted software delivery.
- The day-one split is simple:
  - use specify ... commands in your terminal to install and verify the toolkit
  - use /speckit.* commands inside your AI assistant chat to drive the feature workflow
- A clean beginner flow is:
  - run specify init --here --ai cursor-agent or specify init --here --ai claude
  - run specify check
  - create a feature branch like 001-saved-dashboard-views
  - run /speckit.constitution
  - run /speckit.specify
  - run /speckit.clarify
  - run /speckit.plan
  - run /speckit.tasks
  - optionally run /speckit.checklist, /speckit.analyze, and /speckit.taskstoissues
  - run /speckit.implement
- Cursor is especially strong for planning loops and worktree-heavy exploration.
- Claude Code is especially strong when you want a terminal-first flow or want to kick off longer execution work, including on the web.
- Spec Kit is not only for new features. It is also useful for:
- reverse-engineering existing features
- spec-to-spec migration from a legacy app into a newer app
- planning-first PR review
- polyrepo coordination
Who This Is For
This article is written to read cleanly for both engineers and PMs.
If you are a PM, think of Spec Kit as a way to turn “the thing we want” into a reviewable artifact set before implementation drifts.
If you are an engineer, think of it as a workflow that helps you separate:
- what the feature must do
- what still needs clarification
- how you intend to build it
- what order the work should happen in
What Spec Kit Actually Gives You
The official docs describe Spec-Driven Development as a process where specifications become active delivery artifacts instead of disposable planning notes.
That matters because most AI coding failures are not really “model failures.” They are usually workflow failures:
- the prompt mixed product intent with stack choices too early
- the acceptance criteria never became explicit
- the implementation jumped ahead of unresolved questions
- tasks were never decomposed cleanly
Spec Kit gives you a repeatable shape for avoiding that.
The Simple Mental Model
Here is the easiest way to understand the workflow:
idea
-> constitution
-> feature spec
-> clarification
-> technical plan
-> task breakdown
-> implementation
And here is the important practical detail from the official quick start:
- Spec Kit commands detect the active feature from your current Git branch
So if you are on a branch like:
001-saved-dashboard-views
Spec Kit treats that as the current feature context.
That is one of the reasons the workflow feels natural in engineering teams.
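As a concrete sketch of that convention (illustration only — Spec Kit itself is not invoked here, and the repo is a throwaway created just for the demo):

```shell
# Create a throwaway repo to show the branch-as-feature-context convention.
repo=$(mktemp -d)
cd "$repo"
git init -q

# The numeric prefix conventionally orders features (001, 002, ...).
git switch -c 001-saved-dashboard-views

# This branch name is the feature context /speckit.* commands will detect.
git branch --show-current
```

The point is simply that the branch name is the contract: no extra configuration is needed for Spec Kit to know which feature you are working on.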
Start Here: Cursor or Claude Code?
You can use the same Spec Kit flow with both tools. The difference is mainly where each tool feels strongest.
Cursor is a great front end for planning
Cursor’s current materials support a planning-heavy workflow well:
- Plan Mode can research the codebase, create a plan, and save it as Markdown
- Cursor 3 can run agents in parallel across repos and environments
- Cursor 3 also added a /worktree command so isolated work can happen in separate git worktrees
If your team likes an IDE-native workflow, Cursor is a very comfortable place to drive /speckit.* commands.
Claude Code is a great front end for execution and terminal-first work
Claude Code fits especially well when you want:
- a terminal-first workflow
- explicit shell access
- strong repo-local execution
- the option to move to Claude Code on the web for cloud-run tasks
Anthropic’s docs now make a clear distinction:
- Remote Control keeps the session running on your machine
- Claude Code on the web runs on Anthropic-managed cloud infrastructure
That makes a nice pattern possible:
- plan locally in Cursor or Claude Code
- execute some bounded implementation work in Claude Code on the web
Step 1: Install Spec Kit
The official installation guidance recommends using the GitHub-hosted Spec Kit package, not unrelated packages with a similar name from PyPI.
I recommend a persistent install for day-to-day use:
uv tool install specify-cli --from git+https://github.com/github/spec-kit.git@vX.Y.Z
If you want a one-shot bootstrap instead:
uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init <PROJECT_NAME>
Step 2: Initialize the Repo for Your Assistant
If you want to work in Cursor:
specify init --here --ai cursor-agent
If you want to work in Claude Code:
specify init --here --ai claude
If you are adding Spec Kit into an existing repository, the docs explicitly support initializing in the current directory with . or --here.
What you provide as input
- the repository directory
- the target AI assistant via --ai
- optionally --here, --force, --script, or --branch-numbering
What you should expect as output
- a .specify/ folder with templates, scripts, and memory files
- agent-specific command or skill wiring
- the ability to run /speckit.* commands from the assistant
For example, the upstream README shows a structure like this after the early setup flow:
.specify/
  memory/
    constitution.md
  scripts/
  specs/
    001-create-taskify/
      spec.md
  templates/
Step 3: Verify the Installation
Run:
specify check
specify version
specify check
Input
- your local environment
- installed tools like git and your chosen AI agent CLI
Output
- a readiness check telling you whether the required tools are available
specify version
Input
- none beyond the installed CLI
Output
- the current Spec Kit version so you can confirm you are using the expected build
Step 4: Start on a Feature Branch
The quick start docs are very clear that Spec Kit uses your current Git branch as the feature context.
So before you start feature work, create a branch:
git switch -c 001-saved-dashboard-views
For the rest of this article, I will use one simple running example:
Add saved dashboard views to an analytics product.
The feature idea is small enough to understand, but still realistic enough for PM and engineering collaboration:
- users can save a filtered dashboard view
- they can rename or delete it
- a saved view can optionally be shared with their team
- default and custom views need clear permissions
The Two Command Families
This is the part that confuses most people at first.
1. Terminal commands: specify ...
These are for:
- installation
- environment checks
- project customization
2. Assistant commands: /speckit.*
These are for:
- planning
- clarification
- technical design
- task generation
- implementation
If you remember only one sentence from this section, remember this one:
specify bootstraps the workflow, and /speckit.* runs the workflow.
Command-by-Command Guide
Below is the simple, practical map of the main Spec Kit commands.
Bootstrap Commands
specify init
This is the command you run first.
What you give it
- a project path or --here
- your assistant selection such as --ai cursor-agent or --ai claude
- optional setup flags like --force, --script, or --branch-numbering
What you get back
- the Spec Kit project scaffolding
- templates and scripts
- agent integration for the slash-command workflow
Example
specify init --here --ai cursor-agent
Or:
specify init --here --ai claude
specify check
Use this right after initialization.
What you give it
- your current machine state
What you get back
- confirmation that required tools like git and your selected assistant tooling are present
Example
specify check
specify version
Use this when you want to verify the installed version.
What you give it
- nothing
What you get back
- the installed Spec Kit version string
Example
specify version
specify extension
This is not usually a day-one command, but it matters later.
What you give it
- an extension management action
- the extension you want to add, update, or manage
What you get back
- added or updated extension-driven capabilities
The official CLI reference lists this command, but the basic quick start does not walk through a full beginner example. So I treat this as a customization command, not a day-one delivery command.
specify preset
This is for preset management.
What you give it
- a preset action
- the preset you want to use or manage
What you get back
- a customized project-level template layer on top of core Spec Kit behavior
specify integration
This is for integration management.
What you give it
- an integration action
- the integration target
What you get back
- configured integration support for the project
Like extension and preset, this is a platform-management command rather than a first feature-delivery command.
Workflow Commands
Now we get to the commands most people mean when they say “Spec Kit commands.”
/speckit.constitution
This command sets the project principles that later specs and plans should respect.
What you should input
- durable project rules
- team constraints
- engineering principles
- product or compliance constraints that should shape future features
What you should not input
- a bunch of temporary implementation details for one feature
What it outputs
- a project constitution, typically stored in .specify/memory/constitution.md
Simple example
/speckit.constitution
This product is audit-first and reliability-first.
All user-visible state changes must be attributable.
We prefer small reversible changes over large migrations.
All new features must include acceptance criteria and monitoring expectations.
Why it matters
This is the command that stops later AI output from drifting into “whatever seems convenient.”
For brownfield repos, I strongly recommend deriving this from repo evidence, not from aspirational slogans. I covered that in more detail in my companion article on integrating GitHub Spec Kit into existing repositories with Cursor.
/speckit.specify
This is where you describe the feature in business and user terms.
What you should input
- what users need
- why it matters
- the desired outcomes
- key acceptance boundaries
What you should avoid here
- detailed stack choices
- implementation architecture
- table schemas
- framework arguments
What it outputs
- a new feature specification directory on the active feature branch
- a spec.md containing user stories and functional requirements
Simple example
/speckit.specify
Add saved dashboard views for analysts and managers.
Users should be able to save the current filters and layout as a named view,
load a saved view later, rename it, delete it, and optionally share it with
their team. Shared views should respect team boundaries and role permissions.
Expected output shape
.specify/specs/001-saved-dashboard-views/spec.md
Or, depending on project layout conventions, the equivalent feature spec directory for the active branch.
/speckit.clarify
This command is for ambiguity removal.
The README describes it as a structured clarification pass that records answers in a Clarifications section.
What you should input
- unclear areas
- tradeoffs that need an explicit decision
- rollout, security, permissions, failure behavior, or data retention questions
What it outputs
- clarified answers recorded back into the spec artifact set
- a tighter, less ambiguous feature definition before planning
Simple example
/speckit.clarify
Focus on team-sharing permissions, rename/delete permissions, default-view behavior,
and how private views behave when a user changes teams.
Why it matters
This is where PM and engineering alignment gets much better. Instead of arguing late during implementation, you make the unresolved questions visible before the technical plan hardens.
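As an illustration of what this stage produces — the exact headings are generated by Spec Kit and may differ, and these questions and answers are invented for the running example:

```markdown
## Clarifications

- Q: Can a member rename a view shared by someone else?
  A: No. Only the owner can rename or delete; team admins can delete.
- Q: What happens to private views when a user changes teams?
  A: Private views stay with the user; team-shared views stay with the team.
- Q: Is there a per-user limit on saved views?
  A: Yes, 50 per user for the initial release.
```

Each answered question here is a disagreement that never has to happen in a PR review.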
/speckit.checklist
This command is best understood as a quality-review aid.
The official description frames it as a way to generate quality checklists that validate requirements completeness, clarity, and consistency.
What you should input
- usually the current spec context
- optionally the quality lens you want to validate
What it outputs
- a checklist-style validation artifact or review surface for the current spec
Simple example
/speckit.checklist
Generate a checklist for requirement clarity, edge-case completeness, and
review readiness for PM and engineering signoff.
Practical interpretation
I think of this as “unit tests for the English before you write the code.”
/speckit.plan
This is where you shift from what to how.
What you should input
- your chosen stack
- architecture constraints
- runtime assumptions
- data storage choices
- integration points
What it outputs
- plan.md
- supporting implementation detail documents such as research.md, quickstart.md, data-model.md, and contracts/*, depending on the feature
The README shows an output tree like this:
specs/001-create-taskify/
  contracts/
    api-spec.json
    signalr-spec.md
  data-model.md
  plan.md
  quickstart.md
  research.md
  spec.md
Simple example
/speckit.plan
Use Next.js for the web application, Postgres for persistence, and a simple
REST API. Saved views should support private and team-shared visibility.
Track ownership, sharing scope, created_at, updated_at, and last_used_at.
Why it matters
This is the handoff point from product intent to technical design.
/speckit.tasks
This command turns the plan into executable work.
The README is unusually concrete here. It says the generated tasks.md includes:
- task breakdown by user story
- dependency ordering
- parallel markers like [P]
- exact file paths
- TDD-friendly task sequencing when requested
- checkpoints for independent validation
What you should input
- usually nothing more than the current artifact set
- optionally a request to emphasize sequencing or testing
What it outputs
- tasks.md for the active feature
Simple example
/speckit.tasks
Expected output
specs/001-saved-dashboard-views/tasks.md
This is the artifact that makes parallel implementation much safer.
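To make that concrete, here is a hypothetical tasks.md excerpt for the running example. The task IDs, phase names, and file paths are invented for illustration; the real generated file follows the structure the README describes:

```markdown
## Phase 1: Setup

- [ ] T001 Create the saved_views migration in db/migrations/
- [ ] T002 [P] Add the SavedView model in src/models/saved-view.ts

## Phase 2: User Story 1 - Save a view (Priority: P1)

- [ ] T003 [P] Write failing API tests in tests/api/saved-views.test.ts
- [ ] T004 Implement POST /api/saved-views in src/api/saved-views.ts
- [ ] T005 Wire the save action in src/components/dashboard-toolbar.tsx

Checkpoint: a user can save and reload a view before sharing work begins.
```

The [P] markers are what let you hand T002 and T003 to different agents or teammates without them colliding.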
/speckit.analyze
This is the cross-artifact review pass.
The official command description frames it as cross-artifact consistency and coverage analysis, and the README recommends running it after /speckit.tasks and before /speckit.implement.
What you should input
- usually the current artifact set
- optionally a request to focus on risk, missing dependencies, or contradictions
What it outputs
- an analysis pass highlighting gaps, inconsistencies, or missing coverage across the spec, plan, and tasks
Simple example
/speckit.analyze
Look for mismatches between permission rules in the spec, the data model,
and the task breakdown.
Why it matters
This is the command that catches “the plan sounds good, but it no longer matches the feature we agreed on.”
/speckit.taskstoissues
This is the bridge from planning artifacts to GitHub issue tracking.
The official description is concise: it converts generated task lists into GitHub issues for tracking and execution.
What you should input
- a completed tasks.md
- access to the relevant GitHub repo and tracking workflow
What it outputs
- GitHub issues or issue-ready tracking artifacts derived from the task list
Simple example
/speckit.taskstoissues
Important note
The high-level command purpose is documented clearly, but the quick-start docs do not give the same detailed walkthrough here that they give for plan, tasks, and implement. So I treat this as a very useful bridge command, but not one I would force into every beginner flow on day one.
/speckit.implement
This is the execution phase.
The README says this command will:
- validate prerequisites
- parse tasks.md
- execute tasks in order
- respect dependencies and parallel markers
- follow the TDD structure defined in the task plan
- provide progress updates and handle errors
What you should input
- a completed artifact set
- a clean enough local environment to run the required tooling
What it outputs
- actual implementation work in the repository
- progress updates while the assistant executes the plan
Simple example
/speckit.implement
Implement the feature in thin slices and stop to report if the permission model
requires a schema or API decision not covered in the current plan.
Important note
This is not just a text-generation step. It can drive real local commands, so your environment needs the actual language runtimes and tools that the implementation expects.
A Simple End-to-End Beginner Flow
If you only want one clean beginner path, I would use this:
In Cursor
specify init --here --ai cursor-agent
specify check
git switch -c 001-saved-dashboard-views
Then in Cursor chat:
/speckit.constitution
Our product is reliability-first and audit-first.
All user-visible changes need acceptance criteria.
We prefer incremental delivery.
/speckit.specify
Add saved dashboard views so users can preserve filters and layout, return to
them later, and optionally share them with their team.
/speckit.clarify
Focus on permissions, shared-view ownership, and default-view behavior.
/speckit.plan
Use the existing Next.js app and Postgres database. Prefer small API changes,
clear ownership rules, and reversible rollout.
/speckit.tasks
/speckit.analyze
/speckit.implement
In Claude Code
The same flow works, just with Claude Code as the chat surface:
specify init --here --ai claude
specify check
git switch -c 001-saved-dashboard-views
claude
Then run the same /speckit.* sequence inside Claude Code.
If you like a hybrid flow, a strong pattern is:
- do constitution, specify, clarify, and sometimes plan in Cursor
- validate and execute tasks, analyze, and implement in Claude Code
That is not a required pattern. It is just a practical one.
What Files Should You Expect as You Move Through the Flow?
Here is the simplest mental model:
.specify/
  memory/
    constitution.md
  specs/
    001-saved-dashboard-views/
      spec.md
      plan.md
      tasks.md
      research.md
      quickstart.md
      data-model.md
      contracts/
Not every feature will generate every supporting file in the same way, but this is the general shape documented in the upstream examples.
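If you want a quick sanity check that the core artifact set exists before moving to implementation, a small shell pass works. This is a sketch, not part of Spec Kit: the paths assume the layout above, and some setups keep specs/ at the repo root instead of under .specify/.

```shell
# Sketch: verify the core Spec Kit artifacts exist for one feature.
# Adjust FEATURE and the paths to match your repo's layout.
FEATURE=001-saved-dashboard-views

check_artifacts() {
  for f in \
    ".specify/memory/constitution.md" \
    ".specify/specs/$FEATURE/spec.md" \
    ".specify/specs/$FEATURE/plan.md" \
    ".specify/specs/$FEATURE/tasks.md"
  do
    if [ -f "$f" ]; then
      echo "ok      $f"
    else
      echo "missing $f"
    fi
  done
}

check_artifacts
```

Anything reported missing is usually a sign that a workflow phase was skipped rather than that generation failed.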
What Else Can You Do with Spec Kit?
This is where Spec Kit becomes more interesting than a beginner tutorial.
1. Reverse-engineer an existing feature
You do not have to start from greenfield.
One of the most valuable brownfield uses is:
- pick one existing feature
- gather evidence from code, tests, routes, and UI
- reconstruct the current-state spec.md, plan.md, and tasks.md
I wrote a full walkthrough for that here:
This is especially good for:
- onboarding
- safer refactors
- PM and engineering alignment on legacy behavior
2. Do spec-to-spec migration from a legacy app to a newer app
This is one of my favorite modern use cases.
Instead of copying old code directly into a new stack, you can:
- reverse-engineer the old feature into current-state specs
- treat those specs as a behavior contract
- write a new target-state plan in the new application
- implement parity slices instead of copying legacy structure
I covered that in depth here:
For many teams, that is a safer modernization path than code-first porting.
3. Use Spec Kit as the planning backbone for parallel delivery
Once you have tasks.md, you can split work more safely across:
- worktrees
- AI agents
- human teammates
- separate PRs
That is where Spec Kit becomes more than documentation. It becomes the planning contract for parallel execution.
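For example, a minimal git worktree split for two independent task groups might look like this. The repo, branch names, and directory names are all illustrative; the pattern is just one isolated checkout per parallelizable slice from tasks.md:

```shell
# Sketch: one worktree per parallelizable slice from tasks.md.
base=$(mktemp -d)
git init -q "$base/main"
cd "$base/main"

# Worktrees need at least one commit to branch from.
git -c user.email=demo@example.com -c user.name="Demo" \
  commit --allow-empty -m "init" -q

# Isolated checkouts so agents or teammates do not trample each other.
git worktree add -b 001-saved-views-api "$base/views-api"
git worktree add -b 001-saved-views-ui  "$base/views-ui"

git worktree list
```

Each worktree gets its own branch and working directory while sharing one object store, which is exactly the isolation you want when two agents implement different task groups at once.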
Related reading:
4. Use it in brownfield or polyrepo environments
Spec Kit is not only for a clean greenfield monorepo.
A practical setup for polyrepo work is often:
- one canonical spec home
- one shared feature ID
- one repo-specific implementation stream per repository
I wrote more about that here:
5. Route different phases to different models or tools
Not every phase benefits from the same tool or model.
A practical split can be:
- faster drafting and clarification in Cursor
- deeper validation or execution in Claude Code
- different model choices for specify, plan, and implement
I explored that here:
Modern Usage Patterns I Recommend
These are not official rules. They are the patterns I think hold up best.
PM writes the “why,” engineering writes the “how”
Use:
- PM input heavily in /speckit.specify
- engineering leadership heavily in /speckit.plan
That keeps the spec focused and the plan grounded.
Treat /speckit.clarify as a real review stage
Do not rush past it.
This is often where the most expensive misunderstandings are still cheap to fix.
Keep the planning PR separate from the implementation PR when the feature is large
For bigger features, I like:
- PR 1 for constitution/spec/plan/tasks
- PR 2 and beyond for implementation slices
That is not required, but it makes review quality much better.
Use SPECIFY_FEATURE only when you really need to override branch-based detection
The CLI reference documents SPECIFY_FEATURE for non-Git workflows or unusual situations. Most teams should rely on the branch-first default unless they have a strong reason not to.
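When you do need the override, it is just an environment variable set before you launch the assistant. A sketch, assuming the value matches an existing feature directory in your repo:

```shell
# Sketch: override branch-based feature detection for this shell session.
# Useful in non-Git directories or detached-HEAD situations.
export SPECIFY_FEATURE=001-saved-dashboard-views
echo "Feature context override: $SPECIFY_FEATURE"
```

Because it is session-scoped, forgetting to unset it is a common source of "why is it editing the wrong feature?" confusion — another reason to prefer the branch-first default.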
Common Mistakes
These are the beginner mistakes I see most often:
Putting the stack into /speckit.specify
That weakens the separation between intent and design.
Skipping /speckit.clarify
You save five minutes and often lose much more later.
Treating the first generated plan as final
You should review and refine the plan.
Jumping into /speckit.implement before tasks.md is trustworthy
Spec Kit works better when the task breakdown is good enough to guide execution.
Trying to use one giant spec for a whole subsystem
The workflow is usually strongest when you scope one bounded feature at a time.
Final Takeaway
If you are new to GitHub Spec Kit, do not overcomplicate the first run.
Start with one feature, on one branch, with one clear /speckit.* sequence.
Then, once that feels natural, expand into the more advanced patterns:
- brownfield reverse-engineering
- migration planning
- parallel delivery
- multi-tool workflows across Cursor and Claude Code
The real value of Spec Kit is not that it creates more files.
The real value is that it gives engineers, PMs, and AI tools the same working artifact set before implementation starts drifting.
Source List
Validated against these sources on April 15, 2026.
Official sources
- GitHub Spec Kit README
- GitHub Spec Kit Quick Start Guide
- GitHub Spec Kit Installation Guide
- Cursor blog: Introducing Plan Mode
- Cursor changelog: New Cursor Interface 3.0
- Claude Code docs: Getting Started
- Claude Code docs: Remote Control
Companion articles in this repo
- How to Integrate GitHub Spec Kit into Existing Repositories with Cursor IDE and Cursor Agent
- How to Reverse-Engineer an Existing Feature into GitHub Spec Kit Artifacts
- Before You Copy Legacy Code: A Spec-to-Spec Migration Workflow for Modernizing One Feature
- From Spec to Parallel Delivery: Spec Kit, Cursor, Beads, and Claude Code on a Real Feature
- A Phase-by-Phase Model Strategy for GitHub Spec Kit in 2026