If you came here searching for “Speckit GitHub”, the official name is GitHub Spec Kit. I am going to use the official name throughout this article so the commands and docs line up cleanly.
On April 7, 2026, I rechecked this workflow against the current GitHub Spec Kit README and docs, the current Cursor docs, and a few community discussions that expose the most common brownfield and polyrepo questions. One important note up front: the installation steps and tool capabilities below are source-backed; the end-to-end operating model is my synthesis.
This matters because the hard part is no longer “can these tools exist in the same repo?” The hard part is how to use them without turning your existing repository, your PR flow, or your multi-repo coordination into a mess.
TL;DR
- Yes, you can add GitHub Spec Kit to an existing repository. The current Spec Kit docs explicitly support initializing in the current directory with specify init --here, and the README now explicitly supports Cursor with --ai cursor-agent.
- For a brownfield repo, the safest setup is: commit first, back up your constitution and templates if customized, then run specify init --here --ai cursor-agent.
- Cursor fits well in two places:
- Cursor IDE for clarification, planning, and repo-aware editing
- Cursor Background Agents / GitHub app for bounded async execution and PR loops
- For polyrepo work, the cleanest pattern is usually one canonical spec home, one shared feature ID, and one agent/PR per repository instead of trying to make one agent edit several unrelated repos at once.
- The biggest gaps teams hit are predictable: where the canonical spec should live, how much the spec should evolve after coding starts, what gets overwritten on upgrade, and how to keep shared guidance consistent across repos.
What You Will Learn Here
- How to add GitHub Spec Kit to an existing repository without clobbering your codebase
- How to make Cursor IDE a practical front end for the Spec Kit workflow
- How Cursor Agent and Cursor’s GitHub integration fit into the implementation loop
- How I would structure a polyrepo workflow when one feature spans multiple repositories
- Which practices are broadly safe, and which ones still need team judgment
- The most common questions engineers and PMs will ask before adopting this workflow
What the Sources Actually Confirm
Before we talk process, let’s separate the documented facts from the workflow design decisions.
1. GitHub Spec Kit works in the current directory
The installation guide explicitly supports:
uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init .
# or
uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init --here
That means Spec Kit is not limited to greenfield repos.
The upgrade guide is even clearer: seeing a warning about a non-empty directory is expected when adding Spec Kit to an existing codebase. It also explicitly says your specs/ directory, source code, and git history stay untouched during the merge. The overwrite risk is about Spec Kit infrastructure files, not your application code.
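Before running the init in a repo that may already contain Spec Kit files, it can help to see what is there. A minimal sketch, not an official Spec Kit command; the paths are the defaults from the Spec Kit layout:

```shell
# Sketch: list which Spec Kit-managed paths already exist, so you know
# what `specify init --here --force` could touch. The helper name is
# mine; the paths are the default Spec Kit scaffolding locations.
check_speckit_files() {
  for path in .specify/memory/constitution.md .specify/templates .cursor/commands; do
    if [ -e "$path" ]; then
      printf 'exists: %s\n' "$path"
    else
      printf 'absent: %s\n' "$path"
    fi
  done
}
check_speckit_files
```

Anything reported as existing is worth diffing after the init, since the overwrite risk sits in these infrastructure files rather than in your application code.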
2. Cursor is now an explicit Spec Kit target
One subtle but important detail from the source audit:
- the installation page still lists only a smaller set of example AI agents
- the current README is the more complete source of truth and explicitly marks Cursor as supported
- the README also shows the direct initialization example:
specify init my-project --ai cursor-agent
So if you are adopting this in 2026, I would trust the README and CLI reference more than older setup snippets on the docs site.
3. Cursor reads the same kinds of repo guidance you already care about
Cursor’s docs and CLI snippets are important here because they support a practical shared-guidance model:
- Cursor supports project rules in .cursor/rules
- Cursor also supports AGENTS.md as a simpler markdown instruction file
- Cursor CLI reads AGENTS.md, CLAUDE.md, and .cursor/rules
That is what makes a Spec Kit + Cursor workflow viable in real repos: you are not limited to slash commands. You can layer stable repo guidance on top.
4. Cursor’s GitHub app is designed for background execution and PR loops
Cursor’s GitHub docs explicitly say the GitHub app is required for Background Agents and BugBot to clone repositories and push changes. The same docs also say you can trigger work from a PR or issue comment with:
@cursor [prompt]
That is a strong fit for repo-scoped implementation after the spec and plan are already in place.
5. Polyrepo workflow is not fully prescribed by any one vendor
This is where we have to be honest.
Spec Kit documents the building blocks:
- feature-scoped specs
- slash commands and structured phases
- branch numbering strategies
- SPECIFY_FEATURE for non-standard setups
Cursor documents the execution surfaces:
- IDE rules and commands
- Background Agents
- GitHub integration
Git documents the isolation primitive:
git worktree
What none of these sources fully prescribe is: where the canonical spec lives when one feature touches three repositories and two teams.
That part needs an operating model. The rest of this article is my recommendation for one that stays close to the documented primitives.
Step 1: Add Spec Kit to an Existing Repository Safely
If your repository already has production code, CI, and team conventions, do not treat specify init like a throwaway demo command.
Use this sequence instead:
# 1. Install or run a pinned Spec Kit release
uv tool install specify-cli --force --from git+https://github.com/github/spec-kit.git@vX.Y.Z
# 2. Inside the existing repository, add Spec Kit for Cursor
specify init --here --ai cursor-agent
# 3. Verify the setup
specify check
ls -la .cursor/commands
ls -la .specify/scripts
If the repo already contains old Spec Kit files or you are upgrading, then use:
specify init --here --force --ai cursor-agent
But only after backing up any customizations in:
.specify/memory/constitution.md
.specify/templates/
That backup matters because the upgrade guide explicitly warns that the constitution file is currently overwritten by the default template during upgrade.
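A minimal backup sketch for those two paths. The backup_speckit_customizations helper name is mine, not a Spec Kit command:

```shell
# Sketch: snapshot Spec Kit customizations before an upgrade, since the
# upgrade guide warns the constitution is overwritten by the default
# template. Restores are then a simple copy back after `specify init`.
backup_speckit_customizations() {
  stamp=$(date +%Y%m%d-%H%M%S)
  dest=".specify-backup-$stamp"
  mkdir -p "$dest"
  [ -f .specify/memory/constitution.md ] && cp .specify/memory/constitution.md "$dest/"
  [ -d .specify/templates ] && cp -R .specify/templates "$dest/"
  echo "$dest"   # print the backup location for the restore step
}
```

After the upgrade, diff the new constitution against the backup before deciding what to restore.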
Brownfield rule of thumb
For an existing repo, I would treat Spec Kit initialization like a small infrastructure change:
- Commit your current branch first.
- Add Spec Kit.
- Inspect the diff.
- Restore your customized constitution if needed.
- Commit the scaffolding before starting the first spec.
That keeps the tooling bootstrap separate from the actual feature work.
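The sequence above can be sketched with plain git. The specify init call is left as a comment because it requires the Spec Kit CLI; placeholder files stand in for the real scaffolding:

```shell
# Sketch: keep the Spec Kit bootstrap on its own commit, separate from
# feature work. The `specify init` line is commented out because it
# needs the CLI installed; placeholder files stand in for scaffolding.
git init -q speckit-demo && cd speckit-demo
git config user.name demo && git config user.email demo@example.com
echo 'app code' > app.py
git add . && git commit -qm "baseline before Spec Kit"
# specify init --here --ai cursor-agent   # <- run the real init here
mkdir -p .specify .cursor/commands         # stand-ins for the scaffolding
touch .specify/placeholder .cursor/commands/placeholder
git add .
git status --porcelain                     # inspect exactly what init added
git commit -qm "chore: add Spec Kit scaffolding"
```

The payoff is a clean history: one commit for your code baseline, one for the tooling, so the first real feature spec starts from a reviewable state.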
How I Would Generate a Constitution for an Existing Project
This is one of the biggest brownfield questions, and the source audit is useful here because it tells us both what exists and what does not.
The official quick start clearly expects you to define the constitution early with /speckit.constitution. At the same time, community issues show there is still strong demand for better reverse-engineering workflows for existing codebases. So today the safest pattern is not “push one magic button.” It is:
- use the existing project as evidence
- let Cursor help synthesize the first draft
- review it like an architecture document, not like autocomplete
That recommendation is my inference from the sources, not a first-party GitHub Spec Kit command contract.
Start from evidence, not aspirations
For a brownfield project, the first constitution draft should come from the things your team is already enforcing or repeatedly debating:
- README.md and architecture docs
- .cursor/rules, AGENTS.md, and existing agent instructions
- CI workflows and required checks
- lint, typecheck, test, and formatting setup
- deployment manifests, infra folders, and environment conventions
- security or compliance rules already visible in code and process
- recurring review comments and “we always do it this way” team norms
In other words: do not ask the agent to invent an ideal company. Ask it to extract the durable rules your repo is already signaling.
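One hedged way to make that concrete: enumerate which evidence sources the repo actually has before prompting. The file names below are common conventions plus the Cursor-documented guidance files, not a required layout:

```shell
# Sketch: list the evidence sources present in this repo so the
# constitution prompt can reference real files instead of assumptions.
# The candidate paths are conventions, not a mandated structure.
list_evidence() {
  for src in README.md AGENTS.md .cursor/rules .github/workflows docs; do
    [ -e "$src" ] && printf 'evidence: %s\n' "$src"
  done
  true   # a missing last candidate should not make the function fail
}
list_evidence
```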
Ask Cursor to draft, but constrain the task hard
I would use a prompt like this in Cursor:
/speckit.constitution
Analyze this existing repository before drafting the constitution.
Use the current codebase, .cursor/rules, AGENTS.md, CI workflows, tests,
deployment files, and repo docs as evidence.
Create 6-10 durable principles only.
Separate project-wide invariants from feature-specific behavior.
Use MUST/SHOULD language.
For each principle, include:
- the rule
- why it exists in this project
- what kinds of future specs it should influence
Do not include temporary implementation details, current library preferences
unless they are a real team constraint, or aspirational best practices that are
not already supported by the repo.
That prompt matters because a weak constitution draft usually fails in one of two ways:
- it becomes generic and useless
- it becomes a dump of today’s implementation details
You want neither. A good brownfield constitution should survive several future features.
What belongs in the constitution
For most existing projects, I would limit the constitution to categories like:
- architectural boundaries that should rarely be violated
- quality gates that apply to almost every change
- security and data-handling rules
- compatibility, migration, or rollout constraints
- documentation or observability rules that materially shape implementation
Examples:
- Public API changes MUST be backward compatible for one release cycle.
- Database schema changes MUST be deploy-safe and rollback-aware.
- New external dependencies SHOULD be avoided unless there is a clear gap in the current stack.
- User-visible behavior changes MUST include automated tests or an explicit acceptance plan.
What does not belong in the constitution
I would not put these in the constitution unless they are truly non-negotiable:
- temporary implementation workarounds
- one-off feature decisions
- every coding style preference
- long copies of wiki or standards documents
If you already have rich .cursor/rules or team wiki guidance, use the constitution as the compressed governing layer, not as the place to duplicate every document you own.
ASCII view:
existing repo
-> code patterns
-> CI gates
-> docs and runbooks
-> agent rules
-> recurring review norms
-> draft constitution
-> human review
-> durable project principles
How I Would Keep the Constitution Updated for Future Enhancements
This is the second half of the problem, and the community discussions around evolving specs are very relevant here.
The current workflow clearly supports creating new feature specs. What is less polished today is the “close the loop” step after a feature changes the project in a way that should affect future work. Several community replies explicitly talk about asking the agent to fold those lessons back into the constitution or into broader project memory.
So my recommendation is simple:
- do not update the constitution for every feature
- do update it when a feature changes a project-wide invariant
Use a change trigger test
After a feature merges, ask one question:
Did we learn or formalize a rule that future specs should follow by default?
If the answer is no, leave the constitution alone.
If the answer is yes, update it.
Typical triggers:
- you introduced a new architecture boundary
- you changed deployment or migration safety rules
- you added a new compliance or security requirement
- you changed the default testing or review gate
- you standardized a new cross-repo coordination rule
Run a post-merge constitution review
I would add a short maintenance step after major features:
Review .specify/memory/constitution.md against the merged changes for
042-tenant-sso and the current codebase.
Only propose edits if there is a project-wide rule change that should affect
future specifications.
Do not copy feature details into the constitution.
For each proposed edit, explain:
- what changed
- why it should govern future work
- which future kinds of features it affects
This keeps the constitution from drifting into two bad extremes:
- stale and ignored
- over-edited and bloated
Review on events, not just on schedule
If your team moves quickly, I would treat constitution reviews as event-driven first:
- after architecture shifts
- after new platform capabilities land
- after security or compliance changes
- after multi-repo coordination rules change
You can also do a lightweight quarterly review, but I would not rely on calendar-only governance. The better trigger is: something changed that future feature planning must now assume.
Keep a clean separation between constitution and feature specs
The constitution is for enduring rules.
The feature spec is for the current delta.
That separation is what keeps Spec Kit usable in brownfield systems. If the constitution becomes a log of every enhancement, it stops being a constitution and starts becoming noisy project history.
ASCII decision flow:
feature merged
-> did a project-wide invariant change?
-> no: keep constitution as-is
-> yes: update constitution, review, and commit separately
My practical rule is: if a PM, tech lead, or agent should assume the rule on the next unrelated feature, it belongs in the constitution. If not, it belongs in the feature spec, plan, or implementation notes.
How I Would Reverse-Engineer One Existing Feature into Spec Kit Artifacts
This is a different problem from generating a constitution for the whole repo.
Here the goal is narrower:
- take one existing feature
- infer how it works today
- generate the Spec Kit artifacts you wish already existed
- use those artifacts as the baseline for future enhancement, refactoring, or onboarding
The important caveat is still the same: Spec Kit does not currently ship a first-party /reverse command. The community has explicitly asked for one, and there is also active feedback asking for a clearer way to refine existing specs in place. So the recommended path today is to use the normal Spec Kit flow deliberately for current-state documentation.
That workflow is my recommendation, not an official one-command capability.
Scope one feature, not an entire subsystem
If you point the agent at “the billing system” or “the auth platform,” the output will usually become broad and fuzzy.
Reverse-engineering works much better when you pick a feature with a clear product surface, such as:
- coupon application during checkout
- tenant SSO configuration and login
- invoice CSV export
- telemetry dashboard event filtering
That gives Cursor and Spec Kit something concrete to reconstruct.
The safest mental model
Treat the exercise as creating a baseline feature branch for documentation.
In other words, do not tell Spec Kit:
Create a new feature for telemetry dashboards
Tell it:
Document the existing telemetry dashboard feature exactly as it works today,
including user-visible behavior, system boundaries, contracts, and constraints.
Do not invent enhancements.
That one sentence changes the whole quality of the output.
The Cursor IDE workflow I recommend
1. Gather feature evidence in Cursor first
Before you run Spec Kit commands, use Cursor to identify the feature boundary:
- UI routes or pages
- entry points and menus
- API endpoints
- database tables or stored entities
- background jobs
- feature flags
- logs, analytics, or audit events
- tests that already describe behavior
- docs, runbooks, and screenshots if available
This gives you the evidence set the agent needs to reconstruct the feature instead of fantasizing about it.
2. Create a documentation-oriented feature branch
Because /speckit.specify is optimized for feature branches, I would lean into that instead of fighting it.
Use a branch name that makes the purpose explicit:
050-document-existing-telemetry-dashboard
or
050-baseline-tenant-sso-current-state
This makes it clear to reviewers that the branch is documenting current behavior, not necessarily changing behavior yet.
3. Use /speckit.specify to capture current behavior
In Cursor IDE, I would prompt like this:
/speckit.specify
Reverse-engineer the existing telemetry dashboard feature from this repository.
Describe the feature exactly as it works today.
Infer requirements, user journeys, system boundaries, and acceptance criteria
from the current code, tests, routes, API handlers, data access patterns,
background jobs, and existing docs.
Do not propose enhancements.
Do not redesign the feature.
Mark any uncertainty explicitly instead of guessing.
This gives you the spec.md baseline.
4. Use /speckit.clarify to close the obvious gaps
This is where you turn “probably” into “we know.”
Example:
/speckit.clarify
Clarify gaps in the reverse-engineered spec for the telemetry dashboard.
Focus on access control, data freshness, empty states, filtering behavior,
failure modes, and any places where tests or code paths disagree.
For brownfield work, this step matters even more than in greenfield work because existing behavior is often inconsistent across code, tests, and UI.
5. Use /speckit.plan to generate the technical artifact set
Now ask Spec Kit to translate the documented feature into architecture artifacts, not into a redesign:
/speckit.plan
Create an implementation plan that documents the current architecture of the
existing telemetry dashboard feature.
Infer the data model, component boundaries, integration points, API contracts,
background processing, test scenarios, and operational constraints from the
current implementation.
Prefer documenting the existing architecture over proposing a new one.
If the current implementation has inconsistencies, record them clearly.
Generate contracts only for externally observable APIs, events, or interfaces
that this feature currently depends on.
Based on the current Spec Kit docs, this is the step that should produce the main supporting artifact set around plan.md, such as:
specs/050-document-existing-telemetry-dashboard/
spec.md
plan.md
research.md
data-model.md
quickstart.md
contracts/
The exact supporting files can vary with the plan and templates, but this is the artifact family Spec Kit documents around /speckit.plan.
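If you want a quick sanity check that the artifact family landed, a small sketch like this works. The check_artifacts helper is mine; treat a missing entry as a prompt to investigate, since the exact file set varies with templates:

```shell
# Sketch: report which of the documented /speckit.plan artifact family
# exist in a feature folder. Missing files are flagged, not failed,
# because the supporting set legitimately varies with the plan.
check_artifacts() {
  dir="$1"
  for f in spec.md plan.md research.md data-model.md quickstart.md; do
    if [ -f "$dir/$f" ]; then echo "ok: $f"; else echo "missing: $f"; fi
  done
  [ -d "$dir/contracts" ] && echo "ok: contracts/" || echo "missing: contracts/"
}
```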
6. Use /speckit.tasks to create the execution or onboarding map
Then generate tasks.md:
/speckit.tasks
For reverse-engineering, I treat tasks.md in one of two ways:
- as the roadmap to validate the reconstructed understanding
- as the starting point for the next enhancement or refactor
That makes it useful for both engineers and PMs. It turns documentation into something executable.
7. Use /speckit.checklist to validate completeness
This part is easy to skip and worth doing anyway.
The official README describes /speckit.checklist as a way to generate custom quality checklists. For reverse-engineering, I would ask for a checklist like this:
/speckit.checklist
Generate a reverse-engineering completeness checklist for the telemetry
dashboard feature.
Check whether the artifacts cover:
- user-visible behavior
- routes and entry points
- permissions and access control
- API contracts
- data model and persistence
- background jobs or async flows
- failure modes
- tests and validation scenarios
- known inconsistencies or undocumented assumptions
That gives you a practical audit artifact instead of just a pile of markdown files.
8. Run /speckit.analyze before you trust the output
Finally:
/speckit.analyze
The README explicitly recommends this after task generation and before implementation. For brownfield reconstruction, it is even more useful because it helps catch contradictions between:
- the reconstructed spec
- the inferred plan
- the generated tasks
- the actual codebase reality you started from
The artifact set I would target
If the reverse-engineering pass went well, the feature folder should look roughly like this:
specs/050-document-existing-telemetry-dashboard/
spec.md
plan.md
research.md
data-model.md
quickstart.md
contracts/
tasks.md
checklist artifact from /speckit.checklist
That is enough to do three valuable things:
- onboard a new engineer to the feature
- review the feature with a PM using stable artifacts
- start a follow-up enhancement with much better context
Example project: a brownfield telemetry dashboard
If you want a real brownfield-style reference, the Spec Kit README links a community walkthrough based on NASA’s open-source Hermes ground support system, where a lightweight web telemetry dashboard is added to an existing Go codebase.
That makes a good mental model for Cursor IDE even if your stack is different.
Imagine you are not building the dashboard from scratch now. You are documenting the current version already in production so you can safely extend it.
I would use a prompt like this:
/speckit.specify
Reverse-engineer the existing web telemetry dashboard feature in this repo.
Document:
- which users can access it
- how they reach it
- which telemetry streams or events it shows
- how filtering, sorting, and refresh behavior work
- which backend endpoints, modules, and data sources it depends on
- what happens on empty, delayed, or failed data states
- what tests currently prove this behavior
Do not invent future capabilities.
Write the specification as the current-state source of truth for a later
enhancement project.
Then I would follow with:
/speckit.plan
Create the technical plan and supporting artifacts for the current telemetry
dashboard implementation exactly as it exists today. Infer contracts, data
models, validation scenarios, and known technical constraints from the code.
Record inconsistencies rather than smoothing them over.
That would give me a clean baseline for a next feature, such as:
- add per-tenant filters
- add export to CSV
- add alert thresholds
- add role-based dashboard visibility
My recommended branching model for this work
Because spec evolution is still rough in brownfield flows, I would usually split this into two branches:
050-document-existing-telemetry-dashboard
051-enhance-telemetry-dashboard
First branch:
- reverse-engineer the current state
- generate and review the artifacts
- merge the documentation baseline
Second branch:
- create the actual enhancement spec
- reuse the new baseline artifacts
- implement with much less ambiguity
That is not the fastest possible flow, but it is the cleanest one I have found so far if you care about maintainable documentation and future AI-assisted work.
Step 2: Make Cursor IDE the Planning Surface
Once Spec Kit is in the repo, Cursor IDE becomes the most comfortable place to run the workflow with real codebase context.
The practical stack
Use the repository like this:
.cursor/rules/ -> persistent repo rules and scoped guardrails
.cursor/commands/ -> Spec Kit slash commands installed by Spec Kit
AGENTS.md -> simple repo-level instructions for any agent surface
.specify/ -> scripts, templates, memory
specs/<feature-id>/ -> spec, plan, tasks, contracts, research
That gives you a nice separation of concerns:
- Spec Kit owns the workflow phases
- Cursor rules own persistent repo behavior
- AGENTS.md owns plain-English team guidance
A useful AGENTS.md posture
You do not need a giant instruction file. For most teams, a short one is better:
# Agent Workflow Notes
- Use GitHub Spec Kit for feature planning before implementation.
- Treat `specs/<feature-id>/spec.md`, `plan.md`, and `tasks.md` as the source of truth for active feature scope.
- Follow `.cursor/rules` for coding standards and review expectations.
- Do not edit unrelated files while implementing a Spec Kit task.
- Open questions and requirement conflicts should be written back into the active spec or plan, not left only in chat.
That is enough to keep engineers and PMs aligned on one important point: the spec should survive the chat session.
The first commands I would run in Cursor
For a real brownfield feature, I would go in this order:
/speckit.constitution
/speckit.specify
/speckit.clarify
/speckit.plan
/speckit.tasks
Then I would stop and review the generated plan before implementation.
That pause is especially important in existing repos, where the hidden cost is almost never “can the model write code?” It is “did we accidentally plan a change that cuts across too many shared surfaces at once?”
Step 3: Use Cursor Agent the Right Way
When people say “Cursor Agent,” they often mean a few different things:
- the local agent experience inside Cursor IDE
- the cursor-agent CLI surface
- Background Agents connected to GitHub
For adoption planning, it helps to think in terms of local planning versus remote execution.
Best use of local Cursor Agent
Use the local agent when you need:
- repo exploration
- ambiguity cleanup
- plan refinement
- code edits that need live IDE context
This is the strongest moment for engineers and tech leads, because the conversation can stay close to the actual codebase.
If you prefer the CLI or want to script part of the flow, Cursor’s CLI also supports non-interactive execution with -p:
cursor-agent -p "Read specs/042-tenant-sso/plan.md and summarize the API-only tasks. Do not write code."
That is useful for planning checks, CI-assisted summaries, or lightweight repo automation. The same CLI docs also note that Cursor reads AGENTS.md, CLAUDE.md, and .cursor/rules, so the agent can still inherit your repo guidance outside the IDE.
Best use of Cursor Background Agents
Use Background Agents after the feature is already decomposed into bounded tasks.
A good example is:
@cursor Implement task 3 from specs/042-tenant-sso/tasks.md.
Only touch the API repo. Follow the acceptance criteria in plan.md.
Open a PR with a summary of what changed and any unresolved questions.
This prompt is effective because it is:
- repo-scoped
- task-scoped
- spec-scoped
That is the right contract for async work.
What not to do
Do not ask a single background agent to “implement the whole feature across all repos” unless you have intentionally created a shared execution environment for that purpose and you understand the blast radius.
The docs are much clearer about repo access, PR creation, and dependent repos/submodules than they are about free-form orchestration across several independent repositories. So for most teams, the safer default is:
- one repo
- one task batch
- one agent
- one PR
That is slower than the fantasy version, but faster than cleaning up multi-repo drift.
How I Would Generate Multiple Specs at the Same Time
This is an easy place to get confused, because Spec Kit already supports one kind of parallelism very well and another kind only indirectly.
From the official SDD guide:
- /speckit.tasks can mark independent tasks inside one feature as parallel
- /speckit.specify is still fundamentally branch-oriented for one feature at a time
So if you are asking, “Can I have three active feature specs in the same repository at once?”, my answer is:
- yes, but
- not in the same working tree
- and not by pretending one branch can safely represent several active features
The current workflow is much easier to reason about if you treat one branch as one spec stream.
The current safe model
For multiple simultaneous specs in one repository, I recommend:
one repo
-> one launcher checkout on main
-> one feature branch per spec
-> one worktree per active spec branch
-> one PR per spec
That matches the current branch-centric behavior better than trying to keep several active spec directories moving inside a single checkout.
Why this matters
The community issues are useful here:
- Issue #370 shows there is still real confusion around when planning artifacts should live on main versus on a feature branch
- Issue #1191 shows users still want easier in-place editing and refinement instead of always branching again
So until Spec Kit grows a more explicit multi-spec operating model, the safest rule is still:
one active spec stream per branch
A practical three-spec workflow
Assume you want to explore three features in parallel:
- tenant SSO
- invoice CSV export
- audit log search
I would use one checkout as the launcher and keep it on main.
From there:
main checkout
-> /speckit.specify for feature A
-> branch A created
-> materialize worktree A
main checkout
-> /speckit.specify for feature B
-> branch B created
-> materialize worktree B
main checkout
-> /speckit.specify for feature C
-> branch C created
-> materialize worktree C
ASCII view:
repo-main/ <- launcher checkout, stays on main
worktrees/
042-tenant-sso/ <- spec branch + Cursor window A
043-invoice-csv-export/ <- spec branch + Cursor window B
044-audit-log-search/ <- spec branch + Cursor window C
The sequence I would actually use
# launcher checkout stays on main
git switch main
# create first spec
/speckit.specify Add tenant SSO for enterprise accounts
# switch launcher back to main
git switch main
# create second spec
/speckit.specify Add invoice CSV export for finance admins
# switch launcher back to main
git switch main
# create third spec
/speckit.specify Add audit log search and filtering
At that point, Spec Kit has created:
- one branch per feature
- one matching specs/<branch-name>/ directory per feature
Then I would materialize each active branch as a worktree.
When to use timestamp branch numbering
If several people or agents will be creating specs independently, the README’s --branch-numbering timestamp option becomes much more attractive.
I would use timestamp numbering when:
- multiple humans may create specs at the same time
- agents might create specs asynchronously
- branch-number collisions are more painful than human readability
I would stay with sequential numbering when:
- one person coordinates spec creation
- PMs and engineers review spec IDs frequently
- you care more about readable feature IDs than collision resistance
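To see why the trade-off exists, here is a sketch of the two ID styles. Spec Kit assigns IDs itself; this only illustrates that sequential IDs require scanning shared state (a collision point between concurrent writers) while timestamps do not:

```shell
# Sketch only: Spec Kit assigns feature IDs itself. These helpers just
# illustrate the collision trade-off between the two numbering styles.
next_sequential_id() {
  # scan existing specs/NNN-* folders, strip leading zeros, add one
  last=$(ls -d specs/[0-9]*-* 2>/dev/null \
    | sed 's|specs/||; s|-.*||; s|^0*||' | sort -n | tail -1)
  printf '%03d\n' $(( ${last:-0} + 1 ))   # shared state: two writers can collide
}
timestamp_id() {
  date +%Y%m%d%H%M%S                      # collision-resistant across writers
}
```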
What I would not do
I would avoid these patterns:
- creating multiple active specs and then doing all planning in one checkout on a constantly changing branch
- using one generic “planning” branch for unrelated features
- opening PRs that combine two or three spec branches just because the features look adjacent
Those shortcuts feel faster for about an hour and then get expensive.
How I Would Use Git Worktree with Spec Kit
Git worktree is the missing safety rail if you want several active specs in one repo without constant branch switching.
The official Git docs are clear: a repository can support multiple working trees, so you can check out more than one branch at a time. That lines up naturally with Spec Kit’s current branch-based feature flow.
The key recommendation
I would keep:
- one main checkout as the launcher
- one worktree per active spec branch
That gives you a clean relationship:
one spec
-> one branch
-> one worktree
-> one Cursor window
-> one PR
Why I would not let /speckit.specify fight worktree state
There is already a community bug report saying the current “create a new branch” behavior is fragile when used with git worktrees.
So the safest practical flow today is:
- Let Spec Kit create the feature branch in your launcher checkout.
- Then create a worktree for that branch.
- Do clarification, planning, tasking, and implementation from the worktree.
That is much less fragile than trying to improvise around branch creation inside an already-shuffled worktree layout.
The worktree pattern I recommend
Example:
# repo-main is your normal repository checkout
cd ~/repos/app
git switch main
# after /speckit.specify created branch 042-tenant-sso
git worktree add ../worktrees/042-tenant-sso 042-tenant-sso
# after /speckit.specify created branch 043-invoice-csv-export
git worktree add ../worktrees/043-invoice-csv-export 043-invoice-csv-export
# after /speckit.specify created branch 044-audit-log-search
git worktree add ../worktrees/044-audit-log-search 044-audit-log-search
Then open each one independently in Cursor:
cursor ../worktrees/042-tenant-sso
cursor ../worktrees/043-invoice-csv-export
cursor ../worktrees/044-audit-log-search
Where each command should run
Once the worktree exists, I would run the rest of the Spec Kit flow from inside that worktree:
worktree for 042-tenant-sso
-> /speckit.clarify
-> /speckit.plan
-> /speckit.tasks
-> /speckit.analyze
-> implementation / PR work
That matters because the active feature is branch-bound in the current workflow. Running the commands inside the worktree keeps the branch context stable and makes it much harder to accidentally write the next artifact into the wrong feature stream.
Use SPECIFY_FEATURE only as an escape hatch
If branch inference becomes awkward because of custom scripting, non-standard layout, or coordination shells, then:
export SPECIFY_FEATURE="042-tenant-sso"
But I would treat that as the fallback, not the default.
The cleaner default is still:
- enter the correct worktree
- let branch context identify the feature
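A cheap guard before running any /speckit.* command is to confirm the worktree's branch is the feature you think it is. This is an illustrative sketch (the function name is mine); in real use the second argument would come from `git branch --show-current`.

```shell
#!/usr/bin/env sh
# Illustrative guard: compare the expected feature ID with the branch the
# current worktree has checked out. In real use, call it as:
#   check_feature_branch 042-tenant-sso "$(git branch --show-current)"
check_feature_branch() {
  expected="$1"
  current="$2"
  if [ "$current" = "$expected" ]; then
    echo "OK: branch context matches $expected"
  else
    echo "MISMATCH: on '$current', expected '$expected'"
  fi
}

# Demo with fixed inputs so the sketch stays self-contained:
check_feature_branch 042-tenant-sso 042-tenant-sso
```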
A complete example: three concurrent specs
Here is the end-to-end pattern I would actually recommend for one repo:
# launcher checkout
cd ~/repos/app
git switch main
/speckit.specify Add tenant SSO for enterprise accounts
git worktree add ../worktrees/042-tenant-sso 042-tenant-sso
git switch main
/speckit.specify Add invoice CSV export for finance admins
git worktree add ../worktrees/043-invoice-csv-export 043-invoice-csv-export
git switch main
/speckit.specify Add audit log search and filtering
git worktree add ../worktrees/044-audit-log-search 044-audit-log-search
Then:
../worktrees/042-tenant-sso
-> finish clarify / plan / tasks / analyze
../worktrees/043-invoice-csv-export
-> finish clarify / plan / tasks / analyze
../worktrees/044-audit-log-search
-> finish clarify / plan / tasks / analyze
At that point, Cursor Background Agents or humans can take over one branch at a time without stepping on each other.
My rule of thumb
If the work is inside one feature, use Spec Kit’s task-level parallelism.
If the work is across several features, use Git branches and worktrees for feature-level isolation.
That distinction keeps the workflow sane.
The Recommended Polyrepo Pattern
This is the part most teams actually need.
Assume one feature touches:
- web-app
- api-service
- worker-service
Here is the operating model I recommend.
Pattern A: One canonical spec home
Pick one place where the authoritative feature artifacts live.
That can be:
- a platform repo
- an architecture repo
- the main customer-facing app repo
- a dedicated coordination repo
Then store the canonical artifacts there:
specs/042-tenant-sso/
spec.md
plan.md
tasks.md
research.md
contracts/
Everything else should reference that feature ID.
Pattern B: Shared feature identity across repos
Use the same feature ID everywhere:
042-tenant-sso
Use it in:
- the spec folder
- repo branches
- PR titles
- issue labels
- implementation checklists
Example:
specs/042-tenant-sso/
web-app: feat/042-tenant-sso
api-service: feat/042-tenant-sso
worker: feat/042-tenant-sso
This sounds minor, but it makes PM handoff, status tracking, and rollback discussions much easier.
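Propagating the shared ID is also easy to script. A minimal sketch, assuming the implementation repos live side by side under `~/repos` (the paths and repo names are illustrative); it prints the per-repo commands as a plan rather than executing them.

```shell
#!/usr/bin/env sh
# Print the per-repo branch command for one shared feature ID.
# Repo names and the ~/repos layout are hypothetical.
branch_plan() {
  feature="$1"; shift
  for repo in "$@"; do
    echo "cd ~/repos/$repo && git switch -c feat/$feature"
  done
}

branch_plan 042-tenant-sso web-app api-service worker-service
```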
If you are running Spec Kit from a non-standard coordination directory or without relying on the current git branch to infer the active feature, Spec Kit also documents a manual fallback:
export SPECIFY_FEATURE="042-tenant-sso"
That is especially useful when a platform repo or orchestration workspace is coordinating several implementation repos.
Pattern C: Repo-local overlays, not duplicate master specs
In a polyrepo system, duplicate specs become stale very fast.
So I would keep:
- one canonical spec.md
- one canonical plan.md
- repo-local notes only when needed
For example:
web-app/docs/042-tenant-sso-notes.md
api-service/docs/042-tenant-sso-notes.md
Those notes can capture implementation details, but they should never become competing sources of truth for requirements.
Pattern D: One agent and one PR per repo
This is the simplest safe coordination rule:
canonical spec
-> split by repository
-> one agent per repo
-> one PR per repo
-> merge in dependency order
ASCII flow:
feature request
-> canonical spec + plan
-> repo split
-> web-app PR
-> api-service PR
-> worker-service PR
-> integration check
-> release / rollout
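If your team uses the GitHub CLI, the one-PR-per-repo step can be expressed as a plan like this. `gh pr create` and its `--repo`, `--head`, and `--title` flags are real, but `your-org` and the repo names are placeholders, and the sketch prints the commands instead of running them.

```shell
#!/usr/bin/env sh
# Emit one `gh pr create` command per implementation repo, all tied
# together by the shared feature ID in the branch name and PR title.
pr_plan() {
  feature="$1"; shift
  for repo in "$@"; do
    echo "gh pr create --repo your-org/$repo --head feat/$feature --title '[$feature] $repo'"
  done
}

pr_plan 042-tenant-sso web-app api-service worker-service
```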
Pattern E: Use timestamp branch numbering only if distributed creation is real
Spec Kit’s README now documents --branch-numbering timestamp as useful for distributed teams to avoid numbering conflicts.
That is helpful if several teams initialize features independently in separate repos. But if your team has one canonical spec home, I would still prefer simple sequential IDs because they are easier to read in planning conversations.
A Practical Polyrepo Setup I Would Actually Use
If I had to implement this next week, I would do it like this.
1. Create or choose the spec home
Example:
platform-coordination/
specs/042-tenant-sso/
2. Add lightweight repo guidance in each implementation repo
Each repo gets:
- .cursor/rules
- AGENTS.md
- optionally Spec Kit scaffolding if the team wants local slash commands too
3. Keep the implementation contract explicit
Inside each implementation repo, the agent prompt should reference:
- the feature ID
- the canonical spec path or URL
- the repo-specific task subset
- the “do not touch” boundaries
Example:
Implement API-only tasks for 042-tenant-sso.
Source of truth is platform-coordination/specs/042-tenant-sso/plan.md.
Do not change the web-app or worker repos.
If an acceptance criterion is ambiguous, stop and write the question back to the spec.
4. Review cross-repo sequencing before merge
This is where PMs and engineers should align explicitly:
- which PR must merge first
- which API contract is frozen
- which repo can ship independently
- which rollout steps need coordination
If that is not written down, the best agent in the world will still create organizational confusion.
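Some of this sequencing can even be enforced mechanically. A minimal sketch of a merge-order gate (the function name is mine): it refuses to proceed until the dependency PR is merged. In real use the state would come from something like `gh pr view <pr-number> --json state -q .state`.

```shell
#!/usr/bin/env sh
# Illustrative merge-order gate. In real use, feed it the dependency PR's
# state from the GitHub CLI, e.g.:
#   can_merge "$(gh pr view <pr-number> --repo your-org/api-service --json state -q .state)"
can_merge() {
  if [ "$1" = "MERGED" ]; then
    echo "proceed: dependency is merged"
  else
    echo "wait: dependency PR is $1"
  fi
}

# Demo with fixed inputs:
can_merge MERGED
can_merge OPEN
```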
Common Practices I Recommend
These are not all official rules. They are the practices I would normalize for a team using this stack seriously.
- Treat Spec Kit initialization as repo infrastructure, not as part of a feature diff.
- Keep the constitution short and opinionated. It should express your real non-negotiables, not generic engineering slogans.
- Make spec.md, plan.md, and tasks.md reviewable artifacts. Do not bury planning decisions in chat alone.
- Use Cursor IDE for clarification and decomposition. Use Background Agents for bounded execution after scope is stable.
- Prefer one canonical spec set per feature in polyrepo environments. Duplicate specs only when you want guaranteed drift.
- Use one feature ID across every repo and PR.
- Keep agent ownership narrow. One repo, one task slice, one PR is a very good default.
- Write open questions back into the spec or plan. If the answer matters later, chat is not enough.
- Back up or restore the constitution during upgrades. This one is not optional if you customized it.
- Check .cursor/commands after initialization. The docs and README are converging, but a quick verification prevents confusion.
Common Questions Teams Will Ask
“Can I really add Spec Kit to a non-empty repository?”
Yes. The docs explicitly support specify init --here, and the upgrade guide notes that the non-empty directory warning is expected in existing codebases.
“Will it overwrite my source code?”
The docs say no. The overwrite risk applies to Spec Kit-managed infrastructure such as command files, templates, scripts, and memory files.
“Do I need to run specify every time I open the project?”
No. The upgrade guide explicitly says specify init is for initial setup and upgrades, not for every work session.
“Is Cursor officially supported, or is this a workaround?”
As of the current README, Cursor is explicitly supported and cursor-agent is listed in the CLI reference. The source audit did reveal a small docs lag across pages, so I would still verify the generated files locally after setup.
“Should every repo in a polyrepo have its own full spec?”
Usually no. I would keep one canonical feature spec and use repo-local notes only where implementation details genuinely diverge.
“What if the spec changes after implementation starts?”
This is one of the most common community pain points. My recommendation is simple:
- before coding starts, the spec can evolve freely
- once repo work is in flight, treat spec changes as change-control events
- update the canonical spec first, then update the affected repo tasks
That keeps PMs, reviewers, and agents aligned on the same acceptance criteria.
“Can I coordinate several repos from one place anyway?”
Yes, but be deliberate. The community discussions show people are experimenting with shared config and a single launch repo that reads other repos as context. That can work. But I would reserve it for mature teams that already have a strong shared conventions layer. For most organizations, one canonical spec home plus repo-scoped execution is simpler and safer.
My Recommended Operating Model for Engineers and PMs
If you want one opinionated answer, here it is:
- Add Spec Kit to one canonical planning repo or platform repo.
- Use Cursor IDE to generate and refine the constitution, spec, clarifications, plan, and tasks.
- Review those artifacts before implementation starts.
- Split the work by repository.
- Run one Cursor agent workflow per repo, with explicit boundaries.
- Keep PRs small and linked by the same feature ID.
- Merge in dependency order and update the canonical spec if acceptance criteria moved.
ASCII view:
PM / feature request
-> Cursor + Spec Kit planning
-> canonical spec / plan / tasks
-> repo-specific implementation prompts
-> one PR per repo
-> coordinated merge + rollout
That model is boring in the best sense. It gives PMs something to review, gives engineers something stable to build against, and gives AI agents narrower, safer work contracts.
The Real Gap to Watch
The gap is not “can these tools work together?” They can.
The real gap is that polyrepo governance is still mostly a team design problem:
- where the canonical spec lives
- who is allowed to change it after coding starts
- how shared instructions flow across repos
- when async agents are allowed to touch more than one repo
If you answer those questions early, GitHub Spec Kit and Cursor fit together surprisingly well. If you ignore them, the tooling will only help you create cleaner confusion.
Source List
Official sources
- GitHub Spec Kit, README
- GitHub Spec Kit, Specification-Driven Development guide
- GitHub Spec Kit Docs, Installation Guide
- GitHub Spec Kit Docs, Quick Start Guide
- GitHub Spec Kit Docs, Upgrade Guide
- Cursor Docs, Rules and AGENTS.md
- Cursor Docs, Commands
- Cursor Docs, CLI Usage
- Cursor Docs, GitHub Integration
- Git, git-worktree documentation
Community sources I used to understand real-world friction
- GitHub Spec Kit Issue #264, Reverse engineering command for existing codebases
- GitHub Spec Kit Issue #370, Branching strategy guidance gap
- GitHub Spec Kit Issue #518, /specify new branch problems with git worktrees
- GitHub Spec Kit Issue #1191, Spec-driven editing flow: updating or refining existing specs
- GitHub Spec Kit Discussion #152, Evolving specs
- GitHub Spec Kit Discussion #772, Sharing config across multiple repos
- Community demo repo, spec-kit-go-brownfield-demo