
How to Reverse-Engineer an Existing Feature into GitHub Spec Kit Artifacts

A practical brownfield workflow for reconstructing one existing feature into spec.md, plan.md, tasks.md, and supporting Spec Kit artifacts without pretending you are starting from scratch.

Updated Apr 13, 2026

Most teams do not need help generating specs for a brand-new feature.

They need help understanding an existing feature well enough to change it safely.

That is the brownfield problem I care about here. You already have code. You already have routes, handlers, tests, and weird edge cases. What you do not have is a clean spec.md, plan.md, tasks.md, and supporting artifact set that tells a new engineer, a PM, or an AI agent how the feature actually works today.

This article is the standalone playbook I wish I had the first time I tried to do that with GitHub Spec Kit and Cursor.

It is also the companion to my broader article on integrating GitHub Spec Kit into existing repositories with Cursor. That earlier piece covers the larger brownfield setup. This one zooms in on a narrower and more practical question:

If I pick one existing feature in a real repo, what do I do first, what do I do second, and what artifacts should I expect at the end?

One important framing note up front: as of April 13, 2026, the current GitHub Spec Kit README and docs list core commands like /speckit.constitution, /speckit.specify, /speckit.plan, and /speckit.tasks, plus optional commands like /speckit.clarify, /speckit.analyze, and /speckit.checklist. They do not document a dedicated first-party /reverse command for "turn this existing feature into artifacts."

So what follows is not a hidden official workflow. It is my recommended brownfield operating model for using the normal Spec Kit stages deliberately for current-state documentation.

The source-backed parts of this article are:

  • installing Spec Kit in the current directory
  • using --ai cursor-agent
  • establishing a constitution early
  • the existence and purpose of the main /speckit.* commands

The brownfield reverse-engineering sequence itself is my synthesis on top of those building blocks.

TL;DR

  • Reverse-engineering into Spec Kit works best when you pick one bounded feature, not an entire subsystem.
  • The right mental model is not “generate a new feature.” It is “document the current feature exactly as it works today.”
  • The best sequence is:
    1. add Spec Kit to the repo if it is not there yet
    2. draft or refresh the brownfield constitution from repo evidence
    3. scope one feature
    4. gather evidence in Cursor
    5. create a documentation-only branch
    6. run /speckit.specify
    7. run /speckit.clarify
    8. run /speckit.plan
    9. run /speckit.tasks, /speckit.checklist, and /speckit.analyze
  • For brownfield work, the most important instruction is: do not invent improvements while reconstructing the current state.
  • My preferred end state is two branches:
    • one branch that documents the current feature
    • one later branch that enhances it

What You Will Learn Here

  • How to add GitHub Spec Kit to an existing repository if the repo does not have it yet
  • How to generate a brownfield constitution from current repo evidence
  • How to choose the right kind of feature for reverse-engineering
  • What evidence to gather before running any Spec Kit command
  • The exact order I would use for the Spec Kit commands
  • How I would word the prompts to keep the output grounded in the existing codebase
  • What artifact set I would expect at the end
  • How this workflow changes for a small feature versus a larger, messier system

The Exact Order I Would Use

If you hand this workflow to an engineer on your team, I want the answer to “what do I do first?” to be obvious.

So here is the sequence I would actually use:

  1. If needed, add GitHub Spec Kit to the existing repository.
  2. Draft or refresh the brownfield constitution from real repo evidence.
  3. Pick one feature with a clear product surface.
  4. Gather the evidence set in Cursor before touching feature-level Spec Kit commands.
  5. Create a documentation-oriented feature branch.
  6. Use /speckit.specify to capture current behavior.
  7. Use /speckit.clarify to close the obvious gaps.
  8. Use /speckit.plan to generate the technical artifact set.
  9. Use /speckit.tasks, /speckit.checklist, and /speckit.analyze to validate the reconstruction.
  10. Split documentation and enhancement into separate branches if you plan to change the feature next.

That is the high-level story. The rest of the article explains each step in the order I would do it.

Optional Step 0: Add Spec Kit to the Repo If It Is Not There Yet

If the repository does not already have GitHub Spec Kit installed, I would do that first so the rest of the workflow has a real home in the repo.

If you want a persistent installation of the specify CLI first:

uv tool install specify-cli --from git+https://github.com/github/spec-kit.git@vX.Y.Z

Then, inside the existing repository:

specify init --here --ai cursor-agent

If you prefer a one-shot bootstrap without installing the CLI globally:

uvx --from git+https://github.com/github/spec-kit.git@vX.Y.Z specify init --here --ai cursor-agent

That matters because I do not want the reverse-engineering flow to live only in chat. I want the repo to have the actual Spec Kit structure, commands, templates, and artifact folders that future engineers can reuse.

If you already have Spec Kit in the repo, then treat this as a quick prerequisite check and move on.
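That prerequisite check can be done from the shell. This is a sketch, not an official check: the `.specify/` path is my assumption based on what current Spec Kit releases create in the repo, and it may change between versions.

```shell
# Rough prerequisite check: Spec Kit-initialized repos currently keep
# templates and scripts under .specify/. Treat the path as a heuristic,
# not a contract; newer releases may lay things out differently.
if [ -d .specify ]; then
  speckit_status="present"
  echo "Spec Kit structure found; skip init and move on."
else
  speckit_status="absent"
  echo "No .specify/ directory; run: specify init --here --ai cursor-agent"
fi
```

If the heuristic gives a surprising answer, trust the repo contents over the script and inspect the directory yourself.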

Step 1: Draft a Brownfield Constitution from Evidence

Before I reverse-engineer one feature, I want the repo-level guardrails in place.

This is the step that answers the question Fayaz raised very directly: should you build constitutions first or specify a feature first?

For a brownfield repository, my answer is:

  • if the repo has no useful constitution yet, draft the constitution first
  • if the repo already has a strong constitution, review it briefly and then move to the feature

The reason is simple. I want the feature-level artifacts to be produced inside the right project-wide constraints, not in a vacuum.

Using /speckit.constitution early is source-backed by the quick start. Using repo evidence to draft a brownfield-specific constitution is my recommended adaptation for existing repositories.

I would use this prompt:

/speckit.constitution
Analyze this existing repository before drafting the constitution.

Use the current codebase, .cursor/rules, AGENTS.md, CI workflows, tests,
deployment files, and repo docs as evidence.

Create 6-10 durable principles only.
Separate project-wide invariants from feature-specific behavior.
Use MUST/SHOULD language.
For each principle, include:
- the rule
- why it exists in this project
- what kinds of future specs it should influence

Do not include temporary implementation details, current library preferences
unless they are a real team constraint, or aspirational best practices that are
not already supported by the repo.

That gives you a much cleaner starting point for everything that follows.

I do not want the constitution to become a pile of generic advice. I want it to reflect the repo you actually have:

  • the checks it already enforces
  • the architectural boundaries it already respects
  • the operational rules people already care about
  • the review norms that keep coming up

Once that exists, feature-level reverse-engineering has a better frame.

Step 2: Pick One Feature, Not One Giant Subsystem

This workflow breaks down fast if the starting scope is vague.

If you tell the agent to reconstruct:

  • “the billing system”
  • “the auth platform”
  • “the analytics architecture”

the output will usually become broad, fuzzy, and too abstract to trust.

Reverse-engineering works much better when the feature has a clear product surface, such as:

  • coupon application during checkout
  • tenant SSO configuration and login
  • invoice CSV export
  • telemetry dashboard event filtering

I like features like these because they give me something concrete to trace:

  • where a user enters
  • what actions they can take
  • which APIs or jobs are involved
  • what success and failure look like

That is the first feature-level judgment call, and it matters more than the specific prompt wording later.

Step 3: Gather Evidence in Cursor Before You Run Feature-Level Spec Kit Commands

This is the step I think people skip too often.

Before I run /speckit.specify, I want Cursor to help me identify the feature boundary and the evidence that describes current behavior.

For a brownfield feature, I would gather:

  • UI routes or pages
  • menus, buttons, and entry points
  • API endpoints and request/response shapes
  • domain entities, database tables, or stored records
  • background jobs, schedulers, or webhooks
  • feature flags
  • logging, analytics, or audit events
  • tests that already describe behavior
  • docs, screenshots, and runbooks if they exist

The goal here is simple: I want to give the agent an evidence set so it reconstructs the feature from the repository instead of smoothing over the gaps with a nice-sounding explanation.

If you only remember one principle from this article, remember this one:

Spec Kit is much more useful for reverse-engineering when the agent is constrained by evidence instead of optimism.
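As a starting point, the evidence sweep can be as crude as a grep pass that builds a small index file to paste into Cursor. Everything here is an assumption about your layout: the `src`, `tests`, `config`, and `jobs` paths, and the `invoice|csv` pattern, should all be swapped for your own repo's conventions.

```shell
# Hypothetical evidence sweep for one feature. Adjust the pattern and
# the searched paths to match your repository before trusting the output.
PATTERN='invoice|csv'
OUT=evidence-invoice-csv-export.md

{
  echo "## Routes, handlers, and entry points"
  grep -rilE "$PATTERN" src 2>/dev/null

  echo "## Tests that already describe behavior"
  grep -rilE "$PATTERN" tests 2>/dev/null

  echo "## Background jobs, flags, and config"
  grep -rilE "$PATTERN" config jobs 2>/dev/null
} > "$OUT"

echo "Evidence index written to $OUT"
```

A plain file-name index is deliberately low-tech: it gives you something reviewable before any Spec Kit command runs, and it is easy to prune the false positives by hand.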

Step 4: Create a Documentation-Only Branch

Because /speckit.specify is designed around feature branches, I would lean into that instead of fighting it.

I prefer branch names that make the purpose explicit:

050-document-existing-telemetry-dashboard

or:

050-baseline-invoice-csv-export-current-state

That naming does two useful things:

  1. It tells reviewers this branch is documenting current behavior, not shipping a redesign.
  2. It keeps the future enhancement branch separate if you decide to improve the feature later.

For brownfield teams, that separation matters. Documentation cleanup and behavior change are much easier to review when they are not tangled together in the same diff.
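In git terms the setup is a single command. The throwaway demo repo below just makes the snippet safe to try anywhere; the branch name, like everything in this example, is illustrative rather than a Spec Kit requirement.

```shell
# Throwaway repo so this is safe to run outside your real project;
# in an actual repo, only the checkout line is needed.
git init -q branch-naming-demo && cd branch-naming-demo

# Create the documentation-only branch with an intent-revealing name.
git checkout -qb 050-baseline-invoice-csv-export-current-state

# Confirm you are on it before running any /speckit.* commands in Cursor.
git branch --show-current
```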

Step 5: Use /speckit.specify to Capture Current Behavior

This is the most important prompt in the flow because it sets the tone for everything that follows.

The wrong prompt is something like:

Create a new feature for invoice CSV export

The safer prompt is:

/speckit.specify
Reverse-engineer the existing invoice CSV export feature from this repository.

Describe the feature exactly as it works today.
Infer requirements, user journeys, system boundaries, and acceptance criteria
from the current code, tests, routes, API handlers, data access patterns,
background jobs, and existing docs.

Do not propose enhancements.
Do not redesign the feature.
Mark uncertainty explicitly instead of guessing.

That one change in framing is what turns Spec Kit from “idea generator” into “current-state documentation tool.”

What I want out of this stage is a spec.md that captures:

  • who can use the feature
  • how they reach it
  • what they can do
  • what the system does in response
  • what constraints and failure modes already exist

Not what the feature should become. What it is.

Step 6: Use /speckit.clarify to Turn “Probably” Into “We Know”

Once the first spec exists, I assume there are still gaps.

That is normal in brownfield work because real behavior is often split across:

  • code
  • tests
  • UI assumptions
  • undocumented operational constraints

So the next step is not implementation. The next step is clarification.

I would use a prompt like this:

/speckit.clarify
Clarify gaps in the reverse-engineered spec for the invoice CSV export feature.
Focus on permissions, export limits, file format details, failure modes,
retry behavior, and any places where tests or code paths disagree.

This is where the workflow becomes useful for teams instead of just useful for one person.

A good clarification pass surfaces questions like:

  • who is allowed to export
  • whether empty exports are allowed
  • whether the CSV schema is stable
  • how failures are presented in the UI
  • which edge cases are actually covered by tests

In greenfield work, clarification helps refine the future.

In brownfield work, clarification helps stop your documentation from lying.

Step 7: Use /speckit.plan to Generate the Technical Artifact Set

After I am satisfied that the spec describes the current feature well enough, I use /speckit.plan to infer the technical architecture around it.

My prompt would look like this:

/speckit.plan
Create an implementation plan that documents the current architecture of the
existing invoice CSV export feature.

Infer the data model, component boundaries, integration points, API contracts,
background processing, test scenarios, and operational constraints from the
current implementation.

Prefer documenting the existing architecture over proposing a new one.
If the current implementation has inconsistencies, record them clearly.
Generate contracts only for externally observable APIs, events, or interfaces
that this feature currently depends on.

This step is where I expect Spec Kit to generate the core artifact family around the feature, typically something like:

specs/050-baseline-invoice-csv-export-current-state/
  spec.md
  plan.md
  research.md
  data-model.md
  quickstart.md
  contracts/

The exact shape can vary based on templates and plan output, but that is the right target.

At this point I am no longer just reconstructing a user story. I am creating a durable feature folder that other people can review, learn from, and build on.

Step 8: Generate the Execution and Validation Artifacts

Once the spec and plan are in place, I keep going in sequence.

First:

/speckit.tasks

For reverse-engineering, I treat tasks.md in one of two ways:

  • as the roadmap for validating whether the reconstructed understanding is complete
  • as the starting point for the next enhancement or refactor

Then I generate a completeness checklist:

/speckit.checklist
Generate a reverse-engineering completeness checklist for the invoice CSV
export feature.

Check whether the artifacts cover:
- user-visible behavior
- routes and entry points
- permissions and access control
- API contracts
- data model and persistence
- background jobs or async flows
- failure modes
- tests and validation scenarios
- known inconsistencies or undocumented assumptions

The official quick start shows /speckit.checklist earlier, during spec validation. For brownfield reverse-engineering, I also like using it here as a late-stage completeness audit once the full artifact set exists.

Finally, I run:

/speckit.analyze

This is the last trust-building step.

For greenfield work, analysis helps catch contradictions between the spec, plan, and tasks.

For brownfield work, it matters even more because it helps expose contradictions between:

  • the reconstructed spec
  • the inferred plan
  • the generated tasks
  • the actual codebase reality you started from

That is what makes the result usable instead of merely polished.

The Artifact Set I Would Want at the End

If the reverse-engineering pass went well, I want the feature folder to look roughly like this:

specs/050-baseline-invoice-csv-export-current-state/
  spec.md
  plan.md
  research.md
  data-model.md
  quickstart.md
  contracts/
  tasks.md
  checklist artifact from /speckit.checklist

That is enough to do three valuable things:

  • onboard a new engineer to the feature faster
  • review the feature with a PM using stable artifacts instead of chat history
  • start the next enhancement with much less ambiguity

That last point is the real payoff.

The output should not be “we made some markdown.”

The output should be “we now have a baseline source of truth for future change.”
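Before calling the baseline done, a small mechanical check helps: does the feature folder actually contain that file set? The directory name and expected files below mirror the illustrative layout earlier in this article; adjust both if your templates produce a different shape.

```shell
# Sanity-check the reverse-engineered feature folder. The directory
# name and the expected file list are illustrative; match them to
# what your Spec Kit templates actually generate.
FEATURE_DIR="specs/050-baseline-invoice-csv-export-current-state"

missing=""
for artifact in spec.md plan.md research.md data-model.md quickstart.md tasks.md; do
  [ -f "$FEATURE_DIR/$artifact" ] || missing="$missing $artifact"
done
[ -d "$FEATURE_DIR/contracts" ] || missing="$missing contracts/"

if [ -n "$missing" ]; then
  echo "Missing artifacts:$missing"
else
  echo "Artifact set looks complete."
fi
```

A missing file is not automatically a failure; it is a prompt to decide whether that artifact is genuinely not needed for this feature or whether a stage was skipped.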

A Simple Running Example: Invoice CSV Export

I am using invoice CSV export as the running example because it is the kind of brownfield feature that is small enough to scope and rich enough to be useful:

  • it has a clear entry point
  • it usually has permission rules
  • it has observable output
  • it may involve queueing or async processing
  • it often has edge cases around empty results, formatting, and retries

If I were reconstructing that feature, I would want the resulting artifacts to answer questions like:

  • which users can trigger the export
  • whether the export runs synchronously or asynchronously
  • which filters affect the exported rows
  • whether the CSV schema is fixed or versioned
  • what happens if the export is empty, slow, or fails
  • which tests prove the current behavior

That is the kind of feature where reverse-engineering into Spec Kit can immediately help onboarding, support, and future enhancement planning.

How This Changes for a Small Feature Versus a Large System

The workflow stays the same. The risk changes with scope.

For a small feature:

  • the evidence set is easier to gather
  • the current behavior is easier to verify
  • the generated artifacts are easier to review end-to-end

For a larger or older system:

  • the boundaries are less obvious
  • tests may disagree with the UI
  • operational knowledge may live in people rather than in docs
  • hidden coupling makes “current behavior” harder to state cleanly

That is why I would start with one smaller feature even if the long-term goal is to document a much bigger area.

The point is not to prove that Spec Kit can describe your whole platform on day one.

The point is to establish a repeatable reverse-engineering workflow your team can trust.

If I know we are going to change the feature after documenting it, I would split the work into two branches:

050-baseline-invoice-csv-export-current-state
051-enhance-invoice-csv-export

First branch:

  • reverse-engineer the current state
  • generate and review the artifact set
  • merge the documentation baseline

Second branch:

  • create the actual enhancement spec
  • reuse the new baseline artifacts
  • implement with much less ambiguity

That is not the fastest possible flow.

It is the cleanest one I have found so far if you care about maintainable documentation, better onboarding, and future AI-assisted work that starts from something durable.
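Sketched end to end in a throwaway repo, the sequencing looks like this. The commit messages, branch names, and empty commits are all placeholders for the real documentation and enhancement work.

```shell
# Demo of the two-branch sequencing; in a real repo you would branch
# from your actual main, and the commits would carry real artifacts.
git init -q two-branch-demo && cd two-branch-demo
git config user.name demo
git config user.email demo@example.com
git commit -q --allow-empty -m "repo baseline"
git branch -M main

# Branch 1: document the current state, review, merge.
git checkout -qb 050-baseline-invoice-csv-export-current-state
git commit -q --allow-empty -m "docs: reverse-engineered baseline artifacts"
git checkout -q main
git merge -q --no-ff -m "merge documentation baseline" \
  050-baseline-invoice-csv-export-current-state

# Branch 2: the enhancement starts from a main that already
# contains the baseline artifacts.
git checkout -qb 051-enhance-invoice-csv-export
git log --oneline
```

The `--no-ff` merge is a deliberate choice here: it leaves a visible merge commit marking the moment the documentation baseline landed, which makes the history easier to read later.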

What I Would Tell an Engineering Team to Do First

If I were teaching this to a team tomorrow, I would not start with the whole repo.

I would tell them:

  1. add Spec Kit to the repo if it is not there yet
  2. generate or refresh the brownfield constitution from evidence
  3. pick one bounded feature
  4. gather the evidence set
  5. create a documentation-only branch
  6. run the feature-level Spec Kit flow in order
  7. review the artifacts with both engineering and product
  8. only then start the next enhancement

That sequence is the real lesson.

Once the team trusts it on one feature, the workflow becomes much easier to repeat.

The Product Gap Still Exists

I want to be explicit about this because it matters.

The community has already signaled that there is demand for:

  • a clearer reverse-engineering workflow
  • a better in-place spec refinement workflow
  • smoother brownfield evolution after the first artifact set exists

So I do not present this as “the official magical solution.”

I present it as the cleanest operating model I have found with the current building blocks.

And until GitHub Spec Kit ships a first-class reverse-engineering path, that is enough for this to be genuinely useful.

Final Takeaway

If you remember the whole article as one sentence, I would make it this one:

Do not ask Spec Kit to imagine a better version of your existing feature before you have asked it to document the feature you already have.

For brownfield teams, that order changes everything.

Source List

Validated against these primary sources on April 13, 2026: