
Before You Copy Legacy Code: A Spec-to-Spec Migration Workflow for Modernizing One Feature

An exploratory, source-backed article on reverse-engineering one feature into current-state specs in the old repo, then using those artifacts to drive a safer implementation in a new repo and a new stack.

18 min read · Updated Apr 14, 2026

Most feature migrations start with a tempting idea: copy the old code, adapt it just enough, and keep moving.

That instinct is understandable. It also tends to drag the old app’s coupling, naming, assumptions, and accidental complexity into the new repo.

So here is the hypothesis I want to test in this article:

If a feature is moving from an old app into a new repository with a different stack, maybe the thing we should migrate first is not the code. Maybe we should migrate the specification.

More concretely:

  1. reverse-engineer the old feature into current-state artifacts
  2. turn those artifacts into a reviewable behavior contract
  3. generate new Spec Kit artifacts in the new repo from that contract
  4. implement thin slices with parity checks instead of code copy-paste

On April 14, 2026, I rechecked this idea against the current GitHub Spec Kit docs and README, SEI guidance on architecture reconstruction and modernization, Martin Fowler and Thoughtworks material on incremental legacy displacement, and Gojko Adzic’s writing on Specification by Example.

One important note up front: the building blocks are source-backed, but the exact end-to-end “spec-to-spec migration” workflow is my synthesis. I do not want to pretend GitHub Spec Kit already ships a magical one-command version of this.

This article is also a companion to my earlier post on reverse-engineering existing features into GitHub Spec Kit artifacts. That earlier piece focused on documenting one existing feature in place. This one asks a different question:

Can those artifacts become the bridge from a legacy app to a new app?

TL;DR

  • Yes, I think spec-to-spec migration is a strong default for moving one bounded feature from an old repo to a new repo with a different stack.
  • The winning idea is to migrate behavior, contracts, examples, and invariants, not controllers, helpers, and legacy abstractions.
  • The strongest source-backed parts of this argument are:
    • reverse-engineering existing systems into higher-level representations is a standard modernization move
    • incremental displacement beats big-bang replacement in many risky migrations
    • outcome parity matters more than blindly copying every legacy feature detail
    • collaborative examples are often more valuable than the tests generated from them
    • GitHub Spec Kit gives you the spec, clarify, plan, and task pipeline, but not a dedicated first-party reverse-engineering command
  • The workflow I would use is:
    1. choose one bounded feature
    2. reverse-engineer the old repo into current-state evidence and specs
    3. distill a parity contract
    4. create target-state specs in the new repo
    5. implement one thin slice
    6. dual-run or compare outcomes
    7. cut over gradually
  • This approach is strongest when the new repo has a different architecture or stack, and weakest when the move is basically a small refactor inside the same technical model.

What You Will Learn Here

  • Why I think spec-to-spec migration is a better framing than code migration for many brownfield feature moves
  • What the current sources actually prove, and what still depends on team judgment
  • How to define the “as-is” contract of an old feature without redesigning it too early
  • How to convert that contract into “to-be” Spec Kit artifacts in a new repository
  • How PMs and engineers can review the same migration using specs instead of only reading diff-heavy code
  • Where parity checks, seams, and dual-run fit into the delivery loop
  • When you should still just port the code or redesign the feature from scratch

Suggested Reading Flow

  • If you are a PM or engineering manager, read: The Hypothesis, Why This Idea Is Plausible, When This Beats Copying Code, A Simple Decision Table, and The Gaps to Watch.
  • If you are an engineer, read the whole article in order, but spend most of your time on The Workflow I Would Actually Use, Prompt Examples, and The Validation Loop.
  • If you are leading a migration across repos, do not skip the middle. The important part is the handoff between the old repo’s current-state artifacts and the new repo’s target-state plan.

The Hypothesis

I want to define the idea clearly, because “spec-to-spec migration” is not an official GitHub Spec Kit term.

Here is what I mean by it:

  • In the old repo, reverse-engineer one existing feature into a trustworthy current-state description.
  • Treat that description as a behavior contract, not as a redesign brief.
  • In the new repo, write a new feature spec and plan that preserves the required outcomes but fits the new architecture, conventions, and stack.
  • Validate parity with examples, tests, and side-effect checks instead of assuming copied code equals preserved behavior.

That is fundamentally different from:

  • copying handlers and utilities into the new repo
  • translating one framework into another file by file
  • preserving old module boundaries that only existed because of the old platform

In other words:

code migration
  -> moves implementation

spec-to-spec migration
  -> moves intent, behavior, contracts, examples, and rollout rules

That distinction matters because most migrations fail less from missing code than from:

  • misunderstood edge cases
  • hidden dependencies
  • accidental coupling
  • unclear success criteria
  • product behavior that was never written down cleanly

Why This Idea Is Plausible

The strongest version of this article is not “I like specs.”

The strongest version is: the underlying disciplines already exist, and they fit together surprisingly well.

1. Reverse-engineering into higher-level representations is a normal modernization move

The SEI’s architecture reconstruction guidance is explicit: reconstruction obtains a usable architectural representation of a system from its implemented legacy form. That matters because the first step in a safe migration is often abstraction upward, not direct movement sideways.

That is already a strong vote in favor of:

  • extracting current behavior
  • naming boundaries
  • documenting dependencies
  • making legacy structure reviewable before changing it

Which is exactly what a current-state spec is trying to do at the feature level.

2. Modernization works better when you move in thin slices

Thoughtworks’ legacy displacement material emphasizes thin slices, early risk reduction, frequent delivery, and parallel comparison during transition. That is close to the opposite of “copy everything and hope it still works.”

If we accept that modernization should happen slice by slice, then the migration unit should not be “all code touching this area.” It should be:

  • one user-visible capability
  • one reviewable contract
  • one migration slice

That makes specs a much better unit of coordination than folders copied across frameworks.

3. Outcome parity is usually more valuable than naive feature parity

One of my favorite ideas in recent modernization writing is the difference between preserving the right outcomes and blindly cloning the old surface area.

That is a perfect fit for this hypothesis.

When we migrate spec to spec, we can preserve:

  • the business rule
  • the required input/output behavior
  • the audit or side effects that matter
  • the rollout and failure expectations

Without preserving:

  • old class shapes
  • outdated route layouts
  • incidental data access patterns
  • framework-specific ceremony

That is how you avoid building a modern-looking app with old-app internals glued underneath.

4. Seams and probes give us a practical way to observe legacy behavior

Martin Fowler’s more recent writing on legacy seams is helpful here because it reframes seams as more than test helpers.

They are also useful for:

  • observing legacy behavior
  • capturing results for analysis
  • redirecting behavior gradually into a new implementation

That gives spec-to-spec migration a very practical validation model:

  1. introduce enough seams or observability to measure the old feature
  2. turn those observations into examples and contracts
  3. compare the new implementation against those expectations
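Step 1 of that model can be sketched in code. This is a hedged illustration, not an API from Fowler's writing or any framework: `withProbe`, `legacyRowCount`, and the `Observation` shape are hypothetical names for the idea of wrapping a legacy call so every invocation records its input and output without changing behavior.

```typescript
// Hedged sketch of an observation seam: wrap a legacy call so every
// invocation records its input and output for later parity analysis,
// without changing behavior. All names here are hypothetical.

interface Observation<I, O> {
  input: I;
  output: O;
  at: string; // ISO timestamp of the observation
}

function withProbe<I, O>(
  fn: (input: I) => O,
  log: Observation<I, O>[]
): (input: I) => O {
  return (input: I) => {
    const output = fn(input);
    // Record what the legacy path actually did, then return it unchanged.
    log.push({ input, output, at: new Date().toISOString() });
    return output;
  };
}

// Usage: wrap a stand-in legacy function and capture one observation.
const observations: Observation<string, number>[] = [];
const legacyRowCount = (tenantId: string): number => tenantId.length;
const probedRowCount = withProbe(legacyRowCount, observations);

probedRowCount("tenant-42");
console.log(observations.length); // 1
```

The captured observations are exactly the raw material the next two steps consume: they become examples in the current-state spec, and later the expectations the new implementation is compared against.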

5. Examples often matter more than the test tooling around them

Gojko Adzic’s reflection on Specification by Example makes an important point that I think maps nicely here: the conversations and examples are often more valuable than the later automated tests.

That is a big deal for migrations.

If I can get:

  • five critical examples
  • the edge cases that break trust
  • the expected side effects
  • the acceptable failure modes

then I already have the core of a migration contract that PMs, engineers, and agents can all review.

6. GitHub Spec Kit gives us the scaffolding, but not the whole brownfield answer

The current GitHub Spec Kit docs and README clearly support the core planning stages:

  • /speckit.constitution
  • /speckit.specify
  • /speckit.clarify
  • /speckit.plan
  • /speckit.tasks
  • optional validation helpers such as /speckit.analyze and /speckit.checklist

The installation guide also supports initializing in the current directory with specify init . or specify init --here.

That is enough to support the mechanics of this workflow.

But the docs do not currently present a first-party “reverse-engineer this existing feature into artifacts” command as the official path. And community discussion around brownfield usage makes it clear that teams still want more guidance there.

So the honest framing is this:

  • Spec Kit provides the workflow primitives
  • brownfield spec-to-spec migration is still an operating model we have to design intentionally

That is fine. It just means we should call the workflow a synthesis, not a built-in product promise.

When This Beats Copying Code

I think spec-to-spec migration is especially strong when these conditions are true:

  • the new repo uses a different stack, such as moving from Rails to Next.js plus Go, or from a monolith module to a service plus web app split
  • the old feature has behavioral value but low code reusability
  • the old implementation carries obvious incidental complexity
  • PMs or stakeholders need a reviewable migration artifact
  • multiple teams or agents need a stable shared contract before implementation starts

This is where the old instinct to “just port the code” creates the most damage.

You do not want to migrate:

  • an ORM workaround that only exists because of the old persistence layer
  • route naming that only made sense in the old app
  • retry logic that belongs in a modern queue or workflow engine
  • UI assumptions that belonged to a different navigation model

You want to migrate the feature’s meaning, not its fossil record.

The Workflow I Would Actually Use

If I had to run this on a real feature next week, this is the order I would use.

Step 1: Choose one bounded feature with a visible product surface

Do not start with:

  • “auth”
  • “billing”
  • “notifications”

Start with something smaller and visible, such as:

  • invoice CSV export
  • tenant SSO setup flow
  • coupon application during checkout
  • dashboard filtering with saved views

The feature should have:

  • clear entry points
  • clear success and failure states
  • clear observable side effects

That makes reverse-engineering and parity testing much more practical.

Step 2: Build an evidence pack in the old repo

Before you ask any tool to write a spec, gather the evidence that describes current behavior:

  • routes, handlers, and background jobs
  • request and response shapes
  • relevant tables or stored entities
  • existing tests
  • logs, analytics, and audit events
  • feature flags
  • screenshots or support tickets if they explain user behavior
  • rollout quirks and known exceptions

ASCII view:

old repo
  -> routes
  -> handlers
  -> tests
  -> logs
  -> screenshots
  -> flags
  -> side effects
  -> evidence pack

This is the part that keeps the later spec grounded in reality.
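One way to keep the pack reviewable is to treat it as a typed manifest rather than loose notes. The sketch below is my own convention, not a Spec Kit format; the kinds mirror the checklist above, and the concrete paths are invented examples.

```typescript
// Hedged sketch: the evidence pack as a typed manifest. The categories
// mirror the evidence checklist; the example paths are invented.

type EvidenceKind =
  | "route"
  | "handler"
  | "test"
  | "log"
  | "flag"
  | "screenshot"
  | "side-effect";

interface EvidenceItem {
  kind: EvidenceKind;
  path: string; // file path, table name, or dashboard URL in the old system
  note: string; // why this item matters for the migration
}

function groupByKind(items: EvidenceItem[]): Map<EvidenceKind, EvidenceItem[]> {
  const groups = new Map<EvidenceKind, EvidenceItem[]>();
  for (const item of items) {
    const bucket = groups.get(item.kind) ?? [];
    bucket.push(item);
    groups.set(item.kind, bucket);
  }
  return groups;
}

const pack: EvidenceItem[] = [
  { kind: "route", path: "app/routes/invoices_export.rb", note: "CSV entry point" },
  { kind: "test", path: "spec/exports/invoice_csv_spec.rb", note: "locks column order" },
  { kind: "side-effect", path: "audit_events table", note: "export audit rows" },
];

console.log(groupByKind(pack).get("route")?.length); // 1
```

Grouping by kind makes gaps visible at a glance: a feature with ten routes and zero side-effect entries is usually a sign the audit trail has not been investigated yet.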

Step 3: Create current-state Spec Kit artifacts in the old repo

This is where I want to be very disciplined.

The purpose of the old-repo spec is not to redesign the feature.

The purpose is to document:

  • what the feature does today
  • what users and downstream systems expect
  • what edge cases already exist
  • what side effects are required for trust

I would treat the old repo as the place to create an as-is feature contract.

That might mean a branch and spec set that are explicitly documentation-oriented:

old-repo
  specs/058-invoice-export-current-state/
    spec.md
    plan.md
    tasks.md

The exact folder naming is your choice, but the point is to make the intent obvious:

  • this is the current feature as it exists now
  • this is not the modernization implementation yet

Step 4: Distill a parity contract from the old repo artifacts

This is the bridge.

I do not want the new repo to depend on a vague statement like:

“make it work like the old app”

I want a compact migration contract that answers:

  • what inputs matter
  • what outputs matter
  • what side effects matter
  • what non-functional expectations matter
  • which legacy quirks are real requirements
  • which legacy quirks are accidental and can be dropped

This contract can be written as:

  • worked examples
  • scenario lists
  • golden input/output samples
  • event and audit expectations
  • discrepancy rules and tolerances

A simple version can look like this:

Feature: invoice CSV export

Required outcomes
- Export includes only invoices visible to the requesting tenant.
- Columns must appear in the approved business order.
- Empty tax values serialize as empty strings, not zero.
- Export creation writes an audit event with actor, tenant, and export ID.

Allowed modernization differences
- Job execution may move from synchronous controller logic to async worker flow.
- Internal storage layer may change.
- UI trigger can move from page action to command palette if PM approves.

That is already far more useful than copying three services and hoping the meaning survives.
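The same contract can also be expressed as data, so the old repo, the new repo, and the comparison harness can share one source of truth. This shape is a hedged sketch of my own convention, not a Spec Kit artifact format; every field name here is an assumption.

```typescript
// Hedged sketch: the parity contract above expressed as data. This shape
// is my own convention, not a Spec Kit format; field names are assumptions.

interface GoldenExample {
  description: string;
  input: { tenantId: string; filters: Record<string, string> };
  expectedCsvHeader: string;
}

interface ParityContract {
  feature: string;
  requiredOutcomes: string[];      // must hold in the new implementation
  allowedDifferences: string[];    // approved modernization changes
  goldenExamples: GoldenExample[]; // concrete inputs with expected outputs
}

const invoiceExportContract: ParityContract = {
  feature: "invoice CSV export",
  requiredOutcomes: [
    "export includes only invoices visible to the requesting tenant",
    "columns appear in the approved business order",
    "empty tax values serialize as empty strings, not zero",
    "export creation writes an audit event with actor, tenant, and export ID",
  ],
  allowedDifferences: [
    "job execution may move from sync controller logic to async worker flow",
    "internal storage layer may change",
  ],
  goldenExamples: [
    {
      description: "default export for one tenant",
      input: { tenantId: "t-1", filters: {} },
      expectedCsvHeader: "invoice_id,amount,tax",
    },
  ],
};

console.log(invoiceExportContract.requiredOutcomes.length); // 4
```

A machine-readable version like this is optional, but it pays off in step 6, because the comparison harness can iterate over `goldenExamples` instead of hardcoding its expectations.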

Step 5: Create target-state Spec Kit artifacts in the new repo

Now we switch repositories.

The new repo spec should be written from the parity contract, not from the copied code.

This is where the new stack gets to be modern on purpose.

The spec should preserve:

  • business outcomes
  • observable behavior
  • required side effects
  • rollout safety

The spec should adapt:

  • architecture
  • module boundaries
  • framework usage
  • data flow
  • observability implementation

ASCII flow:

old repo evidence
  -> current-state spec
  -> parity contract
  -> new repo spec
  -> new repo plan
  -> new repo tasks

This is the moment where the migration becomes spec to spec instead of code to code.

Step 6: Implement one thin slice and compare outcomes

At this point I would resist the urge to “finish the whole migration.”

Instead:

  1. pick the smallest useful slice
  2. implement it in the new repo
  3. compare outcomes against the old system
  4. log discrepancies
  5. decide whether the discrepancy is:
    • a bug
    • an acceptable modernization change
    • a misunderstood legacy behavior

This is where seams, probes, and dual-run become very practical.

A comparison harness can be very small:

// Assumes a test runner that provides `test` and `expect` (Vitest or Jest).
// buildInvoiceExportRequest, legacyExport, modernExport, normalizeExport,
// sampleTenant, and sampleFilters are placeholders for your own harness code.
import { expect, test } from "vitest";

test("modern invoice export matches the legacy parity contract", async () => {
  const input = buildInvoiceExportRequest(sampleTenant, sampleFilters);

  const legacyResult = await legacyExport(input);
  const modernResult = await modernExport(input);

  // Compare CSV output after normalizing irrelevant differences.
  expect(normalizeExport(modernResult.csv)).toEqual(
    normalizeExport(legacyResult.csv)
  );

  // Required side effects are part of the contract, not an afterthought.
  expect(modernResult.auditEvent).toMatchObject({
    actorId: legacyResult.auditEvent.actorId,
    tenantId: legacyResult.auditEvent.tenantId,
    action: "invoice_export_created",
  });
});

That example is simple on purpose.

The important move is not the exact testing library. The important move is:

  • compare the behavior
  • normalize irrelevant differences
  • keep the migration contract visible
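To make the second point concrete, here is a hedged sketch of the kind of `normalizeExport` helper the harness above assumes. It treats row order, line endings, and stray whitespace as irrelevant; deciding what counts as irrelevant is itself a contract decision your team has to make explicitly.

```typescript
// Hedged sketch of a normalizeExport helper: strip differences the contract
// treats as irrelevant (row order, line endings, stray whitespace) before
// comparing legacy and modern CSV output.

function normalizeExport(csv: string): string {
  const lines = csv.replace(/\r\n/g, "\n").trim().split("\n");
  const [header, ...rows] = lines;
  // Column order is a required outcome, so the header is never reordered;
  // row order is treated as irrelevant here, which is a contract decision.
  rows.sort();
  return [header.trim(), ...rows.map((row) => row.trim())].join("\n");
}

const legacyCsv = "invoice_id,tax\r\nB-2,\r\nA-1,5\r\n";
const modernCsv = "invoice_id,tax\nA-1,5\nB-2,\n";
console.log(normalizeExport(legacyCsv) === normalizeExport(modernCsv)); // true
```

Note that the helper preserves the empty tax value (`B-2,`) rather than rewriting it to zero, because empty-string serialization is a required outcome in the example contract.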

Step 7: Cut over gradually and keep updating the contract

Once one slice behaves correctly, move traffic gradually:

  • dark launch if possible
  • canary if appropriate
  • compare logs and outputs
  • preserve a discrepancy log

Then update the spec artifacts based on what you learned.

That matters because a migration almost always reveals one of three things:

  • the legacy system had behavior nobody wrote down
  • the new system exposed a hidden assumption
  • the team realized some legacy behavior was not worth preserving

A good migration process makes those discoveries explicit.
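The gradual cutover and the discrepancy log can be sketched together. This is an illustrative assumption, not a prescribed mechanism: the routing function, the bucket scheme, and the log shape are all hypothetical names for flag-based traffic splitting plus explicit triage.

```typescript
// Hedged sketch of a gradual cutover switch plus discrepancy log.
// All names and the bucketing scheme are hypothetical.

type DiscrepancyVerdict = "bug" | "acceptable-change" | "misunderstood-legacy";

interface CutoverConfig {
  modernTrafficPercent: number; // 0..100, raised gradually as parity holds
}

function chooseRoute(cfg: CutoverConfig, bucket: number): "modern" | "legacy" {
  // bucket should be a stable hash of the tenant ID mapped into [0, 100),
  // so a given tenant always lands on the same path during the rollout.
  return bucket < cfg.modernTrafficPercent ? "modern" : "legacy";
}

interface DiscrepancyEntry {
  feature: string;
  detail: string;
  verdict?: DiscrepancyVerdict; // filled in during triage, not at detection
}

const discrepancyLog: DiscrepancyEntry[] = [];

discrepancyLog.push({
  feature: "invoice CSV export",
  detail: "modern path emits a trailing newline; legacy does not",
});

console.log(chooseRoute({ modernTrafficPercent: 10 }, 5));  // modern
console.log(chooseRoute({ modernTrafficPercent: 10 }, 42)); // legacy
```

Leaving `verdict` optional is deliberate: detection and triage are separate activities, and the three verdict values map directly onto the three kinds of discovery listed above.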

Prompt Examples

Here is how I would prompt the old repo versus the new repo.

Old repo: current-state reverse-engineering prompt

/speckit.specify
Document the current behavior of the invoice CSV export feature exactly as it
works today.

Use the existing routes, handlers, tests, logs, feature flags, and audit events
as evidence.

Capture:
- user-visible behavior
- required side effects
- error cases
- edge cases already covered by tests

Do not redesign the feature.
Do not propose improvements yet.
If behavior is unclear, mark it as a clarification question instead of inventing
an answer.

Then:

/speckit.clarify
Focus on undocumented edge cases, tenant isolation rules, export column ordering,
empty-value serialization, and audit event expectations.

New repo: target-state migration prompt

/speckit.specify
Implement invoice CSV export in this repository using the approved migration
contract from the legacy system.

Preserve:
- tenant visibility rules
- export column order
- empty tax serialization behavior
- required audit event semantics

Adapt the implementation to this repo's architecture, observability, and async
processing model.

Do not copy legacy module boundaries unless they are required by the contract.

Then:

/speckit.plan
Create a thin-slice implementation plan for the new repository.

Include:
- the minimal vertical slice
- rollout and comparison strategy
- parity validation points
- discrepancy logging approach
- cutover criteria

The Validation Loop

I think this is the piece that makes the whole workflow credible.

Without a validation loop, “spec-to-spec migration” risks becoming a nice-sounding rewrite.

With a validation loop, it becomes a controlled modernization loop:

legacy feature
  -> observe
  -> specify
  -> extract examples
  -> build thin slice in new repo
  -> compare outputs and side effects
  -> resolve discrepancies
  -> expand traffic
  -> retire old path

That is why I prefer this idea over code copy for major stack changes.

Copied code often gives a false feeling of safety.

Compared outcomes give you real safety.

A Simple Decision Table

Situation -> Better Default

  • Same repo, same stack, small refactor -> refactor or extract the code directly
  • New repo, different stack, hidden legacy coupling -> spec-to-spec migration
  • Product behavior is changing heavily anyway -> fresh target-state spec with selective legacy evidence
  • No reliable evidence exists yet -> pause and build the evidence pack first
  • Exact binary or algorithm reuse is required -> reuse the proven component, then spec the integration around it

The Gaps to Watch

I like this workflow, but I would not sell it as frictionless.

1. Reverse-engineering can preserve bad behavior too faithfully

If you are not careful, current-state documentation turns bugs and accidental quirks into sacred requirements.

That is why I like separating:

  • required parity
  • known bugs
  • legacy behaviors explicitly approved for removal

2. Teams can confuse “examples” with “the whole spec”

Examples are powerful, but examples alone do not capture:

  • rollout rules
  • observability expectations
  • data retention or compliance constraints
  • operational ownership

You still need the full spec and plan.

3. This workflow still needs a brownfield operating model

GitHub Spec Kit gives us the commands and artifact flow. It does not yet remove the need to decide:

  • where the canonical migration contract lives
  • how old-repo and new-repo specs reference each other
  • how discrepancies are triaged
  • when a parity gap is acceptable

That is team design work.

4. Some migrations really do justify code reuse

If a small, well-isolated library already does the right thing and is portable, reusing it can be smarter than re-specifying it.

So I would not turn this into ideology.

The question is not:

“must we never copy code?”

The real question is:

“what is the safest unit of migration for this feature?”

For many cross-stack feature moves, I think the answer is still the spec.

Final Takeaway

If I had to compress the whole article into one sentence, it would be this:

When a feature moves to a new repo and a new stack, migrate the behavior contract first and let the implementation be reborn from that contract.

That is the heart of spec-to-spec migration.

It is not anti-code.

It is anti-accidental-coupling.

And for engineers and PMs trying to modernize one feature without dragging a whole legacy worldview into a new codebase, that is a very practical distinction.

Source List

Validated against these sources on April 14, 2026.

Official and primary sources

Community signals I used to understand the brownfield gap