Addressing Common Objections to Spec-Driven Development

Adopting new development methodologies raises valid questions about productivity, quality, and adaptability. This evidence-based FAQ addresses common objections to Liatrio's Spec-Driven Development (SDD) using a concrete example: implementing a cspell pre-commit hook for spell-checking. By analyzing specifications, task lists, proof artifacts, and validation reports, we demonstrate how well-executed SDD works in practice. You can inspect the full example in Reference Materials.

It also highlights something easy to miss: SDD keeps its method visible. The workflow is not hidden behind a product layer, allowing teams to inspect the prompts and understand the patterns that drive the workflow. This accessibility enables teams to improve their skills with AI as they learn SDD.

Does Spec-Driven Development Add Unnecessary Overhead?

Structured processes often face criticism for introducing bureaucratic overhead. However, the cspell hook implementation shows that SDD structure is an investment, not a cost. By forcing the hardest thinking into the first two steps, when correction is cheapest, it prevents the ambiguity and rework that are the true sources of delay. Mandating upfront clarity ultimately increases development velocity.

Upfront Planning Prevents Costly Rework

The specification clearly established Goals and Non-Goals, defining what's out of scope. That early clarity protected the team from scope creep and unexpected requirements, when it was still cheap to make adjustments.

Structured Work Creates Velocity

Git history shows implementation took under 6 minutes—from first commit (09:57:02) to final commit (10:02:41). That speed was enabled by Step 2 turning the approved spec into four distinct tasks before implementation began.

Unambiguous Blueprint

SDD creates a direct, verifiable link between initial goals and final outcomes. The validation report confirms every stated goal was met without deviation, eliminating friction from misaligned expectations.

Non-Goals Defined Clear Boundaries

Explicitly Out of Scope:

  • Spell checking code files
  • Automatic dictionary updates
  • CI/CD spell checking
  • IDE integration

Additional Exclusions:

  • Multi-language support
  • Auto-fixing errors
  • Generated files checking
  • CHANGELOG.md checking

By setting these boundaries, the team ensured effort was concentrated exclusively on delivering agreed-upon value without distraction.

How Do We Know the Feature Actually Worked?

A primary benefit of SDD is generating objective, verifiable evidence proving features are complete and correct. This moves assessment from subjective opinion to factual verification. The cspell implementation generated multiple layers of proof—from validation reports to individual commit traceability—guaranteeing the final product meets every requirement.

  • 11/11 requirements verified: 100% of functional requirements passed validation with documented evidence
  • Final status: an unambiguous PASS conclusion in the comprehensive validation report
  • 2 spelling errors caught: test proof showed the system correctly identified both and suggested fixes

Coverage Matrix: Evidence for Every Requirement

Verifiable Proof from Test Output

test-spell-check.md:9:4 - Unknown word (recieve) fix: (receive)
test-spell-check.md:10:4 - Unknown word (seperate) fix: (separate)
CSpell: Files checked: 1, Issues found: 2 in 1 file.

This artifact proves the hook correctly identified the file, misspelled words, and provided correct suggestions. This is not a description of what happened—it's a record of what happened, providing undeniable evidence of system behavior.
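A hook that produces output like this can be declared in a standard pre-commit configuration. The following is a minimal sketch, not the exact configuration from the example; the hook id, entry command, and file-type filter are assumptions:

```yaml
# Minimal sketch of a cspell pre-commit hook (assumed configuration,
# not the exact one from the example). Requires Node.js so that
# cspell can be run via npx.
repos:
  - repo: local
    hooks:
      - id: cspell
        name: cspell spell check
        entry: npx cspell --no-progress --no-must-find-files
        language: system
        types: [markdown]  # only spell-check Markdown, consistent with the Non-Goals
```

With this in place, pre-commit passes the staged Markdown filenames to cspell, and any misspelling fails the commit with output like the record above.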

What Happens If Requirements Change?

Planning-heavy processes face criticism for rigidity and inability to adapt. The cspell hook example proves that SDD provides a structured framework for managing change gracefully. Its emphasis on clarity, iterative planning, and modularity makes it inherently adaptable to inevitable project changes.

Clarity as Foundation

The initial specification provides a stable baseline. Explicit Non-Goals make it easy to distinguish scope changes from clarifications, enabling structured prioritization conversations.

Iterative Planning

The Clarifying Questions phase demonstrates a dialogue-based approach. User feedback refined the requirements before finalization, showing that planning is collaborative rather than dictated.

Modular Adaptation

The task structure allows small-scale changes without disrupting the workflow. Git history shows pragmatic in-flight adjustments based on discoveries made during real-world testing.

Real Example: Incorporating Feedback

"we don't need validation tests in python, that's overkill, remove that."

This user feedback during task generation was immediately incorporated. The final task list reflects this change, preventing wasted effort on unnecessary work. The plan adapted to stakeholder input before implementation began.

In-Flight Adjustments

Even with excellent planning, discoveries happen during development. Commit message 26e8c10 shows: "Added missing dictionary terms found during testing." This proves the process allows pragmatic adjustments without rigidity.

  • Clear scope boundaries: Non-Goals establish what's out of scope, making change identification straightforward
  • Feedback integration: the planning phase incorporates stakeholder input before implementation starts
  • Modular tasks: small, focused units allow adjustments without disrupting the entire workflow

Conclusion: Evidence-Based Success

The cspell pre-commit hook implementation provides concrete evidence that Spec-Driven Development effectively mitigates common concerns about overhead, verifiability, and rigidity when executed properly.

High-Velocity Development

The upfront planning investment created an unambiguous scope, leading to focused development. The complete implementation was achieved in under 6 minutes with a clear task breakdown.

Guaranteed Verifiability

The emphasis on proof artifacts produced an auditable evidence chain. All 11 functional requirements were met and validated, with documented proof available for stakeholder review.

Graceful Adaptability

The process demonstrated flexibility by incorporating feedback during both the planning and implementation phases. Its modular structure enabled pragmatic in-flight adjustments without disruption.

The SDD Advantage

SDD provides a robust framework that enhances clarity, guarantees verifiability, and gracefully accommodates change. The initial investment in structured planning and documentation delivers more predictable, successful outcomes with reduced rework and increased stakeholder confidence. Because the workflow is exposed in readable prompts and artifacts, it also helps engineers learn the patterns of effective AI-assisted development instead of treating them as hidden product behavior.

  • 100% requirements met: complete validation coverage
  • 6 minutes to implement: from first to final commit
  • 0 scope creep issues: clear boundaries prevented drift
Key Takeaway: The evidence from this real-world implementation demonstrates that SDD's structured approach is not overhead—it's an investment that pays dividends through clarity, velocity, quality assurance, and a more teachable way of working with AI.

Are Specs, Tasks, and Proofs Supposed To Live in the Repo Forever?

No. In SDD, these artifacts are process scaffolding for an implementation loop, not living documentation that must stay perfectly synchronized forever. Their job is to help a human and an AI align on the work, verify progress, and make the reasoning process explicit while the feature is being built.

Process Artifacts, Not Product Docs

Specs, tasks, proofs, and validation reports exist to guide the current implementation loop. Once that loop is complete, the durable source of truth becomes the code and tests.

Storage and Retention Are Flexible

Some teams keep artifacts in docs/specs/, some archive them, and some map them to Jira, GitHub issues, or other systems. SDD stays intentionally unopinionated so teams can manage context the way that fits their environment.

They Still Matter After the Run

Even when they are not maintained forever, these artifacts still provide value as an audit trail and a teaching surface that shows how the work was framed and verified.

How To Think About PR Feedback

Implementation Feedback

Feedback like "use a different test structure" or "refactor this code path" stays inside the current implementation and review loop. The original artifacts already did their job by getting the work to a reviewable state.

Directional Feedback

Feedback like "this should use a different technology" or "the scope should be different" signals that the earlier framing was off. That usually means going back upstream, clarifying the direction, and starting a new spec for a new implementation loop.

You can think of specs and proofs like the recipe notes and test batches used to bake a tray of cookies. They help you get the current batch right, but once the batch is baked, the next major change is usually a new batch, not an edit to the old dough.

Why Do AI Responses Start with Emoji Markers (SDD1️⃣, SDD2️⃣, etc.)?

You may notice that AI responses begin with emoji markers like SDD1️⃣, SDD2️⃣, SDD3️⃣, or SDD4️⃣. This is an intentional feature designed to detect a silent failure mode called context rot.

What Is Context Rot?

Research from Chroma and Anthropic demonstrates that AI performance degrades as input context length increases, even when tasks remain simple. This degradation happens silently—the AI doesn't announce errors, but gradually loses track of critical instructions.

How Verification Markers Work

Each prompt instructs the AI to always begin responses with its specific marker (SDD1️⃣ for spec generation, SDD2️⃣ for task breakdown, etc.). When you see the marker, it's an indicator that critical instructions are probably being followed. If the marker disappears, it's an immediate signal that context instructions may have been lost.

What You Should Expect

Normal responses will start with the marker: SDD1️⃣ I'll help you generate a specification... or SDD3️⃣ Let me start implementing task 1.0.... This is expected behavior and indicates the verification system is working correctly. The markers add minimal overhead (1-2 tokens) while providing immediate visual feedback. They also model a broader SDD principle: effective AI workflows should make important operating logic explicit and easy to inspect.
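Because the check is simply "does the response start with its marker," it is easy to automate. A minimal sketch in Python; the marker strings come from this document, while the function name and wrapper usage are hypothetical:

```python
# Expected verification markers, one per SDD prompt stage,
# as described in the document.
SDD_MARKERS = ("SDD1️⃣", "SDD2️⃣", "SDD3️⃣", "SDD4️⃣")

def marker_present(response: str) -> bool:
    """Return True if the response begins with any expected SDD marker.

    A missing marker suggests the prompt's instructions may have been
    lost to context rot and the run should be inspected.
    """
    # str.startswith accepts a tuple, so one call covers all stages.
    return response.lstrip().startswith(SDD_MARKERS)
```

A wrapper around the AI session could call `marker_present` on each response and flag any run where it returns False, turning the visual check into an automatic one.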

Technical Background

This verification technique was shared by Lada Kesseler at AI Native Dev Con Fall 2025 as a practical solution for detecting context rot in production AI workflows. The technique provides:

  • Immediate feedback: Visual confirmation that instructions are being followed
  • Low overhead: Minimal token cost (1-2 tokens per response)
  • Simple implementation: Easy to spot in terminal/text output
  • Failure detection: Absence of marker immediately signals instruction loss

Continue Evaluating the Workflow

Watch the short walkthrough next, then dive into the real prompts, proofs, and validation artifacts behind the workflow.