The Spec-Driven Development (SDD) Playbook
Complete a feature in minutes, not days. Liatrio's Spec-Driven Development workflow is just four markdown prompts that help developers guide an AI assistant through their day-to-day implementation work with structure, clarity, and evidence.
These prompts are transparent and learnable: you can read every prompt, understand the workflow logic it applies, and adapt the patterns to your own team and projects. Instead of hiding the method inside a product, SDD exposes it in markdown.
They are also tool-agnostic (they work with any AI assistant) and lightweight (no installation, no dependencies). They create a simple file structure in docs/specs/ that doesn't pollute your repo.
View the prompts on GitHub: github.com/liatrio-labs/spec-driven-workflow/prompts
What You Bring: Flow Input
Start with anything: an idea, a Jira story, a GitHub issue, or a feature request. The prompts adapt to your input and guide you through the workflow. No special format required—just describe what you want to build.
Built-in Scope Validation
The prompts automatically check whether your work is appropriately sized. If it's too large, they'll suggest splitting it into smaller specs. If it's too small, they'll suggest direct implementation. The sweet spot is a well-scoped feature that is ready to implement.
The Four Steps
One person runs through these 4 prompts sequentially with an AI assistant. In the cspell pre-commit hook example featured across this site, the end-to-end workflow took less than 15 minutes from start to finish. The highest-leverage work happens in Step 1 and Step 2: when the spec is clear and the task plan is concrete, implementation and validation are far more likely to run smoothly without human rescue. Because the prompts are readable, the workflow also doubles as a guide for learning how to scope work, guide AI, and verify results more effectively.
Specification
Focus: Lock down scope, intent, and success criteria before any code is written.
What It Does: The prompt validates scope (too large/too small/just right), resolves ambiguous context, and generates a specification document with clear boundaries.
Output: A markdown spec file in docs/specs/[NN]-spec-[feature-name]/
Prompt: SDD-1-generate-spec.md
Task Breakdown
Focus: Turn the approved spec into an executable plan with demo criteria and proof artifacts.
What It Does: Analyzes the spec, identifies relevant files, and creates a task list with parent tasks (first) and sub-tasks (after confirmation) so drift is caught before implementation starts.
Output: A task list markdown file with demo criteria and proof artifact requirements
Implementation
Focus: Execute tasks systematically while creating proof artifacts and following git workflow.
What It Does: Guides implementation with checkpoint modes (continuous/task/batch), creates proof artifacts, and manages git commits.
Output: Working code, proof artifact files, and task-aligned git commits
Prompt: SDD-3-manage-tasks.md
Validation
Focus: Verify implementation meets all spec requirements using proof artifacts and evidence.
What It Does: Creates a coverage matrix, verifies proof artifacts, checks file integrity, and produces a validation report.
Output: A validation report with PASS/FAIL gates and evidence-based coverage matrix
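To make the coverage matrix concrete, here is a minimal Python sketch of the idea: each spec requirement is mapped to the proof artifacts that cover it, and a requirement with no covering proof fails. The requirement IDs, proof filenames, and matching rule are illustrative assumptions, not the prompt's actual logic.

```python
requirements = ["REQ-1", "REQ-2", "REQ-3"]  # hypothetical spec requirements

# Hypothetical proof artifacts and the requirements each one covers
proofs = {
    "01-task-01-proofs.md": ["REQ-1"],
    "01-task-02-proofs.md": ["REQ-2", "REQ-3"],
}

def coverage_matrix(requirements, proofs):
    """Map each requirement to the proof artifacts that cover it."""
    matrix = {}
    for req in requirements:
        covering = [name for name, reqs in proofs.items() if req in reqs]
        matrix[req] = {"proofs": covering, "status": "PASS" if covering else "FAIL"}
    return matrix

matrix = coverage_matrix(requirements, proofs)
# The overall gate passes only when every requirement has at least one proof
overall = "PASS" if all(row["status"] == "PASS" for row in matrix.values()) else "FAIL"
```

An uncovered requirement immediately shows up as a FAIL row, which is the point: completion becomes a matter of evidence, not assertion.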
Why This Works
- Built-in scope validation prevents oversized work
- Early clarification makes later course correction cheaper and less likely
- Readable prompts expose patterns teams can study and adapt
- Tool-agnostic—works with any AI assistant
- Artifact storage stays lightweight and can adapt to your environment
- Context verification markers detect instruction loss early
Perfect For
- Jira stories post-grooming (ready to pick up)
- Small-to-medium features (one user story)
- One-person workflows with AI assistance
- Teams wanting lightweight, flexible processes
Built-in Context Verification
Each prompt includes a verification marker (SDD1️⃣, SDD2️⃣, SDD3️⃣, SDD4️⃣) that appears at the start of AI responses. These markers help detect context rot—a silent failure mode where AI performance degrades as input context length increases, even when tasks remain simple.
For more information, see Why Do AI Responses Start with Emoji Markers?
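A sketch of how such a check could be automated, assuming the markers shown above; the helper name and lookup table are ours, not part of the prompts:

```python
# Expected verification marker for each SDD step (from the prompts)
MARKERS = {1: "SDD1️⃣", 2: "SDD2️⃣", 3: "SDD3️⃣", 4: "SDD4️⃣"}

def context_intact(response: str, step: int) -> bool:
    """Return True if the AI response opens with the expected step marker.

    A missing marker suggests the prompt's instructions may have fallen
    out of the model's effective context.
    """
    return response.lstrip().startswith(MARKERS[step])
```

For example, `context_intact("SDD3️⃣ Starting task 2.1", 3)` returns `True`, while a response that has dropped the marker returns `False` and warrants re-establishing context.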
How does this compare to other structured development tools? See our comparison against Kiro, SpecKit, and Taskmaster to understand why Liatrio's prompts fit better into the typical workflow of software developers doing day-to-day work, and why exposing the workflow logic matters just as much as lightweight adoption.
Where SDD Fits in the Work
SDD is intentionally focused on the implementation loop. In many enterprise teams, research, prioritization, and architecture decisions already happen upstream in product, design, or ticketing systems. SDD starts when an engineer has a piece of work ready to execute and helps carry it through implementation and into review.
Loop 1: Research and Problem Framing
This is where requirements are clarified, options are considered, and constraints are understood. In many teams this already lives in Jira, GitHub issues, design docs, or product conversations before SDD begins.
Loop 2: Implementation via SDD
This is the four-step workflow. Specs, tasks, proofs, and validation artifacts help the human and the AI align on the work, verify progress, and make the reasoning process visible while the feature is being built.
Loop 3: Review and Iteration
The PR is reviewed and feedback is incorporated. Implementation feedback can be handled in the current loop, but directional feedback usually means starting a new spec and a new implementation run.
Artifacts Are Process Scaffolding
Specs, tasks, proofs, and validation reports are process artifacts, not living documentation. They exist to guide the implementation loop, help the human verify the AI's understanding, and make the workflow teachable while the work is in flight.
After that loop completes, the code and tests become the durable source of truth. Some teams keep these artifacts in docs/specs/, some archive them elsewhere, and some align them to external systems. SDD intentionally leaves that lifecycle flexible so teams can manage context in the way that fits their environment.
From Reactive Art to Predictable Engineering
By consistently using these four prompts, developers transform small feature work from reactive coding into predictable, evidence-based implementation. Liatrio's Spec-Driven Development workflow creates a self-documenting, auditable process that systematically drives up quality, reduces ambiguity, and ensures every feature delivered is a feature validated—all in minutes, not days.
The important point is that this process remains visible to the team. Engineers are not only using AI to move faster; they are learning a repeatable way to frame requirements, direct execution, and validate output.
- Abstract Idea: Initial concept with unclear requirements
- Clear Specification: Locked scope with stakeholder alignment
- Systematic Execution: Evidence-backed implementation
- Validated Feature: Production-ready, merge-approved code
The Audit Trail Advantage
The linkage from Git commit to task list to proof artifact forms an unbroken audit trail from a specific line of code back to the requirement it satisfies. This traceability transforms development into a transparent, verifiable process.
Because the workflow is documented in prompts and artifacts rather than hidden behind product logic, the audit trail also becomes a teaching surface teams can inspect to understand how effective AI-assisted delivery actually works.
Self-Documenting Process
Every step generates a working record of the implementation loop, creating traceability and shared context without requiring a separate documentation effort.
Auditable Progress
Machine-readable commit messages and proof documents enable automated verification and compliance tracking at any point in time.
Evidence-Based Quality
Replaces subjective opinions with verifiable proof, ensuring what was built is exactly what was requested.
This disciplined approach prevents common development pitfalls and ensures that completion is a matter of proof, not opinion. The result is higher quality software, reduced risk, and a development process that scales with organizational growth.
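As a sketch of how machine-readable commit messages could feed automated verification, assume an illustrative convention in which each commit names the task it completes (e.g. "task 2.3"). The regex and helper below are our own reconstruction, not a format the prompts mandate; adapt the pattern to your team's convention.

```python
import re

# Matches task references like "task 2.3" or "Task 12.1" in a commit message
TASK_REF = re.compile(r"\btask\s+(\d+\.\d+)\b", re.IGNORECASE)

def tasks_referenced(commit_message: str) -> list[str]:
    """Extract the task numbers a commit message claims to address."""
    return TASK_REF.findall(commit_message)
```

Running this over `git log` output would let a compliance check confirm that every parent task in the task list has at least one corresponding commit, and flag commits that reference no task at all.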
Directory Structure
One common way to run SDD is to keep implementation artifacts in docs/specs/ so requirements, tasks, proofs, and validation stay connected in one place while the work is active.
docs/specs
├── 01-spec-feature-name
│   ├── 01-proofs
│   │   ├── 01-task-01-proofs.md
│   │   ├── 01-task-02-proofs.md
│   │   ├── 01-task-03-proofs.md
│   │   └── 01-task-04-proofs.md
│   ├── 01-questions-1-feature-name.md
│   ├── 01-spec-feature-name.md
│   ├── 01-tasks-feature-name.md
│   └── 01-validation-feature-name.md
├── 02-spec-another-feature
│   ├── 02-proofs
│   │   ├── 02-task-01-proofs.md
│   │   ├── 02-task-02-proofs.md
│   │   └── 02-task-03-proofs.md
│   ├── 02-questions-1-another-feature.md
│   ├── 02-spec-another-feature.md
│   ├── 02-tasks-another-feature.md
│   └── 02-validation-another-feature.md
└── 03-spec-third-feature
    ├── 03-proofs
    │   ├── 03-task-01-proofs.md
    │   ├── 03-task-02-proofs.md
    │   ├── 03-task-03-proofs.md
    │   ├── 03-task-04-proofs.md
    │   └── 03-task-05-proofs.md
    ├── 03-questions-1-third-feature.md
    ├── 03-spec-third-feature.md
    ├── 03-tasks-third-feature.md
    └── 03-validation-third-feature.md
File Naming Convention
- [NN]-spec-[feature-name]/: Main directory for each feature specification
  - [NN]: Zero-padded 2-digit number (01, 02, 03, etc.)
  - spec-[feature-name]: Descriptive feature name
- [NN]-spec-[feature-name].md: The main specification document
- [NN]-tasks-[feature-name].md: Task breakdown with parent/sub-tasks
- [NN]-questions-1-[feature-name].md: Clarifying questions and answers
- [NN]-validation-[feature-name].md: Validation report and coverage matrix
- [NN]-proofs/: Subdirectory containing proof artifacts
  - [NN]-task-[TT]-proofs.md: Proof artifacts for each parent task
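The naming convention is regular enough to lint mechanically. The patterns below are our own reconstruction of the convention above, not an official validator, and assume lowercase, hyphen-separated feature names.

```python
import re

# One pattern per artifact type in the convention above
PATTERNS = [
    re.compile(r"^\d{2}-spec-[a-z0-9-]+\.md$"),          # main specification
    re.compile(r"^\d{2}-tasks-[a-z0-9-]+\.md$"),         # task breakdown
    re.compile(r"^\d{2}-questions-\d+-[a-z0-9-]+\.md$"), # clarifying questions
    re.compile(r"^\d{2}-validation-[a-z0-9-]+\.md$"),    # validation report
    re.compile(r"^\d{2}-task-\d{2}-proofs\.md$"),        # proof artifacts
]

def follows_convention(filename: str) -> bool:
    """True if the filename matches one of the SDD artifact patterns."""
    return any(p.match(filename) for p in PATTERNS)
```

Wired into a pre-commit hook or CI step, a check like this keeps docs/specs/ tidy as the number of specs grows.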
Workflow Progression
- Specification Phase: Creates [NN]-spec-[feature-name].md and [NN]-questions-1-[feature-name].md
- Task Planning Phase: Creates [NN]-tasks-[feature-name].md
- Implementation Phase: Creates [NN]-proofs/[NN]-task-[TT]-proofs.md files as tasks are completed
- Validation Phase: Creates [NN]-validation-[feature-name].md with final validation report
This shows one way to store a complete Loop-2 implementation trail in-repo. The exact retention model is intentionally flexible, and teams can archive or externalize these artifacts once the implementation loop is complete.