# Specs

## Spec Types at a Glance
| Type | Use For | Example |
|---|---|---|
| `code` | Features, bugs, refactoring | Implement JWT auth |
| `task` | Manual work, prompts, config | Create documentation prompt |
| `driver` | Coordinate multiple specs | Auth system (with .1, .2, .3 members) |
| `group` | Alias for `driver` | Same as `driver` |
| `documentation` | Generate docs from code | Document auth module |
| `research` | Analysis, synthesis | Analyze survey data |
```yaml
# Code spec - implement something
---
type: code
target_files: [src/auth.rs]
---

# Task spec - manual/config work
---
type: task
target_files: [.chant/prompts/doc.md]
---

# Driver spec - coordinates members
---
type: driver
---
# (has 001.1.md, 001.2.md members)

# Documentation spec - docs from code
---
type: documentation
tracks: [src/auth/*.rs]
target_files: [docs/auth.md]
---

# Research spec - analysis/synthesis
---
type: research
origin: [data/metrics.csv]
target_files: [analysis/report.md]
---
```
See spec-types.md for detailed documentation of each type.
## Unified Model
No separate “epic” type. Specs can be split into groups. A spec with group members is a driver.
```
.chant/specs/
  2026-01-22-001-x7m.md        ← Driver (has members)
  2026-01-22-001-x7m.1.md      ← Member
  2026-01-22-001-x7m.2.md      ← Member
  2026-01-22-001-x7m.2.1.md    ← Nested group member
```
## Filename Is the ID

No `id` field in frontmatter. The filename (without `.md`) is the identifier.
See ids.md for format details.
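For example, the spec file shown above resolves to its ID by dropping the extension:

```
.chant/specs/2026-01-22-001-x7m.md   →   ID: 2026-01-22-001-x7m
```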
## Frontmatter Schema
See Schema & Validation for validation rules and linting.
```yaml
---
# No id field - filename is the ID

# Type (determines behavior)
type: code                    # code | documentation | research

# State
status: pending               # pending | in_progress | completed | failed
depends_on:                   # Spec IDs that must complete first
  - 2026-01-22-001-x7m

# Organization
labels: [auth, feature]       # Free-form tags
target_files:                 # Files this spec creates/modifies
  - src/auth/middleware.go

# Context (reference docs for any type)
context:                      # Docs injected as background for agent
  - docs/api-design.md

# Type-specific fields (see spec-types.md)
tracks:                       # documentation: source code to monitor
informed_by:                  # research: materials to synthesize
origin:                       # research: input data (triggers drift)

# Git (populated on completion)
branch: chant/2026-01-22-002-q2n
commit: a1b2c3d4
completed_at: 2026-01-22T15:30:00Z
model: claude-opus-4-5        # AI model that executed the spec

# Execution
prompt: standard              # Optional, defaults to config

# Verification
last_verified: 2026-01-22T15:00:00Z   # Timestamp of last verification
verification_status: passed           # passed | partial | failed (after verify)
verification_failures:                # List of failed acceptance criteria
  - "Criterion description"

# Re-execution tracking
replayed_at: 2026-01-22T16:00:00Z     # Timestamp of last re-execution
replay_count: 1                       # Number of times re-executed
original_completed_at: 2026-01-15T14:30:00Z   # Preserved from first completion
---
```
See spec-types.md for field usage by type.
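As a point of reference, a newly created code spec usually carries only a few of these fields; the git, verification, and re-execution fields are populated by chant as the spec moves through its lifecycle. A minimal sketch:

```yaml
---
type: code
status: pending
labels: [auth, feature]
target_files:
  - src/auth/middleware.go
---
```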
## Spec States
```
pending → in_progress → completed
                      ↘ failed

blocked
cancelled
```
See Lifecycle for detailed status documentation, transitions, and approval workflow.
## Drift Detection
Documentation and research specs declare their input files. When these change after completion, drift is detected.
```yaml
# Documentation: tracks source code
---
type: documentation
tracks:
  - src/auth/*.rs
target_files:
  - docs/auth.md
---

# Research: origin data + informed_by materials
---
type: research
origin:
  - data/metrics.csv
informed_by:
  - docs/methodology.md
target_files:
  - analysis/report.md
---
```
### Drift by Type
| Type | Field | Drifts When |
|---|---|---|
| `code` | — | Acceptance criteria fail |
| `documentation` | `tracks:` | Tracked source code changes |
| `research` | `origin:`, `informed_by:` | Input files change |
### Checking for Drift

Use `chant verify` to re-check acceptance criteria and detect drift:
```
$ chant verify 001
Verifying spec 001: Add rate limiting

Checking acceptance criteria...
  ✓ Rate limiter middleware exists
  ✓ Returns 429 with Retry-After header
  ✓ Tests verify rate limiting works

Spec 001: VERIFIED
```
For documentation and research specs, use `chant drift` to detect input file changes:
```
$ chant drift

⚠ Drifted Specs (inputs changed)

  2026-01-24-005-abc (documentation)
    src/api/handler.rs (modified: 2026-01-25)

$ chant work 005 --skip-criteria    # Re-run to update documentation
```
See autonomy.md for more on drift detection.
## Readiness
A spec is ready when:

- Status is `pending`
- All `depends_on` specs are `completed`
- No group members exist OR all members are `completed`
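For example, assuming `2026-01-22-001-x7m` has already completed, a spec with the following frontmatter would be ready to run (a sketch showing only the relevant fields):

```yaml
---
type: code
status: pending
depends_on:
  - 2026-01-22-001-x7m   # completed, so this dependency is satisfied
---
```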
## Spec Groups
Determined by filename, not frontmatter:
```
2026-01-22-001-x7m.md      ← Driver
2026-01-22-001-x7m.1.md    ← Member (driver = 2026-01-22-001-x7m)
```
No `driver` field needed. The `.N` suffix establishes group membership.
A driver with incomplete members cannot be marked complete. See groups.md.
## Spec Cancellation

Cancel a spec by marking it cancelled (soft-delete), or permanently delete it with `--delete`.
### Cancelling a Spec
```
$ chant cancel 001                            # Cancel with confirmation (soft-delete)
$ chant cancel 001 --yes                      # Skip confirmation
$ chant cancel 001 --dry-run                  # Preview what would be cancelled
$ chant cancel 001 --skip-checks              # Skip safety checks
$ chant cancel 001 --delete                   # Permanently delete (hard delete)
$ chant cancel 001 --delete --cascade         # Delete driver and all members
$ chant cancel 001 --delete --delete-branch   # Delete spec and associated branch
```
**Safety Checks** (default, skipped with `--skip-checks`):
- Cannot cancel specs that are in-progress or failed
- Cannot cancel member specs (cancel the driver instead)
- Cannot cancel already-cancelled specs
- Warns if other specs depend on this spec
### What Happens When Cancelled (Soft-Delete)

- Spec status changed to `cancelled` in frontmatter
- File is preserved in `.chant/specs/`
- Cancelled specs are excluded from `chant list` and `chant work`
- Can still view with `chant show` or `chant list --status cancelled`
- All git history preserved
### What Happens When Deleted (Hard Delete with `--delete`)

- Permanently removes spec file from `.chant/specs/`
- Removes associated log file from `.chant/logs/`
- Removes worktree artifacts
- With `--cascade`: deletes driver and all member specs
- With `--delete-branch`: removes associated git branch
### Cancelled State

```yaml
---
status: cancelled
---
```
### Difference from Delete

- `cancel`: Changes status to `cancelled`, preserves files and history
- `delete`: Removes spec file, logs, and worktree artifacts
### Re-opening Cancelled Specs

To resume work on a cancelled spec, manually edit the status back to `pending`:

```
# Edit the spec file and change status: cancelled to status: pending
chant work 001    # Resume execution
```
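Concretely, the frontmatter change is a one-line edit:

```yaml
---
status: pending   # was: status: cancelled
---
```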
## Spec Amendments
Specs are append-only by default. Prefer:
- Cancel and create new spec
- Add member specs for new requirements
- Create follow-up spec
### Editing Specs
Edit spec files directly in your text editor - they’re just markdown files.
Safe edits (always allowed):
- Description clarification
- Labels
- Notes
Risky edits (use caution for in-progress/completed specs):
- Target files (may not match actual work)
- Dependencies (may invalidate completion)
### Amendment Log

Track changes to a spec after creation:
```yaml
---
status: completed
amendments:
  - at: 2026-01-22T14:00:00Z
    by: alex
    field: description
    reason: "Clarified scope"
---
```
## Splitting Specs

If a spec grows too large, use `chant split` to break it into member specs:
```
$ chant split 001

Creating member specs from 001...

Found sections:
  1. "Implement auth middleware"
  2. "Add JWT validation"
  3. "Write tests"

Create as members? [y/N]

Created:
  001.1 - Implement auth middleware
  001.2 - Add JWT validation
  001.3 - Write tests

Driver 001 status: pending (waiting for members)
```
## Acceptance Criteria Validation
Chant validates that all acceptance criteria checkboxes are checked before marking a spec complete. This validation happens after the agent exits.
```markdown
## Acceptance Criteria

- [x] Implement login endpoint
- [ ] Add rate limiting    <- Unchecked!
- [x] Write tests
```
If unchecked boxes exist, chant shows a warning and fails:
```
⚠ Found 1 unchecked acceptance criterion.
  Use --force to skip this validation.

error: Cannot complete spec with 1 unchecked acceptance criteria
```
The spec is marked as failed until all criteria are checked.
### Skipping Validation

Use `--force` to complete despite unchecked boxes:

```
chant work 001 --force
```
### Best Practice

Agents should check off criteria as they complete each item:

- Change `- [ ]` to `- [x]` in the spec file
- This creates a clear record of completion
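Continuing the earlier example, a fully completed spec ends up with every box checked:

```markdown
## Acceptance Criteria

- [x] Implement login endpoint
- [x] Add rate limiting
- [x] Write tests
```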
## Agent Output
After successful completion, chant appends the agent’s output to the spec file. This creates an audit trail of agent work.
### Format
The output is appended as a new section with timestamp:
````markdown
## Agent Output

2026-01-24T15:30:00Z

```
Done! I've implemented the authentication middleware.

Summary:
- Added JWT validation in src/auth/middleware.go
- Added tests in src/auth/middleware_test.go
- All 5 tests pass
```
````
### Multiple Runs

Each replay with `--force` appends a new output section:
````markdown
## Agent Output

2026-01-24T15:30:00Z

```
[first run output]
```

## Agent Output

2026-01-24T16:00:00Z

```
[replay output - agent detected implementation exists]
```
````
This allows tracking how the agent behaved across multiple executions.
### Truncation
Outputs longer than 5000 characters are truncated with a note indicating the truncation. This prevents spec files from growing excessively large while still capturing the essential information about what the agent accomplished.
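As an illustrative sketch (the exact wording of the truncation note is an assumption, not taken from chant's output), a truncated section might look like:

````markdown
## Agent Output

2026-01-24T16:30:00Z

```
[first 5000 characters of agent output]
... [output truncated]   <- hypothetical note; actual wording may differ
```
````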
## Model Tagging
When a spec completes, chant records the AI model used in the frontmatter:
```yaml
---
status: completed
commit: abc1234
model: claude-opus-4-5
---
```
### How Model Is Detected
The model is detected from environment variables at execution time. Chant checks these variables in order:
1. `CHANT_MODEL` - chant-specific override
2. `ANTHROPIC_MODEL` - standard Anthropic environment variable
The first non-empty value found is recorded. If neither is set, the model field is omitted from the frontmatter.
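For example, to control what gets recorded for a particular run, you can set the chant-specific override before executing the spec (a sketch; the exact model string depends on your environment):

```
# CHANT_MODEL takes precedence over ANTHROPIC_MODEL
export CHANT_MODEL=claude-opus-4-5
chant work 001
```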
### Possible Values
The model field contains whatever value is in the environment variable. Common values include:
- `claude-opus-4-5` - Claude Opus 4.5
- `claude-sonnet-4` - Claude Sonnet 4
- `claude-haiku-3-5` - Claude Haiku 3.5
The value is recorded as-is without validation, so it may also contain version suffixes or custom identifiers depending on your setup.
### Use Cases
- Cost tracking: See which models completed which specs to understand costs
- Debugging: Identify model-specific behavior differences when issues arise
- Auditing: Know which AI version produced each change for compliance or review