Get the most out of Damper CLI.
A practical guide to project setup, spec writing, project context, completion checklists, and the habits that make AI agents ship reliable code.
- One command to connect Damper to your project.
- The knowledge base that makes every AI session productive from the first line.
- The anatomy of a task that produces great code.
- Two mechanisms to prevent mistakes and verify quality: critical rules shown at task start, and completion checklists verified at task end.
One command to connect Damper to your project.
npx @damper/cli setup
This stores your API key in .damper/config.json and adds the Damper MCP server to your Claude Code configuration (~/.claude/settings.json). No code changes required.
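MCP servers in Claude Code settings are registered under an `mcpServers` key. The entry Damper adds might look roughly like this (the server name, command, and args shown here are assumptions for illustration, not the exact values Damper writes):

```json
{
  "mcpServers": {
    "damper": {
      "command": "npx",
      "args": ["-y", "@damper/cli", "mcp"]
    }
  }
}
```

If something looks off after setup, inspect `~/.claude/settings.json` and `.damper/config.json` directly to see what was actually written.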
After setup, start a Claude Code session and ask it to analyze your codebase. The agent will read your code, identify patterns, and create project context sections that future sessions can reference.
Analyze this codebase and create project context sections for overview, conventions, testing, and architecture. Use the Damper MCP tools: update_context_section for each section.
Section-based docs
Organize knowledge into named sections like overview, conventions, testing, or api/architecture. Each section is loaded independently so agents only read what they need.
Hierarchical paths
For monorepos, use paths like api/architecture or api/endpoints. Fetch all children with api/* or all descendants with api/**.
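The wildcard semantics can be sketched with a small matcher. This is a local illustration of the path rules described above, not Damper's implementation:

```javascript
// "api/*"  matches direct children of "api" (exactly one extra segment).
// "api/**" matches all descendants of "api" (one or more extra segments).
// Anything else must match exactly.
function matchesPattern(pattern, path) {
  if (pattern.endsWith("/**")) {
    const base = pattern.slice(0, -3);
    return path.startsWith(base + "/");
  }
  if (pattern.endsWith("/*")) {
    const base = pattern.slice(0, -2);
    if (!path.startsWith(base + "/")) return false;
    const rest = path.slice(base.length + 1);
    return rest.length > 0 && !rest.includes("/");
  }
  return pattern === path;
}

const sections = ["api/architecture", "api/endpoints", "api/endpoints/auth", "testing"];
console.log(sections.filter((s) => matchesPattern("api/*", s)));  // direct children only
console.log(sections.filter((s) => matchesPattern("api/**", s))); // all descendants
```

The first filter keeps `api/architecture` and `api/endpoints`; the second additionally keeps `api/endpoints/auth`.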
Token-efficient loading
Large sections can be explored block-by-block. Use get_section_blocks to see headings, then get_section_block_content to load only relevant parts.
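The two-step loading idea can be sketched locally. This is a simplified stand-in for what `get_section_blocks` and `get_section_block_content` do, not their actual implementation:

```javascript
// List only the "## " headings of a markdown section: cheap to scan.
function listBlocks(markdown) {
  return markdown
    .split("\n")
    .filter((line) => line.startsWith("## "))
    .map((line) => line.slice(3).trim());
}

// Load the body of a single named block, on demand.
function getBlockContent(markdown, heading) {
  const lines = markdown.split("\n");
  const start = lines.indexOf("## " + heading);
  if (start === -1) return "";
  const rest = lines.slice(start + 1);
  const end = rest.findIndex((line) => line.startsWith("## "));
  return rest.slice(0, end === -1 ? rest.length : end).join("\n").trim();
}

const section = "## Setup\nRun bun install.\n## Testing\nRun bun test before handoff.";
console.log(listBlocks(section));                 // headings only
console.log(getBlockContent(section, "Testing")); // one block's body
```

The agent pays for the full section only when it actually needs a given block's content.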
Managing sections
Use MCP tools to create and update sections. Each section can target specific modules with appliesTo and include tags for discoverability.
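For example, a section scoped to an API module might be created like this. The call mirrors the snippet later in this guide; the exact argument shapes are assumptions, with `appliesTo` and `tags` being the fields named above:

```javascript
update_context_section({
  section: "api/architecture",
  content: "# API architecture\n\nRequest flow, service boundaries, error handling.",
  appliesTo: ["src/api/**"],        // modules this section targets (illustrative glob)
  tags: ["backend", "architecture"] // tags for discoverability
})
```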
The anatomy of a task that produces great code.
- Title: Start with an action verb matching the task type: "Add ..." for features, "Fix ..." for bugs, "Improve ..." for improvements. A good title tells you what the PR will say.
- Description: Explain the user problem, the expected behavior, and any constraints. Don't prescribe implementation details here; save that for the plan.
- Plan: Break the work into numbered steps. Reference specific files, functions, or patterns. The more concrete the plan, the less the agent guesses.
- Subtasks: For features touching multiple modules, create subtasks. Agents check them off as they go, giving you visibility into progress.
- Labels and effort: Tag tasks with labels (backend, frontend, database) and estimate effort (xs, s, m, l, xl). This helps with planning and filtering.
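Put together, a well-specified task might look like this (the field names and values are illustrative, not Damper's exact schema):

```json
{
  "title": "Add rate limiting to the public API",
  "description": "Clients can currently hammer /search without limits. Expected: a 429 with a Retry-After header once a client exceeds its quota. Constraint: no new infrastructure.",
  "plan": [
    "1. Add a token-bucket limiter in src/api/middleware",
    "2. Wire it into the /search route",
    "3. Return 429 with Retry-After when the bucket is empty"
  ],
  "subtasks": ["Limiter middleware", "Route wiring", "Tests"],
  "labels": ["backend"],
  "effort": "m"
}
```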
Two mechanisms to prevent mistakes and verify quality: critical rules shown at task start, and completion checklists verified at task end.
Critical rules are added to context sections via the criticalRules field. They are surfaced automatically when an agent starts a task, so agents can't miss them. Use them for patterns that cause real problems when skipped.
update_context_section({
  section: "testing",
  content: "# Test workflow\n\nRun bun test before handoff",
  criticalRules: ["Run bun test before handoff"]
})

The Complete Workflow
Five steps from zero to shipping code with AI agents.