Commit b674242 (1 parent: 1799f8e)

ai(claude[commands]) Add /tdd-fix

why: Provide structured TDD bug-fix workflow for disciplined test-first debugging
what:
- Add .claude/commands/tdd-fix.md with 6-phase TDD workflow
- Phases: understand → write failing xfail test → find root cause → fix → remove xfail → recovery loop

1 file changed: .claude/commands/tdd-fix.md (265 additions, 0 deletions)
---
description: TDD bug-fix workflow -- reproduce a bug as a failing test, find root cause, fix, and verify
argument-hint: Paste or describe the bug to reproduce and fix
---

# TDD Bug-Fix Workflow

You are an expert test engineer performing a disciplined TDD bug-fix loop on this project. Follow this workflow precisely for every bug.

Initial bug report: $ARGUMENTS

---
## Phase 1: Understand the Bug

**Goal**: Parse the bug report into a testable reproduction scenario.

**Actions**:
1. Create a todo list tracking all phases
2. Read the bug report and identify:
   - **Symptom**: What the user observes (error message, wrong output, silent failure)
   - **Expected behavior**: What should happen instead
   - **Trigger conditions**: What inputs, configuration, or state reproduce it
   - **Affected component**: Which module/function is involved
3. Use Explore agents to find the relevant source code and existing tests:
   - The test file that covers this area
   - The source file with the suspected bug
   - Any existing fixtures that can help reproduce the scenario
4. Read the identified files to understand current behavior
5. Summarize your understanding of the bug and confirm it with the user before proceeding
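The four fields from step 2 can be captured as a small structured record before any test is written. A minimal sketch -- the `BugReport` type and the slugify example are illustrative, not this project's code:

```python
import typing as t


class BugReport(t.NamedTuple):
    """Parsed reproduction scenario from a raw bug report (illustrative)."""

    symptom: str    # what the user observes
    expected: str   # what should happen instead
    trigger: str    # inputs/config/state that reproduce it
    component: str  # suspected module or function


report = BugReport(
    symptom="slugify('A\u2013B') returns 'a\u2013b'",
    expected="en dash normalized to '-'",
    trigger="any input containing U+2013",
    component="utils.slugify",
)
```

Writing the summary in this shape makes the Phase 2 test almost mechanical: the test name comes from `component`, the docstring from `symptom`, and the assertion from `expected`.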
---
## Phase 2: Write a Failing Test (xfail)

**Goal**: Create a test that reproduces the bug and is expected to fail.

**CRITICAL RULES** (from the project's CLAUDE.md):
- Write **functional tests only** -- standalone `test_*` functions, NOT classes
- Use `typing.NamedTuple` for parameterized tests when appropriate
- Use `from __future__ import annotations` at the top of the file
- Use the `import typing as t` namespace style for stdlib typing
- Leverage project-specific fixtures from conftest.py
- Document every mock with comments explaining WHAT is being mocked and WHY

**Actions**:
1. Identify which test file to add the test to
2. Study existing test patterns in that file (parameter fixtures, assertion styles, imports)
3. Write a test function that:
   - Has a descriptive name: `test_<component>_<bug_description>`
   - Has a docstring explaining the bug scenario
   - Uses existing fixtures wherever possible
   - Is decorated with `@pytest.mark.xfail(strict=True)` so it is expected to fail
   - Asserts the **correct** (expected) behavior, not the buggy behavior
4. Run the test to confirm it fails as expected:
   ```
   uv run pytest <test_file>::<test_name> -xvs
   ```
5. Run the full test suite to ensure no other tests broke:
   ```
   uv run pytest <test_file> -q
   ```
6. Run linting and type checks:
   ```
   uv run ruff format .
   uv run ruff check . --fix --show-fixes
   uv run mypy
   ```
7. **Commit the failing test** using the project commit style:
   ```
   tests(feat[<test_file>]) Add xfail test for <bug description>

   why: <Why this test is needed -- what behavior is broken>
   what:
   - Add test_<name> with @pytest.mark.xfail(strict=True)
   - <What the test sets up and asserts>
   ```
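As a sketch of what such a reproduction test can look like -- all names here (`slugify`, `SlugCase`, the en-dash bug) are hypothetical illustrations, and in a real fix the buggy function would be imported from the project rather than defined inline:

```python
from __future__ import annotations

import typing as t

import pytest


def slugify(text: str) -> str:
    # Stand-in with the suspected bug: en dashes are never normalized.
    return text.lower().replace(" ", "-")


class SlugCase(t.NamedTuple):
    """One parameterized scenario: raw input and the CORRECT expected output."""

    raw: str
    expected: str


@pytest.mark.xfail(strict=True)  # expected to fail until the bug is fixed
@pytest.mark.parametrize("case", [SlugCase("A\u2013B", "a-b")])
def test_slugify_normalizes_en_dash(case: SlugCase) -> None:
    """Bug: en dashes survive slugification instead of becoming '-'."""
    assert slugify(case.raw) == case.expected
```

Note that the assertion encodes the *expected* behavior; the `strict=True` xfail is what records that the code currently gets it wrong.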
---
## Phase 3: Find the Root Cause

**Goal**: Trace from the symptom to the exact code that needs to change.

**Actions**:
1. Read the source code path exercised by the test
2. Add temporary debug logging if needed (but track it for cleanup)
3. Identify the root cause -- the specific line(s) or logic gap
4. If the bug spans multiple packages (this project + a dependency):
   - Note which package each change belongs to
   - Check that the dependency is installed as editable from local source:
     ```
     uv run python -c "import <dep>; print(<dep>.__file__)"
     ```
   - If it points to `.venv/lib/.../site-packages/`, the editable install is stale -- fix it:
     ```
     # In pyproject.toml [tool.uv.sources]:
     # Temporarily use: <dep> = { path = "/path/to/<dep>", editable = true }
     # Then: uv lock && uv sync
     ```
5. Document the root cause clearly
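The editable-install check above can also be scripted. A minimal sketch -- the helper name `is_imported_from` is my own, not project code:

```python
# Heuristic check: does an imported module resolve to a file under a given
# directory (e.g. your local source checkout), or to a stale copy elsewhere
# such as site-packages?
import importlib
import pathlib


def is_imported_from(module_name: str, root: str) -> bool:
    """True if `module_name` is loaded from a file under `root`."""
    mod = importlib.import_module(module_name)
    mod_file = getattr(mod, "__file__", None)
    if mod_file is None:  # built-in or namespace module: no file to compare
        return False
    return pathlib.Path(mod_file).resolve().is_relative_to(
        pathlib.Path(root).resolve()
    )
```

If this returns False for `<dep>` against your local checkout path, the fix you are editing is not the code the tests actually run.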
---
## Phase 4: Fix the Bug

**Goal**: Apply the minimal fix that makes the test pass.

**Principles**:
- Minimal change -- only fix what's broken
- Don't refactor surrounding code
- Don't add features beyond the fix
- Follow existing code patterns and style
- Use the `import typing as t` namespace style
- Use NumPy docstring style if adding or modifying docstrings

**Actions**:
1. Apply the fix to the source code
2. Remove any debug instrumentation added in Phase 3
3. Run the previously failing test (the xfail decorator is still in place):
   ```
   uv run pytest <test_file>::<test_name> -xvs
   ```
   - If the fix works, the run now reports XPASS (unexpected pass); with `strict=True`, pytest counts that as a failure -- this is the expected signal at this stage!
4. Run quality checks:
   ```
   uv run ruff format .
   uv run ruff check . --fix --show-fixes
   uv run mypy
   ```
5. **Commit the fix** using the project commit style:
   ```
   <component>(fix[<subcomponent>]) <Concise description>

   why: <Root cause explanation>
   what:
   - <Specific technical changes>
   ```
---
## Phase 5: Remove xfail and Verify

**Goal**: Confirm the fix works and the test is a proper regression test.

**Actions**:
1. Remove `@pytest.mark.xfail(strict=True)` from the test
2. Update the test docstring to describe it as a regression test (not a bug report)
3. Run the test -- it MUST pass:
   ```
   uv run pytest <test_file>::<test_name> -xvs
   ```
4. Run the FULL test suite:
   ```
   uv run pytest <test_file> -q
   ```
5. Run all quality checks:
   ```
   uv run ruff format .
   uv run ruff check . --fix --show-fixes
   uv run mypy
   ```
6. If ALL checks pass, **commit**:
   ```
   tests(fix[<test_file>]) Remove xfail from <test_name>

   why: Fix verified -- <brief description of what was fixed>
   what:
   - Remove @pytest.mark.xfail(strict=True)
   - Update docstring to describe as regression test
   ```
---
## Phase 6: Recovery Loop (if the fix doesn't work)

**Goal**: If the test still fails after the fix, diagnose why.

**Decision tree**:

### A. Is the reproduction genuine?
1. Read the test carefully -- does it actually reproduce the reported bug?
2. Run the test with `-xvs` and examine the output
3. If the test is testing the wrong thing:
   - Go back to **Phase 2** and rewrite the test
   - Recommit with the amended failing test

### B. Is the fix correct?
1. Add debug logging to trace execution through the fix
2. Check whether the fix is actually being executed (stale installs are common with uv):
   ```
   uv run python -c "import <module>; import inspect; print(inspect.getsource(<function>))"
   ```
3. If the installed code doesn't match the source:
   - Check `uv.lock` and `[tool.uv.sources]` in pyproject.toml
   - Force a reinstall: `uv pip install -e /path/to/dependency`
   - Verify that the `__file__` path points to the source directory
4. If the fix is wrong:
   - Revert the fix
   - Go back to **Phase 3** and re-analyze the root cause
   - Apply a new fix in **Phase 4**

### C. Loop limit
- After 3 failed fix attempts, stop and present findings to the user:
  - What was tried
  - What the debug output shows
  - What the suspected issue is
- Ask for guidance
---
## Cross-Dependency Workflow

When the bug involves both this project and a dependency:

1. **Dependency changes first**: Fix the underlying library
2. **Commit in the dependency**: Use the same commit style
3. **Verify the dependency's tests**:
   ```
   cd /path/to/<dep> && uv run pytest tests/ -q
   ```
4. **Update this project's dependency reference**: Ensure this project uses the fixed dependency
5. **Then fix and test in this project**

**IMPORTANT**: `uv run` enforces the lockfile. If `[tool.uv.sources]` points to a git remote, `uv run` will overwrite any local `uv pip install -e`. To develop across repos simultaneously, temporarily change `[tool.uv.sources]` to a local path.
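Concretely, the temporary override in pyproject.toml can look like this sketch -- `mydep` and both locations are placeholders, not this project's actual dependency:

```toml
[tool.uv.sources]
# Temporary, for local cross-repo development -- revert before release:
mydep = { path = "../mydep", editable = true }
# Normal setting, restored once both fixes have landed:
# mydep = { git = "https://github.com/example/mydep" }
```

Run `uv lock && uv sync` after switching either way so the lockfile matches the sources table.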
---
## Quality Gates (every commit must pass)

Before EVERY commit, run this checklist:
```
uv run ruff format .
uv run ruff check . --fix --show-fixes
uv run mypy
uv run pytest <test_file> -q
```

ALL must pass. A commit with failing tests or lint errors is not acceptable.
---
## Commit Message Format

```
Component/File(commit-type[Subcomponent/method]) Concise description

why: Explanation of necessity or impact.
what:
- Specific technical changes made
- Focused on a single topic
```

Use a HEREDOC for multi-line messages:
```bash
git commit -m "$(cat <<'EOF'
Component(type[sub]) Description

why: Reason
what:
- Change 1
- Change 2
EOF
)"
```
