Linting Fixes:
1. app/__tests__/layout.test.tsx:34 - Changed `any` to `Record<string, unknown>` for Script props
2. app/dashboard/__tests__/page.test.tsx:9,24,34 - Added proper type definitions for mock components
3. components/ContainerCard.tsx:5 - Removed unused `Container` import
4. e2e/dashboard.spec.ts:71 - Removed unused catch variable `e`
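Fixes 1 and 4 follow general TypeScript lint patterns; a minimal sketch with invented values (not the actual test code):

```typescript
// Pattern for fix 1: replace `any` with `Record<string, unknown>` so the
// props object stays indexable without disabling type checking.
// The prop names here are illustrative.
const scriptProps: Record<string, unknown> = {
  src: '/analytics.js',
  strategy: 'afterInteractive',
};

// Pattern for fix 4: an unused catch binding can be omitted entirely
// (ES2019+ optional catch binding), which satisfies unused-variable rules.
function safeParse(json: string): unknown {
  try {
    return JSON.parse(json);
  } catch {
    // previously `catch (e)` with `e` never used
    return null;
  }
}

console.log(typeof scriptProps.src, safeParse('not json')); // → string null
```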
CLAUDE.md Updates:
- Added linting as Step 2 in all testing workflow options
- Updated Critical Requirements to include linting
- Added linting to Common Mistakes section
- Added linting commands to Development Commands
- Updated Summary workflow to include linting as step 3
- Updated Acceptance Criteria to require zero linting errors
All checks now pass:
✅ Linting: Zero new errors (only pre-existing issues remain in other files)
✅ Unit tests: 282/282 passing (100%)
✅ Build: Successful with zero errors
✅ E2E tests: 11/11 passing (100%)
This ensures AI assistants run linting before every commit and fix
any linting issues introduced by their changes.
https://claude.ai/code/session_01T57NPQfoRb2fS7ihdWkTxq
Added strict requirements that were missing:
1. Critical Testing Requirements section (NEW):
- MUST keep working until ALL tests pass
- Do NOT commit if ANY test fails
- Do NOT commit if build fails
- Do NOT commit if coverage drops
- Keep iterating until 100% success
2. Updated "Keep working until ALL tests pass" section:
- Clear action items for each failure type
- Do NOT commit partial fixes
- ONLY commit when FULL suite passes (282/282 unit, 11/11 e2e)
- Your responsibility: achieve 100% test success
3. Updated Common Mistakes:
- Committing when ANY test fails
- Committing to "fix it later"
- Stopping at 9/11 e2e tests (need 11/11!)
- Thinking failures are "acceptable"
4. Updated Summary Workflow:
- Clear pass/fail criteria for each step
- Added "Fix failures" and "Iterate" steps
- Moved "Commit" to step 8 (after iteration)
- Added Acceptance Criteria checklist
- No exceptions clause
This removes all ambiguity. AI assistants MUST keep working until:
✅ 282/282 unit tests passing
✅ Build succeeds
✅ 11/11 e2e tests passing
✅ No coverage regression
No more "9/11 tests pass, good enough!" - that's unacceptable.
https://claude.ai/code/session_01T57NPQfoRb2fS7ihdWkTxq
Previous version was misleading about e2e tests:
- Said "read e2e files" but didn't explain how to RUN them
- Didn't mention that e2e tests DO run in Docker (Dockerfile line 55)
- Didn't show how to run e2e tests locally
- Didn't clarify that e2e tests show failures in CI even if non-blocking
Now provides 3 clear options:
Option A (preferred): Run full Docker build
- Runs both unit and e2e tests
- Matches CI environment exactly
Option B: Local testing with e2e
- Shows how to start the app (npm run dev)
- Shows how to run e2e tests in separate terminal
- Requires Playwright browser installation
Option C: Minimum verification (fallback)
- Run unit tests (required)
- Run build (required)
- Manually verify changes match e2e expectations
- Lists specific things to check (button text, labels, etc.)
This makes it clear that you cannot just "read" e2e files and call it done.
You must either RUN the e2e tests or verify the changes manually with great care.
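Option C's "verify changes match e2e expectations" step could be sketched as a small helper; the file contents and string literals below are hypothetical, not the repo's actual ones:

```typescript
// Given component source text and the literals the e2e specs assert on,
// report any expected literal the source no longer contains.
function findMissingLiterals(source: string, expected: string[]): string[] {
  return expected.filter((literal) => !source.includes(literal));
}

// Illustrative usage: the e2e suite expects a "Sign In" button, but the
// component was changed to render "Access Dashboard" instead.
const componentSource = '<button type="submit">Access Dashboard</button>';
const missing = findMissingLiterals(componentSource, ['Sign In']);
console.log(missing); // → [ 'Sign In' ] — a mismatch the e2e run would catch
```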
https://claude.ai/code/session_01T57NPQfoRb2fS7ihdWkTxq
The previous version was incomplete and misleading:
- Said to run `docker build --target test` which only runs unit tests
- Didn't explain what to do when Docker isn't available
- Didn't mention that CI runs BOTH unit AND e2e tests
Now includes:
- Correct Docker command to run full build (unit + e2e tests)
- Fallback workflow when Docker isn't available (npm test + build)
- Explicit requirement to read e2e test files to verify expectations
- Clearer step-by-step process
This ensures AI assistants actually verify their changes properly before
committing, not just run unit tests and assume everything works.
https://claude.ai/code/session_01T57NPQfoRb2fS7ihdWkTxq
This document establishes critical workflow rules to prevent untested code
from being committed. Key requirements:
- Read test files before making changes to understand expectations
- Verify all changes match what tests expect (button text, labels, etc.)
- Run tests via Docker build before committing
- Never commit code that hasn't been verified to work
This should prevent issues like button text mismatches where the component
says "Access Dashboard" but tests expect "Sign In".
https://claude.ai/code/session_01T57NPQfoRb2fS7ihdWkTxq