Claude Retro Suggestions

  • Before implementing features requiring data queries, always ask 'what does the UI actually display with this data?' and verify the data source can provide it, rather than building the feature and discovering mid-implementation that the data structure doesn't support it.
  • When debugging backend data bugs, check the data source (database query, API response, log output) directly with a quick query FIRST before investigating application code.
  • When the user says 'do it', 'continue', 'just finish it', or 'DO EVERYTHING', work autonomously without asking for clarification or pausing for approval. Only return when blocked or when asking would save significant wasted work.
  • Before proposing large rewrites (UI redesigns, architectural changes, multi-file refactors), spend the first 1-2 turns validating the core problem with the user: 'What are the 2-3 specific things that need to change?' rather than proposing comprehensive redesigns.
  • When setting up external tool integrations (MCP, Jira, credentials) in the middle of a session, immediately ask the user for configuration details rather than attempting multiple failed calls. A 2-minute credential clarification beats 60 turns of failed attempts.
  • After implementing data-flow features (especially those involving LLM calls or schema changes), always run end-to-end integration tests with actual data before declaring success. Don't rely on code inspection — verify the data actually gets written to tables, and check for silent failures like guard conditions (if count > 0) that skip execution without error messages.
  • When a user says 'make it look like X', ask in the first turn: 'Are we optimizing visuals only, or also the underlying data/content quality?' If the report/output matters more than the UI, prioritize that analysis first.
  • When implementing data queries that will be exposed through multiple API endpoints, define a shared filtering utility (e.g., apply_standard_session_filters()) to prevent count and visibility discrepancies; a sketch follows this list. Verify filter consistency across endpoints before declaring features complete.
  • Before implementing features requiring data queries or API calls, always run a quick query/API call first to confirm the data structure matches what the UI/output will display, even if it seems obvious. Don't discover mid-implementation that the API returns different field names or shapes.
  • When the user says 'DO EVERYTHING', 'just finish it', 'continue', or 'work autonomously', do not pause to ask clarifying questions unless truly blocked. Instead, proceed with best judgment and include a checkpoint sentence like 'I'm proceeding autonomously; I'll run [VERIFICATION STEP] and circle back if blocked.' Re-read the conversation for tone signals before asking.
  • When investigating backend failures (data not appearing, API returning wrong shapes, queries failing), start by directly querying the data source (run a test query, check database rows, inspect API response) before investigating application code. A 2-minute data check often reveals the root cause faster than 20 turns of code review.
  • After implementing any feature that involves LLM calls, database writes, or API integrations, run a complete end-to-end test with real data (not code inspection): verify rows appear in the actual database table, API responses contain expected fields, and the UI displays the data. Do not rely on code review or mock tests to confirm data flow; a row-count check is sketched after this list.
  • When setting up integrations with external tools (MCP servers, Jira APIs, credentials, GitHub actions), immediately ask the user for all required configuration details (cloud IDs, API keys, service URLs) before attempting any tool calls. A 2-minute credential clarification beats 60 turns of failed attempts.
  • When making sweeping code changes that span multiple modules (e.g., DuckDB→SQLite conversion, changing get_conn to get_writer, cursor vs connection attribute access), identify the pattern once, then grep for all instances and fix them all at once before declaring the change complete. Don't wait for the user to report failures in the module you didn't fix yet.
  • When refactoring LLM integrations to use external services like claude-relay, verify the service is running with the expected protocol support before testing code against it. Ping the target endpoint (e.g., curl http://localhost:8082/v1/messages) in a separate shell to confirm version compatibility rather than discovering mismatches through failed requests from your implementation; a reachability check is sketched after this list.
  • When a user asks 'did you test/verify this?', always run the complete test suite or validation before claiming success. If tests are part of the original request, run them end-to-end and show output. Don't report 'everything works' without explicit proof of passing test execution.
  • Before committing changes, always run git diff --cached to verify the staged diff is exactly what you intend (not 422 lines when the fix is ~4 lines). If the diff is suspiciously large or contains unrelated changes, unstage, investigate, and recommit with only the intended changes; a pre-commit guard is sketched after this list.
  • Before provisioning services, installing heavy dependencies, or deploying infrastructure (Neo4j, PostHog, Daytona), check available system resources first: df -h, free -m, and whether the service is already configured. If resource requirements exceed available space/memory, ask the user before proceeding.
  • When a user provides documentation/config (AGENTS.md, CLAUDE.md, README) without an explicit task statement, ask one clarifying question: 'What should I build/fix/test/verify?' Don't infer work or propose alternatives. Confirm the actual deliverable in one turn before starting.
  • When a user explicitly specifies a tool, method, or constraint ('use the relay', 'use tools not guesses', 'reply with exactly X', 'implement in parallel'), treat it as a hard requirement, not a preference. Don't ask if they meant something else or suggest alternatives—just execute it immediately.
  • Before force-pushing a rewritten repository, output a sample of commits from the main branch showing the new email address to confirm the rewrite succeeded before proceeding with the push.
  • When tests are part of the request, don't report completion until you've finished and validated all requested tests. Unit tests and end-to-end tests are different verification steps—distinguish between them.
  • When a user asks you to test or validate something, run FULL test suites (not selective tests) and report: 1) total test count, 2) pass rate %, 3) any failures with error messages. Do NOT declare completion until all tests pass; a suite-report sketch follows this list.
  • Before deploying any service, database, or large tooling (PostHog, Neo4j, language runtimes), run df -h (disk available), free -h (RAM available), and check against service docs; a preflight sketch follows this list. Abort deployment if resources are insufficient and ask for guidance.
  • Before staging any git commit, run git diff --cached and review the full diff line-by-line. If the diff includes unrelated code, unstage it immediately and ask for clarification. Do NOT commit without human approval of the staged changes.
  • When an exploratory conversation reaches 3+ turns without a stated deliverable, ask explicitly: 'Are we (a) researching for insights, (b) designing a solution, or (c) shipping working code? Give me a one-sentence answer and we lock to that direction.'
  • When an API call fails with 503 (capacity/overload), do not retry. Instead ask: 'Do you have local data, log files, or alternative data sources we can use instead of this API?' Pivot from infrastructure dependency to available resources immediately.
  • When a user specifies exact output format ('reply with just X', 'respond with only the integer', 'output exactly: done'), deliver ONLY that format without preamble, explanation, or narration turns. Check for constraint violations before responding.
  • Before committing any changes, run git diff --cached to verify the staged diff contains ONLY the intended changes. If unrelated files are staged, unstage them with git restore --staged [file] and commit separately.
  • When a user explicitly says 'implement immediately', 'implement everything', 'start building', or provides a detailed PRD with explicit approval to proceed, begin implementation directly without asking for clarification, planning phases, or debating alternatives. Execute first, iterate based on feedback.
  • When a user reports a specific system issue (cronjob failures, recursive loops, API errors, deployment problems), attempt direct investigation (SSH, log inspection, code reading, command execution) before asking for clarification. Only ask if genuinely unable to probe the system.
  • For data pipelines, spatial visualizations, and ML models, require explicit pytest assertions validating outputs BEFORE claiming completion. Do not accept 'looks correct' as sufficient validation. Add tests alongside implementation; an example assertion is sketched after this list.
  • When a user explicitly instructs a specific tool or approach (e.g., 'use the relay', 'use SSH', 'use real data', 'check this API'), treat this as a hard requirement, not a suggestion. Do not substitute, skip, or explore alternatives without asking first.
  • When you encounter explicit method constraints ('use tools', 'don't guess', 'use the relay', 'use real data'), treat them as non-negotiable and state them back to the user before proceeding. Never provide answers without following the stated method, even if you have the answer.
  • Before deploying or installing any service, infrastructure component, or heavy dependency, verify available resources: free disk space, available memory, and system constraints. Report findings before proceeding.
  • When claiming work is complete or 'everything works', you must first: 1) Run all requested tests, 2) Show test output or command execution results, 3) Verify the specific requirement from the user's original request is satisfied. Do not proceed until all three are documented.
  • Before committing, always run git diff --cached to verify the staged diff contains only the intended changes. If the diff looks larger than expected, investigate what's staged and unstage unrelated files. Do not commit if diff exceeds ~50 lines without explicit user approval.
  • When a user reports an issue on a specific named remote system (e.g., 'cronjob issues on hetzner-recon'), attempt SSH investigation or direct inspection first rather than asking for clarification. Probe the actual system state before requesting more details.
  • When implementing from detailed specifications, execute only the stated deliverables. Don't proactively fetch related guidance, patterns, or standards unless explicitly asked. After implementation is verified, ask 'What next?' rather than assuming follow-up tasks.
  • When test payloads, encoding, or fixtures are created, validate them against exact specifications before executing. Show the validation result (character count, encoding format, payload structure) and get confirmation before proceeding.
  • Before implementing architectural changes (deployment method, dependency structure, file organization, build system), explicitly present the change and confirm it with the user. Don't let architectural shifts happen silently mid-project.
  • For document analysis, strategy reviews, or comprehensive assessments, deliver the full findings, summary, and recommendations in a single response. Don't defer sections or require multiple turns to extract complete analysis.
  • When implementing test fixtures that switch application modes (strict/audit mode, different configurations), use dependency injection or configuration parameters rather than module reloading; module reloads are fragile and slow. A fixture sketch follows this list.
  • Before implementing any integration or feature that uses external data (APIs, databases, web scraping), ask in one turn: 'Should I use [REAL DATA/PRODUCTION] or [DEMO DATA/SAMPLE]?' and 'What is the exact data source or endpoint?' Do not assume.
  • When a user says 'use tools' or asks for URLs/versions/citations/official documentation, always invoke WebSearch, WebFetch, or equivalent tool. Guesses are not acceptable, even if confident.
  • When a fix or integration is claimed complete, always: (1) Show the actual deployed output or system behavior, (2) Verify it against the original problem statement. Do not say 'done' until the user has confirmed it against deployed evidence.
  • When debugging a deployed service or infrastructure issue, always SSH into the actual environment and inspect configuration/logs there first. Do not assume repo configuration matches deployment.
  • When Docker builds produce suspiciously small binaries, build failures are unclear, or dependency installations enter retry loops, do not continue rebuilding. Test the build in an isolated temporary directory, verify dependencies, and compare against known working builds.
  • Before rewriting git history or performing destructive git operations, confirm with the user whether force-push to remote will be needed and which branches/remotes are affected, unless the user has already explicitly authorized these operations.
  • When a user reports a data quality or categorization issue (e.g., 'Other' dominating, categories inflated), trace all backend endpoints and classifiers that produce that data before implementing UI-side filtering; fix at the source, not the display layer. Ask only if both approaches are genuinely viable.
  • When implementing a multi-location feature (agent filtering, pronoun replacement, schema changes), use grep systematically to find ALL occurrences before reporting completion; a checklist sketch follows this list. Create a checklist of all files/queries that need the change and verify each one. If you find a bug in one place (e.g., behavioral_correlations missing an agent filter), immediately search for the same pattern elsewhere rather than waiting for the user to report it.
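
Sketches referenced above

A minimal sketch of the shared session-filter idea, assuming raw-SQL query strings. apply_standard_session_filters comes from the bullet itself; the table and column names are hypothetical.

```python
# One canonical visibility filter, applied by every endpoint that counts
# or lists sessions, so counts and lists can never drift apart.
STANDARD_SESSION_FILTERS = "deleted_at IS NULL AND status != 'hidden'"

def apply_standard_session_filters(extra_where: str = "") -> str:
    """Combine endpoint-specific conditions with the canonical filters."""
    extra = f" AND ({extra_where})" if extra_where else ""
    return f"WHERE {STANDARD_SESSION_FILTERS}{extra}"

# Both endpoints go through the same helper, so the dashboard count and
# the session list can never disagree about which rows are visible.
count_sql = f"SELECT COUNT(*) FROM sessions {apply_standard_session_filters()}"
list_sql = (
    "SELECT id, started_at FROM sessions "
    f"{apply_standard_session_filters('agent = :agent')} "
    "ORDER BY started_at DESC"
)
```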
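
For the end-to-end verification bullets: a sketch of checking that rows actually landed, assuming SQLite; the database path and table name are hypothetical.

```python
import sqlite3

def verify_rows_written(db_path: str, table: str, min_rows: int = 1) -> None:
    """Fail loudly if the pipeline silently wrote nothing."""
    conn = sqlite3.connect(db_path)
    try:
        (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    finally:
        conn.close()
    # Upstream guards like `if count > 0:` can skip work without any error;
    # this assertion surfaces that silent failure immediately.
    assert count >= min_rows, f"{table}: expected >= {min_rows} rows, got {count}"

verify_rows_written("app.db", "session_summaries")  # hypothetical names
```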
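
For the claude-relay bullet: a reachability check equivalent to the curl ping it describes, using only the standard library; the URL is the one named in the bullet.

```python
import urllib.error
import urllib.request

URL = "http://localhost:8082/v1/messages"  # endpoint named in the bullet

try:
    urllib.request.urlopen(URL, timeout=5)
except urllib.error.HTTPError as e:
    # Any HTTP response (even 405 or 401) proves something is listening
    # and speaking HTTP at this path.
    print(f"relay reachable, returned HTTP {e.code}")
except OSError as e:
    raise SystemExit(f"relay not reachable at {URL}: {e}")
else:
    print("relay reachable")
```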
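
For the git diff --cached bullets: a pre-commit guard sketch. The 50-line ceiling is the threshold suggested above, not a universal rule.

```python
import subprocess

def staged_diff_lines() -> int:
    """Count the lines in the currently staged diff."""
    diff = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    )
    return len(diff.stdout.splitlines())

lines = staged_diff_lines()
if lines > 50:  # threshold from the suggestion above; tune per repo
    raise SystemExit(
        f"staged diff is {lines} lines; review it, unstage unrelated files "
        "with `git restore --staged <file>`, then recommit"
    )
```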
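
For the full-suite bullets: a sketch that runs the whole suite and reports the three numbers asked for. Parsing pytest's -q summary text is a convenience, not a stable API.

```python
import re
import subprocess

run = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
out = run.stdout + run.stderr
passed = sum(int(n) for n in re.findall(r"(\d+) passed", out))
failed = sum(int(n) for n in re.findall(r"(\d+) failed", out))
total = passed + failed
rate = f"{passed / total:.0%}" if total else "n/a"
print(f"total tests: {total}, pass rate: {rate}, failures: {failed}")
if run.returncode != 0:
    print(out)  # show the failures with their error messages
    raise SystemExit("tests failing; do not declare completion")
```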
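
For the resource-preflight bullets: a stdlib-only check mirroring df -h and free -h. The thresholds are placeholders, and the memory probe is Linux-only.

```python
import os
import shutil

def preflight(min_disk_gb: float, min_ram_gb: float, path: str = "/") -> None:
    free_gb = shutil.disk_usage(path).free / 1e9
    # Linux-only; other platforms need a library such as psutil.
    ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_AVPHYS_PAGES") / 1e9
    print(f"free disk: {free_gb:.1f} GB, available RAM: {ram_gb:.1f} GB")
    if free_gb < min_disk_gb or ram_gb < min_ram_gb:
        raise SystemExit("insufficient resources; ask the user before proceeding")

preflight(min_disk_gb=10, min_ram_gb=4)  # placeholder requirements
```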
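
For the pipeline-assertion bullet: an example pytest test over spatial output. run_pipeline, its module, and the column names are all hypothetical.

```python
from my_pipeline import run_pipeline  # hypothetical module under test

def test_pipeline_produces_valid_coordinates():
    rows = run_pipeline("tests/fixtures/sample_input.csv")
    assert rows, "pipeline produced no output"
    for row in rows:
        # Explicit range checks instead of 'looks correct' eyeballing.
        assert -90 <= row["lat"] <= 90
        assert -180 <= row["lon"] <= 180
```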
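
For the mode-switching fixture bullet: injecting configuration instead of reloading modules. All names here are hypothetical stand-ins for the real app.

```python
from dataclasses import dataclass

import pytest

@dataclass
class AppConfig:
    mode: str = "strict"  # or "audit"

def make_app(config: AppConfig) -> dict:
    return {"config": config}  # stand-in for real app construction

@pytest.fixture(params=["strict", "audit"])
def app(request):
    # No importlib.reload: each parametrized run gets a fresh, isolated app.
    return make_app(AppConfig(mode=request.param))

def test_mode_is_injected(app):
    assert app["config"].mode in {"strict", "audit"}
```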
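
For the multi-location and sweeping-change bullets: building the occurrence checklist with grep. get_conn is the example pattern from the sweeping-change bullet above.

```python
import subprocess

def files_still_matching(pattern: str) -> list[str]:
    """List every Python file that still contains the old pattern."""
    grep = subprocess.run(
        ["grep", "-rl", "--include=*.py", pattern, "."],
        capture_output=True, text=True,
    )
    return [line for line in grep.stdout.splitlines() if line]

leftovers = files_still_matching("get_conn")
assert not leftovers, f"still unconverted: {leftovers}"
```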