feat: Adapt CodeFire-app ideas for enhanced session logging and project intelligence #9

@sinh-x

Description

Inspiration

CodeFire is an open-source (MIT) desktop companion that gives AI coding CLIs persistent memory via MCP. Several of its patterns would significantly improve ai-usage-log's session tracking and project intelligence capabilities.

What CodeFire Does Well

| Feature | How It Works |
| --- | --- |
| Direct `.jsonl` parsing | Reads Claude Code session files line by line, extracting model, tokens (in/out/cache), tool counts, git branch, timestamps |
| Auto project discovery | Scans `~/.claude/projects/`, decodes path-encoded folder names back to real filesystem paths |
| Live session monitoring | Watches active sessions: context window %, token burn rate, cost estimates |
| Session → task linking | Sessions reference tasks worked on; tasks reference originating sessions |
| Cost tracking | Per-session and aggregate cost estimates by model pricing |
| Codebase snapshots | Point-in-time file tree capture per session |
| Daily briefings | AI-generated digest summarizing activity across all projects |

Proposed Adaptations

Phase 1 — Direct .jsonl Parsing (High Priority)

Currently, `read_claude_sessions` reads JSONL files but could extract far more structured data. Adapt CodeFire's parsing approach:

  • Extract model ID per session (for cost calculation)
  • Extract token counts (input, output, cache_creation, cache_read) per turn
  • Calculate estimated cost using model pricing tables
  • Extract git branch active during the session
  • Extract context window utilization (% of model limit)
  • Parse tool call results for success/failure status

This overlaps with #8 but focuses specifically on the .jsonl parsing layer rather than stats aggregation.
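A minimal sketch of the per-session extraction, assuming the `message.usage` and `gitBranch` field names observed in current Claude Code session files (these are undocumented and may change between versions):

```python
import json
from pathlib import Path

def parse_session(path: Path) -> dict:
    """Fold per-turn token usage from one Claude Code .jsonl session file.

    Field names (message.usage.*, gitBranch) are assumptions based on
    observed session files, not a documented schema.
    """
    totals = {"input": 0, "output": 0, "cache_creation": 0, "cache_read": 0}
    model, branch = None, None
    for line in path.read_text().splitlines():
        if not line.strip():
            continue
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines rather than abort the whole file
        branch = entry.get("gitBranch") or branch
        msg = entry.get("message") or {}
        model = msg.get("model") or model
        usage = msg.get("usage") or {}
        totals["input"] += usage.get("input_tokens", 0)
        totals["output"] += usage.get("output_tokens", 0)
        totals["cache_creation"] += usage.get("cache_creation_input_tokens", 0)
        totals["cache_read"] += usage.get("cache_read_input_tokens", 0)
    return {"model": model, "git_branch": branch, "tokens": totals}
```

The same fold is a natural place to later hang cost estimation and context-window math, since both derive from the per-turn usage records.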

Phase 2 — Auto Project Discovery

  • Scan ~/.claude/projects/ to auto-discover project folders
  • Decode path-encoded folder names (e.g., `-home-sinh-git-repos-foo` → `/home/sinh/git-repos/foo`)
  • Auto-link sessions to projects based on discovered paths
  • Expose via MCP tool: list_discovered_projects

Phase 3 — Session ↔ Avodah Task Linking

Enable bidirectional linking between ai-usage-log sessions and Avodah tasks:

  • Add optional task_ids field to session metadata
  • When saving a session, accept Avodah task IDs that were worked on
  • Add session_ids lookup — given a task ID, find all sessions that touched it
  • Expose via MCP: link_session_task, get_task_sessions

This creates the feedback loop: AI reads tasks → works on them → session logs reference the tasks → next session can see history.
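A sketch of the two lookups over a hypothetical in-memory session store; the real ai-usage-log storage schema and MCP tool signatures may differ:

```python
def link_session_task(sessions: dict, session_id: str, task_id: str) -> None:
    """Attach an Avodah task ID to a session's metadata (idempotent)."""
    meta = sessions.setdefault(session_id, {"task_ids": []})
    if task_id not in meta["task_ids"]:
        meta["task_ids"].append(task_id)

def get_task_sessions(sessions: dict, task_id: str) -> list[str]:
    """Reverse lookup: all sessions that reference a given task ID."""
    return [sid for sid, meta in sessions.items()
            if task_id in meta.get("task_ids", [])]
```

Storing only `task_ids` on the session side and deriving the reverse direction by lookup keeps a single source of truth, avoiding the two lists drifting apart.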

Phase 4 — Cost Tracking & Reporting

  • Maintain a model pricing table (configurable, with sensible defaults)
  • Calculate per-session cost from token counts × model pricing
  • Aggregate cost by: day, week, project, model
  • Expose via MCP: get_cost_summary(period, project?)
  • Include cost in daily summary output
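The per-session calculation itself is simple; a sketch with placeholder rates (the numbers are illustrative only — real defaults belong in the configurable pricing table):

```python
# Illustrative pricing table (USD per million tokens). These numbers are
# placeholders, not real model prices; defaults should come from config.
PRICING = {
    "example-model": {"input": 3.00, "output": 15.00,
                      "cache_creation": 3.75, "cache_read": 0.30},
}

def session_cost(model: str, tokens: dict[str, int],
                 pricing: dict = PRICING) -> float:
    """Cost = sum over token kinds of (count / 1M) x per-million rate."""
    rates = pricing.get(model)
    if rates is None:
        return 0.0  # unknown model: report zero rather than guess a rate
    return sum(tokens.get(kind, 0) / 1_000_000 * rate
               for kind, rate in rates.items())
```

Aggregation by day, week, project, or model then reduces to grouping sessions and summing this per-session figure.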

Phase 5 — Live Session Awareness (Stretch)

  • Detect currently active Claude Code sessions (via .jsonl file locks or recent writes)
  • Expose get_active_sessions MCP tool showing: project, model, duration, tokens so far, context %
  • Could power the Context Guardian feature (auto-save when context runs low)
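A sketch of recency-based detection. Using mtime is more portable than file locks; the 5-minute idle threshold is an assumed default, not a value taken from CodeFire:

```python
import time
from pathlib import Path

def find_active_sessions(projects_dir: Path,
                         idle_threshold_s: float = 300.0) -> list[Path]:
    """Treat a .jsonl session file written within the last N seconds as
    active. mtime-based detection is a heuristic: a session that streams
    a long response keeps its file fresh, while an abandoned one goes stale.
    """
    now = time.time()
    return [p for p in projects_dir.glob("*/*.jsonl")
            if now - p.stat().st_mtime < idle_threshold_s]
```

Polling this on a timer (or via a filesystem watcher) would give the inputs that `get_active_sessions` and a Context Guardian auto-save trigger both need.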

Non-Goals

  • No GUI — ai-usage-log stays MCP-only; visual dashboards are Avodah Flutter app territory
  • No code indexing/RAG — out of scope for session logging
  • No browser automation — CodeFire feature, not relevant here
