Enrich session statistics for deeper usage pattern analysis #8

@sinh-x

Description

Summary

Current session statistics capture flat tool counts but miss important dimensions needed to understand usage patterns across MCP servers and workflows.

Current State

The statistics JSON files capture:

  • Session metadata (id, project, branch, model, timestamps, duration)
  • Token counts (input, output, cache, subagent)
  • Message counts (user, assistant)
  • tools_summary — flat count per tool name
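For reference, the current statistics file has roughly this shape. This is a minimal sketch: the top-level groupings follow the fields listed above, but the exact key names and all values are hypothetical.

```python
import json

# Hypothetical example of a current per-session statistics file.
# Field groupings follow the issue; key names and values are illustrative.
current_stats = {
    "session": {
        "id": "abc123",
        "project": "my-project",
        "branch": "main",
        "model": "claude-sonnet",
        "started_at": "2025-01-01T10:00:00Z",
        "duration_seconds": 1800,
    },
    "tokens": {"input": 12000, "output": 4000, "cache": 8000, "subagent": 500},
    "messages": {"user": 12, "assistant": 15},
    # tools_summary: flat count per tool name -- the only tool-level signal today.
    "tools_summary": {
        "API-list-spaces": 1,
        "API-search-space": 4,
        "API-update-object": 13,
    },
}

print(json.dumps(current_stats, indent=2))
```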

Proposed Additions

High Value

  1. Tool call sequence — Ordered list of tool calls (or at least common subsequences). Reveals workflow patterns like list-spaces → search-space → get-object → update-object.

  2. Tool success/failure counts — Per-tool error rates to surface MCP reliability issues. e.g. {"API-update-object": {"success": 10, "failure": 3}}

  3. MCP server grouping — Aggregate tool counts by MCP server. e.g. {"anytype": 24, "google": 2, "ai-usage-log": 3}. Currently requires parsing tool name prefixes manually.

  4. Session tags/intent — A tags array (e.g. ["learning", "anytype", "google-classroom"]) or session_summary string for easy filtering without grepping file contents.

  5. Tool call timing — Average/max latency per tool to identify slow MCP tools.
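Most of the high-value fields above can be derived from an ordered tool-call log at session end. The sketch below assumes a hypothetical log of `(tool_name, success, latency_ms)` tuples and a `server__tool` naming convention for the MCP server prefix; both are illustrative assumptions, not the tool's actual data model.

```python
from collections import Counter, defaultdict

# Hypothetical ordered tool-call log: (tool_name, success, latency_ms).
# The "server__tool" prefix convention is assumed for illustration.
calls = [
    ("anytype__list-spaces", True, 120),
    ("anytype__search-space", True, 340),
    ("anytype__get-object", True, 95),
    ("anytype__update-object", False, 2100),
    ("anytype__update-object", True, 480),
    ("google__list-courses", True, 610),
]

# 1. Tool call sequence: ordered list of tool names.
sequence = [name for name, _, _ in calls]

# 2. Per-tool success/failure counts.
outcomes = defaultdict(lambda: {"success": 0, "failure": 0})
for name, ok, _ in calls:
    outcomes[name]["success" if ok else "failure"] += 1

# 3. Aggregate counts by MCP server (prefix before "__").
by_server = Counter(name.split("__", 1)[0] for name, _, _ in calls)

# 5. Average/max latency per tool.
samples = defaultdict(list)
for name, _, ms in calls:
    samples[name].append(ms)
latency = {
    tool: {"avg_ms": sum(v) / len(v), "max_ms": max(v)}
    for tool, v in samples.items()
}
```

With this log, `by_server` yields `{"anytype": 5, "google": 1}` and `outcomes["anytype__update-object"]` yields `{"success": 1, "failure": 1}`, directly answering the reliability and grouping questions without parsing prefixes downstream.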

Medium Value

  1. Tool call pairs (co-occurrence) — Which tools are called together in the same turn. Reveals parallel vs sequential usage.

  2. User message density — avg_time_between_user_messages to show engagement intensity.

  3. Error messages — Count + last error string per tool. Currently errors vanish after session ends.

  4. Objects touched count — For MCP tools, count of unique object IDs operated on (without storing IDs). Shows breadth vs depth of interaction.
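The co-occurrence field (item 1) could be computed by counting unordered tool pairs within each assistant turn. The per-turn grouping and tool names below are hypothetical:

```python
from collections import Counter
from itertools import combinations

# Hypothetical tool calls grouped by assistant turn; names are illustrative.
turns = [
    ["API-search-space", "API-get-object"],
    ["API-get-object", "API-update-object"],
    ["API-search-space", "API-get-object"],
]

# Count unordered pairs of distinct tools that appear in the same turn.
pairs = Counter()
for turn in turns:
    for a, b in combinations(sorted(set(turn)), 2):
        pairs[(a, b)] += 1
```

Here `pairs[("API-get-object", "API-search-space")]` comes out to 2, flagging a tool pair that tends to run in the same turn (parallel usage) rather than in separate sequential turns.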

Low Value (Nice to Have)

  1. Context compression events — How many times context was compressed during the session.
  2. Skills invoked — Which /skill commands were triggered (separate from raw tool calls).
  3. Git diff stats — Lines added/removed during the session.

Motivation

When analyzing Anytype MCP usage across sessions, the current stats only revealed tool call counts. With the proposed fields, we could answer questions like:

  • What's the typical workflow when interacting with Anytype?
  • Which MCP tools fail most often?
  • How much time is spent waiting on MCP tool responses?
  • Are sessions focused (few objects, deep) or broad (many objects, shallow)?
