diff --git a/CHANGELOG.md b/CHANGELOG.md index e273c11..e17132d 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,14 @@ +## [3.0.2](https://github.com/jmlweb/hyntx/compare/v3.0.1...v3.0.2) (2026-03-22) + +### Bug Fixes + +- **security:** remove ajv override that broke ESLint (ajv v6/v8 incompatibility) ([88aac5c](https://github.com/jmlweb/hyntx/commit/88aac5cce1820ec62c1c1ec0f90c45e3b4004767)) +- update dependency overrides to resolve security vulnerabilities ([685befa](https://github.com/jmlweb/hyntx/commit/685befaffa058c5e6b45fd57432c73b0664aad29)) + +### Documentation + +- add quality assessment report ([722de83](https://github.com/jmlweb/hyntx/commit/722de83b18ee4f332d79c02fe69b7cf03960038d)) + ## [3.0.1](https://github.com/jmlweb/hyntx/compare/v3.0.0...v3.0.1) (2026-01-27) ### Bug Fixes diff --git a/README.md b/README.md index d67e0b9..c50a740 100644 --- a/README.md +++ b/README.md @@ -176,7 +176,7 @@ hyntx -m individual # Short form - Analyzing high-stakes or complex prompts - Conducting quality audits or teaching sessions -**Performance Note**: Numbers based on `gemma3:4b` on CPU. Actual speed varies by hardware, model size, and prompt complexity. +**Performance Note**: Numbers based on `gemma4:e4b` on CPU. Actual speed varies by hardware, model size, and prompt complexity. **Detailed Guide**: See [Analysis Modes Documentation](./docs/ANALYSIS_MODES.md) for comprehensive comparison, examples, and decision guidelines. @@ -255,17 +255,17 @@ Configure one or more providers in priority order. 
Hyntx will try each provider ```bash # Single provider (Ollama only) export HYNTX_SERVICES=ollama -export HYNTX_OLLAMA_MODEL=gemma3:4b +export HYNTX_OLLAMA_MODEL=gemma4:e4b # Multi-provider with fallback (tries Ollama first, then Anthropic) export HYNTX_SERVICES=ollama,anthropic -export HYNTX_OLLAMA_MODEL=gemma3:4b +export HYNTX_OLLAMA_MODEL=gemma4:e4b export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here # Cloud-first with local fallback export HYNTX_SERVICES=anthropic,ollama export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here -export HYNTX_OLLAMA_MODEL=gemma3:4b +export HYNTX_OLLAMA_MODEL=gemma4:e4b ``` #### Provider-Specific Variables @@ -274,7 +274,7 @@ export HYNTX_OLLAMA_MODEL=gemma3:4b | Variable | Default | Description | | -------------------- | ------------------------ | ----------------- | -| `HYNTX_OLLAMA_MODEL` | `gemma3:4b` | Model to use | +| `HYNTX_OLLAMA_MODEL` | `gemma4:e4b` | Model to use | | `HYNTX_OLLAMA_HOST` | `http://localhost:11434` | Ollama server URL | **Anthropic:** @@ -303,7 +303,7 @@ export HYNTX_REMINDER=7d ```bash # Add to ~/.zshrc or ~/.bashrc (or let Hyntx auto-save it) export HYNTX_SERVICES=ollama,anthropic -export HYNTX_OLLAMA_MODEL=gemma3:4b +export HYNTX_OLLAMA_MODEL=gemma4:e4b export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here export HYNTX_REMINDER=14d @@ -327,7 +327,7 @@ Ollama runs AI models locally for **privacy and cost savings**. 2. Pull a model: ```bash - ollama pull gemma3:4b + ollama pull gemma4:e4b ``` 3. 
Verify it's running: @@ -369,7 +369,7 @@ Configure multiple providers for automatic fallback: ```bash # If Ollama is down, automatically try Anthropic export HYNTX_SERVICES=ollama,anthropic -export HYNTX_OLLAMA_MODEL=gemma3:4b +export HYNTX_OLLAMA_MODEL=gemma4:e4b export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here ``` @@ -481,11 +481,11 @@ If using Ollama (recommended for privacy): ollama serve # Pull a model if needed -ollama pull gemma3:4b +ollama pull gemma4:e4b # Set environment variables (add to ~/.zshrc or ~/.bashrc) export HYNTX_SERVICES=ollama -export HYNTX_OLLAMA_MODEL=gemma3:4b +export HYNTX_OLLAMA_MODEL=gemma4:e4b ``` ### Available MCP Tools @@ -662,7 +662,7 @@ Use check-context to verify: "Update the component to handle errors" #### "Slow responses" - Local Ollama models are fastest but require GPU for best performance -- Consider using a faster model: `export HYNTX_OLLAMA_MODEL=gemma3:4b:1b` +- Consider using a faster model: `export HYNTX_OLLAMA_MODEL=gemma4:e2b` - Cloud providers (Anthropic, Google) offer faster responses but require API keys ## Privacy & Security @@ -708,15 +708,15 @@ For local analysis with Ollama, you need to have a compatible model installed. 
S | Use Case | Model | Parameters | Disk Size | Speed (CPU) | Quality | | ------------------- | ------------- | ---------- | --------- | -------------- | --------- | -| **Daily use** | `gemma3:4b` | 2-3B | ~2GB | ~2-5s/prompt | Good | +| **Daily use** | `gemma4:e4b` | ~5GB Q4 | ~5GB | ~3-7s/prompt | Good | | **Production** | `mistral:7b` | 7B | ~4GB | ~5-10s/prompt | Better | | **Maximum quality** | `qwen2.5:14b` | 14B | ~9GB | ~15-30s/prompt | Excellent | **Installation**: ```bash -# Install recommended model (gemma3:4b) -ollama pull gemma3:4b +# Install recommended model (gemma4:e4b) +ollama pull gemma4:e4b # Or choose a different model ollama pull mistral:7b diff --git a/docs/DATA_PROCESSING_ANALYSIS.md b/docs/DATA_PROCESSING_ANALYSIS.md new file mode 100644 index 0000000..e69de29 diff --git a/docs/HEURISTICS_ANALYSIS.md b/docs/HEURISTICS_ANALYSIS.md new file mode 100644 index 0000000..e43dfe0 --- /dev/null +++ b/docs/HEURISTICS_ANALYSIS.md @@ -0,0 +1,91 @@ +# Heuristics Analysis Report + +**Date:** 2026-01-30 +**Author:** HAL (assisted analysis) + +## Overview + +This document analyzes the `extractRealExamples()` function in `src/core/aggregator.ts` and the category mapping in `src/providers/base.ts`. + +## Current Architecture + +### Analysis Pipeline + +``` +Prompts → AI Provider → Minimal/Individual Result → Aggregator → Full AnalysisResult → Semantic Validator +``` + +### Key Components + +1. **ISSUE_TAXONOMY** (`schemas.ts`): 8 predefined issue types + - `vague`, `no-context`, `too-broad`, `no-goal`, `imperative` + - `missing-technical-details`, `unclear-priorities`, `insufficient-constraints` + +2. **extractRealExamples()** (`aggregator.ts`): Heuristic matcher for fallback examples + - Only used when AI doesn't provide examples (minimal mode) + - Uses boolean matching with specific patterns per issue type + +3. 
**Individual Mode**: AI returns per-prompt results with real examples
+ - `parseBatchIndividualResponse()` in `base.ts`
+ - Examples come directly from AI categorization
+
+## Findings
+
+### extractRealExamples() Heuristics
+
+The current implementation uses strict boolean matching:
+
+| Issue Type | Current Heuristic |
+| ---------- | ------------------------------------------------------------------------- |
+| vague | < 50 chars, ≤ 5 words, generic verbs, no file extensions |
+| no-context | Has pronouns (this/it/that), no files, no function/component/method/class |
+| too-broad | > 100 chars, ≥ 2 "and", has also/then/build/create |
+| no-goal | < 30 chars, ≤ 4 words, no action verbs, no question mark |
+| imperative | < 20 chars, ≤ 3 words, starts with verb |
+
+### Category Mapping Inconsistency
+
+`base.ts` uses different category IDs than `schemas.ts`:
+
+| base.ts (individual mode) | schemas.ts (taxonomy) |
+| ------------------------- | --------------------- |
+| `vague-request` | `vague` |
+| `missing-context` | `no-context` |
+| `unclear-goal` | `no-goal` |
+
+## Recommendations
+
+### 1. Unify Category IDs
+
+Add mapping in `base.ts`:
+
+```typescript
+const CATEGORY_TO_TAXONOMY_ID: Record<string, string> = {
+  'vague-request': 'vague',
+  'missing-context': 'no-context',
+  'unclear-goal': 'no-goal',
+  // ... etc
+};
+```
+
+### 2. Improve Heuristics (Future Work)
+
+Consider scoring-based matching instead of boolean:
+
+- Calculate match score (0-1) per prompt per issue
+- Select highest-scoring examples
+- More nuanced matching for edge cases
+
+### 3. Individual Mode Already Works Well
+
+The individual/batch-individual schema already extracts real examples from AI responses. The heuristics in `extractRealExamples()` are only a fallback for minimal mode. 
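The boolean heuristics tabulated above can be sketched as follows. This is a hypothetical illustration of the `vague` rule only; names such as `isVague` and `GENERIC_VERBS` are assumptions, not the actual `extractRealExamples()` code in `aggregator.ts`:

```typescript
// Illustrative sketch of the "vague" heuristic row: < 50 chars, <= 5 words,
// generic verbs, no file extensions. Not the real aggregator.ts implementation.
const GENERIC_VERBS = ["fix", "update", "change", "improve", "handle", "make"];

function isVague(prompt: string): boolean {
  const words = prompt.trim().split(/\s+/);
  // A dot followed by a short word boundary approximates "mentions a file"
  const hasFileExtension = /\.\w{1,5}(\s|$)/.test(prompt);
  const hasGenericVerb = words.some((w) =>
    GENERIC_VERBS.includes(w.toLowerCase()),
  );
  return (
    prompt.length < 50 &&
    words.length <= 5 &&
    hasGenericVerb &&
    !hasFileExtension
  );
}
```

A scoring-based variant (recommendation 2 above) would replace the boolean conjunction with a weighted sum over the same signals and pick the highest-scoring prompts as examples.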
+ +## Test Coverage + +- `aggregator.test.ts`: 50 tests, all passing +- Tests cover all issue types and edge cases +- Gold standard in `benchmark/gold-standard.ts`: 50 prompts across 4 tiers + +## Conclusion + +The current architecture is solid. The main improvement opportunity is unifying category mappings between individual mode and the taxonomy. The heuristics work correctly for their intended purpose as a fallback. diff --git a/docs/MINIMUM_VIABLE_MODEL.md b/docs/MINIMUM_VIABLE_MODEL.md index 0f0879e..18f447c 100644 --- a/docs/MINIMUM_VIABLE_MODEL.md +++ b/docs/MINIMUM_VIABLE_MODEL.md @@ -2,13 +2,13 @@ ## Executive Summary -**Minimum viable model: `gemma3:4b` (2-3B parameters, ~2GB disk, ~2-5s/prompt CPU)** +**Minimum viable model: `gemma4:e4b` (~5GB Q4, 128K context, native function calling, ~3-7s/prompt CPU)** This document documents the findings from the analysis to determine the minimum viable Ollama model that can generate valid and useful results with Hyntx. **Quick recommendations**: -- **Minimal viable**: `gemma3:4b` (2B) - Fast, lightweight, good for daily use +- **Minimal viable**: `gemma4:e4b` (~5GB Q4) - 128K context, native function calling, good for daily use - **Production quality**: `mistral:7b` (7B) - Better analysis, moderate resources - **Maximum quality**: `qwen2.5:14b` or `llama3:70b` - Full schema, requires GPU for 70B @@ -33,16 +33,16 @@ Hyntx uses an adaptive system that adjusts the analysis schema based on model si ### Models Tested -| Model | Parameters | Disk Size | Schema | Result | Quality | Speed (CPU) | -| ----------- | ---------- | --------- | ------- | -------- | --------- | ------------ | -| `gemma3:4b` | 2-3B | ~2GB | Minimal | ✅ Works | Excellent | ~2-5s/prompt | -| `gemma3:4b` | 4B | ~3.3GB | Minimal | ✅ Works | Excellent | ~3-6s/prompt | +| Model | Parameters | Disk Size | Schema | Result | Quality | Speed (CPU) | +| ------------ | ---------- | --------- | ------- | -------- | --------- | ------------ | +| `gemma4:e4b` | 
2-3B | ~2GB | Minimal | ✅ Works | Excellent | ~2-5s/prompt | +| `gemma4:e4b` | 4B | ~3.3GB | Minimal | ✅ Works | Excellent | ~3-6s/prompt | ### Quality Analysis **Test performed**: Analysis of 52 prompts from current day -**Results with `gemma3:4b`**: +**Results with `gemma4:e4b`**: - ✅ Valid JSON generated correctly - ✅ Valid and consistent issue IDs (no-context, vague, too-broad, imperative) @@ -51,9 +51,9 @@ Hyntx uses an adaptive system that adjusts the analysis schema based on model si - ✅ Reasonable scores (0-100 scale) - ✅ No parsing errors -**Results with `gemma3:4b`**: +**Results with `gemma4:e4b`**: -- ✅ Identical results to `gemma3:4b` +- ✅ Identical results to `gemma4:e4b` - ✅ Same quality and consistency - ✅ No notable differences @@ -71,21 +71,22 @@ Models with fewer parameters or poor instruction-following capabilities will hav ## Recommended Minimum Model -### `gemma3:4b` (default) +### `gemma4:e4b` (default) **Reasons**: - ✅ It's the system default model -- ✅ Classified as "micro" (automatically uses minimal schema) +- ✅ Uses `small` strategy which maps to **Full Schema** — complete analysis with patterns, examples, and before/after - ✅ Works perfectly in real tests -- ✅ Reasonable balance between size and capability -- ✅ Manageable size (~2GB on disk) +- ✅ 128K context window handles large prompt batches without truncation +- ✅ Native function calling enables more reliable structured output +- ✅ ~5GB Q4 — good balance between size and capability - ✅ Acceptable speed on CPU/GPU **Configuration**: ```bash -export HYNTX_OLLAMA_MODEL=gemma3:4b +export HYNTX_OLLAMA_MODEL=gemma4:e4b export HYNTX_OLLAMA_HOST=http://localhost:11434 ``` @@ -97,7 +98,7 @@ export HYNTX_OLLAMA_HOST=http://localhost:11434 **Why it's the balanced choice**: -- ✅ **Better quality**: Uses the "Small Schema" (more detailed than Minimal Schema used by `gemma3:4b`) +- ✅ **Better quality**: Uses the "Small Schema" (comparable to the Full Schema used by `gemma4:e4b`) - Better analysis 
quality with pattern detection and basic analysis - Some custom examples extracted from your prompts - Basic contextual information included @@ -119,7 +120,7 @@ export HYNTX_OLLAMA_MODEL=mistral:7b **When to use `mistral:7b`**: -- You want better analysis quality than `gemma3:4b` but don't need maximum quality +- You want a 7B model with comparable schema coverage to `gemma4:e4b` but prefer mistral's output style - You have modern hardware (8GB+ RAM, modern CPU) - You're doing production analysis or code reviews - You want custom examples from your prompts (not just taxonomy-based examples) @@ -134,12 +135,12 @@ export HYNTX_OLLAMA_MODEL=mistral:7b ### Micro Models (Minimal Schema) -| Model | Parameters | Disk Size | Speed (CPU) | Status | -| ----------- | ---------- | --------- | ------------ | ------------------------ | -| `gemma3:4b` | 2-3B | ~2GB | ~2-5s/prompt | ✅ Recommended (default) | -| `gemma3:4b` | 4B | ~3.3GB | ~3-6s/prompt | ✅ Tested, works well | -| `phi3:mini` | 3.8B | ~2.3GB | ~3-5s/prompt | Expected to work | -| `gemma2:2b` | 2B | ~1.6GB | ~1-3s/prompt | Theoretically viable | +| Model | Parameters | Disk Size | Speed (CPU) | Status | +| ------------ | ---------- | --------- | ------------ | ------------------------ | +| `gemma4:e4b` | 2-3B | ~2GB | ~2-5s/prompt | ✅ Recommended (default) | +| `gemma4:e4b` | 4B | ~3.3GB | ~3-6s/prompt | ✅ Tested, works well | +| `phi3:mini` | 3.8B | ~2.3GB | ~3-5s/prompt | Expected to work | +| `gemma2:2b` | 2B | ~1.6GB | ~1-3s/prompt | Theoretically viable | ### Small Models (Small Schema - Better Quality) @@ -201,17 +202,17 @@ For better quality, use models that support full schema (≥ 8B parameters). 
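The size-to-schema tiers used throughout these tables can be sketched as a simple selector. This is a hypothetical helper assuming cutoffs of 7B (small) and 14B (full) drawn from the tables; Hyntx's actual strategy mapping lives in its provider code and may route specific models differently (e.g. `gemma4:e4b` is described above as getting the full schema despite its size):

```typescript
type Schema = "minimal" | "small" | "full";

// Hypothetical schema selector keyed on parameter count in billions.
// Cutoffs are assumptions inferred from the model tables, not Hyntx code.
function schemaForParams(paramsB: number): Schema {
  if (paramsB >= 14) return "full"; // e.g. qwen2.5:14b, llama3:70b
  if (paramsB >= 7) return "small"; // e.g. mistral:7b
  return "minimal"; // micro models
}
```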
## Usage Recommendations -### For Development/Testing (Minimal Schema) +### For Development/Testing (Full Schema) ```bash -export HYNTX_OLLAMA_MODEL=gemma3:4b +export HYNTX_OLLAMA_MODEL=gemma4:e4b ``` -- **Parameters**: 2-3B -- **Speed**: ~2-5s/prompt (CPU) +- **Size**: ~5GB Q4 +- **Speed**: ~3-7s/prompt (CPU) - **Use case**: Fast iteration, daily use -- ✅ Valid and useful results -- ✅ Lightweight and fast +- ✅ Full schema — complete patterns, examples, before/after +- ✅ 128K context, native function calling ### For Professional Analysis (Small Schema) @@ -252,7 +253,7 @@ ollama list # Test with Hyntx export HYNTX_SERVICES=ollama -export HYNTX_OLLAMA_MODEL=gemma3:4b +export HYNTX_OLLAMA_MODEL=gemma4:e4b hyntx --date today --output test.json # Verify valid JSON @@ -278,24 +279,24 @@ If the command generates valid JSON with patterns, the model is viable. ## Conclusion -**The confirmed minimum viable model is `gemma3:4b` (2-3B parameters, ~2GB disk)**. +**The confirmed minimum viable model is `gemma4:e4b` (~5GB Q4, 128K context, native function calling)**. This model: -- ✅ Works correctly with the minimal schema +- ✅ Works correctly with the full schema (`small` strategy → full schema) - ✅ Generates valid and useful results - ✅ Is the system default - ✅ Provides optimal balance between size, speed, and quality -- ✅ Fast enough for daily use (~2-5s/prompt on CPU) +- ✅ Fast enough for daily use (~3-7s/prompt on CPU) **Recommendations by use case**: -- **Daily development**: `gemma3:4b` (2-3B) - Minimal schema +- **Daily development**: `gemma4:e4b` (~5GB Q4) - Full schema (128K context, native function calling) - **Production analysis**: `mistral:7b` (7B) - Small schema - **Team retrospectives**: `qwen2.5:14b` (14B) - Full schema - **Maximum quality**: `llama3:70b` (70B) - Full schema (GPU needed) -Most users will find `gemma3:4b` sufficient. For deeper analysis, use models ≥ 7B parameters that support small or full schemas. 
+Most users will find `gemma4:e4b` sufficient — it uses the full schema with 128K context and native function calling. For even deeper analysis, use models ≥ 14B parameters.

## Benchmark Results (2026-01-27)

@@ -309,22 +310,22 @@ Most users will find `gemma3:4b` sufficient. For deeper analysis, use models ≥

| Model | Time | Score | Patterns | Status |
| ------------ | ------ | ----- | -------- | ------------------------- |
-| gemma3:4b | 44s ⚡ | 6/10 | 5 | ✅ **Best choice** |
+| gemma4:e4b | 44s ⚡ | 6/10 | 5 | ✅ **Best choice** |
| codellama:7b | 82s | 8/10 | 1 | ❌ Returns placeholders |
| mistral:7b | 89s | 4/10 | 5 | ✅ Good |
-| gemma3:4b | 207s | 6/10 | 5 | ⚠️ Slow, had counting bug |
+| llama3.2 | 207s | 6/10 | 5 | ⚠️ Slow, had counting bug |

### Key Findings

-1. **gemma3:4b is the recommended default** - 4x faster than gemma3:4b with better results
+1. **gemma4:e4b is the recommended default** - 4x faster than llama3.2 with better results
2. **codellama:7b is NOT recommended** - Returns placeholder text instead of real analysis
-3. **gemma3:4b has bugs** - Reported 44 prompts when only 9 were analyzed (fixed in code)
-4. **mistral:7b is reliable** - Good quality but slower than gemma3:4b
+3. **llama3.2 has bugs** - Reported 44 prompts when only 9 were analyzed (fixed in code)
+4. **mistral:7b is reliable** - Good quality but slower than gemma4:e4b

### Recommendation Update

Based on these benchmarks, the recommended models are:

-1. **Daily use**: `gemma3:4b` (fast, accurate)
+1. **Daily use**: `gemma4:e4b` (fast, accurate)
2. **Fallback**: `mistral:7b` (slower but reliable)
3. 
**Avoid**: `codellama:7b` (returns placeholder text) diff --git a/docs/QUALITY_ASSESSMENT.md b/docs/QUALITY_ASSESSMENT.md index 64ce3b4..c6689f9 100644 --- a/docs/QUALITY_ASSESSMENT.md +++ b/docs/QUALITY_ASSESSMENT.md @@ -24,7 +24,7 @@ This report documents a comprehensive quality assessment of Hyntx's analysis out | Model | Quality | Speed | Recommendation | | -------------- | ------------- | ----- | ------------------------------------------ | | `mistral:7b` | ✅ Good | 89s | **Recommended for production** | -| `gemma3:4b` | ⚠️ Acceptable | 44s | Good for quick analysis, some placeholders | +| `gemma4:e4b` | ⚠️ Acceptable | 44s | Good for quick analysis, some placeholders | | `llama3.2` | ⚠️ Acceptable | 207s | Slow, functional | | `codellama:7b` | ❌ Unusable | 82s | Returns only placeholder text | @@ -53,7 +53,7 @@ This report documents a comprehensive quality assessment of Hyntx's analysis out } ``` -#### gemma3:4b (Default) +#### gemma4:e4b (Default) **Strengths:** @@ -150,7 +150,7 @@ Placeholder detection doesn't catch all cases: - **Date analyzed:** 2026-01-02 - **Project:** hyntx -- **Model:** gemma3:4b +- **Model:** gemma4:e4b - **Prompts found:** 9 - **Successfully analyzed:** 4 (44%) - **Skipped due to errors:** 5 @@ -202,7 +202,7 @@ However, users should be aware that: 1. Output quality varies significantly by model 2. Some bugs exist in stats calculation 3. `codellama:7b` should be avoided entirely -4. The default `gemma3:4b` is fast but may include some placeholder text +4. The default `gemma4:e4b` is fast but may include some placeholder text The project status has been updated from "NOT READY FOR USE" to "BETA" to reflect its current functional state. 
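The placeholder-detection gap noted in the assessment above (placeholder output from `codellama:7b` slipping through) could be tightened along these lines. A minimal sketch assuming simple regex screening; the function name and patterns are illustrative, not Hyntx's actual detector:

```typescript
// Hypothetical placeholder detector; the quality report notes that current
// detection doesn't catch all cases. Patterns here are assumptions.
const PLACEHOLDER_PATTERNS: RegExp[] = [
  /\bplaceholder\b/i,
  /\blorem ipsum\b/i,
  /\b(insert|add) .{0,30} here\b/i,
  /\bexample (text|analysis|output)\b/i,
];

function looksLikePlaceholder(output: string): boolean {
  return PLACEHOLDER_PATTERNS.some((re) => re.test(output));
}
```

Flagged outputs could then be skipped or retried with a different provider rather than being counted as successful analyses.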
diff --git a/package.json b/package.json index d50790d..a46a5d9 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "hyntx", - "version": "3.0.1", + "version": "3.0.2", "description": "CLI that analyzes Claude Code prompts and generates improvement suggestions", "type": "module", "packageManager": "pnpm@9.15.4", @@ -70,7 +70,7 @@ "husky": "^9.1.7", "lint-staged": "^16.2.7", "prettier": "^3.7.4", - "semantic-release": "^25.0.2", + "semantic-release": "^25.0.3", "tsup": "^8.5.1", "typescript": "^5.9.3", "typescript-eslint": "^8.51.0", @@ -129,10 +129,23 @@ }, "pnpm": { "overrides": { - "hono": ">=4.11.4", - "lodash": ">=4.17.23", - "lodash-es": ">=4.17.23", - "undici": ">=7.18.2" + "hono": ">=4.12.4", + "lodash": ">=4.18.0", + "lodash-es": ">=4.18.0", + "undici": ">=7.24.0", + "@hono/node-server": ">=1.19.10", + "@modelcontextprotocol/sdk": ">=1.26.0", + "@isaacs/brace-expansion": ">=5.0.5", + "minimatch": ">=10.2.3", + "rollup": ">=4.59.0", + "flatted": ">=3.4.2", + "qs": ">=6.14.2", + "handlebars": ">=4.7.9", + "picomatch": ">=4.0.4", + "brace-expansion": ">=5.0.5", + "path-to-regexp": ">=8.4.0", + "vite": ">=7.3.2", + "yaml": ">=2.8.3" } } } diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml index b762384..d2c0ed7 100644 --- a/pnpm-lock.yaml +++ b/pnpm-lock.yaml @@ -5,18 +5,31 @@ settings: excludeLinksFromLockfile: false overrides: - hono: '>=4.11.4' - lodash: '>=4.17.23' - lodash-es: '>=4.17.23' - undici: '>=7.18.2' + hono: '>=4.12.4' + lodash: '>=4.18.0' + lodash-es: '>=4.18.0' + undici: '>=7.24.0' + '@hono/node-server': '>=1.19.10' + '@modelcontextprotocol/sdk': '>=1.26.0' + '@isaacs/brace-expansion': '>=5.0.5' + minimatch: '>=10.2.3' + rollup: '>=4.59.0' + flatted: '>=3.4.2' + qs: '>=6.14.2' + handlebars: '>=4.7.9' + picomatch: '>=4.0.4' + brace-expansion: '>=5.0.5' + path-to-regexp: '>=8.4.0' + vite: '>=7.3.2' + yaml: '>=2.8.3' importers: .: dependencies: '@modelcontextprotocol/sdk': - specifier: ^1.25.3 - version: 1.25.3(hono@4.11.7)(zod@4.3.6) 
+ specifier: '>=1.26.0' + version: 1.27.1(zod@4.3.6) asciichart: specifier: ^1.5.25 version: 1.5.25 @@ -80,19 +93,19 @@ importers: version: 1.0.5 '@jmlweb/tsup-config-base': specifier: ^1.1.4 - version: 1.1.4(tsup@8.5.1(jiti@2.6.1)(postcss@8.5.6)(typescript@5.9.3)(yaml@2.8.2)) + version: 1.1.4(tsup@8.5.1(jiti@2.6.1)(postcss@8.5.8)(typescript@5.9.3)(yaml@2.8.3)) '@jmlweb/vitest-config': specifier: ^2.0.0 - version: 2.0.0(vitest@4.0.18(@types/node@25.0.10)(jiti@2.6.1)(yaml@2.8.2)) + version: 2.0.0(vitest@4.0.18(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)(@types/node@25.0.10)(esbuild@0.27.2)(jiti@2.6.1)(yaml@2.8.3)) '@semantic-release/changelog': specifier: ^6.0.3 - version: 6.0.3(semantic-release@25.0.2(typescript@5.9.3)) + version: 6.0.3(semantic-release@25.0.3(typescript@5.9.3)) '@semantic-release/exec': specifier: ^7.1.0 - version: 7.1.0(semantic-release@25.0.2(typescript@5.9.3)) + version: 7.1.0(semantic-release@25.0.3(typescript@5.9.3)) '@semantic-release/git': specifier: ^10.0.1 - version: 10.0.1(semantic-release@25.0.2(typescript@5.9.3)) + version: 10.0.1(semantic-release@25.0.3(typescript@5.9.3)) '@types/asciichart': specifier: ^1.5.8 version: 1.5.8 @@ -107,7 +120,7 @@ importers: version: 2.4.9 '@vitest/coverage-v8': specifier: ^4.0.16 - version: 4.0.18(vitest@4.0.18(@types/node@25.0.10)(jiti@2.6.1)(yaml@2.8.2)) + version: 4.0.18(vitest@4.0.18(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)(@types/node@25.0.10)(esbuild@0.27.2)(jiti@2.6.1)(yaml@2.8.3)) conventional-changelog-conventionalcommits: specifier: ^9.1.0 version: 9.1.0 @@ -130,11 +143,11 @@ importers: specifier: ^3.7.4 version: 3.8.1 semantic-release: - specifier: ^25.0.2 - version: 25.0.2(typescript@5.9.3) + specifier: ^25.0.3 + version: 25.0.3(typescript@5.9.3) tsup: specifier: ^8.5.1 - version: 8.5.1(jiti@2.6.1)(postcss@8.5.6)(typescript@5.9.3)(yaml@2.8.2) + version: 8.5.1(jiti@2.6.1)(postcss@8.5.8)(typescript@5.9.3)(yaml@2.8.3) typescript: specifier: ^5.9.3 version: 5.9.3 @@ -143,7 +156,7 @@ importers: 
version: 8.54.0(eslint@9.39.2(jiti@2.6.1))(typescript@5.9.3) vitest: specifier: ^4.0.16 - version: 4.0.18(@types/node@25.0.10)(jiti@2.6.1)(yaml@2.8.2) + version: 4.0.18(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)(@types/node@25.0.10)(esbuild@0.27.2)(jiti@2.6.1)(yaml@2.8.3) packages: @@ -257,6 +270,15 @@ packages: resolution: {integrity: sha512-VmIFV/JkBRhDRRv7N5B7zEUkNZIx9Mp+8Pe65erz0rKycXLsi8Epcw0XJ+btSeRXgTzE7DyOyA9bkJ9mn/yqVQ==} engines: {node: '>=v18'} + '@emnapi/core@1.9.2': + resolution: {integrity: sha512-UC+ZhH3XtczQYfOlu3lNEkdW/p4dsJ1r/bP7H8+rhao3TTTMO1ATq/4DdIi23XuGoFY+Cz0JmCbdVl0hz9jZcA==} + + '@emnapi/runtime@1.9.2': + resolution: {integrity: sha512-3U4+MIWHImeyu1wnmVygh5WlgfYDtyf0k8AbLhMFxOipihf6nrWC4syIm/SwEeec0mNSafiiNnMJwbza/Is6Lw==} + + '@emnapi/wasi-threads@1.2.1': + resolution: {integrity: sha512-uTII7OYF+/Mes/MrcIOYp5yOtSMLBWSIoLPpcgwipoiKbli6k322tcoFsxoIIxPDqW01SQGAgko4EzZi2BNv2w==} + '@esbuild/aix-ppc64@0.27.2': resolution: {integrity: sha512-GZMB+a0mOMZs4MpDbj8RJp4cw+w1WV5NYD6xzgvzUJ5Ek2jerwfO2eADyI6ExDSUED+1X8aMbegahsJi+8mgpw==} engines: {node: '>=18'} @@ -451,11 +473,11 @@ packages: resolution: {integrity: sha512-43/qtrDUokr7LJqoF2c3+RInu/t4zfrpYdoSDfYyhg52rwLV6TnOvdG4fXm7IkSB3wErkcmJS9iEhjVtOSEjjA==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} - '@hono/node-server@1.19.9': - resolution: {integrity: sha512-vHL6w3ecZsky+8P5MD+eFfaGTyCeOHUIFYMGpQGbrBTSmNNoxv0if69rEZ5giu36weC5saFuznL411gRX7bJDw==} + '@hono/node-server@1.19.11': + resolution: {integrity: sha512-dr8/3zEaB+p0D2n/IUrlPF1HZm586qgJNXK1a9fhg/PzdtkK7Ksd5l312tJX2yBuALqDYBlG20QEbayqPyxn+g==} engines: {node: '>=18.14.1'} peerDependencies: - hono: '>=4.11.4' + hono: '>=4.12.4' '@humanfs/core@0.19.1': resolution: {integrity: sha512-5DyQ4+1JEUzejeK1JGICcideyfUbGixgS9jNgex5nqkW+cY7WZhxBigmieN5Qnw9ZosSNVC9KQKyb+GUaGyKUA==} @@ -473,14 +495,6 @@ packages: resolution: {integrity: sha512-bV0Tgo9K4hfPCek+aMAn81RppFKv2ySDQeMoSZuvTASywNTnVJCArCZE2FWqpvIatKu7VMRLWlR1EazvVhDyhQ==} engines: {node: 
'>=18.18'} - '@isaacs/balanced-match@4.0.1': - resolution: {integrity: sha512-yzMTt9lEb8Gv7zRioUilSglI0c0smZ9k5D65677DLWLtWJaXIS3CqcGyUFByYKlnUj6TkjLVs54fBl6+TiGQDQ==} - engines: {node: 20 || >=22} - - '@isaacs/brace-expansion@5.0.0': - resolution: {integrity: sha512-ZT55BDLV0yv0RBm2czMiZ+SqCGO7AvmOM3G/w2xhVPH+te0aKgFjmBvGlL1dH+ql2tgGO3MVrbb3jCKyvpgnxA==} - engines: {node: 20 || >=22} - '@jmlweb/commitlint-config@3.0.1': resolution: {integrity: sha512-yZMn8spTCuv6n2O9JV3imMAW+ADPwS/snDEIMR+0wHC7kv83joVwFkLBuwzvPV1o9TtE3PxWqZopwxftpjr+gA==} engines: {node: '>=18.0.0'} @@ -542,8 +556,8 @@ packages: '@jridgewell/trace-mapping@0.3.31': resolution: {integrity: sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw==} - '@modelcontextprotocol/sdk@1.25.3': - resolution: {integrity: sha512-vsAMBMERybvYgKbg/l4L1rhS7VXV1c0CtyJg72vwxONVX0l4ZfKVAnZEWTQixJGTzKnELjQ59e4NbdFDALRiAQ==} + '@modelcontextprotocol/sdk@1.27.1': + resolution: {integrity: sha512-sr6GbP+4edBwFndLbM60gf07z0FQ79gaExpnsjMGePXqFcSSb7t6iscpjk9DhFhwd+mTEQrzNafGP8/iGGFYaA==} engines: {node: '>=18'} peerDependencies: '@cfworker/json-schema': ^4.1.1 @@ -552,6 +566,12 @@ packages: '@cfworker/json-schema': optional: true + '@napi-rs/wasm-runtime@1.1.2': + resolution: {integrity: sha512-sNXv5oLJ7ob93xkZ1XnxisYhGYXfaG9f65/ZgYuAu3qt7b3NadcOEhLvx28hv31PgX8SZJRYrAIPQilQmFpLVw==} + peerDependencies: + '@emnapi/core': ^1.7.1 + '@emnapi/runtime': ^1.7.1 + '@octokit/auth-token@6.0.0': resolution: {integrity: sha512-P4YJBPdPSpWTQ1NU4XYdvHvXJJDxM6YwpS0FZHRgP7YFkdVxsWcpWGy/NVqlAA7PcPCnMacXlRm1y2PFZRWL/w==} engines: {node: '>= 20'} @@ -600,6 +620,9 @@ packages: '@octokit/types@16.0.0': resolution: {integrity: sha512-sKq+9r1Mm4efXW1FCk7hFSeJo4QKreL/tTbR0rz/qx/r1Oa2VV83LTA/H/MuCOX7uCIJmQVRKBcbmWoySjAnSg==} + '@oxc-project/types@0.122.0': + resolution: {integrity: sha512-oLAl5kBpV4w69UtFZ9xqcmTi+GENWOcPF7FCrczTiBbmC0ibXxCwyvZGbO39rCVEuLGAZM84DH0pUIyyv/YJzA==} + 
'@pnpm/config.env-replace@1.1.0': resolution: {integrity: sha512-htyl8TWnKL7K/ESFa1oW2UB5lVDxuF5DpM7tBi6Hu2LNL3mWkIzNLG6N4zoCUP1lCKNxWy/3iu8mS8MvToGd6w==} engines: {node: '>=12.22.0'} @@ -612,128 +635,220 @@ packages: resolution: {integrity: sha512-h104Kh26rR8tm+a3Qkc5S4VLYint3FE48as7+/5oCEcKR2idC/pF1G6AhIXKI+eHPJa/3J9i5z0Al47IeGHPkA==} engines: {node: '>=12'} - '@rollup/rollup-android-arm-eabi@4.57.0': - resolution: {integrity: sha512-tPgXB6cDTndIe1ah7u6amCI1T0SsnlOuKgg10Xh3uizJk4e5M1JGaUMk7J4ciuAUcFpbOiNhm2XIjP9ON0dUqA==} + '@rolldown/binding-android-arm64@1.0.0-rc.12': + resolution: {integrity: sha512-pv1y2Fv0JybcykuiiD3qBOBdz6RteYojRFY1d+b95WVuzx211CRh+ytI/+9iVyWQ6koTh5dawe4S/yRfOFjgaA==} + engines: {node: ^20.19.0 || >=22.12.0} + cpu: [arm64] + os: [android] + + '@rolldown/binding-darwin-arm64@1.0.0-rc.12': + resolution: {integrity: sha512-cFYr6zTG/3PXXF3pUO+umXxt1wkRK/0AYT8lDwuqvRC+LuKYWSAQAQZjCWDQpAH172ZV6ieYrNnFzVVcnSflAg==} + engines: {node: ^20.19.0 || >=22.12.0} + cpu: [arm64] + os: [darwin] + + '@rolldown/binding-darwin-x64@1.0.0-rc.12': + resolution: {integrity: sha512-ZCsYknnHzeXYps0lGBz8JrF37GpE9bFVefrlmDrAQhOEi4IOIlcoU1+FwHEtyXGx2VkYAvhu7dyBf75EJQffBw==} + engines: {node: ^20.19.0 || >=22.12.0} + cpu: [x64] + os: [darwin] + + '@rolldown/binding-freebsd-x64@1.0.0-rc.12': + resolution: {integrity: sha512-dMLeprcVsyJsKolRXyoTH3NL6qtsT0Y2xeuEA8WQJquWFXkEC4bcu1rLZZSnZRMtAqwtrF/Ib9Ddtpa/Gkge9Q==} + engines: {node: ^20.19.0 || >=22.12.0} + cpu: [x64] + os: [freebsd] + + '@rolldown/binding-linux-arm-gnueabihf@1.0.0-rc.12': + resolution: {integrity: sha512-YqWjAgGC/9M1lz3GR1r1rP79nMgo3mQiiA+Hfo+pvKFK1fAJ1bCi0ZQVh8noOqNacuY1qIcfyVfP6HoyBRZ85Q==} + engines: {node: ^20.19.0 || >=22.12.0} + cpu: [arm] + os: [linux] + + '@rolldown/binding-linux-arm64-gnu@1.0.0-rc.12': + resolution: {integrity: sha512-/I5AS4cIroLpslsmzXfwbe5OmWvSsrFuEw3mwvbQ1kDxJ822hFHIx+vsN/TAzNVyepI/j/GSzrtCIwQPeKCLIg==} + engines: {node: ^20.19.0 || >=22.12.0} + cpu: [arm64] + os: [linux] + + 
'@rolldown/binding-linux-arm64-musl@1.0.0-rc.12': + resolution: {integrity: sha512-V6/wZztnBqlx5hJQqNWwFdxIKN0m38p8Jas+VoSfgH54HSj9tKTt1dZvG6JRHcjh6D7TvrJPWFGaY9UBVOaWPw==} + engines: {node: ^20.19.0 || >=22.12.0} + cpu: [arm64] + os: [linux] + + '@rolldown/binding-linux-ppc64-gnu@1.0.0-rc.12': + resolution: {integrity: sha512-AP3E9BpcUYliZCxa3w5Kwj9OtEVDYK6sVoUzy4vTOJsjPOgdaJZKFmN4oOlX0Wp0RPV2ETfmIra9x1xuayFB7g==} + engines: {node: ^20.19.0 || >=22.12.0} + cpu: [ppc64] + os: [linux] + + '@rolldown/binding-linux-s390x-gnu@1.0.0-rc.12': + resolution: {integrity: sha512-nWwpvUSPkoFmZo0kQazZYOrT7J5DGOJ/+QHHzjvNlooDZED8oH82Yg67HvehPPLAg5fUff7TfWFHQS8IV1n3og==} + engines: {node: ^20.19.0 || >=22.12.0} + cpu: [s390x] + os: [linux] + + '@rolldown/binding-linux-x64-gnu@1.0.0-rc.12': + resolution: {integrity: sha512-RNrafz5bcwRy+O9e6P8Z/OCAJW/A+qtBczIqVYwTs14pf4iV1/+eKEjdOUta93q2TsT/FI0XYDP3TCky38LMAg==} + engines: {node: ^20.19.0 || >=22.12.0} + cpu: [x64] + os: [linux] + + '@rolldown/binding-linux-x64-musl@1.0.0-rc.12': + resolution: {integrity: sha512-Jpw/0iwoKWx3LJ2rc1yjFrj+T7iHZn2JDg1Yny1ma0luviFS4mhAIcd1LFNxK3EYu3DHWCps0ydXQ5i/rrJ2ig==} + engines: {node: ^20.19.0 || >=22.12.0} + cpu: [x64] + os: [linux] + + '@rolldown/binding-openharmony-arm64@1.0.0-rc.12': + resolution: {integrity: sha512-vRugONE4yMfVn0+7lUKdKvN4D5YusEiPilaoO2sgUWpCvrncvWgPMzK00ZFFJuiPgLwgFNP5eSiUlv2tfc+lpA==} + engines: {node: ^20.19.0 || >=22.12.0} + cpu: [arm64] + os: [openharmony] + + '@rolldown/binding-wasm32-wasi@1.0.0-rc.12': + resolution: {integrity: sha512-ykGiLr/6kkiHc0XnBfmFJuCjr5ZYKKofkx+chJWDjitX+KsJuAmrzWhwyOMSHzPhzOHOy7u9HlFoa5MoAOJ/Zg==} + engines: {node: '>=14.0.0'} + cpu: [wasm32] + + '@rolldown/binding-win32-arm64-msvc@1.0.0-rc.12': + resolution: {integrity: sha512-5eOND4duWkwx1AzCxadcOrNeighiLwMInEADT0YM7xeEOOFcovWZCq8dadXgcRHSf3Ulh1kFo/qvzoFiCLOL1Q==} + engines: {node: ^20.19.0 || >=22.12.0} + cpu: [arm64] + os: [win32] + + '@rolldown/binding-win32-x64-msvc@1.0.0-rc.12': + 
resolution: {integrity: sha512-PyqoipaswDLAZtot351MLhrlrh6lcZPo2LSYE+VDxbVk24LVKAGOuE4hb8xZQmrPAuEtTZW8E6D2zc5EUZX4Lw==} + engines: {node: ^20.19.0 || >=22.12.0} + cpu: [x64] + os: [win32] + + '@rolldown/pluginutils@1.0.0-rc.12': + resolution: {integrity: sha512-HHMwmarRKvoFsJorqYlFeFRzXZqCt2ETQlEDOb9aqssrnVBB1/+xgTGtuTrIk5vzLNX1MjMtTf7W9z3tsSbrxw==} + + '@rollup/rollup-android-arm-eabi@4.60.0': + resolution: {integrity: sha512-WOhNW9K8bR3kf4zLxbfg6Pxu2ybOUbB2AjMDHSQx86LIF4rH4Ft7vmMwNt0loO0eonglSNy4cpD3MKXXKQu0/A==} cpu: [arm] os: [android] - '@rollup/rollup-android-arm64@4.57.0': - resolution: {integrity: sha512-sa4LyseLLXr1onr97StkU1Nb7fWcg6niokTwEVNOO7awaKaoRObQ54+V/hrF/BP1noMEaaAW6Fg2d/CfLiq3Mg==} + '@rollup/rollup-android-arm64@4.60.0': + resolution: {integrity: sha512-u6JHLll5QKRvjciE78bQXDmqRqNs5M/3GVqZeMwvmjaNODJih/WIrJlFVEihvV0MiYFmd+ZyPr9wxOVbPAG2Iw==} cpu: [arm64] os: [android] - '@rollup/rollup-darwin-arm64@4.57.0': - resolution: {integrity: sha512-/NNIj9A7yLjKdmkx5dC2XQ9DmjIECpGpwHoGmA5E1AhU0fuICSqSWScPhN1yLCkEdkCwJIDu2xIeLPs60MNIVg==} + '@rollup/rollup-darwin-arm64@4.60.0': + resolution: {integrity: sha512-qEF7CsKKzSRc20Ciu2Zw1wRrBz4g56F7r/vRwY430UPp/nt1x21Q/fpJ9N5l47WWvJlkNCPJz3QRVw008fi7yA==} cpu: [arm64] os: [darwin] - '@rollup/rollup-darwin-x64@4.57.0': - resolution: {integrity: sha512-xoh8abqgPrPYPr7pTYipqnUi1V3em56JzE/HgDgitTqZBZ3yKCWI+7KUkceM6tNweyUKYru1UMi7FC060RyKwA==} + '@rollup/rollup-darwin-x64@4.60.0': + resolution: {integrity: sha512-WADYozJ4QCnXCH4wPB+3FuGmDPoFseVCUrANmA5LWwGmC6FL14BWC7pcq+FstOZv3baGX65tZ378uT6WG8ynTw==} cpu: [x64] os: [darwin] - '@rollup/rollup-freebsd-arm64@4.57.0': - resolution: {integrity: sha512-PCkMh7fNahWSbA0OTUQ2OpYHpjZZr0hPr8lId8twD7a7SeWrvT3xJVyza+dQwXSSq4yEQTMoXgNOfMCsn8584g==} + '@rollup/rollup-freebsd-arm64@4.60.0': + resolution: {integrity: sha512-6b8wGHJlDrGeSE3aH5mGNHBjA0TTkxdoNHik5EkvPHCt351XnigA4pS7Wsj/Eo9Y8RBU6f35cjN9SYmCFBtzxw==} cpu: [arm64] os: [freebsd] - '@rollup/rollup-freebsd-x64@4.57.0': - 
resolution: {integrity: sha512-1j3stGx+qbhXql4OCDZhnK7b01s6rBKNybfsX+TNrEe9JNq4DLi1yGiR1xW+nL+FNVvI4D02PUnl6gJ/2y6WJA==} + '@rollup/rollup-freebsd-x64@4.60.0': + resolution: {integrity: sha512-h25Ga0t4jaylMB8M/JKAyrvvfxGRjnPQIR8lnCayyzEjEOx2EJIlIiMbhpWxDRKGKF8jbNH01NnN663dH638mA==} cpu: [x64] os: [freebsd] - '@rollup/rollup-linux-arm-gnueabihf@4.57.0': - resolution: {integrity: sha512-eyrr5W08Ms9uM0mLcKfM/Uzx7hjhz2bcjv8P2uynfj0yU8GGPdz8iYrBPhiLOZqahoAMB8ZiolRZPbbU2MAi6Q==} + '@rollup/rollup-linux-arm-gnueabihf@4.60.0': + resolution: {integrity: sha512-RzeBwv0B3qtVBWtcuABtSuCzToo2IEAIQrcyB/b2zMvBWVbjo8bZDjACUpnaafaxhTw2W+imQbP2BD1usasK4g==} cpu: [arm] os: [linux] - '@rollup/rollup-linux-arm-musleabihf@4.57.0': - resolution: {integrity: sha512-Xds90ITXJCNyX9pDhqf85MKWUI4lqjiPAipJ8OLp8xqI2Ehk+TCVhF9rvOoN8xTbcafow3QOThkNnrM33uCFQA==} + '@rollup/rollup-linux-arm-musleabihf@4.60.0': + resolution: {integrity: sha512-Sf7zusNI2CIU1HLzuu9Tc5YGAHEZs5Lu7N1ssJG4Tkw6e0MEsN7NdjUDDfGNHy2IU+ENyWT+L2obgWiguWibWQ==} cpu: [arm] os: [linux] - '@rollup/rollup-linux-arm64-gnu@4.57.0': - resolution: {integrity: sha512-Xws2KA4CLvZmXjy46SQaXSejuKPhwVdaNinldoYfqruZBaJHqVo6hnRa8SDo9z7PBW5x84SH64+izmldCgbezw==} + '@rollup/rollup-linux-arm64-gnu@4.60.0': + resolution: {integrity: sha512-DX2x7CMcrJzsE91q7/O02IJQ5/aLkVtYFryqCjduJhUfGKG6yJV8hxaw8pZa93lLEpPTP/ohdN4wFz7yp/ry9A==} cpu: [arm64] os: [linux] - '@rollup/rollup-linux-arm64-musl@4.57.0': - resolution: {integrity: sha512-hrKXKbX5FdaRJj7lTMusmvKbhMJSGWJ+w++4KmjiDhpTgNlhYobMvKfDoIWecy4O60K6yA4SnztGuNTQF+Lplw==} + '@rollup/rollup-linux-arm64-musl@4.60.0': + resolution: {integrity: sha512-09EL+yFVbJZlhcQfShpswwRZ0Rg+z/CsSELFCnPt3iK+iqwGsI4zht3secj5vLEs957QvFFXnzAT0FFPIxSrkQ==} cpu: [arm64] os: [linux] - '@rollup/rollup-linux-loong64-gnu@4.57.0': - resolution: {integrity: sha512-6A+nccfSDGKsPm00d3xKcrsBcbqzCTAukjwWK6rbuAnB2bHaL3r9720HBVZ/no7+FhZLz/U3GwwZZEh6tOSI8Q==} + '@rollup/rollup-linux-loong64-gnu@4.60.0': + resolution: {integrity: 
sha512-i9IcCMPr3EXm8EQg5jnja0Zyc1iFxJjZWlb4wr7U2Wx/GrddOuEafxRdMPRYVaXjgbhvqalp6np07hN1w9kAKw==} cpu: [loong64] os: [linux] - '@rollup/rollup-linux-loong64-musl@4.57.0': - resolution: {integrity: sha512-4P1VyYUe6XAJtQH1Hh99THxr0GKMMwIXsRNOceLrJnaHTDgk1FTcTimDgneRJPvB3LqDQxUmroBclQ1S0cIJwQ==} + '@rollup/rollup-linux-loong64-musl@4.60.0': + resolution: {integrity: sha512-DGzdJK9kyJ+B78MCkWeGnpXJ91tK/iKA6HwHxF4TAlPIY7GXEvMe8hBFRgdrR9Ly4qebR/7gfUs9y2IoaVEyog==} cpu: [loong64] os: [linux] - '@rollup/rollup-linux-ppc64-gnu@4.57.0': - resolution: {integrity: sha512-8Vv6pLuIZCMcgXre6c3nOPhE0gjz1+nZP6T+hwWjr7sVH8k0jRkH+XnfjjOTglyMBdSKBPPz54/y1gToSKwrSQ==} + '@rollup/rollup-linux-ppc64-gnu@4.60.0': + resolution: {integrity: sha512-RwpnLsqC8qbS8z1H1AxBA1H6qknR4YpPR9w2XX0vo2Sz10miu57PkNcnHVaZkbqyw/kUWfKMI73jhmfi9BRMUQ==} cpu: [ppc64] os: [linux] - '@rollup/rollup-linux-ppc64-musl@4.57.0': - resolution: {integrity: sha512-r1te1M0Sm2TBVD/RxBPC6RZVwNqUTwJTA7w+C/IW5v9Ssu6xmxWEi+iJQlpBhtUiT1raJ5b48pI8tBvEjEFnFA==} + '@rollup/rollup-linux-ppc64-musl@4.60.0': + resolution: {integrity: sha512-Z8pPf54Ly3aqtdWC3G4rFigZgNvd+qJlOE52fmko3KST9SoGfAdSRCwyoyG05q1HrrAblLbk1/PSIV+80/pxLg==} cpu: [ppc64] os: [linux] - '@rollup/rollup-linux-riscv64-gnu@4.57.0': - resolution: {integrity: sha512-say0uMU/RaPm3CDQLxUUTF2oNWL8ysvHkAjcCzV2znxBr23kFfaxocS9qJm+NdkRhF8wtdEEAJuYcLPhSPbjuQ==} + '@rollup/rollup-linux-riscv64-gnu@4.60.0': + resolution: {integrity: sha512-3a3qQustp3COCGvnP4SvrMHnPQ9d1vzCakQVRTliaz8cIp/wULGjiGpbcqrkv0WrHTEp8bQD/B3HBjzujVWLOA==} cpu: [riscv64] os: [linux] - '@rollup/rollup-linux-riscv64-musl@4.57.0': - resolution: {integrity: sha512-/MU7/HizQGsnBREtRpcSbSV1zfkoxSTR7wLsRmBPQ8FwUj5sykrP1MyJTvsxP5KBq9SyE6kH8UQQQwa0ASeoQQ==} + '@rollup/rollup-linux-riscv64-musl@4.60.0': + resolution: {integrity: sha512-pjZDsVH/1VsghMJ2/kAaxt6dL0psT6ZexQVrijczOf+PeP2BUqTHYejk3l6TlPRydggINOeNRhvpLa0AYpCWSQ==} cpu: [riscv64] os: [linux] - '@rollup/rollup-linux-s390x-gnu@4.57.0': - resolution: 
{integrity: sha512-Q9eh+gUGILIHEaJf66aF6a414jQbDnn29zeu0eX3dHMuysnhTvsUvZTCAyZ6tJhUjnvzBKE4FtuaYxutxRZpOg==} + '@rollup/rollup-linux-s390x-gnu@4.60.0': + resolution: {integrity: sha512-3ObQs0BhvPgiUVZrN7gqCSvmFuMWvWvsjG5ayJ3Lraqv+2KhOsp+pUbigqbeWqueGIsnn+09HBw27rJ+gYK4VQ==} cpu: [s390x] os: [linux] - '@rollup/rollup-linux-x64-gnu@4.57.0': - resolution: {integrity: sha512-OR5p5yG5OKSxHReWmwvM0P+VTPMwoBS45PXTMYaskKQqybkS3Kmugq1W+YbNWArF8/s7jQScgzXUhArzEQ7x0A==} + '@rollup/rollup-linux-x64-gnu@4.60.0': + resolution: {integrity: sha512-EtylprDtQPdS5rXvAayrNDYoJhIz1/vzN2fEubo3yLE7tfAw+948dO0g4M0vkTVFhKojnF+n6C8bDNe+gDRdTg==} cpu: [x64] os: [linux] - '@rollup/rollup-linux-x64-musl@4.57.0': - resolution: {integrity: sha512-XeatKzo4lHDsVEbm1XDHZlhYZZSQYym6dg2X/Ko0kSFgio+KXLsxwJQprnR48GvdIKDOpqWqssC3iBCjoMcMpw==} + '@rollup/rollup-linux-x64-musl@4.60.0': + resolution: {integrity: sha512-k09oiRCi/bHU9UVFqD17r3eJR9bn03TyKraCrlz5ULFJGdJGi7VOmm9jl44vOJvRJ6P7WuBi/s2A97LxxHGIdw==} cpu: [x64] os: [linux] - '@rollup/rollup-openbsd-x64@4.57.0': - resolution: {integrity: sha512-Lu71y78F5qOfYmubYLHPcJm74GZLU6UJ4THkf/a1K7Tz2ycwC2VUbsqbJAXaR6Bx70SRdlVrt2+n5l7F0agTUw==} + '@rollup/rollup-openbsd-x64@4.60.0': + resolution: {integrity: sha512-1o/0/pIhozoSaDJoDcec+IVLbnRtQmHwPV730+AOD29lHEEo4F5BEUB24H0OBdhbBBDwIOSuf7vgg0Ywxdfiiw==} cpu: [x64] os: [openbsd] - '@rollup/rollup-openharmony-arm64@4.57.0': - resolution: {integrity: sha512-v5xwKDWcu7qhAEcsUubiav7r+48Uk/ENWdr82MBZZRIm7zThSxCIVDfb3ZeRRq9yqk+oIzMdDo6fCcA5DHfMyA==} + '@rollup/rollup-openharmony-arm64@4.60.0': + resolution: {integrity: sha512-pESDkos/PDzYwtyzB5p/UoNU/8fJo68vcXM9ZW2V0kjYayj1KaaUfi1NmTUTUpMn4UhU4gTuK8gIaFO4UGuMbA==} cpu: [arm64] os: [openharmony] - '@rollup/rollup-win32-arm64-msvc@4.57.0': - resolution: {integrity: sha512-XnaaaSMGSI6Wk8F4KK3QP7GfuuhjGchElsVerCplUuxRIzdvZ7hRBpLR0omCmw+kI2RFJB80nenhOoGXlJ5TfQ==} + '@rollup/rollup-win32-arm64-msvc@4.60.0': + resolution: {integrity: 
sha512-hj1wFStD7B1YBeYmvY+lWXZ7ey73YGPcViMShYikqKT1GtstIKQAtfUI6yrzPjAy/O7pO0VLXGmUVWXQMaYgTQ==} cpu: [arm64] os: [win32] - '@rollup/rollup-win32-ia32-msvc@4.57.0': - resolution: {integrity: sha512-3K1lP+3BXY4t4VihLw5MEg6IZD3ojSYzqzBG571W3kNQe4G4CcFpSUQVgurYgib5d+YaCjeFow8QivWp8vuSvA==} + '@rollup/rollup-win32-ia32-msvc@4.60.0': + resolution: {integrity: sha512-SyaIPFoxmUPlNDq5EHkTbiKzmSEmq/gOYFI/3HHJ8iS/v1mbugVa7dXUzcJGQfoytp9DJFLhHH4U3/eTy2Bq4w==} cpu: [ia32] os: [win32] - '@rollup/rollup-win32-x64-gnu@4.57.0': - resolution: {integrity: sha512-MDk610P/vJGc5L5ImE4k5s+GZT3en0KoK1MKPXCRgzmksAMk79j4h3k1IerxTNqwDLxsGxStEZVBqG0gIqZqoA==} + '@rollup/rollup-win32-x64-gnu@4.60.0': + resolution: {integrity: sha512-RdcryEfzZr+lAr5kRm2ucN9aVlCCa2QNq4hXelZxb8GG0NJSazq44Z3PCCc8wISRuCVnGs0lQJVX5Vp6fKA+IA==} cpu: [x64] os: [win32] - '@rollup/rollup-win32-x64-msvc@4.57.0': - resolution: {integrity: sha512-Zv7v6q6aV+VslnpwzqKAmrk5JdVkLUzok2208ZXGipjb+msxBr/fJPZyeEXiFgH7k62Ak0SLIfxQRZQvTuf7rQ==} + '@rollup/rollup-win32-x64-msvc@4.60.0': + resolution: {integrity: sha512-PrsWNQ8BuE00O3Xsx3ALh2Df8fAj9+cvvX9AIA6o4KpATR98c9mud4XtDWVvsEuyia5U4tVSTKygawyJkjm60w==} cpu: [x64] os: [win32] @@ -801,6 +916,9 @@ packages: '@standard-schema/spec@1.1.0': resolution: {integrity: sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w==} + '@tybys/wasm-util@0.10.1': + resolution: {integrity: sha512-9tTaPJLSiejZKx+Bmog4uSubteqTvFrVrURwkmHixBo0G4seD0zUxp98E1DzUBJxLQ3NPwXrGKDiVjwx/DpPsg==} + '@types/asciichart@1.5.8': resolution: {integrity: sha512-8yzgCUybv8/yUfj4WeTh7G+V+AxU7AzwsF2CrkTtARKHdrxE/EiByF2efUMj6qdm87tENucNu6pLs22RWU0H7g==} @@ -906,7 +1024,7 @@ packages: resolution: {integrity: sha512-HhVd0MDnzzsgevnOWCBj5Otnzobjy5wLBe4EdeeFGv8luMsGcYqDuFRMcttKWZA5vVO8RFjexVovXvAM4JoJDQ==} peerDependencies: msw: ^2.4.9 - vite: ^6.0.0 || ^7.0.0-0 + vite: '>=7.3.2' peerDependenciesMeta: msw: optional: true @@ -966,11 +1084,11 @@ packages: ajv: optional: true - 
ajv@6.12.6: - resolution: {integrity: sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==} + ajv@6.14.0: + resolution: {integrity: sha512-IWrosm/yrn43eiKqkfkHis7QioDleaXQHdDVPKg0FSwwd/DuvyX79TZnFOnYpB7dcsFAMmtFztZuXPDvSePkFw==} - ajv@8.17.1: - resolution: {integrity: sha512-B/gBuNg5SiMTrPkC+A2+cW0RszwxYmn6VYxB/inlBStS5nx6xHIt/ehKRhIMhqusl7a8LjQoZnjCs5vhwxOQ1g==} + ajv@8.18.0: + resolution: {integrity: sha512-PlXPeEWMXMZ7sPYOHqmDyCJzcfNrUr3fGNKtezX14ykXOEIvyK81d+qydx89KY5O71FKMPaQ2vBfBFI5NHR63A==} ansi-align@3.0.1: resolution: {integrity: sha512-IOfwwBF5iczOjp/WeY4YxyjqAFMQoZufdQWDd19SEExbVLNXqvpzSJ/M7Za4/sCPmQ0+GRquoA7bGcINcxew6w==} @@ -1021,8 +1139,9 @@ packages: ast-v8-to-istanbul@0.3.10: resolution: {integrity: sha512-p4K7vMz2ZSk3wN8l5o3y2bJAoZXT3VuJI5OLTATY/01CYWumWvwkUw0SqDBnNq6IiTO3qDa1eSQDibAV8g7XOQ==} - balanced-match@1.0.2: - resolution: {integrity: sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==} + balanced-match@4.0.4: + resolution: {integrity: sha512-BLrgEcRTwX2o6gGxGOCNyMvGSp35YofuYzw9h1IMTRmKqttAZZVU67bdb9Pr2vUHA8+j3i2tJfjO6C6+4myGTA==} + engines: {node: 18 || 20 || >=22} before-after-hook@4.0.0: resolution: {integrity: sha512-q6tR3RPqIB1pMiTRMFcZwuG5T8vwp+vUvEG0vuI6B+Rikh5BfPp2fQ82c925FOs+b0lcFQ8CFrL+KbilfZFhOQ==} @@ -1041,11 +1160,9 @@ packages: resolution: {integrity: sha512-F3PH5k5juxom4xktynS7MoFY+NUWH5LC4CnH11YB8NPew+HLpmBLCybSAEyb2F+4pRXhuhWqFesoQd6DAyc2hw==} engines: {node: '>=18'} - brace-expansion@1.1.12: - resolution: {integrity: sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==} - - brace-expansion@2.0.2: - resolution: {integrity: sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==} + brace-expansion@5.0.5: + resolution: {integrity: sha512-VZznLgtwhn+Mact9tfiwx64fA9erHH/MCXEUfB/0bX/6Fz6ny5EGTXYltMocqg4xFAQZtnO3DHWWXi8RiuN7cQ==} + engines: {node: 18 || 20 || 
>=22} braces@3.0.3: resolution: {integrity: sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==} @@ -1176,9 +1293,6 @@ packages: compare-func@2.0.0: resolution: {integrity: sha512-zHig5N+tPWARooBnb0Zx1MFcdfpyJrfTJ3Y5L+IFvUm8rM74hHz66z0gw0x4tijh5CorKkKUCnW82R2vmpeCRA==} - concat-map@0.0.1: - resolution: {integrity: sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==} - confbox@0.1.8: resolution: {integrity: sha512-RMtmw0iFkeR4YV+fUOSucriAQNb9g8zFR52MWCtl+cCZOFRNL6zeB395vPzFhEjjn4fMxXudmELnl/KF/WrK6w==} @@ -1303,6 +1417,10 @@ packages: resolution: {integrity: sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==} engines: {node: '>= 0.8'} + detect-libc@2.1.2: + resolution: {integrity: sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ==} + engines: {node: '>=8'} + dir-glob@3.0.1: resolution: {integrity: sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==} engines: {node: '>=8'} @@ -1475,8 +1593,8 @@ packages: resolution: {integrity: sha512-knvyeauYhqjOYvQ66MznSMs83wmHrCycNEN6Ao+2AeYEfxUIkuiVxdEa1qlGEPK+We3n0THiDciYSsCcgW/DoA==} engines: {node: '>=12.0.0'} - express-rate-limit@7.5.1: - resolution: {integrity: sha512-7iN8iPMDzOMHPUYllBEsQdWVB6fPDMPqwjBaFrgr4Jgr/+okjvzAy+UHlYYL/Vs0OsOrMkwS6PJDkFlJwoxUnw==} + express-rate-limit@8.3.1: + resolution: {integrity: sha512-D1dKN+cmyPWuvB+G2SREQDzPY1agpBIcTa9sJxOPMCNeH3gwzhqJRDWCXW3gg0y//+LQ/8j52JbMROWyrKdMdw==} engines: {node: '>= 16'} peerDependencies: express: '>= 4.11' @@ -1504,7 +1622,7 @@ packages: resolution: {integrity: sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==} engines: {node: '>=12.0.0'} peerDependencies: - picomatch: ^3 || ^4 + picomatch: '>=4.0.4' peerDependenciesMeta: picomatch: optional: true @@ -1561,8 +1679,8 @@ packages: resolution: {integrity: 
sha512-f7ccFPK3SXFHpx15UIGyRJ/FJQctuKZ0zVuN3frBo4HnK3cay9VEW0R6yPYFHC0AgqhukPzKjq22t5DmAyqGyw==} engines: {node: '>=16'} - flatted@3.3.3: - resolution: {integrity: sha512-GX+ysw4PBCz0PzosHDepZGANEuFCMLrnRTiEy9McGjmkCQYwRq4A/X786G/fjM/+OjsWSU1ZrY5qyARZmO/uwg==} + flatted@3.4.2: + resolution: {integrity: sha512-PjDse7RzhcPkIJwy5t7KPWQSZ9cAbzQXcafsetQoD7sOJRQlGikNbx7yZp2OotDnJyrDcbyRq3Ttb18iYOqkxA==} forwarded@0.2.0: resolution: {integrity: sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==} @@ -1657,8 +1775,8 @@ packages: graceful-fs@4.2.11: resolution: {integrity: sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==} - handlebars@4.7.8: - resolution: {integrity: sha512-vafaFqs8MZkRrSX7sFVUdo3ap/eNiLnb4IakshzvP56X5Nr1iGKAIqdX6tMlm6HcNRIkr6AxO5jFEoJzzpT8aQ==} + handlebars@4.7.9: + resolution: {integrity: sha512-4E71E0rpOaQuJR2A3xDZ+GM1HyWYv1clR58tC8emQNeQe3RH7MAzSbat+V0wG78LQBo6m6bzSG/L4pBuCsgnUQ==} engines: {node: '>=0.4.7'} hasBin: true @@ -1681,8 +1799,8 @@ packages: highlight.js@10.7.3: resolution: {integrity: sha512-tzcUFauisWKNHaRkN4Wjl/ZA07gENAjFl3J/c480dprkGTg5EQstgaNFqBfUqCq54kZRIEcreTsAgF/m2quD7A==} - hono@4.11.7: - resolution: {integrity: sha512-l7qMiNee7t82bH3SeyUCt9UF15EVmaBvsppY2zQtrbIhl/yzBTny+YUxsVjSjQ6gaqaeVtZmGocom8TzBlA4Yw==} + hono@4.12.8: + resolution: {integrity: sha512-VJCEvtrezO1IAR+kqEYnxUOoStaQPGrCmX3j4wDTNOcD1uRPFpGlwQUIW8niPuvHXaTUxeOUl5MMDGrl+tmO9A==} engines: {node: '>=16.9.0'} hook-std@4.0.0: @@ -1782,6 +1900,10 @@ packages: resolution: {integrity: sha512-2dYz766i9HprMBasCMvHMuazJ7u4WzhJwo5kb3iPSiW/iRYV6uPari3zHoqZlnuaR7V1bEiNMxikhp37rdBXbw==} engines: {node: '>=12'} + ip-address@10.1.0: + resolution: {integrity: sha512-XXADHxXmvT9+CRxhXg56LJovE+bmWnEWB78LB83VZTprKTmaC5QfruXocxzTZ2Kl0DNwKuBdlIhjL8LeY8Sf8Q==} + engines: {node: '>= 12'} + ipaddr.js@1.9.1: resolution: {integrity: 
sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==} engines: {node: '>= 0.10'} @@ -1933,6 +2055,76 @@ packages: resolution: {integrity: sha512-+bT2uH4E5LGE7h/n3evcS/sQlJXCpIp6ym8OWJ5eV6+67Dsql/LaaT7qJBAt2rzfoa/5QBGBhxDix1dMt2kQKQ==} engines: {node: '>= 0.8.0'} + lightningcss-android-arm64@1.32.0: + resolution: {integrity: sha512-YK7/ClTt4kAK0vo6w3X+Pnm0D2cf2vPHbhOXdoNti1Ga0al1P4TBZhwjATvjNwLEBCnKvjJc2jQgHXH0NEwlAg==} + engines: {node: '>= 12.0.0'} + cpu: [arm64] + os: [android] + + lightningcss-darwin-arm64@1.32.0: + resolution: {integrity: sha512-RzeG9Ju5bag2Bv1/lwlVJvBE3q6TtXskdZLLCyfg5pt+HLz9BqlICO7LZM7VHNTTn/5PRhHFBSjk5lc4cmscPQ==} + engines: {node: '>= 12.0.0'} + cpu: [arm64] + os: [darwin] + + lightningcss-darwin-x64@1.32.0: + resolution: {integrity: sha512-U+QsBp2m/s2wqpUYT/6wnlagdZbtZdndSmut/NJqlCcMLTWp5muCrID+K5UJ6jqD2BFshejCYXniPDbNh73V8w==} + engines: {node: '>= 12.0.0'} + cpu: [x64] + os: [darwin] + + lightningcss-freebsd-x64@1.32.0: + resolution: {integrity: sha512-JCTigedEksZk3tHTTthnMdVfGf61Fky8Ji2E4YjUTEQX14xiy/lTzXnu1vwiZe3bYe0q+SpsSH/CTeDXK6WHig==} + engines: {node: '>= 12.0.0'} + cpu: [x64] + os: [freebsd] + + lightningcss-linux-arm-gnueabihf@1.32.0: + resolution: {integrity: sha512-x6rnnpRa2GL0zQOkt6rts3YDPzduLpWvwAF6EMhXFVZXD4tPrBkEFqzGowzCsIWsPjqSK+tyNEODUBXeeVHSkw==} + engines: {node: '>= 12.0.0'} + cpu: [arm] + os: [linux] + + lightningcss-linux-arm64-gnu@1.32.0: + resolution: {integrity: sha512-0nnMyoyOLRJXfbMOilaSRcLH3Jw5z9HDNGfT/gwCPgaDjnx0i8w7vBzFLFR1f6CMLKF8gVbebmkUN3fa/kQJpQ==} + engines: {node: '>= 12.0.0'} + cpu: [arm64] + os: [linux] + + lightningcss-linux-arm64-musl@1.32.0: + resolution: {integrity: sha512-UpQkoenr4UJEzgVIYpI80lDFvRmPVg6oqboNHfoH4CQIfNA+HOrZ7Mo7KZP02dC6LjghPQJeBsvXhJod/wnIBg==} + engines: {node: '>= 12.0.0'} + cpu: [arm64] + os: [linux] + + lightningcss-linux-x64-gnu@1.32.0: + resolution: {integrity: 
sha512-V7Qr52IhZmdKPVr+Vtw8o+WLsQJYCTd8loIfpDaMRWGUZfBOYEJeyJIkqGIDMZPwPx24pUMfwSxxI8phr/MbOA==} + engines: {node: '>= 12.0.0'} + cpu: [x64] + os: [linux] + + lightningcss-linux-x64-musl@1.32.0: + resolution: {integrity: sha512-bYcLp+Vb0awsiXg/80uCRezCYHNg1/l3mt0gzHnWV9XP1W5sKa5/TCdGWaR/zBM2PeF/HbsQv/j2URNOiVuxWg==} + engines: {node: '>= 12.0.0'} + cpu: [x64] + os: [linux] + + lightningcss-win32-arm64-msvc@1.32.0: + resolution: {integrity: sha512-8SbC8BR40pS6baCM8sbtYDSwEVQd4JlFTOlaD3gWGHfThTcABnNDBda6eTZeqbofalIJhFx0qKzgHJmcPTnGdw==} + engines: {node: '>= 12.0.0'} + cpu: [arm64] + os: [win32] + + lightningcss-win32-x64-msvc@1.32.0: + resolution: {integrity: sha512-Amq9B/SoZYdDi1kFrojnoqPLxYhQ4Wo5XiL8EVJrVsB8ARoC1PWW6VGtT0WKCemjy8aC+louJnjS7U18x3b06Q==} + engines: {node: '>= 12.0.0'} + cpu: [x64] + os: [win32] + + lightningcss@1.32.0: + resolution: {integrity: sha512-NXYBzinNrblfraPGyrbPoD19C1h9lfI/1mzgWYvXUTe414Gz/X1FD2XBZSZM7rRTrMA8JL3OtAaGifrIKhQ5yQ==} + engines: {node: '>= 12.0.0'} + lilconfig@3.1.3: resolution: {integrity: sha512-/vlFKAoH5Cgt3Ie+JLhRbwOsCQePABiU3tJ1egGvyQ+33R/vcwM2Zl2QR/LzjsBeItPt3oSVXapn+m4nQDvpzw==} engines: {node: '>=14'} @@ -1969,8 +2161,8 @@ packages: resolution: {integrity: sha512-gvVijfZvn7R+2qyPX8mAuKcFGDf6Nc61GdvGafQsHL0sBIxfKzA+usWn4GFC/bk+QdwPUD4kWFJLhElipq+0VA==} engines: {node: ^12.20.0 || ^14.13.1 || >=16.0.0} - lodash-es@4.17.23: - resolution: {integrity: sha512-kVI48u3PZr38HdYz98UmfPnXl2DXrpdctLrFLCd3kOx1xUkOmpFPx7gCWWM5MPkL/fD8zb+Ph0QzjGFs4+hHWg==} + lodash-es@4.18.1: + resolution: {integrity: sha512-J8xewKD/Gk22OZbhpOVSwcs60zhd95ESDwezOFuA3/099925PdHJ7OFHNTGtajL3AlZkykD32HykiMo+BIBI8A==} lodash.camelcase@4.3.0: resolution: {integrity: sha512-TwuEnCnxbc3rAvhf/LbG7tJUDzhqXyFnv3dtzLOPgCG/hODL7WFnsbwktkD7yUV0RrreP/l1PALq/YSg6VvjlA==} @@ -2011,8 +2203,8 @@ packages: lodash.upperfirst@4.3.1: resolution: {integrity: sha512-sReKOYJIJf74dhJONhU4e0/shzi1trVbSWDOhKYE5XV2O+H7Sb2Dihwuc7xWxVl+DgFPyTqIN3zMfT9cq5iWDg==} - lodash@4.17.23: - 
resolution: {integrity: sha512-LgVTMpQtIopCi79SJeDiP0TfWi5CNEc/L/aRdTh3yIvmZXTnheWpKjSZhnvMl8iXbC1tFg9gdHHDMLoV7CnG+w==} + lodash@4.18.1: + resolution: {integrity: sha512-dMInicTPVE8d1e5otfwmmjlxkZoUpiVLwyeTdUsi/Caj/gfzzblBcCE5sRHV/AsjuCmxWrte2TNGSYuCeCq+0Q==} log-symbols@7.0.1: resolution: {integrity: sha512-ja1E3yCr9i/0hmBVaM0bfwDjnGy8I/s6PP4DFp+yP+a+mrHO4Rm7DtmnqROTUkHIkqffC84YY7AeqX6oFk0WFg==} @@ -2106,16 +2298,9 @@ packages: resolution: {integrity: sha512-VP79XUPxV2CigYP3jWwAUFSku2aKqBH7uTAapFWCBqutsbmDo96KY5o8uh6U+/YSIn5OxJnXp73beVkpqMIGhA==} engines: {node: '>=18'} - minimatch@10.1.1: - resolution: {integrity: sha512-enIvLvRAFZYXJzkCYG5RKmPfrFArdLv+R+lbQ53BmIMLIry74bjKzX6iHAm8WYamJkhSSEabrWN5D97XnKObjQ==} - engines: {node: 20 || >=22} - - minimatch@3.1.2: - resolution: {integrity: sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw==} - - minimatch@9.0.5: - resolution: {integrity: sha512-G6T0ZX48xgozx7587koeX9Ys2NYy6Gmv//P89sEte9V9whIapMNF4idKxnW2QtCcLiTWlb/wfCabAtAFWhhBow==} - engines: {node: '>=16 || 14 >=14.17'} + minimatch@10.2.4: + resolution: {integrity: sha512-oRjTw/97aTBN0RHbYCdtF1MQfvusSIBQM0IZEgzl6426+8jSC0nF1a/GmnVLpfB9yyr6g6FTqWqiZVbxrtaCIg==} + engines: {node: 18 || 20 || >=22} minimist@1.2.8: resolution: {integrity: sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA==} @@ -2456,8 +2641,8 @@ packages: resolution: {integrity: sha512-oWyT4gICAu+kaA7QWk/jvCHWarMKNs6pXOGWKDTr7cw4IGcUbW+PeTfbaQiLGheFRpjo6O9J0PmyMfQPjH71oA==} engines: {node: 20 || >=22} - path-to-regexp@8.3.0: - resolution: {integrity: sha512-7jdwVIRtsP8MYpdXSwOS0YdD0Du+qOoF/AEPIt88PcCFrZCzx41oxku1jD88hZBwbNUIEfpqvuhjFaMAqMTWnA==} + path-to-regexp@8.4.2: + resolution: {integrity: sha512-qRcuIdP69NPm4qbACK+aDogI5CBDMi1jKe0ry5rSQJz8JVLsC7jV8XpiJjGRLLol3N+R5ihGYcrPLTno6pAdBA==} path-type@4.0.0: resolution: {integrity: 
sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==} @@ -2469,12 +2654,8 @@ packages: picocolors@1.1.1: resolution: {integrity: sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==} - picomatch@2.3.1: - resolution: {integrity: sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==} - engines: {node: '>=8.6'} - - picomatch@4.0.3: - resolution: {integrity: sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==} + picomatch@4.0.4: + resolution: {integrity: sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==} engines: {node: '>=12'} pidtree@0.6.0: @@ -2508,7 +2689,7 @@ packages: jiti: '>=1.21.0' postcss: '>=8.0.9' tsx: ^4.8.1 - yaml: ^2.4.2 + yaml: '>=2.8.3' peerDependenciesMeta: jiti: optional: true @@ -2519,8 +2700,8 @@ packages: yaml: optional: true - postcss@8.5.6: - resolution: {integrity: sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg==} + postcss@8.5.8: + resolution: {integrity: sha512-OW/rX8O/jXnm82Ey1k44pObPtdblfiuWnrd8X7GJ7emImCOstunGbXUpp7HdBrFQX6rJzn3sPT397Wp5aCwCHg==} engines: {node: ^10 || ^12 || >=14} prelude-ls@1.2.1: @@ -2554,8 +2735,8 @@ packages: resolution: {integrity: sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg==} engines: {node: '>=6'} - qs@6.14.1: - resolution: {integrity: sha512-4EK3+xJl8Ts67nLYNwqw/dsFVnCf+qR7RgXSK9jEEm9unao3njwMDdmsdvoKBKHzxd7tCYz5e5M+SnMjdtXGQQ==} + qs@6.15.0: + resolution: {integrity: sha512-mAZTtNCeetKMH+pSjrb76NAM8V9a05I9aBZOHztWy/UqcJdQYNsf59vrRKWnojAT9Y+GbIvoTBC++CPHqpDBhQ==} engines: {node: '>=0.6'} range-parser@1.2.1: @@ -2620,8 +2801,13 @@ packages: rfdc@1.4.1: resolution: {integrity: sha512-q1b3N5QkRUWUl7iyylaaj3kOpIT0N2i9MqIEQXP73GVsN9cw3fdx8X63cEmWhJGi2PPCF23Ijp7ktmd39rawIA==} - rollup@4.57.0: - resolution: 
{integrity: sha512-e5lPJi/aui4TO1LpAXIRLySmwXSE8k3b9zoGfd42p67wzxog4WHjiZF3M2uheQih4DGyc25QEV4yRBbpueNiUA==} + rolldown@1.0.0-rc.12: + resolution: {integrity: sha512-yP4USLIMYrwpPHEFB5JGH1uxhcslv6/hL0OyvTuY+3qlOSJvZ7ntYnoWpehBxufkgN0cvXxppuTu5hHa/zPh+A==} + engines: {node: ^20.19.0 || >=22.12.0} + hasBin: true + + rollup@4.60.0: + resolution: {integrity: sha512-yqjxruMGBQJ2gG4HtjZtAfXArHomazDHoFwFFmZZl0r7Pdo7qCIXKqKHZc8yeoMgzJJ+pO6pEEHa+V7uzWlrAQ==} engines: {node: '>=18.0.0', npm: '>=8.0.0'} hasBin: true @@ -2635,16 +2821,11 @@ packages: safer-buffer@2.1.2: resolution: {integrity: sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==} - semantic-release@25.0.2: - resolution: {integrity: sha512-6qGjWccl5yoyugHt3jTgztJ9Y0JVzyH8/Voc/D8PlLat9pwxQYXz7W1Dpnq5h0/G5GCYGUaDSlYcyk3AMh5A6g==} + semantic-release@25.0.3: + resolution: {integrity: sha512-WRgl5GcypwramYX4HV+eQGzUbD7UUbljVmS+5G1uMwX/wLgYuJAxGeerXJDMO2xshng4+FXqCgyB5QfClV6WjA==} engines: {node: ^22.14.0 || >= 24.10.0} hasBin: true - semver-diff@5.0.0: - resolution: {integrity: sha512-0HbGtOm+S7T6NGQ/pxJSJipJvc4DK3FcRVMRkhsIwJDJ4Jcz5DQC1cPPzB5GhzyHjwttW878HaWQq46CkL3cqg==} - engines: {node: '>=12'} - deprecated: Deprecated as the semver package now supports this built-in. 
- semver-regex@4.0.5: resolution: {integrity: sha512-hunMQrEy1T6Jr2uEVjrAIqjwWcQTgOAcIM52C8MY1EZSD3DDNft04XzvYKPqjED65bNVVko0YI38nYeEHCX3yw==} engines: {node: '>=12'} @@ -2916,6 +3097,9 @@ packages: ts-interface-checker@0.1.13: resolution: {integrity: sha512-Y/arvbn+rrz3JCKl9C4kVNfTfSm2/mEp5FSz5EsZSANGPSlQrpRI5M4PKF+mJnE52jOO90PnPSc3Ur3bTQw0gA==} + tslib@2.8.1: + resolution: {integrity: sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==} + tsup@8.5.1: resolution: {integrity: sha512-xtgkqwdhpKWr3tKPmCkvYmS9xnQK3m3XgxZHwSUjvfTjp7YfXe5tT3GgWi0F2N+ZSMsOeWeZFh7ZZFg5iPhing==} engines: {node: '>=18'} @@ -2986,8 +3170,8 @@ packages: undici-types@7.16.0: resolution: {integrity: sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw==} - undici@7.19.1: - resolution: {integrity: sha512-Gpq0iNm5M6cQWlyHQv9MV+uOj1jWk7LpkoE5vSp/7zjb4zMdAcUD+VL5y0nH4p9EbUklq00eVIIX/XcDHzu5xg==} + undici@7.24.5: + resolution: {integrity: sha512-3IWdCpjgxp15CbJnsi/Y9TCDE7HWVN19j1hmzVhoAkY/+CJx449tVxT5wZc1Gwg8J+P0LWvzlBzxYRnHJ+1i7Q==} engines: {node: '>=20.18.1'} unicode-emoji-modifier-base@1.0.0: @@ -3034,31 +3218,34 @@ packages: resolution: {integrity: sha512-BNGbWLfd0eUPabhkXUVm0j8uuvREyTh5ovRa/dyow/BqAbZJyC+5fU+IzQOzmAKzYqYRAISoRhdQr3eIZ/PXqg==} engines: {node: '>= 0.8'} - vite@7.3.1: - resolution: {integrity: sha512-w+N7Hifpc3gRjZ63vYBXA56dvvRlNWRczTdmCBBa+CotUzAPf5b7YMdMR/8CQoeYE5LX3W4wj6RYTgonm1b9DA==} + vite@8.0.5: + resolution: {integrity: sha512-nmu43Qvq9UopTRfMx2jOYW5l16pb3iDC1JH6yMuPkpVbzK0k+L7dfsEDH4jRgYFmsg0sTAqkojoZgzLMlwHsCQ==} engines: {node: ^20.19.0 || >=22.12.0} hasBin: true peerDependencies: '@types/node': ^20.19.0 || >=22.12.0 + '@vitejs/devtools': ^0.1.0 + esbuild: ^0.27.0 || ^0.28.0 jiti: '>=1.21.0' less: ^4.0.0 - lightningcss: ^1.21.0 sass: ^1.70.0 sass-embedded: ^1.70.0 stylus: '>=0.54.8' sugarss: ^5.0.0 terser: ^5.16.0 tsx: ^4.8.1 - yaml: ^2.4.2 + yaml: '>=2.8.3' peerDependenciesMeta: 
'@types/node': optional: true + '@vitejs/devtools': + optional: true + esbuild: + optional: true jiti: optional: true less: optional: true - lightningcss: - optional: true sass: optional: true sass-embedded: @@ -3151,8 +3338,8 @@ packages: resolution: {integrity: sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA==} engines: {node: '>=10'} - yaml@2.8.2: - resolution: {integrity: sha512-mplynKqc1C2hTVYxd0PU2xQAc22TI1vShAYGksCCfxbn/dFwnHTNi1bvYsBTkhdUNtGIf5xNOg938rrSSYvS9A==} + yaml@2.8.3: + resolution: {integrity: sha512-AvbaCLOO2Otw/lW5bmh9d/WEdcDFdQp2Z2ZUH3pX9U2ihyUY0nvLv7J6TrWowklRGPYbB/IuIMfYgxaCPg5Bpg==} engines: {node: '>= 14.6'} hasBin: true @@ -3214,7 +3401,7 @@ snapshots: '@actions/http-client@3.0.2': dependencies: tunnel: 0.0.6 - undici: 7.19.1 + undici: 7.24.5 '@actions/io@2.0.0': {} @@ -3263,7 +3450,7 @@ snapshots: '@commitlint/config-validator@20.3.1': dependencies: '@commitlint/types': 20.3.1 - ajv: 8.17.1 + ajv: 8.18.0 '@commitlint/ensure@20.3.1': dependencies: @@ -3352,6 +3539,22 @@ snapshots: '@types/conventional-commits-parser': 5.0.2 chalk: 5.6.2 + '@emnapi/core@1.9.2': + dependencies: + '@emnapi/wasi-threads': 1.2.1 + tslib: 2.8.1 + optional: true + + '@emnapi/runtime@1.9.2': + dependencies: + tslib: 2.8.1 + optional: true + + '@emnapi/wasi-threads@1.2.1': + dependencies: + tslib: 2.8.1 + optional: true + '@esbuild/aix-ppc64@0.27.2': optional: true @@ -3441,7 +3644,7 @@ snapshots: dependencies: '@eslint/object-schema': 2.1.7 debug: 4.4.3 - minimatch: 3.1.2 + minimatch: 10.2.4 transitivePeerDependencies: - supports-color @@ -3455,14 +3658,14 @@ snapshots: '@eslint/eslintrc@3.3.3': dependencies: - ajv: 6.12.6 + ajv: 6.14.0 debug: 4.4.3 espree: 10.4.0 globals: 14.0.0 ignore: 5.3.2 import-fresh: 3.3.1 js-yaml: 4.1.1 - minimatch: 3.1.2 + minimatch: 10.2.4 strip-json-comments: 3.1.1 transitivePeerDependencies: - supports-color @@ -3476,9 +3679,9 @@ snapshots: '@eslint/core': 0.17.0 levn: 0.4.1 - 
'@hono/node-server@1.19.9(hono@4.11.7)': + '@hono/node-server@1.19.11(hono@4.12.8)': dependencies: - hono: 4.11.7 + hono: 4.12.8 '@humanfs/core@0.19.1': {} @@ -3491,12 +3694,6 @@ snapshots: '@humanwhocodes/retry@0.4.3': {} - '@isaacs/balanced-match@4.0.1': {} - - '@isaacs/brace-expansion@5.0.0': - dependencies: - '@isaacs/balanced-match': 4.0.1 - '@jmlweb/commitlint-config@3.0.1(@commitlint/cli@20.3.1(@types/node@25.0.10)(typescript@5.9.3))(@commitlint/config-conventional@20.3.1)': dependencies: '@commitlint/cli': 20.3.1(@types/node@25.0.10)(typescript@5.9.3) @@ -3524,13 +3721,13 @@ snapshots: '@jmlweb/tsconfig-base@1.0.5': {} - '@jmlweb/tsup-config-base@1.1.4(tsup@8.5.1(jiti@2.6.1)(postcss@8.5.6)(typescript@5.9.3)(yaml@2.8.2))': + '@jmlweb/tsup-config-base@1.1.4(tsup@8.5.1(jiti@2.6.1)(postcss@8.5.8)(typescript@5.9.3)(yaml@2.8.3))': dependencies: - tsup: 8.5.1(jiti@2.6.1)(postcss@8.5.6)(typescript@5.9.3)(yaml@2.8.2) + tsup: 8.5.1(jiti@2.6.1)(postcss@8.5.8)(typescript@5.9.3)(yaml@2.8.3) - '@jmlweb/vitest-config@2.0.0(vitest@4.0.18(@types/node@25.0.10)(jiti@2.6.1)(yaml@2.8.2))': + '@jmlweb/vitest-config@2.0.0(vitest@4.0.18(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)(@types/node@25.0.10)(esbuild@0.27.2)(jiti@2.6.1)(yaml@2.8.3))': dependencies: - vitest: 4.0.18(@types/node@25.0.10)(jiti@2.6.1)(yaml@2.8.2) + vitest: 4.0.18(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)(@types/node@25.0.10)(esbuild@0.27.2)(jiti@2.6.1)(yaml@2.8.3) '@jridgewell/gen-mapping@0.3.13': dependencies: @@ -3546,18 +3743,19 @@ snapshots: '@jridgewell/resolve-uri': 3.1.2 '@jridgewell/sourcemap-codec': 1.5.5 - '@modelcontextprotocol/sdk@1.25.3(hono@4.11.7)(zod@4.3.6)': + '@modelcontextprotocol/sdk@1.27.1(zod@4.3.6)': dependencies: - '@hono/node-server': 1.19.9(hono@4.11.7) - ajv: 8.17.1 - ajv-formats: 3.0.1(ajv@8.17.1) + '@hono/node-server': 1.19.11(hono@4.12.8) + ajv: 8.18.0 + ajv-formats: 3.0.1(ajv@8.18.0) content-type: 1.0.5 cors: 2.8.6 cross-spawn: 7.0.6 eventsource: 3.0.7 eventsource-parser: 3.0.6 
express: 5.2.1 - express-rate-limit: 7.5.1(express@5.2.1) + express-rate-limit: 8.3.1(express@5.2.1) + hono: 4.12.8 jose: 6.1.3 json-schema-typed: 8.0.2 pkce-challenge: 5.0.1 @@ -3565,9 +3763,15 @@ snapshots: zod: 4.3.6 zod-to-json-schema: 3.25.1(zod@4.3.6) transitivePeerDependencies: - - hono - supports-color + '@napi-rs/wasm-runtime@1.1.2(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)': + dependencies: + '@emnapi/core': 1.9.2 + '@emnapi/runtime': 1.9.2 + '@tybys/wasm-util': 0.10.1 + optional: true + '@octokit/auth-token@6.0.0': {} '@octokit/core@7.0.6': @@ -3627,6 +3831,8 @@ snapshots: dependencies: '@octokit/openapi-types': 27.0.0 + '@oxc-project/types@0.122.0': {} + '@pnpm/config.env-replace@1.1.0': {} '@pnpm/network.ca-file@1.0.2': @@ -3639,92 +3845,144 @@ snapshots: '@pnpm/network.ca-file': 1.0.2 config-chain: 1.1.13 - '@rollup/rollup-android-arm-eabi@4.57.0': + '@rolldown/binding-android-arm64@1.0.0-rc.12': + optional: true + + '@rolldown/binding-darwin-arm64@1.0.0-rc.12': + optional: true + + '@rolldown/binding-darwin-x64@1.0.0-rc.12': + optional: true + + '@rolldown/binding-freebsd-x64@1.0.0-rc.12': + optional: true + + '@rolldown/binding-linux-arm-gnueabihf@1.0.0-rc.12': + optional: true + + '@rolldown/binding-linux-arm64-gnu@1.0.0-rc.12': + optional: true + + '@rolldown/binding-linux-arm64-musl@1.0.0-rc.12': + optional: true + + '@rolldown/binding-linux-ppc64-gnu@1.0.0-rc.12': + optional: true + + '@rolldown/binding-linux-s390x-gnu@1.0.0-rc.12': + optional: true + + '@rolldown/binding-linux-x64-gnu@1.0.0-rc.12': + optional: true + + '@rolldown/binding-linux-x64-musl@1.0.0-rc.12': + optional: true + + '@rolldown/binding-openharmony-arm64@1.0.0-rc.12': + optional: true + + '@rolldown/binding-wasm32-wasi@1.0.0-rc.12(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)': + dependencies: + '@napi-rs/wasm-runtime': 1.1.2(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) + transitivePeerDependencies: + - '@emnapi/core' + - '@emnapi/runtime' + optional: true + + 
'@rolldown/binding-win32-arm64-msvc@1.0.0-rc.12': + optional: true + + '@rolldown/binding-win32-x64-msvc@1.0.0-rc.12': optional: true - '@rollup/rollup-android-arm64@4.57.0': + '@rolldown/pluginutils@1.0.0-rc.12': {} + + '@rollup/rollup-android-arm-eabi@4.60.0': optional: true - '@rollup/rollup-darwin-arm64@4.57.0': + '@rollup/rollup-android-arm64@4.60.0': optional: true - '@rollup/rollup-darwin-x64@4.57.0': + '@rollup/rollup-darwin-arm64@4.60.0': optional: true - '@rollup/rollup-freebsd-arm64@4.57.0': + '@rollup/rollup-darwin-x64@4.60.0': optional: true - '@rollup/rollup-freebsd-x64@4.57.0': + '@rollup/rollup-freebsd-arm64@4.60.0': optional: true - '@rollup/rollup-linux-arm-gnueabihf@4.57.0': + '@rollup/rollup-freebsd-x64@4.60.0': optional: true - '@rollup/rollup-linux-arm-musleabihf@4.57.0': + '@rollup/rollup-linux-arm-gnueabihf@4.60.0': optional: true - '@rollup/rollup-linux-arm64-gnu@4.57.0': + '@rollup/rollup-linux-arm-musleabihf@4.60.0': optional: true - '@rollup/rollup-linux-arm64-musl@4.57.0': + '@rollup/rollup-linux-arm64-gnu@4.60.0': optional: true - '@rollup/rollup-linux-loong64-gnu@4.57.0': + '@rollup/rollup-linux-arm64-musl@4.60.0': optional: true - '@rollup/rollup-linux-loong64-musl@4.57.0': + '@rollup/rollup-linux-loong64-gnu@4.60.0': optional: true - '@rollup/rollup-linux-ppc64-gnu@4.57.0': + '@rollup/rollup-linux-loong64-musl@4.60.0': optional: true - '@rollup/rollup-linux-ppc64-musl@4.57.0': + '@rollup/rollup-linux-ppc64-gnu@4.60.0': optional: true - '@rollup/rollup-linux-riscv64-gnu@4.57.0': + '@rollup/rollup-linux-ppc64-musl@4.60.0': optional: true - '@rollup/rollup-linux-riscv64-musl@4.57.0': + '@rollup/rollup-linux-riscv64-gnu@4.60.0': optional: true - '@rollup/rollup-linux-s390x-gnu@4.57.0': + '@rollup/rollup-linux-riscv64-musl@4.60.0': optional: true - '@rollup/rollup-linux-x64-gnu@4.57.0': + '@rollup/rollup-linux-s390x-gnu@4.60.0': optional: true - '@rollup/rollup-linux-x64-musl@4.57.0': + '@rollup/rollup-linux-x64-gnu@4.60.0': optional: 
true - '@rollup/rollup-openbsd-x64@4.57.0': + '@rollup/rollup-linux-x64-musl@4.60.0': optional: true - '@rollup/rollup-openharmony-arm64@4.57.0': + '@rollup/rollup-openbsd-x64@4.60.0': optional: true - '@rollup/rollup-win32-arm64-msvc@4.57.0': + '@rollup/rollup-openharmony-arm64@4.60.0': optional: true - '@rollup/rollup-win32-ia32-msvc@4.57.0': + '@rollup/rollup-win32-arm64-msvc@4.60.0': optional: true - '@rollup/rollup-win32-x64-gnu@4.57.0': + '@rollup/rollup-win32-ia32-msvc@4.60.0': optional: true - '@rollup/rollup-win32-x64-msvc@4.57.0': + '@rollup/rollup-win32-x64-gnu@4.60.0': + optional: true + + '@rollup/rollup-win32-x64-msvc@4.60.0': optional: true '@sec-ant/readable-stream@0.4.1': {} - '@semantic-release/changelog@6.0.3(semantic-release@25.0.2(typescript@5.9.3))': + '@semantic-release/changelog@6.0.3(semantic-release@25.0.3(typescript@5.9.3))': dependencies: '@semantic-release/error': 3.0.0 aggregate-error: 3.1.0 fs-extra: 11.3.3 - lodash: 4.17.23 - semantic-release: 25.0.2(typescript@5.9.3) + lodash: 4.18.1 + semantic-release: 25.0.3(typescript@5.9.3) - '@semantic-release/commit-analyzer@13.0.1(semantic-release@25.0.2(typescript@5.9.3))': + '@semantic-release/commit-analyzer@13.0.1(semantic-release@25.0.3(typescript@5.9.3))': dependencies: conventional-changelog-angular: 8.1.0 conventional-changelog-writer: 8.2.0 @@ -3732,9 +3990,9 @@ snapshots: conventional-commits-parser: 6.2.1 debug: 4.4.3 import-from-esm: 2.0.0 - lodash-es: 4.17.23 + lodash-es: 4.18.1 micromatch: 4.0.8 - semantic-release: 25.0.2(typescript@5.9.3) + semantic-release: 25.0.3(typescript@5.9.3) transitivePeerDependencies: - supports-color @@ -3742,33 +4000,33 @@ snapshots: '@semantic-release/error@4.0.0': {} - '@semantic-release/exec@7.1.0(semantic-release@25.0.2(typescript@5.9.3))': + '@semantic-release/exec@7.1.0(semantic-release@25.0.3(typescript@5.9.3))': dependencies: '@semantic-release/error': 4.0.0 aggregate-error: 3.1.0 debug: 4.4.3 execa: 9.6.1 - lodash-es: 4.17.23 + lodash-es: 
4.18.1 parse-json: 8.3.0 - semantic-release: 25.0.2(typescript@5.9.3) + semantic-release: 25.0.3(typescript@5.9.3) transitivePeerDependencies: - supports-color - '@semantic-release/git@10.0.1(semantic-release@25.0.2(typescript@5.9.3))': + '@semantic-release/git@10.0.1(semantic-release@25.0.3(typescript@5.9.3))': dependencies: '@semantic-release/error': 3.0.0 aggregate-error: 3.1.0 debug: 4.4.3 dir-glob: 3.0.1 execa: 5.1.1 - lodash: 4.17.23 + lodash: 4.18.1 micromatch: 4.0.8 p-reduce: 2.1.0 - semantic-release: 25.0.2(typescript@5.9.3) + semantic-release: 25.0.3(typescript@5.9.3) transitivePeerDependencies: - supports-color - '@semantic-release/github@12.0.2(semantic-release@25.0.2(typescript@5.9.3))': + '@semantic-release/github@12.0.2(semantic-release@25.0.3(typescript@5.9.3))': dependencies: '@octokit/core': 7.0.6 '@octokit/plugin-paginate-rest': 14.0.0(@octokit/core@7.0.6) @@ -3781,17 +4039,17 @@ snapshots: http-proxy-agent: 7.0.2 https-proxy-agent: 7.0.6 issue-parser: 7.0.1 - lodash-es: 4.17.23 + lodash-es: 4.18.1 mime: 4.1.0 p-filter: 4.1.0 - semantic-release: 25.0.2(typescript@5.9.3) + semantic-release: 25.0.3(typescript@5.9.3) tinyglobby: 0.2.15 - undici: 7.19.1 + undici: 7.24.5 url-join: 5.0.0 transitivePeerDependencies: - supports-color - '@semantic-release/npm@13.1.3(semantic-release@25.0.2(typescript@5.9.3))': + '@semantic-release/npm@13.1.3(semantic-release@25.0.3(typescript@5.9.3))': dependencies: '@actions/core': 2.0.3 '@semantic-release/error': 4.0.0 @@ -3799,18 +4057,18 @@ snapshots: env-ci: 11.2.0 execa: 9.6.1 fs-extra: 11.3.3 - lodash-es: 4.17.23 + lodash-es: 4.18.1 nerf-dart: 1.0.0 normalize-url: 8.1.1 npm: 11.8.0 rc: 1.2.8 read-pkg: 10.0.0 registry-auth-token: 5.1.1 - semantic-release: 25.0.2(typescript@5.9.3) + semantic-release: 25.0.3(typescript@5.9.3) semver: 7.7.3 tempy: 3.1.2 - '@semantic-release/release-notes-generator@14.1.0(semantic-release@25.0.2(typescript@5.9.3))': + 
'@semantic-release/release-notes-generator@14.1.0(semantic-release@25.0.3(typescript@5.9.3))': dependencies: conventional-changelog-angular: 8.1.0 conventional-changelog-writer: 8.2.0 @@ -3820,9 +4078,9 @@ snapshots: get-stream: 7.0.1 import-from-esm: 2.0.0 into-stream: 7.0.0 - lodash-es: 4.17.23 + lodash-es: 4.18.1 read-package-up: 11.0.0 - semantic-release: 25.0.2(typescript@5.9.3) + semantic-release: 25.0.3(typescript@5.9.3) transitivePeerDependencies: - supports-color @@ -3832,6 +4090,11 @@ snapshots: '@standard-schema/spec@1.1.0': {} + '@tybys/wasm-util@0.10.1': + dependencies: + tslib: 2.8.1 + optional: true + '@types/asciichart@1.5.8': {} '@types/chai@5.2.3': @@ -3929,7 +4192,7 @@ snapshots: '@typescript-eslint/types': 8.54.0 '@typescript-eslint/visitor-keys': 8.54.0 debug: 4.4.3 - minimatch: 9.0.5 + minimatch: 10.2.4 semver: 7.7.3 tinyglobby: 0.2.15 ts-api-utils: 2.4.0(typescript@5.9.3) @@ -3953,7 +4216,7 @@ snapshots: '@typescript-eslint/types': 8.54.0 eslint-visitor-keys: 4.2.1 - '@vitest/coverage-v8@4.0.18(vitest@4.0.18(@types/node@25.0.10)(jiti@2.6.1)(yaml@2.8.2))': + '@vitest/coverage-v8@4.0.18(vitest@4.0.18(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)(@types/node@25.0.10)(esbuild@0.27.2)(jiti@2.6.1)(yaml@2.8.3))': dependencies: '@bcoe/v8-coverage': 1.0.2 '@vitest/utils': 4.0.18 @@ -3965,7 +4228,7 @@ snapshots: obug: 2.1.1 std-env: 3.10.0 tinyrainbow: 3.0.3 - vitest: 4.0.18(@types/node@25.0.10)(jiti@2.6.1)(yaml@2.8.2) + vitest: 4.0.18(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)(@types/node@25.0.10)(esbuild@0.27.2)(jiti@2.6.1)(yaml@2.8.3) '@vitest/expect@4.0.18': dependencies: @@ -3976,13 +4239,13 @@ snapshots: chai: 6.2.2 tinyrainbow: 3.0.3 - '@vitest/mocker@4.0.18(vite@7.3.1(@types/node@25.0.10)(jiti@2.6.1)(yaml@2.8.2))': + '@vitest/mocker@4.0.18(vite@8.0.5(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)(@types/node@25.0.10)(esbuild@0.27.2)(jiti@2.6.1)(yaml@2.8.3))': dependencies: '@vitest/spy': 4.0.18 estree-walker: 3.0.3 magic-string: 0.30.21 
optionalDependencies: - vite: 7.3.1(@types/node@25.0.10)(jiti@2.6.1)(yaml@2.8.2) + vite: 8.0.5(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)(@types/node@25.0.10)(esbuild@0.27.2)(jiti@2.6.1)(yaml@2.8.3) '@vitest/pretty-format@4.0.18': dependencies: @@ -4034,18 +4297,18 @@ snapshots: clean-stack: 5.3.0 indent-string: 5.0.0 - ajv-formats@3.0.1(ajv@8.17.1): + ajv-formats@3.0.1(ajv@8.18.0): optionalDependencies: - ajv: 8.17.1 + ajv: 8.18.0 - ajv@6.12.6: + ajv@6.14.0: dependencies: fast-deep-equal: 3.1.3 fast-json-stable-stringify: 2.1.0 json-schema-traverse: 0.4.1 uri-js: 4.4.1 - ajv@8.17.1: + ajv@8.18.0: dependencies: fast-deep-equal: 3.1.3 fast-uri: 3.1.0 @@ -4092,7 +4355,7 @@ snapshots: estree-walker: 3.0.3 js-tokens: 9.0.1 - balanced-match@1.0.2: {} + balanced-match@4.0.4: {} before-after-hook@4.0.0: {} @@ -4106,7 +4369,7 @@ snapshots: http-errors: 2.0.1 iconv-lite: 0.7.2 on-finished: 2.4.1 - qs: 6.14.1 + qs: 6.15.0 raw-body: 3.0.2 type-is: 2.0.1 transitivePeerDependencies: @@ -4125,14 +4388,9 @@ snapshots: widest-line: 5.0.0 wrap-ansi: 9.0.2 - brace-expansion@1.1.12: - dependencies: - balanced-match: 1.0.2 - concat-map: 0.0.1 - - brace-expansion@2.0.2: + brace-expansion@5.0.5: dependencies: - balanced-match: 1.0.2 + balanced-match: 4.0.4 braces@3.0.3: dependencies: @@ -4257,8 +4515,6 @@ snapshots: array-ify: 1.0.0 dot-prop: 5.3.0 - concat-map@0.0.1: {} - confbox@0.1.8: {} config-chain@1.1.13: @@ -4291,7 +4547,7 @@ snapshots: conventional-changelog-writer@8.2.0: dependencies: conventional-commits-filter: 5.0.0 - handlebars: 4.7.8 + handlebars: 4.7.9 meow: 13.2.0 semver: 7.7.3 @@ -4361,6 +4617,8 @@ snapshots: depd@2.0.0: {} + detect-libc@2.1.2: {} + dir-glob@3.0.1: dependencies: path-type: 4.0.0 @@ -4482,7 +4740,7 @@ snapshots: '@humanwhocodes/module-importer': 1.0.1 '@humanwhocodes/retry': 0.4.3 '@types/estree': 1.0.8 - ajv: 6.12.6 + ajv: 6.14.0 chalk: 4.1.2 cross-spawn: 7.0.6 debug: 4.4.3 @@ -4501,7 +4759,7 @@ snapshots: is-glob: 4.0.3 
json-stable-stringify-without-jsonify: 1.0.1 lodash.merge: 4.6.2 - minimatch: 3.1.2 + minimatch: 10.2.4 natural-compare: 1.4.0 optionator: 0.9.4 optionalDependencies: @@ -4582,9 +4840,10 @@ snapshots: expect-type@1.3.0: {} - express-rate-limit@7.5.1(express@5.2.1): + express-rate-limit@8.3.1(express@5.2.1): dependencies: express: 5.2.1 + ip-address: 10.1.0 express@5.2.1: dependencies: @@ -4608,7 +4867,7 @@ snapshots: once: 1.4.0 parseurl: 1.3.3 proxy-addr: 2.0.7 - qs: 6.14.1 + qs: 6.15.0 range-parser: 1.2.1 router: 2.2.0 send: 1.2.1 @@ -4629,9 +4888,9 @@ snapshots: fast-uri@3.1.0: {} - fdir@6.5.0(picomatch@4.0.3): + fdir@6.5.0(picomatch@4.0.4): optionalDependencies: - picomatch: 4.0.3 + picomatch: 4.0.4 figlet@1.10.0: dependencies: @@ -4690,14 +4949,14 @@ snapshots: dependencies: magic-string: 0.30.21 mlly: 1.8.0 - rollup: 4.57.0 + rollup: 4.60.0 flat-cache@4.0.1: dependencies: - flatted: 3.3.3 + flatted: 3.4.2 keyv: 4.5.4 - flatted@3.3.3: {} + flatted@3.4.2: {} forwarded@0.2.0: {} @@ -4775,7 +5034,7 @@ snapshots: glob@13.0.0: dependencies: - minimatch: 10.1.1 + minimatch: 10.2.4 minipass: 7.1.2 path-scurry: 2.0.1 @@ -4791,7 +5050,7 @@ snapshots: graceful-fs@4.2.11: {} - handlebars@4.7.8: + handlebars@4.7.9: dependencies: minimist: 1.2.8 neo-async: 2.6.2 @@ -4812,7 +5071,7 @@ snapshots: highlight.js@10.7.3: {} - hono@4.11.7: {} + hono@4.12.8: {} hook-std@4.0.0: {} @@ -4897,6 +5156,8 @@ snapshots: from2: 2.3.0 p-is-promise: 3.0.0 + ip-address@10.1.0: {} + ipaddr.js@1.9.1: {} is-any-array@2.0.1: {} @@ -5011,6 +5272,55 @@ snapshots: prelude-ls: 1.2.1 type-check: 0.4.0 + lightningcss-android-arm64@1.32.0: + optional: true + + lightningcss-darwin-arm64@1.32.0: + optional: true + + lightningcss-darwin-x64@1.32.0: + optional: true + + lightningcss-freebsd-x64@1.32.0: + optional: true + + lightningcss-linux-arm-gnueabihf@1.32.0: + optional: true + + lightningcss-linux-arm64-gnu@1.32.0: + optional: true + + lightningcss-linux-arm64-musl@1.32.0: + optional: true + + 
lightningcss-linux-x64-gnu@1.32.0: + optional: true + + lightningcss-linux-x64-musl@1.32.0: + optional: true + + lightningcss-win32-arm64-msvc@1.32.0: + optional: true + + lightningcss-win32-x64-msvc@1.32.0: + optional: true + + lightningcss@1.32.0: + dependencies: + detect-libc: 2.1.2 + optionalDependencies: + lightningcss-android-arm64: 1.32.0 + lightningcss-darwin-arm64: 1.32.0 + lightningcss-darwin-x64: 1.32.0 + lightningcss-freebsd-x64: 1.32.0 + lightningcss-linux-arm-gnueabihf: 1.32.0 + lightningcss-linux-arm64-gnu: 1.32.0 + lightningcss-linux-arm64-musl: 1.32.0 + lightningcss-linux-x64-gnu: 1.32.0 + lightningcss-linux-x64-musl: 1.32.0 + lightningcss-win32-arm64-msvc: 1.32.0 + lightningcss-win32-x64-msvc: 1.32.0 + lilconfig@3.1.3: {} lines-and-columns@1.2.4: {} @@ -5023,7 +5333,7 @@ snapshots: nano-spawn: 2.0.0 pidtree: 0.6.0 string-argv: 0.3.2 - yaml: 2.8.2 + yaml: 2.8.3 listr2@9.0.5: dependencies: @@ -5056,7 +5366,7 @@ snapshots: dependencies: p-locate: 6.0.0 - lodash-es@4.17.23: {} + lodash-es@4.18.1: {} lodash.camelcase@4.3.0: {} @@ -5084,7 +5394,7 @@ snapshots: lodash.upperfirst@4.3.1: {} - lodash@4.17.23: {} + lodash@4.18.1: {} log-symbols@7.0.1: dependencies: @@ -5151,7 +5461,7 @@ snapshots: micromatch@4.0.8: dependencies: braces: 3.0.3 - picomatch: 2.3.1 + picomatch: 4.0.4 mime-db@1.54.0: {} @@ -5167,17 +5477,9 @@ snapshots: mimic-function@5.0.1: {} - minimatch@10.1.1: - dependencies: - '@isaacs/brace-expansion': 5.0.0 - - minimatch@3.1.2: - dependencies: - brace-expansion: 1.1.12 - - minimatch@9.0.5: + minimatch@10.2.4: dependencies: - brace-expansion: 2.0.2 + brace-expansion: 5.0.5 minimist@1.2.8: {} @@ -5448,7 +5750,7 @@ snapshots: lru-cache: 11.2.5 minipass: 7.1.2 - path-to-regexp@8.3.0: {} + path-to-regexp@8.4.2: {} path-type@4.0.0: {} @@ -5456,9 +5758,7 @@ snapshots: picocolors@1.1.1: {} - picomatch@2.3.1: {} - - picomatch@4.0.3: {} + picomatch@4.0.4: {} pidtree@0.6.0: {} @@ -5479,15 +5779,15 @@ snapshots: mlly: 1.8.0 pathe: 2.0.3 - 
postcss-load-config@6.0.1(jiti@2.6.1)(postcss@8.5.6)(yaml@2.8.2): + postcss-load-config@6.0.1(jiti@2.6.1)(postcss@8.5.8)(yaml@2.8.3): dependencies: lilconfig: 3.1.3 optionalDependencies: jiti: 2.6.1 - postcss: 8.5.6 - yaml: 2.8.2 + postcss: 8.5.8 + yaml: 2.8.3 - postcss@8.5.6: + postcss@8.5.8: dependencies: nanoid: 3.3.11 picocolors: 1.1.1 @@ -5517,7 +5817,7 @@ snapshots: punycode@2.3.1: {} - qs@6.14.1: + qs@6.15.0: dependencies: side-channel: 1.1.0 @@ -5596,35 +5896,59 @@ snapshots: rfdc@1.4.1: {} - rollup@4.57.0: + rolldown@1.0.0-rc.12(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2): + dependencies: + '@oxc-project/types': 0.122.0 + '@rolldown/pluginutils': 1.0.0-rc.12 + optionalDependencies: + '@rolldown/binding-android-arm64': 1.0.0-rc.12 + '@rolldown/binding-darwin-arm64': 1.0.0-rc.12 + '@rolldown/binding-darwin-x64': 1.0.0-rc.12 + '@rolldown/binding-freebsd-x64': 1.0.0-rc.12 + '@rolldown/binding-linux-arm-gnueabihf': 1.0.0-rc.12 + '@rolldown/binding-linux-arm64-gnu': 1.0.0-rc.12 + '@rolldown/binding-linux-arm64-musl': 1.0.0-rc.12 + '@rolldown/binding-linux-ppc64-gnu': 1.0.0-rc.12 + '@rolldown/binding-linux-s390x-gnu': 1.0.0-rc.12 + '@rolldown/binding-linux-x64-gnu': 1.0.0-rc.12 + '@rolldown/binding-linux-x64-musl': 1.0.0-rc.12 + '@rolldown/binding-openharmony-arm64': 1.0.0-rc.12 + '@rolldown/binding-wasm32-wasi': 1.0.0-rc.12(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) + '@rolldown/binding-win32-arm64-msvc': 1.0.0-rc.12 + '@rolldown/binding-win32-x64-msvc': 1.0.0-rc.12 + transitivePeerDependencies: + - '@emnapi/core' + - '@emnapi/runtime' + + rollup@4.60.0: dependencies: '@types/estree': 1.0.8 optionalDependencies: - '@rollup/rollup-android-arm-eabi': 4.57.0 - '@rollup/rollup-android-arm64': 4.57.0 - '@rollup/rollup-darwin-arm64': 4.57.0 - '@rollup/rollup-darwin-x64': 4.57.0 - '@rollup/rollup-freebsd-arm64': 4.57.0 - '@rollup/rollup-freebsd-x64': 4.57.0 - '@rollup/rollup-linux-arm-gnueabihf': 4.57.0 - '@rollup/rollup-linux-arm-musleabihf': 4.57.0 - 
'@rollup/rollup-linux-arm64-gnu': 4.57.0 - '@rollup/rollup-linux-arm64-musl': 4.57.0 - '@rollup/rollup-linux-loong64-gnu': 4.57.0 - '@rollup/rollup-linux-loong64-musl': 4.57.0 - '@rollup/rollup-linux-ppc64-gnu': 4.57.0 - '@rollup/rollup-linux-ppc64-musl': 4.57.0 - '@rollup/rollup-linux-riscv64-gnu': 4.57.0 - '@rollup/rollup-linux-riscv64-musl': 4.57.0 - '@rollup/rollup-linux-s390x-gnu': 4.57.0 - '@rollup/rollup-linux-x64-gnu': 4.57.0 - '@rollup/rollup-linux-x64-musl': 4.57.0 - '@rollup/rollup-openbsd-x64': 4.57.0 - '@rollup/rollup-openharmony-arm64': 4.57.0 - '@rollup/rollup-win32-arm64-msvc': 4.57.0 - '@rollup/rollup-win32-ia32-msvc': 4.57.0 - '@rollup/rollup-win32-x64-gnu': 4.57.0 - '@rollup/rollup-win32-x64-msvc': 4.57.0 + '@rollup/rollup-android-arm-eabi': 4.60.0 + '@rollup/rollup-android-arm64': 4.60.0 + '@rollup/rollup-darwin-arm64': 4.60.0 + '@rollup/rollup-darwin-x64': 4.60.0 + '@rollup/rollup-freebsd-arm64': 4.60.0 + '@rollup/rollup-freebsd-x64': 4.60.0 + '@rollup/rollup-linux-arm-gnueabihf': 4.60.0 + '@rollup/rollup-linux-arm-musleabihf': 4.60.0 + '@rollup/rollup-linux-arm64-gnu': 4.60.0 + '@rollup/rollup-linux-arm64-musl': 4.60.0 + '@rollup/rollup-linux-loong64-gnu': 4.60.0 + '@rollup/rollup-linux-loong64-musl': 4.60.0 + '@rollup/rollup-linux-ppc64-gnu': 4.60.0 + '@rollup/rollup-linux-ppc64-musl': 4.60.0 + '@rollup/rollup-linux-riscv64-gnu': 4.60.0 + '@rollup/rollup-linux-riscv64-musl': 4.60.0 + '@rollup/rollup-linux-s390x-gnu': 4.60.0 + '@rollup/rollup-linux-x64-gnu': 4.60.0 + '@rollup/rollup-linux-x64-musl': 4.60.0 + '@rollup/rollup-openbsd-x64': 4.60.0 + '@rollup/rollup-openharmony-arm64': 4.60.0 + '@rollup/rollup-win32-arm64-msvc': 4.60.0 + '@rollup/rollup-win32-ia32-msvc': 4.60.0 + '@rollup/rollup-win32-x64-gnu': 4.60.0 + '@rollup/rollup-win32-x64-msvc': 4.60.0 fsevents: 2.3.3 router@2.2.0: @@ -5633,7 +5957,7 @@ snapshots: depd: 2.0.0 is-promise: 4.0.0 parseurl: 1.3.3 - path-to-regexp: 8.3.0 + path-to-regexp: 8.4.2 transitivePeerDependencies: - 
supports-color @@ -5641,13 +5965,13 @@ snapshots: safer-buffer@2.1.2: {} - semantic-release@25.0.2(typescript@5.9.3): + semantic-release@25.0.3(typescript@5.9.3): dependencies: - '@semantic-release/commit-analyzer': 13.0.1(semantic-release@25.0.2(typescript@5.9.3)) + '@semantic-release/commit-analyzer': 13.0.1(semantic-release@25.0.3(typescript@5.9.3)) '@semantic-release/error': 4.0.0 - '@semantic-release/github': 12.0.2(semantic-release@25.0.2(typescript@5.9.3)) - '@semantic-release/npm': 13.1.3(semantic-release@25.0.2(typescript@5.9.3)) - '@semantic-release/release-notes-generator': 14.1.0(semantic-release@25.0.2(typescript@5.9.3)) + '@semantic-release/github': 12.0.2(semantic-release@25.0.3(typescript@5.9.3)) + '@semantic-release/npm': 13.1.3(semantic-release@25.0.3(typescript@5.9.3)) + '@semantic-release/release-notes-generator': 14.1.0(semantic-release@25.0.3(typescript@5.9.3)) aggregate-error: 5.0.0 cosmiconfig: 9.0.0(typescript@5.9.3) debug: 4.4.3 @@ -5660,7 +5984,7 @@ snapshots: hook-std: 4.0.0 hosted-git-info: 9.0.2 import-from-esm: 2.0.0 - lodash-es: 4.17.23 + lodash-es: 4.18.1 marked: 15.0.12 marked-terminal: 7.3.0(marked@15.0.12) micromatch: 4.0.8 @@ -5669,17 +5993,12 @@ snapshots: read-package-up: 12.0.0 resolve-from: 5.0.0 semver: 7.7.3 - semver-diff: 5.0.0 signale: 1.4.0 yargs: 18.0.0 transitivePeerDependencies: - supports-color - typescript - semver-diff@5.0.0: - dependencies: - semver: 7.7.3 - semver-regex@4.0.5: {} semver@7.7.3: {} @@ -5923,8 +6242,8 @@ snapshots: tinyglobby@0.2.15: dependencies: - fdir: 6.5.0(picomatch@4.0.3) - picomatch: 4.0.3 + fdir: 6.5.0(picomatch@4.0.4) + picomatch: 4.0.4 tinyrainbow@3.0.3: {} @@ -5944,7 +6263,10 @@ snapshots: ts-interface-checker@0.1.13: {} - tsup@8.5.1(jiti@2.6.1)(postcss@8.5.6)(typescript@5.9.3)(yaml@2.8.2): + tslib@2.8.1: + optional: true + + tsup@8.5.1(jiti@2.6.1)(postcss@8.5.8)(typescript@5.9.3)(yaml@2.8.3): dependencies: bundle-require: 5.1.0(esbuild@0.27.2) cac: 6.7.14 @@ -5955,16 +6277,16 @@ 
snapshots: fix-dts-default-cjs-exports: 1.0.1 joycon: 3.1.1 picocolors: 1.1.1 - postcss-load-config: 6.0.1(jiti@2.6.1)(postcss@8.5.6)(yaml@2.8.2) + postcss-load-config: 6.0.1(jiti@2.6.1)(postcss@8.5.8)(yaml@2.8.3) resolve-from: 5.0.0 - rollup: 4.57.0 + rollup: 4.60.0 source-map: 0.7.6 sucrase: 3.35.1 tinyexec: 0.3.2 tinyglobby: 0.2.15 tree-kill: 1.2.2 optionalDependencies: - postcss: 8.5.6 + postcss: 8.5.8 typescript: 5.9.3 transitivePeerDependencies: - jiti @@ -6014,7 +6336,7 @@ snapshots: undici-types@7.16.0: {} - undici@7.19.1: {} + undici@7.24.5: {} unicode-emoji-modifier-base@1.0.0: {} @@ -6047,24 +6369,27 @@ snapshots: vary@1.1.2: {} - vite@7.3.1(@types/node@25.0.10)(jiti@2.6.1)(yaml@2.8.2): + vite@8.0.5(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)(@types/node@25.0.10)(esbuild@0.27.2)(jiti@2.6.1)(yaml@2.8.3): dependencies: - esbuild: 0.27.2 - fdir: 6.5.0(picomatch@4.0.3) - picomatch: 4.0.3 - postcss: 8.5.6 - rollup: 4.57.0 + lightningcss: 1.32.0 + picomatch: 4.0.4 + postcss: 8.5.8 + rolldown: 1.0.0-rc.12(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) tinyglobby: 0.2.15 optionalDependencies: '@types/node': 25.0.10 + esbuild: 0.27.2 fsevents: 2.3.3 jiti: 2.6.1 - yaml: 2.8.2 + yaml: 2.8.3 + transitivePeerDependencies: + - '@emnapi/core' + - '@emnapi/runtime' - vitest@4.0.18(@types/node@25.0.10)(jiti@2.6.1)(yaml@2.8.2): + vitest@4.0.18(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)(@types/node@25.0.10)(esbuild@0.27.2)(jiti@2.6.1)(yaml@2.8.3): dependencies: '@vitest/expect': 4.0.18 - '@vitest/mocker': 4.0.18(vite@7.3.1(@types/node@25.0.10)(jiti@2.6.1)(yaml@2.8.2)) + '@vitest/mocker': 4.0.18(vite@8.0.5(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)(@types/node@25.0.10)(esbuild@0.27.2)(jiti@2.6.1)(yaml@2.8.3)) '@vitest/pretty-format': 4.0.18 '@vitest/runner': 4.0.18 '@vitest/snapshot': 4.0.18 @@ -6075,20 +6400,23 @@ snapshots: magic-string: 0.30.21 obug: 2.1.1 pathe: 2.0.3 - picomatch: 4.0.3 + picomatch: 4.0.4 std-env: 3.10.0 tinybench: 2.9.0 tinyexec: 1.0.2 tinyglobby: 0.2.15 
tinyrainbow: 3.0.3 - vite: 7.3.1(@types/node@25.0.10)(jiti@2.6.1)(yaml@2.8.2) + vite: 8.0.5(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)(@types/node@25.0.10)(esbuild@0.27.2)(jiti@2.6.1)(yaml@2.8.3) why-is-node-running: 2.3.0 optionalDependencies: '@types/node': 25.0.10 transitivePeerDependencies: + - '@emnapi/core' + - '@emnapi/runtime' + - '@vitejs/devtools' + - esbuild - jiti - less - - lightningcss - msw - sass - sass-embedded @@ -6135,7 +6463,7 @@ snapshots: y18n@5.0.8: {} - yaml@2.8.2: {} + yaml@2.8.3: {} yargs-parser@20.2.9: {} diff --git a/src/benchmark/gold-standard.test.ts b/src/benchmark/gold-standard.test.ts new file mode 100644 index 0000000..4b66cb9 --- /dev/null +++ b/src/benchmark/gold-standard.test.ts @@ -0,0 +1,139 @@ +import { describe, expect, it } from 'vitest'; + +import { + calculateCorrelation, + EXCELLENT_PROMPTS, + FAIR_PROMPTS, + getExpectedScoreRange, + getScoreTier, + GOLD_STANDARD_PROMPTS, + GOOD_PROMPTS, + POOR_PROMPTS, + scoresMatch, +} from './gold-standard.js'; + +describe('gold-standard', () => { + describe('getScoreTier', () => { + it('returns excellent for scores >= 85', () => { + expect(getScoreTier(85)).toBe('excellent'); + expect(getScoreTier(100)).toBe('excellent'); + }); + + it('returns good for scores 70-84', () => { + expect(getScoreTier(70)).toBe('good'); + expect(getScoreTier(84)).toBe('good'); + }); + + it('returns fair for scores 50-69', () => { + expect(getScoreTier(50)).toBe('fair'); + expect(getScoreTier(69)).toBe('fair'); + }); + + it('returns poor for scores < 50', () => { + expect(getScoreTier(49)).toBe('poor'); + expect(getScoreTier(0)).toBe('poor'); + }); + }); + + describe('scoresMatch', () => { + it('returns true for scores in same tier', () => { + expect(scoresMatch(85, 95)).toBe(true); + expect(scoresMatch(70, 80)).toBe(true); + expect(scoresMatch(50, 60)).toBe(true); + expect(scoresMatch(10, 40)).toBe(true); + }); + + it('returns false for scores in different tiers', () => { + expect(scoresMatch(85, 
70)).toBe(false); + expect(scoresMatch(50, 49)).toBe(false); + }); + }); + + describe('calculateCorrelation', () => { + it('returns 1 for perfect positive correlation', () => { + expect(calculateCorrelation([1, 2, 3], [1, 2, 3])).toBeCloseTo(1); + }); + + it('returns -1 for perfect negative correlation', () => { + expect(calculateCorrelation([1, 2, 3], [3, 2, 1])).toBeCloseTo(-1); + }); + + it('returns 0 for no correlation', () => { + expect(calculateCorrelation([1, 1, 1], [1, 2, 3])).toBeCloseTo(0); + }); + + it('returns 0 for empty arrays', () => { + expect(calculateCorrelation([], [])).toBe(0); + }); + + it('returns 0 for mismatched lengths', () => { + expect(calculateCorrelation([1, 2], [1, 2, 3])).toBe(0); + }); + }); + + describe('getExpectedScoreRange', () => { + it('returns 80-100 for 0 issues', () => { + expect(getExpectedScoreRange(0)).toEqual({ min: 80, max: 100 }); + }); + + it('returns 60-85 for 1 issue', () => { + expect(getExpectedScoreRange(1)).toEqual({ min: 60, max: 85 }); + }); + + it('returns 40-70 for 2 issues', () => { + expect(getExpectedScoreRange(2)).toEqual({ min: 40, max: 70 }); + }); + + it('returns 0-55 for 3+ issues', () => { + expect(getExpectedScoreRange(3)).toEqual({ min: 0, max: 55 }); + expect(getExpectedScoreRange(5)).toEqual({ min: 0, max: 55 }); + }); + }); + + describe('GOLD_STANDARD_PROMPTS', () => { + it('has 50 total prompts', () => { + expect(GOLD_STANDARD_PROMPTS).toHaveLength(50); + }); + + it('has 10 excellent prompts', () => { + expect(EXCELLENT_PROMPTS).toHaveLength(10); + }); + + it('has 15 good prompts', () => { + expect(GOOD_PROMPTS).toHaveLength(15); + }); + + it('has 15 fair prompts', () => { + expect(FAIR_PROMPTS).toHaveLength(15); + }); + + it('has 10 poor prompts', () => { + expect(POOR_PROMPTS).toHaveLength(10); + }); + + it('excellent prompts have scores >= 85', () => { + for (const prompt of EXCELLENT_PROMPTS) { + expect(prompt.expectedScore).toBeGreaterThanOrEqual(85); + } + }); + + it('good prompts have 
scores 70-84', () => { + for (const prompt of GOOD_PROMPTS) { + expect(prompt.expectedScore).toBeGreaterThanOrEqual(70); + expect(prompt.expectedScore).toBeLessThan(85); + } + }); + + it('fair prompts have scores 38-69', () => { + for (const prompt of FAIR_PROMPTS) { + expect(prompt.expectedScore).toBeLessThan(70); + } + }); + + it('poor prompts have scores < 25', () => { + for (const prompt of POOR_PROMPTS) { + expect(prompt.expectedScore).toBeLessThan(25); + } + }); + }); +}); diff --git a/src/benchmark/gold-standard.ts b/src/benchmark/gold-standard.ts new file mode 100644 index 0000000..4e6b7a5 --- /dev/null +++ b/src/benchmark/gold-standard.ts @@ -0,0 +1,375 @@ +/** + * Gold Standard Benchmark Dataset + */ + +export type GoldStandardPrompt = { + readonly text: string; + readonly expectedScore: number; + readonly expectedIssues: readonly string[]; + readonly rationale: string; +}; + +export type ScoreTier = 'excellent' | 'good' | 'fair' | 'poor'; + +export function getScoreTier(score: number): ScoreTier { + if (score >= 85) return 'excellent'; + if (score >= 70) return 'good'; + if (score >= 50) return 'fair'; + return 'poor'; +} + +export function scoresMatch(actual: number, expected: number): boolean { + return getScoreTier(actual) === getScoreTier(expected); +} + +export function calculateCorrelation( + predictions: readonly number[], + expectations: readonly number[], +): number { + if (predictions.length !== expectations.length || predictions.length === 0) + return 0; + const n = predictions.length; + const meanPred = predictions.reduce((a, b) => a + b, 0) / n; + const meanExp = expectations.reduce((a, b) => a + b, 0) / n; + let num = 0, + denomPred = 0, + denomExp = 0; + for (let i = 0; i < n; i++) { + const dp = (predictions[i] ?? 0) - meanPred, + de = (expectations[i] ?? 0) - meanExp; + num += dp * de; + denomPred += dp * dp; + denomExp += de * de; + } + const denom = Math.sqrt(denomPred * denomExp); + return denom === 0 ? 
0 : num / denom; +} + +export const EXCELLENT_PROMPTS: readonly GoldStandardPrompt[] = [ + { + text: 'Fix the null pointer exception in auth.ts line 45 where user.email is undefined when called from the password reset flow', + expectedScore: 95, + expectedIssues: [], + rationale: 'Specific file, line, error, and context', + }, + { + text: 'Add error handling to validateUser() in src/utils/auth.ts for database timeout after 5s', + expectedScore: 92, + expectedIssues: [], + rationale: 'Clear goal, function, path, constraint', + }, + { + text: 'Refactor calculateTotal() to use reduce instead of forEach, maintaining return type number', + expectedScore: 88, + expectedIssues: [], + rationale: 'Specific function, clear transformation', + }, + { + text: 'Write unit tests for UserService.createUser covering: success, duplicate email, invalid password', + expectedScore: 90, + expectedIssues: [], + rationale: 'Clear method, specific test cases', + }, + { + text: 'Update docker-compose.yml to add Redis container for session caching on port 6379', + expectedScore: 91, + expectedIssues: [], + rationale: 'Specific file, clear addition', + }, + { + text: 'Debug why /api/users returns 500 when email contains + sign - check URL encoding', + expectedScore: 89, + expectedIssues: [], + rationale: 'Specific endpoint, exact condition', + }, + { + text: 'Implement rate limiting for login: max 5 attempts per IP per minute, return 429', + expectedScore: 93, + expectedIssues: [], + rationale: 'Exact limits specified', + }, + { + text: 'Add TypeScript types in src/types/api.ts matching docs/openapi.yaml', + expectedScore: 87, + expectedIssues: [], + rationale: 'Clear file paths, reference doc', + }, + { + text: 'Optimize getUserOrders() to use single JOIN instead of N+1 - currently 3s for 100 orders', + expectedScore: 90, + expectedIssues: [], + rationale: 'Specific function, perf baseline', + }, + { + text: 'Fix race condition in useAuth hook where logout completes before token refresh 
resolves', + expectedScore: 88, + expectedIssues: [], + rationale: 'Specific hook, exact race described', + }, +]; + +export const GOOD_PROMPTS: readonly GoldStandardPrompt[] = [ + { + text: 'Add validation to signup form for email format and password length', + expectedScore: 78, + expectedIssues: ['missing-technical-details'], + rationale: 'Missing file path and rules', + }, + { + text: 'Fix the bug where users are logged out after page refresh', + expectedScore: 72, + expectedIssues: ['missing-technical-details'], + rationale: 'No file paths or errors', + }, + { + text: 'Improve performance of product listing - too slow to load', + expectedScore: 70, + expectedIssues: ['missing-technical-details', 'insufficient-constraints'], + rationale: 'Vague "too slow"', + }, + { + text: 'Add loading spinner while API call is in progress in dashboard', + expectedScore: 75, + expectedIssues: ['missing-technical-details'], + rationale: 'Missing file path', + }, + { + text: 'Update button styles to match new design system colors', + expectedScore: 73, + expectedIssues: ['missing-technical-details'], + rationale: 'Missing colors and files', + }, + { + text: 'Fix TypeScript errors in auth module after v5 upgrade', + expectedScore: 74, + expectedIssues: ['missing-technical-details'], + rationale: 'Missing specific errors', + }, + { + text: 'Add error messages to form when validation fails', + expectedScore: 71, + expectedIssues: ['missing-technical-details', 'no-context'], + rationale: 'Which form?', + }, + { + text: 'Implement dark mode toggle in settings page', + expectedScore: 76, + expectedIssues: ['missing-technical-details'], + rationale: 'Missing implementation details', + }, + { + text: 'Write documentation for API endpoints in readme', + expectedScore: 72, + expectedIssues: ['missing-technical-details'], + rationale: 'Missing specifics', + }, + { + text: 'Add caching to expensive database queries', + expectedScore: 74, + expectedIssues: ['missing-technical-details', 
'insufficient-constraints'], + rationale: 'Which queries?', + }, + { + text: 'Create reusable Modal component for confirmation dialogs', + expectedScore: 77, + expectedIssues: ['missing-technical-details'], + rationale: 'Missing props interface', + }, + { + text: 'Fix mobile layout issues on checkout page', + expectedScore: 73, + expectedIssues: ['missing-technical-details'], + rationale: 'Missing breakpoints', + }, + { + text: 'Add pagination to users list in admin panel', + expectedScore: 78, + expectedIssues: ['missing-technical-details'], + rationale: 'Missing page size', + }, + { + text: 'Implement search for blog posts', + expectedScore: 75, + expectedIssues: ['missing-technical-details'], + rationale: 'Missing criteria', + }, + { + text: 'Add input sanitization to prevent XSS in comments', + expectedScore: 79, + expectedIssues: ['missing-technical-details'], + rationale: 'Missing files', + }, +]; + +export const FAIR_PROMPTS: readonly GoldStandardPrompt[] = [ + { + text: 'Fix the bug in the login', + expectedScore: 55, + expectedIssues: ['no-context', 'missing-technical-details'], + rationale: 'What bug?', + }, + { + text: 'Make the page faster', + expectedScore: 50, + expectedIssues: ['vague', 'no-context', 'missing-technical-details'], + rationale: 'Which page?', + }, + { + text: 'Add some tests', + expectedScore: 45, + expectedIssues: ['vague', 'no-context', 'insufficient-constraints'], + rationale: 'What tests?', + }, + { + text: 'Clean up the code', + expectedScore: 42, + expectedIssues: ['vague', 'no-context'], + rationale: 'What code?', + }, + { + text: 'There is an error somewhere', + expectedScore: 38, + expectedIssues: ['vague', 'no-context', 'missing-technical-details'], + rationale: 'No location', + }, + { + text: 'Improve this component', + expectedScore: 48, + expectedIssues: ['vague', 'no-context', 'no-goal'], + rationale: 'Which component?', + }, + { + text: 'Update the dependencies', + expectedScore: 55, + expectedIssues: ['no-goal', 
'insufficient-constraints'], + rationale: 'Which ones?', + }, + { + text: 'Something is wrong with authentication', + expectedScore: 40, + expectedIssues: ['vague', 'no-context', 'missing-technical-details'], + rationale: 'No details', + }, + { + text: 'Handle edge cases', + expectedScore: 44, + expectedIssues: ['vague', 'no-context'], + rationale: 'What cases?', + }, + { + text: 'Make it more secure', + expectedScore: 46, + expectedIssues: ['vague', 'no-context', 'no-goal'], + rationale: 'What concerns?', + }, + { + text: 'Add error handling', + expectedScore: 52, + expectedIssues: ['no-context', 'missing-technical-details'], + rationale: 'Where?', + }, + { + text: 'Refactor this file', + expectedScore: 45, + expectedIssues: ['vague', 'no-context', 'no-goal'], + rationale: 'Which file?', + }, + { + text: 'The button does not work', + expectedScore: 50, + expectedIssues: ['no-context', 'missing-technical-details'], + rationale: 'Which button?', + }, + { + text: 'Add logging', + expectedScore: 54, + expectedIssues: ['no-context', 'insufficient-constraints'], + rationale: 'Where?', + }, + { + text: 'Check the API', + expectedScore: 48, + expectedIssues: ['vague', 'no-goal', 'no-context'], + rationale: 'Check what?', + }, +]; + +export const POOR_PROMPTS: readonly GoldStandardPrompt[] = [ + { + text: 'Fix it', + expectedScore: 15, + expectedIssues: ['vague', 'no-context', 'no-goal'], + rationale: 'No information', + }, + { + text: 'Help', + expectedScore: 10, + expectedIssues: ['vague', 'no-context', 'no-goal'], + rationale: 'Single word', + }, + { + text: 'Make it work', + expectedScore: 20, + expectedIssues: ['vague', 'no-context', 'no-goal'], + rationale: 'What?', + }, + { + text: 'Debug', + expectedScore: 12, + expectedIssues: ['vague', 'no-context', 'no-goal', 'imperative'], + rationale: 'Just a verb', + }, + { + text: '???', + expectedScore: 5, + expectedIssues: ['vague', 'no-context', 'no-goal'], + rationale: 'Not a request', + }, + { + text: 'Code', + 
expectedScore: 8, + expectedIssues: ['vague', 'no-context', 'no-goal'], + rationale: 'Single word', + }, + { + text: 'Do the thing', + expectedScore: 18, + expectedIssues: ['vague', 'no-context', 'no-goal'], + rationale: 'What thing?', + }, + { + text: 'Error', + expectedScore: 10, + expectedIssues: ['vague', 'no-context', 'no-goal'], + rationale: 'Just a word', + }, + { + text: 'Please', + expectedScore: 5, + expectedIssues: ['vague', 'no-context', 'no-goal'], + rationale: 'Says nothing', + }, + { + text: 'This', + expectedScore: 5, + expectedIssues: ['vague', 'no-context', 'no-goal'], + rationale: 'Meaningless', + }, +]; + +export const GOLD_STANDARD_PROMPTS: readonly GoldStandardPrompt[] = [ + ...EXCELLENT_PROMPTS, + ...GOOD_PROMPTS, + ...FAIR_PROMPTS, + ...POOR_PROMPTS, +]; + +export function getExpectedScoreRange(issueCount: number): { + min: number; + max: number; +} { + if (issueCount === 0) return { min: 80, max: 100 }; + if (issueCount === 1) return { min: 60, max: 85 }; + if (issueCount === 2) return { min: 40, max: 70 }; + return { min: 0, max: 55 }; +} diff --git a/src/benchmark/index.ts b/src/benchmark/index.ts new file mode 100644 index 0000000..536a767 --- /dev/null +++ b/src/benchmark/index.ts @@ -0,0 +1,16 @@ +/** + * Benchmark module exports. 
+ */ +export { + calculateCorrelation, + EXCELLENT_PROMPTS, + FAIR_PROMPTS, + getExpectedScoreRange, + getScoreTier, + GOLD_STANDARD_PROMPTS, + type GoldStandardPrompt, + GOOD_PROMPTS, + POOR_PROMPTS, + scoresMatch, + type ScoreTier, +} from './gold-standard.js'; diff --git a/src/cli.ts b/src/cli.ts index cbdb6f3..4425fe8 100644 --- a/src/cli.ts +++ b/src/cli.ts @@ -1734,7 +1734,8 @@ export async function cli(): Promise<void> { const results: AnalysisResult[] = []; for (const group of groups) { - const prompts = group.prompts.map((p) => p.content); + const filteredPrompts = group.prompts.filter((p) => !p.isConfirmation); + const prompts = filteredPrompts.map((p) => p.content); const result = await analyzeWithProgress( provider, prompts, @@ -1743,7 +1744,7 @@ export async function cli(): Promise<void> { config.rules, isJsonMode, args.noCache, - group.prompts, + filteredPrompts, ); results.push(result); } @@ -1824,7 +1825,19 @@ export async function cli(): Promise<void> { } } else { // Single-day analysis - const prompts = logResult.prompts.map((p) => p.content); + const filteredPrompts = logResult.prompts.filter( + (p) => !p.isConfirmation, + ); + const confirmationsExcluded = + logResult.prompts.length - filteredPrompts.length; + if (confirmationsExcluded > 0) { + console.log( + chalk.dim( + ` Excluded ${String(confirmationsExcluded)} confirmation(s) (short replies to assistant questions)`, + ), + ); + } + const prompts = filteredPrompts.map((p) => p.content); + const date = args.date; let result: AnalysisResult | EnhancedAnalysisResult = await analyzeWithProgress( @@ -1835,7 +1848,7 @@ export async function cli(): Promise<void> { config.rules, isJsonMode, args.noCache, - logResult.prompts, + filteredPrompts, ); // Enrich with enhanced analytics if requested diff --git a/src/core/analyzer.test.ts b/src/core/analyzer.test.ts index 941faad..a7bed18 100644 --- a/src/core/analyzer.test.ts +++ b/src/core/analyzer.test.ts @@ -461,7 +461,7 @@ describe('mergeBatchResults', () => { }); const pattern
= merged.patterns.find((p) => p.id === 'p1'); - expect(pattern?.frequency).toBe(5); // (4 + 6) / 2 = 5 + expect(pattern?.frequency).toBe(10); // 4 + 6 = 10 (sum across batches) }); it('should take max severity for duplicate patterns', () => { diff --git a/src/core/analyzer.ts b/src/core/analyzer.ts index 912c6a4..dbd480d 100644 --- a/src/core/analyzer.ts +++ b/src/core/analyzer.ts @@ -572,7 +572,7 @@ export function mergeBatchResults( const severities = group.map((p) => p.severity); const allExamples = group.flatMap((p) => Array.from(p.examples)); - const mergedFrequency = Math.round(average(frequencies)); + const mergedFrequency = sum(frequencies); const mergedSeverity = maxSeverity(severities); const mergedExamples = limitExamples( allExamples, diff --git a/src/core/log-reader.test.ts b/src/core/log-reader.test.ts index 8240cc2..dbbc6f1 100644 --- a/src/core/log-reader.test.ts +++ b/src/core/log-reader.test.ts @@ -15,7 +15,9 @@ import { getProjects, groupByDay, isClaudeMessage, + isConfirmationMessage, parseDate, + parseLogEntry, readLogs, } from './log-reader.js'; @@ -148,13 +150,15 @@ describe('readLogs', () => { const result = await readLogs(); expect(result.prompts).toHaveLength(1); - expect(result.prompts[0]).toEqual({ - content: 'Test prompt', - timestamp: '2025-01-23T14:30:00.000Z', - sessionId: 'abc123', - project: 'my-project', - date: '2025-01-23', - }); + expect(result.prompts[0]).toEqual( + expect.objectContaining({ + content: 'Test prompt', + timestamp: '2025-01-23T14:30:00.000Z', + sessionId: 'abc123', + project: 'my-project', + date: '2025-01-23', + }), + ); }); it('handles malformed JSON lines gracefully', async () => { @@ -907,3 +911,249 @@ describe('readLogs with filters', () => { expect(firstWarning).toContain('Invalid date'); }); }); + +describe('parseLogEntry', () => { + it('should parse a user message with uuid and parentUuid', () => { + const line = JSON.stringify({ + uuid: 'user-1', + parentUuid: 'system-1', + type: 'user', + message: { 
role: 'user', content: 'si' }, + timestamp: '2026-04-07T10:00:00.000Z', + sessionId: 'sess-1', + cwd: '/test', + }); + + const result = parseLogEntry(line); + expect(result).not.toBeNull(); + expect(result?.uuid).toBe('user-1'); + expect(result?.entry.type).toBe('user'); + expect(result?.entry.parentUuid).toBe('system-1'); + expect(result?.entry.contentTail).toBe('si'); + }); + + it('should extract text from assistant array content', () => { + const line = JSON.stringify({ + uuid: 'asst-1', + type: 'assistant', + message: { + role: 'assistant', + content: [{ type: 'text', text: 'Should I proceed with the changes?' }], + }, + timestamp: '2026-04-07T10:00:00.000Z', + }); + + const result = parseLogEntry(line); + expect(result?.entry.contentTail).toBe( + 'Should I proceed with the changes?', + ); + }); + + it('should return null for lines without uuid', () => { + const line = JSON.stringify({ + type: 'system', + message: { content: 'test' }, + }); + expect(parseLogEntry(line)).toBeNull(); + }); + + it('should return null for invalid JSON', () => { + expect(parseLogEntry('not json')).toBeNull(); + }); + + it('should return null for empty lines', () => { + expect(parseLogEntry('')).toBeNull(); + expect(parseLogEntry(' ')).toBeNull(); + }); +}); + +describe('isConfirmationMessage', () => { + function buildIndex( + entries: { + uuid: string; + type: string; + contentTail: string | null; + parentUuid: string | null; + }[], + ): Map< + string, + { type: string; contentTail: string | null; parentUuid: string | null } + > { + const index = new Map< + string, + { type: string; contentTail: string | null; parentUuid: string | null } + >(); + for (const e of entries) { + index.set(e.uuid, { + type: e.type, + contentTail: e.contentTail, + parentUuid: e.parentUuid, + }); + } + return index; + } + + it('should detect confirmation via assistant→system→user chain', () => { + const index = buildIndex([ + { + uuid: 'asst-1', + type: 'assistant', + contentTail: 'Should I proceed?', + 
parentUuid: null, + }, + { + uuid: 'sys-1', + type: 'system', + contentTail: null, + parentUuid: 'asst-1', + }, + ]); + + expect(isConfirmationMessage('si', 'sys-1', index)).toBe(true); + }); + + it('should return false when assistant does not end with question mark', () => { + const index = buildIndex([ + { + uuid: 'asst-1', + type: 'assistant', + contentTail: 'Done. Changes applied.', + parentUuid: null, + }, + { + uuid: 'sys-1', + type: 'system', + contentTail: null, + parentUuid: 'asst-1', + }, + ]); + + expect(isConfirmationMessage('ok', 'sys-1', index)).toBe(false); + }); + + it('should return false for long messages even with question parent', () => { + const index = buildIndex([ + { + uuid: 'asst-1', + type: 'assistant', + contentTail: 'Should I proceed?', + parentUuid: null, + }, + ]); + + expect( + isConfirmationMessage( + 'si, hazme un refactor completo de auth', + 'asst-1', + index, + ), + ).toBe(false); + }); + + it('should return false when there is no parentUuid', () => { + const index = buildIndex([]); + expect(isConfirmationMessage('si', null, index)).toBe(false); + }); + + it('should return false when chain exceeds max hops', () => { + const index = buildIndex([ + { uuid: 's1', type: 'system', contentTail: null, parentUuid: 's2' }, + { uuid: 's2', type: 'system', contentTail: null, parentUuid: 's3' }, + { uuid: 's3', type: 'system', contentTail: null, parentUuid: 's4' }, + { uuid: 's4', type: 'system', contentTail: null, parentUuid: 's5' }, + { uuid: 's5', type: 'system', contentTail: null, parentUuid: 's6' }, + { + uuid: 's6', + type: 'assistant', + contentTail: 'Question?', + parentUuid: null, + }, + ]); + + expect(isConfirmationMessage('si', 's1', index)).toBe(false); + }); + + it('should detect direct assistant→user confirmation', () => { + const index = buildIndex([ + { + uuid: 'asst-1', + type: 'assistant', + contentTail: '¿Procedo con los cambios?', + parentUuid: null, + }, + ]); + + expect(isConfirmationMessage('si', 'asst-1', 
index)).toBe(true); + }); + + it('should return false when parentUuid is not in index', () => { + const index = buildIndex([]); + expect(isConfirmationMessage('si', 'unknown-uuid', index)).toBe(false); + }); +}); + +describe('readJsonlFile confirmation integration', () => { + it('should mark short replies to assistant questions as confirmations', async () => { + mockExistsSync.mockReturnValue(true); + mockGlob.mockResolvedValue(['/mock/.claude/projects/my-project/log.jsonl']); + + const lines = [ + JSON.stringify({ + uuid: 'asst-1', + parentUuid: null, + type: 'assistant', + message: { + role: 'assistant', + content: [ + { type: 'text', text: '¿Quieres que proceda con los cambios?' }, + ], + }, + timestamp: '2026-04-07T10:00:00.000Z', + sessionId: 'sess-1', + cwd: '/test', + }), + JSON.stringify({ + uuid: 'sys-1', + parentUuid: 'asst-1', + type: 'system', + message: { role: 'system', content: 'stop_hook_summary' }, + timestamp: '2026-04-07T10:00:01.000Z', + sessionId: 'sess-1', + cwd: '/test', + }), + JSON.stringify({ + uuid: 'user-1', + parentUuid: 'sys-1', + type: 'user', + message: { role: 'user', content: 'si' }, + timestamp: '2026-04-07T10:00:02.000Z', + sessionId: 'sess-1', + cwd: '/test', + }), + JSON.stringify({ + uuid: 'user-2', + parentUuid: 'user-1', + type: 'user', + message: { + role: 'user', + content: 'Refactoriza el módulo de autenticación', + }, + timestamp: '2026-04-07T10:01:00.000Z', + sessionId: 'sess-1', + cwd: '/test', + }), + ].join('\n'); + + mockReadFile.mockResolvedValue(lines); + + const result = await readLogs(); + + expect(result.prompts).toHaveLength(2); + expect(result.prompts[0]?.content).toBe('si'); + expect(result.prompts[0]?.isConfirmation).toBe(true); + expect(result.prompts[1]?.content).toBe( + 'Refactoriza el módulo de autenticación', + ); + expect(result.prompts[1]?.isConfirmation).toBe(false); + }); +}); diff --git a/src/core/log-reader.ts b/src/core/log-reader.ts index 274cf4e..e4f399b 100644 --- a/src/core/log-reader.ts +++ 
b/src/core/log-reader.ts @@ -217,6 +217,95 @@ export function extractContent(message: ClaudeMessage): string { return message.message.content; } +// ============================================================================= +// Confirmation Detection +// ============================================================================= + +const MAX_CONFIRMATION_LENGTH = 20; +const MAX_CHAIN_HOPS = 5; + +type IndexEntry = { + readonly type: string; + readonly contentTail: string | null; + readonly parentUuid: string | null; +}; + +/** + * Lightweight parser that extracts minimal fields from any log entry type. + * Used to build the message index for confirmation detection. + * Separate from parseLine/isClaudeMessage to handle assistant array content. + */ +export function parseLogEntry( + line: string, +): { uuid: string; entry: IndexEntry } | null { + if (!line.trim()) return null; + + try { + const obj = JSON.parse(line) as Record<string, unknown>; + const uuid = obj['uuid']; + if (typeof uuid !== 'string') return null; + + const type = typeof obj['type'] === 'string' ? obj['type'] : 'unknown'; + const parentUuid = + typeof obj['parentUuid'] === 'string' ? obj['parentUuid'] : null; + + let contentTail: string | null = null; + const message = obj['message'] as Record<string, unknown> | undefined; + if (message) { + const content = message['content']; + if (typeof content === 'string') { + contentTail = content.slice(-100); + } else if (Array.isArray(content)) { + const textBlock = content.find( + (block: unknown) => + typeof block === 'object' && + block !== null && + (block as Record<string, unknown>)['type'] === 'text', + ) as Record<string, unknown> | undefined; + const text = textBlock?.['text']; + if (typeof text === 'string') { + contentTail = text.slice(-100); + } + } + } + + return { uuid, entry: { type, contentTail, parentUuid } }; + } catch { + return null; + } +} + +/** + * Detects if a short user message is a confirmation response to an assistant question.
+ * Walks the parentUuid chain backwards to find the nearest assistant message + * and checks if it ends with a question mark. + */ +export function isConfirmationMessage( + content: string, + parentUuid: string | null, + index: ReadonlyMap<string, IndexEntry>, +): boolean { + if (content.trim().length > MAX_CONFIRMATION_LENGTH) return false; + + let currentUuid = parentUuid; + for (let hop = 0; hop < MAX_CHAIN_HOPS; hop++) { + if (!currentUuid) return false; + const entry = index.get(currentUuid); + if (!entry) return false; + + if (entry.type === 'assistant') { + return entry.contentTail?.trimEnd().endsWith('?') === true; + } + currentUuid = entry.parentUuid; + } + + return false; +} + +// ============================================================================= +// File Reading +// ============================================================================= + /** * Reads and parses a single JSONL file. * @@ -236,6 +325,16 @@ async function readJsonlFile(filePath: string): Promise<{ const lines = content.split('\n'); let skippedLines = 0; + // First pass: build message index for confirmation detection + const messageIndex = new Map<string, IndexEntry>(); + for (const line of lines) { + const parsed = parseLogEntry(line); + if (parsed) { + messageIndex.set(parsed.uuid, parsed.entry); + } + } + + // Second pass: extract user prompts with confirmation detection for (let i = 0; i < lines.length; i++) { const line = lines[i] ?? ''; const lineNumber = i + 1; @@ -257,12 +356,21 @@ async function readJsonlFile(filePath: string): Promise<{ continue; } + // Get parentUuid for this user message from the index + const logEntry = parseLogEntry(line); + const userParentUuid = logEntry?.entry.parentUuid ??
null; + prompts.push({ content: extractedContent, timestamp: message.timestamp, sessionId: message.sessionId, project: projectName, date: extractDate(message.timestamp), + isConfirmation: isConfirmationMessage( + extractedContent, + userParentUuid, + messageIndex, + ), }); } diff --git a/src/core/model-suggester.test.ts b/src/core/model-suggester.test.ts index ef9aaf9..337d15b 100644 --- a/src/core/model-suggester.test.ts +++ b/src/core/model-suggester.test.ts @@ -11,7 +11,7 @@ describe('model-suggester', () => { it('should suggest qwen2.5:14b when available with 8GB+ RAM', () => { const systemInfo: SystemInfo = { ramGB: 16 }; const models: OllamaModelInfo[] = [ - { name: 'gemma3:4b:latest', size: 2000000000 }, + { name: 'gemma4:e4b:latest', size: 5000000000 }, { name: 'qwen2.5:14b', size: 9000000000 }, { name: 'mistral:7b', size: 4000000000 }, ]; @@ -25,40 +25,40 @@ describe('model-suggester', () => { it('should suggest mistral:7b when qwen not available but mistral is, with 8GB+ RAM', () => { const systemInfo: SystemInfo = { ramGB: 16 }; const models: OllamaModelInfo[] = [ - { name: 'gemma3:4b:latest', size: 2000000000 }, + { name: 'gemma4:e4b:latest', size: 5000000000 }, { name: 'mistral:7b', size: 4000000000 }, ]; const result = suggestBestModel(systemInfo, models); expect(result.suggestedModel).toBe('mistral:7b'); - expect(result.reason).toContain('Small Schema'); + expect(result.reason).toContain('Full Schema'); }); - it('should suggest gemma3:4b when available with low RAM', () => { + it('should suggest gemma4:e4b when available with low RAM', () => { const systemInfo: SystemInfo = { ramGB: 4 }; const models: OllamaModelInfo[] = [ - { name: 'gemma3:4b:latest', size: 2000000000 }, + { name: 'gemma4:e4b:latest', size: 5000000000 }, { name: 'mistral:7b', size: 4000000000 }, ]; const result = suggestBestModel(systemInfo, models); - expect(result.suggestedModel).toBe('gemma3:4b'); - expect(result.reason).toContain('lightweight'); + 
expect(result.suggestedModel).toBe('gemma4:e4b'); + expect(result.reason).toContain('128K context'); }); - it('should fallback to gemma3:4b when no models available', () => { + it('should fallback to gemma4:e4b when no models available', () => { const systemInfo: SystemInfo = { ramGB: 16 }; const models: OllamaModelInfo[] = []; const result = suggestBestModel(systemInfo, models); - expect(result.suggestedModel).toBe('gemma3:4b'); + expect(result.suggestedModel).toBe('gemma4:e4b'); expect(result.reason).toContain('default'); }); - it('should fallback to gemma3:4b when only unknown models available', () => { + it('should fallback to gemma4:e4b when only unknown models available', () => { const systemInfo: SystemInfo = { ramGB: 16 }; const models: OllamaModelInfo[] = [ { name: 'unknown-model:latest', size: 5000000000 }, @@ -66,13 +66,13 @@ describe('model-suggester', () => { const result = suggestBestModel(systemInfo, models); - expect(result.suggestedModel).toBe('gemma3:4b'); + expect(result.suggestedModel).toBe('gemma4:e4b'); }); it('should prefer qwen2.5:14b over mistral:7b when both available with high RAM', () => { const systemInfo: SystemInfo = { ramGB: 32 }; const models: OllamaModelInfo[] = [ - { name: 'gemma3:4b:latest', size: 2000000000 }, + { name: 'gemma4:e4b:latest', size: 5000000000 }, { name: 'mistral:7b', size: 4000000000 }, { name: 'qwen2.5:14b', size: 9000000000 }, ]; @@ -85,7 +85,7 @@ describe('model-suggester', () => { it('should handle models with tags (e.g., :latest)', () => { const systemInfo: SystemInfo = { ramGB: 16 }; const models: OllamaModelInfo[] = [ - { name: 'gemma3:4b:latest', size: 2000000000 }, + { name: 'gemma4:e4b:latest', size: 5000000000 }, { name: 'mistral:7b-instruct', size: 4000000000 }, ]; @@ -94,21 +94,21 @@ describe('model-suggester', () => { expect(result.suggestedModel).toContain('mistral'); }); - it('should suggest gemma3:4b with exactly 8GB RAM when only gemma3:4b available', () => { + it('should suggest gemma4:e4b with 
exactly 8GB RAM when only gemma4:e4b available', () => { const systemInfo: SystemInfo = { ramGB: 8 }; const models: OllamaModelInfo[] = [ - { name: 'gemma3:4b:latest', size: 2000000000 }, + { name: 'gemma4:e4b:latest', size: 5000000000 }, ]; const result = suggestBestModel(systemInfo, models); - expect(result.suggestedModel).toBe('gemma3:4b'); + expect(result.suggestedModel).toBe('gemma4:e4b'); }); it('should suggest mistral:7b with exactly 8GB RAM when available', () => { const systemInfo: SystemInfo = { ramGB: 8 }; const models: OllamaModelInfo[] = [ - { name: 'gemma3:4b:latest', size: 2000000000 }, + { name: 'gemma4:e4b:latest', size: 5000000000 }, { name: 'mistral:7b', size: 4000000000 }, ]; @@ -122,7 +122,7 @@ describe('model-suggester', () => { it('should suggest mistral:7b with 8GB+ RAM when not installed', () => { const systemInfo: SystemInfo = { ramGB: 16 }; const models: OllamaModelInfo[] = [ - { name: 'gemma3:4b:latest', size: 2000000000 }, + { name: 'gemma4:e4b:latest', size: 5000000000 }, ]; const result = getInstallationSuggestion(systemInfo, models); @@ -132,21 +132,21 @@ describe('model-suggester', () => { expect(result?.benefits).toContain('Better analysis quality'); }); - it('should suggest gemma3:4b with 4-7GB RAM when not installed', () => { + it('should suggest gemma4:e4b with 4-7GB RAM when not installed', () => { const systemInfo: SystemInfo = { ramGB: 6 }; const models: OllamaModelInfo[] = []; const result = getInstallationSuggestion(systemInfo, models); expect(result).not.toBeNull(); - expect(result?.suggestedModel).toBe('gemma3:4b'); - expect(result?.benefits).toContain('Fast and lightweight'); + expect(result?.suggestedModel).toBe('gemma4:e4b'); + expect(result?.benefits).toContain('128K context'); }); it('should not suggest when mistral:7b already installed with 8GB+ RAM', () => { const systemInfo: SystemInfo = { ramGB: 16 }; const models: OllamaModelInfo[] = [ - { name: 'gemma3:4b:latest', size: 2000000000 }, + { name: 
'gemma4:e4b:latest', size: 5000000000 }, { name: 'mistral:7b', size: 4000000000 }, ]; @@ -158,7 +158,7 @@ describe('model-suggester', () => { it('should not suggest when qwen2.5:14b already installed with 8GB+ RAM', () => { const systemInfo: SystemInfo = { ramGB: 16 }; const models: OllamaModelInfo[] = [ - { name: 'gemma3:4b:latest', size: 2000000000 }, + { name: 'gemma4:e4b:latest', size: 5000000000 }, { name: 'qwen2.5:14b', size: 9000000000 }, ]; @@ -167,10 +167,10 @@ describe('model-suggester', () => { expect(result).toBeNull(); }); - it('should not suggest when gemma3:4b already installed with 4-7GB RAM', () => { + it('should not suggest when gemma4:e4b already installed with 4-7GB RAM', () => { const systemInfo: SystemInfo = { ramGB: 6 }; const models: OllamaModelInfo[] = [ - { name: 'gemma3:4b:latest', size: 2000000000 }, + { name: 'gemma4:e4b:latest', size: 5000000000 }, ]; const result = getInstallationSuggestion(systemInfo, models); @@ -201,7 +201,7 @@ describe('model-suggester', () => { it('should suggest with exactly 8GB RAM when mistral not installed', () => { const systemInfo: SystemInfo = { ramGB: 8 }; const models: OllamaModelInfo[] = [ - { name: 'gemma3:4b:latest', size: 2000000000 }, + { name: 'gemma4:e4b:latest', size: 5000000000 }, ]; const result = getInstallationSuggestion(systemInfo, models); @@ -210,14 +210,14 @@ describe('model-suggester', () => { expect(result?.suggestedModel).toBe('mistral:7b'); }); - it('should suggest with exactly 4GB RAM when gemma3:4b not installed', () => { + it('should suggest with exactly 4GB RAM when gemma4:e4b not installed', () => { const systemInfo: SystemInfo = { ramGB: 4 }; const models: OllamaModelInfo[] = []; const result = getInstallationSuggestion(systemInfo, models); expect(result).not.toBeNull(); - expect(result?.suggestedModel).toBe('gemma3:4b'); + expect(result?.suggestedModel).toBe('gemma4:e4b'); }); }); @@ -227,7 +227,7 @@ describe('model-suggester', () => { const models: OllamaModelInfo[] = []; 
const modelSuggestion = suggestBestModel(systemInfo, models); - expect(modelSuggestion.suggestedModel).toBe('gemma3:4b'); + expect(modelSuggestion.suggestedModel).toBe('gemma4:e4b'); const installSuggestion = getInstallationSuggestion(systemInfo, models); expect(installSuggestion).not.toBeNull(); @@ -247,11 +247,11 @@ describe('model-suggester', () => { it('should handle very low RAM values', () => { const systemInfo: SystemInfo = { ramGB: 1 }; const models: OllamaModelInfo[] = [ - { name: 'gemma3:4b:latest', size: 2000000000 }, + { name: 'gemma4:e4b:latest', size: 5000000000 }, ]; const modelSuggestion = suggestBestModel(systemInfo, models); - expect(modelSuggestion.suggestedModel).toBe('gemma3:4b'); + expect(modelSuggestion.suggestedModel).toBe('gemma4:e4b'); const installSuggestion = getInstallationSuggestion(systemInfo, models); expect(installSuggestion).toBeNull(); // Too little RAM diff --git a/src/core/model-suggester.ts b/src/core/model-suggester.ts index 792b2fc..76ee055 100644 --- a/src/core/model-suggester.ts +++ b/src/core/model-suggester.ts @@ -38,9 +38,9 @@ const LOW_RAM_THRESHOLD = 4; * Priority order (with 8GB+ RAM): * 1. qwen2.5:14b if available * 2. mistral:7b if available - * 3. llama3.2 (default fallback) + * 3. gemma4:e4b (default fallback) * - * With less than 8GB RAM, always suggests llama3.2 for reliability. + * With less than 8GB RAM, always suggests gemma4:e4b for reliability. 
* * @param systemInfo - System hardware information * @param models - List of available Ollama models @@ -58,22 +58,22 @@ export function suggestBestModel( systemInfo: SystemInfo, models: OllamaModelInfo[], ): ModelRecommendation { - // With low RAM, always suggest llama3.2 for reliability + // With low RAM, suggest gemma4:e4b for reliability if (systemInfo.ramGB < HIGH_RAM_THRESHOLD) { - if (isModelAvailable('gemma3:4b', models)) { + if (isModelAvailable('gemma4:e4b', models)) { return { - suggestedModel: 'gemma3:4b', - reason: 'Fast and lightweight model, optimal for your system', + suggestedModel: 'gemma4:e4b', + reason: 'Fast model with 128K context and native function calling', }; } return { - suggestedModel: 'gemma3:4b', + suggestedModel: 'gemma4:e4b', reason: 'Recommended default model (needs installation)', }; } // With 8GB+ RAM, suggest better models if available - // Priority: qwen2.5:14b > mistral:7b > llama3.2 + // Priority: qwen2.5:14b > mistral:7b > gemma4:e4b if (isModelAvailable('qwen2.5:14b', models)) { return { @@ -85,20 +85,20 @@ export function suggestBestModel( if (isModelAvailable('mistral:7b', models)) { return { suggestedModel: 'mistral:7b', - reason: 'Better analysis quality with Small Schema', + reason: 'Good analysis quality with Full Schema', }; } - // Fallback to llama3.2 - if (isModelAvailable('gemma3:4b', models)) { + // Fallback to gemma4:e4b + if (isModelAvailable('gemma4:e4b', models)) { return { - suggestedModel: 'gemma3:4b', - reason: 'Fast and reliable default model', + suggestedModel: 'gemma4:e4b', + reason: 'Fast and reliable default model with 128K context', }; } return { - suggestedModel: 'gemma3:4b', + suggestedModel: 'gemma4:e4b', reason: 'Recommended default model (needs installation)', }; } @@ -107,7 +107,7 @@ export function suggestBestModel( * Determines if we should suggest installing a better model. 
* Logic: * - 8GB+ RAM: suggest mistral:7b if not installed (and qwen2.5:14b not installed) - * - 4-7GB RAM: suggest llama3.2 if not installed + * - 4-7GB RAM: suggest gemma4:e4b if not installed * - < 4GB RAM: no suggestion (insufficient resources) * * @param systemInfo - System hardware information @@ -139,7 +139,7 @@ export function getInstallationSuggestion( if (!hasMistral && !hasQwen) { return { suggestedModel: 'mistral:7b', - benefits: 'Better analysis quality with Small Schema', + benefits: 'Better analysis quality with Full Schema', installCommand: 'ollama pull mistral:7b', }; } @@ -147,14 +147,14 @@ export function getInstallationSuggestion( return null; // Already has optimal model } - // Low RAM (4-7GB): suggest llama3.2 if not installed - const hasLlama = isModelAvailable('gemma3:4b', models); + // Low RAM (4-7GB): suggest gemma4:e4b if not installed + const hasDefault = isModelAvailable('gemma4:e4b', models); - if (!hasLlama) { + if (!hasDefault) { return { - suggestedModel: 'gemma3:4b', - benefits: 'Fast and lightweight model for your system', - installCommand: 'ollama pull llama3.2', + suggestedModel: 'gemma4:e4b', + benefits: '128K context and native function calling', + installCommand: 'ollama pull gemma4:e4b', }; } diff --git a/src/core/reminder.test.ts b/src/core/reminder.test.ts index 2ae1adf..da99f96 100644 --- a/src/core/reminder.test.ts +++ b/src/core/reminder.test.ts @@ -31,7 +31,7 @@ vi.mock('../utils/env.js', () => ({ getEnvConfig: vi.fn(() => ({ reminder: '7d', services: ['ollama'], - ollama: { model: 'gemma3:4b', host: 'http://localhost:11434' }, + ollama: { model: 'gemma4:e4b', host: 'http://localhost:11434' }, anthropic: { model: 'claude-3-5-haiku-latest', apiKey: '' }, google: { model: 'gemini-2.0-flash-exp', apiKey: '' }, })), @@ -264,7 +264,7 @@ describe('shouldShowReminder', () => { vi.mocked(getEnvConfig).mockReturnValue({ reminder: 'never', services: ['ollama'], - ollama: { model: 'gemma3:4b', host: 'http://localhost:11434' }, + 
ollama: { model: 'gemma4:e4b', host: 'http://localhost:11434' }, anthropic: { model: 'claude-3-5-haiku-latest', apiKey: '' }, google: { model: 'gemini-2.0-flash-exp', apiKey: '' }, }); @@ -279,7 +279,7 @@ describe('shouldShowReminder', () => { vi.mocked(getEnvConfig).mockReturnValue({ reminder: '7d', services: ['ollama'], - ollama: { model: 'gemma3:4b', host: 'http://localhost:11434' }, + ollama: { model: 'gemma4:e4b', host: 'http://localhost:11434' }, anthropic: { model: 'claude-3-5-haiku-latest', apiKey: '' }, google: { model: 'gemini-2.0-flash-exp', apiKey: '' }, }); @@ -299,7 +299,7 @@ describe('shouldShowReminder', () => { vi.mocked(getEnvConfig).mockReturnValue({ reminder: '7d', services: ['ollama'], - ollama: { model: 'gemma3:4b', host: 'http://localhost:11434' }, + ollama: { model: 'gemma4:e4b', host: 'http://localhost:11434' }, anthropic: { model: 'claude-3-5-haiku-latest', apiKey: '' }, google: { model: 'gemini-2.0-flash-exp', apiKey: '' }, }); @@ -319,7 +319,7 @@ describe('shouldShowReminder', () => { vi.mocked(getEnvConfig).mockReturnValue({ reminder: '7d', services: ['ollama'], - ollama: { model: 'gemma3:4b', host: 'http://localhost:11434' }, + ollama: { model: 'gemma4:e4b', host: 'http://localhost:11434' }, anthropic: { model: 'claude-3-5-haiku-latest', apiKey: '' }, google: { model: 'gemini-2.0-flash-exp', apiKey: '' }, }); @@ -339,7 +339,7 @@ describe('shouldShowReminder', () => { vi.mocked(getEnvConfig).mockReturnValue({ reminder: '14d', services: ['ollama'], - ollama: { model: 'gemma3:4b', host: 'http://localhost:11434' }, + ollama: { model: 'gemma4:e4b', host: 'http://localhost:11434' }, anthropic: { model: 'claude-3-5-haiku-latest', apiKey: '' }, google: { model: 'gemini-2.0-flash-exp', apiKey: '' }, }); @@ -359,7 +359,7 @@ describe('shouldShowReminder', () => { vi.mocked(getEnvConfig).mockReturnValue({ reminder: '30d', services: ['ollama'], - ollama: { model: 'gemma3:4b', host: 'http://localhost:11434' }, + ollama: { model: 'gemma4:e4b', host: 
'http://localhost:11434' }, anthropic: { model: 'claude-3-5-haiku-latest', apiKey: '' }, google: { model: 'gemini-2.0-flash-exp', apiKey: '' }, }); @@ -379,7 +379,7 @@ describe('shouldShowReminder', () => { vi.mocked(getEnvConfig).mockReturnValue({ reminder: 'unknown', services: ['ollama'], - ollama: { model: 'gemma3:4b', host: 'http://localhost:11434' }, + ollama: { model: 'gemma4:e4b', host: 'http://localhost:11434' }, anthropic: { model: 'claude-3-5-haiku-latest', apiKey: '' }, google: { model: 'gemini-2.0-flash-exp', apiKey: '' }, }); diff --git a/src/core/results-storage.test.ts b/src/core/results-storage.test.ts index b0e038d..b216eb4 100644 --- a/src/core/results-storage.test.ts +++ b/src/core/results-storage.test.ts @@ -73,7 +73,7 @@ const mockMetadata: Omit = { date: '2025-12-26', project: 'test-project', provider: 'ollama', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }; @@ -104,14 +104,14 @@ describe('getPromptResultHash', () => { const hash1 = getPromptResultHash('test prompt', { date: '2025-12-26', project: 'project1', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); const hash2 = getPromptResultHash('test prompt', { date: '2025-12-26', project: 'project1', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -123,14 +123,14 @@ describe('getPromptResultHash', () => { const hash1 = getPromptResultHash('prompt 1', { date: '2025-12-26', project: 'project1', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); const hash2 = getPromptResultHash('prompt 2', { date: '2025-12-26', project: 'project1', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -141,14 +141,14 @@ describe('getPromptResultHash', () => { const hash1 = getPromptResultHash('test prompt', { date: '2025-12-26', project: 'project1', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); const hash2 = getPromptResultHash('test prompt', { date: '2025-12-27', project: 'project1', - model: 
'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -159,14 +159,14 @@ describe('getPromptResultHash', () => { const hash1 = getPromptResultHash('test prompt', { date: '2025-12-26', project: 'project1', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); const hash2 = getPromptResultHash('test prompt', { date: '2025-12-26', project: 'project2', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -177,7 +177,7 @@ describe('getPromptResultHash', () => { const hash1 = getPromptResultHash('test prompt', { date: '2025-12-26', project: 'project1', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -195,14 +195,14 @@ describe('getPromptResultHash', () => { const hash1 = getPromptResultHash('test prompt', { date: '2025-12-26', project: 'project1', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); const hash2 = getPromptResultHash('test prompt', { date: '2025-12-26', project: 'project1', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'minimal', }); @@ -213,14 +213,14 @@ describe('getPromptResultHash', () => { const hash1 = getPromptResultHash('test prompt', { date: '2025-12-26', project: undefined, - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); const hash2 = getPromptResultHash('test prompt', { date: '2025-12-26', project: undefined, - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -235,7 +235,7 @@ describe('getPromptResultHash', () => { const hash1 = getPromptResultHash('test', { date: '2025-12-26', project: 'test', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); diff --git a/src/core/semantic-validator.test.ts b/src/core/semantic-validator.test.ts new file mode 100644 index 0000000..ea6a7a0 --- /dev/null +++ b/src/core/semantic-validator.test.ts @@ -0,0 +1,274 @@ +import { describe, expect, it } from 'vitest'; + +import type { AnalysisResult } from '../types/index.js'; +import { + autoCorrectResult, + 
validateSemantics, + type ValidationResult, +} from './semantic-validator.js'; + +// Helper to create a valid base result +function createResult(overrides: Partial<AnalysisResult> = {}): AnalysisResult { + return { + date: '2025-01-15', + patterns: [], + stats: { totalPrompts: 10, promptsWithIssues: 0, overallScore: 90 }, + topSuggestion: 'Be more specific', + ...overrides, + }; +} + +// Helper to create a pattern +function createPattern(overrides: Record<string, unknown> = {}) { + return { + id: 'vague', + name: 'Vague Request', + frequency: 3, + severity: 'high' as const, + examples: ['help me'], + suggestion: 'Be specific', + beforeAfter: { before: 'help me', after: 'Help me fix X in Y' }, + ...overrides, + }; +} + +describe('validateSemantics', () => { + it('returns valid for a well-formed result', () => { + const result = createResult({ + patterns: [createPattern()], + stats: { totalPrompts: 10, promptsWithIssues: 3, overallScore: 70 }, + }); + const validation = validateSemantics(result, ['help me', 'fix bug']); + expect(validation.valid).toBe(true); + expect(validation.issues).toHaveLength(0); + }); + + it('detects score-mismatch when score is outside expected range for issue count', () => { + const result = createResult({ + patterns: [ + createPattern(), + createPattern({ id: 'no-context' }), + createPattern({ id: 'too-broad' }), + ], + stats: { totalPrompts: 10, promptsWithIssues: 7, overallScore: 95 }, + }); + const validation = validateSemantics(result, ['help me']); + expect(validation.issues.some((i) => i.type === 'score-mismatch')).toBe( + true, + ); + }); + + it('detects empty-patterns with low score', () => { + const result = createResult({ + patterns: [], + stats: { totalPrompts: 10, promptsWithIssues: 0, overallScore: 50 }, + }); + const validation = validateSemantics(result, ['some prompt']); + expect(validation.issues.some((i) => i.type === 'empty-patterns')).toBe( + true, + ); + }); + + it('detects duplicate-issues', () => { + const result = createResult({ + patterns: [
createPattern({ id: 'vague' }), + createPattern({ id: 'vague' }), + ], + }); + const validation = validateSemantics(result, ['help me']); + expect(validation.issues.some((i) => i.type === 'duplicate-issues')).toBe( + true, + ); + }); + + it('detects example-not-found when examples do not match prompts', () => { + const result = createResult({ + patterns: [ + createPattern({ + examples: ['this example does not exist in any prompt at all'], + }), + ], + }); + const validation = validateSemantics(result, [ + 'completely different text here', + ]); + expect(validation.issues.some((i) => i.type === 'example-not-found')).toBe( + true, + ); + }); + + it('detects stats-inconsistent when promptsWithIssues exceeds totalPrompts', () => { + const result = createResult({ + stats: { totalPrompts: 3, promptsWithIssues: 12, overallScore: 6 }, + }); + const validation = validateSemantics(result, ['a', 'b', 'c']); + const issue = validation.issues.find( + (i) => i.type === 'stats-inconsistent', + ); + expect(issue).toBeDefined(); + expect(issue?.severity).toBe('error'); + expect(validation.valid).toBe(false); + }); + + it('detects unknown-pattern-id for IDs not in taxonomy', () => { + const result = createResult({ + patterns: [ + createPattern({ id: 'vague-project-name', name: 'Vague Project Name' }), + ], + stats: { totalPrompts: 3, promptsWithIssues: 2, overallScore: 5 }, + }); + const validation = validateSemantics(result, ['help me']); + expect(validation.issues.some((i) => i.type === 'unknown-pattern-id')).toBe( + true, + ); + }); + + it('does not flag valid taxonomy IDs', () => { + const result = createResult({ + patterns: [ + createPattern({ id: 'vague' }), + createPattern({ id: 'no-context', name: 'Missing Context' }), + ], + stats: { totalPrompts: 10, promptsWithIssues: 5, overallScore: 5 }, + }); + const validation = validateSemantics(result, ['help me']); + expect( + validation.issues.filter((i) => i.type === 'unknown-pattern-id'), + ).toHaveLength(0); + }); + + 
it('detects placeholder-example in pattern examples', () => { + const result = createResult({ + patterns: [createPattern({ examples: ['example prompt 1'] })], + stats: { totalPrompts: 3, promptsWithIssues: 2, overallScore: 5 }, + }); + const validation = validateSemantics(result, ['example prompt 1']); + expect( + validation.issues.some((i) => i.type === 'placeholder-example'), + ).toBe(true); + }); +}); + +describe('autoCorrectResult', () => { + it('adjusts score to expected range midpoint', () => { + const result = createResult({ + patterns: [ + createPattern(), + createPattern({ id: 'no-context' }), + createPattern({ id: 'too-broad' }), + ], + stats: { totalPrompts: 10, promptsWithIssues: 7, overallScore: 95 }, + }); + const validation = validateSemantics(result, ['help me']); + const corrected = autoCorrectResult(result, validation); + expect(corrected.stats.overallScore).toBeLessThan(95); + }); + + it('removes duplicate patterns', () => { + const result = createResult({ + patterns: [ + createPattern({ id: 'vague' }), + createPattern({ id: 'vague' }), + ], + }); + const validation: ValidationResult = { + valid: true, + issues: [ + { + type: 'duplicate-issues', + message: 'Duplicate: vague', + severity: 'warning', + }, + ], + }; + const corrected = autoCorrectResult(result, validation); + expect(corrected.patterns).toHaveLength(1); + }); + + it('clamps promptsWithIssues to totalPrompts', () => { + const result = createResult({ + stats: { totalPrompts: 3, promptsWithIssues: 12, overallScore: 6 }, + }); + const validation: ValidationResult = { + valid: false, + issues: [ + { + type: 'stats-inconsistent', + message: 'promptsWithIssues exceeds totalPrompts', + severity: 'error', + }, + ], + }; + const corrected = autoCorrectResult(result, validation); + expect(corrected.stats.promptsWithIssues).toBe(3); + expect(corrected.stats.totalPrompts).toBe(3); + }); + + it('maps unknown pattern IDs to taxonomy', () => { + const result = createResult({ + patterns: [ + 
createPattern({ + id: 'vague-project-name', + name: 'Vague Project Name', + severity: 'high', + suggestion: 'custom suggestion', + }), + ], + }); + const validation: ValidationResult = { + valid: true, + issues: [ + { + type: 'unknown-pattern-id', + message: 'Unknown: vague-project-name', + severity: 'warning', + }, + ], + }; + const corrected = autoCorrectResult(result, validation); + expect(corrected.patterns[0]?.id).toBe('vague'); + expect(corrected.patterns[0]?.name).toBe('Vague Request'); + }); + + it('deduplicates patterns after remapping IDs', () => { + const result = createResult({ + patterns: [ + createPattern({ id: 'vague-project-name', name: 'Vague Project' }), + createPattern({ id: 'vague', name: 'Vague Request' }), + ], + }); + const validation: ValidationResult = { + valid: true, + issues: [ + { + type: 'unknown-pattern-id', + message: 'Unknown: vague-project-name', + severity: 'warning', + }, + ], + }; + const corrected = autoCorrectResult(result, validation); + expect(corrected.patterns.filter((p) => p.id === 'vague')).toHaveLength(1); + }); + + it('removes placeholder examples', () => { + const result = createResult({ + patterns: [ + createPattern({ examples: ['example prompt 1', 'real user prompt'] }), + ], + }); + const validation: ValidationResult = { + valid: true, + issues: [ + { + type: 'placeholder-example', + message: 'Placeholder detected', + severity: 'warning', + }, + ], + }; + const corrected = autoCorrectResult(result, validation); + expect(corrected.patterns[0]?.examples).toEqual(['real user prompt']); + }); +}); diff --git a/src/core/semantic-validator.ts b/src/core/semantic-validator.ts new file mode 100644 index 0000000..eeff3a3 --- /dev/null +++ b/src/core/semantic-validator.ts @@ -0,0 +1,308 @@ +/** + * Semantic Validation for Analysis Results + * + * Post-processing validation to catch logical inconsistencies + * that JSON schema validation cannot detect. 
+ */ + +import { getExpectedScoreRange } from '../benchmark/gold-standard.js'; +import { ISSUE_TAXONOMY } from '../providers/schemas.js'; +import type { AnalysisResult } from '../types/index.js'; +import { logger } from '../utils/logger-base.js'; + +export type ValidationIssue = { + readonly type: + | 'score-mismatch' + | 'example-not-found' + | 'empty-patterns' + | 'duplicate-issues' + | 'stats-inconsistent' + | 'unknown-pattern-id' + | 'placeholder-example'; + readonly message: string; + readonly severity: 'warning' | 'error'; +}; + +export type ValidationResult = { + readonly valid: boolean; + readonly issues: readonly ValidationIssue[]; + readonly adjustedResult?: AnalysisResult; +}; + +/** + * Validates an analysis result for semantic consistency. + * Checks that scores correlate with issue counts and examples exist in prompts. + */ +export function validateSemantics( + result: AnalysisResult, + originalPrompts: readonly string[], +): ValidationResult { + const issues: ValidationIssue[] = []; + + // 1. Check score-to-issues correlation + const { patterns } = result; + const issueCount = patterns.length; + const expectedRange = getExpectedScoreRange(issueCount); + const actualScore = result.stats.overallScore; + + if (actualScore < expectedRange.min || actualScore > expectedRange.max) { + issues.push({ + type: 'score-mismatch', + message: `Score ${String(actualScore)} is inconsistent with ${String(issueCount)} issues (expected ${String(expectedRange.min)}-${String(expectedRange.max)})`, + severity: 'warning', + }); + } + + // 2. Check for empty patterns with low score + if (issueCount === 0 && actualScore < 70) { + issues.push({ + type: 'empty-patterns', + message: `No patterns detected but score is ${String(actualScore)} (expected 70+)`, + severity: 'warning', + }); + } + + // 3. 
Check for duplicate issue IDs + if (patterns.length > 0) { + const seenIds = new Set(); + for (const pattern of patterns) { + if (seenIds.has(pattern.id)) { + issues.push({ + type: 'duplicate-issues', + message: `Duplicate pattern ID: ${pattern.id}`, + severity: 'warning', + }); + } + seenIds.add(pattern.id); + } + } + + // 4. Verify examples are substrings of original prompts + for (const pattern of patterns) { + const { examples } = pattern; + for (const example of examples) { + const found = originalPrompts.some( + (prompt) => + prompt.includes(example) || + example.includes(prompt.slice(0, Math.min(50, prompt.length))), + ); + if (!found && example.length > 10) { + issues.push({ + type: 'example-not-found', + message: `Example "${example.slice(0, 30)}..." not found in prompts`, + severity: 'warning', + }); + } + } + } + + // 5. Check stats consistency: promptsWithIssues must not exceed totalPrompts + if (result.stats.promptsWithIssues > result.stats.totalPrompts) { + issues.push({ + type: 'stats-inconsistent', + message: `promptsWithIssues (${String(result.stats.promptsWithIssues)}) exceeds totalPrompts (${String(result.stats.totalPrompts)})`, + severity: 'error', + }); + } + + // 6. Check for unknown pattern IDs not in taxonomy + const validIds = Object.keys(ISSUE_TAXONOMY); + for (const pattern of result.patterns) { + if (!validIds.includes(pattern.id)) { + issues.push({ + type: 'unknown-pattern-id', + message: `Unknown pattern ID: "${pattern.id}" (valid: ${validIds.join(', ')})`, + severity: 'warning', + }); + } + } + + // 7. 
Check for placeholder examples in patterns + const PLACEHOLDER_TEXTS = [ + 'example prompt 1', + 'example prompt 2', + 'example prompt 3', + 'original prompt', + 'improved prompt', + ]; + for (const pattern of result.patterns) { + for (const example of pattern.examples) { + if (PLACEHOLDER_TEXTS.some((p) => example.toLowerCase().includes(p))) { + issues.push({ + type: 'placeholder-example', + message: `Placeholder example detected: "${example.slice(0, 40)}"`, + severity: 'warning', + }); + } + } + } + + // Log warnings for debugging + if (issues.length > 0) { + logger.debug( + `Semantic validation found ${String(issues.length)} issue(s)`, + 'validator', + ); + for (const issue of issues) { + logger.debug(` [${issue.type}] ${issue.message}`, 'validator'); + } + } + + return { + valid: issues.filter((i) => i.severity === 'error').length === 0, + issues, + }; +} + +/** + * Finds the closest taxonomy ID for an unknown pattern ID. + * Uses substring and keyword matching as heuristics. + */ +function findClosestTaxonomyId( + unknownId: string, + validIds: readonly string[], +): string | undefined { + const normalized = unknownId.toLowerCase(); + + // Direct substring match: check if any valid ID (with or without hyphens) is contained in the unknown ID + for (const validId of validIds) { + if (normalized.includes(validId.replaceAll('-', ''))) return validId; + if (normalized.includes(validId)) return validId; + } + + // Keyword-based matching + const keywordMap: Record<string, string> = { + vague: 'vague', + context: 'no-context', + broad: 'too-broad', + goal: 'no-goal', + imperative: 'imperative', + command: 'imperative', + technical: 'missing-technical-details', + detail: 'missing-technical-details', + priorit: 'unclear-priorities', + constraint: 'insufficient-constraints', + specif: 'vague', + unclear: 'vague', + missing: 'no-context', + }; + + for (const [keyword, taxonomyId] of Object.entries(keywordMap)) { + if (normalized.includes(keyword)) return taxonomyId; + } + + return undefined; +} + +/** + * Attempts
to fix common semantic issues in results. + * Returns adjusted result if fixes were applied. + */ +export function autoCorrectResult( + result: AnalysisResult, + validation: ValidationResult, +): AnalysisResult { + let adjusted = result; + + for (const issue of validation.issues) { + if (issue.type === 'score-mismatch') { + // Adjust score to match issue count + const { patterns } = result; + const issueCount = patterns.length; + const expectedRange = getExpectedScoreRange(issueCount); + const midpoint = Math.round((expectedRange.min + expectedRange.max) / 2); + + if ( + result.stats.overallScore < expectedRange.min || + result.stats.overallScore > expectedRange.max + ) { + adjusted = { + ...adjusted, + stats: { + ...adjusted.stats, + overallScore: midpoint, + }, + }; + logger.debug( + `Auto-corrected score from ${String(result.stats.overallScore)} to ${String(midpoint)}`, + 'validator', + ); + } + } + + if (issue.type === 'duplicate-issues') { + // Remove duplicate patterns + const seen = new Set(); + const uniquePatterns = adjusted.patterns.filter((p) => { + if (seen.has(p.id)) return false; + seen.add(p.id); + return true; + }); + adjusted = { ...adjusted, patterns: uniquePatterns }; + } + + if (issue.type === 'stats-inconsistent') { + adjusted = { + ...adjusted, + stats: { + ...adjusted.stats, + promptsWithIssues: Math.min( + adjusted.stats.promptsWithIssues, + adjusted.stats.totalPrompts, + ), + }, + }; + } + + if (issue.type === 'unknown-pattern-id') { + const validIds = Object.keys(ISSUE_TAXONOMY); + const mappedPatterns = adjusted.patterns.map((pattern) => { + if (validIds.includes(pattern.id)) return pattern; + const matchedId = findClosestTaxonomyId(pattern.id, validIds); + const metadata = matchedId ? 
ISSUE_TAXONOMY[matchedId] : undefined; + if (matchedId && metadata) { + return { + ...pattern, + id: matchedId, + name: metadata.name, + severity: metadata.severity, + suggestion: metadata.suggestion, + }; + } + return pattern; + }); + // Deduplicate after remapping (multiple IDs may map to same taxonomy ID) + const seen = new Set(); + adjusted = { + ...adjusted, + patterns: mappedPatterns.filter((p) => { + if (seen.has(p.id)) return false; + seen.add(p.id); + return true; + }), + }; + } + + if (issue.type === 'placeholder-example') { + const PLACEHOLDER_TEXTS = [ + 'example prompt 1', + 'example prompt 2', + 'example prompt 3', + 'original prompt', + 'improved prompt', + ]; + adjusted = { + ...adjusted, + patterns: adjusted.patterns.map((pattern) => ({ + ...pattern, + examples: pattern.examples.filter( + (ex) => + !PLACEHOLDER_TEXTS.some((p) => ex.toLowerCase().includes(p)), + ), + })), + }; + } + } + + return adjusted; +} diff --git a/src/core/setup.test.ts b/src/core/setup.test.ts index 1480b4d..292d754 100644 --- a/src/core/setup.test.ts +++ b/src/core/setup.test.ts @@ -68,7 +68,7 @@ describe('setup', () => { it('should complete successfully with ollama provider selected', async () => { vi.mocked(prompts) .mockResolvedValueOnce({ providers: ['ollama'] }) // Provider selection - .mockResolvedValueOnce({ model: 'gemma3:4b' }) // Ollama model + .mockResolvedValueOnce({ model: 'gemma4:e4b' }) // Ollama model .mockResolvedValueOnce({ host: 'http://localhost:11434' }) // Ollama host .mockResolvedValueOnce({ reminder: '7d' }) // Reminder .mockResolvedValueOnce({ saveToShell: true }); // Save to shell @@ -83,10 +83,10 @@ describe('setup', () => { const config = await runSetup(); expect(config.services).toEqual(['ollama']); - expect(config.ollama.model).toBe('gemma3:4b'); + expect(config.ollama.model).toBe('gemma4:e4b'); expect(config.ollama.host).toBe('http://localhost:11434'); expect(process.env['HYNTX_SERVICES']).toBe('ollama'); - 
expect(process.env['HYNTX_OLLAMA_MODEL']).toBe('gemma3:4b'); + expect(process.env['HYNTX_OLLAMA_MODEL']).toBe('gemma4:e4b'); }); it('should complete successfully with anthropic provider selected', async () => { @@ -140,7 +140,7 @@ describe('setup', () => { it('should handle multiple providers selected', async () => { vi.mocked(prompts) .mockResolvedValueOnce({ providers: ['ollama', 'anthropic'] }) - .mockResolvedValueOnce({ model: 'gemma3:4b' }) // Ollama model + .mockResolvedValueOnce({ model: 'gemma4:e4b' }) // Ollama model .mockResolvedValueOnce({ host: 'http://localhost:11434' }) // Ollama host .mockResolvedValueOnce({ model: 'claude-3-5-haiku-latest' }) // Anthropic model .mockResolvedValueOnce({ apiKey: 'sk-test-key' }) // Anthropic key @@ -150,7 +150,7 @@ describe('setup', () => { const config = await runSetup(); expect(config.services).toEqual(['ollama', 'anthropic']); - expect(config.ollama.model).toBe('gemma3:4b'); + expect(config.ollama.model).toBe('gemma4:e4b'); expect(config.anthropic.apiKey).toBe('sk-test-key'); expect(process.env['HYNTX_SERVICES']).toBe('ollama,anthropic'); }); @@ -191,7 +191,7 @@ describe('setup', () => { const { EXIT_CODES } = await import('../types/index.js'); vi.mocked(prompts) .mockResolvedValueOnce({ providers: ['ollama'] }) - .mockResolvedValueOnce({ model: 'gemma3:4b' }) + .mockResolvedValueOnce({ model: 'gemma4:e4b' }) .mockResolvedValueOnce({ host: 'http://localhost:11434' }) .mockResolvedValueOnce({ reminder: '7d' }) .mockResolvedValueOnce({}); // User cancelled - empty response @@ -211,7 +211,7 @@ describe('setup', () => { it('should set environment variables for selected providers', async () => { vi.mocked(prompts) .mockResolvedValueOnce({ providers: ['ollama'] }) - .mockResolvedValueOnce({ model: 'gemma3:4b' }) + .mockResolvedValueOnce({ model: 'gemma4:e4b' }) .mockResolvedValueOnce({ host: 'http://localhost:11434' }) .mockResolvedValueOnce({ reminder: '7d' }) .mockResolvedValueOnce({ saveToShell: false }); @@ -220,14 
+220,14 @@ describe('setup', () => { expect(process.env['HYNTX_SERVICES']).toBe('ollama'); expect(process.env['HYNTX_REMINDER']).toBe('7d'); - expect(process.env['HYNTX_OLLAMA_MODEL']).toBe('gemma3:4b'); + expect(process.env['HYNTX_OLLAMA_MODEL']).toBe('gemma4:e4b'); expect(process.env['HYNTX_OLLAMA_HOST']).toBe('http://localhost:11434'); }); it('should not set anthropic env vars if provider not selected', async () => { vi.mocked(prompts) .mockResolvedValueOnce({ providers: ['ollama'] }) - .mockResolvedValueOnce({ model: 'gemma3:4b' }) + .mockResolvedValueOnce({ model: 'gemma4:e4b' }) .mockResolvedValueOnce({ host: 'http://localhost:11434' }) .mockResolvedValueOnce({ reminder: 'never' }) .mockResolvedValueOnce({ saveToShell: false }); @@ -241,7 +241,7 @@ describe('setup', () => { it('should call saveConfigToShell when user confirms', async () => { vi.mocked(prompts) .mockResolvedValueOnce({ providers: ['ollama'] }) - .mockResolvedValueOnce({ model: 'gemma3:4b' }) + .mockResolvedValueOnce({ model: 'gemma4:e4b' }) .mockResolvedValueOnce({ host: 'http://localhost:11434' }) .mockResolvedValueOnce({ reminder: '7d' }) .mockResolvedValueOnce({ saveToShell: true }); @@ -265,7 +265,7 @@ describe('setup', () => { it('should show manual instructions when saveConfigToShell fails', async () => { vi.mocked(prompts) .mockResolvedValueOnce({ providers: ['ollama'] }) - .mockResolvedValueOnce({ model: 'gemma3:4b' }) + .mockResolvedValueOnce({ model: 'gemma4:e4b' }) .mockResolvedValueOnce({ host: 'http://localhost:11434' }) .mockResolvedValueOnce({ reminder: '7d' }) .mockResolvedValueOnce({ saveToShell: true }); @@ -285,7 +285,7 @@ describe('setup', () => { it('should show manual instructions when user declines save', async () => { vi.mocked(prompts) .mockResolvedValueOnce({ providers: ['ollama'] }) - .mockResolvedValueOnce({ model: 'gemma3:4b' }) + .mockResolvedValueOnce({ model: 'gemma4:e4b' }) .mockResolvedValueOnce({ host: 'http://localhost:11434' }) .mockResolvedValueOnce({ 
reminder: '7d' }) .mockResolvedValueOnce({ saveToShell: false }); @@ -299,7 +299,7 @@ describe('setup', () => { it('should handle undefined saveToShell response (defaults to false)', async () => { vi.mocked(prompts) .mockResolvedValueOnce({ providers: ['ollama'] }) - .mockResolvedValueOnce({ model: 'gemma3:4b' }) + .mockResolvedValueOnce({ model: 'gemma4:e4b' }) .mockResolvedValueOnce({ host: 'http://localhost:11434' }) .mockResolvedValueOnce({ reminder: '7d' }) .mockResolvedValueOnce({ saveToShell: undefined }); @@ -321,14 +321,14 @@ describe('setup', () => { const config = await runSetup(); // Should use ENV_DEFAULTS values - expect(config.ollama.model).toBe('gemma3:4b'); + expect(config.ollama.model).toBe('gemma4:e4b'); expect(config.ollama.host).toBe('http://localhost:11434'); }); it('should handle reminder never option', async () => { vi.mocked(prompts) .mockResolvedValueOnce({ providers: ['ollama'] }) - .mockResolvedValueOnce({ model: 'gemma3:4b' }) + .mockResolvedValueOnce({ model: 'gemma4:e4b' }) .mockResolvedValueOnce({ host: 'http://localhost:11434' }) .mockResolvedValueOnce({ reminder: 'never' }) .mockResolvedValueOnce({ saveToShell: false }); @@ -343,7 +343,7 @@ describe('setup', () => { it('should update config when reminder is not never', async () => { vi.mocked(prompts) .mockResolvedValueOnce({ providers: ['ollama'] }) - .mockResolvedValueOnce({ model: 'gemma3:4b' }) + .mockResolvedValueOnce({ model: 'gemma4:e4b' }) .mockResolvedValueOnce({ host: 'http://localhost:11434' }) .mockResolvedValueOnce({ reminder: '30d' }) .mockResolvedValueOnce({ saveToShell: false }); @@ -386,7 +386,7 @@ describe('setup', () => { const config: EnvConfig = { services: ['ollama'], reminder: '7d', - ollama: { model: 'gemma3:4b', host: 'http://localhost:11434' }, + ollama: { model: 'gemma4:e4b', host: 'http://localhost:11434' }, anthropic: { model: 'claude-3-5-haiku-latest', apiKey: '' }, google: { model: 'gemini-2.0-flash-exp', apiKey: '' }, }; @@ -405,7 +405,7 @@ 
describe('setup', () => { const config: EnvConfig = { services: ['ollama'], reminder: 'never', - ollama: { model: 'gemma3:4b', host: 'http://localhost:11434' }, + ollama: { model: 'gemma4:e4b', host: 'http://localhost:11434' }, anthropic: { model: 'claude-3-5-haiku-latest', apiKey: '' }, google: { model: 'gemini-2.0-flash-exp', apiKey: '' }, }; diff --git a/src/providers/base.ts b/src/providers/base.ts index 7d53f5f..0e66c44 100644 --- a/src/providers/base.ts +++ b/src/providers/base.ts @@ -29,6 +29,7 @@ const PLACEHOLDER_PATTERNS = [ 'Improved version addressing the issue', '', 'Human-Readable Issue Name', + 'example prompt 1', ] as const; /** diff --git a/src/providers/ollama.test.ts b/src/providers/ollama.test.ts index a854245..13bfb69 100644 --- a/src/providers/ollama.test.ts +++ b/src/providers/ollama.test.ts @@ -15,10 +15,9 @@ import { detectBatchStrategy, OllamaProvider } from './ollama.js'; describe('OllamaProvider', () => { const mockConfig: OllamaConfig = { - model: 'llama3.2', + model: 'llama3:70b', host: 'http://localhost:11434', - // Force full schema for tests to match expected response format - schemaOverride: 'batch', + // Use standard-strategy model so auto-detection selects 'full' schema }; let provider: OllamaProvider; @@ -54,7 +53,7 @@ describe('OllamaProvider', () => { global.fetch = vi.fn().mockResolvedValue({ ok: true, json: async () => ({ - models: [{ name: 'llama3.2' }, { name: 'other-model' }], + models: [{ name: 'llama3:70b' }, { name: 'other-model' }], }), }); @@ -72,7 +71,7 @@ describe('OllamaProvider', () => { global.fetch = vi.fn().mockResolvedValue({ ok: true, json: async () => ({ - models: [{ name: 'llama3.2:latest' }, { name: 'other-model' }], + models: [{ name: 'llama3:70b-instruct' }, { name: 'other-model' }], }), }); @@ -238,7 +237,7 @@ describe('OllamaProvider', () => { if (!callArgs) throw new Error('Expected fetch to be called'); const body = JSON.parse(callArgs[1].body); - expect(body.model).toBe('llama3.2'); + 
expect(body.model).toBe('llama3:70b'); expect(body.stream).toBe(false); expect(body.options.temperature).toBe(0.3); expect(body.prompt).toContain('Test prompt'); @@ -275,7 +274,7 @@ describe('OllamaProvider', () => { }); await expect(provider.analyze(['Test'], '2025-01-15')).rejects.toThrow( - 'Ollama API request failed: 500 Internal Server Error', + 'Ollama API failed: 500 Internal Server Error', ); }); @@ -375,7 +374,7 @@ describe('OllamaProvider', () => { 'Failed to parse response as JSON', ); - expect(global.fetch).toHaveBeenCalledTimes(1); // No retries for parse errors + expect(global.fetch).toHaveBeenCalledTimes(3); // Tries all 3 temperature levels before failing }); it('should not retry on schema validation errors', async () => { @@ -390,7 +389,7 @@ describe('OllamaProvider', () => { 'Response does not match expected schema', ); - expect(global.fetch).toHaveBeenCalledTimes(1); // No retries for schema errors + expect(global.fetch).toHaveBeenCalledTimes(3); // Tries all 3 temperature levels before failing }); it('should use exponential backoff for retries', async () => { @@ -597,33 +596,44 @@ describe('OllamaProvider', () => { }); describe('getBatchLimits', () => { - it('should return micro strategy limits for llama3.2', () => { + it('should return micro strategy limits for llama3.2 (individual mode)', () => { const provider = new OllamaProvider({ model: 'llama3.2', host: 'http://localhost:11434', - // Without override, llama3.2 uses individual schema (maxPromptsPerBatch = 1) - // With batch override, it uses the strategy's actual limits - schemaOverride: 'batch', }); const limits = provider.getBatchLimits(); expect(limits.maxTokensPerBatch).toBe(500); - expect(limits.maxPromptsPerBatch).toBe(3); + // Micro models use individual schema → 1 prompt per batch + expect(limits.maxPromptsPerBatch).toBe(1); expect(limits.prioritization).toBe('longest-first'); }); - it('should return small strategy limits for mistral:7b', () => { + it('should return small strategy 
limits for mistral:7b (full schema mode)', () => { const provider = new OllamaProvider({ model: 'mistral:7b', host: 'http://localhost:11434', - // Without override, mistral:7b uses individual schema (maxPromptsPerBatch = 1) - schemaOverride: 'batch', }); const limits = provider.getBatchLimits(); expect(limits.maxTokensPerBatch).toBe(1500); + // Small models use full schema → up to 10 prompts per batch + expect(limits.maxPromptsPerBatch).toBe(10); + expect(limits.prioritization).toBe('longest-first'); + }); + + it('should return small strategy limits for gemma4:e4b (full schema mode)', () => { + const provider = new OllamaProvider({ + model: 'gemma4:e4b', + host: 'http://localhost:11434', + }); + + const limits = provider.getBatchLimits(); + + expect(limits.maxTokensPerBatch).toBe(1500); + // gemma4:e4b maps to small strategy → full schema → up to 10 prompts per batch expect(limits.maxPromptsPerBatch).toBe(10); expect(limits.prioritization).toBe('longest-first'); }); @@ -641,18 +651,17 @@ describe('OllamaProvider', () => { expect(limits.prioritization).toBe('longest-first'); }); - it('should return micro strategy limits for unknown models', () => { + it('should return micro strategy limits for unknown models (individual mode)', () => { const provider = new OllamaProvider({ model: 'unknown-model', host: 'http://localhost:11434', - // Unknown models default to micro strategy with individual schema - schemaOverride: 'batch', }); const limits = provider.getBatchLimits(); expect(limits.maxTokensPerBatch).toBe(500); - expect(limits.maxPromptsPerBatch).toBe(3); + // Unknown models default to micro → individual schema → 1 prompt per batch + expect(limits.maxPromptsPerBatch).toBe(1); expect(limits.prioritization).toBe('longest-first'); }); }); @@ -699,6 +708,18 @@ describe('detectBatchStrategy', () => { it('should detect standard strategy for qwen2.5:14b', () => { expect(detectBatchStrategy('qwen2.5:14b')).toBe('standard'); }); + + it('should detect micro strategy for 
gemma4:e2b', () => { + expect(detectBatchStrategy('gemma4:e2b')).toBe('micro'); + }); + + it('should detect small strategy for gemma4:e4b', () => { + expect(detectBatchStrategy('gemma4:e4b')).toBe('small'); + }); + + it('should detect standard strategy for gemma4:31b', () => { + expect(detectBatchStrategy('gemma4:31b')).toBe('standard'); + }); }); describe('partial match', () => { @@ -721,6 +742,10 @@ describe('detectBatchStrategy', () => { it('should detect standard strategy for mixtral-8x7b-instruct', () => { expect(detectBatchStrategy('mixtral-8x7b-instruct')).toBe('standard'); }); + + it('should detect small strategy for gemma4:e4b:latest', () => { + expect(detectBatchStrategy('gemma4:e4b:latest')).toBe('small'); + }); }); describe('default fallback', () => { @@ -746,21 +771,27 @@ describe('detectBatchStrategy', () => { const micro = BATCH_STRATEGIES.micro; expect(micro.maxTokensPerBatch).toBe(500); expect(micro.maxPromptsPerBatch).toBe(3); - expect(micro.description).toBe('For models < 4GB'); + expect(micro.description).toBe( + 'Conservative: individual schema, 1 prompt at a time', + ); }); it('should have correct small strategy configuration', () => { const small = BATCH_STRATEGIES.small; expect(small.maxTokensPerBatch).toBe(1500); expect(small.maxPromptsPerBatch).toBe(10); - expect(small.description).toBe('For models 4-7GB'); + expect(small.description).toBe( + 'Balanced: full schema, up to 10 prompts per batch', + ); }); it('should have correct standard strategy configuration', () => { const standard = BATCH_STRATEGIES.standard; expect(standard.maxTokensPerBatch).toBe(3000); expect(standard.maxPromptsPerBatch).toBe(50); - expect(standard.description).toBe('For models > 7GB'); + expect(standard.description).toBe( + 'Maximum: full schema, up to 50 prompts per batch', + ); }); }); }); diff --git a/src/providers/ollama.ts b/src/providers/ollama.ts index b9cf079..f811702 100644 --- a/src/providers/ollama.ts +++ b/src/providers/ollama.ts @@ -1,13 +1,11 @@ /** - * 
Ollama AI provider implementation. - * - * This module provides integration with local Ollama instances for prompt analysis. - * Features include: - * - Availability checking with timeout and model verification - * - Retry logic with exponential backoff for network errors - * - Support for both raw JSON and markdown-wrapped responses + * Ollama AI provider implementation with improved retry and validation. */ +import { + autoCorrectResult, + validateSemantics, +} from '../core/semantic-validator.js'; import { type AnalysisProvider, type AnalysisResult, @@ -30,145 +28,66 @@ import { SYSTEM_PROMPT_MINIMAL, } from './schemas.js'; -/** - * Maximum number of retry attempts for network errors. - */ const MAX_RETRIES = 2; - -/** - * Timeout for availability check (3 seconds). - */ +const TEMPERATURE_LEVELS = [0.3, 0.1, 0.0] as const; const AVAILABILITY_TIMEOUT_MS = 3000; - -/** - * Timeout for analysis request (60 seconds). - */ const ANALYSIS_TIMEOUT_MS = 60000; - -/** - * Base delay for exponential backoff (1 second). - */ const BASE_RETRY_DELAY_MS = 1000; -/** - * Model-to-strategy mapping for known Ollama models. - * Maps model names to their optimal batch strategy. - */ const MODEL_STRATEGY_MAP: Record<string, BatchStrategyType> = { - // Micro (< 4GB) 'llama3.2': 'micro', 'phi3:mini': 'micro', 'gemma3:4b': 'micro', 'gemma2:2b': 'micro', - - // Small (4-7GB) + 'gemma4:e2b': 'micro', 'mistral:7b': 'small', 'llama3:8b': 'small', 'codellama:7b': 'small', - - // Standard (> 7GB) + 'gemma4:e4b': 'small', 'llama3:70b': 'standard', mixtral: 'standard', 'qwen2.5:14b': 'standard', + 'gemma4:31b': 'standard', }; -/** - * Detects the optimal batch strategy for a given model. - * Uses exact and partial matching against known model names.
- * - * @param modelName - Name of the Ollama model - * @returns Batch strategy type - * - * @example - * ```typescript - * detectBatchStrategy('llama3.2') // 'micro' - * detectBatchStrategy('llama3.2:latest') // 'micro' (partial match) - * detectBatchStrategy('unknown-model') // 'micro' (safe default) - * ``` - */ export function detectBatchStrategy(modelName: string): BatchStrategyType { - // Check exact match first - if (MODEL_STRATEGY_MAP[modelName]) { - return MODEL_STRATEGY_MAP[modelName]; - } - - // Check partial match (e.g., "llama3.2:latest" matches "llama3.2") + if (MODEL_STRATEGY_MAP[modelName]) return MODEL_STRATEGY_MAP[modelName]; for (const [pattern, strategy] of Object.entries(MODEL_STRATEGY_MAP)) { - if (modelName.includes(pattern)) { - return strategy; - } + if (modelName.includes(pattern)) return strategy; } - - // Default to micro for unknown models (safest) return 'micro'; } -/** - * Ollama provider for local AI analysis. - * Implements the AnalysisProvider interface for Ollama instances. - */ export class OllamaProvider implements AnalysisProvider { public readonly name = 'Ollama'; private readonly config: OllamaConfig; private readonly batchStrategy: BatchStrategyType; private readonly schemaType: SchemaType; - /** - * Creates a new OllamaProvider instance. - * Automatically detects the optimal batch strategy and schema type based on model name. - * - * @param config - Ollama configuration with model and host - */ constructor(config: OllamaConfig) { this.config = config; this.batchStrategy = detectBatchStrategy(config.model); this.schemaType = this.selectSchemaType(); - const strategy = BATCH_STRATEGIES[this.batchStrategy]; logger.debug( - `Detected batch strategy: ${this.batchStrategy} (${strategy.description}), schema type: ${this.schemaType}`, + `Detected strategy: ${this.batchStrategy} (${strategy.description}), schema: ${this.schemaType}`, 'ollama', ); } - /** - * Selects the appropriate schema type based on model size and user override. 
- * Micro and small models use individual schema (hybrid approach) by default. - * Standard models use full schema for detailed analysis. - * User can override via CLI --analysis-mode flag. - * - * @returns Schema type identifier - */ private selectSchemaType(): SchemaType { - // Check for user override first - if (this.config.schemaOverride) { - return this.config.schemaOverride === 'individual' - ? 'individual' - : 'full'; + // Only override auto-detection when explicitly requesting 'individual' + // 'batch' mode should respect the model's strategy-based schema selection + if (this.config.schemaOverride === 'individual') { + return 'individual'; } - - // Auto-select based on model size - // Micro and small models use batch-individual hybrid for better accuracy - // This combines batching performance with individual result clarity - return ['micro', 'small'].includes(this.batchStrategy) - ? 'individual' - : 'full'; + return this.batchStrategy === 'micro' ? 'individual' : 'full'; } - /** - * Returns dynamic batch limits based on detected model strategy. - * When using individual schema, processes one prompt at a time for reliability. - * - * @returns Provider limits with model-specific constraints - */ public getBatchLimits(): ProviderLimits { const strategy = BATCH_STRATEGIES[this.batchStrategy]; - - // Individual schema processes one prompt at a time - // This is necessary because small models struggle to return arrays const maxPromptsPerBatch = this.schemaType === 'individual' ? 1 : strategy.maxPromptsPerBatch; - return { maxTokensPerBatch: strategy.maxTokensPerBatch, maxPromptsPerBatch, @@ -176,76 +95,47 @@ export class OllamaProvider implements AnalysisProvider { }; } - /** - * Checks if the Ollama service is available and has the required model. - * Uses a 3-second timeout to avoid hanging on unreachable services. 
- * - * @returns Promise that resolves to true if available, false otherwise - */ public async isAvailable(): Promise<boolean> { logger.debug(`Connecting to Ollama at ${this.config.host}`, 'ollama'); - try { const controller = new AbortController(); const timeoutId = setTimeout(() => { controller.abort(); }, AVAILABILITY_TIMEOUT_MS); - const response = await fetch(`${this.config.host}/api/tags`, { signal: controller.signal, }); - clearTimeout(timeoutId); - if (!response.ok) { logger.debug( - `Ollama API returned ${String(response.status)}`, + `Ollama API returned ${response.status.toString()}`, 'ollama', ); return false; } - const data = (await response.json()) as { models?: { name: string }[] }; - if (!data.models || !Array.isArray(data.models)) { logger.debug('Ollama returned invalid model list', 'ollama'); return false; } - - // Check if the configured model is available - const modelAvailable = data.models.some((model) => - model.name.includes(this.config.model), + const modelAvailable = data.models.some((m) => + m.name.includes(this.config.model), + ); + logger.debug( + modelAvailable + ? `Model ${this.config.model} available` + : `Model ${this.config.model} not found`, + 'ollama', ); - - if (modelAvailable) { - logger.debug( - `Model ${this.config.model} available (${String(data.models.length)} models found)`, - 'ollama', - ); - } else { - logger.debug( - `Model ${this.config.model} not found in available models`, - 'ollama', - ); - } - return modelAvailable; } catch { - // Network errors, timeouts, or JSON parse errors all indicate unavailability logger.debug('Ollama connection failed', 'ollama'); return false; } } /** - * Analyzes prompts using the Ollama service. - * Implements retry logic with exponential backoff for network errors.
- * - * @param prompts - Array of prompt strings to analyze - * @param date - Date context for the analysis - * @param context - Optional project context for analysis - * @returns Promise resolving to AnalysisResult - * @throws Error if analysis fails after retries or if response is invalid + * Analyzes prompts with temperature fallback and semantic validation. */ public async analyze( prompts: readonly string[], @@ -265,91 +155,110 @@ export class OllamaProvider implements AnalysisProvider { : SYSTEM_PROMPT_FULL; let lastError: Error | undefined; + let lastParseError: Error | undefined; + // Outer loop: network retries for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) { - try { - const controller = new AbortController(); - const timeoutId = setTimeout(() => { - controller.abort(); - }, ANALYSIS_TIMEOUT_MS); - - const response = await fetch(`${this.config.host}/api/generate`, { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - }, - body: JSON.stringify({ - model: this.config.model, - prompt: userPrompt, - system: systemPrompt, - stream: false, - format: 'json', - options: { - temperature: 0.3, - }, - }), - signal: controller.signal, - }); - - clearTimeout(timeoutId); - - if (!response.ok) { - const status = String(response.status); - throw new Error( - `Ollama API request failed: ${status} ${response.statusText}`, + // Inner loop: temperature fallback for parse errors + for (const temperature of TEMPERATURE_LEVELS) { + try { + const controller = new AbortController(); + const timeoutId = setTimeout(() => { + controller.abort(); + }, ANALYSIS_TIMEOUT_MS); + + const response = await fetch(`${this.config.host}/api/generate`, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify({ + model: this.config.model, + prompt: userPrompt, + system: systemPrompt, + stream: false, + format: 'json', + options: { temperature }, + }), + signal: controller.signal, + }); + + clearTimeout(timeoutId); + + if 
(!response.ok) { + throw new Error( + `Ollama API failed: ${response.status.toString()} ${response.statusText}`, + ); + } + + const data = (await response.json()) as { response?: string }; + if (typeof data.response !== 'string') { + throw new Error('Invalid response format from Ollama API'); + } + + // Parse response + let result: AnalysisResult; + logger.debug( + `Schema type: ${this.schemaType}, response length: ${String(data.response.length)}`, + 'ollama', ); - } - - const data = (await response.json()) as { response?: string }; - - if (typeof data.response !== 'string') { - throw new Error('Invalid response format from Ollama API'); - } - - // Parse and validate the response based on schema type - if (this.schemaType === 'individual') { - return parseBatchIndividualResponse(data.response, date, prompts); - } - return parseResponse(data.response, date, undefined, prompts); - } catch (error) { - lastError = error instanceof Error ? error : new Error(String(error)); - - // Don't retry parse errors - they won't succeed on retry - if ( - lastError.message.includes('parse') || - lastError.message.includes('schema') - ) { - throw lastError; - } - - // If this was the last attempt, throw the error - if (attempt === MAX_RETRIES) { + if (this.schemaType === 'individual') { + result = parseBatchIndividualResponse(data.response, date, prompts); + } else { + result = parseResponse(data.response, date, undefined, prompts); + } + + // Semantic validation + const validation = validateSemantics(result, prompts); + if (!validation.valid) { + logger.debug( + 'Semantic validation failed, attempting auto-correction', + 'ollama', + ); + result = autoCorrectResult(result, validation); + } + + return result; + } catch (error) { + const err = error instanceof Error ? 
error : new Error(String(error)); + + // Parse errors: try lower temperature + if (err.message.includes('parse') || err.message.includes('schema')) { + lastParseError = err; + logger.debug( + `Parse failed at temp ${temperature.toString()}, trying lower`, + 'ollama', + ); + continue; // Try next temperature + } + + // Network errors: break inner loop, retry outer + lastError = err; break; } + } - // Exponential backoff: 1s, 2s, 4s, ... - const delay = BASE_RETRY_DELAY_MS * Math.pow(2, attempt); - logger.debug( - `Retry attempt ${String(attempt + 1)}/${String(MAX_RETRIES)}, waiting ${String(delay)}ms`, - 'ollama', - ); - await sleep(delay); + // If we exhausted temperatures due to parse errors, throw + if (lastParseError && !lastError) { + throw lastParseError; } + + // If this was the last network retry, break + if (attempt === MAX_RETRIES) break; + + // Exponential backoff for network errors + const delay = BASE_RETRY_DELAY_MS * Math.pow(2, attempt); + logger.debug( + `Retry ${String(attempt + 1)}/${String(MAX_RETRIES)}, waiting ${String(delay)}ms`, + 'ollama', + ); + await sleep(delay); } - const attempts = String(MAX_RETRIES + 1); throw new Error( - `Ollama analysis failed after ${attempts} attempts: ${lastError?.message ?? 'Unknown error'}`, + `Ollama analysis failed after ${String(MAX_RETRIES + 1)} attempts: ${lastError?.message ?? lastParseError?.message ?? 'Unknown error'}`, ); } } -/** - * Sleep utility for retry backoff. 
- * - * @param ms - Milliseconds to sleep - * @returns Promise that resolves after the delay - */ function sleep(ms: number): Promise<void> { return new Promise((resolve) => setTimeout(resolve, ms)); } diff --git a/src/providers/schemas.test.ts b/src/providers/schemas.test.ts index 849b2d0..1d23451 100644 --- a/src/providers/schemas.test.ts +++ b/src/providers/schemas.test.ts @@ -114,9 +114,9 @@ describe('schemas', () => { }); it('should include examples', () => { - expect(SYSTEM_PROMPT_MINIMAL).toContain('Examples:'); + expect(SYSTEM_PROMPT_MINIMAL).toContain('Examples'); expect(SYSTEM_PROMPT_MINIMAL).toContain('Input:'); - expect(SYSTEM_PROMPT_MINIMAL).toContain('Output:'); + expect(SYSTEM_PROMPT_MINIMAL).toContain('->'); }); it('should specify score range', () => { @@ -184,12 +184,11 @@ describe('schemas', () => { }); describe('schema progression', () => { - it('should have minimal schema simpler than full schema', () => { - // Minimal should not require as many fields - const minimalFields = SYSTEM_PROMPT_MINIMAL.match(/"[^"]+"/g) ?? []; - const fullFields = SYSTEM_PROMPT_FULL.match(/"[^"]+"/g) ?? []; - - expect(minimalFields.length).toBeLessThan(fullFields.length); + it('should have minimal schema shorter than full schema', () => { + // Minimal should be shorter in total length + expect(SYSTEM_PROMPT_MINIMAL.length).toBeLessThan( + SYSTEM_PROMPT_FULL.length, + ); }); it('should have consistent score range across all schemas', () => { diff --git a/src/providers/schemas.ts b/src/providers/schemas.ts index 88ebb88..bf7bc7c 100644 --- a/src/providers/schemas.ts +++ b/src/providers/schemas.ts @@ -133,32 +133,48 @@ export const ISSUE_TAXONOMY: IssueTaxonomy = { * Minimal system prompt for small models. * Returns only issue IDs and score - no examples or detailed metadata. */ -export const SYSTEM_PROMPT_MINIMAL = `You analyze prompts for quality issues. +export const SYSTEM_PROMPT_MINIMAL = `You analyze coding prompts for quality issues.
Respond with JSON only: {"issues": ["issue-id", ...], "score": 0-100} Valid issue IDs: vague, no-context, too-broad, no-goal, imperative, missing-technical-details, unclear-priorities, insufficient-constraints Issue definitions: -- vague: Generic requests without specifics ("help", "fix", "improve") -- no-context: Missing background info (uses "this", "it", "the bug" without context) -- too-broad: Requests covering multiple unrelated topics -- no-goal: Ambiguous success criteria or desired outcome -- imperative: Commands without explanation or reasoning -- missing-technical-details: No file paths, function names, or error messages -- unclear-priorities: Multiple requests without ordering -- insufficient-constraints: No requirements or edge cases mentioned - -Scoring: 0-100 (100=perfect, 90+=excellent, 70-89=good, 50-69=fair, <50=poor) - -Examples: -Input: "Help me with code" -Output: {"issues": ["vague", "no-context"], "score": 35} - -Input: "Debug this TypeScript function that returns undefined" -Output: {"issues": ["missing-technical-details"], "score": 70} - -Input: "Debug calculateTotal() in utils.ts that returns undefined when called with empty array" -Output: {"issues": [], "score": 90}`; +- vague: Generic verbs without objects ("help", "fix", "improve" alone) +- no-context: References without context ("this", "it", "the bug") +- too-broad: Multiple unrelated tasks in one request +- no-goal: No clear desired outcome or success criteria +- imperative: Commands without explanation ("Add button") +- missing-technical-details: No file paths, function names, errors +- unclear-priorities: Multiple tasks without ordering +- insufficient-constraints: No requirements or edge cases + +Scoring guide: +- 90-100: Specific file/function, clear goal, actionable +- 70-89: Good intent but missing some details +- 50-69: Vague or missing context +- 25-49: Multiple major issues +- 0-24: Unusable (single words, no meaning) + +Examples (study the contrast): + +POOR (score 10-25): 
+Input: "Help" -> {"issues": ["vague", "no-context", "no-goal"], "score": 10} +Input: "Fix it" -> {"issues": ["vague", "no-context", "no-goal"], "score": 15} +Input: "Debug" -> {"issues": ["vague", "no-context", "no-goal"], "score": 12} + +FAIR (score 45-60): +Input: "Fix the login bug" -> {"issues": ["no-context", "missing-technical-details"], "score": 55} +Input: "Add error handling" -> {"issues": ["no-context", "missing-technical-details"], "score": 52} +Input: "Make it faster" -> {"issues": ["vague", "no-context"], "score": 48} + +GOOD (score 70-84): +Input: "Fix the login bug where users cannot reset password" -> {"issues": ["missing-technical-details"], "score": 72} +Input: "Add validation to signup form for email and password" -> {"issues": ["missing-technical-details"], "score": 78} + +EXCELLENT (score 85-100): +Input: "Fix null pointer in auth.ts line 45 when user.email is undefined" -> {"issues": [], "score": 95} +Input: "Add rate limiting to /api/login: 5 attempts per IP per minute" -> {"issues": [], "score": 93} +Input: "Refactor calculateTotal() to use reduce, keep return type number" -> {"issues": [], "score": 88}`; /** * Simple system prompt for medium models. diff --git a/src/types/index.ts b/src/types/index.ts index f453209..79fbfbf 100644 --- a/src/types/index.ts +++ b/src/types/index.ts @@ -61,6 +61,7 @@ export type ExtractedPrompt = { readonly sessionId: string; readonly project: string; readonly date: string; + readonly isConfirmation?: boolean; }; /** @@ -188,23 +189,23 @@ export type BatchStrategy = { * Available batch strategies by model size. 
*/ export const BATCH_STRATEGIES: Record<BatchStrategyType, BatchStrategy> = { - // For models < 4GB + // Conservative: individual schema, 1 prompt at a time micro: { maxTokensPerBatch: 500, maxPromptsPerBatch: 3, - description: 'For models < 4GB', + description: 'Conservative: individual schema, 1 prompt at a time', }, - // For models 4-7GB + // Balanced: full schema, up to 10 prompts per batch small: { maxTokensPerBatch: 1_500, maxPromptsPerBatch: 10, - description: 'For models 4-7GB', + description: 'Balanced: full schema, up to 10 prompts per batch', }, - // For models > 7GB + // Maximum: full schema, up to 50 prompts per batch standard: { maxTokensPerBatch: 3_000, maxPromptsPerBatch: 50, - description: 'For models > 7GB', + description: 'Maximum: full schema, up to 50 prompts per batch', }, } as const; @@ -316,7 +317,7 @@ export type EnvConfig = { export const ENV_DEFAULTS = { reminder: '7d', ollama: { - model: 'gemma3:4b', + model: 'gemma4:e4b', host: 'http://localhost:11434', }, anthropic: { diff --git a/tests/integration/api/library-usage.test.ts b/tests/integration/api/library-usage.test.ts index 9285715..2a69da9 100644 --- a/tests/integration/api/library-usage.test.ts +++ b/tests/integration/api/library-usage.test.ts @@ -156,7 +156,7 @@ describe('Library API Usage', () => { it('should export ENV_DEFAULTS as constants', () => { expect(ENV_DEFAULTS).toBeDefined(); expect(ENV_DEFAULTS.reminder).toBe('7d'); - expect(ENV_DEFAULTS.ollama.model).toBe('gemma3:4b'); + expect(ENV_DEFAULTS.ollama.model).toBe('gemma4:e4b'); expect(ENV_DEFAULTS.ollama.host).toBe('http://localhost:11434'); expect(ENV_DEFAULTS.anthropic.model).toBe('claude-3-5-haiku-latest'); expect(ENV_DEFAULTS.google.model).toBe('gemini-2.0-flash-exp'); diff --git a/tests/integration/core/incremental-analysis.test.ts b/tests/integration/core/incremental-analysis.test.ts index 0d6a31f..14abef6 100644 --- a/tests/integration/core/incremental-analysis.test.ts +++ b/tests/integration/core/incremental-analysis.test.ts @@ -188,7 +188,7 @@
describe('Incremental Analysis Integration', () => { // Check with different model const cacheCheck = await getPromptsWithCache( logResult.prompts, - 'gemma3:4b', // Different model + 'gemma4:e4b', // Different model 'full', ); @@ -394,8 +394,8 @@ describe('Incremental Analysis Integration', () => { // Note: The function uses simple average, not weighted by prompt count expect(merged.stats.overallScore).toBe(6); - // Pattern frequency: average of 0.5 and 1.0 = 0.75, rounded to 1 - expect(merged.patterns[0]?.frequency).toBe(1); + // Pattern frequency: sum of 0.5 and 1.0 = 1.5 + expect(merged.patterns[0]?.frequency).toBe(1.5); // Severity should be the highest expect(merged.patterns[0]?.severity).toBe('high'); }); diff --git a/tests/integration/core/performance.test.ts b/tests/integration/core/performance.test.ts index 18fc6ed..240b882 100644 --- a/tests/integration/core/performance.test.ts +++ b/tests/integration/core/performance.test.ts @@ -43,7 +43,7 @@ describe('Performance - Incremental Analysis', () => { await populateResultsCache(resultsDir, testData, { date: '2025-01-20', project: 'perf-test', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -65,7 +65,7 @@ describe('Performance - Incremental Analysis', () => { const startTime = Date.now(); const cacheCheck = await getPromptsWithCache( prompts, - 'gemma3:4b', + 'gemma4:e4b', 'full', ); const loadTime = Date.now() - startTime; @@ -87,7 +87,7 @@ describe('Performance - Incremental Analysis', () => { await populateResultsCache(resultsDir, testData, { date: '2025-01-20', project: 'perf-test', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); const populateTime = Date.now() - populateStart; @@ -114,7 +114,7 @@ describe('Performance - Incremental Analysis', () => { const startTime = Date.now(); const cacheCheck = await getPromptsWithCache( prompts, - 'gemma3:4b', + 'gemma4:e4b', 'full', ); const loadTime = Date.now() - startTime; @@ -141,7 +141,7 @@ describe('Performance - 
Incremental Analysis', () => { const cachedData = generatePerformanceTestData(500); await populateResultsCache(resultsDir, cachedData, { date: '2025-01-20', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -173,7 +173,7 @@ describe('Performance - Incremental Analysis', () => { const startTime = Date.now(); const cacheCheck = await getPromptsWithCache( mixedPrompts, - 'gemma3:4b', + 'gemma4:e4b', 'full', ); const loadTime = Date.now() - startTime; @@ -205,7 +205,7 @@ describe('Performance - Incremental Analysis', () => { await populateResultsCache(resultsDir, testData, { date: '2025-01-20', project: 'perf-test', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -225,7 +225,7 @@ describe('Performance - Incremental Analysis', () => { // Load all cached results const cacheCheck = await getPromptsWithCache( prompts, - 'gemma3:4b', + 'gemma4:e4b', 'full', ); @@ -265,7 +265,7 @@ describe('Performance - Incremental Analysis', () => { await populateResultsCache(resultsDir, testData, { date: '2025-01-20', project: 'perf-test', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -292,7 +292,7 @@ describe('Performance - Incremental Analysis', () => { // Perform multiple cache lookups for (let i = 0; i < 10; i++) { - await getPromptsWithCache(prompts, 'gemma3:4b', 'full'); + await getPromptsWithCache(prompts, 'gemma4:e4b', 'full'); } // Force garbage collection again @@ -321,7 +321,7 @@ describe('Performance - Incremental Analysis', () => { await populateResultsCache(resultsDir, testData, { date: '2025-01-20', project: 'perf-test', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -344,7 +344,9 @@ describe('Performance - Incremental Analysis', () => { // Read all batches concurrently const startTime = Date.now(); const results = await Promise.all( - batches.map((batch) => getPromptsWithCache(batch, 'gemma3:4b', 'full')), + batches.map((batch) => + getPromptsWithCache(batch, 'gemma4:e4b', 
'full'), + ), ); const totalTime = Date.now() - startTime; @@ -388,7 +390,7 @@ describe('Performance - Incremental Analysis', () => { date: '2025-01-20', project: 'perf-test', provider: 'test-provider', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }, ), @@ -414,7 +416,7 @@ describe('Performance - Incremental Analysis', () => { const cacheCheck = await getPromptsWithCache( promptsToCheck, - 'gemma3:4b', + 'gemma4:e4b', 'full', ); @@ -431,7 +433,7 @@ describe('Performance - Incremental Analysis', () => { const testData = generatePerformanceTestData(100, date); await populateResultsCache(resultsDir, testData, { date, - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); } @@ -454,7 +456,7 @@ describe('Performance - Incremental Analysis', () => { const startTime = Date.now(); const cacheCheck = await getPromptsWithCache( prompts, - 'gemma3:4b', + 'gemma4:e4b', 'full', ); const loadTime = Date.now() - startTime; @@ -479,7 +481,7 @@ describe('Performance - Incremental Analysis', () => { const testData = generatePerformanceTestData(50, date); await populateResultsCache(resultsDir, testData, { date, - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); } diff --git a/tests/integration/core/watch-mode-incremental.test.ts b/tests/integration/core/watch-mode-incremental.test.ts index 4d590ea..be91eff 100644 --- a/tests/integration/core/watch-mode-incremental.test.ts +++ b/tests/integration/core/watch-mode-incremental.test.ts @@ -110,7 +110,7 @@ describe('Watch Mode - Incremental Analysis', () => { { date: '2025-01-20', provider: 'test-provider', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }, ); @@ -118,7 +118,7 @@ describe('Watch Mode - Incremental Analysis', () => { // Verify result was saved const cached = await getPromptResult('Fix authentication bug', { date: '2025-01-20', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -145,7 +145,7 @@ describe('Watch Mode - Incremental 
Analysis', () => { await populateResultsCache(resultsDir, cachedPrompts, { date: '2025-01-20', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -163,7 +163,7 @@ describe('Watch Mode - Incremental Analysis', () => { // Check if prompt is cached const cached = await getPromptResult(event.prompt.content, { date: event.prompt.date, - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -222,7 +222,7 @@ describe('Watch Mode - Incremental Analysis', () => { [{ content: 'Add tests', result: cachedResult }], { date: '2025-01-20', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }, ); @@ -246,7 +246,7 @@ describe('Watch Mode - Incremental Analysis', () => { const startTime = Date.now(); const cached = await getPromptResult(event.prompt.content, { date: event.prompt.date, - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); const retrievalTime = Date.now() - startTime; @@ -293,7 +293,7 @@ describe('Watch Mode - Incremental Analysis', () => { { date: '2025-01-20', project: 'test-project', - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }, ); @@ -312,7 +312,7 @@ describe('Watch Mode - Incremental Analysis', () => { const cached = await getPromptResult(event.prompt.content, { date: event.prompt.date, project: event.prompt.project, - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -463,7 +463,7 @@ describe('Watch Mode - Incremental Analysis', () => { try { await getPromptResult(event.prompt.content, { date: event.prompt.date, - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); } catch { @@ -515,7 +515,7 @@ describe('Watch Mode - Incremental Analysis', () => { // Check cache const cached = await getPromptResult(event.prompt.content, { date: event.prompt.date, - model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }); @@ -535,7 +535,7 @@ describe('Watch Mode - Incremental Analysis', () => { { date: event.prompt.date, provider: 'test-provider', - 
model: 'gemma3:4b', + model: 'gemma4:e4b', schemaType: 'full', }, ); diff --git a/tests/integration/providers/index.test.ts b/tests/integration/providers/index.test.ts index 2a32eda..1646fae 100644 --- a/tests/integration/providers/index.test.ts +++ b/tests/integration/providers/index.test.ts @@ -181,10 +181,9 @@ describe('Provider Integration - Ollama Provider', () => { it('should analyze prompts successfully', async () => { const config = { - model: 'llama3.2', + // Use standard-strategy model so auto-detection selects 'full' schema + model: 'llama3:70b', host: 'http://localhost:11434', - // Force batch schema to match the mock response format - schemaOverride: 'batch' as const, }; const mockAnalysis = createMockAnalysis(); diff --git a/tests/unit/analytics/clustering.test.ts b/tests/unit/analytics/clustering.test.ts index a782046..f89b6b3 100644 --- a/tests/unit/analytics/clustering.test.ts +++ b/tests/unit/analytics/clustering.test.ts @@ -31,8 +31,10 @@ describe('clustering', () => { for (const center of clusterCenters) { for (let i = 0; i < pointsPerCluster; i++) { - // Add small random variation around center - const point = center.map((val) => val + (Math.random() - 0.5) * 0.1); + // Add small deterministic variation around center + const point = center.map( + (val, d) => val + Math.sin(i * 7 + d * 3) * 0.01, + ); embeddings.push(point); } } @@ -42,10 +44,10 @@ describe('clustering', () => { describe('clusterPrompts', () => { it('should cluster embeddings into groups', () => { - // Create 3 distinct clusters - const cluster1Center = Array(10).fill(0.2); - const cluster2Center = Array(10).fill(0.5); - const cluster3Center = Array(10).fill(0.8); + // Create 3 distinct clusters with different directions (for cosine distance) + const cluster1Center = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]; + const cluster2Center = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]; + const cluster3Center = [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]; const embeddings = createClusterEmbeddings( [cluster1Center, 
cluster2Center, cluster3Center], diff --git a/tests/unit/core/analyzer.test.ts b/tests/unit/core/analyzer.test.ts index 2d1327d..a9e203f 100644 --- a/tests/unit/core/analyzer.test.ts +++ b/tests/unit/core/analyzer.test.ts @@ -285,7 +285,7 @@ describe('mergeBatchResults', () => { expect(ids).toHaveLength(3); // No duplicates }); - it('should average frequencies for duplicate patterns', () => { + it('should sum frequencies for duplicate patterns across batches', () => { const results = [ createResult({ patterns: [createPattern('p1', { frequency: 4 })], @@ -301,7 +301,7 @@ describe('mergeBatchResults', () => { }); const pattern = merged.patterns.find((p) => p.id === 'p1'); - expect(pattern?.frequency).toBe(5); // (4 + 6) / 2 = 5 + expect(pattern?.frequency).toBe(10); // 4 + 6 = 10 (sum across batches) }); it('should take max severity for duplicate patterns', () => {