Configuration File & XDG Paths infrastructure.

New features:
- SentinelConfig frozen dataclass with sensible defaults
- TOML config loading via the Python 3.11 stdlib tomllib
- XDG Base Directory compliance (~/.config/sentinel/config.toml)
- Respects the XDG_CONFIG_HOME environment variable
- Atomic write pattern for safe config file creation
- Secure permissions: 0o700 for the directory, 0o600 for the file
- Runtime validation for Literal field values (energy_threshold, default_format)
- ConfigError exception for invalid TOML or config values
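The XDG path resolution and atomic-write pattern described above can be sketched roughly as follows; `config_path` and `write_config` are illustrative names, not the actual Sentinel API:

```python
import os
import tempfile
from pathlib import Path


def config_path() -> Path:
    # Honor XDG_CONFIG_HOME if set, otherwise fall back to ~/.config
    base = os.environ.get("XDG_CONFIG_HOME") or str(Path.home() / ".config")
    return Path(base) / "sentinel" / "config.toml"


def write_config(content: str) -> None:
    path = config_path()
    path.parent.mkdir(parents=True, exist_ok=True)
    path.parent.chmod(0o700)  # config dir readable only by the owner
    # Atomic write: create a temp file in the same directory, then rename,
    # so readers never observe a half-written config file.
    fd, tmp = tempfile.mkstemp(dir=path.parent)
    try:
        with os.fdopen(fd, "w") as fh:
            fh.write(content)
        os.chmod(tmp, 0o600)  # config file readable only by the owner
        os.replace(tmp, path)  # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp)
        raise
```

Writing to a sibling temp file and renaming means a crash mid-write leaves the old config intact rather than a truncated file.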
…nsitivity

Allow users to set energy_threshold in config.toml to tune how sensitive Sentinel is to energy collisions:
- "low": show collisions with confidence >= 0.3 (maximum sensitivity)
- "medium": show collisions with confidence >= 0.5 (default)
- "high": show collisions with confidence >= 0.7 (fewer, more significant)

Implementation:
- Add ENERGY_THRESHOLD_LOW/MEDIUM/HIGH constants and a mapping dict
- Add get_confidence_threshold() to convert the config string to a float
- Integrate config loading into the check command with ConfigError handling
- Invalid threshold values exit with EXIT_CONFIG_ERROR (code 3)
- Threshold changes take effect immediately on the next run
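A minimal sketch of the constants-plus-mapping approach described above; the exact error message and internal dict name are assumptions:

```python
class ConfigError(ValueError):
    """Raised for invalid TOML or config values."""


ENERGY_THRESHOLD_LOW = 0.3
ENERGY_THRESHOLD_MEDIUM = 0.5
ENERGY_THRESHOLD_HIGH = 0.7

_THRESHOLDS = {
    "low": ENERGY_THRESHOLD_LOW,
    "medium": ENERGY_THRESHOLD_MEDIUM,
    "high": ENERGY_THRESHOLD_HIGH,
}


def get_confidence_threshold(value: str) -> float:
    # Map the config string to a confidence cutoff; unknown values are a
    # config error (the CLI then exits with EXIT_CONFIG_ERROR, code 3).
    try:
        return _THRESHOLDS[value]
    except KeyError:
        raise ConfigError(
            f"invalid energy_threshold {value!r}; "
            f"expected one of {sorted(_THRESHOLDS)}"
        ) from None
```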
LLM & Embedding Configuration allowing users to choose their preferred AI service (OpenAI, Anthropic, Ollama) for Cognee operations.

Core functions added to src/sentinel/core/config.py:
- configure_cognee(): sets Cognee environment variables from config
- validate_api_key(): checks API keys in priority order with helpful errors
- mask_api_key(): masks keys for safe logging (sk-...xxxx format)
- check_embedding_compatibility(): validates embedding/API key combinations

Key features:
- Multi-provider support: OpenAI, Anthropic, Ollama
- API key priority: LLM_API_KEY > OPENAI_API_KEY > ANTHROPIC_API_KEY
- Embedding compatibility validation with guidance for Anthropic-only users
- Telemetry disabled by default (NFR9 compliance)
- Fail-fast validation before LLM operations in the paste and check commands
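The masking helper can be sketched roughly like this, under the assumption that "sk-...xxxx format" means keeping a recognizable prefix plus the last four characters:

```python
def mask_api_key(key: str) -> str:
    # Keep only a recognizable prefix and the last 4 characters,
    # e.g. "sk-proj-1234567890abcd" -> "sk-...abcd"
    if len(key) < 8:
        return "****"  # too short to reveal anything safely
    prefix = "sk-" if key.startswith("sk-") else ""
    return f"{prefix}...{key[-4:]}"
```

Logging only the masked form keeps real credentials out of verbose output and CI logs.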
CLI-based configuration management enabling users to view and modify Sentinel settings without manual TOML editing.

Add a sentinel config command with four modes:
- No args: display all settings with section headers
- Single arg: show a specific setting's value
- Two args: update a key-value pair with validation
- --reset flag: restore the default configuration

Add CONFIG_KEYS metadata mapping all 8 configuration fields to descriptions and validation constraints. Implement get_config_display for formatted output, get_setting_value for single-key retrieval, update_config using regex-based TOML modification to preserve comments, and a reset_config wrapper for defaults restoration. Validation provides helpful error messages listing valid values when users specify invalid keys or values. Boolean conversion handles telemetry_enabled string-to-bool mapping. An empty llm_endpoint displays as "(not set)" for clarity.

Supports a multi-command Ollama setup for fully local operation:

    sentinel config llm_provider ollama
    sentinel config llm_model llama3.1:8b
    sentinel config llm_endpoint http://localhost:11434/v1
    sentinel config embedding_provider ollama
    sentinel config embedding_model nomic-embed-text:latest

Add 30 unit tests covering CONFIG_KEYS structure, display formatting, setting retrieval, value updates, boolean conversion, and exports. Add 11 integration tests validating CLI behavior, error handling, reset functionality, and the Ollama configuration workflow.
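The regex-based, comment-preserving update can be sketched roughly as below; `update_config_text` is a hypothetical name and the real implementation likely validates the key against CONFIG_KEYS first:

```python
import re


def update_config_text(toml_text: str, key: str, value: str) -> str:
    # Rewrite only the line that assigns `key`, so comment lines and the
    # surrounding layout of config.toml are preserved verbatim (a full
    # parse-and-dump round trip would discard comments).
    pattern = re.compile(rf"^{re.escape(key)}\s*=.*$", re.MULTILINE)
    new_text, count = pattern.subn(f'{key} = "{value}"', toml_text, count=1)
    if count == 0:
        raise KeyError(f"unknown config key: {key}")
    return new_text
```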
Help Text & Verbose Mode

Add user-friendly verbose logging and improve CLI discoverability.

Changes:
- Add a global --verbose / -v flag at the main command level
- Introduce a VerboseLogger utility with timestamped operation tracking
- Output verbose logs to stderr to avoid polluting stdout
- Add usage examples to all command help text
- Document exit codes in the check command's help
- Instrument the paste and check commands with operation timing

Example usage:

    sentinel --verbose check   # Show operation timing
    sentinel paste --help      # See usage examples
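A minimal sketch of the stderr-only logger described above; the class interface is assumed, not the actual Sentinel implementation:

```python
import sys
import time


class VerboseLogger:
    """Timestamped operation logging on stderr, keeping stdout clean for piping."""

    def __init__(self, enabled: bool) -> None:
        self.enabled = enabled
        self._start = time.monotonic()

    def log(self, message: str) -> None:
        if not self.enabled:
            return
        # Elapsed seconds since the command started, printed to stderr so
        # `sentinel check | jq` style pipelines see only real output.
        elapsed = time.monotonic() - self._start
        print(f"[{elapsed:7.3f}s] {message}", file=sys.stderr)
```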
Demo Fixtures & CI/CD

Add a continuous integration pipeline and comprehensive demo fixture validation tests to ensure Sentinel is competition-ready.

GitHub Actions CI (.github/workflows/ci.yml):
- Quality job runs on push/PR: lint, format, typecheck, tests
- Live-tests job for manual API testing with OPENAI_API_KEY
- Dependency caching with actions/cache@v4 for faster builds
- Concurrency controls to cancel outdated workflow runs
- Job timeouts (15 min quality, 30 min live-tests)
- Pinned uv version (0.5.14) for build stability
- Configurable test_filter input for manual dispatch

Demo fixture tests (tests/integration/test_demo_fixtures.py):
- TestDemoFixturesExist: validates 3 fixture files are present
- TestTypicalWeekCollision: verifies collision detection
- TestBoringWeekNoCollision: verifies the graceful empty state
- TestEdgeCasesNoCrash: validates Unicode/emoji handling
- TestFixtureContentIntegration: validates content flows through the CLI
- TestDemoStability: 5 consecutive runs via parametrize
- TestCIReadiness: validates pytest markers and no-API-key operation

Dependencies:
- Add ty to dev dependencies for type checking in CI
Configuration and demo polish
fix(tests): add fake LLM_API_KEY fixture for CI compatibility

Tests were failing in CI because validate_api_key() is called before the mocked CogneeEngine methods; the CLI requires an API key even when the engine is mocked.

Changes:
- tests/conftest.py: add an autouse fixture that sets a fake LLM_API_KEY for all non-live tests. Live tests (@pytest.mark.live) are excluded so they can use real credentials.
- .github/workflows/ci.yml: use LLM_API_KEY instead of OPENAI_API_KEY to match the project convention in .env.template.
- tests/integration/test_demo_fixtures.py: update the test to mock both CogneeEngine and validate_api_key, using the LLM_API_KEY convention.
…cation warnings

Add explicit None checks before accessing .nodes/.edges on Graph | None return types from engine.load() in test files:
- tests/unit/test_persistence.py (5 locations)
- tests/unit/test_cli_correct.py (2 locations)
- tests/integration/test_persistence.py (2 locations)

Configure pytest filterwarnings in pyproject.toml to suppress deprecation warnings from external dependencies:
- cognee: table_names() -> list_tables()
- pydantic: json_encoders and Field extra kwargs

All lefthook pre-commit checks now pass with 0 warnings.
…gnee

Sentinel's validate_api_key() and check_embedding_compatibility() included fallbacks to OPENAI_API_KEY and ANTHROPIC_API_KEY, but Cognee only reads LLM_API_KEY. As a result, users setting OPENAI_API_KEY saw Sentinel accept the key while Cognee failed silently.

Changes:
- Simplified validate_api_key() to check only LLM_API_KEY
- Simplified check_embedding_compatibility() to check only LLM_API_KEY
- Added .strip() to reject whitespace-only API keys
- Updated error messages to reference LLM_API_KEY only
- Ollama guidance is always shown when the key is missing

Test updates:
- Removed 2 obsolete fallback tests
- Updated 3 integration tests to use LLM_API_KEY
- Added 3 edge case tests (empty string, whitespace, case sensitivity)
- Updated conftest.py fixture comments
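The simplified validation can be sketched roughly like this; the exact error wording is an assumption:

```python
import os


class ConfigError(Exception):
    pass


def validate_api_key() -> str:
    # Cognee reads only LLM_API_KEY, so no other variables are consulted;
    # .strip() rejects whitespace-only values.
    key = os.environ.get("LLM_API_KEY", "").strip()
    if not key:
        raise ConfigError(
            "LLM_API_KEY is not set. For a fully local setup, "
            "configure the ollama provider instead."
        )
    return key
```

Checking a single variable keeps Sentinel's validation in lockstep with what Cognee actually reads, so a key that passes validation is guaranteed to be visible downstream.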
…mmand

Update the README to include comprehensive instructions for the `sentinel config` command. Adds examples for viewing, modifying, and resetting configuration settings, along with a detailed table of available options.