
Release/260325#49

Merged
codeErrorSleep merged 21 commits into main from release/260325
Mar 30, 2026

Conversation


@codeErrorSleep codeErrorSleep commented Mar 30, 2026

Summary by CodeRabbit

  • New Features

    • Added SQL file import capability with transaction rollback support across all databases
    • Enabled table mutations (UPDATE/DELETE) for ClickHouse via dedicated ALTER TABLE syntax
    • Added database import action to connection context menu with driver capability detection
  • Improvements

    • Enhanced AI system prompt with context history awareness
    • Improved connection error messages with driver-specific hints
    • Upgraded MSSQL connection pooling for better resource management
  • Infrastructure

    • Established comprehensive CI/CD pipeline with automated testing
    • Expanded integration test coverage across all supported databases
    • Added localization for import functionality (English, Japanese, Chinese)

codeErrorSleep added 21 commits March 25, 2026 16:17
…ndent history assembly

Change prompt construction from user-message mode to system-message mode to clearly separate system context from dialog history.
Extract message-assembly logic into independent functions to improve testability and maintainability.
Update tests to reflect the role changes and the new assembly functions.
…correctly after deletion

After a deletion, the selectedRows and selectedCell state were not fully synchronized, which could leave the UI in an inconsistent state. Ensure synchronization by updating the ref and the state together and clearing selectedCellRef.
…t port

- Add a universal connection-error handling function that provides targeted hints based on the error content
- Set the correct SSH tunnel default port for each database driver
- Disable SSL by default for MySQL connections to avoid TLS handshake failures
- Update related tests to verify the new behavior
…full rollback

Added a SQL file import feature supporting the MySQL and PostgreSQL drivers. The import runs inside a transaction; if any statement fails, all changes are rolled back automatically. File-size and statement-count limits were added for safety. The frontend adds an import dialog, menu items, and multi-language support; the backend implements SQL statement parsing and error handling.
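As a rough sketch of the parsing step this commit describes, a script must be split into individual statements before they can be executed one by one inside the transaction. The helper below is illustrative only (it is not the PR's actual parser); it splits on semicolons while skipping semicolons inside single-quoted strings and `--` line comments:

```typescript
// Minimal sketch: split a SQL script into statements on semicolons,
// ignoring semicolons inside single-quoted strings and -- line comments.
// Does not handle block comments or dialect-specific quoting.
export function splitSqlStatements(script: string): string[] {
  const statements: string[] = [];
  let current = "";
  let inString = false;
  let inLineComment = false;
  for (let i = 0; i < script.length; i++) {
    const ch = script[i];
    if (inLineComment) {
      if (ch === "\n") inLineComment = false;
      current += ch;
      continue;
    }
    if (inString) {
      current += ch;
      if (ch === "'") {
        // '' escapes a quote inside a SQL string literal
        if (script[i + 1] === "'") { current += "'"; i++; }
        else inString = false;
      }
      continue;
    }
    if (ch === "'") { inString = true; current += ch; continue; }
    if (ch === "-" && script[i + 1] === "-") { inLineComment = true; current += ch; continue; }
    if (ch === ";") {
      const trimmed = current.trim();
      if (trimmed.length > 0) statements.push(trimmed);
      current = "";
      continue;
    }
    current += ch;
  }
  const tail = current.trim();
  if (tail.length > 0) statements.push(tail);
  return statements;
}
```

With statements split this way, the importer can run each one inside a transaction and roll back on the first failure, as the commit message describes.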
…House tables

- Add support for ClickHouse in the database-creation function
- Enable row-level updates and deletes for ClickHouse MergeTree engine tables
- Add ClickHouse-specific SQL statement building functions
- Extend the table-editing permission check to accommodate ClickHouse engine features
- Update the test cases to cover the new ClickHouse functionality
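ClickHouse has no plain `UPDATE`/`DELETE`; MergeTree tables are mutated with `ALTER TABLE … UPDATE` and `ALTER TABLE … DELETE`. A hedged sketch of the kind of statement builders this commit adds (names and signatures here are illustrative, not the PR's exact API):

```typescript
// Quote a ClickHouse identifier with backticks, escaping embedded backticks.
function quoteIdent(name: string): string {
  return "`" + name.replace(/`/g, "\\`") + "`";
}

// Quote a string value for ClickHouse, escaping backslashes and quotes.
function quoteValue(value: string): string {
  return "'" + value.replace(/\\/g, "\\\\").replace(/'/g, "\\'") + "'";
}

// Build an ALTER TABLE ... UPDATE mutation for a MergeTree table.
export function buildClickHouseUpdate(
  table: string,
  assignments: Record<string, string>,
  where: string,
): string {
  const sets = Object.entries(assignments)
    .map(([col, val]) => `${quoteIdent(col)} = ${quoteValue(val)}`)
    .join(", ");
  return `ALTER TABLE ${quoteIdent(table)} UPDATE ${sets} WHERE ${where}`;
}

// Build an ALTER TABLE ... DELETE mutation for a MergeTree table.
export function buildClickHouseDelete(table: string, where: string): string {
  return `ALTER TABLE ${quoteIdent(table)} DELETE WHERE ${where}`;
}
```

Note that ClickHouse mutations are asynchronous by default, which is why the UI also gates them behind engine detection and may delay a refresh after issuing one.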
… error handling

- Expand the SQL import feature to support MariaDB, TiDB, SQLite, DuckDB, and SQL Server
- Add an explicit "not supported" status and UI hint for the read-only driver (ClickHouse)
- Improve SQL Server import with support for the 'GO' batch separator
- Enhance error reporting, including the index and a statement preview of the failed batch
- Automatically detect and handle transaction-control statements in SQL scripts (BEGIN/COMMIT)
- Update the frontend UI, translations, and documentation to reflect the new support scope
- Add the testcontainers dependency to automatically start MySQL/Postgres integration tests
- Create .github/workflows/ci.yml to automatically run unit, service, and integration tests on PRs
- Rework the test script to support selecting the database type via the IT_DB environment variable, and add a container-cleanup mechanism
- Update the development docs to describe how to run integration tests and the recommended workflow
- Unify test-context management, providing reliability enhancements such as connection retry and port waiting
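The 'GO' separator mentioned above is a client-side batch delimiter used by sqlcmd/SSMS, not T-SQL itself, so an importer must split the script on lines consisting solely of GO and send each batch separately. A minimal sketch of that splitting (illustrative, not the PR's exact implementation):

```typescript
// Split a SQL Server script into batches on lines that contain only GO
// (case-insensitive, optionally followed by a semicolon). GO is a client-side
// separator and must be stripped before each batch is sent to the server.
export function splitGoBatches(script: string): string[] {
  const batches: string[] = [];
  let current: string[] = [];
  for (const line of script.split(/\r?\n/)) {
    if (/^\s*GO\s*(?:;\s*)?$/i.test(line)) {
      const batch = current.join("\n").trim();
      if (batch.length > 0) batches.push(batch);
      current = [];
    } else {
      current.push(line);
    }
  }
  const tail = current.join("\n").trim();
  if (tail.length > 0) batches.push(tail);
  return batches;
}
```

Splitting per line (rather than scanning for the token anywhere) avoids false positives on identifiers or string contents that happen to contain "go".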
…d SQLite

Add testcontainer contexts for MariaDB and SQL Server, and extend the SQLite integration tests.
Update the test scripts and documentation to support selecting these databases via the IT_DB environment variable.
For MySQL, PostgreSQL, and the new databases, add regression tests for paging, sorting, filtering, table structure, metadata (indexes, foreign keys), and boolean/JSON type mapping.
Add a complete integration-test framework for ClickHouse and DuckDB, including:
- clickhouse and duckdb options in the test script
- Containerized test-environment management for ClickHouse
- Local-file test-environment management for DuckDB
- Expanded test coverage for both databases, including table structure, data paging, sorting, and filtering
- Updated development docs describing the new test options and environment variables
…DATE/DELETE statements

Add an is_affected_rows_statement function to identify data-modification statements.
For INSERT/UPDATE/DELETE/REPLACE statements, use the execute method to obtain rows_affected.
Fix an issue where these statements previously returned empty result sets.
Add integration tests for DuckDB, PostgreSQL, SQLite, and MSSQL covering the following key functionality:
- Transaction commit and rollback
- UPDATE/DELETE returning the number of affected rows
- Round-trip tests for large text and BLOB fields
- SQL error handling
- Concurrent connection queries
- Listing and querying views
- Connection-failure scenarios (invalid path, wrong password, timeout)
- Batch insertion and batch execution
- Lock-conflict and busy-error handling
- Preparing, executing, and releasing prepared statements

These tests ensure the consistency and reliability of each database driver's behavior in real scenarios.
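The classification the commit above describes can be sketched as follows: statements whose leading keyword is INSERT/UPDATE/DELETE/REPLACE should go through an execute-style API that reports rows_affected, rather than a query API expecting a result set. The PR implements this in Rust; this TypeScript sketch is illustrative only:

```typescript
// Return true for statements that modify data and should report an
// affected-row count. Leading whitespace and comments are stripped before
// the first keyword is inspected.
export function isAffectedRowsStatement(sql: string): boolean {
  const stripped = sql.replace(/^(\s*(--[^\n]*\n|\/\*[\s\S]*?\*\/))*\s*/, "");
  const keyword = stripped.split(/\s+/, 1)[0]?.toUpperCase() ?? "";
  return ["INSERT", "UPDATE", "DELETE", "REPLACE"].includes(keyword);
}
```

Keyword dispatch like this is a heuristic; CTE-prefixed statements (`WITH … INSERT`) would need extra handling, which is why stripping comments first matters for accuracy.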
…ance and refactor tests

- Add bb8 connection-pool support for the MSSQL driver to improve resource utilization and performance in multi-connection scenarios
- Rework the MSSQL integration tests to use shared containers and connection pools, reducing test startup overhead
- Fix SQL syntax errors in the DuckDB transaction test and combine multiple statements into a single query
- Add MySQL command integration tests covering core functions such as connection testing, query execution, and database listing
- Improve error handling and the connection-timeout mechanism to enhance driver stability
Add a series of directly callable functions for the library module that do not rely on Tauri state management, to ease integration with other components. Major changes:

- In commands/mod.rs, add the ensure_connection_with_db_from_app_state and execute_with_retry_from_app_state functions, which obtain connections directly from AppState and run the retry logic.
- Add corresponding _direct-suffixed functions to the storage, metadata, query, connection, ai, and transfer modules, providing a direct call interface that bypasses the Tauri command layer.
- Refactor LocalDb::init in db/local.rs, extracting an init_with_app_dir method to improve testability.
- Update scripts/test-integration.sh to run the new MySQL and PostgreSQL command integration tests.
- Add the mysql_command_integration.rs and postgres_command_integration.rs test files to verify the correctness of the direct command functions.
- Fix a test assertion in postgres_integration.rs, changing the primary-key check to a column-existence check, and tighten the affected-rows verification logic.

These changes allow core functions to be reused outside the Tauri environment while maintaining backward compatibility.
…iguration

Add the auxiliary script db-onboard.sh for database integration testing; it automatically checks scaffolding, runs syntax and compilation checks, and synchronizes the test matrix.
Also update .gitignore to ignore files under the .tray/skills/ directory.

coderabbitai bot commented Mar 30, 2026

📝 Walkthrough

This PR introduces SQL import functionality with statement-level execution tracking, refactors backend commands with "direct" APIs bypassing Tauri, adds testcontainers-based integration testing infrastructure, enhances ClickHouse support (mutation semantics, identifier quoting, CREATE DATABASE), implements connection pooling for MSSQL via bb8, and bumps the version to 0.3.0 with corresponding documentation and UI updates.

Changes

Cohort / File(s) Summary
Infrastructure & Configuration
.github/workflows/ci.yml, package.json, src-tauri/tauri.conf.json, CHANGELOG.md, .gitignore
Added GitHub Actions CI workflow for running Bun/Rust tests with Docker support; bumped version to 0.3.0; cleared changelog; added gitignore paths.
Documentation
README.md, README_CN.md, docs/zh/Development/DEVELOPMENT.md
Added SQL import capability to feature lists; added Chinese documentation for integration test automation with environment variables and troubleshooting guidance.
Test Infrastructure & Scripts
scripts/db-onboard.sh, scripts/test-integration.sh, src-tauri/Cargo.toml
New DB onboarding validation script; updated integration test runner to use IT_DB parameter and IT_REUSE_LOCAL_DB for container reuse; added bb8 (connection pooling) and testcontainers dependencies.
Shared Test Context Modules
src-tauri/tests/common/*.rs
New testcontainers-based context helpers for MySQL, PostgreSQL, MariaDB, MSSQL, ClickHouse, and DuckDB with optional local DB reuse; shared utilities for Docker readiness, port waiting, and retry logic.
Backend Commands: Direct APIs
src-tauri/src/commands/{ai,connection,metadata,query,storage}.rs, src-tauri/src/commands/mod.rs
Added "direct" (non-Tauri) variants of provider, chat, conversation, connection, database, table metadata, SQL execution, and saved query functions accepting &AppState instead of State; introduced get_db_from_app_state and retry helpers.
Backend SQL Import
src-tauri/src/commands/transfer.rs
Implemented import_sql_file command with multi-statement parsing (including MSSQL GO batch splitting), transaction detection, bounded file size/statement count validation, and structured error reporting with rollback support.
Backend Connection & Drivers
src-tauri/src/connection_input/mod.rs, src-tauri/src/db/local.rs, src-tauri/src/ssh.rs
Updated connection form normalization to preserve empty secret fields; refactored LocalDb initialization with app-dir parameter; added driver-based SSH tunnel port defaults.
Backend Database Drivers
src-tauri/src/db/drivers/{mod,mssql,mysql,postgres,sqlite,clickhouse,duckdb}.rs
Refactored MSSQL to use bb8 connection pooling; added conn_failed_error helper for standardized connection error messages; added ClickHouse identifier quoting and CREATE DATABASE support; updated MySQL with ssl-mode=DISABLED and prepared-statement fallback; enhanced error mapping across drivers.
Backend Models & Exports
src-tauri/src/models/mod.rs, src-tauri/src/lib.rs
Added custom Debug impl for ConnectionForm to redact sensitive fields; registered import_sql_file command in Tauri handler.
Integration Tests: Setup & Helpers
src-tauri/tests/{common/shared,duckdb_context,mariadb_context,mssql_context,mysql_context,postgres_context}.rs
Established centralized test utilities and database-specific container/context builders with environment override support.
Integration Tests: Core Tests
src-tauri/tests/{sqlite,duckdb,mysql,postgres,mssql,clickhouse}_integration.rs
Refactored tests to use testcontainers and shared context modules; expanded coverage to include pagination, filtering, sorting, metadata, transactions, concurrent queries, views, and error handling; updated table/view naming conventions.
Integration Tests: Command Layer
src-tauri/tests/{mysql,postgres}_command_integration.rs
Added new command-level tests for MySQL and Postgres validating connection, metadata, query execution, logging, and pagination via direct helper functions.
Integration Tests: Stateful Commands
src-tauri/tests/{mysql,postgres,mssql}_stateful_command_integration.rs
New comprehensive test suites exercising connection CRUD, metadata introspection, query execution, SQL import/export, and AI provider/conversation workflows against real containerized databases.
Frontend: DataGrid & ClickHouse Support
src/components/business/DataGrid/{TableView.tsx,tableView/utils.ts,tableView/utils.unit.test.ts}
Added ClickHouse engine detection and mutability gating; implemented canMutateClickHouseTable, isClickHouseMergeTreeEngine, buildUpdateStatement, and buildDeleteStatement for ClickHouse ALTER TABLE syntax; added refresh delay for ClickHouse mutations.
Frontend: SQL Import UI
src/components/business/Sidebar/{ConnectionList.tsx,connection-list/helpers.tsx}
Added SQL import workflow in connection context menu with driver capability checking, file picker, import dialog, and api.transfer.importSqlFile invocation; extended createDatabaseSupportedDrivers to include clickhouse.
Frontend: Component Formatting
src/App.tsx, src/components/business/{Editor/SqlEditor.tsx,Editor/codemirrorTheme.ts,Metadata/TableMetadataView.tsx}, src/components/{settings/\*,business/SqlLogs/\*,business/Sidebar/\*,updater-checker.tsx}
Applied multi-line JSX/prop formatting, function signature reformatting, and assertion restructuring for consistency; no semantic logic changes.
Frontend: Services & API
src/services/{api.ts,mocks.ts,updater.ts}, src/lib/{connection-form/\*,i18n/locales/\*,queryExecutionState.unit.test.ts}, src/services/mocks.service.test.ts
Added ImportDriverCapability type, normalizeImportDriver, getImportDriverCapability, and ImportSqlResult interface; added mock SQL import handler; extended translations (EN/JP/ZH) with import dialog and status messages; updated validation and formatting throughout.
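The capability gating mentioned in the Services & API row above can be sketched as a simple driver-to-capability map. Function names, driver ids, and capability values below are assumptions for illustration, not the PR's actual `getImportDriverCapability` API:

```typescript
// Illustrative capability values: "supported" drivers get the full import
// flow, "readOnly" drivers show an explanatory hint, anything unknown is
// treated as unsupported.
export type ImportCapability = "supported" | "readOnly" | "unsupported";

const IMPORT_CAPABILITIES: Record<string, ImportCapability> = {
  mysql: "supported",
  mariadb: "supported",
  tidb: "supported",
  postgres: "supported",
  sqlite: "supported",
  duckdb: "supported",
  mssql: "supported",
  clickhouse: "readOnly", // import disabled, surfaced with a hint in the UI
};

// Normalize the driver id before lookup so casing and whitespace do not
// affect the result; default to "unsupported" for unknown drivers.
export function importCapabilityFor(driver: string): ImportCapability {
  return IMPORT_CAPABILITIES[driver.trim().toLowerCase()] ?? "unsupported";
}
```

Defaulting unknown drivers to "unsupported" is the safe choice here: a newly added driver stays hidden from the import menu until someone explicitly declares its capability.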

Sequence Diagram(s)

sequenceDiagram
    participant User as User
    participant UI as Frontend UI<br/>(ConnectionList)
    participant API as API Layer<br/>(services/api.ts)
    participant Tauri as Tauri IPC
    participant Transfer as Transfer Module<br/>(commands/transfer.rs)
    participant Parser as SQL Parser
    participant DB as Database Driver
    
    User->>UI: Select "Import SQL" on database
    UI->>UI: Open file picker
    User->>UI: Select .sql file
    UI->>API: importSqlFile(id, database, filePath)
    API->>Tauri: invoke("import_sql_file", params)
    Tauri->>Transfer: import_sql_file(...)
    Transfer->>Transfer: Validate file (size, extension, existence)
    Transfer->>Transfer: Read & BOM-strip file content
    Transfer->>Parser: Parse SQL into statements<br/>(handle GO, transactions)
    Parser-->>Transfer: Vec<(unit, preview)>
    Transfer->>Transfer: Detect transaction management
    Transfer->>DB: BEGIN (if needed)
    loop Each parsed statement
        Transfer->>DB: Execute statement
        alt Success
            Transfer->>Transfer: Log success, increment count
        else Error
            Transfer->>Transfer: Store error, position, batch
            Transfer->>DB: ROLLBACK (if using outer TX)
            Transfer-->>Transfer: Break loop
        end
    end
    Transfer-->>Tauri: ImportSqlResult<br/>{totalStatements, successStatements,<br/>failedAt?, error?, rolledBack}
    Tauri-->>API: Result
    API-->>UI: ImportSqlResult
    UI->>User: Show import summary<br/>(success count, rollback status)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Poem

🐰 SQL imports hopping in with testcontainers near,
ClickHouse engines purring through the CI frontier,
Pooled connections swimming, direct APIs standing tall,
A patchwork quilt of features, we've woven them all!
Thump thump goes the version, now 0.3.0 it reads,
Integration tests blooming from infrastructure seeds. 🌱


@qodo-code-review

Review Summary by Qodo

Release 260325 - SQL import feature, ClickHouse mutations, and comprehensive test expansion

✨ Enhancement 🧪 Tests


Walkthroughs

Description
• **SQL import functionality**: Added comprehensive SQL import feature with support for multiple
  database drivers (MSSQL, PostgreSQL, MySQL, SQLite, DuckDB, ClickHouse), including SQL parsing for
  comments, string literals, and driver-specific syntax (MSSQL GO batches, PostgreSQL dollar quotes)
• **ClickHouse mutation support**: Implemented statement builders for UPDATE and DELETE operations
  on ClickHouse tables with MergeTree engine detection and validation
• **Internationalization**: Added SQL import i18n strings for English, Chinese, and Japanese locales
  with import-related toast messages
• **API enhancements**: Added ImportDriverCapability type, ImportSqlResult interface, and
  importSqlFile API method for SQL import operations
• **Direct method variants**: Implemented _direct variants for storage, AI, and export commands to
  support non-Tauri contexts
• **Database initialization refactoring**: Extracted database initialization logic into
  init_with_app_dir method with secure random key generation for tests
• **Comprehensive integration test expansion**: Added extensive test suites for SQLite, DuckDB,
  PostgreSQL, MySQL, MSSQL, and ClickHouse with pagination, sorting, filtering, transactions,
  concurrent connections, and error handling coverage
• **Test infrastructure improvements**: Created shared context modules (postgres_context,
  mysql_context, mssql_context, clickhouse_context, duckdb_context) for consistent test
  database setup and management
• **Code formatting and consistency**: Reformatted multiple files for improved readability including
  multi-line function signatures, string quotes standardization, and consistent indentation
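The PostgreSQL dollar quoting mentioned in the first bullet (`$$…$$` or `$tag$…$tag$`) lets a function body contain semicolons and quotes, so a statement splitter must treat the entire quoted span as opaque. A minimal illustrative helper (not the PR's actual parser) that finds the end of such a span:

```typescript
// Given sql with a "$" at `start`, return the index just past the matching
// closing dollar-quote delimiter, or -1 if the span never opens or closes.
// Handles both anonymous ($$...$$) and tagged ($tag$...$tag$) forms.
export function skipDollarQuote(sql: string, start: number): number {
  const m = /^\$\w*\$/.exec(sql.slice(start));
  if (!m) return -1;
  const delim = m[0];
  const end = sql.indexOf(delim, start + delim.length);
  return end === -1 ? -1 : end + delim.length;
}
```

A splitter would call this whenever it encounters `$` outside a string literal, jumping straight past the whole dollar-quoted body instead of scanning it character by character.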
Diagram
flowchart LR
  A["SQL Import Feature"] --> B["Transfer Commands"]
  A --> C["API Types & Methods"]
  A --> D["Mock Handler"]
  E["ClickHouse Mutations"] --> F["Statement Builders"]
  E --> G["Mutation Guards"]
  H["Internationalization"] --> I["EN/ZH/JA Locales"]
  J["Direct Methods"] --> K["Storage Commands"]
  J --> L["AI Commands"]
  J --> M["Export Methods"]
  N["Database Refactoring"] --> O["Init with App Dir"]
  N --> P["Secure Random Keys"]
  Q["Test Infrastructure"] --> R["Context Modules"]
  Q --> S["Integration Tests"]
  R --> T["Postgres/MySQL/MSSQL/ClickHouse/DuckDB"]
  S --> U["Pagination/Sorting/Filtering/Transactions"]


File Changes

1. src/components/business/DataGrid/tableView/utils.unit.test.ts 🧪 Tests +69/-13

Add ClickHouse mutation support tests and formatting

• Added imports for new ClickHouse mutation functions (buildDeleteStatement,
 buildUpdateStatement, canMutateClickHouseTable, isClickHouseMergeTreeEngine)
• Reformatted existing test function calls to use multi-line formatting for better readability
• Added comprehensive test suite for ClickHouse mutation guards and statement builders

src/components/business/DataGrid/tableView/utils.unit.test.ts


2. src/lib/queryExecutionState.unit.test.ts Formatting +55/-17

Reformat test function calls for consistency

• Reformatted function calls to use multi-line parameter formatting for improved readability
• Applied consistent indentation and line breaking across multiple test cases

src/lib/queryExecutionState.unit.test.ts


3. src/lib/i18n/locales/en.ts ✨ Enhancement +32/-11

Add SQL import i18n strings and reformat

• Added new i18n keys for SQL import functionality (importSql, importSqlReadOnly, importDialog
 section)
• Added import-related toast messages (importDesktopOnly, importUnsupportedDriver,
 importReadOnlyDriver, selectImportSqlFile, importSuccess, importFailed)
• Reformatted long string lines to use multi-line formatting for better maintainability
• Fixed TypeScript type definition indentation for DeepStringify utility type

src/lib/i18n/locales/en.ts


4. src/services/updater.ts Formatting +24/-20

Standardize string quotes and reformat code

• Changed string literals from single quotes to double quotes for consistency
• Reformatted function signatures and return types to use multi-line formatting
• Improved code formatting for better readability in mock update scenarios

src/services/updater.ts


5. src/lib/i18n/locales/zh.ts ✨ Enhancement +18/-3

Add Chinese SQL import translations

• Added Chinese translations for SQL import feature (importSql, importSqlReadOnly,
 importDialog)
• Added import-related toast messages in Chinese
• Reformatted long strings to use multi-line formatting

src/lib/i18n/locales/zh.ts


6. src/lib/i18n/locales/ja.ts ✨ Enhancement +19/-4

Add Japanese SQL import translations

• Added Japanese translations for SQL import feature (importSql, importSqlReadOnly,
 importDialog)
• Added import-related toast messages in Japanese
• Reformatted long strings to use multi-line formatting

src/lib/i18n/locales/ja.ts


7. src/services/api.ts ✨ Enhancement +55/-1

Add SQL import API types and methods

• Added new ImportDriverCapability type and related utility functions (normalizeImportDriver,
 getImportDriverCapability)
• Added ImportSqlResult interface for SQL import operation results
• Added importSqlFile API method to query execution interface
• Reformatted function signatures for better readability

src/services/api.ts


8. src/lib/connection-form/validate.ts Formatting +22/-6

Reformat validation logic for readability

• Reformatted multi-line conditional expressions to improve readability
• Applied consistent line breaking for complex validation logic

src/lib/connection-form/validate.ts


9. src/services/mocks.ts ✨ Enhancement +23/-2

Add SQL import mock handler

• Added ImportSqlResult to imports
• Added mockImportSqlFile function to simulate SQL import operations
• Added case handler for import_sql_file command in invokeMock function
• Reformatted long string lines to use multi-line formatting

src/services/mocks.ts


10. src/components/business/DataGrid/tableView/utils.ts ✨ Enhancement +42/-1

Add ClickHouse mutation statement builders

• Added isClickHouseMergeTreeEngine function to detect ClickHouse MergeTree engine variants
• Added canMutateClickHouseTable function to validate ClickHouse table mutation capability
• Added buildUpdateStatement function to generate driver-specific UPDATE statements
• Added buildDeleteStatement function to generate driver-specific DELETE statements
• Reformatted function signature for collectSearchMatches to use multi-line formatting

src/components/business/DataGrid/tableView/utils.ts
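The MergeTree detection this entry describes can be sketched simply: ClickHouse reports an engine string such as "MergeTree", "ReplacingMergeTree", or "ReplicatedSummingMergeTree", and all MergeTree-family engines share that suffix. This helper is an illustrative stand-in for `isClickHouseMergeTreeEngine`, not its actual implementation:

```typescript
// Return true when the reported engine belongs to the MergeTree family,
// which is the family that supports ALTER TABLE UPDATE/DELETE mutations.
export function isMergeTreeFamily(engine: string | undefined): boolean {
  if (!engine) return false;
  return engine.trim().toLowerCase().endsWith("mergetree");
}
```

Gating mutations on this check keeps UPDATE/DELETE disabled for engines like Memory or Log, where ClickHouse mutations are not available.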


11. src/services/mocks.service.test.ts Formatting +7/-8

Reformat test type parameters

• Reformatted generic type parameter to use multi-line formatting
• Applied consistent indentation for improved readability

src/services/mocks.service.test.ts


12. src/lib/connection-form/validate.unit.test.ts Formatting +3/-1

Reformat test assertion

• Reformatted assertion to use multi-line formatting for better readability

src/lib/connection-form/validate.unit.test.ts


13. src/components/business/Editor/codemirrorTheme.ts Formatting +1/-1

Fix CSS property indentation

• Fixed indentation for CSS selector property definition

src/components/business/Editor/codemirrorTheme.ts


14. src/lib/connection-form/rules.ts Formatting +3/-1

Reformat function signature

• Reformatted function signature to use multi-line formatting

src/lib/connection-form/rules.ts


15. src-tauri/tests/mssql_integration.rs 🧪 Tests +1065/-107

Expand MSSQL integration tests with context module

• Refactored to use new mssql_context module for connection setup and management
• Added comprehensive integration tests for MSSQL driver functionality (pagination, sorting,
 filtering, metadata, transactions, concurrent connections, views, prepared statements)
• Added helper function scalar_to_i64 for type conversion
• Replaced environment variable-based configuration with container-based test context

src-tauri/tests/mssql_integration.rs


16. src-tauri/tests/mysql_integration.rs 🧪 Tests +922/-92

Expand MySQL integration tests with context module

• Refactored to use new mysql_context module for connection setup and management
• Added comprehensive integration tests for MySQL driver functionality (pagination, sorting,
 filtering, metadata, transactions, concurrent connections, views, prepared statements, large fields)
• Replaced environment variable-based configuration with container-based test context
• Simplified connection initialization across all test functions

src-tauri/tests/mysql_integration.rs


17. src-tauri/src/db/local.rs ✨ Enhancement +12/-6

Refactor database initialization and improve test security

• Extracted database initialization logic into new init_with_app_dir method for better testability
• Updated init method to delegate to init_with_app_dir
• Modified test database creation to use cryptographically secure random key generation instead of
 hardcoded values
• Added rand::RngCore import for secure random number generation

src-tauri/src/db/local.rs
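The "cryptographically secure random key generation" mentioned here is implemented in Rust with rand::RngCore; for illustration only, the equivalent idea in this project's frontend language looks like the following (hypothetical helper, not part of the PR):

```typescript
import { randomBytes } from "node:crypto";

// Generate a hex-encoded key from a CSPRNG, replacing any hardcoded test
// key. 32 random bytes yield a 64-character hex string.
export function generateTestKeyHex(bytes: number = 32): string {
  return randomBytes(bytes).toString("hex");
}
```

The point of the change is the same in either language: test fixtures should never embed a fixed secret, even a throwaway one, because fixed values tend to leak into other contexts.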


18. src-tauri/src/commands/transfer.rs ✨ Enhancement +1077/-2

SQL import functionality and export direct methods

• Added SQL import functionality with import_sql_file and import_sql_file_direct commands
 supporting multiple database drivers
• Implemented comprehensive SQL parsing for handling comments, string literals, and driver-specific
 syntax (MSSQL GO batches, PostgreSQL dollar quotes)
• Added export_table_data_direct and export_query_result_direct functions for non-Tauri contexts
• Introduced validation for import files (size limits, extension checks) and transaction management
 with rollback support
• Added extensive test coverage for SQL parsing, statement detection, and import plan preparation

src-tauri/src/commands/transfer.rs
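The transaction-management detection this entry mentions can be sketched as a scan of the parsed statements: if the script already issues its own BEGIN/COMMIT, the importer should defer to the script rather than wrap it in a second, nested transaction. An illustrative helper (not the PR's actual Rust code):

```typescript
// Return true when the script contains its own transaction-control
// statements, in which case the importer should not open an outer
// transaction around the whole file.
export function scriptManagesTransactions(statements: string[]): boolean {
  return statements.some((s) =>
    /^(BEGIN|START\s+TRANSACTION|COMMIT|ROLLBACK)\b/i.test(s.trim()),
  );
}
```

When this returns false, the importer opens one transaction around the whole file and can roll everything back on the first failure; when it returns true, rollback behavior is whatever the script itself specifies.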


19. src-tauri/tests/sqlite_integration.rs 🧪 Tests +858/-13

Comprehensive SQLite integration test suite expansion

• Added helper function json_to_i64 for flexible JSON value to integer conversion
• Renamed test tables from sqlite_type_probe to dbpaw_sqlite_type_probe for consistency
• Added 15+ new integration tests covering pagination, sorting, filtering, transactions, concurrent
 connections, views, and error handling
• Tests validate batch operations, prepared statements, large field handling, and lock conflict
 scenarios

src-tauri/tests/sqlite_integration.rs


20. src-tauri/tests/duckdb_integration.rs 🧪 Tests +854/-36

DuckDB integration tests refactoring and expansion

• Refactored to use shared duckdb_context module for test setup and database path management
• Renamed test tables from duck_type_probe to dbpaw_duckdb_type_probe for naming consistency
• Added 14+ new integration tests covering pagination, sorting, filtering, transactions, concurrent
 connections, views, and error handling
• Tests validate batch operations, large field handling, connection failures, and lock scenarios

src-tauri/tests/duckdb_integration.rs


21. src-tauri/src/commands/ai.rs ✨ Enhancement +300/-4

AI command direct methods and helper refactoring

• Added assemble_final_messages helper function to combine prompt bundle and conversation history
• Added get_db_from_app_state function for accessing local database from AppState reference
• Implemented _direct variants of AI provider and conversation functions for non-Tauri contexts
• Added run_chat_direct function supporting AI chat operations without Tauri event emission
• Added test for assemble_final_messages to verify correct message ordering

src-tauri/src/commands/ai.rs


22. src-tauri/src/commands/storage.rs ✨ Enhancement +53/-0

Storage command direct methods for non-Tauri usage

• Added _direct variants for all storage operations: save_query_direct,
 update_saved_query_direct, delete_saved_query_direct, get_saved_queries_direct
• These functions accept &AppState instead of State<'_, AppState> for use in non-Tauri contexts

src-tauri/src/commands/storage.rs


23. src-tauri/tests/common/duckdb_context.rs 🧪 Tests +40/-0

Shared DuckDB test context module

• New shared module providing duckdb_form_from_test_context function for consistent DuckDB test
 database setup
• Centralizes database path resolution and connection form creation for integration tests

src-tauri/tests/common/duckdb_context.rs


24. src-tauri/tests/postgres_stateful_command_integration.rs 🧪 Tests +852/-0

PostgreSQL stateful command integration tests

• New comprehensive integration test suite for PostgreSQL with 852 lines covering stateful command
 operations
• Tests database creation, listing, connection CRUD, metadata retrieval, query execution, and AI
 provider management
• Includes helper functions for test setup, database fixture preparation, and cleanup
• All tests marked with #[ignore] attribute for manual execution

src-tauri/tests/postgres_stateful_command_integration.rs


25. src-tauri/tests/mysql_stateful_command_integration.rs 🧪 Tests +846/-0

MySQL stateful command integration tests

• New comprehensive integration test suite for MySQL with 846 lines covering stateful command
 operations
• Mirrors PostgreSQL test structure with database creation, metadata, query execution, and AI
 provider tests
• Includes MySQL-specific helper functions and connection management
• All tests marked with #[ignore] attribute for manual execution

src-tauri/tests/mysql_stateful_command_integration.rs


26. src-tauri/tests/postgres_integration.rs 🧪 Tests +845/-32

PostgreSQL integration tests refactored and expanded

• Refactored to use new postgres_context module for connection setup instead of environment
 variables
• Added 13 new integration tests covering pagination, sorting, filtering, metadata, transactions,
 and concurrent operations
• Tests cover table structure, schema overview, indexes, foreign keys, boolean/JSON types, and error
 handling
• Expanded from basic flow test to comprehensive driver functionality validation

src-tauri/tests/postgres_integration.rs


27. src-tauri/tests/clickhouse_integration.rs 🧪 Tests +406/-118

ClickHouse integration tests refactored and expanded

• Refactored to use new clickhouse_context module for connection setup instead of environment
 variables
• Simplified main test flow and added 5 new integration tests for pagination, sorting, metadata, and
 type mapping
• Tests cover table structure, schema overview, engine metadata, and boolean/JSON type handling
• Reduced from 2 tests to 7 tests with improved coverage

src-tauri/tests/clickhouse_integration.rs


28. src-tauri/tests/common/duckdb_context.rs 🧪 Tests +40/-0

DuckDB test context helper module

• New context helper module for DuckDB integration tests
• Provides should_reuse_local_db() function to check if local database should be reused
• Implements duckdb_form_from_test_context() to create connection forms from environment or
 defaults
• Supports environment variables DUCKDB_IT_DB_PATH, DUCKDB_DB_PATH, and IT_REUSE_LOCAL_DB

src-tauri/tests/common/duckdb_context.rs
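The IT_REUSE_LOCAL_DB flag described above is a boolean environment switch; a sketch of how such a check might read the flag (illustrative TypeScript, the real helper is Rust's should_reuse_local_db):

```typescript
// Interpret a boolean-ish environment flag: "1", "true", and "yes"
// (case-insensitive) opt in to reusing a local database; anything else,
// including an unset variable, falls back to containerized setup.
export function shouldReuseLocalDb(
  env: Record<string, string | undefined>,
): boolean {
  const v = (env["IT_REUSE_LOCAL_DB"] ?? "").trim().toLowerCase();
  return v === "1" || v === "true" || v === "yes";
}
```

Passing the environment in as a parameter rather than reading process.env directly keeps the helper trivially unit-testable.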


29. src-tauri/src/db/drivers/clickhouse.rs Formatting +6/-3

ClickHouse driver formatting improvements

• Reformatted required_i64_from_json_row() function for improved readability
• Split error message construction across multiple lines for better code formatting
• No functional changes to the error handling logic

src-tauri/src/db/drivers/clickhouse.rs


30. .github/workflows/ci.yml Additional files +66/-0

...

.github/workflows/ci.yml


31. CHANGELOG.md Additional files +0/-86

...

CHANGELOG.md


32. README.md Additional files +1/-0

...

README.md


33. README_CN.md Additional files +1/-0

...

README_CN.md


34. docs/zh/Development/DEVELOPMENT.md Additional files +97/-0

...

docs/zh/Development/DEVELOPMENT.md


35. package.json Additional files +1/-1

...

package.json


36. scripts/db-onboard.sh Additional files +97/-0

...

scripts/db-onboard.sh


37. scripts/test-integration.sh Additional files +68/-15

...

scripts/test-integration.sh


38. src-tauri/Cargo.toml Additional files +4/-0

...

src-tauri/Cargo.toml


39. src-tauri/src/ai/prompt.rs Additional files +20/-10

...

src-tauri/src/ai/prompt.rs


40. src-tauri/src/commands/connection.rs Additional files +268/-9

...

src-tauri/src/commands/connection.rs


41. src-tauri/src/commands/metadata.rs Additional files +71/-0

...

src-tauri/src/commands/metadata.rs


42. src-tauri/src/commands/mod.rs Additional files +58/-0

...

src-tauri/src/commands/mod.rs


43. src-tauri/src/commands/query.rs Additional files +175/-3

...

src-tauri/src/commands/query.rs


44. src-tauri/src/connection_input/mod.rs Additional files +25/-3

...

src-tauri/src/connection_input/mod.rs


45. src-tauri/src/db/drivers/duckdb.rs Additional files +8/-2

...

src-tauri/src/db/drivers/duckdb.rs


46. src-tauri/src/db/drivers/mod.rs Additional files +88/-1

...

src-tauri/src/db/drivers/mod.rs


47. src-tauri/src/db/drivers/mssql.rs Additional files +98/-37

...

src-tauri/src/db/drivers/mssql.rs


48. src-tauri/src/db/drivers/mysql.rs Additional files +138/-24

...

src-tauri/src/db/drivers/mysql.rs


49. src-tauri/src/db/drivers/postgres.rs Additional files +39/-18

...

src-tauri/src/db/drivers/postgres.rs


50. src-tauri/src/db/drivers/sqlite.rs Additional files +13/-18

...

src-tauri/src/db/drivers/sqlite.rs


51. src-tauri/src/lib.rs Additional files +1/-0

...

src-tauri/src/lib.rs


52. src-tauri/src/models/mod.rs Additional files +58/-1

...

src-tauri/src/models/mod.rs


53. src-tauri/src/ssh.rs Additional files +62/-5

...

src-tauri/src/ssh.rs


54. src-tauri/tauri.conf.json Additional files +1/-1

...

src-tauri/tauri.conf.json


55. src-tauri/tests/common/clickhouse_context.rs Additional files +80/-0

...

src-tauri/tests/common/clickhouse_context.rs


56. src-tauri/tests/common/mariadb_context.rs Additional files +79/-0

...

src-tauri/tests/common/mariadb_context.rs


57. src-tauri/tests/common/mssql_context.rs Additional files +118/-0

...

src-tauri/tests/common/mssql_context.rs


58. src-tauri/tests/common/mysql_context.rs Additional files +80/-0

...

src-tauri/tests/common/mysql_context.rs


59. src-tauri/tests/common/postgres_context.rs Additional files +79/-0

...

src-tauri/tests/common/postgres_context.rs


60. src-tauri/tests/common/shared.rs Additional files +112/-0

...

src-tauri/tests/common/shared.rs


61. src-tauri/tests/mariadb_integration.rs Additional files +475/-107

...

src-tauri/tests/mariadb_integration.rs


62. src-tauri/tests/mysql_command_integration.rs Additional files +326/-0

...

src-tauri/tests/mysql_command_integration.rs


63. src-tauri/tests/postgres_command_integration.rs Additional files +337/-0

...

src-tauri/tests/postgres_command_integration.rs


64. src/App.tsx Additional files +134/-127

...

src/App.tsx


65. src/components/business/DataGrid/TableView.tsx Additional files +142/-53

...

src/components/business/DataGrid/TableView.tsx


66. src/components/business/Editor/SqlEditor.tsx Additional files +18/-13

...

src/components/business/Editor/SqlEditor.tsx


67. src/components/business/Metadata/TableMetadataView.tsx Additional files +9/-3

...

src/components/business/Metadata/TableMetadataView.tsx


68. src/components/business/Sidebar/ConnectionList.tsx Additional files +597/-312

...

src/components/business/Sidebar/ConnectionList.tsx


69. src/components/business/Sidebar/SavedQueriesList.tsx Additional files +13/-6

...

src/components/business/Sidebar/SavedQueriesList.tsx


70. src/components/business/Sidebar/Sidebar.tsx Additional files +5/-1

...

src/components/business/Sidebar/Sidebar.tsx


71. src/components/business/Sidebar/connection-list/TreeNode.tsx Additional files +1/-3

...

src/components/business/Sidebar/connection-list/TreeNode.tsx


72. src/components/business/Sidebar/connection-list/helpers.tsx Additional files +8/-1

...

src/components/business/Sidebar/connection-list/helpers.tsx


73. src/components/business/SqlLogs/SqlExecutionLogsDialog.tsx Additional files +8/-2

...

src/components/business/SqlLogs/SqlExecutionLogsDialog.tsx


74. src/components/settings/LanguageSelector.tsx Additional files +4/-1

...

src/components/settings/LanguageSelector.tsx


75. src/components/settings/SettingsDialog.tsx Additional files +85/-48

...

src/components/settings/SettingsDialog.tsx


76. src/components/updater-checker.tsx Additional files +71/-36

...

src/components/updater-checker.tsx



qodo-code-review bot commented Mar 30, 2026

Code Review by Qodo

🐞 Bugs (3) 📘 Rule violations (0) 📎 Requirement gaps (0)



Action required

1. MySQL DML misdetected 🐞 Bug ✓ Correctness
Description
MysqlDriver::execute_query relies on is_affected_rows_statement(), but its keyword parsing
doesn’t skip leading SQL comments, so commented INSERT/UPDATE/DELETE statements fall through to
fetch_all() and can fail at runtime. This breaks valid user SQL like -- note\nINSERT ....
Code

src-tauri/src/db/drivers/mysql.rs[R460-465]

+fn is_affected_rows_statement(sql: &str) -> bool {
+    matches!(
+        first_sql_keyword(sql).as_deref(),
+        Some("INSERT" | "UPDATE" | "DELETE" | "REPLACE")
+    )
+}
Evidence
first_sql_keyword() only trims whitespace and stops at the first non-alphanumeric character, so
SQL starting with -- or /* yields None, making is_affected_rows_statement() return false.
execute_query() then falls back to sqlx::query(&sql).fetch_all(...) for that DML, which is the
wrong execution path for statements without a result set.

src-tauri/src/db/drivers/mysql.rs[437-465]
src-tauri/src/db/drivers/mysql.rs[852-893]
src-tauri/src/db/drivers/postgres.rs[462-476]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`MysqlDriver::execute_query` uses `is_affected_rows_statement()` to decide whether to call `execute()` (for DML) or `fetch_all()` (for result sets). The helper `first_sql_keyword()` does not skip SQL comments, so queries that begin with `-- ...` or `/* ... */` are not recognized as DML and incorrectly go through `fetch_all()`, causing runtime failures.

### Issue Context
Postgres already has a comment-aware prefix skipper (`skip_ignorable_sql_prefix`) used by its `first_sql_keyword()`.

### Fix Focus Areas
- src-tauri/src/db/drivers/mysql.rs[437-465]
- src-tauri/src/db/drivers/mysql.rs[852-893]

### What to change
- Implement a MySQL equivalent of `skip_ignorable_sql_prefix` (handle whitespace, `--` line comments, `/* */` block comments, and optionally `#` for MySQL) and use it in `first_sql_keyword()`.
- Add unit tests proving `is_affected_rows_statement("-- c\nINSERT ...")` and `is_affected_rows_statement("/*c*/UPDATE ...")` return true.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
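
A minimal sketch of such a comment-aware keyword scanner, assuming the function names from the review snippet (the bodies are illustrative, not the actual driver code; MySQL's `--` comments formally require trailing whitespace, which this sketch glosses over):

```rust
// Skip leading whitespace, `--`/`#` line comments, and `/* */` block
// comments before the first real SQL token (hypothetical helper, modeled
// on the Postgres `skip_ignorable_sql_prefix` mentioned above).
fn skip_ignorable_sql_prefix(sql: &str) -> &str {
    let mut rest = sql;
    loop {
        let trimmed = rest.trim_start();
        if let Some(after) = trimmed.strip_prefix("--").or_else(|| trimmed.strip_prefix('#')) {
            // Line comment: skip to end of line (MySQL also allows `#`).
            rest = after.split_once('\n').map(|(_, r)| r).unwrap_or("");
        } else if let Some(after) = trimmed.strip_prefix("/*") {
            // Block comment: skip past `*/`, or everything if unterminated.
            rest = after.split_once("*/").map(|(_, r)| r).unwrap_or("");
        } else {
            return trimmed;
        }
    }
}

fn first_sql_keyword(sql: &str) -> Option<String> {
    let stripped = skip_ignorable_sql_prefix(sql);
    let word: String = stripped
        .chars()
        .take_while(|c| c.is_ascii_alphanumeric())
        .collect();
    if word.is_empty() { None } else { Some(word.to_ascii_uppercase()) }
}

fn is_affected_rows_statement(sql: &str) -> bool {
    matches!(
        first_sql_keyword(sql).as_deref(),
        Some("INSERT" | "UPDATE" | "DELETE" | "REPLACE")
    )
}
```

With this in place, `-- note\nINSERT ...` and `/*c*/UPDATE ...` are routed through execute() rather than fetch_all().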


2. SQL import reads special files 🐞 Bug ⛨ Security
Description
import_sql_file reads the selected path via fs::read_to_string() after checking only existence,
not-a-directory, and the .sql extension; it never verifies that the path is a regular file. This
lets FIFOs, devices, or symlinks with a .sql extension bypass the intended safety checks, which
can hang the import or behave unpredictably.
Code

src-tauri/src/commands/transfer.rs[R935-964]

+fn validate_import_path(path: &Path) -> Result<(), String> {
+    if path.as_os_str().is_empty() {
+        return Err("[IMPORT_ERROR] Invalid import path".to_string());
+    }
+    if path.is_dir() {
+        return Err("[IMPORT_ERROR] Import path points to a directory".to_string());
+    }
+    if !path.exists() {
+        return Err("[IMPORT_ERROR] Import file does not exist".to_string());
+    }
+    let Some(ext) = path.extension().and_then(|v| v.to_str()) else {
+        return Err("[IMPORT_ERROR] Import file must use .sql extension".to_string());
+    };
+    if !ext.eq_ignore_ascii_case("sql") {
+        return Err("[IMPORT_ERROR] Import file must use .sql extension".to_string());
+    }
+    Ok(())
+}
+
+fn validate_import_file_size(path: &Path) -> Result<(), String> {
+    let metadata = fs::metadata(path)
+        .map_err(|e| format!("[IMPORT_ERROR] failed to read file metadata: {e}"))?;
+    if metadata.len() > MAX_IMPORT_FILE_SIZE_BYTES {
+        return Err(format!(
+            "[IMPORT_ERROR] file is too large (max {} bytes)",
+            MAX_IMPORT_FILE_SIZE_BYTES
+        ));
+    }
+    Ok(())
+}
Evidence
The import flow calls validate_import_path() and validate_import_file_size() and then
unconditionally fs::read_to_string(&import_path). validate_import_path() does not check
path.is_file() nor does it use symlink_metadata() to reject symlinks/special file types, and the
size guard uses only metadata.len() which is not sufficient for non-regular files.

src-tauri/src/commands/transfer.rs[398-415]
src-tauri/src/commands/transfer.rs[935-964]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The SQL import feature reads the target path with `fs::read_to_string()`, but validation only checks existence, not-a-directory, and extension. Non-regular files (FIFO/device) or symlink tricks can bypass the intended safety constraints and may hang the import or cause resource issues.

### Issue Context
`validate_import_file_size()` uses `fs::metadata().len()` which is meaningful for regular files but not a complete safety check for special file types.

### Fix Focus Areas
- src-tauri/src/commands/transfer.rs[398-415]
- src-tauri/src/commands/transfer.rs[935-964]

### What to change
- In `validate_import_path`, use `std::fs::symlink_metadata` and verify `file_type.is_file()`.
- Consider rejecting symlinks explicitly (`file_type.is_symlink()`), or resolve/canonicalize and re-check.
- Update `validate_import_file_size` to reuse the same metadata object and ensure it’s a regular file before checking `len()`.
- Add tests for: directory, missing file, wrong extension, and non-regular file handling (where feasible).

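
A sketch of a stricter validator along the lines suggested above, assuming the [IMPORT_ERROR] message convention from the review snippet (names mirror that snippet but are illustrative; returning the length lets the size check reuse the same metadata object):

```rust
use std::fs;
use std::path::Path;

// Validate an import path and return the regular file's size in bytes.
// `symlink_metadata` does not follow symlinks, so a symlink pointing at a
// FIFO, device, or oversized file is rejected before any read happens.
fn validate_import_path(path: &Path) -> Result<u64, String> {
    if path.as_os_str().is_empty() {
        return Err("[IMPORT_ERROR] Invalid import path".to_string());
    }
    let metadata = fs::symlink_metadata(path)
        .map_err(|e| format!("[IMPORT_ERROR] failed to read file metadata: {e}"))?;
    let file_type = metadata.file_type();
    if file_type.is_symlink() {
        return Err("[IMPORT_ERROR] Import path must not be a symlink".to_string());
    }
    if !file_type.is_file() {
        // Covers directories, FIFOs, sockets, and device nodes.
        return Err("[IMPORT_ERROR] Import path is not a regular file".to_string());
    }
    match path.extension().and_then(|v| v.to_str()) {
        Some(ext) if ext.eq_ignore_ascii_case("sql") => Ok(metadata.len()),
        _ => Err("[IMPORT_ERROR] Import file must use .sql extension".to_string()),
    }
}
```

The caller can then compare the returned length against MAX_IMPORT_FILE_SIZE_BYTES before calling fs::read_to_string(), knowing the size refers to a regular file.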



Remediation recommended

3. MySQL TLS forced disabled 🐞 Bug ⛨ Security
Description
MySQL DSN building appends ?ssl-mode=DISABLED whenever ConnectionForm.ssl is None/false, which
hard-disables TLS for callers that omit the field and can break against servers requiring TLS
unless the user explicitly enables SSL. This is a risky default compared to Postgres, which
leaves TLS unspecified when SSL is not enabled.
Code

src-tauri/src/db/drivers/mysql.rs[R135-139]

+    } else {
+        // Explicitly disable TLS to avoid HandshakeFailure on servers with TLS
+        // versions or cipher suites incompatible with rustls (TLS 1.2+ only).
+        dsn.push_str("?ssl-mode=DISABLED");
    }
Evidence
ConnectionForm.ssl is an Option<bool>, but MySQL treats None the same as false and always
appends ssl-mode=DISABLED. Postgres only appends sslmode=require/verify-ca when SSL is
explicitly enabled and otherwise leaves the DSN without an explicit disable flag.

src-tauri/src/db/drivers/mysql.rs[115-139]
src-tauri/src/models/mod.rs[222-243]
src-tauri/src/db/drivers/postgres.rs[84-105]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
MySQL DSN construction forces `ssl-mode=DISABLED` whenever `ssl` is unset/false (`unwrap_or(false)`), which hard-disables TLS for callers that omit the SSL field and may cause unexpected plaintext connections or failures against TLS-required servers.

### Issue Context
`ConnectionForm.ssl` is optional. Treating `None` as an explicit opt-out is a semantic change vs leaving the driver defaults in place.

### Fix Focus Areas
- src-tauri/src/db/drivers/mysql.rs[115-139]
- src-tauri/src/models/mod.rs[222-243]

### What to change
- Treat `ssl == Some(true)` as current behavior (REQUIRED/VERIFY_CA).
- Treat `ssl == Some(false)` as explicit disable (append `ssl-mode=DISABLED`).
- Treat `ssl == None` as “unspecified”: do not append any `ssl-mode` query param and let the driver’s default apply.
- Add unit tests covering `ssl: None` does not include `ssl-mode=DISABLED` while `ssl: Some(false)` does.

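
The tri-state mapping proposed above can be sketched as follows (function names and the base-DSN shape are illustrative; only the ssl-mode values come from the review snippet):

```rust
// Map the optional `ssl` field to an explicit DSN query parameter only
// when the caller actually expressed a preference.
fn mysql_ssl_query_param(ssl: Option<bool>) -> Option<&'static str> {
    match ssl {
        Some(true) => Some("?ssl-mode=REQUIRED"),  // explicit opt-in
        Some(false) => Some("?ssl-mode=DISABLED"), // explicit opt-out
        None => None, // unspecified: leave the driver's default in place
    }
}

fn build_dsn(base: &str, ssl: Option<bool>) -> String {
    let mut dsn = base.to_string();
    if let Some(param) = mysql_ssl_query_param(ssl) {
        dsn.push_str(param);
    }
    dsn
}
```

This keeps Some(false) behaving as today while None no longer forces plaintext, matching the Postgres driver's "unspecified means driver default" semantics.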



Comment on lines +10 to +66
    runs-on: ubuntu-22.04
    timeout-minutes: 40

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Bun
        uses: oven-sh/setup-bun@v2
        with:
          bun-version: latest

      - name: Cache Bun dependencies
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            ~/.bun/install/cache
          key: ${{ runner.os }}-bun-${{ hashFiles('bun.lock') }}
          restore-keys: |
            ${{ runner.os }}-bun-

      - name: Install frontend dependencies
        run: bun install --frozen-lockfile

      - name: Setup Rust toolchain
        uses: dtolnay/rust-toolchain@stable

      - name: Cache Rust build artifacts
        uses: Swatinem/rust-cache@v2
        with:
          workspaces: src-tauri -> src-tauri/target

      - name: Run unit tests
        run: bun run test:unit

      - name: Run service tests
        run: bun run test:service

      - name: Run rust unit tests
        run: bun run test:rust:unit

      - name: Run integration tests (MySQL + Postgres with testcontainers)
        run: IT_DB=all bun run test:integration

      - name: Docker diagnostics on failure
        if: failure()
        run: |
          echo "==== docker ps -a ===="
          docker ps -a || true
          echo "==== recent mysql/postgres logs ===="
          for image in mysql:8.0 postgres:16-alpine; do
            for id in $(docker ps -aq --filter "ancestor=${image}"); do
              echo "--- logs for $id (${image}) ---"
              docker logs "$id" || true
            done
          done

Check warning

Code scanning / CodeQL

Workflow does not contain permissions Medium

Actions job or workflow does not limit the permissions of the GITHUB_TOKEN. Consider setting an explicit permissions block, using the following as a minimal starting point: {contents: read}

Copilot Autofix

AI about 22 hours ago

In general, to fix this class of issue you add an explicit permissions block at the workflow root or per job, restricting the GITHUB_TOKEN to the minimal rights needed (typically contents: read for a pure CI/test workflow). This both documents the intended access and ensures the workflow does not inherit broader defaults.

For this workflow, no step modifies repository contents, creates releases, comments on pull requests, or otherwise needs write access. The only privileged action is actions/checkout@v4, which works with contents: read. Caching with actions/cache, installing dependencies, running tests, and running Docker commands do not require GITHUB_TOKEN write scopes. Therefore the safest minimal fix is to add permissions: contents: read at the workflow root so it applies to all jobs.

Concretely:

  • Edit .github/workflows/ci.yml.
  • Add a root-level permissions: block after the name: CI line (before on:), with contents: read.
  • No additional imports, definitions, or changes to steps are required; this merely limits the default GITHUB_TOKEN capabilities.

Suggested changeset 1
.github/workflows/ci.yml

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -1,5 +1,8 @@
 name: CI
 
+permissions:
+  contents: read
+
 on:
   pull_request:
     branches:
EOF

CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: test

Failed stage: Run rust unit tests [❌]

Failed test name: ""

Failure summary:

The action failed during the Rust unit test step bun run test:rust:unit because the Rust build could
not compile gdk-sys v0.18.2.
- Cargo failed to run the custom build script for gdk-sys when
pkg-config could not find the required system library gdk-3.0 (gdk-3.0.pc).
- The log shows
pkg-config exited with status code 1 and The system library gdk-3.0 required by crate gdk-sys was
not found, with PKG_CONFIG_PATH not set (lines ~1334-1363).
- This indicates missing GTK/GDK
development dependencies on the GitHub Actions runner (e.g., libgdk-3-dev / libgtk-3-dev) or missing
PKG_CONFIG_PATH configuration.
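
Based on this analysis, installing the system dependencies before the cargo-touching steps would typically resolve the build. The step below is a hypothetical fix for .github/workflows/ci.yml; the package names assume Ubuntu 22.04 and the commonly documented Tauri v2 Linux prerequisites, so verify them against the Tauri docs:

```yaml
# Hypothetical step, placed before "Run rust unit tests".
- name: Install Linux system dependencies
  run: |
    sudo apt-get update
    sudo apt-get install -y libgtk-3-dev libwebkit2gtk-4.1-dev librsvg2-dev patchelf
```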

Relevant error logs:
1:  ##[group]Runner Image Provisioner
2:  Hosted Compute Agent
...

34:  SecurityEvents: write
35:  Statuses: write
36:  ##[endgroup]
37:  Secret source: Actions
38:  Prepare workflow directory
39:  Prepare all required actions
40:  Getting action download info
41:  Download action repository 'actions/checkout@v4' (SHA:34e114876b0b11c390a56381ad16ebd13914f8d5)
42:  Download action repository 'oven-sh/setup-bun@v2' (SHA:0c5077e51419868618aeaa5fe8019c62421857d6)
43:  Download action repository 'actions/cache@v4' (SHA:0057852bfaa89a56745cba8c7296529d2fc39830)
44:  Download action repository 'dtolnay/rust-toolchain@stable' (SHA:29eef336d9b2848a0b548edc03f92a220660cdb8)
45:  Download action repository 'Swatinem/rust-cache@v2' (SHA:e18b497796c12c097a38f9edb9d0641fb99eee32)
46:  Complete job name: test
47:  ##[group]Run actions/checkout@v4
48:  with:
49:  repository: codeErrorSleep/dbpaw
50:  token: ***
51:  ssh-strict: true
52:  ssh-user: git
53:  persist-credentials: true
54:  clean: true
55:  sparse-checkout-cone-mode: true
56:  fetch-depth: 1
57:  fetch-tags: false
58:  show-progress: true
59:  lfs: false
60:  submodules: false
61:  set-safe-directory: true
62:  ##[endgroup]
63:  Syncing repository: codeErrorSleep/dbpaw
64:  ##[group]Getting Git version info
...

74:  [command]/usr/bin/git init /home/runner/work/dbpaw/dbpaw
75:  hint: Using 'master' as the name for the initial branch. This default branch name
76:  hint: will change to "main" in Git 3.0. To configure the initial branch name
77:  hint: to use in all of your new repositories, which will suppress this warning,
78:  hint: call:
79:  hint:
80:  hint: 	git config --global init.defaultBranch <name>
81:  hint:
82:  hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
83:  hint: 'development'. The just-created branch can be renamed via this command:
84:  hint:
85:  hint: 	git branch -m <name>
86:  hint:
87:  hint: Disable this message with "git config set advice.defaultBranchName false"
88:  Initialized empty Git repository in /home/runner/work/dbpaw/dbpaw/.git/
89:  [command]/usr/bin/git remote add origin https://github.com/codeErrorSleep/dbpaw
90:  ##[endgroup]
91:  ##[group]Disabling automatic garbage collection
92:  [command]/usr/bin/git config --local gc.auto 0
93:  ##[endgroup]
94:  ##[group]Setting up auth
95:  [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
96:  [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
97:  [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
98:  [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"
99:  [command]/usr/bin/git config --local --name-only --get-regexp ^includeIf\.gitdir:
100:  [command]/usr/bin/git submodule foreach --recursive git config --local --show-origin --name-only --get-regexp remote.origin.url
101:  [command]/usr/bin/git config --local http.https://github.com/.extraheader AUTHORIZATION: basic ***
102:  ##[endgroup]
103:  ##[group]Fetching the repository
104:  [command]/usr/bin/git -c protocol.version=2 fetch --no-tags --prune --no-recurse-submodules --depth=1 origin +bfaa721fa85ee2af66d99a5415d62491be6bbef4:refs/remotes/pull/49/merge
105:  From https://github.com/codeErrorSleep/dbpaw
106:  * [new ref]         bfaa721fa85ee2af66d99a5415d62491be6bbef4 -> pull/49/merge
...

273:  echo "components=$(for c in ${components//,/ }; do echo -n ' --component' $c; done)" >> $GITHUB_OUTPUT
274:  echo "downgrade=" >> $GITHUB_OUTPUT
275:  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
276:  env:
277:  targets:
278:  components:
279:  ##[endgroup]
280:  ##[group]Run : set $CARGO_HOME
281:  : set $CARGO_HOME
282:  echo CARGO_HOME=${CARGO_HOME:-"$HOME/.cargo"} >> $GITHUB_ENV
283:  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
284:  ##[endgroup]
285:  ##[group]Run : install rustup if needed
286:  : install rustup if needed
287:  if ! command -v rustup &>/dev/null; then
288:    curl --proto '=https' --tlsv1.2 --retry 10 --retry-connrefused --location --silent --show-error --fail https://sh.rustup.rs | sh -s -- --default-toolchain none -y
289:    echo "$CARGO_HOME/bin" >> $GITHUB_PATH
...

351:  if [ -z "${CARGO_REGISTRIES_CRATES_IO_PROTOCOL+set}" -o -f "/home/runner/work/_temp"/.implicit_cargo_registries_crates_io_protocol ]; then
352:    if rustc +stable --version --verbose | grep -q '^release: 1\.6[89]\.'; then
353:      touch "/home/runner/work/_temp"/.implicit_cargo_registries_crates_io_protocol || true
354:      echo CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse >> $GITHUB_ENV
355:    elif rustc +stable --version --verbose | grep -q '^release: 1\.6[67]\.'; then
356:      touch "/home/runner/work/_temp"/.implicit_cargo_registries_crates_io_protocol || true
357:      echo CARGO_REGISTRIES_CRATES_IO_PROTOCOL=git >> $GITHUB_ENV
358:    fi
359:  fi
360:  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
361:  env:
362:  CARGO_HOME: /home/runner/.cargo
363:  CARGO_INCREMENTAL: 0
364:  CARGO_TERM_COLOR: always
365:  ##[endgroup]
366:  ##[group]Run : work around spurious network errors in curl 8.0
367:  : work around spurious network errors in curl 8.0
368:  # https://rust-lang.zulipchat.com/#narrow/stream/246057-t-cargo/topic/timeout.20investigation
...

437:  - CARGO_INCREMENTAL
438:  - CARGO_TERM_COLOR
439:  .. Lockfiles considered:
440:  - /home/runner/work/dbpaw/dbpaw/src-tauri/Cargo.lock
441:  - /home/runner/work/dbpaw/dbpaw/src-tauri/Cargo.toml
442:  ##[endgroup]
443:  ... Restoring cache ...
444:  No cache found.
445:  ##[group]Run bun run test:unit
446:  bun run test:unit
447:  shell: /usr/bin/bash -e {0}
448:  env:
449:  CARGO_HOME: /home/runner/.cargo
450:  CARGO_INCREMENTAL: 0
451:  CARGO_TERM_COLOR: always
452:  CACHE_ON_FAILURE: false
453:  ##[endgroup]
454:  $ bash ./scripts/test-unit.sh
455:  [run] unit tests (8 files)
456:  bun test v1.3.11 (af24e281)
457:  ##[group]src/theme/themeRegistry.unit.test.ts:
458:  (pass) themeRegistry > maps legacy github theme to github-light
459:  (pass) themeRegistry > recognizes all preset keys as valid theme ids
460:  (pass) themeRegistry > returns correct appearance for new themes
461:  (pass) themeRegistry > binds editorTheme to preset theme id [1.00ms]
462:  ##[endgroup]
463:  ##[group]src/lib/queryExecutionState.unit.test.ts:
464:  (pass) applyQueryCompletionToTab > should update results when the query is the latest
465:  (pass) applyQueryCompletionToTab > should ignore stale query results (race condition scenario)
466:  (pass) applyQueryCompletionToTab > should ignore query results from another tab
467:  (pass) applyQueryCompletionToTab > should correctly handle error results
468:  (pass) applyQueryCompletionToTab > complex case: only the last of rapid queries should apply
469:  ##[endgroup]
470:  ##[group]src/lib/sqlEditorDatabase.unit.test.ts:
471:  (pass) normalizeDatabaseOptions > deduplicates database names and keeps fallback when missing
472:  (pass) resolvePreferredDatabase > prefers the saved database when it still exists
473:  (pass) resolvePreferredDatabase > falls back to the connection database when saved database is gone
474:  (pass) resolvePreferredDatabase > falls back to the first available database when connection default is empty
475:  ##[endgroup]
476:  ##[group]src/services/updater.unit.test.ts:
477:  [Updater Mock] Enabled: available
478:  (pass) updater checkForUpdates (mock mode) > available scenario emits checking -> available [803.01ms]
479:  [Updater Mock] Enabled: no_update
480:  (pass) updater checkForUpdates (mock mode) > no_update scenario emits checking -> idle [801.01ms]
481:  [Updater Mock] Enabled: error
482:  (pass) updater checkForUpdates (mock mode) > error scenario emits checking -> error [802.01ms]
483:  (pass) updater startBackgroundInstall > prevents concurrent install [3.00ms]
484:  (pass) updater startBackgroundInstall > emits state transitions and completes [1.00ms]
485:  (pass) updater startBackgroundInstall > returns error when download fails [1.00ms]
486:  (pass) subscribeUpdateTask > pushes initial snapshot and respects unsubscribe [2.00ms]
...

519:  (pass) formatInsertSQLValue > quotes non-numeric and non-boolean values
520:  (pass) isInsertColumnRequired > returns true for NOT NULL without default
521:  (pass) isInsertColumnRequired > returns false when nullable
522:  (pass) isInsertColumnRequired > returns false when default value exists
523:  (pass) getQualifiedTableName > uses unqualified table with backticks for tidb
524:  (pass) getQualifiedTableName > uses unqualified table with backticks for mariadb [1.00ms]
525:  (pass) getQualifiedTableName > does not qualify sqlite main/public schema
526:  (pass) getQualifiedTableName > keeps non-main sqlite schema qualification
527:  (pass) getQualifiedTableName > does not qualify duckdb main/public schema
528:  (pass) clickhouse mutation guards > detects mergetree engine variants
529:  (pass) clickhouse mutation guards > requires both mergetree engine and primary keys
530:  (pass) mutation statement builders > builds clickhouse alter update/delete statements
531:  (pass) mutation statement builders > keeps generic update/delete statements for non-clickhouse
532:  ##[endgroup]
533:  57 pass
534:  0 fail
535:  147 expect() calls
536:  Ran 57 tests across 8 files. [2.43s]
537:  ##[group]Run bun run test:service
538:  bun run test:service
539:  shell: /usr/bin/bash -e {0}
540:  env:
541:  CARGO_HOME: /home/runner/.cargo
542:  CARGO_INCREMENTAL: 0
543:  CARGO_TERM_COLOR: always
544:  CACHE_ON_FAILURE: false
545:  ##[endgroup]
...

548:  bun test v1.3.11 (af24e281)
549:  ##[group]src/services/mocks.service.test.ts:
550:  [Mock] list_tables {
551:  id: 1,
552:  database: "test_db",
553:  schema: "public",
554:  }
555:  (pass) invokeMock service layer > returns table list for metadata command [51.00ms]
556:  [Mock] get_connections undefined
557:  (pass) invokeMock service layer > returns connection list for connection command [50.00ms]
558:  [Mock] Unknown command: unknown_command_for_test
559:  [Mock] unknown_command_for_test undefined
560:  (pass) invokeMock service layer > throws on unknown command [1.00ms]
561:  ##[endgroup]
562:  3 pass
563:  0 fail
564:  6 expect() calls
565:  Ran 3 tests across 1 file. [112.00ms]
566:  ##[group]Run bun run test:rust:unit
567:  bun run test:rust:unit
568:  shell: /usr/bin/bash -e {0}
569:  env:
570:  CARGO_HOME: /home/runner/.cargo
571:  CARGO_INCREMENTAL: 0
572:  CARGO_TERM_COLOR: always
573:  CACHE_ON_FAILURE: false
574:  ##[endgroup]
...

632:  �[1m�[92m  Downloaded�[0m proc-macro2 v1.0.106
633:  �[1m�[92m  Downloaded�[0m rand_core v0.5.1
634:  �[1m�[92m  Downloaded�[0m rustc-hash v2.1.1
635:  �[1m�[92m  Downloaded�[0m strum v0.27.2
636:  �[1m�[92m  Downloaded�[0m shlex v1.3.0
637:  �[1m�[92m  Downloaded�[0m sync_wrapper v1.0.2
638:  �[1m�[92m  Downloaded�[0m subtle v2.6.1
639:  �[1m�[92m  Downloaded�[0m servo_arc v0.2.0
640:  �[1m�[92m  Downloaded�[0m schemars_derive v0.8.22
641:  �[1m�[92m  Downloaded�[0m simd-adler32 v0.3.8
642:  �[1m�[92m  Downloaded�[0m spki v0.7.3
643:  �[1m�[92m  Downloaded�[0m strum_macros v0.27.2
644:  �[1m�[92m  Downloaded�[0m synstructure v0.13.2
645:  �[1m�[92m  Downloaded�[0m time-macros v0.2.27
646:  �[1m�[92m  Downloaded�[0m tauri-macros v2.5.4
647:  �[1m�[92m  Downloaded�[0m thiserror v2.0.18
648:  �[1m�[92m  Downloaded�[0m typeid v1.0.3
649:  �[1m�[92m  Downloaded�[0m thiserror v1.0.69
650:  �[1m�[92m  Downloaded�[0m tempfile v3.26.0
...

722:  �[1m�[92m  Downloaded�[0m convert_case v0.4.0
723:  �[1m�[92m  Downloaded�[0m cargo_toml v0.22.3
724:  �[1m�[92m  Downloaded�[0m atomic-waker v1.1.2
725:  �[1m�[92m  Downloaded�[0m async-process v2.5.0
726:  �[1m�[92m  Downloaded�[0m unicode-segmentation v1.12.0
727:  �[1m�[92m  Downloaded�[0m cairo-rs v0.18.5
728:  �[1m�[92m  Downloaded�[0m tauri-plugin-fs v2.4.5
729:  �[1m�[92m  Downloaded�[0m libc v0.2.182
730:  �[1m�[92m  Downloaded�[0m chrono v0.4.44
731:  �[1m�[92m  Downloaded�[0m regex-automata v0.4.14
732:  �[1m�[92m  Downloaded�[0m libz-sys v1.1.24
733:  �[1m�[92m  Downloaded�[0m vcpkg v0.2.15
734:  �[1m�[92m  Downloaded�[0m tao v0.34.5
735:  �[1m�[92m  Downloaded�[0m winnow v0.5.40
736:  �[1m�[92m  Downloaded�[0m utf-8 v0.7.6
737:  �[1m�[92m  Downloaded�[0m thiserror-impl v1.0.69
738:  �[1m�[92m  Downloaded�[0m tauri-plugin-store v2.4.2
...

851:  �[1m�[92m  Downloaded�[0m sqlx-core v0.8.6
852:  �[1m�[92m  Downloaded�[0m arrow-string v56.2.0
853:  �[1m�[92m  Downloaded�[0m tokio-stream v0.1.18
854:  �[1m�[92m  Downloaded�[0m tauri-plugin-process v2.3.1
855:  �[1m�[92m  Downloaded�[0m borsh-derive v1.6.0
856:  �[1m�[92m  Downloaded�[0m toml_parser v1.0.9+spec-1.1.0
857:  �[1m�[92m  Downloaded�[0m tokio-rustls v0.26.4
858:  �[1m�[92m  Downloaded�[0m tauri-winres v0.3.5
859:  �[1m�[92m  Downloaded�[0m tauri-plugin-window-state v2.4.1
860:  �[1m�[92m  Downloaded�[0m sqlx-mysql v0.8.6
861:  �[1m�[92m  Downloaded�[0m serde_with v1.14.0
862:  �[1m�[92m  Downloaded�[0m libm v0.2.16
863:  �[1m�[92m  Downloaded�[0m arrow-cast v56.2.0
864:  �[1m�[92m  Downloaded�[0m aho-corasick v1.1.4
865:  �[1m�[92m  Downloaded�[0m time-core v0.1.8
866:  �[1m�[92m  Downloaded�[0m thiserror-impl v2.0.18
867:  �[1m�[92m  Downloaded�[0m testcontainers v0.15.0
...

896:  �[1m�[92m  Downloaded�[0m rustls-pki-types v1.14.0
897:  �[1m�[92m  Downloaded�[0m sha2 v0.10.9
898:  �[1m�[92m  Downloaded�[0m serde_with_macros v3.17.0
899:  �[1m�[92m  Downloaded�[0m async-trait v0.1.89
900:  �[1m�[92m  Downloaded�[0m strum_macros v0.26.4
901:  �[1m�[92m  Downloaded�[0m stable_deref_trait v1.2.1
902:  �[1m�[92m  Downloaded�[0m smallvec v1.15.1
903:  �[1m�[92m  Downloaded�[0m semver v1.0.27
904:  �[1m�[92m  Downloaded�[0m polling v3.11.0
905:  �[1m�[92m  Downloaded�[0m num-bigint-dig v0.8.6
906:  �[1m�[92m  Downloaded�[0m strum v0.26.3
907:  �[1m�[92m  Downloaded�[0m strsim v0.11.1
908:  �[1m�[92m  Downloaded�[0m sqlx-macros v0.8.6
909:  �[1m�[92m  Downloaded�[0m serde_with_macros v1.5.2
910:  �[1m�[92m  Downloaded�[0m seahash v4.1.0
911:  �[1m�[92m  Downloaded�[0m proc-macro-error v1.0.4
912:  �[1m�[92m  Downloaded�[0m lexical-util v1.0.7
...

916:  Downloaded serialize-to-javascript v0.1.2
917:  Downloaded sha1 v0.10.6
918:  Downloaded rand_chacha v0.9.0
919:  Downloaded quote v1.0.44
920:  Downloaded pkg-config v0.3.32
921:  Downloaded num-traits v0.2.19
922:  Downloaded scopeguard v1.2.0
923:  Downloaded rustc_version v0.4.1
924:  Downloaded proc-macro-crate v1.3.1
925:  Downloaded arrow-array v56.2.0
926:  Downloaded serialize-to-javascript-impl v0.1.2
927:  Downloaded serde_spanned v0.6.9
928:  Downloaded rand_chacha v0.3.1
929:  Downloaded rand_chacha v0.2.2
930:  Downloaded ptr_meta_derive v0.1.4
931:  Downloaded proc-macro-error-attr v1.0.4
932:  Downloaded phf_codegen v0.8.0
...

1173:   Compiling displaydoc v0.2.5
1174:   Compiling toml_datetime v0.6.3
1175:   Compiling serde_spanned v0.6.9
1176:   Compiling cfg-expr v0.15.8
1177:   Compiling toml_edit v0.20.2
1178:   Compiling ppv-lite86 v0.2.21
1179:   Compiling form_urlencoded v1.2.2
1180:   Compiling utf8_iter v1.0.4
1181:   Compiling futures-task v0.3.32
1182:   Compiling stable_deref_trait v1.2.1
1183:   Compiling rand_chacha v0.3.1
1184:   Compiling yoke v0.8.1
1185:   Compiling bitflags v2.11.0
1186:   Compiling rand v0.8.5
1187:   Compiling zerovec v0.11.5
1188:   Compiling thiserror v1.0.69
1189:   Compiling subtle v2.6.1
1190:   Compiling thiserror-impl v1.0.69
1191:   Compiling typenum v1.19.0
1192:   Compiling toml v0.8.2
1193:   Compiling tinystr v0.8.2
1194:   Compiling system-deps v6.2.2
1195:   Compiling http v1.4.0
1196:   Compiling generic-array v0.14.7
1197:   Compiling winnow v0.7.14
1198:   Compiling icu_locale_core v2.1.1
1199:   Compiling potential_utf v0.1.4
1200:   Compiling zerotrie v0.2.3
1201:   Compiling fnv v1.0.7
1202:   Compiling icu_provider v2.1.1
1203:   Compiling icu_collections v2.1.1
1204:   Compiling byteorder v1.5.0
1205:   Compiling thiserror v2.0.18
1206:   Compiling thiserror-impl v2.0.18
1207:   Compiling slab v0.4.12
...

1314:   Compiling html5ever v0.29.1
1315:   Compiling cfb v0.7.3
1316:   Compiling jsonptr v0.6.3
1317:   Compiling rustc_version v0.4.1
1318:   Compiling cargo-platform v0.1.9
1319:   Compiling dyn-clone v1.0.20
1320:   Compiling serde-untagged v0.1.9
1321:   Compiling cargo_metadata v0.19.2
1322:   Compiling json-patch v3.0.1
1323:   Compiling infer v0.19.0
1324:   Compiling kuchikiki v0.8.8-speedreader
1325:   Compiling serde_with v3.17.0
1326:   Compiling urlpattern v0.3.0
1327:   Compiling num-traits v0.2.19
1328:   Compiling gdk-sys v0.18.2
1329:   Compiling proc-macro-error-attr v1.0.4
1330:   Compiling cpufeatures v0.2.17
1331:   Compiling tauri-utils v2.8.2
1332:   Compiling rustls-pki-types v1.14.0
1333:   Compiling futures-executor v0.3.32
1334:  error: failed to run custom build command for `gdk-sys v0.18.2`
1335:  Caused by:
...

1347:  cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64-unknown-linux-gnu
1348:  cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64_unknown_linux_gnu
1349:  cargo:rerun-if-env-changed=HOST_PKG_CONFIG_LIBDIR
1350:  cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR
1351:  cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64-unknown-linux-gnu
1352:  cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64_unknown_linux_gnu
1353:  cargo:rerun-if-env-changed=HOST_PKG_CONFIG_SYSROOT_DIR
1354:  cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR
1355:  --- stderr
1356:  pkg-config exited with status code 1
1357:  > PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=1 pkg-config --libs --cflags gdk-3.0 'gdk-3.0 >= 3.22'
1358:  The system library `gdk-3.0` required by crate `gdk-sys` was not found.
1359:  The file `gdk-3.0.pc` needs to be installed and the PKG_CONFIG_PATH environment variable must contain its parent directory.
1360:  The PKG_CONFIG_PATH environment variable is not set.
1361:  HINT: if you have installed the library, try setting PKG_CONFIG_PATH to the directory containing `gdk-3.0.pc`.
1362:  warning: build failed, waiting for other jobs to finish...
1363:  error: script "test:rust:unit" exited with code 101
1364:  ##[error]Process completed with exit code 101.
1365:  ##[group]Run echo "==== docker ps -a ===="
1366:  echo "==== docker ps -a ===="
1367:  docker ps -a || true
1368:  echo "==== recent mysql/postgres logs ===="
1369:  for image in mysql:8.0 postgres:16-alpine; do
1370:    for id in $(docker ps -aq --filter "ancestor=${image}"); do
1371:      echo "--- logs for $id (${image}) ---"
1372:      docker logs "$id" || true
1373:    done
1374:  done
1375:  shell: /usr/bin/bash -e {0}
1376:  env:
1377:  CARGO_HOME: /home/runner/.cargo
1378:  CARGO_INCREMENTAL: 0
1379:  CARGO_TERM_COLOR: always
1380:  CACHE_ON_FAILURE: false
1381:  ##[endgroup]

Comment on lines +460 to +465
fn is_affected_rows_statement(sql: &str) -> bool {
matches!(
first_sql_keyword(sql).as_deref(),
Some("INSERT" | "UPDATE" | "DELETE" | "REPLACE")
)
}

Action required

1. MySQL DML misdetected 🐞 Bug ✓ Correctness

MysqlDriver::execute_query relies on is_affected_rows_statement(), but its keyword parsing
doesn’t skip leading SQL comments, so commented INSERT/UPDATE/DELETE statements fall through to
fetch_all() and can fail at runtime. This breaks valid user SQL like -- note\nINSERT ....
Agent Prompt
### Issue description
`MysqlDriver::execute_query` uses `is_affected_rows_statement()` to decide whether to call `execute()` (for DML) or `fetch_all()` (for result sets). The helper `first_sql_keyword()` does not skip SQL comments, so queries that begin with `-- ...` or `/* ... */` are not recognized as DML and incorrectly go through `fetch_all()`, causing runtime failures.

### Issue Context
Postgres already has a comment-aware prefix skipper (`skip_ignorable_sql_prefix`) used by its `first_sql_keyword()`.

### Fix Focus Areas
- src-tauri/src/db/drivers/mysql.rs[437-465]
- src-tauri/src/db/drivers/mysql.rs[852-893]

### What to change
- Implement a MySQL equivalent of `skip_ignorable_sql_prefix` (handle whitespace, `--` line comments, `/* */` block comments, and optionally `#` for MySQL) and use it in `first_sql_keyword()`.
- Add unit tests proving `is_affected_rows_statement("-- c\nINSERT ...")` and `is_affected_rows_statement("/*c*/UPDATE ...")` return true.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
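The fix this prompt describes can be sketched as follows. This is a minimal, hedged illustration of a comment-aware keyword parser, not the project's actual `skip_ignorable_sql_prefix`; the `#` line-comment handling is the MySQL-specific extra the prompt mentions.

```rust
/// Skip leading whitespace, `-- ...` and `# ...` line comments,
/// and `/* ... */` block comments before the first keyword.
fn skip_ignorable_sql_prefix(sql: &str) -> &str {
    let mut rest = sql;
    loop {
        let trimmed = rest.trim_start();
        if let Some(after) = trimmed.strip_prefix("--").or_else(|| trimmed.strip_prefix('#')) {
            // Drop the rest of the comment line; stop if the input ends there.
            rest = after.split_once('\n').map_or("", |(_, tail)| tail);
        } else if let Some(after) = trimmed.strip_prefix("/*") {
            // Drop through the closing `*/`; stop if the comment is unterminated.
            rest = after.split_once("*/").map_or("", |(_, tail)| tail);
        } else {
            return trimmed;
        }
    }
}

fn first_sql_keyword(sql: &str) -> Option<String> {
    skip_ignorable_sql_prefix(sql)
        .split_whitespace()
        .next()
        .map(|w| {
            w.trim_end_matches(|c: char| !c.is_ascii_alphanumeric())
                .to_ascii_uppercase()
        })
}

fn is_affected_rows_statement(sql: &str) -> bool {
    matches!(
        first_sql_keyword(sql).as_deref(),
        Some("INSERT" | "UPDATE" | "DELETE" | "REPLACE")
    )
}
```

With this prefix skipper in place, the two cases called out in the prompt (`-- c\nINSERT ...` and `/*c*/UPDATE ...`) classify as DML and take the `execute()` path.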

Comment on lines +935 to +964
fn validate_import_path(path: &Path) -> Result<(), String> {
if path.as_os_str().is_empty() {
return Err("[IMPORT_ERROR] Invalid import path".to_string());
}
if path.is_dir() {
return Err("[IMPORT_ERROR] Import path points to a directory".to_string());
}
if !path.exists() {
return Err("[IMPORT_ERROR] Import file does not exist".to_string());
}
let Some(ext) = path.extension().and_then(|v| v.to_str()) else {
return Err("[IMPORT_ERROR] Import file must use .sql extension".to_string());
};
if !ext.eq_ignore_ascii_case("sql") {
return Err("[IMPORT_ERROR] Import file must use .sql extension".to_string());
}
Ok(())
}

fn validate_import_file_size(path: &Path) -> Result<(), String> {
let metadata = fs::metadata(path)
.map_err(|e| format!("[IMPORT_ERROR] failed to read file metadata: {e}"))?;
if metadata.len() > MAX_IMPORT_FILE_SIZE_BYTES {
return Err(format!(
"[IMPORT_ERROR] file is too large (max {} bytes)",
MAX_IMPORT_FILE_SIZE_BYTES
));
}
Ok(())
}

Action required

2. SQL import reads special files 🐞 Bug ⛨ Security

import_sql_file reads the selected path via fs::read_to_string() after only checking exists,
!is_dir, and .sql extension; it never verifies the path is a regular file. This allows
FIFOs/devices/symlinks-with-.sql to bypass intended safety checks and can hang or behave
unpredictably.
Agent Prompt
### Issue description
The SQL import feature reads the target path with `fs::read_to_string()`, but validation only checks existence, not-a-directory, and extension. Non-regular files (FIFO/device) or symlink tricks can bypass the intended safety constraints and may hang the import or cause resource issues.

### Issue Context
`validate_import_file_size()` uses `fs::metadata().len()` which is meaningful for regular files but not a complete safety check for special file types.

### Fix Focus Areas
- src-tauri/src/commands/transfer.rs[398-415]
- src-tauri/src/commands/transfer.rs[935-964]

### What to change
- In `validate_import_path`, use `std::fs::symlink_metadata` and verify `file_type.is_file()`.
- Consider rejecting symlinks explicitly (`file_type.is_symlink()`), or resolve/canonicalize and re-check.
- Update `validate_import_file_size` to reuse the same metadata object and ensure it’s a regular file before checking `len()`.
- Add tests for: directory, missing file, wrong extension, and non-regular file handling (where feasible).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
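A minimal sketch of the suggested validation, under the assumptions stated in the prompt; `validate_regular_sql_file` is a hypothetical helper name, not the project's `validate_import_path` signature.

```rust
use std::fs;
use std::path::Path;

/// Reject anything that is not a regular `.sql` file before reading it.
/// Returns the file size so the caller can reuse the same metadata
/// for the size limit check instead of stat-ing twice.
fn validate_regular_sql_file(path: &Path) -> Result<u64, String> {
    // symlink_metadata does NOT follow symlinks, so a symlink is
    // reported as a symlink rather than as its target.
    let meta = fs::symlink_metadata(path)
        .map_err(|e| format!("[IMPORT_ERROR] failed to stat file: {e}"))?;
    let file_type = meta.file_type();
    if file_type.is_symlink() {
        return Err("[IMPORT_ERROR] symlinks are not allowed".to_string());
    }
    if !file_type.is_file() {
        // Catches directories, FIFOs, devices, sockets.
        return Err("[IMPORT_ERROR] import path is not a regular file".to_string());
    }
    match path.extension().and_then(|v| v.to_str()) {
        Some(ext) if ext.eq_ignore_ascii_case("sql") => {}
        _ => return Err("[IMPORT_ERROR] Import file must use .sql extension".to_string()),
    }
    Ok(meta.len())
}
```

Because `is_file()` is false for FIFOs and devices, the hang scenario described above is rejected before `fs::read_to_string()` ever runs.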

#[test]
fn api_key_encrypt_decrypt_round_trip_and_format_validation() {
let key = [3u8; 32];
let mut key = [0u8; 32];

Check failure

Code scanning / CodeQL

Hard-coded cryptographic value Critical

This hard-coded value is used as a key.

Copilot Autofix

Copilot could not generate an autofix suggestion for this alert. Try pushing a new commit or if the problem persists contact support.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 20

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src-tauri/src/db/drivers/mysql.rs (1)

871-908: ⚠️ Potential issue | 🟠 Major

Route no-result DDL statements through execute(), not the row-fetch path.

sqlx documents .execute() as the appropriate API for statements with no result rows (INSERT/UPDATE/DELETE/DDL), while .fetch_all() is for statements that return data. Currently, DDL statements like CREATE/ALTER/DROP/TRUNCATE/SET/USE reach the else branch and attempt .fetch_all(), which is semantically incorrect. Although the code includes a fallback for MySQL's prepared protocol error (1295), this workaround masks the underlying issue: DDL should be routed directly through .execute() for correctness and efficiency.

Add a no-result statement classifier and combine it with the affected-rows path:

♻️ Suggested direction
+fn is_no_result_statement(sql: &str) -> bool {
+    matches!(
+        first_sql_keyword(sql).as_deref(),
+        Some("CREATE" | "ALTER" | "DROP" | "TRUNCATE" | "SET" | "USE")
+    )
+}
+
 ...
-        } else if is_affected_rows_statement(&sql) {
+        } else if is_affected_rows_statement(&sql) || is_no_result_statement(&sql) {
             let result = sqlx::query(&sql)
                 .execute(&self.pool)
                 .await
                 .map_err(|e| format!("[QUERY_ERROR] {e}"))?;
             (Vec::new(), Vec::new(), result.rows_affected() as i64)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/db/drivers/mysql.rs` around lines 871 - 908, The current logic
calls sqlx::query(...).fetch_all(...) for statements that may be DDL/no-result,
which is wrong; add a no-result-statement classifier (e.g.
is_no_result_statement or extend is_affected_rows_statement) and if it returns
true route the SQL through sqlx::query(&sql).execute(&self.pool).await (same
handling as the existing affected-rows branch) instead of the fetch_all path;
keep the existing prepared-protocol-unsupported error handling
(is_prepared_protocol_unsupported_error) and the fallback to sqlx::raw_sql only
for statements that were attempted with fetch_all, and ensure
describe_query_columns(&sql).await is only called when fetch_all succeeded and
returned zero rows without being a no-result statement (references:
is_affected_rows_statement, is_prepared_protocol_unsupported_error,
describe_query_columns, executed_with_raw_sql).
🧹 Nitpick comments (15)
src/components/business/Metadata/TableMetadataView.tsx (1)

210-212: Consider maintaining consistent formatting for all labels.

These three labels were reformatted to use multi-line JSX, while other similar labels in the same section ("Engine" at line 203, "Primary Key Expr" at lines 230-231, and "TTL" at line 250) remain single-line. For consistency and readability, consider either applying the multi-line format to all labels or reverting these to single-line.

♻️ Option 1: Revert to single-line (matches current style)
               <div className="border border-border rounded-md p-2">
-                  <div className="text-xs text-muted-foreground">
-                    Partition Key
-                  </div>
+                  <div className="text-xs text-muted-foreground">Partition Key</div>
                   <div className="font-mono text-sm break-words">
               <div className="border border-border rounded-md p-2">
-                  <div className="text-xs text-muted-foreground">
-                    Sorting Key
-                  </div>
+                  <div className="text-xs text-muted-foreground">Sorting Key</div>
                   <div className="font-mono text-sm break-words">
               <div className="border border-border rounded-md p-2">
-                  <div className="text-xs text-muted-foreground">
-                    Sampling Key
-                  </div>
+                  <div className="text-xs text-muted-foreground">Sampling Key</div>
                   <div className="font-mono text-sm break-words">

Also applies to: 220-222, 240-242

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/business/Metadata/TableMetadataView.tsx` around lines 210 -
212, In TableMetadataView, make the label JSX formatting consistent: revert the
multi-line JSX blocks for the labels "Partition Key", the similar label at
220-222, and the one at 240-242 back to single-line JSX to match the existing
single-line labels like "Engine", "Primary Key Expr", and "TTL"; update the
corresponding div elements with className "text-xs text-muted-foreground" in the
TableMetadataView component so each label is written on one line, preserving
exact text and spacing.
src-tauri/Cargo.toml (1)

48-49: CI runner is missing required GTK system packages for Rust test jobs.

Current CI fails before tests due to missing gdk-3.0 (pkg-config cannot resolve it). Add Linux runner setup for GTK dev libs (e.g., libgtk-3-dev, pkg-config) in the workflow so the new test setup can run reliably.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/Cargo.toml` around lines 48 - 49, CI jobs fail because GTK dev libs
are missing for Rust tests (pkg-config cannot resolve gdk-3.0) after adding the
testcontainers dev-dependency in Cargo.toml; update the workflow that runs the
Rust test job to install system packages before running cargo test by adding
steps to apt-get install pkg-config and libgtk-3-dev (and any other GTK dev
deps) on Linux runners (or use setup-gtk action), so the Rust tests can
successfully find gdk-3.0 during build.
src-tauri/src/commands/storage.rs (1)

23-38: LGTM — Direct helper pattern is consistent and correct.

The new *_direct functions provide a clean way to bypass Tauri's State extraction for integration testing while maintaining identical logic to the existing commands.

If this pattern grows to more commands, consider a macro to reduce the boilerplate between State<'_, AppState> and &AppState variants.

Also applies to: 59-75, 87-94, 106-113

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/commands/storage.rs` around lines 23 - 38, The direct helper
pattern is duplicated across functions (e.g., save_query_direct) — create a
small macro to generate both the State<'_, AppState> wrapper and the &AppState
direct variant to remove boilerplate; the macro should accept the base function
name, argument list and call-through body (for example generate
save_query(state: State<'_, AppState>, ...) -> Result<...> that calls
save_query_direct(&state, ...)), and replace the existing paired functions (like
save_query and save_query_direct, and the other pairs mentioned around lines
59-75, 87-94, 106-113) so behavior is unchanged but duplication is removed.
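One way the suggested macro could look, as a rough sketch only: `AppState`, `count_queries`, and the plain-reference wrapper below are stand-ins, since the real wrapper would take `tauri::State<'_, AppState>` and deref into the direct variant.

```rust
/// Hypothetical stand-in for the real AppState; the field is illustrative.
struct AppState {
    queries: Vec<String>,
}

/// Generate a `_direct` helper plus a thin wrapper from a single body,
/// so the two variants cannot drift apart.
macro_rules! state_command {
    (fn $name:ident / $direct:ident ( $state:ident : &AppState $(, $arg:ident : $ty:ty)* ) -> $ret:ty $body:block) => {
        fn $direct($state: &AppState $(, $arg: $ty)*) -> $ret $body

        // In the real app this wrapper would take `tauri::State<'_, AppState>`
        // and call the direct variant with `&state`.
        fn $name(wrapper_state: &AppState $(, $arg: $ty)*) -> $ret {
            $direct(wrapper_state $(, $arg)*)
        }
    };
}

state_command! {
    fn count_queries / count_queries_direct(state: &AppState) -> usize {
        state.queries.len()
    }
}
```

Each new command then adds one macro invocation instead of two hand-maintained function bodies.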
scripts/test-integration.sh (1)

17-22: Intentional word-splitting on ${ids}, but consider using xargs for robustness.

The unquoted ${ids} relies on word-splitting to pass multiple container IDs to docker rm -f. While this works for typical IDs, using xargs would be more robust and explicit:

♻️ Suggested improvement
   local ids
-  ids="$(docker ps -aq --filter "name=${it_container_prefix}" || true)"
-  if [[ -n "${ids}" ]]; then
+  ids=$(docker ps -aq --filter "name=${it_container_prefix}" || true)
+  if [[ -n "${ids}" ]]; then
     echo "[cleanup] removing leftover integration containers: ${it_container_prefix}*"
-    docker rm -f ${ids} >/dev/null 2>&1 || true
+    echo "${ids}" | xargs -r docker rm -f >/dev/null 2>&1 || true
   fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/test-integration.sh` around lines 17 - 22, The cleanup block relies
on unquoted ${ids} for word-splitting when passing container IDs to docker rm
-f; change it to pipe the ids into xargs to avoid relying on shell splitting and
to handle arbitrary whitespace safely (use the existing ids assignment and
replace the docker rm -f ${ids} call with a robust xargs invocation such as echo
"${ids}" | xargs -r docker rm -f), referencing the variables ids and
it_container_prefix and the docker rm -f invocation so you update that exact
command.
.github/workflows/ci.yml (1)

17-20: Consider pinning Bun version for reproducible builds.

Using bun-version: latest may cause unexpected CI failures when Bun releases breaking changes. Consider pinning to a specific version.

♻️ Suggested improvement
       - name: Setup Bun
         uses: oven-sh/setup-bun@v2
         with:
-          bun-version: latest
+          bun-version: "1.1.0"  # Pin to a specific version
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 17 - 20, The CI step using the Setup
Bun action currently sets bun-version: latest which risks nondeterministic
builds; update the "Setup Bun" step (uses: oven-sh/setup-bun@v2) to pin
bun-version to a specific released version string (e.g., a tested bun release)
instead of "latest" so builds are reproducible and stable.
src/components/business/Sidebar/ConnectionList.tsx (1)

2401-2558: Extract the duplicated database-tree renderer before it drifts.

The showSavedQueriesInTree split now repeats the entire database/schema/table rendering path, including selection behavior and the table context menu. A shared renderer with a configurable base level would keep future changes like import/export actions from having to be patched in two places.

Also applies to: 2636-2853

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/business/Sidebar/ConnectionList.tsx` around lines 2401 - 2558,
The database/schema/table rendering logic is duplicated across the
showSavedQueriesInTree split; extract a shared renderer to avoid drift by
creating a reusable function (e.g., renderDatabaseTree or renderDatabaseGroup)
that accepts a base level and the connection/database context and reuses
existing helpers like getTableNodeKey, renderTableNode (move this inner function
out), tableNodeRefs, toggleTable, handleTableClick, handleTableExport and
handleCreateQueryFromContext; replace both duplicated blocks with calls to this
new renderer so selection, context menu, and behaviors remain consistent across
both code paths.
src-tauri/src/commands/mod.rs (1)

70-107: Avoid maintaining two copies of the connection/retry path.

The new *_from_app_state helpers are effectively copies of the existing State<'_, AppState> versions. Please extract the shared body once so LocalDb existence checks, pool eviction, and retry behavior can't silently diverge later.

Also applies to: 161-178

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/commands/mod.rs` around lines 70 - 107, The code duplicates
connection/retry logic between ensure_connection_with_db_from_app_state and the
State<'_, AppState> variant; extract the shared logic into a single helper
(e.g., a new async fn ensure_connection_shared or ensure_connection_check) that
accepts the AppState reference, connection id, optional database name, and pool
key, and performs: obtaining/cloning local_db (state.local_db.lock().await),
verifying local_db exists, checking db.get_connection_by_id(id).await and
calling state.pool_manager.remove_by_prefix(&id.to_string()).await on failure,
applying database override to the form, and returning the pool_manager.connect
result; then have both ensure_connection_with_db_from_app_state and the
State<'_, AppState> wrapper call that helper to avoid divergent LocalDb
existence checks, pool eviction, and retry behavior.
src-tauri/tests/common/mysql_context.rs (1)

47-58: Collapse the duplicate env-read path into one helper.

mysql_form_from_local_env() already reads every MYSQL_* value, then apply_mysql_env_overrides() immediately rereads the same set. The same two-step pattern exists in the sibling context files, so it is easy for defaults and override behavior to drift. A shared helper that reads each env var once would keep these test contexts consistent.

Also applies to: 61-79

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/tests/common/mysql_context.rs` around lines 47 - 58, Collapse the
duplicate environment reads by extracting a single helper (e.g.,
read_mysql_env_vars or build_mysql_connection_from_env) that reads each MYSQL_*
once and returns either a partial struct or a ConnectionForm; then change
mysql_form_from_local_env and apply_mysql_env_overrides to use that helper
instead of re-reading the env vars. Specifically, move the shared env_or/env_i64
calls for MYSQL_HOST, MYSQL_PORT, MYSQL_USER, MYSQL_PASSWORD, MYSQL_DB into the
new helper and have mysql_form_from_local_env simply initialize the
ConnectionForm from the helper (and have apply_mysql_env_overrides accept the
already-read values or be removed), and apply the same refactor for the sibling
context block at 61-79 to ensure a single env-read path.
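The single env-read path could look like this sketch; `MysqlEnv`, `read_mysql_env`, and the defaults shown are illustrative, not the test suite's actual values.

```rust
use std::env;

/// Everything read from the MYSQL_* variables, captured exactly once.
struct MysqlEnv {
    host: String,
    port: i64,
    user: String,
    password: String,
    database: String,
}

fn env_or(key: &str, default: &str) -> String {
    env::var(key).unwrap_or_else(|_| default.to_string())
}

/// Both the local-env form builder and the container-override path
/// can be constructed from this one struct, removing the double read.
fn read_mysql_env() -> MysqlEnv {
    MysqlEnv {
        host: env_or("MYSQL_HOST", "127.0.0.1"),
        port: env_or("MYSQL_PORT", "3306").parse().unwrap_or(3306),
        user: env_or("MYSQL_USER", "root"),
        password: env_or("MYSQL_PASSWORD", ""),
        database: env_or("MYSQL_DB", "test"),
    }
}
```

The sibling Postgres/MSSQL contexts could share the same shape with their own variable prefixes.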
src-tauri/tests/mysql_command_integration.rs (1)

12-18: Same minor timestamp collision potential as Postgres tests.

This uses the same millisecond-based naming pattern. Consider extracting this helper to a shared module to ensure consistent behavior across test files.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/tests/mysql_command_integration.rs` around lines 12 - 18, Extract
the duplicate unique_table_name function into a shared test utilities module
(keep the function name unique_table_name and signature fn
unique_table_name(prefix: &str) -> String) and have both
mysql_command_integration.rs and the Postgres test import and call that shared
helper; implement the same millisecond-based logic (or optionally add extra
entropy like a short random suffix) inside the shared helper so both test files
use a single source of truth and avoid timestamp-collision inconsistencies.
src-tauri/tests/postgres_command_integration.rs (1)

11-17: Potential table name collision in parallel test runs.

The unique_table_name function uses millisecond timestamps, which could collide if multiple tests start within the same millisecond. Consider adding a random suffix or using a more granular timestamp.

🔧 Optional: Add randomness to avoid collision
+use std::sync::atomic::{AtomicU64, Ordering};
+
+static TABLE_COUNTER: AtomicU64 = AtomicU64::new(0);
+
 fn unique_table_name(prefix: &str) -> String {
     let millis = SystemTime::now()
         .duration_since(UNIX_EPOCH)
         .expect("time should be after unix epoch")
         .as_millis();
-    format!("{}_{}", prefix, millis)
+    let counter = TABLE_COUNTER.fetch_add(1, Ordering::SeqCst);
+    format!("{}_{}_{}", prefix, millis, counter)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/tests/postgres_command_integration.rs` around lines 11 - 17,
unique_table_name currently builds names using millisecond timestamps which can
collide in parallel runs; modify unique_table_name to append a high-entropy
suffix (for example a UUID v4 or a small random hex/string from the rand crate
or include as_nanos()) to guarantee uniqueness across concurrent tests—update
the function (and add necessary imports) so it returns format!("{}_{}_{}",
prefix, millis, <random_or_uuid>) using uuid::Uuid::new_v4() or
rand::thread_rng() to generate the suffix.
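A runnable version of the counter-based variant from the diff above, assuming the same helper name as the tests:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

static TABLE_COUNTER: AtomicU64 = AtomicU64::new(0);

/// Millisecond timestamp plus a process-wide counter: two calls within
/// the same millisecond still produce distinct names.
fn unique_table_name(prefix: &str) -> String {
    let millis = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("time should be after unix epoch")
        .as_millis();
    let counter = TABLE_COUNTER.fetch_add(1, Ordering::Relaxed);
    format!("{prefix}_{millis}_{counter}")
}
```

Note the counter only guards against collisions within one process; cross-process uniqueness still relies on the timestamp (or a UUID, as the prompt suggests).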
src-tauri/tests/common/mssql_context.rs (1)

40-83: Consider documenting the unused docker parameter.

On line 43, the docker parameter is explicitly ignored (let _ = docker;), but the function creates its own Cli::default() when Docker is needed. This pattern differs from other contexts that use the passed-in docker parameter. Consider adding a comment explaining this design choice or aligning with other context modules.

📝 Optional: Add clarifying comment
 pub fn mssql_form_from_test_context(
     docker: Option<&Cli>,
 ) -> (Option<MssqlContainerGuard>, ConnectionForm) {
-    let _ = docker;
+    // Note: docker parameter is ignored; we create a leaked Cli for static lifetime
+    // to support the shared singleton pattern across all tests.
+    let _ = docker;
     if should_reuse_local_db() {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/tests/common/mssql_context.rs` around lines 40 - 83, The function
mssql_form_from_test_context currently ignores the incoming docker parameter
(let _ = docker;) and instead leaks a new Cli::default() when initializing
MSSQL_SHARED; either use the provided docker if Some or document why a leaked
Cli::default() is required. Update mssql_form_from_test_context to check the
docker argument and use it for creating the runnable/container when present,
falling back to Box::leak(Box::new(Cli::default())) only when None, or add a
short clarifying comment above the let _ = docker; line explaining why the
passed-in docker is intentionally ignored and why creating/leaking a
Cli::default() is necessary; reference mssql_form_from_test_context, docker,
MSSQL_SHARED and the Box::leak(Cli::default()) use so reviewers can find the
change.
src-tauri/tests/mysql_stateful_command_integration.rs (2)

19-25: Potential name collision in high-throughput scenarios.

unique_name uses millisecond timestamps which could collide if tests run in parallel within the same millisecond. Consider adding a random component or atomic counter.

♻️ Suggested improvement
+use std::sync::atomic::{AtomicU64, Ordering};
+
+static NAME_COUNTER: AtomicU64 = AtomicU64::new(0);
+
 fn unique_name(prefix: &str) -> String {
     let millis = SystemTime::now()
         .duration_since(UNIX_EPOCH)
         .expect("time should be after unix epoch")
         .as_millis();
-    format!("{}_{}", prefix, millis)
+    let counter = NAME_COUNTER.fetch_add(1, Ordering::Relaxed);
+    format!("{}_{}_{}", prefix, millis, counter)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/tests/mysql_stateful_command_integration.rs` around lines 19 - 25,
unique_name currently builds identifiers from millisecond timestamps and can
collide under parallel/high-throughput tests; update unique_name to append a
small random component or a thread-safe monotonic counter to the timestamp
(e.g., use the rand crate to add a short hex/random u32 or use a static
AtomicU64 incremented per call) so names are effectively unique even within the
same millisecond; modify the unique_name function to combine
SystemTime::now().as_millis() with the chosen random/counter value and return
that string.

67-75: SQL injection risk in test helper.

The drop_database_if_exists function interpolates db_name directly into SQL without using the quoting function. While this is test code with controlled input, it's inconsistent with production code patterns.

♻️ Suggested fix
 async fn drop_database_if_exists(form: &ConnectionForm, db_name: &str) {
     let driver = MysqlDriver::connect(form)
         .await
         .expect("failed to connect mysql driver for cleanup");
+    // Use proper quoting to escape special characters
+    let quoted_name = db_name.replace('`', "``");
     let _ = driver
-        .execute_query(format!("DROP DATABASE IF EXISTS `{}`", db_name))
+        .execute_query(format!("DROP DATABASE IF EXISTS `{}`", quoted_name))
         .await;
     driver.close().await;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/tests/mysql_stateful_command_integration.rs` around lines 67 - 75,
The test helper drop_database_if_exists currently interpolates db_name directly
into SQL, risking SQL injection and inconsistent behavior; change it to obtain a
safely quoted identifier from the driver (e.g., call the driver's
identifier-quoting/escaping method after MysqlDriver::connect) and use that
quoted value when building the DROP DATABASE statement passed to execute_query
so the SQL uses a properly escaped identifier instead of raw interpolation.
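The escaping applied in the diff can live in a small helper; `quote_mysql_ident` is a hypothetical name, and the backtick doubling follows MySQL's quoted-identifier rule.

```rust
/// Quote a MySQL identifier: wrap in backticks and double any
/// embedded backtick so it cannot terminate the quoting early.
fn quote_mysql_ident(name: &str) -> String {
    format!("`{}`", name.replace('`', "``"))
}
```

The test helper would then build `format!("DROP DATABASE IF EXISTS {}", quote_mysql_ident(db_name))` instead of interpolating the raw name.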
src-tauri/tests/common/shared.rs (1)

17-29: Consider handling DNS resolution failures gracefully.

wait_for_port calls resolve_socket_addr which panics if resolution fails. If the hostname is invalid or DNS is temporarily unavailable, this could give a confusing panic instead of a timeout message.

♻️ Suggested improvement
 pub fn wait_for_port(host: &str, port: u16, timeout: Duration) {
     let started_at = Instant::now();
-    let addr = resolve_socket_addr(host, port);
 
     while started_at.elapsed() < timeout {
+        let addr = match (host, port).to_socket_addrs() {
+            Ok(mut addrs) => match addrs.next() {
+                Some(a) => a,
+                None => {
+                    sleep(Duration::from_millis(500));
+                    continue;
+                }
+            },
+            Err(_) => {
+                sleep(Duration::from_millis(500));
+                continue;
+            }
+        };
         if TcpStream::connect_timeout(&addr, Duration::from_millis(500)).is_ok() {
             return;
         }
         sleep(Duration::from_millis(500));
     }
 
     panic!("timed out waiting for {}:{} to accept connections", host, port);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/tests/common/shared.rs` around lines 17 - 29, wait_for_port
currently calls resolve_socket_addr which panics on DNS failures; change
wait_for_port to handle resolution errors gracefully by attempting to resolve
inside the retry loop (call resolve_socket_addr and handle its error) and only
attempt TcpStream::connect_timeout when resolution succeeds, sleeping and
retrying resolution until the overall timeout elapses; preserve the final panic
message "timed out waiting for {}:{} to accept connections" so DNS resolution
failures become part of the timeout instead of immediate panics. Reference:
wait_for_port and resolve_socket_addr.
src-tauri/src/commands/connection.rs (1)

324-403: Consider extracting shared logic to reduce duplication.

create_database_by_id_direct largely duplicates create_database_by_id. Consider extracting the common logic into a private helper function that both can call, differing only in how the AppState is accessed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/commands/connection.rs` around lines 324 - 403,
create_database_by_id_direct duplicates most of create_database_by_id; extract
the shared logic (validation of payload.name, driver selection/normalization,
unsupported-driver check, the match over driver that builds SQL and calls
super::execute_with_retry_from_app_state, and final error normalization) into a
private helper like create_database_common(state_ref_getter, id, payload,
if_not_exists) or create_database_internal(id, driver, payload, if_not_exists)
that returns Result<(), String>, then have both create_database_by_id_direct and
create_database_by_id call that helper — keep the driver acquisition differences
(how you read local_db from AppState) outside the helper and pass the normalized
driver string in, and ensure you reuse existing symbols like
validate_database_name, build_*_create_database_sql,
execute_with_retry_from_app_state, and normalize_create_database_error so
behavior is unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/ci.yml:
- Around line 8-11: Add a minimal permissions block to restrict the GITHUB_TOKEN
scope by adding a permissions: mapping (for example at the workflow root or
under the job "test") that explicitly grants only the least privileges needed
(e.g., contents: read and actions: read or other specific scopes your tests
require) instead of the default full permissions; update the workflow keys near
the existing jobs/test configuration (where "runs-on" and "timeout-minutes" are
defined) to include this new permissions block referencing GITHUB_TOKEN usage.

In `@src-tauri/src/commands/transfer.rs`:
- Around line 1044-1063: The SingleQuoted state in the SQL scanner
(SqlScanState::SingleQuoted) currently always treats backslash as an escape
(using chars, current, i), which is MySQL-specific; change that logic to only
perform the backslash escape when the driver flag for MySQL-style escaping is
enabled (re-use the existing mysql_style_hash_comment or an equivalent
mysql_style_backslash_escape boolean available in the scanner/context),
otherwise leave backslashes as literal characters and just advance by one. Keep
the unconditional handling of doubled single-quotes (the existing check
comparing chars.get(i + 1) to Some(&'\'')) intact.
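
A minimal sketch of the dialect-gated scan the prompt describes, under assumed names (`skip_single_quoted` and the `mysql_style_backslash_escape` flag are stand-ins for the real scanner state): backslash pairs are consumed only for MySQL-style strings, while the doubled-quote escape stays unconditional.

```rust
// Scan the body of a single-quoted SQL string starting just after the opening
// quote; return the index one past the closing quote (or the end of input).
fn skip_single_quoted(chars: &[char], mut i: usize, mysql_style_backslash_escape: bool) -> usize {
    while i < chars.len() {
        let ch = chars[i];
        if mysql_style_backslash_escape && ch == '\\' && i + 1 < chars.len() {
            i += 2; // consume the escape pair only for MySQL-style strings
            continue;
        }
        if ch == '\'' {
            if chars.get(i + 1) == Some(&'\'') {
                i += 2; // doubled single quote is standard SQL in every dialect
                continue;
            }
            return i + 1; // closing quote
        }
        i += 1;
    }
    i
}
```

With the flag off, a backslash right before a quote is a literal character and the string closes at that quote; with it on, the backslash escapes the quote and scanning continues.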

In `@src-tauri/src/connection_input/mod.rs`:
- Around line 49-54: The SSH password handling currently uses
trim_preserve_empty for form.ssh_password so whitespace becomes Some(""), which
bypasses the later auth check that only rejects None; either switch to
trim_to_option(form.ssh_password) so empty/whitespace passwords become None, or
keep trim_preserve_empty but update the SSH auth validation (where
form.ssh_password is checked alongside form.ssh_key_path) to treat Some("") as
missing (e.g., require form.ssh_password.as_ref().map(|s|
!s.is_empty()).unwrap_or(false) or normalize with .filter(|s| !s.is_empty())
before validation) so an effectively empty password does not bypass
authentication checks.
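
The two normalization behaviors contrasted above can be sketched as follows (simplified signatures; the project's real helpers may differ): `trim_preserve_empty` keeps `Some("")` for whitespace-only input, while `trim_to_option` collapses it to `None` so a later "is the password set?" check cannot be bypassed.

```rust
// Keeps an empty-after-trim value as Some("") - the behavior under review.
fn trim_preserve_empty(input: Option<String>) -> Option<String> {
    input.map(|s| s.trim().to_string())
}

// Collapses empty-after-trim values to None - the suggested alternative.
fn trim_to_option(input: Option<String>) -> Option<String> {
    input.map(|s| s.trim().to_string()).filter(|s| !s.is_empty())
}
```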

In `@src-tauri/src/db/drivers/mod.rs`:
- Around line 23-62: The conn_failed_error function currently returns the full
upstream error string to users; change it to return a sanitized, generic
user-facing message (e.g., "[CONN_FAILED] Connection failed — check host, port,
credentials, and SSL settings (see logs for details)" or keep the existing
contextual hint text but omit the raw error contents), while logging the
original raw error for internal diagnostics using your app logger (e.g.,
log::error or tracing::error) before formatting the returned string; keep the
existing hint-selection logic in conn_failed_error but ensure the formatted
return does not include the raw variable or any sensitive fragments.
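
A minimal sketch of the sanitize-and-log split, with `eprintln!` standing in for the app logger (`log::error!`/`tracing::error!`) and a deliberately simplified stand-in for the real hint-selection heuristics:

```rust
// Log the raw upstream error internally, return only a generic hint to users.
fn conn_failed_error(raw: &str) -> String {
    eprintln!("connection failed (raw driver error): {raw}"); // internal diagnostics only
    let hint = if raw.contains("tls") || raw.contains("ssl") {
        "check SSL/TLS settings"
    } else if raw.contains("refused") {
        "check host and port"
    } else {
        "check host, port, credentials, and SSL settings"
    };
    // The raw error is never interpolated into the user-facing string.
    format!("[CONN_FAILED] Connection failed: {hint} (see logs for details)")
}
```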

In `@src-tauri/src/db/drivers/mssql.rs`:
- Around line 241-245: The health-check in async fn is_valid(&self, conn: &mut
Self::Connection) currently calls conn.simple_query("SELECT 1") and drops the
returned QueryStream, which defers flushing; update is_valid to fully consume
the stream by awaiting the results (use .into_results().await on the
QueryStream) and map any error into the existing error type (e.g., map_err(|e|
format!("{}", e))) so the stream is drained before returning Ok(()), preventing
deferred flush penalties on connection checkout.
- Around line 212-228: The connect_with_config function currently only timeouts
the TcpStream::connect phase; move the tokio::time::timeout to wrap the entire
async sequence that includes both establishing the TCP socket and calling
Client::connect so the PRELOGIN/TLS/LOGIN7 handshake is also bounded; implement
a timeout around an async block that first creates/configures the TcpStream
(TcpStream::connect and set_nodelay) and then awaits Client::connect(config,
tcp.compat_write()), propagating errors with map_err(format!) and returning the
same Result<Client<Compat<TcpStream>>, String> on timeout or underlying errors.

In `@src-tauri/src/ssh.rs`:
- Around line 48-55: The code currently assigns a default port 0 for "sqlite"
which later triggers a misleading validation error; update start_ssh_tunnel() to
detect file-based drivers by inspecting config.driver (e.g., "sqlite", "duckdb")
and immediately return an error with the message "SSH tunneling is not supported
for file-based drivers" before any port defaulting/validation logic runs; keep
the existing default_port logic for network drivers only (config.driver used
elsewhere stays the same). Also extend the unit test
test_target_port_default_by_driver() to add a case for "sqlite" with ssh enabled
and assert that start_ssh_tunnel() returns the explicit file-based-driver error
instead of a port validation error.
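
The early rejection could look like the sketch below. The function name and the exact port table are assumptions (standard defaults are shown), not the project's actual code:

```rust
// Reject SSH tunneling for file-based drivers up front, before any port
// defaulting or validation runs; network drivers keep their default ports.
fn default_ssh_target_port(driver: &str) -> Result<u16, String> {
    match driver {
        "sqlite" | "duckdb" => {
            Err("SSH tunneling is not supported for file-based drivers".to_string())
        }
        "mysql" | "mariadb" => Ok(3306),
        "postgres" => Ok(5432),
        "mssql" => Ok(1433),
        "clickhouse" => Ok(8123),
        other => Err(format!("unknown driver: {other}")),
    }
}
```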

In `@src-tauri/tests/common/duckdb_context.rs`:
- Around line 23-28: The code treats any non-error env::var result as an
override even when the value is empty; change the branches that call
env::var("DUCKDB_IT_DB_PATH") and env::var("DUCKDB_DB_PATH") to trim the
returned string and skip it if it's blank (e.g. check if v.trim().is_empty() and
continue instead of returning), and when returning use
PathBuf::from(trimmed_value) so empty strings are not returned as a file path.
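
One way to express the trim-and-skip lookup, sketched with an injected getter so it is testable without touching the process environment (in the real helper the getter would be `|k| std::env::var(k).ok()`; the function name is assumed):

```rust
use std::path::PathBuf;

// Only a non-blank value counts as an override, and the trimmed value is
// what becomes the path; blank values fall through to the next variable.
fn duckdb_path_override(get: impl Fn(&str) -> Option<String>) -> Option<PathBuf> {
    for var in ["DUCKDB_IT_DB_PATH", "DUCKDB_DB_PATH"] {
        if let Some(v) = get(var) {
            let trimmed = v.trim();
            if !trimmed.is_empty() {
                return Some(PathBuf::from(trimmed));
            }
        }
    }
    None
}
```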

In `@src-tauri/tests/duckdb_integration.rs`:
- Around line 534-538: The code treats the COUNT(*) aggregate as a JSON string
and uses as_str().expect(...) which will break if DuckDB returns a numeric JSON
value; replace this by using a small reusable coercion helper (e.g.,
value_to_i64 or coerce_json_to_i64) to convert rolled_back.data[0]["c"] into i64
safely (handling string and number variants and returning a parse error
otherwise) and apply the same helper to the other later count checks so both
string and numeric payloads are accepted reliably.
- Around line 594-597: The test currently uses a Unix-only hardcoded path;
change the ConnectionForm.file_path construction in duckdb_integration.rs to
build a platform-neutral nonexistent path from std::env::temp_dir() (e.g.,
temp_dir().join(format!("nonexistent_parent_{}",
unique_id)).join("dbpaw_test.duckdb")) so the parent directory is guaranteed
missing across platforms; convert that PathBuf to a String for
ConnectionForm.file_path (use a UUID or timestamp for unique_id to avoid
accidental existence).
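
The coercion helper suggested for the COUNT(*) checks can be sketched as below. A local enum stands in for the JSON scalar so the sketch is self-contained; the real helper would match on serde_json::Value::String and Value::Number the same way.

```rust
// Stand-in for the string/number variants a driver may return for COUNT(*).
enum Scalar {
    Str(String),
    Int(i64),
}

// Accept either encoding of an integer aggregate; error on anything else.
fn value_to_i64(v: &Scalar) -> Result<i64, String> {
    match v {
        Scalar::Int(n) => Ok(*n),
        Scalar::Str(s) => s
            .trim()
            .parse::<i64>()
            .map_err(|e| format!("not an integer: {e}")),
    }
}
```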

In `@src-tauri/tests/mysql_integration.rs`:
- Around line 735-738: The test assumes MySQL aggregate columns are always
strings by calling as_str() on rolled_back.data[0]["c"]; change this to use a
small scalar-coercion helper (e.g., normalize_scalar_to_string or
coerce_json_value_to_string) that accepts a serde_json::Value and returns the
integer as a string whether the value is a JSON String or Number, and use that
helper when computing rolled_back_count and in the similar checks around lines
758-761; update assertions to compare against the normalized string ("0") so the
tests accept either encoding.

In `@src-tauri/tests/mysql_stateful_command_integration.rs`:
- Around line 42-52: The helper init_state_with_local_db currently creates a
temporary local_db_dir for LocalDb but never removes it; modify
init_state_with_local_db (and any tests that call it) to return the created path
(e.g., change signature to return (AppState, PathBuf)) so callers can remove the
directory after test completion, ensure the function still initializes AppState
and sets state.local_db with the Arc<LocalDb>, and update tests to call
fs::remove_dir_all(local_db_dir) (or equivalent cleanup) after the test to
prevent disk accumulation.

In `@src-tauri/tests/postgres_integration.rs`:
- Around line 433-443: The test currently only checks that "meta" exists but
doesn't verify its JSON structure; update the assertions in the execute_query()
block (using driver.execute_query -> query_result -> query_row) to assert the
actual JSON payload (e.g., compare query_row["meta"] to the expected
serde_json::Value or a normalized form) instead of just presence, and make the
same change in the get_table_data() assertion block (the later section around
lines 459-464) so both query paths validate the inserted JSON content, not
merely non-nullness.
- Around line 564-583: The test currently only asserts
inserted.success/updated.success/deleted.success but doesn't verify the affected
row counts; update the assertions after the execute_query calls (variables
inserted, updated, deleted returned from driver.execute_query) to assert the
expected row_count values: for the INSERT expect row_count == 2, for the UPDATE
expect row_count == 1, and for the DELETE expect row_count == 2, using the
row_count field on the result objects to ensure the affected-row regression is
caught.

In `@src-tauri/tests/postgres_stateful_command_integration.rs`:
- Around line 18-24: The unique_name function can collide under parallel test
runs because it only uses millisecond timestamps; change it to produce
collision-resistant names by adding a randomness or uniqueness token (e.g.,
append a UUID, nanosecond timestamp, or thread-safe atomic counter) to the
existing prefix/timestamp scheme. Update the unique_name function (the one
returning String) to combine prefix, time, and a UUID or atomic/thread id so
temp LocalDb directories, DB names, connection names, and fixture tables no
longer risk collisions when tests run concurrently.
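
The atomic-counter variant of unique_name can be sketched as follows (a simplified stand-in for the test helper; a UUID suffix would work equally well): the process-wide counter guarantees distinct names even for two calls in the same millisecond on parallel test threads.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

// Combine the millisecond timestamp with a monotonically increasing counter.
static UNIQUE_SEQ: AtomicU64 = AtomicU64::new(0);

fn unique_name(prefix: &str) -> String {
    let millis = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before UNIX_EPOCH")
        .as_millis();
    let seq = UNIQUE_SEQ.fetch_add(1, Ordering::Relaxed);
    format!("{prefix}_{millis}_{seq}")
}
```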

In `@src-tauri/tests/sqlite_integration.rs`:
- Around line 146-149: Tests are deleting caller-supplied SQLite files by
unconditionally calling std::fs::remove_file(db_path) after driver.close();
change the cleanup to only remove files the test created: either (A) detect when
sqlite_test_path() returned the external SQLITE_IT_DB_PATH (check
env::var("SQLITE_IT_DB_PATH") or compare returned path) and skip remove_file in
that case, or (B) copy the external DB into a temp file at test startup and use
that temp path for driver (so remove_file is safe). Update all occurrences
around driver.close() / remove_file(db_path) to implement one of these
approaches and preserve the existing .sqlite file when SQLITE_IT_DB_PATH is supplied.

In `@src/App.tsx`:
- Around line 1495-1505: The tab close button is tabbable but starts with
opacity-0, making it invisible to keyboard users; update the button element (the
onClick handler that calls handleCloseTab and the className string) to include
visibility classes such as group-focus-within:opacity-100 and
focus-visible:opacity-100 and add a visible focus ring (e.g.,
focus-visible:outline or ring classes) so the control becomes visible and
clearly focused for keyboard users; keep the existing hover behavior and ensure
you only change the className on that button element.

In `@src/components/business/DataGrid/TableView.tsx`:
- Around line 457-466: The current canInsert incorrectly blocks inserts for
non-ClickHouse drivers by gating on hasPrimaryKeys; change canInsert to allow
inserts for any valid tableContext when not ClickHouse: keep isClickHouseDriver
check but return true for non-ClickHouse (i.e. set canInsert = !!tableContext &&
(isClickHouseDriver ? isClickHouseMergeTreeEngine(clickhouseEngine) : true));
retain canUpdateDelete behavior (which still uses
canMutateClickHouseTable/hasPrimaryKeys) and ensure generateInsertSQL (which
does not require PKs) remains usable for keyless tables; update references to
isClickHouseDriver, isClickHouseMergeTreeEngine, clickhouseEngine,
primaryKeys/hasPrimaryKeys, and canInsert accordingly.
- Around line 851-865: The fixed 350ms sleep in refreshAfterMutation is
unreliable for ClickHouse; update the ClickHouse-path in refreshAfterMutation
(the block checking tableContext?.driver === "clickhouse") to wait for mutation
completion instead of a hard timeout by either ensuring the ALTER statements
include SETTINGS mutations_sync=1 when executing mutations or by polling
system.mutations (SELECT count() FROM system.mutations WHERE database=... AND
table=... AND is_done=0) until zero pending mutations before calling
onDataRefresh again; use the table/context identifiers available in the
surrounding code to target the correct database/table and keep the existing
runRefresh/onDataRefresh flow once mutations are confirmed complete.

In `@src/lib/i18n/locales/en.ts`:
- Line 351: Replace the flat importSuccess string with a pluralized ICU-style
entry so the toast renders correctly for singular and plural counts; update the
importSuccess locale value (the importSuccess key in locales/en.ts) to use ICU
plural syntax, e.g. a string that uses the count plural form (one vs other) to
produce "Import completed (1 statement)" vs "Import completed (2 statements)".
Ensure the key stays named importSuccess so existing code that reads
importSuccess/count continues to work.

---

Outside diff comments:
In `@src-tauri/src/db/drivers/mysql.rs`:
- Around line 871-908: The current logic calls sqlx::query(...).fetch_all(...)
for statements that may be DDL/no-result, which is wrong; add a
no-result-statement classifier (e.g. is_no_result_statement or extend
is_affected_rows_statement) and if it returns true route the SQL through
sqlx::query(&sql).execute(&self.pool).await (same handling as the existing
affected-rows branch) instead of the fetch_all path; keep the existing
prepared-protocol-unsupported error handling
(is_prepared_protocol_unsupported_error) and the fallback to sqlx::raw_sql only
for statements that were attempted with fetch_all, and ensure
describe_query_columns(&sql).await is only called when fetch_all succeeded and
returned zero rows without being a no-result statement (references:
is_affected_rows_statement, is_prepared_protocol_unsupported_error,
describe_query_columns, executed_with_raw_sql).

---

Nitpick comments:
In @.github/workflows/ci.yml:
- Around line 17-20: The CI step using the Setup Bun action currently sets
bun-version: latest which risks nondeterministic builds; update the "Setup Bun"
step (uses: oven-sh/setup-bun@v2) to pin bun-version to a specific released
version string (e.g., a tested bun release) instead of "latest" so builds are
reproducible and stable.

In `@scripts/test-integration.sh`:
- Around line 17-22: The cleanup block relies on unquoted ${ids} for
word-splitting when passing container IDs to docker rm -f; change it to pipe the
ids into xargs to avoid relying on shell splitting and to handle arbitrary
whitespace safely (use the existing ids assignment and replace the docker rm -f
${ids} call with a robust xargs invocation such as echo "${ids}" | xargs -r
docker rm -f), referencing the variables ids and it_container_prefix and the
docker rm -f invocation so you update that exact command.

In `@src-tauri/Cargo.toml`:
- Around line 48-49: CI jobs fail because GTK dev libs are missing for Rust
tests (pkg-config cannot resolve gdk-3.0) after adding the testcontainers
dev-dependency in Cargo.toml; update the workflow that runs the Rust test job to
install system packages before running cargo test by adding steps to apt-get
install pkg-config and libgtk-3-dev (and any other GTK dev deps) on Linux
runners (or use setup-gtk action), so the Rust tests can successfully find
gdk-3.0 during build.

In `@src-tauri/src/commands/connection.rs`:
- Around line 324-403: create_database_by_id_direct duplicates most of
create_database_by_id; extract the shared logic (validation of payload.name,
driver selection/normalization, unsupported-driver check, the match over driver
that builds SQL and calls super::execute_with_retry_from_app_state, and final
error normalization) into a private helper like
create_database_common(state_ref_getter, id, payload, if_not_exists) or
create_database_internal(id, driver, payload, if_not_exists) that returns
Result<(), String>, then have both create_database_by_id_direct and
create_database_by_id call that helper — keep the driver acquisition differences
(how you read local_db from AppState) outside the helper and pass the normalized
driver string in, and ensure you reuse existing symbols like
validate_database_name, build_*_create_database_sql,
execute_with_retry_from_app_state, and normalize_create_database_error so
behavior is unchanged.

In `@src-tauri/src/commands/mod.rs`:
- Around line 70-107: The code duplicates connection/retry logic between
ensure_connection_with_db_from_app_state and the State<'_, AppState> variant;
extract the shared logic into a single helper (e.g., a new async fn
ensure_connection_shared or ensure_connection_check) that accepts the AppState
reference, connection id, optional database name, and pool key, and performs:
obtaining/cloning local_db (state.local_db.lock().await), verifying local_db
exists, checking db.get_connection_by_id(id).await and calling
state.pool_manager.remove_by_prefix(&id.to_string()).await on failure, applying
database override to the form, and returning the pool_manager.connect result;
then have both ensure_connection_with_db_from_app_state and the State<'_,
AppState> wrapper call that helper to avoid divergent LocalDb existence checks,
pool eviction, and retry behavior.

In `@src-tauri/src/commands/storage.rs`:
- Around line 23-38: The direct helper pattern is duplicated across functions
(e.g., save_query_direct) — create a small macro to generate both the State<'_,
AppState> wrapper and the &AppState direct variant to remove boilerplate; the
macro should accept the base function name, argument list and call-through body
(for example generate save_query(state: State<'_, AppState>, ...) -> Result<...>
that calls save_query_direct(&state, ...)), and replace the existing paired
functions (like save_query and save_query_direct, and the other pairs mentioned
around lines 59-75, 87-94, 106-113) so behavior is unchanged but duplication is
removed.

In `@src-tauri/tests/common/mssql_context.rs`:
- Around line 40-83: The function mssql_form_from_test_context currently ignores
the incoming docker parameter (let _ = docker;) and instead leaks a new
Cli::default() when initializing MSSQL_SHARED; either use the provided docker if
Some or document why a leaked Cli::default() is required. Update
mssql_form_from_test_context to check the docker argument and use it for
creating the runnable/container when present, falling back to
Box::leak(Box::new(Cli::default())) only when None, or add a short clarifying
comment above the let _ = docker; line explaining why the passed-in docker is
intentionally ignored and why creating/leaking a Cli::default() is necessary;
reference mssql_form_from_test_context, docker, MSSQL_SHARED and the
Box::leak(Cli::default()) use so reviewers can find the change.

In `@src-tauri/tests/common/mysql_context.rs`:
- Around line 47-58: Collapse the duplicate environment reads by extracting a
single helper (e.g., read_mysql_env_vars or build_mysql_connection_from_env)
that reads each MYSQL_* once and returns either a partial struct or a
ConnectionForm; then change mysql_form_from_local_env and
apply_mysql_env_overrides to use that helper instead of re-reading the env vars.
Specifically, move the shared env_or/env_i64 calls for MYSQL_HOST, MYSQL_PORT,
MYSQL_USER, MYSQL_PASSWORD, MYSQL_DB into the new helper and have
mysql_form_from_local_env simply initialize the ConnectionForm from the helper
(and have apply_mysql_env_overrides accept the already-read values or be
removed), and apply the same refactor for the sibling context block at 61-79 to
ensure a single env-read path.

In `@src-tauri/tests/common/shared.rs`:
- Around line 17-29: wait_for_port currently calls resolve_socket_addr which
panics on DNS failures; change wait_for_port to handle resolution errors
gracefully by attempting to resolve inside the retry loop (call
resolve_socket_addr and handle its error) and only attempt
TcpStream::connect_timeout when resolution succeeds, sleeping and retrying
resolution until the overall timeout elapses; preserve the final panic message
"timed out waiting for {}:{} to accept connections" so DNS resolution failures
become part of the timeout instead of immediate panics. Reference: wait_for_port
and resolve_socket_addr.

In `@src-tauri/tests/mysql_command_integration.rs`:
- Around line 12-18: Extract the duplicate unique_table_name function into a
shared test utilities module (keep the function name unique_table_name and
signature fn unique_table_name(prefix: &str) -> String) and have both
mysql_command_integration.rs and the Postgres test import and call that shared
helper; implement the same millisecond-based logic (or optionally add extra
entropy like a short random suffix) inside the shared helper so both test files
use a single source of truth and avoid timestamp-collision inconsistencies.

In `@src-tauri/tests/mysql_stateful_command_integration.rs`:
- Around line 19-25: unique_name currently builds identifiers from millisecond
timestamps and can collide under parallel/high-throughput tests; update
unique_name to append a small random component or a thread-safe monotonic
counter to the timestamp (e.g., use the rand crate to add a short hex/random u32
or use a static AtomicU64 incremented per call) so names are effectively unique
even within the same millisecond; modify the unique_name function to combine
SystemTime::now().as_millis() with the chosen random/counter value and return
that string.
- Around line 67-75: The test helper drop_database_if_exists currently
interpolates db_name directly into SQL, risking SQL injection and inconsistent
behavior; change it to obtain a safely quoted identifier from the driver (e.g.,
call the driver's identifier-quoting/escaping method after MysqlDriver::connect)
and use that quoted value when building the DROP DATABASE statement passed to
execute_query so the SQL uses a properly escaped identifier instead of raw
interpolation.

In `@src-tauri/tests/postgres_command_integration.rs`:
- Around line 11-17: unique_table_name currently builds names using millisecond
timestamps which can collide in parallel runs; modify unique_table_name to
append a high-entropy suffix (for example a UUID v4 or a small random hex/string
from the rand crate or include as_nanos()) to guarantee uniqueness across
concurrent tests—update the function (and add necessary imports) so it returns
format!("{}_{}_{}", prefix, millis, <random_or_uuid>) using uuid::Uuid::new_v4()
or rand::thread_rng() to generate the suffix.

In `@src/components/business/Metadata/TableMetadataView.tsx`:
- Around line 210-212: In TableMetadataView, make the label JSX formatting
consistent: revert the multi-line JSX blocks for the labels "Partition Key", the
similar label at 220-222, and the one at 240-242 back to single-line JSX to
match the existing single-line labels like "Engine", "Primary Key Expr", and
"TTL"; update the corresponding div elements with className "text-xs
text-muted-foreground" in the TableMetadataView component so each label is
written on one line, preserving exact text and spacing.

In `@src/components/business/Sidebar/ConnectionList.tsx`:
- Around line 2401-2558: The database/schema/table rendering logic is duplicated
across the showSavedQueriesInTree split; extract a shared renderer to avoid
drift by creating a reusable function (e.g., renderDatabaseTree or
renderDatabaseGroup) that accepts a base level and the connection/database
context and reuses existing helpers like getTableNodeKey, renderTableNode (move
this inner function out), tableNodeRefs, toggleTable, handleTableClick,
handleTableExport and handleCreateQueryFromContext; replace both duplicated
blocks with calls to this new renderer so selection, context menu, and behaviors
remain consistent across both code paths.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 45d3fafd-a944-4f54-b8a4-35e1f04c0a60

📥 Commits

Reviewing files that changed from the base of the PR and between 65a68fd and 22af678.

⛔ Files ignored due to path filters (1)
  • src-tauri/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (76)
  • .github/workflows/ci.yml
  • .gitignore
  • CHANGELOG.md
  • README.md
  • README_CN.md
  • docs/zh/Development/DEVELOPMENT.md
  • package.json
  • scripts/db-onboard.sh
  • scripts/test-integration.sh
  • src-tauri/Cargo.toml
  • src-tauri/src/ai/prompt.rs
  • src-tauri/src/commands/ai.rs
  • src-tauri/src/commands/connection.rs
  • src-tauri/src/commands/metadata.rs
  • src-tauri/src/commands/mod.rs
  • src-tauri/src/commands/query.rs
  • src-tauri/src/commands/storage.rs
  • src-tauri/src/commands/transfer.rs
  • src-tauri/src/connection_input/mod.rs
  • src-tauri/src/db/drivers/clickhouse.rs
  • src-tauri/src/db/drivers/duckdb.rs
  • src-tauri/src/db/drivers/mod.rs
  • src-tauri/src/db/drivers/mssql.rs
  • src-tauri/src/db/drivers/mysql.rs
  • src-tauri/src/db/drivers/postgres.rs
  • src-tauri/src/db/drivers/sqlite.rs
  • src-tauri/src/db/local.rs
  • src-tauri/src/lib.rs
  • src-tauri/src/models/mod.rs
  • src-tauri/src/ssh.rs
  • src-tauri/tauri.conf.json
  • src-tauri/tests/clickhouse_integration.rs
  • src-tauri/tests/common/clickhouse_context.rs
  • src-tauri/tests/common/duckdb_context.rs
  • src-tauri/tests/common/mariadb_context.rs
  • src-tauri/tests/common/mssql_context.rs
  • src-tauri/tests/common/mysql_context.rs
  • src-tauri/tests/common/postgres_context.rs
  • src-tauri/tests/common/shared.rs
  • src-tauri/tests/duckdb_integration.rs
  • src-tauri/tests/mariadb_integration.rs
  • src-tauri/tests/mssql_integration.rs
  • src-tauri/tests/mysql_command_integration.rs
  • src-tauri/tests/mysql_integration.rs
  • src-tauri/tests/mysql_stateful_command_integration.rs
  • src-tauri/tests/postgres_command_integration.rs
  • src-tauri/tests/postgres_integration.rs
  • src-tauri/tests/postgres_stateful_command_integration.rs
  • src-tauri/tests/sqlite_integration.rs
  • src/App.tsx
  • src/components/business/DataGrid/TableView.tsx
  • src/components/business/DataGrid/tableView/utils.ts
  • src/components/business/DataGrid/tableView/utils.unit.test.ts
  • src/components/business/Editor/SqlEditor.tsx
  • src/components/business/Editor/codemirrorTheme.ts
  • src/components/business/Metadata/TableMetadataView.tsx
  • src/components/business/Sidebar/ConnectionList.tsx
  • src/components/business/Sidebar/SavedQueriesList.tsx
  • src/components/business/Sidebar/Sidebar.tsx
  • src/components/business/Sidebar/connection-list/TreeNode.tsx
  • src/components/business/Sidebar/connection-list/helpers.tsx
  • src/components/business/SqlLogs/SqlExecutionLogsDialog.tsx
  • src/components/settings/LanguageSelector.tsx
  • src/components/settings/SettingsDialog.tsx
  • src/components/updater-checker.tsx
  • src/lib/connection-form/rules.ts
  • src/lib/connection-form/validate.ts
  • src/lib/connection-form/validate.unit.test.ts
  • src/lib/i18n/locales/en.ts
  • src/lib/i18n/locales/ja.ts
  • src/lib/i18n/locales/zh.ts
  • src/lib/queryExecutionState.unit.test.ts
  • src/services/api.ts
  • src/services/mocks.service.test.ts
  • src/services/mocks.ts
  • src/services/updater.ts
💤 Files with no reviewable changes (1)
  • CHANGELOG.md

Comment on lines +8 to +11
jobs:
  test:
    runs-on: ubuntu-22.04
    timeout-minutes: 40

⚠️ Potential issue | 🟡 Minor

Add explicit permissions block to limit GITHUB_TOKEN scope.

The workflow does not limit the permissions of the GITHUB_TOKEN. For security best practices, add a minimal permissions block.

🔒 Proposed fix
 jobs:
   test:
     runs-on: ubuntu-22.04
     timeout-minutes: 40
+
+permissions:
+  contents: read

Or at the workflow level:

 on:
   pull_request:
     branches:
       - main

+permissions:
+  contents: read
+
 jobs:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 8 - 11, add a minimal permissions
block to restrict the GITHUB_TOKEN scope by adding a permissions: mapping (for
example at the workflow root or under the job "test") that explicitly grants
only the least privileges needed (e.g., contents: read and actions: read or
other specific scopes your tests require) instead of the default full
permissions; update the workflow keys near the existing jobs/test configuration
(where "runs-on" and "timeout-minutes" are defined) to include this new
permissions block referencing GITHUB_TOKEN usage.

Comment on lines +1044 to +1063
SqlScanState::SingleQuoted => {
    let ch = chars[i];
    current.push(ch);
    if ch == '\\' {
        if let Some(next) = chars.get(i + 1) {
            current.push(*next);
            i += 2;
            continue;
        }
    }
    if ch == '\'' {
        if chars.get(i + 1) == Some(&'\'') {
            current.push('\'');
            i += 2;
            continue;
        }
        state = SqlScanState::Normal;
    }
    i += 1;
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if there's any documentation about escape handling or dialect-specific parsing
rg -n -C2 "backslash|escape" src-tauri/src/commands/transfer.rs

Repository: codeErrorSleep/dbpaw

Length of output: 1165


🏁 Script executed:

# View the context around the target lines
sed -n '1000,1100p' src-tauri/src/commands/transfer.rs | head -100

Repository: codeErrorSleep/dbpaw

Length of output: 3535


🏁 Script executed:

# Look for SqlScanState definition and how this parser is used
rg -n "SqlScanState|fn.*scan|struct.*Scan" src-tauri/src/commands/transfer.rs | head -20

Repository: codeErrorSleep/dbpaw

Length of output: 1175


🏁 Script executed:

# Check if there's any SQL dialect detection or configuration
rg -n -i "dialect|driver|mysql|postgres|sqlite" src-tauri/src/commands/transfer.rs | head -30

Repository: codeErrorSleep/dbpaw

Length of output: 1745


🏁 Script executed:

# Find the SqlScanState enum definition
rg -n "enum SqlScanState" -A 15 src-tauri/src/commands/transfer.rs

Repository: codeErrorSleep/dbpaw

Length of output: 603


🏁 Script executed:

# Find the function containing this parsing logic
rg -n "fn.*split.*sql|fn split_statements" -B 2 -A 30 src-tauri/src/commands/transfer.rs | head -50

Repository: codeErrorSleep/dbpaw

Length of output: 46


🏁 Script executed:

# Check what drivers are supported and how they're determined
rg -n "normalize_driver_name" -A 10 src-tauri/src/commands/transfer.rs

Repository: codeErrorSleep/dbpaw

Length of output: 2204


🏁 Script executed:

# View the full parse_sql_statements function to see if backslash handling is conditional
sed -n '977,1090p' src-tauri/src/commands/transfer.rs

Repository: codeErrorSleep/dbpaw

Length of output: 4100


🏁 Script executed:

# Check where parse_sql_statements is called and what drivers are passed
rg -n "parse_sql_statements" src-tauri/src/commands/transfer.rs -B 2 -A 2

Repository: codeErrorSleep/dbpaw

Length of output: 1445


🏁 Script executed:

# Search for test cases involving backslash in strings or postgres quote handling
rg -n "parse_sql_statements|backslash|\\\\'" src-tauri/src/commands/transfer.rs | grep -i "test\|backslash\|'\\\\"

Repository: codeErrorSleep/dbpaw

Length of output: 571


🏁 Script executed:

# Check the test section for any postgres-specific quote tests
sed -n '1560,1650p' src-tauri/src/commands/transfer.rs

Repository: codeErrorSleep/dbpaw

Length of output: 3494


Backslash escape handling should be MySQL-specific.

The single-quoted string parser unconditionally applies backslash escape handling (lines 1047-1053), but this is MySQL-specific behavior. PostgreSQL and SQLite treat backslash as a literal character in single-quoted strings by default and will cause incorrect parsing when backslashes appear in data.

The backslash escape logic should be conditional on the driver, similar to the existing mysql_style_hash_comment check on line 978:

INSERT INTO t VALUES ('path\to\file');

In PostgreSQL/SQLite, the backslash should be preserved as-is; in MySQL, it should skip the next character. The doubled single quote escape (lines 1054-1059) is correctly handled unconditionally as it's standard SQL.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/commands/transfer.rs` around lines 1044 - 1063, The
SingleQuoted state in the SQL scanner (SqlScanState::SingleQuoted) currently
always treats backslash as an escape (using chars, current, i), which is
MySQL-specific; change that logic to only perform the backslash escape when the
driver flag for MySQL-style escaping is enabled (re-use the existing
mysql_style_hash_comment or an equivalent mysql_style_backslash_escape boolean
available in the scanner/context), otherwise leave backslashes as literal
characters and just advance by one. Keep the unconditional handling of doubled
single-quotes (the existing check comparing chars.get(i + 1) to Some(&'\''))
intact.
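The driver-conditional escape handling described above can be sketched in isolation. A minimal sketch, assuming a `backslash_escapes` flag derived from driver detection (true for MySQL, false for PostgreSQL/SQLite); the function name and shape are illustrative, not the actual scanner in transfer.rs:

```rust
// Scan a single-quoted string body starting just after the opening quote.
// Returns the index one past the closing quote (or the end of input).
// `backslash_escapes` is assumed to come from driver detection.
fn scan_single_quoted(chars: &[char], start: usize, backslash_escapes: bool) -> usize {
    let mut i = start;
    while i < chars.len() {
        let ch = chars[i];
        if backslash_escapes && ch == '\\' && i + 1 < chars.len() {
            i += 2; // MySQL-style: backslash escapes the next character
            continue;
        }
        if ch == '\'' {
            if chars.get(i + 1) == Some(&'\'') {
                i += 2; // standard SQL: '' is an escaped single quote
                continue;
            }
            return i + 1; // closing quote found
        }
        i += 1;
    }
    i
}
```

With `backslash_escapes = false`, a backslash inside the string is just data, so the doubled-quote escape remains the only escape in every dialect.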

Comment on lines +49 to +54
form.password = trim_preserve_empty(form.password);
form.ssl_ca_cert = trim_preserve_empty(form.ssl_ca_cert);
form.file_path = trim_to_option(form.file_path);
form.ssh_host = trim_to_option(form.ssh_host);
form.ssh_username = trim_to_option(form.ssh_username);
form.ssh_password = trim_to_option(form.ssh_password);
form.ssh_password = trim_preserve_empty(form.ssh_password);

⚠️ Potential issue | 🟠 Major

Empty SSH password now bypasses SSH auth validation.

trim_preserve_empty converts whitespace-only ssh_password to Some(""), and the later check only rejects None. This allows an effectively empty password to pass validation when no SSH key path is provided.

Suggested fix
-    form.ssh_password = trim_preserve_empty(form.ssh_password);
+    form.ssh_password = trim_to_option(form.ssh_password);

Or keep trim_preserve_empty and tighten the auth check:

-        if form.ssh_password.is_none() && form.ssh_key_path.is_none() {
+        let ssh_password_missing = form
+            .ssh_password
+            .as_deref()
+            .map(|v| v.is_empty())
+            .unwrap_or(true);
+        if ssh_password_missing && form.ssh_key_path.is_none() {
             return Err("[VALIDATION_ERROR] ssh password or ssh key path is required".to_string());
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/connection_input/mod.rs` around lines 49 - 54, The SSH password
handling currently uses trim_preserve_empty for form.ssh_password so whitespace
becomes Some(""), which bypasses the later auth check that only rejects None;
either switch to trim_to_option(form.ssh_password) so empty/whitespace passwords
become None, or keep trim_preserve_empty but update the SSH auth validation
(where form.ssh_password is checked alongside form.ssh_key_path) to treat
Some("") as missing (e.g., require form.ssh_password.as_ref().map(|s|
!s.is_empty()).unwrap_or(false) or normalize with .filter(|s| !s.is_empty())
before validation) so an effectively empty password does not bypass
authentication checks.
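The tightened check from the second option can be factored into a small helper. A sketch, assuming the form fields are `Option<String>`; the helper name is illustrative:

```rust
// Treat Some("") (and whitespace-only values) the same as None when
// deciding whether SSH auth material is present. Field names assumed.
fn ssh_auth_missing(ssh_password: &Option<String>, ssh_key_path: &Option<String>) -> bool {
    let password_missing = ssh_password
        .as_deref()
        .map(|v| v.trim().is_empty())
        .unwrap_or(true);
    password_missing && ssh_key_path.is_none()
}
```

Validation would then return the `[VALIDATION_ERROR]` message whenever `ssh_auth_missing` is true, regardless of whether the empty password arrived as `None` or `Some("")`.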

Comment on lines +23 to +62
pub(crate) fn conn_failed_error(e: &dyn std::fmt::Display) -> String {
let raw = e.to_string();
let lower = raw.to_ascii_lowercase();

let hint = if lower.contains("handshake")
|| lower.contains("fatal alert")
|| lower.contains("tls")
|| lower.contains("ssl")
|| lower.contains("certificate")
{
"hint: TLS/SSL handshake failed — the server may use a TLS version or cipher suite \
incompatible with the client (TLS 1.2+ required); try disabling SSL in the connection settings"
} else if lower.contains("access denied")
|| lower.contains("authentication")
|| lower.contains("password")
|| lower.contains("login failed")
|| lower.contains("invalid password")
|| lower.contains("1045")
{
"hint: authentication failed — verify the username/password are correct; \
if they contain special characters they must be URL-encoded"
} else if lower.contains("connection refused")
|| lower.contains("timed out")
|| lower.contains("timeout")
|| lower.contains("broken pipe")
|| lower.contains("network unreachable")
{
"hint: could not reach the server — check host, port, firewall rules, and SSH tunnel settings"
} else if lower.contains("name resolution")
|| lower.contains("no such host")
|| lower.contains("failed to lookup")
|| lower.contains("dns")
{
"hint: hostname could not be resolved — check that the host address is correct"
} else {
"hint: check host, port, credentials, and SSL settings"
};

format!("[CONN_FAILED] {raw} ({hint})")
}

⚠️ Potential issue | 🟠 Major

Avoid returning raw connection error text directly to users.

This currently includes the full upstream error string in [CONN_FAILED] ..., which can leak sensitive connection details (including credentials in some driver error paths). Prefer a sanitized message for user-facing errors and keep raw detail for internal logs only.

Suggested fix
-    format!("[CONN_FAILED] {raw} ({hint})")
+    format!("[CONN_FAILED] {hint}")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/db/drivers/mod.rs` around lines 23 - 62, The conn_failed_error
function currently returns the full upstream error string to users; change it to
return a sanitized, generic user-facing message (e.g., "[CONN_FAILED] Connection
failed — check host, port, credentials, and SSL settings (see logs for details)"
or keep the existing contextual hint text but omit the raw error contents),
while logging the original raw error for internal diagnostics using your app
logger (e.g., log::error or tracing::error) before formatting the returned
string; keep the existing hint-selection logic in conn_failed_error but ensure
the formatted return does not include the raw variable or any sensitive
fragments.
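The log-then-sanitize pattern could look like the following sketch, using eprintln! as a stand-in for the app logger (the real code would use its existing logging facility); the hint selection is abbreviated:

```rust
// Return only the hint to the caller; keep the raw driver error in logs.
fn conn_failed_user_message(raw: &str) -> String {
    eprintln!("connection failed: {raw}"); // stand-in for log::error!/tracing::error!
    let lower = raw.to_ascii_lowercase();
    let hint = if lower.contains("tls") || lower.contains("ssl") || lower.contains("handshake") {
        "TLS/SSL handshake failed; try disabling SSL in the connection settings"
    } else if lower.contains("access denied") || lower.contains("authentication") {
        "authentication failed; verify the username and password"
    } else {
        "check host, port, credentials, and SSL settings"
    };
    format!("[CONN_FAILED] hint: {hint}")
}
```

The user-facing string never interpolates `raw`, so credential fragments in driver errors stay out of the UI.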

Comment on lines +212 to +228
async fn connect_with_config(config: Config) -> Result<Client<Compat<TcpStream>>, String> {
let connect_future = async {
let tcp = TcpStream::connect(config.get_addr())
.await
.map_err(|e| format!("{}", e))?;
tcp.set_nodelay(true).map_err(|e| format!("{}", e))?;
Ok::<TcpStream, String>(tcp)
};

let tcp = tokio::time::timeout(std::time::Duration::from_secs(10), connect_future)
.await
.map_err(|_| "Connection timed out".to_string())?
.map_err(|e| format!("{}", e))?;

Client::connect(config, tcp.compat_write())
.await
.map_err(|e| format!("{}", e))

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -name "mssql.rs" -type f

Repository: codeErrorSleep/dbpaw

Length of output: 99


🏁 Script executed:

cat -n src-tauri/src/db/drivers/mssql.rs | sed -n '200,250p'

Repository: codeErrorSleep/dbpaw

Length of output: 2048


🌐 Web query:

tiberius Client::connect TDS TLS login handshake documentation

💡 Result:

Tiberius is a pure-Rust asynchronous TDS (Tabular Data Stream) client library for Microsoft SQL Server. The Client::connect method establishes a connection to SQL Server over TDS, handling TLS encryption and the login handshake as follows:

## Usage Example (from official docs)

use tiberius::{Client, Config, AuthMethod};
use tokio::net::TcpStream;
use tokio_util::compat::TokioAsyncWriteCompatExt;

let mut config = Config::new();
config.host("localhost");
config.port(1433);
config.authentication(AuthMethod::sql_server("SA", "password"));

let tcp = TcpStream::connect(config.get_addr()).await?;
tcp.set_nodelay(true)?;
let mut client = Client::connect(config, tcp.compat_write()).await?;

## Connection Process in Client::connect(config, tcp_stream)

1. TCP Connection: the caller establishes a plain TCP stream (e.g., via tokio::net::TcpStream::connect) to SQL Server (default port 1433, or a dynamic port via SQL Browser for named instances).
2. PRELOGIN (TDS Packet Type 0x12): Tiberius sends a PRELOGIN message to negotiate capabilities, including the encryption level (ENCRYPT option), per the TDS protocol spec.
3. TLS Handshake (if encryption negotiated):
   - Based on the Config::encryption setting (defaults to Required when TLS features are enabled).
   - Enum EncryptionLevel:

   | Level | Value | Behavior |
   |-------|-------|----------|
   | Off | 0 | Encrypt only login handshake |
   | On | 1 | Encrypt all if server supports |
   | NotSupported | 2 | No encryption |
   | Required | 3 | Encrypt all, fail otherwise (default) |

   - TLS is layered over the TCP stream using native-tls (default, OS TLS: OpenSSL/Schannel/Secure Transport) or rustls (via feature flag).
   - Config::trust_cert disables server certificate validation (insecure for production); Config::trust_cert_ca(path) adds a custom CA.
   - The handshake occurs via TDS packets encapsulating TLS messages until the TLS layer signals completion.
4. LOGIN7 Message: after TLS (if any), sends a LOGIN7 TDS packet with authentication details (e.g., SQL Server auth, integrated Windows auth, AAD token). The server responds with a login acknowledgment.
5. Post-Login: handles redirects (Error::Routing) and enters the ready state for queries.

This implements the TDS 7.2+ protocol, including [MS-TDS] states like "Sent TLS/SSL Negotiation Packet State". Source code in the prisma/tiberius repo (src/client/connection.rs) orchestrates prelogin, TLS stream creation, and login. For full details, see:
- Official Docs: https://docs.rs/tiberius/latest/tiberius/struct.Client.html
- GitHub: https://github.com/prisma/tiberius
- MS-TDS Spec: https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-tds/



Wrap the full SQL Server connection sequence in the timeout.

The timeout on lines 221–224 only protects TcpStream::connect. After the TCP connection succeeds, Client::connect on lines 226–228 performs the PRELOGIN negotiation, TLS handshake, and LOGIN7 authentication—all of which involve network round-trips and can stall indefinitely if the server is unresponsive or misconfigured. A server that accepts TCP but stalls in TLS negotiation or authentication will hang this function without a timeout. Wrap both the TCP connection and Client::connect in a single timeout to protect the entire handshake sequence.

⏱️ Suggested fix
-        let connect_future = async {
-            let tcp = TcpStream::connect(config.get_addr())
-                .await
-                .map_err(|e| format!("{}", e))?;
-            tcp.set_nodelay(true).map_err(|e| format!("{}", e))?;
-            Ok::<TcpStream, String>(tcp)
-        };
-
-        let tcp = tokio::time::timeout(std::time::Duration::from_secs(10), connect_future)
-            .await
-            .map_err(|_| "Connection timed out".to_string())?
-            .map_err(|e| format!("{}", e))?;
-
-        Client::connect(config, tcp.compat_write())
-            .await
-            .map_err(|e| format!("{}", e))
+        tokio::time::timeout(std::time::Duration::from_secs(10), async move {
+            let tcp = TcpStream::connect(config.get_addr())
+                .await
+                .map_err(|e| e.to_string())?;
+            tcp.set_nodelay(true).map_err(|e| e.to_string())?;
+            Client::connect(config, tcp.compat_write())
+                .await
+                .map_err(|e| e.to_string())
+        })
+        .await
+        .map_err(|_| "Connection timed out".to_string())?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/db/drivers/mssql.rs` around lines 212 - 228, The
connect_with_config function currently only timeouts the TcpStream::connect
phase; move the tokio::time::timeout to wrap the entire async sequence that
includes both establishing the TCP socket and calling Client::connect so the
PRELOGIN/TLS/LOGIN7 handshake is also bounded; implement a timeout around an
async block that first creates/configures the TcpStream (TcpStream::connect and
set_nodelay) and then awaits Client::connect(config, tcp.compat_write()),
propagating errors with map_err(format!) and returning the same
Result<Client<Compat<TcpStream>>, String> on timeout or underlying errors.
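One way to keep sequential phases under a single bound, independent of the async runtime, is a shared deadline object. A minimal std-only sketch of the idea (the actual fix above uses tokio::time::timeout around the whole async block, which is simpler in this codebase):

```rust
use std::time::{Duration, Instant};

// A single deadline shared across sequential connection phases
// (TCP connect, PRELOGIN/TLS, LOGIN7), so the total time is bounded
// rather than each phase getting its own fresh timeout.
struct Deadline {
    at: Instant,
}

impl Deadline {
    fn new(total: Duration) -> Self {
        Self { at: Instant::now() + total }
    }

    // Time left before the deadline, or None if it has already passed.
    fn remaining(&self) -> Option<Duration> {
        self.at.checked_duration_since(Instant::now())
    }
}
```

Each phase would call `remaining()` and fail fast with "Connection timed out" when it returns None, instead of resetting a fresh 10-second clock per phase.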

Comment on lines +146 to +149
driver.close().await;

let _ = std::fs::remove_file(db_path);
}

⚠️ Potential issue | 🟠 Major

Don't delete caller-supplied SQLite files during test cleanup.

sqlite_test_path() returns SQLITE_IT_DB_PATH verbatim when that env var is set. With the new unconditional remove_file(db_path) cleanup, running the suite against an existing SQLite database will delete that file after the test finishes. Only remove paths the test created itself, or copy external DBs into a temp file first. The same cleanup pattern repeats in the added tests below.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/tests/sqlite_integration.rs` around lines 146 - 149, Tests are
deleting caller-supplied SQLite files by unconditionally calling
std::fs::remove_file(db_path) after driver.close(); change the cleanup to only
remove files the test created: either (A) detect when sqlite_test_path()
returned the external SQLITE_IT_DB_PATH (check env::var("SQLITE_IT_DB_PATH") or
compare returned path) and skip remove_file in that case, or (B) copy the
external DB into a temp file at test startup and use that temp path for driver
(so remove_file is safe). Update all occurrences around driver.close() /
remove_file(db_path) to implement one of these approaches and preserve the
existing .sqlite file when SQLITE_IT_DB_PATH is supplied.
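The ownership check from option (A) can be isolated into a pure helper. A sketch, assuming the external path is read via std::env::var("SQLITE_IT_DB_PATH").ok() at the call site; the helper name is illustrative:

```rust
// Decide whether test cleanup may delete the database file.
// `external_db` is the caller-supplied SQLITE_IT_DB_PATH, if any.
fn should_remove_test_db(db_path: &str, external_db: Option<&str>) -> bool {
    match external_db {
        // Never delete a file the caller pointed us at.
        Some(external) => external != db_path,
        // No env override: the test created this file itself.
        None => true,
    }
}
```

The cleanup after driver.close() would then call remove_file only when this returns true.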

Comment on lines +1495 to +1505
<button
type="button"
aria-label={t("app.tab.closeAria", {
title,
})}
className="absolute right-1 top-1/2 -translate-y-1/2 opacity-0 group-hover:opacity-100 p-1 hover:bg-accent rounded-sm cursor-pointer transition-opacity"
onClick={(e) => {
e.stopPropagation();
handleCloseTab(tab.id);
}}
>

⚠️ Potential issue | 🟡 Minor

Make the tab close button visible on keyboard focus.

This button is still tabbable while opacity-0, so keyboard users can land on an invisible control. Add group-focus-within:opacity-100 / focus-visible:opacity-100 and a visible focus ring.

♿ Suggested tweak
-                                          className="absolute right-1 top-1/2 -translate-y-1/2 opacity-0 group-hover:opacity-100 p-1 hover:bg-accent rounded-sm cursor-pointer transition-opacity"
+                                          className="absolute right-1 top-1/2 -translate-y-1/2 opacity-0 group-hover:opacity-100 group-focus-within:opacity-100 focus-visible:opacity-100 p-1 hover:bg-accent rounded-sm cursor-pointer transition-opacity focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/App.tsx` around lines 1495 - 1505, The tab close button is tabbable but
starts with opacity-0, making it invisible to keyboard users; update the button
element (the onClick handler that calls handleCloseTab and the className string)
to include visibility classes such as group-focus-within:opacity-100 and
focus-visible:opacity-100 and add a visible focus ring (e.g.,
focus-visible:outline or ring classes) so the control becomes visible and
clearly focused for keyboard users; keep the existing hover behavior and ensure
you only change the className on that button element.

Comment on lines +457 to +466
const isClickHouseDriver = tableContext?.driver === "clickhouse";
const hasPrimaryKeys = primaryKeys.length > 0;
const canInsert = !!tableContext &&
(isClickHouseDriver
? isClickHouseMergeTreeEngine(clickhouseEngine)
: hasPrimaryKeys);
const canUpdateDelete = !!tableContext &&
(isClickHouseDriver
? canMutateClickHouseTable(clickhouseEngine, primaryKeys)
: hasPrimaryKeys);

⚠️ Potential issue | 🟠 Major

Don’t require primary keys for inserts on non-ClickHouse tables.

canInsert now depends on hasPrimaryKeys for every non-ClickHouse driver. That hides the Add action and marks keyless tables as read-only even though generateInsertSQL() does not rely on a primary key at all, so this regresses inserts for valid tables without primary keys.

➕ Suggested fix
-  const canInsert = !!tableContext &&
-    (isClickHouseDriver
-      ? isClickHouseMergeTreeEngine(clickhouseEngine)
-      : hasPrimaryKeys);
+  const canInsert =
+    !!tableContext &&
+    (isClickHouseDriver
+      ? isClickHouseMergeTreeEngine(clickhouseEngine)
+      : true);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/business/DataGrid/TableView.tsx` around lines 457 - 466, The
current canInsert incorrectly blocks inserts for non-ClickHouse drivers by
gating on hasPrimaryKeys; change canInsert to allow inserts for any valid
tableContext when not ClickHouse: keep isClickHouseDriver check but return true
for non-ClickHouse (i.e. set canInsert = !!tableContext && (isClickHouseDriver ?
isClickHouseMergeTreeEngine(clickhouseEngine) : true)); retain canUpdateDelete
behavior (which still uses canMutateClickHouseTable/hasPrimaryKeys) and ensure
generateInsertSQL (which does not require PKs) remains usable for keyless
tables; update references to isClickHouseDriver, isClickHouseMergeTreeEngine,
clickhouseEngine, primaryKeys/hasPrimaryKeys, and canInsert accordingly.

Comment on lines +851 to +865
const refreshAfterMutation = useCallback(async () => {
if (!onDataRefresh) return;
const runRefresh = async () => {
const ret = onDataRefresh();
if (ret && typeof (ret as Promise<unknown>).then === "function") {
await ret;
}
};

await runRefresh();
if (tableContext?.driver === "clickhouse") {
await new Promise((resolve) => setTimeout(resolve, 350));
await runRefresh();
}
}, [onDataRefresh, tableContext?.driver]);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

ClickHouse UPDATE DELETE mutations asynchronous mutations_sync

💡 Result:

In ClickHouse, UPDATE and DELETE operations on MergeTree-family tables are implemented as asynchronous "mutations" using ALTER TABLE statements. These mutations rewrite entire affected data parts in the background, making them heavyweight operations not intended for frequent use.

Key syntax:
- UPDATE: ALTER TABLE [db.]table UPDATE column1 = expr1 [, column2 = expr2, ...] WHERE filter_expr
- DELETE: ALTER TABLE [db.]table DELETE WHERE filter_expr

The filter_expr must return UInt8 (1 for rows to modify/delete). Mutations are asynchronous by default: the ALTER query returns immediately after queuing the mutation, while the actual data rewrite happens in background threads. The mutations_sync setting controls synchronicity (can be set at session/query level or globally):
- 0 (default): asynchronous; the query returns immediately.
- 1: wait for the mutation to complete on the local node.
- 2: wait for the mutation to complete on all replicas (for replicated tables).

Examples:

-- Asynchronous (default)
ALTER TABLE events UPDATE status = 'processed' WHERE event_id = 1;

-- Synchronous on the local node
SET mutations_sync = 1;
ALTER TABLE events DELETE WHERE user_id = 12345;

-- Wait on all replicas
ALTER TABLE events UPDATE col = expr WHERE cond SETTINGS mutations_sync = 2;

Monitor progress with:

SELECT * FROM system.mutations WHERE table = 'your_table' AND is_done = 0;

Best practices:
- Avoid frequent mutations; prefer ReplacingMergeTree/CollapsingMergeTree, TTL, or lightweight DELETE/UPDATE (DELETE FROM, ALTER ... UPDATE with on-the-fly application).
- Batch multiple changes into one ALTER.
- Mutations apply only to data inserted before the mutation; newer data is unaffected.
- Primary/partition key columns cannot be updated.

Lightweight alternatives exist for faster operations without full part rewrites.



🏁 Script executed:

cd src/components/business/DataGrid && wc -l TableView.tsx

Repository: codeErrorSleep/dbpaw

Length of output: 82


🏁 Script executed:

# Search for ClickHouse-related code in the codebase
rg -i "clickhouse" --type ts --type tsx -B 2 -A 2

Repository: codeErrorSleep/dbpaw

Length of output: 91


🏁 Script executed:

# Search for mutation-related code in the TableView file
rg -i "mutation" src/components/business/DataGrid/TableView.tsx -B 2 -A 2

Repository: codeErrorSleep/dbpaw

Length of output: 1297


🏁 Script executed:

# Read the specific section around lines 851-865
sed -n '845,870p' src/components/business/DataGrid/TableView.tsx

Repository: codeErrorSleep/dbpaw

Length of output: 745


🏁 Script executed:

# Search for mutations_sync usage in the codebase
rg "mutations_sync"

Repository: codeErrorSleep/dbpaw

Length of output: 46


🏁 Script executed:

# Search for system.mutations usage
rg "system\.mutations"

Repository: codeErrorSleep/dbpaw

Length of output: 46


🏁 Script executed:

# Search for other ClickHouse-specific handling patterns
rg -i "clickhouse" src/components/business/DataGrid/TableView.tsx -B 5 -A 5

Repository: codeErrorSleep/dbpaw

Length of output: 3766


🏁 Script executed:

# Search for git history or comments explaining the 350ms value
rg "350" src/components/business/DataGrid/TableView.tsx -B 5 -A 5

Repository: codeErrorSleep/dbpaw

Length of output: 350


🏁 Script executed:

# Check if there's any other retry logic or exponential backoff patterns in the file
rg -i "retry|backoff|timeout" src/components/business/DataGrid/TableView.tsx

Repository: codeErrorSleep/dbpaw

Length of output: 297


🏁 Script executed:

# Search for onDataRefresh implementation to understand what it does
rg "onDataRefresh" src/components/business/DataGrid/ -B 2 -A 5

Repository: codeErrorSleep/dbpaw

Length of output: 3989


🏁 Script executed:

# Check if there's any configuration or settings being passed to mutations
rg -i "alter table|insert into" src/components/business/DataGrid/ -A 3 -B 3

Repository: codeErrorSleep/dbpaw

Length of output: 3350


🏁 Script executed:

# Look for the actual mutation execution/SQL building code
rg "buildDeleteStatement|buildUpdateStatement|buildInsertStatement" src/components/business/DataGrid/ -A 10

Repository: codeErrorSleep/dbpaw

Length of output: 8480


🏁 Script executed:

# Search for SQL building utilities and check if mutations_sync is used
head -100 src/components/business/DataGrid/tableView/utils.ts | grep -i "mutation\|alter\|insert\|delete"

Repository: codeErrorSleep/dbpaw

Length of output: 202


🏁 Script executed:

# Check the full file size and then look at mutation building logic
wc -l src/components/business/DataGrid/tableView/utils.ts && rg "buildDeleteStatement|buildUpdateStatement" src/components/business/DataGrid/tableView/utils.ts -A 30 | head -100

Repository: codeErrorSleep/dbpaw

Length of output: 718


The fixed 350ms sleep is insufficient for ClickHouse mutations and risks displaying stale data.

ClickHouse mutations (UPDATE/DELETE via ALTER TABLE) are asynchronous by default. The current implementation sleeps for 350ms and then refreshes a second time, but this doesn't guarantee mutation completion: larger tables or slower nodes can exceed this arbitrary timeout. ClickHouse documents mutations_sync (set to 1 or 2 to wait for completion) or polling system.mutations as the reliable approach. Consider either:

  • Adding SETTINGS mutations_sync=1 to the generated ALTER statements, or
  • Polling system.mutations WHERE is_done = 0 after executing mutations before refreshing
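As a minimal sketch of the two options (the helper names withMutationsSync and pendingMutationsQuery are illustrative, not from the PR; they only build SQL strings and leave execution to the existing driver layer):

```typescript
// Option 1: make the mutation synchronous. Appending SETTINGS mutations_sync=1
// makes ClickHouse block until the ALTER TABLE mutation has been applied
// (use 2 to wait for all replicas).
function withMutationsSync(alterSql: string): string {
  // Drop a trailing semicolon so the SETTINGS clause stays inside the statement.
  const base = alterSql.trim().replace(/;$/, "");
  return `${base} SETTINGS mutations_sync = 1`;
}

// Option 2: poll system.mutations. A driver would re-run this query with a
// short delay until `pending` is 0, then call onDataRefresh.
function pendingMutationsQuery(database: string, table: string): string {
  const esc = (s: string) => s.replace(/'/g, "\\'");
  return (
    `SELECT count() AS pending FROM system.mutations ` +
    `WHERE database = '${esc(database)}' AND table = '${esc(table)}' AND is_done = 0`
  );
}
```

Either approach replaces the arbitrary sleep with an explicit completion signal; option 1 is simpler but serializes the UI on the mutation, while option 2 keeps execution asynchronous at the cost of a polling loop.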
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/business/DataGrid/TableView.tsx` around lines 851 - 865, The
fixed 350ms sleep in refreshAfterMutation is unreliable for ClickHouse; update
the ClickHouse-path in refreshAfterMutation (the block checking
tableContext?.driver === "clickhouse") to wait for mutation completion instead
of a hard timeout by either ensuring the ALTER statements include SETTINGS
mutations_sync=1 when executing mutations or by polling system.mutations (SELECT
count() FROM system.mutations WHERE database=... AND table=... AND is_done=0)
until zero pending mutations before calling onDataRefresh again; use the
table/context identifiers available in the surrounding code to target the
correct database/table and keep the existing runRefresh/onDataRefresh flow once
mutations are confirmed complete.

importReadOnlyDriver:
"This driver is read-only in DbPaw and does not support SQL import.",
selectImportSqlFile: "Select SQL File to Import",
importSuccess: "Import completed ({{count}} statements)",

⚠️ Potential issue | 🟡 Minor

Pluralize the new import-success toast.

{{count}} statements will read as 1 statements in the single-statement case. Please add singular/plural variants here so the new import flow renders correctly for both paths.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/i18n/locales/en.ts` at line 351, Replace the flat importSuccess
string with a pluralized ICU-style entry so the toast renders correctly for
singular and plural counts; update the importSuccess locale value (the
importSuccess key in locales/en.ts) to use ICU plural syntax, e.g. a string that
uses the count plural form (one vs other) to produce "Import completed (1
statement)" vs "Import completed (2 statements)". Ensure the key stays named
importSuccess so existing code that reads importSuccess/count continues to work.
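A hedged sketch of the fix, assuming the app resolves plurals with i18next-style `_one`/`_other` key suffixes (an ICU MessageFormat plural string is an equivalent alternative; the tiny resolver below is a stand-in for illustration, not the real i18n library):

```typescript
// Split the flat importSuccess entry into singular/plural variants.
const en = {
  importSuccess_one: "Import completed ({{count}} statement)",
  importSuccess_other: "Import completed ({{count}} statements)",
};

// Minimal stand-in for the library's plural resolution (English rules only),
// showing how callers of t("importSuccess", { count }) keep working unchanged.
function t(key: string, opts: { count: number }): string {
  const suffix = opts.count === 1 ? "_one" : "_other";
  const template = (en as Record<string, string>)[key + suffix];
  return template.replace("{{count}}", String(opts.count));
}

console.log(t("importSuccess", { count: 1 })); // Import completed (1 statement)
console.log(t("importSuccess", { count: 3 })); // Import completed (3 statements)
```

The same split would be applied to the ja and zh locale files; languages whose plural rules collapse to a single form can keep one variant.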

@codeErrorSleep codeErrorSleep merged commit e66ed20 into main Mar 30, 2026
11 of 14 checks passed