Conversation
…ndent history assembly Switch prompt construction from user-message mode to system-message mode to clearly separate system context from dialog history. Extract message-assembly logic into independent functions to improve testability and maintainability. Update tests to reflect the role changes and the new assembly helpers.
…correctly after deletion After a deletion, the selectedRows and selectedCell state were not fully synchronized, which could leave the UI in an inconsistent state. Keep them in sync by updating the ref and the state together, and clear selectedCellRef.
…ormalization of connection strings
…t port - Add a generic connection-error handling function that produces targeted hints based on the error content - Set the correct SSH tunnel default port per database driver - Disable SSL by default for MySQL connections to avoid TLS handshake failures - Update related tests to cover the new behavior
…full rollback Add a SQL file import feature supporting the MySQL and PostgreSQL drivers. The import runs inside a transaction; if any statement fails, all changes are rolled back automatically. File-size and statement-count limits guard against abuse. The frontend adds an import dialog, menu items, and multi-language support; the backend implements SQL statement parsing and error handling.
…House tables - Add ClickHouse support to the database creation function - Enable row-level updates and deletes for ClickHouse MergeTree engine tables - Add ClickHouse-specific SQL statement builders - Extend the table-editing permission check to account for ClickHouse engine features - Update the test cases to cover the new ClickHouse functionality
… error handling - Extend the SQL import feature to support MariaDB, TiDB, SQLite, DuckDB, and SQL Server - Add an explicit "not supported" status and UI hint for read-only drivers (ClickHouse) - Improve SQL Server import with support for the 'GO' batch separator - Enhance error reporting with the index and statement preview of the failed batch - Automatically detect and handle transaction-control statements (BEGIN/COMMIT) in SQL scripts - Update the frontend UI, translations, and documentation to reflect the new scope of support
- Add the testcontainers dependency to automatically start MySQL/Postgres for integration tests - Create .github/workflows/ci.yml to run unit, service, and integration tests on PRs - Refactor the test script to select the database type via the IT_DB environment variable, and add a container cleanup mechanism - Update the development docs with usage and recommended workflow for integration tests - Unify test context management, adding reliability enhancements such as connection retry and port waiting
…d SQLite Add test-container contexts for MariaDB and SQL Server and extend the SQLite integration tests. Update the test scripts and documentation so these databases can be selected via the IT_DB environment variable. For MySQL, PostgreSQL, and the new databases, add regression tests for paging, sorting, filtering, table structure, metadata (indexes, foreign keys), and Boolean/JSON type mapping.
Add a complete integration-test framework for ClickHouse and DuckDB, including: - clickhouse and duckdb options in the test script - containerized test-environment management for ClickHouse - local-file test-environment management for DuckDB - expanded test coverage for both databases (table structure, data paging, sorting, filtering, etc.) - updated development docs describing the new test options and environment variables
…DATE/DELETE statements Add an is_affected_rows_statement function to identify data-modification statements. For INSERT/UPDATE/DELETE/REPLACE statements, use the execute method to obtain rows_affected. This fixes these statements previously returning empty result sets.
Add integration tests for DuckDB, PostgreSQL, SQLite, and MSSQL covering the following key behaviors: - transaction commit and rollback - UPDATE/DELETE returning the number of affected rows - round-trip tests for large text and BLOB fields - SQL error handling - concurrent connection queries - listing and querying views - connection-failure scenarios (invalid path, wrong password, timeout) - batch insertion and batch execution flows - lock-conflict and busy-error handling - prepare, execute, and release of prepared statements These tests ensure each database driver behaves consistently and reliably in real scenarios.
…ance and refactor tests - Add bb8 connection-pool support to the MSSQL driver to improve resource utilization and performance under many concurrent connections - Refactor the MSSQL integration tests to use shared containers and connection pools, reducing test startup overhead - Fix SQL syntax errors in the DuckDB transaction test by combining multiple statements into a single query - Add MySQL command integration tests covering connection testing, query execution, and database listing - Improve error handling and the connection-timeout mechanism to make the driver more stable
Add a set of directly callable functions for the library module that do not rely on Tauri state management, to ease integration with other components. Major changes: - In `commands/mod.rs`, add the `ensure_connection_with_db_from_app_state` and `execute_with_retry_from_app_state` functions, which obtain connections directly from `AppState` and run the retry logic. - Add corresponding `_direct`-suffixed functions to the `storage`, `metadata`, `query`, `connection`, `ai`, and `transfer` modules, providing a direct-call interface that bypasses the Tauri command layer. - Refactor `LocalDb::init` in `db/local.rs` by extracting an `init_with_app_dir` method to improve testability. - Update `scripts/test-integration.sh` to run the new MySQL and PostgreSQL command integration tests. - Add `mysql_command_integration.rs` and `postgres_command_integration.rs` test files to verify the direct command functions. - Fix a test assertion in `postgres_integration.rs`: check column existence instead of the primary key, and tighten the affected-rows verification. These changes let core functions be reused outside the Tauri environment while keeping backward compatibility.
…iguration Add the db-onboard.sh helper script for database integration testing, which automatically checks scaffolding, runs syntax and compilation checks, and synchronizes the test matrix. Also update .gitignore to ignore files under the .tray/skills/ directory.
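Several commits above mention handling the SQL Server 'GO' batch separator during import. As a rough illustration of the idea (a hypothetical sketch, not the project's actual parser; the function name is an assumption), a splitter that treats a standalone `GO` line as a batch boundary might look like:

```rust
// Hypothetical sketch of standalone-line GO batch splitting; a real
// importer must also ignore GO inside string literals and comments.
fn split_go_batches(script: &str) -> Vec<String> {
    let mut batches = Vec::new();
    let mut current = String::new();
    for line in script.lines() {
        // T-SQL tools treat a line consisting only of GO (case-insensitive)
        // as a batch separator, not as part of the SQL itself.
        if line.trim().eq_ignore_ascii_case("GO") {
            if !current.trim().is_empty() {
                batches.push(current.trim().to_string());
            }
            current.clear();
        } else {
            current.push_str(line);
            current.push('\n');
        }
    }
    if !current.trim().is_empty() {
        batches.push(current.trim().to_string());
    }
    batches
}

fn main() {
    let script = "CREATE TABLE t (id INT);\nGO\nINSERT INTO t VALUES (1);\ngo\n";
    let batches = split_go_batches(script);
    assert_eq!(batches.len(), 2);
    assert!(batches[1].starts_with("INSERT"));
}
```

Per-batch splitting like this is also what makes the per-batch error reporting (index and statement preview of the failed batch) possible.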
📝 Walkthrough

This PR introduces SQL import functionality with statement-level execution tracking, refactors backend commands with "direct" APIs bypassing Tauri, adds testcontainers-based integration testing infrastructure, enhances ClickHouse support (mutation semantics, identifier quoting, CREATE DATABASE), implements connection pooling for MSSQL via bb8, and bumps the version to 0.3.0 with corresponding documentation and UI updates.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant UI as Frontend UI<br/>(ConnectionList)
    participant API as API Layer<br/>(services/api.ts)
    participant Tauri as Tauri IPC
    participant Transfer as Transfer Module<br/>(commands/transfer.rs)
    participant Parser as SQL Parser
    participant DB as Database Driver
    User->>UI: Select "Import SQL" on database
    UI->>UI: Open file picker
    User->>UI: Select .sql file
    UI->>API: importSqlFile(id, database, filePath)
    API->>Tauri: invoke("import_sql_file", params)
    Tauri->>Transfer: import_sql_file(...)
    Transfer->>Transfer: Validate file (size, extension, existence)
    Transfer->>Transfer: Read & BOM-strip file content
    Transfer->>Parser: Parse SQL into statements<br/>(handle GO, transactions)
    Parser-->>Transfer: Vec<(unit, preview)>
    Transfer->>Transfer: Detect transaction management
    Transfer->>DB: BEGIN (if needed)
    loop Each parsed statement
        Transfer->>DB: Execute statement
        alt Success
            Transfer->>Transfer: Log success, increment count
        else Error
            Transfer->>Transfer: Store error, position, batch
            Transfer->>DB: ROLLBACK (if using outer TX)
            Transfer-->>Transfer: Break loop
        end
    end
    Transfer-->>Tauri: ImportSqlResult<br/>{totalStatements, successStatements,<br/>failedAt?, error?, rolledBack}
    Tauri-->>API: Result
    API-->>UI: ImportSqlResult
    UI->>User: Show import summary<br/>(success count, rollback status)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Review Summary by Qodo

Release 260325 - SQL import feature, ClickHouse mutations, and comprehensive test expansion
Walkthroughs

Description

- **SQL import functionality**: Added a comprehensive SQL import feature with support for multiple database drivers (MSSQL, PostgreSQL, MySQL, SQLite, DuckDB, ClickHouse), including SQL parsing for comments, string literals, and driver-specific syntax (MSSQL GO batches, PostgreSQL dollar quotes)
- **ClickHouse mutation support**: Implemented statement builders for UPDATE and DELETE operations on ClickHouse tables with MergeTree engine detection and validation
- **Internationalization**: Added SQL import i18n strings for English, Chinese, and Japanese locales with import-related toast messages
- **API enhancements**: Added ImportDriverCapability type, ImportSqlResult interface, and importSqlFile API method for SQL import operations
- **Direct method variants**: Implemented _direct variants for storage, AI, and export commands to support non-Tauri contexts
- **Database initialization refactoring**: Extracted database initialization logic into init_with_app_dir method with secure random key generation for tests
- **Comprehensive integration test expansion**: Added extensive test suites for SQLite, DuckDB, PostgreSQL, MySQL, MSSQL, and ClickHouse with pagination, sorting, filtering, transactions, concurrent connections, and error handling coverage
- **Test infrastructure improvements**: Created shared context modules (postgres_context, mysql_context, mssql_context, clickhouse_context, duckdb_context) for consistent test database setup and management
- **Code formatting and consistency**: Reformatted multiple files for improved readability including multi-line function signatures, string quote standardization, and consistent indentation

Diagram

```mermaid
flowchart LR
    A["SQL Import Feature"] --> B["Transfer Commands"]
    A --> C["API Types & Methods"]
    A --> D["Mock Handler"]
    E["ClickHouse Mutations"] --> F["Statement Builders"]
    E --> G["Mutation Guards"]
    H["Internationalization"] --> I["EN/ZH/JA Locales"]
    J["Direct Methods"] --> K["Storage Commands"]
    J --> L["AI Commands"]
    J --> M["Export Methods"]
    N["Database Refactoring"] --> O["Init with App Dir"]
    N --> P["Secure Random Keys"]
    Q["Test Infrastructure"] --> R["Context Modules"]
    Q --> S["Integration Tests"]
    R --> T["Postgres/MySQL/MSSQL/ClickHouse/DuckDB"]
    S --> U["Pagination/Sorting/Filtering/Transactions"]
```
File Changes

1. src/components/business/DataGrid/tableView/utils.unit.test.ts
Code Review by Qodo
1. MySQL DML misdetected
```yaml
    runs-on: ubuntu-22.04
    timeout-minutes: 40

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Bun
        uses: oven-sh/setup-bun@v2
        with:
          bun-version: latest

      - name: Cache Bun dependencies
        uses: actions/cache@v4
        with:
          path: |
            node_modules
            ~/.bun/install/cache
          key: ${{ runner.os }}-bun-${{ hashFiles('bun.lock') }}
          restore-keys: |
            ${{ runner.os }}-bun-

      - name: Install frontend dependencies
        run: bun install --frozen-lockfile

      - name: Setup Rust toolchain
        uses: dtolnay/rust-toolchain@stable

      - name: Cache Rust build artifacts
        uses: Swatinem/rust-cache@v2
        with:
          workspaces: src-tauri -> src-tauri/target

      - name: Run unit tests
        run: bun run test:unit

      - name: Run service tests
        run: bun run test:service

      - name: Run rust unit tests
        run: bun run test:rust:unit

      - name: Run integration tests (MySQL + Postgres with testcontainers)
        run: IT_DB=all bun run test:integration

      - name: Docker diagnostics on failure
        if: failure()
        run: |
          echo "==== docker ps -a ===="
          docker ps -a || true
          echo "==== recent mysql/postgres logs ===="
          for image in mysql:8.0 postgres:16-alpine; do
            for id in $(docker ps -aq --filter "ancestor=${image}"); do
              echo "--- logs for $id (${image}) ---"
              docker logs "$id" || true
            done
          done
```
Check warning
Code scanning / CodeQL
Workflow does not contain permissions (Medium)
Copilot Autofix
AI about 22 hours ago
In general, to fix this class of issue you add an explicit permissions block at the workflow root or per job, restricting the GITHUB_TOKEN to the minimal rights needed (typically contents: read for a pure CI/test workflow). This both documents the intended access and ensures the workflow does not inherit broader defaults.
For this workflow, no step modifies repository contents, creates releases, comments on pull requests, or otherwise needs write access. The only privileged action is actions/checkout@v4, which works with contents: read. Caching with actions/cache, installing dependencies, running tests, and running Docker commands do not require GITHUB_TOKEN write scopes. Therefore the safest minimal fix is to add permissions: contents: read at the workflow root so it applies to all jobs.
Concretely:
- Edit `.github/workflows/ci.yml`.
- Add a root-level `permissions:` block after the `name: CI` line (before `on:`), with `contents: read`.
- No additional imports, definitions, or changes to steps are required; this merely limits the default `GITHUB_TOKEN` capabilities.
```diff
@@ -1,5 +1,8 @@
 name: CI
 
+permissions:
+  contents: read
+
 on:
   pull_request:
     branches:
```
CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:
```rust
fn is_affected_rows_statement(sql: &str) -> bool {
    matches!(
        first_sql_keyword(sql).as_deref(),
        Some("INSERT" | "UPDATE" | "DELETE" | "REPLACE")
    )
}
```
1. MySQL DML misdetected 🐞 Bug ✓ Correctness
MysqlDriver::execute_query relies on is_affected_rows_statement(), but its keyword parsing doesn’t skip leading SQL comments, so commented INSERT/UPDATE/DELETE statements fall through to fetch_all() and can fail at runtime. This breaks valid user SQL like -- note\nINSERT ....
Agent Prompt
### Issue description
`MysqlDriver::execute_query` uses `is_affected_rows_statement()` to decide whether to call `execute()` (for DML) or `fetch_all()` (for result sets). The helper `first_sql_keyword()` does not skip SQL comments, so queries that begin with `-- ...` or `/* ... */` are not recognized as DML and incorrectly go through `fetch_all()`, causing runtime failures.
### Issue Context
Postgres already has a comment-aware prefix skipper (`skip_ignorable_sql_prefix`) used by its `first_sql_keyword()`.
### Fix Focus Areas
- src-tauri/src/db/drivers/mysql.rs[437-465]
- src-tauri/src/db/drivers/mysql.rs[852-893]
### What to change
- Implement a MySQL equivalent of `skip_ignorable_sql_prefix` (handle whitespace, `--` line comments, `/* */` block comments, and optionally `#` for MySQL) and use it in `first_sql_keyword()`.
- Add unit tests proving `is_affected_rows_statement("-- c\nINSERT ...")` and `is_affected_rows_statement("/*c*/UPDATE ...")` return true.
ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
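A comment-aware prefix skipper along the lines suggested above might look like the following (a minimal sketch mirroring the described Postgres approach, not the project's actual code; the `#` branch is the optional MySQL extension the prompt mentions):

```rust
// Hypothetical sketch: skip whitespace, `--` and `#` line comments, and
// `/* ... */` block comments before reading the first keyword.
fn skip_ignorable_sql_prefix(sql: &str) -> &str {
    let mut rest = sql;
    loop {
        let trimmed = rest.trim_start();
        if let Some(after) = trimmed.strip_prefix("--") {
            // Line comments run to end of line (or end of input).
            rest = after.split_once('\n').map_or("", |(_, r)| r);
        } else if let Some(after) = trimmed.strip_prefix('#') {
            rest = after.split_once('\n').map_or("", |(_, r)| r);
        } else if let Some(after) = trimmed.strip_prefix("/*") {
            // Block comments run to the closing `*/`.
            rest = after.split_once("*/").map_or("", |(_, r)| r);
        } else {
            return trimmed;
        }
    }
}

fn is_affected_rows_statement(sql: &str) -> bool {
    let keyword: String = skip_ignorable_sql_prefix(sql)
        .chars()
        .take_while(|c| c.is_ascii_alphabetic())
        .collect();
    matches!(
        keyword.to_ascii_uppercase().as_str(),
        "INSERT" | "UPDATE" | "DELETE" | "REPLACE"
    )
}

fn main() {
    assert!(is_affected_rows_statement("-- c\nINSERT INTO t VALUES (1)"));
    assert!(is_affected_rows_statement("/*c*/UPDATE t SET a = 1"));
    assert!(!is_affected_rows_statement("SELECT * FROM t"));
}
```

With the skipper in place, the commented-DML cases from the report above classify as affected-rows statements.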
```rust
fn validate_import_path(path: &Path) -> Result<(), String> {
    if path.as_os_str().is_empty() {
        return Err("[IMPORT_ERROR] Invalid import path".to_string());
    }
    if path.is_dir() {
        return Err("[IMPORT_ERROR] Import path points to a directory".to_string());
    }
    if !path.exists() {
        return Err("[IMPORT_ERROR] Import file does not exist".to_string());
    }
    let Some(ext) = path.extension().and_then(|v| v.to_str()) else {
        return Err("[IMPORT_ERROR] Import file must use .sql extension".to_string());
    };
    if !ext.eq_ignore_ascii_case("sql") {
        return Err("[IMPORT_ERROR] Import file must use .sql extension".to_string());
    }
    Ok(())
}

fn validate_import_file_size(path: &Path) -> Result<(), String> {
    let metadata = fs::metadata(path)
        .map_err(|e| format!("[IMPORT_ERROR] failed to read file metadata: {e}"))?;
    if metadata.len() > MAX_IMPORT_FILE_SIZE_BYTES {
        return Err(format!(
            "[IMPORT_ERROR] file is too large (max {} bytes)",
            MAX_IMPORT_FILE_SIZE_BYTES
        ));
    }
    Ok(())
}
```
2. SQL import reads special files 🐞 Bug ⛨ Security
import_sql_file reads the selected path via fs::read_to_string() after only checking exists, !is_dir, and .sql extension; it never verifies the path is a regular file. This allows FIFOs/devices/symlinks-with-.sql to bypass intended safety checks and can hang or behave unpredictably.
Agent Prompt
### Issue description
The SQL import feature reads the target path with `fs::read_to_string()`, but validation only checks existence, not-a-directory, and extension. Non-regular files (FIFO/device) or symlink tricks can bypass the intended safety constraints and may hang the import or cause resource issues.
### Issue Context
`validate_import_file_size()` uses `fs::metadata().len()` which is meaningful for regular files but not a complete safety check for special file types.
### Fix Focus Areas
- src-tauri/src/commands/transfer.rs[398-415]
- src-tauri/src/commands/transfer.rs[935-964]
### What to change
- In `validate_import_path`, use `std::fs::symlink_metadata` and verify `file_type.is_file()`.
- Consider rejecting symlinks explicitly (`file_type.is_symlink()`), or resolve/canonicalize and re-check.
- Update `validate_import_file_size` to reuse the same metadata object and ensure it’s a regular file before checking `len()`.
- Add tests for: directory, missing file, wrong extension, and non-regular file handling (where feasible).
ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
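The hardening described in the prompt above can be sketched as follows. This is a hypothetical combined check, not the project's implementation; the size limit constant is illustrative (the real `MAX_IMPORT_FILE_SIZE_BYTES` lives in transfer.rs):

```rust
use std::fs;
use std::path::Path;

// Illustrative value; the actual limit is defined in the module.
const MAX_IMPORT_FILE_SIZE_BYTES: u64 = 100 * 1024 * 1024;

// Reject symlinks and non-regular files before reading, and reuse one
// metadata lookup for the size check.
fn validate_regular_sql_file(path: &Path) -> Result<u64, String> {
    // symlink_metadata does not follow symlinks, so a symlink is reported
    // as such rather than as its target.
    let meta = fs::symlink_metadata(path)
        .map_err(|e| format!("[IMPORT_ERROR] failed to read file metadata: {e}"))?;
    let ft = meta.file_type();
    if ft.is_symlink() {
        return Err("[IMPORT_ERROR] symlinks are not allowed".to_string());
    }
    if !ft.is_file() {
        // Excludes directories, FIFOs, devices, and sockets.
        return Err("[IMPORT_ERROR] import path is not a regular file".to_string());
    }
    if meta.len() > MAX_IMPORT_FILE_SIZE_BYTES {
        return Err(format!(
            "[IMPORT_ERROR] file is too large (max {MAX_IMPORT_FILE_SIZE_BYTES} bytes)"
        ));
    }
    Ok(meta.len())
}

fn main() {
    // A directory fails the regular-file check.
    assert!(validate_regular_sql_file(Path::new(".")).is_err());
    // A missing path fails at the metadata lookup.
    assert!(validate_regular_sql_file(Path::new("no_such_file.sql")).is_err());
}
```

Because `metadata.len()` is only meaningful for regular files, doing the file-type check first also makes the size check trustworthy.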
```diff
 #[test]
 fn api_key_encrypt_decrypt_round_trip_and_format_validation() {
-    let key = [3u8; 32];
+    let mut key = [0u8; 32];
```
Check failure
Code scanning / CodeQL
Hard-coded cryptographic value (Critical)
Copilot Autofix
AI about 22 hours ago
Copilot could not generate an autofix suggestion for this alert. Try pushing a new commit or if the problem persists contact support.
Actionable comments posted: 20
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src-tauri/src/db/drivers/mysql.rs (1)
871-908: ⚠️ Potential issue | 🟠 Major

Route no-result DDL statements through `execute()`, not the row-fetch path.

`sqlx` documents `.execute()` as the appropriate API for statements with no result rows (INSERT/UPDATE/DELETE/DDL), while `.fetch_all()` is for statements that return data. Currently, DDL statements like `CREATE`/`ALTER`/`DROP`/`TRUNCATE`/`SET`/`USE` reach the else branch and attempt `.fetch_all()`, which is semantically incorrect. Although the code includes a fallback for MySQL's prepared protocol error (1295), this workaround masks the underlying issue: DDL should be routed directly through `.execute()` for correctness and efficiency.

Add a no-result statement classifier and combine it with the affected-rows path:
♻️ Suggested direction
```diff
+fn is_no_result_statement(sql: &str) -> bool {
+    matches!(
+        first_sql_keyword(sql).as_deref(),
+        Some("CREATE" | "ALTER" | "DROP" | "TRUNCATE" | "SET" | "USE")
+    )
+}
+
 ...
-    } else if is_affected_rows_statement(&sql) {
+    } else if is_affected_rows_statement(&sql) || is_no_result_statement(&sql) {
         let result = sqlx::query(&sql)
             .execute(&self.pool)
             .await
             .map_err(|e| format!("[QUERY_ERROR] {e}"))?;
         (Vec::new(), Vec::new(), result.rows_affected() as i64)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/db/drivers/mysql.rs` around lines 871 - 908, The current logic calls sqlx::query(...).fetch_all(...) for statements that may be DDL/no-result, which is wrong; add a no-result-statement classifier (e.g. is_no_result_statement or extend is_affected_rows_statement) and if it returns true route the SQL through sqlx::query(&sql).execute(&self.pool).await (same handling as the existing affected-rows branch) instead of the fetch_all path; keep the existing prepared-protocol-unsupported error handling (is_prepared_protocol_unsupported_error) and the fallback to sqlx::raw_sql only for statements that were attempted with fetch_all, and ensure describe_query_columns(&sql).await is only called when fetch_all succeeded and returned zero rows without being a no-result statement (references: is_affected_rows_statement, is_prepared_protocol_unsupported_error, describe_query_columns, executed_with_raw_sql).
🧹 Nitpick comments (15)
src/components/business/Metadata/TableMetadataView.tsx (1)
210-212: Consider maintaining consistent formatting for all labels.

These three labels were reformatted to use multi-line JSX, while other similar labels in the same section ("Engine" at line 203, "Primary Key Expr" at lines 230-231, and "TTL" at line 250) remain single-line. For consistency and readability, consider either applying the multi-line format to all labels or reverting these to single-line.
♻️ Option 1: Revert to single-line (matches current style)
```diff
 <div className="border border-border rounded-md p-2">
-  <div className="text-xs text-muted-foreground">
-    Partition Key
-  </div>
+  <div className="text-xs text-muted-foreground">Partition Key</div>
   <div className="font-mono text-sm break-words">
```

```diff
 <div className="border border-border rounded-md p-2">
-  <div className="text-xs text-muted-foreground">
-    Sorting Key
-  </div>
+  <div className="text-xs text-muted-foreground">Sorting Key</div>
   <div className="font-mono text-sm break-words">
```

```diff
 <div className="border border-border rounded-md p-2">
-  <div className="text-xs text-muted-foreground">
-    Sampling Key
-  </div>
+  <div className="text-xs text-muted-foreground">Sampling Key</div>
   <div className="font-mono text-sm break-words">
```

Also applies to: 220-222, 240-242
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/business/Metadata/TableMetadataView.tsx` around lines 210 - 212, In TableMetadataView, make the label JSX formatting consistent: revert the multi-line JSX blocks for the labels "Partition Key", the similar label at 220-222, and the one at 240-242 back to single-line JSX to match the existing single-line labels like "Engine", "Primary Key Expr", and "TTL"; update the corresponding div elements with className "text-xs text-muted-foreground" in the TableMetadataView component so each label is written on one line, preserving exact text and spacing.src-tauri/Cargo.toml (1)
48-49: CI runner is missing required GTK system packages for Rust test jobs.

Current CI fails before tests due to missing `gdk-3.0` (`pkg-config` cannot resolve it). Add Linux runner setup for GTK dev libs (e.g., `libgtk-3-dev`, `pkg-config`) in the workflow so the new test setup can run reliably.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/Cargo.toml` around lines 48 - 49, CI jobs fail because GTK dev libs are missing for Rust tests (pkg-config cannot resolve gdk-3.0) after adding the testcontainers dev-dependency in Cargo.toml; update the workflow that runs the Rust test job to install system packages before running cargo test by adding steps to apt-get install pkg-config and libgtk-3-dev (and any other GTK dev deps) on Linux runners (or use setup-gtk action), so the Rust tests can successfully find gdk-3.0 during build.src-tauri/src/commands/storage.rs (1)
23-38: LGTM — Direct helper pattern is consistent and correct.

The new `*_direct` functions provide a clean way to bypass Tauri's `State` extraction for integration testing while maintaining identical logic to the existing commands.

If this pattern grows to more commands, consider a macro to reduce the boilerplate between the `State<'_, AppState>` and `&AppState` variants.

Also applies to: 59-75, 87-94, 106-113
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/commands/storage.rs` around lines 23 - 38, The direct helper pattern is duplicated across functions (e.g., save_query_direct) — create a small macro to generate both the State<'_, AppState> wrapper and the &AppState direct variant to remove boilerplate; the macro should accept the base function name, argument list and call-through body (for example generate save_query(state: State<'_, AppState>, ...) -> Result<...> that calls save_query_direct(&state, ...)), and replace the existing paired functions (like save_query and save_query_direct, and the other pairs mentioned around lines 59-75, 87-94, 106-113) so behavior is unchanged but duplication is removed.scripts/test-integration.sh (1)
17-22: Intentional word-splitting on `${ids}`, but consider using `xargs` for robustness.

The unquoted `${ids}` relies on word-splitting to pass multiple container IDs to `docker rm -f`. While this works for typical IDs, using `xargs` would be more robust and explicit:

♻️ Suggested improvement

```diff
     local ids
-    ids="$(docker ps -aq --filter "name=${it_container_prefix}" || true)"
-    if [[ -n "${ids}" ]]; then
+    ids=$(docker ps -aq --filter "name=${it_container_prefix}" || true)
+    if [[ -n "${ids}" ]]; then
         echo "[cleanup] removing leftover integration containers: ${it_container_prefix}*"
-        docker rm -f ${ids} >/dev/null 2>&1 || true
+        echo "${ids}" | xargs -r docker rm -f >/dev/null 2>&1 || true
     fi
```
local ids - ids="$(docker ps -aq --filter "name=${it_container_prefix}" || true)" - if [[ -n "${ids}" ]]; then + ids=$(docker ps -aq --filter "name=${it_container_prefix}" || true) + if [[ -n "${ids}" ]]; then echo "[cleanup] removing leftover integration containers: ${it_container_prefix}*" - docker rm -f ${ids} >/dev/null 2>&1 || true + echo "${ids}" | xargs -r docker rm -f >/dev/null 2>&1 || true fi🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/test-integration.sh` around lines 17 - 22, The cleanup block relies on unquoted ${ids} for word-splitting when passing container IDs to docker rm -f; change it to pipe the ids into xargs to avoid relying on shell splitting and to handle arbitrary whitespace safely (use the existing ids assignment and replace the docker rm -f ${ids} call with a robust xargs invocation such as echo "${ids}" | xargs -r docker rm -f), referencing the variables ids and it_container_prefix and the docker rm -f invocation so you update that exact command..github/workflows/ci.yml (1)
17-20: Consider pinning Bun version for reproducible builds.

Using `bun-version: latest` may cause unexpected CI failures when Bun releases breaking changes. Consider pinning to a specific version.

♻️ Suggested improvement

```diff
 - name: Setup Bun
   uses: oven-sh/setup-bun@v2
   with:
-    bun-version: latest
+    bun-version: "1.1.0" # Pin to a specific version
```
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yml around lines 17 - 20, The CI step using the Setup Bun action currently sets bun-version: latest which risks nondeterministic builds; update the "Setup Bun" step (uses: oven-sh/setup-bun@v2) to pin bun-version to a specific released version string (e.g., a tested bun release) instead of "latest" so builds are reproducible and stable.src/components/business/Sidebar/ConnectionList.tsx (1)
2401-2558: Extract the duplicated database-tree renderer before it drifts.

The `showSavedQueriesInTree` split now repeats the entire database/schema/table rendering path, including selection behavior and the table context menu. A shared renderer with a configurable base level would keep future changes like import/export actions from having to be patched in two places.

Also applies to: 2636-2853
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/business/Sidebar/ConnectionList.tsx` around lines 2401 - 2558, The database/schema/table rendering logic is duplicated across the showSavedQueriesInTree split; extract a shared renderer to avoid drift by creating a reusable function (e.g., renderDatabaseTree or renderDatabaseGroup) that accepts a base level and the connection/database context and reuses existing helpers like getTableNodeKey, renderTableNode (move this inner function out), tableNodeRefs, toggleTable, handleTableClick, handleTableExport and handleCreateQueryFromContext; replace both duplicated blocks with calls to this new renderer so selection, context menu, and behaviors remain consistent across both code paths.src-tauri/src/commands/mod.rs (1)
70-107: Avoid maintaining two copies of the connection/retry path.

The new `*_from_app_state` helpers are effectively copies of the existing `State<'_, AppState>` versions. Please extract the shared body once so LocalDb existence checks, pool eviction, and retry behavior can't silently diverge later.

Also applies to: 161-178
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/commands/mod.rs` around lines 70 - 107, The code duplicates connection/retry logic between ensure_connection_with_db_from_app_state and the State<'_, AppState> variant; extract the shared logic into a single helper (e.g., a new async fn ensure_connection_shared or ensure_connection_check) that accepts the AppState reference, connection id, optional database name, and pool key, and performs: obtaining/cloning local_db (state.local_db.lock().await), verifying local_db exists, checking db.get_connection_by_id(id).await and calling state.pool_manager.remove_by_prefix(&id.to_string()).await on failure, applying database override to the form, and returning the pool_manager.connect result; then have both ensure_connection_with_db_from_app_state and the State<'_, AppState> wrapper call that helper to avoid divergent LocalDb existence checks, pool eviction, and retry behavior.src-tauri/tests/common/mysql_context.rs (1)
47-58: Collapse the duplicate env-read path into one helper.
`mysql_form_from_local_env()` already reads every `MYSQL_*` value, then `apply_mysql_env_overrides()` immediately rereads the same set. The same two-step pattern exists in the sibling context files, so it is easy for defaults and override behavior to drift. A shared helper that reads each env var once would keep these test contexts consistent.

Also applies to: 61-79
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/tests/common/mysql_context.rs` around lines 47 - 58, Collapse the duplicate environment reads by extracting a single helper (e.g., read_mysql_env_vars or build_mysql_connection_from_env) that reads each MYSQL_* once and returns either a partial struct or a ConnectionForm; then change mysql_form_from_local_env and apply_mysql_env_overrides to use that helper instead of re-reading the env vars. Specifically, move the shared env_or/env_i64 calls for MYSQL_HOST, MYSQL_PORT, MYSQL_USER, MYSQL_PASSWORD, MYSQL_DB into the new helper and have mysql_form_from_local_env simply initialize the ConnectionForm from the helper (and have apply_mysql_env_overrides accept the already-read values or be removed), and apply the same refactor for the sibling context block at 61-79 to ensure a single env-read path.src-tauri/tests/mysql_command_integration.rs (1)
12-18: Same minor timestamp collision potential as Postgres tests.

This uses the same millisecond-based naming pattern. Consider extracting this helper to a shared module to ensure consistent behavior across test files.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/tests/mysql_command_integration.rs` around lines 12 - 18, Extract the duplicate unique_table_name function into a shared test utilities module (keep the function name unique_table_name and signature fn unique_table_name(prefix: &str) -> String) and have both mysql_command_integration.rs and the Postgres test import and call that shared helper; implement the same millisecond-based logic (or optionally add extra entropy like a short random suffix) inside the shared helper so both test files use a single source of truth and avoid timestamp-collision inconsistencies.

src-tauri/tests/postgres_command_integration.rs (1)
11-17: Potential table name collision in parallel test runs. The unique_table_name function uses millisecond timestamps, which could collide if multiple tests start within the same millisecond. Consider adding a random suffix or using a more granular timestamp.

🔧 Optional: Add randomness to avoid collision
```diff
+use std::sync::atomic::{AtomicU64, Ordering};
+
+static TABLE_COUNTER: AtomicU64 = AtomicU64::new(0);
+
 fn unique_table_name(prefix: &str) -> String {
     let millis = SystemTime::now()
         .duration_since(UNIX_EPOCH)
         .expect("time should be after unix epoch")
         .as_millis();
-    format!("{}_{}", prefix, millis)
+    let counter = TABLE_COUNTER.fetch_add(1, Ordering::SeqCst);
+    format!("{}_{}_{}", prefix, millis, counter)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/tests/postgres_command_integration.rs` around lines 11 - 17, unique_table_name currently builds names using millisecond timestamps which can collide in parallel runs; modify unique_table_name to append a high-entropy suffix (for example a UUID v4 or a small random hex/string from the rand crate or include as_nanos()) to guarantee uniqueness across concurrent tests—update the function (and add necessary imports) so it returns format!("{}_{}_{}", prefix, millis, <random_or_uuid>) using uuid::Uuid::new_v4() or rand::thread_rng() to generate the suffix.

src-tauri/tests/common/mssql_context.rs (1)
40-83: Consider documenting the unused docker parameter. On line 43, the docker parameter is explicitly ignored (let _ = docker;), but the function creates its own Cli::default() when Docker is needed. This pattern differs from other contexts that use the passed-in docker parameter. Consider adding a comment explaining this design choice or aligning with other context modules.

📝 Optional: Add clarifying comment
```diff
 pub fn mssql_form_from_test_context(
     docker: Option<&Cli>,
 ) -> (Option<MssqlContainerGuard>, ConnectionForm) {
-    let _ = docker;
+    // Note: docker parameter is ignored; we create a leaked Cli for static lifetime
+    // to support the shared singleton pattern across all tests.
+    let _ = docker;
     if should_reuse_local_db() {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/tests/common/mssql_context.rs` around lines 40 - 83, The function mssql_form_from_test_context currently ignores the incoming docker parameter (let _ = docker;) and instead leaks a new Cli::default() when initializing MSSQL_SHARED; either use the provided docker if Some or document why a leaked Cli::default() is required. Update mssql_form_from_test_context to check the docker argument and use it for creating the runnable/container when present, falling back to Box::leak(Box::new(Cli::default())) only when None, or add a short clarifying comment above the let _ = docker; line explaining why the passed-in docker is intentionally ignored and why creating/leaking a Cli::default() is necessary; reference mssql_form_from_test_context, docker, MSSQL_SHARED and the Box::leak(Cli::default()) use so reviewers can find the change.

src-tauri/tests/mysql_stateful_command_integration.rs (2)
19-25: Potential name collision in high-throughput scenarios.
unique_name uses millisecond timestamps which could collide if tests run in parallel within the same millisecond. Consider adding a random component or atomic counter.

♻️ Suggested improvement
```diff
+use std::sync::atomic::{AtomicU64, Ordering};
+
+static NAME_COUNTER: AtomicU64 = AtomicU64::new(0);
+
 fn unique_name(prefix: &str) -> String {
     let millis = SystemTime::now()
         .duration_since(UNIX_EPOCH)
         .expect("time should be after unix epoch")
         .as_millis();
-    format!("{}_{}", prefix, millis)
+    let counter = NAME_COUNTER.fetch_add(1, Ordering::Relaxed);
+    format!("{}_{}_{}", prefix, millis, counter)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/tests/mysql_stateful_command_integration.rs` around lines 19 - 25, unique_name currently builds identifiers from millisecond timestamps and can collide under parallel/high-throughput tests; update unique_name to append a small random component or a thread-safe monotonic counter to the timestamp (e.g., use the rand crate to add a short hex/random u32 or use a static AtomicU64 incremented per call) so names are effectively unique even within the same millisecond; modify the unique_name function to combine SystemTime::now().as_millis() with the chosen random/counter value and return that string.
67-75: SQL injection risk in test helper. The drop_database_if_exists function interpolates db_name directly into SQL without using the quoting function. While this is test code with controlled input, it's inconsistent with production code patterns.

♻️ Suggested fix
```diff
 async fn drop_database_if_exists(form: &ConnectionForm, db_name: &str) {
     let driver = MysqlDriver::connect(form)
         .await
         .expect("failed to connect mysql driver for cleanup");
+    // Use proper quoting to escape special characters
+    let quoted_name = db_name.replace('`', "``");
     let _ = driver
-        .execute_query(format!("DROP DATABASE IF EXISTS `{}`", db_name))
+        .execute_query(format!("DROP DATABASE IF EXISTS `{}`", quoted_name))
         .await;
     driver.close().await;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/tests/mysql_stateful_command_integration.rs` around lines 67 - 75, The test helper drop_database_if_exists currently interpolates db_name directly into SQL, risking SQL injection and inconsistent behavior; change it to obtain a safely quoted identifier from the driver (e.g., call the driver's identifier-quoting/escaping method after MysqlDriver::connect) and use that quoted value when building the DROP DATABASE statement passed to execute_query so the SQL uses a properly escaped identifier instead of raw interpolation.

src-tauri/tests/common/shared.rs (1)
17-29: Consider handling DNS resolution failures gracefully.
wait_for_port calls resolve_socket_addr which panics if resolution fails. If the hostname is invalid or DNS is temporarily unavailable, this could give a confusing panic instead of a timeout message.

♻️ Suggested improvement
```diff
 pub fn wait_for_port(host: &str, port: u16, timeout: Duration) {
     let started_at = Instant::now();
-    let addr = resolve_socket_addr(host, port);
     while started_at.elapsed() < timeout {
+        let addr = match (host, port).to_socket_addrs() {
+            Ok(mut addrs) => match addrs.next() {
+                Some(a) => a,
+                None => {
+                    sleep(Duration::from_millis(500));
+                    continue;
+                }
+            },
+            Err(_) => {
+                sleep(Duration::from_millis(500));
+                continue;
+            }
+        };
         if TcpStream::connect_timeout(&addr, Duration::from_millis(500)).is_ok() {
             return;
         }
         sleep(Duration::from_millis(500));
     }
     panic!("timed out waiting for {}:{} to accept connections", host, port);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/tests/common/shared.rs` around lines 17 - 29, wait_for_port currently calls resolve_socket_addr which panics on DNS failures; change wait_for_port to handle resolution errors gracefully by attempting to resolve inside the retry loop (call resolve_socket_addr and handle its error) and only attempt TcpStream::connect_timeout when resolution succeeds, sleeping and retrying resolution until the overall timeout elapses; preserve the final panic message "timed out waiting for {}:{} to accept connections" so DNS resolution failures become part of the timeout instead of immediate panics. Reference: wait_for_port and resolve_socket_addr.

src-tauri/src/commands/connection.rs (1)
324-403: Consider extracting shared logic to reduce duplication.
create_database_by_id_direct largely duplicates create_database_by_id. Consider extracting the common logic into a private helper function that both can call, differing only in how the AppState is accessed.
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/commands/connection.rs` around lines 324 - 403, create_database_by_id_direct duplicates most of create_database_by_id; extract the shared logic (validation of payload.name, driver selection/normalization, unsupported-driver check, the match over driver that builds SQL and calls super::execute_with_retry_from_app_state, and final error normalization) into a private helper like create_database_common(state_ref_getter, id, payload, if_not_exists) or create_database_internal(id, driver, payload, if_not_exists) that returns Result<(), String>, then have both create_database_by_id_direct and create_database_by_id call that helper — keep the driver acquisition differences (how you read local_db from AppState) outside the helper and pass the normalized driver string in, and ensure you reuse existing symbols like validate_database_name, build_*_create_database_sql, execute_with_retry_from_app_state, and normalize_create_database_error so behavior is unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci.yml:
- Around line 8-11: Add a minimal permissions block to restrict the GITHUB_TOKEN
scope by adding a permissions: mapping (for example at the workflow root or
under the job "test") that explicitly grants only the least privileges needed
(e.g., contents: read and actions: read or other specific scopes your tests
require) instead of the default full permissions; update the workflow keys near
the existing jobs/test configuration (where "runs-on" and "timeout-minutes" are
defined) to include this new permissions block referencing GITHUB_TOKEN usage.
In `@src-tauri/src/commands/transfer.rs`:
- Around line 1044-1063: The SingleQuoted state in the SQL scanner
(SqlScanState::SingleQuoted) currently always treats backslash as an escape
(using chars, current, i), which is MySQL-specific; change that logic to only
perform the backslash escape when the driver flag for MySQL-style escaping is
enabled (re-use the existing mysql_style_hash_comment or an equivalent
mysql_style_backslash_escape boolean available in the scanner/context),
otherwise leave backslashes as literal characters and just advance by one. Keep
the unconditional handling of doubled single-quotes (the existing check
comparing chars.get(i + 1) to Some(&'\'')) intact.
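A minimal, standalone sketch of the conditional escape handling the comment describes. The function name and the mysql_style_backslash_escape flag are illustrative stand-ins for the scanner's real state machine; only the two rules matter: backslash consumes the next character solely in MySQL mode, and doubled single-quotes always stay inside the literal.

```rust
// Given chars[i] as the opening quote, return the index just past the literal.
fn skip_single_quoted(chars: &[char], mut i: usize, mysql_style_backslash_escape: bool) -> usize {
    i += 1; // step past the opening quote
    while i < chars.len() {
        match chars[i] {
            // MySQL-style: backslash escapes the next character.
            '\\' if mysql_style_backslash_escape => i += 2,
            // Doubled single-quote is an escaped quote in any dialect.
            '\'' if chars.get(i + 1) == Some(&'\'') => i += 2,
            // Unescaped quote closes the literal.
            '\'' => return i + 1,
            _ => i += 1,
        }
    }
    i // unterminated literal: consume to end of input
}
```

In standard-SQL mode a backslash is an ordinary character, so `'a\'` already closes the literal, whereas in MySQL mode the same input keeps scanning past the escaped quote.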
In `@src-tauri/src/connection_input/mod.rs`:
- Around line 49-54: The SSH password handling currently uses
trim_preserve_empty for form.ssh_password so whitespace becomes Some(""), which
bypasses the later auth check that only rejects None; either switch to
trim_to_option(form.ssh_password) so empty/whitespace passwords become None, or
keep trim_preserve_empty but update the SSH auth validation (where
form.ssh_password is checked alongside form.ssh_key_path) to treat Some("") as
missing (e.g., require form.ssh_password.as_ref().map(|s|
!s.is_empty()).unwrap_or(false) or normalize with .filter(|s| !s.is_empty())
before validation) so an effectively empty password does not bypass
authentication checks.
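The two normalization behaviors in play can be sketched like this (signatures assumed; the project's actual helpers may take &str or Option<&str>):

```rust
// Maps empty/whitespace-only input to None, so a blank SSH password
// correctly fails the "password or key required" check.
fn trim_to_option(value: Option<String>) -> Option<String> {
    value
        .map(|v| v.trim().to_string())
        .filter(|v| !v.is_empty())
}

// Keeps Some("") for whitespace input, which is what lets an effectively
// empty password slip past a check that only rejects None.
fn trim_preserve_empty(value: Option<String>) -> Option<String> {
    value.map(|v| v.trim().to_string())
}
```

Switching the ssh_password path to the first form (or filtering out empty strings before the auth check) closes the gap the comment describes.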
In `@src-tauri/src/db/drivers/mod.rs`:
- Around line 23-62: The conn_failed_error function currently returns the full
upstream error string to users; change it to return a sanitized, generic
user-facing message (e.g., "[CONN_FAILED] Connection failed — check host, port,
credentials, and SSL settings (see logs for details)" or keep the existing
contextual hint text but omit the raw error contents), while logging the
original raw error for internal diagnostics using your app logger (e.g.,
log::error or tracing::error) before formatting the returned string; keep the
existing hint-selection logic in conn_failed_error but ensure the formatted
return does not include the raw variable or any sensitive fragments.
In `@src-tauri/src/db/drivers/mssql.rs`:
- Around line 241-245: The health-check in async fn is_valid(&self, conn: &mut
Self::Connection) currently calls conn.simple_query("SELECT 1") and drops the
returned QueryStream, which defers flushing; update is_valid to fully consume
the stream by awaiting the results (use .into_results().await on the
QueryStream) and map any error into the existing error type (e.g., map_err(|e|
format!("{}", e))) so the stream is drained before returning Ok(()), preventing
deferred flush penalties on connection checkout.
- Around line 212-228: The connect_with_config function currently only timeouts
the TcpStream::connect phase; move the tokio::time::timeout to wrap the entire
async sequence that includes both establishing the TCP socket and calling
Client::connect so the PRELOGIN/TLS/LOGIN7 handshake is also bounded; implement
a timeout around an async block that first creates/configures the TcpStream
(TcpStream::connect and set_nodelay) and then awaits Client::connect(config,
tcp.compat_write()), propagating errors with map_err(format!) and returning the
same Result<Client<Compat<TcpStream>>, String> on timeout or underlying errors.
In `@src-tauri/src/ssh.rs`:
- Around line 48-55: The code currently assigns a default port 0 for "sqlite"
which later triggers a misleading validation error; update start_ssh_tunnel() to
detect file-based drivers by inspecting config.driver (e.g., "sqlite", "duckdb")
and immediately return an error with the message "SSH tunneling is not supported
for file-based drivers" before any port defaulting/validation logic runs; keep
the existing default_port logic for network drivers only (config.driver used
elsewhere stays the same). Also extend the unit test
test_target_port_default_by_driver() to add a case for "sqlite" with ssh enabled
and assert that start_ssh_tunnel() returns the explicit file-based-driver error
instead of a port validation error.
In `@src-tauri/tests/common/duckdb_context.rs`:
- Around line 23-28: The code treats any non-error env::var result as an
override even when the value is empty; change the branches that call
env::var("DUCKDB_IT_DB_PATH") and env::var("DUCKDB_DB_PATH") to trim the
returned string and skip it if it's blank (e.g. check if v.trim().is_empty() and
continue instead of returning), and when returning use
PathBuf::from(trimmed_value) so empty strings are not returned as a file path.
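A sketch of that guard, with illustrative function names (non_blank_path and duckdb_path_override are not the crate's real helpers):

```rust
use std::env;
use std::path::PathBuf;

// An override counts only when the value is non-blank after trimming, so an
// empty env var no longer becomes a bogus file path.
fn non_blank_path(value: &str) -> Option<PathBuf> {
    let trimmed = value.trim();
    (!trimmed.is_empty()).then(|| PathBuf::from(trimmed))
}

fn duckdb_path_override() -> Option<PathBuf> {
    ["DUCKDB_IT_DB_PATH", "DUCKDB_DB_PATH"]
        .iter()
        .find_map(|key| env::var(key).ok().and_then(|v| non_blank_path(&v)))
}
```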
In `@src-tauri/tests/duckdb_integration.rs`:
- Around line 534-538: The code treats the COUNT(*) aggregate as a JSON string
and uses as_str().expect(...) which will break if DuckDB returns a numeric JSON
value; replace this by using a small reusable coercion helper (e.g.,
value_to_i64 or coerce_json_to_i64) to convert rolled_back.data[0]["c"] into i64
safely (handling string and number variants and returning a parse error
otherwise) and apply the same helper to the other later count checks so both
string and numeric payloads are accepted reliably.
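The coercion the comment asks for can be sketched as follows. The real tests would match on serde_json::Value (Number via as_i64, String via parse); this standalone version uses a tiny stand-in enum so the logic is visible in isolation:

```rust
// Stand-in for the two serde_json::Value variants the helper must accept.
#[derive(Debug)]
enum Json {
    Str(String),
    Num(i64),
}

// Accepts both string-encoded and numeric counts, erroring otherwise.
fn value_to_i64(v: &Json) -> Result<i64, String> {
    match v {
        Json::Num(n) => Ok(*n),
        Json::Str(s) => s
            .trim()
            .parse::<i64>()
            .map_err(|e| format!("expected integer, got {:?}: {}", s, e)),
    }
}
```

Applying one such helper to every COUNT(*) check keeps the tests stable whichever JSON encoding the driver emits.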
- Around line 594-597: The test currently uses a Unix-only hardcoded path;
change the ConnectionForm.file_path construction in duckdb_integration.rs to
build a platform-neutral nonexistent path from std::env::temp_dir() (e.g.,
temp_dir().join(format!("nonexistent_parent_{}",
unique_id)).join("dbpaw_test.duckdb")) so the parent directory is guaranteed
missing across platforms; convert that PathBuf to a String for
ConnectionForm.file_path (use a UUID or timestamp for unique_id to avoid
accidental existence).
In `@src-tauri/tests/mysql_integration.rs`:
- Around line 735-738: The test assumes MySQL aggregate columns are always
strings by calling as_str() on rolled_back.data[0]["c"]; change this to use a
small scalar-coercion helper (e.g., normalize_scalar_to_string or
coerce_json_value_to_string) that accepts a serde_json::Value and returns the
integer as a string whether the value is a JSON String or Number, and use that
helper when computing rolled_back_count and in the similar checks around lines
758-761; update assertions to compare against the normalized string ("0") so the
tests accept either encoding.
In `@src-tauri/tests/mysql_stateful_command_integration.rs`:
- Around line 42-52: The helper init_state_with_local_db currently creates a
temporary local_db_dir for LocalDb but never removes it; modify
init_state_with_local_db (and any tests that call it) to return the created path
(e.g., change signature to return (AppState, PathBuf)) so callers can remove the
directory after test completion, ensure the function still initializes AppState
and sets state.local_db with the Arc<LocalDb>, and update tests to call
fs::remove_dir_all(local_db_dir) (or equivalent cleanup) after the test to
prevent disk accumulation.
In `@src-tauri/tests/postgres_integration.rs`:
- Around line 433-443: The test currently only checks that "meta" exists but
doesn't verify its JSON structure; update the assertions in the execute_query()
block (using driver.execute_query -> query_result -> query_row) to assert the
actual JSON payload (e.g., compare query_row["meta"] to the expected
serde_json::Value or a normalized form) instead of just presence, and make the
same change in the get_table_data() assertion block (the later section around
lines 459-464) so both query paths validate the inserted JSON content, not
merely non-nullness.
- Around line 564-583: The test currently only asserts
inserted.success/updated.success/deleted.success but doesn't verify the affected
row counts; update the assertions after the execute_query calls (variables
inserted, updated, deleted returned from driver.execute_query) to assert the
expected row_count values: for the INSERT expect row_count == 2, for the UPDATE
expect row_count == 1, and for the DELETE expect row_count == 2, using the
row_count field on the result objects to ensure the affected-row regression is
caught.
In `@src-tauri/tests/postgres_stateful_command_integration.rs`:
- Around line 18-24: The unique_name function can collide under parallel test
runs because it only uses millisecond timestamps; change it to produce
collision-resistant names by adding a randomness or uniqueness token (e.g.,
append a UUID, nanosecond timestamp, or thread-safe atomic counter) to the
existing prefix/timestamp scheme. Update the unique_name function (the one
returning String) to combine prefix, time, and a UUID or atomic/thread id so
temp LocalDb directories, DB names, connection names, and fixture tables no
longer risk collisions when tests run concurrently.
In `@src-tauri/tests/sqlite_integration.rs`:
- Around line 146-149: Tests are deleting caller-supplied SQLite files by
unconditionally calling std::fs::remove_file(db_path) after driver.close();
change the cleanup to only remove files the test created: either (A) detect when
sqlite_test_path() returned the external SQLITE_IT_DB_PATH (check
env::var("SQLITE_IT_DB_PATH") or compare returned path) and skip remove_file in
that case, or (B) copy the external DB into a temp file at test startup and use
that temp path for driver (so remove_file is safe). Update all occurrences
around driver.close() / remove_file(db_path) to implement one of these
approaches and preserve the existing .sqlite when SQLITE_IT_DB_PATH is supplied.
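Option (A) could be sketched as a small guard (should_remove_db_file is a hypothetical name; SQLITE_IT_DB_PATH is the variable named in the comment):

```rust
use std::env;
use std::path::Path;

// Returns false when db_path is the externally supplied SQLITE_IT_DB_PATH,
// so caller-owned databases survive the cleanup step.
fn should_remove_db_file(db_path: &Path) -> bool {
    match env::var("SQLITE_IT_DB_PATH") {
        Ok(external) if !external.trim().is_empty() => Path::new(external.trim()) != db_path,
        _ => true, // no external DB configured: the test created this file itself
    }
}
```

The cleanup sites around driver.close() would then call remove_file only when this returns true.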
In `@src/App.tsx`:
- Around line 1495-1505: The tab close button is tabbable but starts with
opacity-0, making it invisible to keyboard users; update the button element (the
onClick handler that calls handleCloseTab and the className string) to include
visibility classes such as group-focus-within:opacity-100 and
focus-visible:opacity-100 and add a visible focus ring (e.g.,
focus-visible:outline or ring classes) so the control becomes visible and
clearly focused for keyboard users; keep the existing hover behavior and ensure
you only change the className on that button element.
In `@src/components/business/DataGrid/TableView.tsx`:
- Around line 457-466: The current canInsert incorrectly blocks inserts for
non-ClickHouse drivers by gating on hasPrimaryKeys; change canInsert to allow
inserts for any valid tableContext when not ClickHouse: keep isClickHouseDriver
check but return true for non-ClickHouse (i.e. set canInsert = !!tableContext &&
(isClickHouseDriver ? isClickHouseMergeTreeEngine(clickhouseEngine) : true));
retain canUpdateDelete behavior (which still uses
canMutateClickHouseTable/hasPrimaryKeys) and ensure generateInsertSQL (which
does not require PKs) remains usable for keyless tables; update references to
isClickHouseDriver, isClickHouseMergeTreeEngine, clickhouseEngine,
primaryKeys/hasPrimaryKeys, and canInsert accordingly.
- Around line 851-865: The fixed 350ms sleep in refreshAfterMutation is
unreliable for ClickHouse; update the ClickHouse-path in refreshAfterMutation
(the block checking tableContext?.driver === "clickhouse") to wait for mutation
completion instead of a hard timeout by either ensuring the ALTER statements
include SETTINGS mutations_sync=1 when executing mutations or by polling
system.mutations (SELECT count() FROM system.mutations WHERE database=... AND
table=... AND is_done=0) until zero pending mutations before calling
onDataRefresh again; use the table/context identifiers available in the
surrounding code to target the correct database/table and keep the existing
runRefresh/onDataRefresh flow once mutations are confirmed complete.
In `@src/lib/i18n/locales/en.ts`:
- Line 351: Replace the flat importSuccess string with a pluralized ICU-style
entry so the toast renders correctly for singular and plural counts; update the
importSuccess locale value (the importSuccess key in locales/en.ts) to use ICU
plural syntax, e.g. a string that uses the count plural form (one vs other) to
produce "Import completed (1 statement)" vs "Import completed (2 statements)".
Ensure the key stays named importSuccess so existing code that reads
importSuccess/count continues to work.
---
Outside diff comments:
In `@src-tauri/src/db/drivers/mysql.rs`:
- Around line 871-908: The current logic calls sqlx::query(...).fetch_all(...)
for statements that may be DDL/no-result, which is wrong; add a
no-result-statement classifier (e.g. is_no_result_statement or extend
is_affected_rows_statement) and if it returns true route the SQL through
sqlx::query(&sql).execute(&self.pool).await (same handling as the existing
affected-rows branch) instead of the fetch_all path; keep the existing
prepared-protocol-unsupported error handling
(is_prepared_protocol_unsupported_error) and the fallback to sqlx::raw_sql only
for statements that were attempted with fetch_all, and ensure
describe_query_columns(&sql).await is only called when fetch_all succeeded and
returned zero rows without being a no-result statement (references:
is_affected_rows_statement, is_prepared_protocol_unsupported_error,
describe_query_columns, executed_with_raw_sql).
---
Nitpick comments:
In @.github/workflows/ci.yml:
- Around line 17-20: The CI step using the Setup Bun action currently sets
bun-version: latest which risks nondeterministic builds; update the "Setup Bun"
step (uses: oven-sh/setup-bun@v2) to pin bun-version to a specific released
version string (e.g., a tested bun release) instead of "latest" so builds are
reproducible and stable.
In `@scripts/test-integration.sh`:
- Around line 17-22: The cleanup block relies on unquoted ${ids} for
word-splitting when passing container IDs to docker rm -f; change it to pipe the
ids into xargs to avoid relying on shell splitting and to handle arbitrary
whitespace safely (use the existing ids assignment and replace the docker rm -f
${ids} call with a robust xargs invocation such as echo "${ids}" | xargs -r
docker rm -f), referencing the variables ids and it_container_prefix and the
docker rm -f invocation so you update that exact command.
In `@src-tauri/Cargo.toml`:
- Around line 48-49: CI jobs fail because GTK dev libs are missing for Rust
tests (pkg-config cannot resolve gdk-3.0) after adding the testcontainers
dev-dependency in Cargo.toml; update the workflow that runs the Rust test job to
install system packages before running cargo test by adding steps to apt-get
install pkg-config and libgtk-3-dev (and any other GTK dev deps) on Linux
runners (or use setup-gtk action), so the Rust tests can successfully find
gdk-3.0 during build.
In `@src-tauri/src/commands/connection.rs`:
- Around line 324-403: create_database_by_id_direct duplicates most of
create_database_by_id; extract the shared logic (validation of payload.name,
driver selection/normalization, unsupported-driver check, the match over driver
that builds SQL and calls super::execute_with_retry_from_app_state, and final
error normalization) into a private helper like
create_database_common(state_ref_getter, id, payload, if_not_exists) or
create_database_internal(id, driver, payload, if_not_exists) that returns
Result<(), String>, then have both create_database_by_id_direct and
create_database_by_id call that helper — keep the driver acquisition differences
(how you read local_db from AppState) outside the helper and pass the normalized
driver string in, and ensure you reuse existing symbols like
validate_database_name, build_*_create_database_sql,
execute_with_retry_from_app_state, and normalize_create_database_error so
behavior is unchanged.
In `@src-tauri/src/commands/mod.rs`:
- Around line 70-107: The code duplicates connection/retry logic between
ensure_connection_with_db_from_app_state and the State<'_, AppState> variant;
extract the shared logic into a single helper (e.g., a new async fn
ensure_connection_shared or ensure_connection_check) that accepts the AppState
reference, connection id, optional database name, and pool key, and performs:
obtaining/cloning local_db (state.local_db.lock().await), verifying local_db
exists, checking db.get_connection_by_id(id).await and calling
state.pool_manager.remove_by_prefix(&id.to_string()).await on failure, applying
database override to the form, and returning the pool_manager.connect result;
then have both ensure_connection_with_db_from_app_state and the State<'_,
AppState> wrapper call that helper to avoid divergent LocalDb existence checks,
pool eviction, and retry behavior.
In `@src-tauri/src/commands/storage.rs`:
- Around line 23-38: The direct helper pattern is duplicated across functions
(e.g., save_query_direct) — create a small macro to generate both the State<'_,
AppState> wrapper and the &AppState direct variant to remove boilerplate; the
macro should accept the base function name, argument list and call-through body
(for example generate save_query(state: State<'_, AppState>, ...) -> Result<...>
that calls save_query_direct(&state, ...)), and replace the existing paired
functions (like save_query and save_query_direct, and the other pairs mentioned
around lines 59-75, 87-94, 106-113) so behavior is unchanged but duplication is
removed.
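One possible shape for that macro, using stand-in types so the sketch compiles without Tauri (in the real code State would be tauri::State<'_, AppState> and the bodies would be the existing *_direct implementations):

```rust
// Stand-ins for the real application types.
struct AppState {
    counter: u32,
}
struct State<'a, T>(&'a T);

// Generates a wrapper taking State<'_, AppState> plus a _direct variant
// taking &AppState, both delegating to the same body.
macro_rules! with_direct {
    ($wrapper:ident, $direct:ident, ($($arg:ident : $ty:ty),*) -> $ret:ty, $body:expr) => {
        fn $direct(state: &AppState, $($arg: $ty),*) -> $ret {
            ($body)(state, $($arg),*)
        }
        fn $wrapper(state: State<'_, AppState>, $($arg: $ty),*) -> $ret {
            $direct(state.0, $($arg),*)
        }
    };
}

// Example expansion: save_query (wrapper) and save_query_direct (&AppState).
with_direct!(save_query, save_query_direct, (n: u32) -> u32,
    |s: &AppState, n: u32| s.counter + n);
```

The real macro would also need async fns and the #[tauri::command] attribute on the wrapper, which this sketch omits.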
In `@src-tauri/tests/common/mssql_context.rs`:
- Around line 40-83: The function mssql_form_from_test_context currently ignores
the incoming docker parameter (let _ = docker;) and instead leaks a new
Cli::default() when initializing MSSQL_SHARED; either use the provided docker if
Some or document why a leaked Cli::default() is required. Update
mssql_form_from_test_context to check the docker argument and use it for
creating the runnable/container when present, falling back to
Box::leak(Box::new(Cli::default())) only when None, or add a short clarifying
comment above the let _ = docker; line explaining why the passed-in docker is
intentionally ignored and why creating/leaking a Cli::default() is necessary;
reference mssql_form_from_test_context, docker, MSSQL_SHARED and the
Box::leak(Cli::default()) use so reviewers can find the change.
In `@src-tauri/tests/common/mysql_context.rs`:
- Around line 47-58: Collapse the duplicate environment reads by extracting a
single helper (e.g., read_mysql_env_vars or build_mysql_connection_from_env)
that reads each MYSQL_* once and returns either a partial struct or a
ConnectionForm; then change mysql_form_from_local_env and
apply_mysql_env_overrides to use that helper instead of re-reading the env vars.
Specifically, move the shared env_or/env_i64 calls for MYSQL_HOST, MYSQL_PORT,
MYSQL_USER, MYSQL_PASSWORD, MYSQL_DB into the new helper and have
mysql_form_from_local_env simply initialize the ConnectionForm from the helper
(and have apply_mysql_env_overrides accept the already-read values or be
removed), and apply the same refactor for the sibling context block at 61-79 to
ensure a single env-read path.
In `@src-tauri/tests/common/shared.rs`:
- Around line 17-29: wait_for_port currently calls resolve_socket_addr which
panics on DNS failures; change wait_for_port to handle resolution errors
gracefully by attempting to resolve inside the retry loop (call
resolve_socket_addr and handle its error) and only attempt
TcpStream::connect_timeout when resolution succeeds, sleeping and retrying
resolution until the overall timeout elapses; preserve the final panic message
"timed out waiting for {}:{} to accept connections" so DNS resolution failures
become part of the timeout instead of immediate panics. Reference: wait_for_port
and resolve_socket_addr.
In `@src-tauri/tests/mysql_command_integration.rs`:
- Around line 12-18: Extract the duplicate unique_table_name function into a
shared test utilities module (keep the function name unique_table_name and
signature fn unique_table_name(prefix: &str) -> String) and have both
mysql_command_integration.rs and the Postgres test import and call that shared
helper; implement the same millisecond-based logic (or optionally add extra
entropy like a short random suffix) inside the shared helper so both test files
use a single source of truth and avoid timestamp-collision inconsistencies.
In `@src-tauri/tests/mysql_stateful_command_integration.rs`:
- Around line 19-25: unique_name currently builds identifiers from millisecond
timestamps and can collide under parallel/high-throughput tests; update
unique_name to append a small random component or a thread-safe monotonic
counter to the timestamp (e.g., use the rand crate to add a short hex/random u32
or use a static AtomicU64 incremented per call) so names are effectively unique
even within the same millisecond; modify the unique_name function to combine
SystemTime::now().as_millis() with the chosen random/counter value and return
that string.
- Around line 67-75: The test helper drop_database_if_exists currently
interpolates db_name directly into SQL, risking SQL injection and inconsistent
behavior; change it to obtain a safely quoted identifier from the driver (e.g.,
call the driver's identifier-quoting/escaping method after MysqlDriver::connect)
and use that quoted value when building the DROP DATABASE statement passed to
execute_query so the SQL uses a properly escaped identifier instead of raw
interpolation.
In `@src-tauri/tests/postgres_command_integration.rs`:
- Around line 11-17: unique_table_name currently builds names using millisecond
timestamps which can collide in parallel runs; modify unique_table_name to
append a high-entropy suffix (for example a UUID v4 or a small random hex/string
from the rand crate or include as_nanos()) to guarantee uniqueness across
concurrent tests—update the function (and add necessary imports) so it returns
format!("{}_{}_{}", prefix, millis, <random_or_uuid>) using uuid::Uuid::new_v4()
or rand::thread_rng() to generate the suffix.
In `@src/components/business/Metadata/TableMetadataView.tsx`:
- Around line 210-212: In TableMetadataView, make the label JSX formatting
consistent: revert the multi-line JSX blocks for the labels "Partition Key", the
similar label at 220-222, and the one at 240-242 back to single-line JSX to
match the existing single-line labels like "Engine", "Primary Key Expr", and
"TTL"; update the corresponding div elements with className "text-xs
text-muted-foreground" in the TableMetadataView component so each label is
written on one line, preserving exact text and spacing.
In `@src/components/business/Sidebar/ConnectionList.tsx`:
- Around line 2401-2558: The database/schema/table rendering logic is duplicated
across the showSavedQueriesInTree split; extract a shared renderer to avoid
drift by creating a reusable function (e.g., renderDatabaseTree or
renderDatabaseGroup) that accepts a base level and the connection/database
context and reuses existing helpers like getTableNodeKey, renderTableNode (move
this inner function out), tableNodeRefs, toggleTable, handleTableClick,
handleTableExport and handleCreateQueryFromContext; replace both duplicated
blocks with calls to this new renderer so selection, context menu, and behaviors
remain consistent across both code paths.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 45d3fafd-a944-4f54-b8a4-35e1f04c0a60
⛔ Files ignored due to path filters (1)
src-tauri/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (76)
- .github/workflows/ci.yml
- .gitignore
- CHANGELOG.md
- README.md
- README_CN.md
- docs/zh/Development/DEVELOPMENT.md
- package.json
- scripts/db-onboard.sh
- scripts/test-integration.sh
- src-tauri/Cargo.toml
- src-tauri/src/ai/prompt.rs
- src-tauri/src/commands/ai.rs
- src-tauri/src/commands/connection.rs
- src-tauri/src/commands/metadata.rs
- src-tauri/src/commands/mod.rs
- src-tauri/src/commands/query.rs
- src-tauri/src/commands/storage.rs
- src-tauri/src/commands/transfer.rs
- src-tauri/src/connection_input/mod.rs
- src-tauri/src/db/drivers/clickhouse.rs
- src-tauri/src/db/drivers/duckdb.rs
- src-tauri/src/db/drivers/mod.rs
- src-tauri/src/db/drivers/mssql.rs
- src-tauri/src/db/drivers/mysql.rs
- src-tauri/src/db/drivers/postgres.rs
- src-tauri/src/db/drivers/sqlite.rs
- src-tauri/src/db/local.rs
- src-tauri/src/lib.rs
- src-tauri/src/models/mod.rs
- src-tauri/src/ssh.rs
- src-tauri/tauri.conf.json
- src-tauri/tests/clickhouse_integration.rs
- src-tauri/tests/common/clickhouse_context.rs
- src-tauri/tests/common/duckdb_context.rs
- src-tauri/tests/common/mariadb_context.rs
- src-tauri/tests/common/mssql_context.rs
- src-tauri/tests/common/mysql_context.rs
- src-tauri/tests/common/postgres_context.rs
- src-tauri/tests/common/shared.rs
- src-tauri/tests/duckdb_integration.rs
- src-tauri/tests/mariadb_integration.rs
- src-tauri/tests/mssql_integration.rs
- src-tauri/tests/mysql_command_integration.rs
- src-tauri/tests/mysql_integration.rs
- src-tauri/tests/mysql_stateful_command_integration.rs
- src-tauri/tests/postgres_command_integration.rs
- src-tauri/tests/postgres_integration.rs
- src-tauri/tests/postgres_stateful_command_integration.rs
- src-tauri/tests/sqlite_integration.rs
- src/App.tsx
- src/components/business/DataGrid/TableView.tsx
- src/components/business/DataGrid/tableView/utils.ts
- src/components/business/DataGrid/tableView/utils.unit.test.ts
- src/components/business/Editor/SqlEditor.tsx
- src/components/business/Editor/codemirrorTheme.ts
- src/components/business/Metadata/TableMetadataView.tsx
- src/components/business/Sidebar/ConnectionList.tsx
- src/components/business/Sidebar/SavedQueriesList.tsx
- src/components/business/Sidebar/Sidebar.tsx
- src/components/business/Sidebar/connection-list/TreeNode.tsx
- src/components/business/Sidebar/connection-list/helpers.tsx
- src/components/business/SqlLogs/SqlExecutionLogsDialog.tsx
- src/components/settings/LanguageSelector.tsx
- src/components/settings/SettingsDialog.tsx
- src/components/updater-checker.tsx
- src/lib/connection-form/rules.ts
- src/lib/connection-form/validate.ts
- src/lib/connection-form/validate.unit.test.ts
- src/lib/i18n/locales/en.ts
- src/lib/i18n/locales/ja.ts
- src/lib/i18n/locales/zh.ts
- src/lib/queryExecutionState.unit.test.ts
- src/services/api.ts
- src/services/mocks.service.test.ts
- src/services/mocks.ts
- src/services/updater.ts
💤 Files with no reviewable changes (1)
- CHANGELOG.md
| jobs: | ||
| test: | ||
| runs-on: ubuntu-22.04 | ||
| timeout-minutes: 40 |
Add explicit permissions block to limit GITHUB_TOKEN scope.
The workflow does not limit the permissions of the GITHUB_TOKEN. For security best practices, add a minimal permissions block.
🔒 Proposed fix
jobs:
test:
runs-on: ubuntu-22.04
timeout-minutes: 40
+
+permissions:
+ contents: read
Or at the workflow level:
on:
pull_request:
branches:
- main
+permissions:
+ contents: read
+
jobs:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/ci.yml around lines 8 - 11, Add a minimal permissions
block to restrict the GITHUB_TOKEN scope by adding a permissions: mapping (for
example at the workflow root or under the job "test") that explicitly grants
only the least privileges needed (e.g., contents: read and actions: read or
other specific scopes your tests require) instead of the default full
permissions; update the workflow keys near the existing jobs/test configuration
(where "runs-on" and "timeout-minutes" are defined) to include this new
permissions block referencing GITHUB_TOKEN usage.
| SqlScanState::SingleQuoted => { | ||
| let ch = chars[i]; | ||
| current.push(ch); | ||
| if ch == '\\' { | ||
| if let Some(next) = chars.get(i + 1) { | ||
| current.push(*next); | ||
| i += 2; | ||
| continue; | ||
| } | ||
| } | ||
| if ch == '\'' { | ||
| if chars.get(i + 1) == Some(&'\'') { | ||
| current.push('\''); | ||
| i += 2; | ||
| continue; | ||
| } | ||
| state = SqlScanState::Normal; | ||
| } | ||
| i += 1; | ||
| } |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if there's any documentation about escape handling or dialect-specific parsing
rg -n -C2 "backslash|escape" src-tauri/src/commands/transfer.rs
Repository: codeErrorSleep/dbpaw
Length of output: 1165
🏁 Script executed:
# View the context around the target lines
sed -n '1000,1100p' src-tauri/src/commands/transfer.rs | head -100
Length of output: 3535
🏁 Script executed:
# Look for SqlScanState definition and how this parser is used
rg -n "SqlScanState|fn.*scan|struct.*Scan" src-tauri/src/commands/transfer.rs | head -20
Length of output: 1175
🏁 Script executed:
# Check if there's any SQL dialect detection or configuration
rg -n -i "dialect|driver|mysql|postgres|sqlite" src-tauri/src/commands/transfer.rs | head -30
Length of output: 1745
🏁 Script executed:
# Find the SqlScanState enum definition
rg -n "enum SqlScanState" -A 15 src-tauri/src/commands/transfer.rs
Length of output: 603
🏁 Script executed:
# Find the function containing this parsing logic
rg -n "fn.*split.*sql|fn split_statements" -B 2 -A 30 src-tauri/src/commands/transfer.rs | head -50
Length of output: 46
🏁 Script executed:
# Check what drivers are supported and how they're determined
rg -n "normalize_driver_name" -A 10 src-tauri/src/commands/transfer.rs
Length of output: 2204
🏁 Script executed:
# View the full parse_sql_statements function to see if backslash handling is conditional
sed -n '977,1090p' src-tauri/src/commands/transfer.rs
Length of output: 4100
🏁 Script executed:
# Check where parse_sql_statements is called and what drivers are passed
rg -n "parse_sql_statements" src-tauri/src/commands/transfer.rs -B 2 -A 2
Length of output: 1445
🏁 Script executed:
# Search for test cases involving backslash in strings or postgres quote handling
rg -n "parse_sql_statements|backslash|\\\\'" src-tauri/src/commands/transfer.rs | grep -i "test\|backslash\|'\\\\"
Length of output: 571
🏁 Script executed:
# Check the test section for any postgres-specific quote tests
sed -n '1560,1650p' src-tauri/src/commands/transfer.rs
Length of output: 3494
Backslash escape handling should be MySQL-specific.
The single-quoted string parser unconditionally applies backslash escape handling (lines 1047-1053), but this is MySQL-specific behavior. PostgreSQL and SQLite treat backslash as a literal character in single-quoted strings by default and will cause incorrect parsing when backslashes appear in data.
The backslash escape logic should be conditional on the driver, similar to the existing mysql_style_hash_comment check on line 978:
INSERT INTO t VALUES ('path\to\file');
In PostgreSQL/SQLite, the backslash should be preserved as-is; in MySQL, it should skip the next character. The doubled single-quote escape (lines 1054-1059) is correctly handled unconditionally, as it's standard SQL.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src-tauri/src/commands/transfer.rs` around lines 1044 - 1063, The
SingleQuoted state in the SQL scanner (SqlScanState::SingleQuoted) currently
always treats backslash as an escape (using chars, current, i), which is
MySQL-specific; change that logic to only perform the backslash escape when the
driver flag for MySQL-style escaping is enabled (re-use the existing
mysql_style_hash_comment or an equivalent mysql_style_backslash_escape boolean
available in the scanner/context), otherwise leave backslashes as literal
characters and just advance by one. Keep the unconditional handling of doubled
single-quotes (the existing check comparing chars.get(i + 1) to Some(&'\''))
intact.
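The dialect-conditional escape handling described above can be sketched with a simplified statement splitter. This is a standalone illustration, not the project's actual `SqlScanState` scanner: `split_sql` and its `mysql_backslash_escape` flag are hypothetical names, and only the single-quoted-string states are modeled.

```rust
/// Minimal sketch of dialect-aware statement splitting: backslash escapes apply
/// inside single-quoted strings only when the MySQL-style flag is set, while
/// doubled single quotes (standard SQL) are always treated as an escaped quote.
fn split_sql(input: &str, mysql_backslash_escape: bool) -> Vec<String> {
    let chars: Vec<char> = input.chars().collect();
    let mut stmts = Vec::new();
    let mut current = String::new();
    let mut in_string = false;
    let mut i = 0;
    while i < chars.len() {
        let ch = chars[i];
        if in_string {
            current.push(ch);
            // MySQL-only: backslash consumes the next character.
            if ch == '\\' && mysql_backslash_escape {
                if let Some(next) = chars.get(i + 1) {
                    current.push(*next);
                    i += 2;
                    continue;
                }
            }
            if ch == '\'' {
                // Doubled quote is standard SQL in every dialect.
                if chars.get(i + 1) == Some(&'\'') {
                    current.push('\'');
                    i += 2;
                    continue;
                }
                in_string = false;
            }
        } else if ch == '\'' {
            in_string = true;
            current.push(ch);
        } else if ch == ';' {
            let stmt = current.trim().to_string();
            if !stmt.is_empty() {
                stmts.push(stmt);
            }
            current.clear();
        } else {
            current.push(ch);
        }
        i += 1;
    }
    let tail = current.trim().to_string();
    if !tail.is_empty() {
        stmts.push(tail);
    }
    stmts
}

fn main() {
    // PostgreSQL/SQLite: backslash is literal, so the quote closes the string.
    assert_eq!(split_sql("SELECT 'a\\'; SELECT 1", false).len(), 2);
    // MySQL: backslash escapes the quote, so the string swallows the semicolon.
    assert_eq!(split_sql("SELECT 'a\\'; SELECT 1", true).len(), 1);
    // Doubled quotes split identically in both modes.
    assert_eq!(split_sql("SELECT 'it''s'; SELECT 2", true).len(), 2);
}
```

The same input splits into a different number of statements depending on the flag, which is exactly the divergence the review warns about when the real parser applies backslash handling unconditionally.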
| form.password = trim_preserve_empty(form.password); | ||
| form.ssl_ca_cert = trim_preserve_empty(form.ssl_ca_cert); | ||
| form.file_path = trim_to_option(form.file_path); | ||
| form.ssh_host = trim_to_option(form.ssh_host); | ||
| form.ssh_username = trim_to_option(form.ssh_username); | ||
| form.ssh_password = trim_to_option(form.ssh_password); | ||
| form.ssh_password = trim_preserve_empty(form.ssh_password); |
Empty SSH password now bypasses SSH auth validation.
trim_preserve_empty converts whitespace-only ssh_password to Some(""), and the later check only rejects None. This allows an effectively empty password to pass validation when no SSH key path is provided.
Suggested fix
- form.ssh_password = trim_preserve_empty(form.ssh_password);
+ form.ssh_password = trim_to_option(form.ssh_password);
Or keep trim_preserve_empty and tighten the auth check:
- if form.ssh_password.is_none() && form.ssh_key_path.is_none() {
+ let ssh_password_missing = form
+ .ssh_password
+ .as_deref()
+ .map(|v| v.is_empty())
+ .unwrap_or(true);
+ if ssh_password_missing && form.ssh_key_path.is_none() {
return Err("[VALIDATION_ERROR] ssh password or ssh key path is required".to_string());
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| form.password = trim_preserve_empty(form.password); | |
| form.ssl_ca_cert = trim_preserve_empty(form.ssl_ca_cert); | |
| form.file_path = trim_to_option(form.file_path); | |
| form.ssh_host = trim_to_option(form.ssh_host); | |
| form.ssh_username = trim_to_option(form.ssh_username); | |
| form.ssh_password = trim_to_option(form.ssh_password); | |
| form.ssh_password = trim_preserve_empty(form.ssh_password); | |
| form.password = trim_preserve_empty(form.password); | |
| form.ssl_ca_cert = trim_preserve_empty(form.ssl_ca_cert); | |
| form.file_path = trim_to_option(form.file_path); | |
| form.ssh_host = trim_to_option(form.ssh_host); | |
| form.ssh_username = trim_to_option(form.ssh_username); | |
| form.ssh_password = trim_to_option(form.ssh_password); |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src-tauri/src/connection_input/mod.rs` around lines 49 - 54, The SSH password
handling currently uses trim_preserve_empty for form.ssh_password so whitespace
becomes Some(""), which bypasses the later auth check that only rejects None;
either switch to trim_to_option(form.ssh_password) so empty/whitespace passwords
become None, or keep trim_preserve_empty but update the SSH auth validation
(where form.ssh_password is checked alongside form.ssh_key_path) to treat
Some("") as missing (e.g., require form.ssh_password.as_ref().map(|s|
!s.is_empty()).unwrap_or(false) or normalize with .filter(|s| !s.is_empty())
before validation) so an effectively empty password does not bypass
authentication checks.
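The tightened auth check from the second variant can be sketched in isolation. `ssh_auth_missing` is a hypothetical helper for illustration; the real code does this check inline against `form.ssh_password` and `form.ssh_key_path`:

```rust
/// Treat Some("") the same as None: after trim_preserve_empty a
/// whitespace-only password becomes Some(""), which must not be allowed
/// to satisfy the SSH auth requirement on its own.
fn ssh_auth_missing(ssh_password: &Option<String>, ssh_key_path: &Option<String>) -> bool {
    let password_missing = ssh_password
        .as_deref()
        .map(|p| p.is_empty())
        .unwrap_or(true);
    password_missing && ssh_key_path.is_none()
}

fn main() {
    // Empty password with no key path must still fail validation.
    assert!(ssh_auth_missing(&Some(String::new()), &None));
    assert!(ssh_auth_missing(&None, &None));
    // A real password or a key path satisfies the requirement.
    assert!(!ssh_auth_missing(&Some("secret".into()), &None));
    assert!(!ssh_auth_missing(&None, &Some("/path/to/key".into())));
}
```

When `ssh_auth_missing` returns true, the caller would emit the existing `[VALIDATION_ERROR] ssh password or ssh key path is required` error.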
| pub(crate) fn conn_failed_error(e: &dyn std::fmt::Display) -> String { | ||
| let raw = e.to_string(); | ||
| let lower = raw.to_ascii_lowercase(); | ||
|
|
||
| let hint = if lower.contains("handshake") | ||
| || lower.contains("fatal alert") | ||
| || lower.contains("tls") | ||
| || lower.contains("ssl") | ||
| || lower.contains("certificate") | ||
| { | ||
| "hint: TLS/SSL handshake failed — the server may use a TLS version or cipher suite \ | ||
| incompatible with the client (TLS 1.2+ required); try disabling SSL in the connection settings" | ||
| } else if lower.contains("access denied") | ||
| || lower.contains("authentication") | ||
| || lower.contains("password") | ||
| || lower.contains("login failed") | ||
| || lower.contains("invalid password") | ||
| || lower.contains("1045") | ||
| { | ||
| "hint: authentication failed — verify the username/password are correct; \ | ||
| if they contain special characters they must be URL-encoded" | ||
| } else if lower.contains("connection refused") | ||
| || lower.contains("timed out") | ||
| || lower.contains("timeout") | ||
| || lower.contains("broken pipe") | ||
| || lower.contains("network unreachable") | ||
| { | ||
| "hint: could not reach the server — check host, port, firewall rules, and SSH tunnel settings" | ||
| } else if lower.contains("name resolution") | ||
| || lower.contains("no such host") | ||
| || lower.contains("failed to lookup") | ||
| || lower.contains("dns") | ||
| { | ||
| "hint: hostname could not be resolved — check that the host address is correct" | ||
| } else { | ||
| "hint: check host, port, credentials, and SSL settings" | ||
| }; | ||
|
|
||
| format!("[CONN_FAILED] {raw} ({hint})") | ||
| } |
Avoid returning raw connection error text directly to users.
This currently includes the full upstream error string in [CONN_FAILED] ..., which can leak sensitive connection details (including credentials in some driver error paths). Prefer a sanitized message for user-facing errors and keep raw detail for internal logs only.
Suggested fix
- format!("[CONN_FAILED] {raw} ({hint})")
+ format!("[CONN_FAILED] {hint}")
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| pub(crate) fn conn_failed_error(e: &dyn std::fmt::Display) -> String { | |
| let raw = e.to_string(); | |
| let lower = raw.to_ascii_lowercase(); | |
| let hint = if lower.contains("handshake") | |
| || lower.contains("fatal alert") | |
| || lower.contains("tls") | |
| || lower.contains("ssl") | |
| || lower.contains("certificate") | |
| { | |
| "hint: TLS/SSL handshake failed — the server may use a TLS version or cipher suite \ | |
| incompatible with the client (TLS 1.2+ required); try disabling SSL in the connection settings" | |
| } else if lower.contains("access denied") | |
| || lower.contains("authentication") | |
| || lower.contains("password") | |
| || lower.contains("login failed") | |
| || lower.contains("invalid password") | |
| || lower.contains("1045") | |
| { | |
| "hint: authentication failed — verify the username/password are correct; \ | |
| if they contain special characters they must be URL-encoded" | |
| } else if lower.contains("connection refused") | |
| || lower.contains("timed out") | |
| || lower.contains("timeout") | |
| || lower.contains("broken pipe") | |
| || lower.contains("network unreachable") | |
| { | |
| "hint: could not reach the server — check host, port, firewall rules, and SSH tunnel settings" | |
| } else if lower.contains("name resolution") | |
| || lower.contains("no such host") | |
| || lower.contains("failed to lookup") | |
| || lower.contains("dns") | |
| { | |
| "hint: hostname could not be resolved — check that the host address is correct" | |
| } else { | |
| "hint: check host, port, credentials, and SSL settings" | |
| }; | |
| format!("[CONN_FAILED] {raw} ({hint})") | |
| } | |
| pub(crate) fn conn_failed_error(e: &dyn std::fmt::Display) -> String { | |
| let raw = e.to_string(); | |
| let lower = raw.to_ascii_lowercase(); | |
| let hint = if lower.contains("handshake") | |
| || lower.contains("fatal alert") | |
| || lower.contains("tls") | |
| || lower.contains("ssl") | |
| || lower.contains("certificate") | |
| { | |
| "hint: TLS/SSL handshake failed — the server may use a TLS version or cipher suite \ | |
| incompatible with the client (TLS 1.2+ required); try disabling SSL in the connection settings" | |
| } else if lower.contains("access denied") | |
| || lower.contains("authentication") | |
| || lower.contains("password") | |
| || lower.contains("login failed") | |
| || lower.contains("invalid password") | |
| || lower.contains("1045") | |
| { | |
| "hint: authentication failed — verify the username/password are correct; \ | |
| if they contain special characters they must be URL-encoded" | |
| } else if lower.contains("connection refused") | |
| || lower.contains("timed out") | |
| || lower.contains("timeout") | |
| || lower.contains("broken pipe") | |
| || lower.contains("network unreachable") | |
| { | |
| "hint: could not reach the server — check host, port, firewall rules, and SSH tunnel settings" | |
| } else if lower.contains("name resolution") | |
| || lower.contains("no such host") | |
| || lower.contains("failed to lookup") | |
| || lower.contains("dns") | |
| { | |
| "hint: hostname could not be resolved — check that the host address is correct" | |
| } else { | |
| "hint: check host, port, credentials, and SSL settings" | |
| }; | |
| format!("[CONN_FAILED] {hint}") | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src-tauri/src/db/drivers/mod.rs` around lines 23 - 62, The conn_failed_error
function currently returns the full upstream error string to users; change it to
return a sanitized, generic user-facing message (e.g., "[CONN_FAILED] Connection
failed — check host, port, credentials, and SSL settings (see logs for details)"
or keep the existing contextual hint text but omit the raw error contents),
while logging the original raw error for internal diagnostics using your app
logger (e.g., log::error or tracing::error) before formatting the returned
string; keep the existing hint-selection logic in conn_failed_error but ensure
the formatted return does not include the raw variable or any sensitive
fragments.
| async fn connect_with_config(config: Config) -> Result<Client<Compat<TcpStream>>, String> { | ||
| let connect_future = async { | ||
| let tcp = TcpStream::connect(config.get_addr()) | ||
| .await | ||
| .map_err(|e| format!("{}", e))?; | ||
| tcp.set_nodelay(true).map_err(|e| format!("{}", e))?; | ||
| Ok::<TcpStream, String>(tcp) | ||
| }; | ||
|
|
||
| let tcp = tokio::time::timeout(std::time::Duration::from_secs(10), connect_future) | ||
| .await | ||
| .map_err(|_| "Connection timed out".to_string())? | ||
| .map_err(|e| format!("{}", e))?; | ||
|
|
||
| Client::connect(config, tcp.compat_write()) | ||
| .await | ||
| .map_err(|e| format!("{}", e)) |
🧩 Analysis chain
🏁 Script executed:
find . -name "mssql.rs" -type f
Length of output: 99
🏁 Script executed:
cat -n src-tauri/src/db/drivers/mssql.rs | sed -n '200,250p'
Length of output: 2048
🌐 Web query:
tiberius Client::connect TDS TLS login handshake documentation
💡 Result:
Tiberius is a pure-Rust asynchronous TDS (Tabular Data Stream) client library for Microsoft SQL Server. The Client::connect method establishes a connection to SQL Server over TDS, handling TLS encryption and the login handshake as follows: ## Usage Example (from official docs) use tiberius::{Client, Config, AuthMethod}; use tokio::net::TcpStream; use tokio_util::compat::TokioAsyncWriteCompatExt; let mut config = Config::new; config.host("localhost"); config.port(1433); config.authentication(AuthMethod::sql_server("SA", "password")); let tcp = TcpStream::connect(config.get_addr).await?; tcp.set_nodelay(true)?; let mut client = Client::connect(config, tcp.compat_write).await?; ## Connection Process in Client::connect(config, tcp_stream) 1. TCP Connection: User establishes plain TCP stream (e.g., via tokio::net::TcpStream::connect) to SQL Server (default port 1433, or dynamic via SQL Browser for named instances). 2. PRELOGIN (TDS Packet Type 0x12): Tiberius sends a PRELOGIN message to negotiate capabilities, including encryption level (ENCRYPT option). This follows TDS protocol specs. 3. TLS Handshake (if encryption negotiated): - Based on Config::encryption setting (defaults to Required with TLS features enabled). - Enum EncryptionLevel: | Level | Value | Behavior | |-------|-------|----------| | Off | 0 | Encrypt only login handshake | | On | 1 | Encrypt all if server supports | | NotSupported | 2 | No encryption | | Required | 3 | Encrypt all, fail otherwise (default) | - TLS is layered over the TCP stream using native-tls (default, OS TLS: OpenSSL/Schannel/Secure Transport) or rustls (via feature flag). - Config::trust_cert disables server cert validation (insecure for prod); Config::trust_cert_ca(path) adds custom CA. - Handshake occurs via TDS packets encapsulating TLS messages until TLS layer signals completion. 4. LOGIN7 Message: After TLS (if any), sends LOGIN7 TDS packet with authentication details (e.g., SQL Server auth, Integrated Windows auth, AAD token). 
Server responds with login acknowledgment. 5. Post-Login: Handles redirects (Error::Routing), enters ready state for queries. This implements TDS 7.2+ protocol faithfully, including [MS-TDS] states like "Sent TLS/SSL Negotiation Packet State". Source code in prisma/tiberius repo (src/client/connection.rs) orchestrates prelogin, TLS stream creation, and login. For full details, see: - Official Docs: https://docs.rs/tiberius/latest/tiberius/struct.Client.html - GitHub: https://github.com/prisma/tiberius - MS-TDS Spec: https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-tds/
Citations:
- 1: https://docs.rs/tiberius-rustls/
- 2: https://docs.rs/tiberius/latest/tiberius/struct.Client.html
- 3: https://lib.rs/crates/tiberius
- 4: https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-tds/d62e225b-d865-4ccc-8f73-de1ef49e30d4
- 5: https://github.com/prisma/tiberius
- 6: https://docs.rs/tiberius/latest/tiberius/enum.EncryptionLevel.html
- 7: https://prisma.github.io/prisma-engines/doc/tiberius/enum.EncryptionLevel.html
- 8: https://docs.rs/tiberius/latest/tiberius/struct.Config.html
Wrap the full SQL Server connection sequence in the timeout.
The timeout on lines 221–224 only protects TcpStream::connect. After the TCP connection succeeds, Client::connect on lines 226–228 performs the PRELOGIN negotiation, TLS handshake, and LOGIN7 authentication—all of which involve network round-trips and can stall indefinitely if the server is unresponsive or misconfigured. A server that accepts TCP but stalls in TLS negotiation or authentication will hang this function without a timeout. Wrap both the TCP connection and Client::connect in a single timeout to protect the entire handshake sequence.
⏱️ Suggested fix
- let connect_future = async {
- let tcp = TcpStream::connect(config.get_addr())
- .await
- .map_err(|e| format!("{}", e))?;
- tcp.set_nodelay(true).map_err(|e| format!("{}", e))?;
- Ok::<TcpStream, String>(tcp)
- };
-
- let tcp = tokio::time::timeout(std::time::Duration::from_secs(10), connect_future)
- .await
- .map_err(|_| "Connection timed out".to_string())?
- .map_err(|e| format!("{}", e))?;
-
- Client::connect(config, tcp.compat_write())
- .await
- .map_err(|e| format!("{}", e))
+ tokio::time::timeout(std::time::Duration::from_secs(10), async move {
+ let tcp = TcpStream::connect(config.get_addr())
+ .await
+ .map_err(|e| e.to_string())?;
+ tcp.set_nodelay(true).map_err(|e| e.to_string())?;
+ Client::connect(config, tcp.compat_write())
+ .await
+ .map_err(|e| e.to_string())
+ })
+ .await
+ .map_err(|_| "Connection timed out".to_string())?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src-tauri/src/db/drivers/mssql.rs` around lines 212 - 228, The
connect_with_config function currently only timeouts the TcpStream::connect
phase; move the tokio::time::timeout to wrap the entire async sequence that
includes both establishing the TCP socket and calling Client::connect so the
PRELOGIN/TLS/LOGIN7 handshake is also bounded; implement a timeout around an
async block that first creates/configures the TcpStream (TcpStream::connect and
set_nodelay) and then awaits Client::connect(config, tcp.compat_write()),
propagating errors with map_err(format!) and returning the same
Result<Client<Compat<TcpStream>>, String> on timeout or underlying errors.
| driver.close().await; | ||
|
|
||
| let _ = std::fs::remove_file(db_path); | ||
| } |
Don't delete caller-supplied SQLite files during test cleanup.
sqlite_test_path() returns SQLITE_IT_DB_PATH verbatim when that env var is set. With the new unconditional remove_file(db_path) cleanup, running the suite against an existing SQLite database will delete that file after the test finishes. Only remove paths the test created itself, or copy external DBs into a temp file first. The same cleanup pattern repeats in the added tests below.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src-tauri/tests/sqlite_integration.rs` around lines 146 - 149, Tests are
deleting caller-supplied SQLite files by unconditionally calling
std::fs::remove_file(db_path) after driver.close(); change the cleanup to only
remove files the test created: either (A) detect when sqlite_test_path()
returned the external SQLITE_IT_DB_PATH (check env::var("SQLITE_IT_DB_PATH") or
compare returned path) and skip remove_file in that case, or (B) copy the
external DB into a temp file at test startup and use that temp path for driver
(so remove_file is safe). Update all occurrences around driver.close() /
remove_file(db_path) to implement one of these approaches and preserve the
existing.sqlite when SQLITE_IT_DB_PATH is supplied.
| <button | ||
| type="button" | ||
| aria-label={t("app.tab.closeAria", { | ||
| title, | ||
| })} | ||
| className="absolute right-1 top-1/2 -translate-y-1/2 opacity-0 group-hover:opacity-100 p-1 hover:bg-accent rounded-sm cursor-pointer transition-opacity" | ||
| onClick={(e) => { | ||
| e.stopPropagation(); | ||
| handleCloseTab(tab.id); | ||
| }} | ||
| > |
Make the tab close button visible on keyboard focus.
This button is still tabbable while opacity-0, so keyboard users can land on an invisible control. Add group-focus-within:opacity-100 / focus-visible:opacity-100 and a visible focus ring.
♿ Suggested tweak
- className="absolute right-1 top-1/2 -translate-y-1/2 opacity-0 group-hover:opacity-100 p-1 hover:bg-accent rounded-sm cursor-pointer transition-opacity"
+ className="absolute right-1 top-1/2 -translate-y-1/2 opacity-0 group-hover:opacity-100 group-focus-within:opacity-100 focus-visible:opacity-100 p-1 hover:bg-accent rounded-sm cursor-pointer transition-opacity focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| <button | |
| type="button" | |
| aria-label={t("app.tab.closeAria", { | |
| title, | |
| })} | |
| className="absolute right-1 top-1/2 -translate-y-1/2 opacity-0 group-hover:opacity-100 p-1 hover:bg-accent rounded-sm cursor-pointer transition-opacity" | |
| onClick={(e) => { | |
| e.stopPropagation(); | |
| handleCloseTab(tab.id); | |
| }} | |
| > | |
| <button | |
| type="button" | |
| aria-label={t("app.tab.closeAria", { | |
| title, | |
| })} | |
| className="absolute right-1 top-1/2 -translate-y-1/2 opacity-0 group-hover:opacity-100 group-focus-within:opacity-100 focus-visible:opacity-100 p-1 hover:bg-accent rounded-sm cursor-pointer transition-opacity focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring" | |
| onClick={(e) => { | |
| e.stopPropagation(); | |
| handleCloseTab(tab.id); | |
| }} | |
| > |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/App.tsx` around lines 1495 - 1505, The tab close button is tabbable but
starts with opacity-0, making it invisible to keyboard users; update the button
element (the onClick handler that calls handleCloseTab and the className string)
to include visibility classes such as group-focus-within:opacity-100 and
focus-visible:opacity-100 and add a visible focus ring (e.g.,
focus-visible:outline or ring classes) so the control becomes visible and
clearly focused for keyboard users; keep the existing hover behavior and ensure
you only change the className on that button element.
| const isClickHouseDriver = tableContext?.driver === "clickhouse"; | ||
| const hasPrimaryKeys = primaryKeys.length > 0; | ||
| const canInsert = !!tableContext && | ||
| (isClickHouseDriver | ||
| ? isClickHouseMergeTreeEngine(clickhouseEngine) | ||
| : hasPrimaryKeys); | ||
| const canUpdateDelete = !!tableContext && | ||
| (isClickHouseDriver | ||
| ? canMutateClickHouseTable(clickhouseEngine, primaryKeys) | ||
| : hasPrimaryKeys); |
Don’t require primary keys for inserts on non-ClickHouse tables.
canInsert now depends on hasPrimaryKeys for every non-ClickHouse driver. That hides Add and marks keyless tables as read-only even though generateInsertSQL() does not rely on a primary key at all, so this regresses inserts for valid tables without PKs.
➕ Suggested fix
- const canInsert = !!tableContext &&
- (isClickHouseDriver
- ? isClickHouseMergeTreeEngine(clickhouseEngine)
- : hasPrimaryKeys);
+ const canInsert =
+ !!tableContext &&
+ (isClickHouseDriver
+ ? isClickHouseMergeTreeEngine(clickhouseEngine)
+ : true);📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
const isClickHouseDriver = tableContext?.driver === "clickhouse";
const hasPrimaryKeys = primaryKeys.length > 0;
const canInsert =
  !!tableContext &&
  (isClickHouseDriver
    ? isClickHouseMergeTreeEngine(clickhouseEngine)
    : true);
const canUpdateDelete = !!tableContext &&
  (isClickHouseDriver
    ? canMutateClickHouseTable(clickhouseEngine, primaryKeys)
    : hasPrimaryKeys);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/business/DataGrid/TableView.tsx` around lines 457 - 466, The
current canInsert incorrectly blocks inserts for non-ClickHouse drivers by
gating on hasPrimaryKeys; change canInsert to allow inserts for any valid
tableContext when not ClickHouse: keep isClickHouseDriver check but return true
for non-ClickHouse (i.e. set canInsert = !!tableContext && (isClickHouseDriver ?
isClickHouseMergeTreeEngine(clickhouseEngine) : true)); retain canUpdateDelete
behavior (which still uses canMutateClickHouseTable/hasPrimaryKeys) and ensure
generateInsertSQL (which does not require PKs) remains usable for keyless
tables; update references to isClickHouseDriver, isClickHouseMergeTreeEngine,
clickhouseEngine, primaryKeys/hasPrimaryKeys, and canInsert accordingly.
```tsx
const refreshAfterMutation = useCallback(async () => {
  if (!onDataRefresh) return;
  const runRefresh = async () => {
    const ret = onDataRefresh();
    if (ret && typeof (ret as Promise<unknown>).then === "function") {
      await ret;
    }
  };

  await runRefresh();
  if (tableContext?.driver === "clickhouse") {
    await new Promise((resolve) => setTimeout(resolve, 350));
    await runRefresh();
  }
}, [onDataRefresh, tableContext?.driver]);
```
🧩 Analysis chain
🌐 Web query:
ClickHouse UPDATE DELETE mutations asynchronous mutations_sync
💡 Result:
In ClickHouse, UPDATE and DELETE operations on MergeTree-family tables are implemented as asynchronous "mutations" using ALTER TABLE statements. These mutations rewrite entire affected data parts in the background, making them heavyweight operations not intended for frequent use.

Key syntax:
- UPDATE: `ALTER TABLE [db.]table UPDATE column1 = expr1 [, column2 = expr2, ...] WHERE filter_expr`
- DELETE: `ALTER TABLE [db.]table DELETE WHERE filter_expr`

The filter_expr must return UInt8 (1 for rows to modify/delete). Mutations are asynchronous by default, meaning the ALTER query returns immediately after queuing the mutation, while the actual data rewrite happens in background threads.

The `mutations_sync` setting controls synchronicity (can be set at session/query level or globally):
- 0 (default): Asynchronous, the query returns immediately.
- 1: Wait for the mutation to complete on the local/replica node.
- 2: Wait for the mutation to complete on all replicas (for replicated tables).

Examples:
```sql
-- Asynchronous (default)
ALTER TABLE events UPDATE status = 'processed' WHERE event_id = 1;

-- Synchronous on the local node
SET mutations_sync = 1;
ALTER TABLE events DELETE WHERE user_id = 12345;

-- Wait on all replicas
ALTER TABLE events UPDATE col = expr WHERE cond SETTINGS mutations_sync = 2;
```

Monitor progress with:
```sql
SELECT * FROM system.mutations WHERE table = 'your_table' AND is_done = 0;
```

Best practices:
- Avoid frequent mutations; prefer ReplacingMergeTree/CollapsingMergeTree, TTL, or lightweight DELETE/UPDATE (DELETE FROM, ALTER...UPDATE with on-the-fly application).
- Batch multiple changes into one ALTER.
- Mutations apply only to data inserted before the mutation; newer data is unaffected.
- Primary/partition key columns cannot be updated.
- Lightweight alternatives exist for faster operations without full part rewrites.
Citations:
- 1: https://clickhouse.com/docs/guides/developer/mutations
- 2: https://clickhouse.com/docs/en/guides/developer/mutations/
- 3: https://clickhouse.com/docs/managing-data/delete_mutations
- 4: https://clickhouse.com/jp/blog/handling-updates-and-deletes-in-clickhouse
- 5: Would mutations_sync=0 cause UPDATE inconsistency? ClickHouse/ClickHouse#23180
- 6: https://clickhouse.com/docs/sql-reference/statements/alter/update
- 7: https://clickhouse.com/docs/managing-data/update_mutations
- 8: https://clickhouse.com/docs/en/sql-reference/statements/alter
🏁 Script executed:
```shell
cd src/components/business/DataGrid && wc -l TableView.tsx
```
Repository: codeErrorSleep/dbpaw
Length of output: 82

🏁 Script executed:
```shell
# Search for ClickHouse-related code in the codebase
rg -i "clickhouse" --type ts --type tsx -B 2 -A 2
```
Length of output: 91

🏁 Script executed:
```shell
# Search for mutation-related code in the TableView file
rg -i "mutation" src/components/business/DataGrid/TableView.tsx -B 2 -A 2
```
Length of output: 1297

🏁 Script executed:
```shell
# Read the specific section around lines 851-865
sed -n '845,870p' src/components/business/DataGrid/TableView.tsx
```
Length of output: 745

🏁 Script executed:
```shell
# Search for mutations_sync usage in the codebase
rg "mutations_sync"
```
Length of output: 46

🏁 Script executed:
```shell
# Search for system.mutations usage
rg "system\.mutations"
```
Length of output: 46

🏁 Script executed:
```shell
# Search for other ClickHouse-specific handling patterns
rg -i "clickhouse" src/components/business/DataGrid/TableView.tsx -B 5 -A 5
```
Length of output: 3766

🏁 Script executed:
```shell
# Search for git history or comments explaining the 350ms value
rg "350" src/components/business/DataGrid/TableView.tsx -B 5 -A 5
```
Length of output: 350

🏁 Script executed:
```shell
# Check if there is any other retry logic or exponential backoff in the file
rg -i "retry|backoff|timeout" src/components/business/DataGrid/TableView.tsx
```
Length of output: 297

🏁 Script executed:
```shell
# Find the onDataRefresh implementation to understand what it does
rg "onDataRefresh" src/components/business/DataGrid/ -B 2 -A 5
```
Length of output: 3989

🏁 Script executed:
```shell
# Check if any settings are passed along with the mutations
rg -i "alter table|insert into" src/components/business/DataGrid/ -A 3 -B 3
```
Length of output: 3350

🏁 Script executed:
```shell
# Look for the actual mutation execution/SQL building code
rg "buildDeleteStatement|buildUpdateStatement|buildInsertStatement" src/components/business/DataGrid/ -A 10
```
Length of output: 8480

🏁 Script executed:
```shell
# Check whether mutations_sync appears in the SQL-building utilities
head -100 src/components/business/DataGrid/tableView/utils.ts | grep -i "mutation\|alter\|insert\|delete"
```
Length of output: 202

🏁 Script executed:
```shell
# Check the file size, then inspect the mutation-building logic
wc -l src/components/business/DataGrid/tableView/utils.ts && rg "buildDeleteStatement|buildUpdateStatement" src/components/business/DataGrid/tableView/utils.ts -A 30 | head -100
```
Length of output: 718
The fixed 350 ms sleep is insufficient for ClickHouse mutations and risks displaying stale data.

ClickHouse mutations (UPDATE/DELETE via ALTER TABLE) are asynchronous by default. The current implementation sleeps for 350 ms and then refreshes a second time, but this does not guarantee mutation completion: larger tables or slower nodes can exceed this arbitrary timeout. ClickHouse documents `mutations_sync` (set to 1 or 2 to wait for completion) and polling `system.mutations` as the reliable approaches. Consider either:
- Adding `SETTINGS mutations_sync=1` to the generated ALTER statements, or
- Polling `system.mutations WHERE is_done = 0` after executing mutations, before refreshing
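The polling option can be sketched as below. `executeQuery` is a hypothetical callback standing in for whatever query runner the component has access to; the shape of the wait loop, not the API, is the point.

```typescript
// Sketch of waiting for pending ClickHouse mutations before refreshing.
// executeQuery is an assumed injection point, not an API from this codebase.
async function waitForPendingMutations(
  executeQuery: (sql: string) => Promise<Array<{ pending: number }>>,
  database: string,
  table: string,
  { intervalMs = 200, timeoutMs = 10_000 } = {},
): Promise<boolean> {
  const sql =
    `SELECT count() AS pending FROM system.mutations ` +
    `WHERE database = '${database}' AND table = '${table}' AND is_done = 0`;
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const rows = await executeQuery(sql);
    if ((rows[0]?.pending ?? 0) === 0) return true; // all mutations applied
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false; // timed out; caller can refresh anyway and surface a warning
}
```

`refreshAfterMutation` would then await this helper in the ClickHouse branch instead of the fixed 350 ms sleep, falling back to a single refresh on timeout.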
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/business/DataGrid/TableView.tsx` around lines 851 - 865, The
fixed 350ms sleep in refreshAfterMutation is unreliable for ClickHouse; update
the ClickHouse-path in refreshAfterMutation (the block checking
tableContext?.driver === "clickhouse") to wait for mutation completion instead
of a hard timeout by either ensuring the ALTER statements include SETTINGS
mutations_sync=1 when executing mutations or by polling system.mutations (SELECT
count() FROM system.mutations WHERE database=... AND table=... AND is_done=0)
until zero pending mutations before calling onDataRefresh again; use the
table/context identifiers available in the surrounding code to target the
correct database/table and keep the existing runRefresh/onDataRefresh flow once
mutations are confirmed complete.
```ts
importReadOnlyDriver:
  "This driver is read-only in DbPaw and does not support SQL import.",
selectImportSqlFile: "Select SQL File to Import",
importSuccess: "Import completed ({{count}} statements)",
```
Pluralize the new import-success toast.

`{{count}} statements` will render as "1 statements" in the single-statement case. Please add singular/plural variants here so the new import flow reads correctly for both paths.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/i18n/locales/en.ts` at line 351, Replace the flat importSuccess
string with a pluralized ICU-style entry so the toast renders correctly for
singular and plural counts; update the importSuccess locale value (the
importSuccess key in locales/en.ts) to use ICU plural syntax, e.g. a string that
uses the count plural form (one vs other) to produce "Import completed (1
statement)" vs "Import completed (2 statements)". Ensure the key stays named
importSuccess so existing code that reads importSuccess/count continues to work.
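Depending on the i18n stack, the fix is usually a pair of suffixed keys. As a library-agnostic illustration, plural-category selection can be done with the standard `Intl.PluralRules`; the `selectPlural` helper and the `_one`/`_other` key convention below mirror i18next-style locales but are not this project's actual API.

```typescript
// Illustrative plural handling; selectPlural is a hypothetical helper, not
// part of this codebase. Intl.PluralRules is standard ECMA-402.
const en: Record<string, string> = {
  importSuccess_one: "Import completed ({{count}} statement)",
  importSuccess_other: "Import completed ({{count}} statements)",
};

function selectPlural(
  table: Record<string, string>,
  key: string,
  count: number,
  locale = "en",
): string {
  // "one" for 1, "other" for everything else in English.
  const category = new Intl.PluralRules(locale).select(count);
  const template = table[`${key}_${category}`] ?? table[`${key}_other`];
  return template.replace("{{count}}", String(count));
}
```

Callers keep passing `importSuccess` plus a `count`, so the key name and call sites stay unchanged; only the locale file grows the two variants.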