
feat: granularised db connection usage in superposition crate #871

Open
ayushjain17 wants to merge 4 commits into main from feat/granular/connection/1

Conversation

@ayushjain17
Collaborator

@ayushjain17 ayushjain17 commented Feb 8, 2026

Problem

A DB connection from the pool is held for the entire duration of request processing, even when no query is executing.

Solution

Granularise acquisition and use of DB connections from the pool so that a connection is held only for the duration of query execution.

Summary by CodeRabbit

  • Refactor

    • Standardized database access to a pooled, centralized query execution model across handlers and helper modules, simplifying transactions and error handling.
    • Migrated handlers to use shared application state for DB access, reducing per-request connection management.
    • Streamlined variant resolution by removing unnecessary async indirection for faster, clearer execution paths.
  • Chores

    • Added a development tool to the dev environment.

@ayushjain17 ayushjain17 requested a review from a team as a code owner February 8, 2026 12:02
@coderabbitai
Contributor

coderabbitai bot commented Feb 8, 2026

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Walkthrough

This PR centralizes DB access behind a new run_query! macro and migrates many handlers/helpers from per-request DbConnection/pooled-connection parameters to using Data<AppState> with pool-based run_query! invocations; also converts one helper from async to sync.

Changes

Cohort / File(s): Summary

  • Database Macro Infrastructure (crates/service_utils/src/db.rs): Added a run_query! macro that acquires a pooled connection, disables the prepared-statement cache, and dispatches either raw or transactional execution with unified AppError mapping.
  • Experiments Handler (crates/experimentation_platform/src/api/experiments/handlers.rs): Converted get_applicable_variants_helper from async fn to fn and updated the handler to call it synchronously (removed .await).
  • Service Utils Helpers & Middlewares (crates/service_utils/src/helpers.rs, crates/service_utils/src/middlewares/auth_n/helpers.rs, crates/service_utils/src/middlewares/workspace_context.rs): Switched helper signatures to accept PgSchemaConnectionPool / use run_query! instead of direct DBConnection/PooledConnection; removed manual prepared-statement cache calls and simplified workspace retrieval.
  • Organisation Handlers (crates/superposition/src/organisation/handlers.rs): Replaced DbConnection parameters with Data<AppState> and converted all DB interactions to run_query! closures; added pagination offset handling.
  • Resolve Handler (crates/superposition/src/resolve/handlers.rs): Removed the DbConnection param from resolve_with_exp_handler; migrated all DB operations (max_created_at, variant fetch, audit header augmentation, etc.) to run_query! calls against state.db_pool.
  • Webhooks Handlers & Helpers (crates/superposition/src/webhooks/handlers.rs, crates/superposition/src/webhooks/helper.rs): Migrated handlers and helper functions to accept Data<AppState>/PgSchemaConnectionPool, refactored validation and fetch operations to run inside run_query! closures; removed direct pooled-connection parameters.
  • Workspace Handlers (crates/superposition/src/workspace/handlers.rs): Replaced DbConnection usage with Data<AppState> and run_query! across get/create/update/list/migrate/rotate flows; adapted transactions and error mappings to macro semantics.
  • Dev tooling (flake.nix): Added cargo-expand to devShell packages.

Sequence Diagram(s)

sequenceDiagram
    rect rgba(200,200,255,0.5)
    participant Client
    end
    rect rgba(200,255,200,0.5)
    participant Handler
    end
    rect rgba(255,200,200,0.5)
    participant run_query
    end
    rect rgba(255,230,150,0.5)
    participant DBPool
    end
    rect rgba(200,255,255,0.5)
    participant DBConnection
    end

    Client->>Handler: HTTP request
    Handler->>run_query: invoke run_query!(state.db_pool, conn, { ... })
    run_query->>DBPool: acquire connection
    DBPool->>DBConnection: provide conn
    run_query->>DBConnection: execute query / begin transaction
    DBConnection-->>run_query: result / error
    run_query-->>Handler: mapped Result<T, AppError>
    Handler-->>Client: HTTP response

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested labels

enhancement, P1

Suggested reviewers

  • knutties
  • Datron
  • mahatoankitkumar

Poem

🐰 I hopped through code with a curious mind,
Swapped single conns for a pooled design,
A macro now runs each transactional quest,
Handlers breathe easy — the pool does the rest.
🥕✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately describes the main change: replacing per-request DbConnection usage with granularised connection acquisition through a run_query macro, limiting connection hold time to query execution only.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.
✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@ayushjain17
Collaborator Author

@coderabbitai full review

@coderabbitai
Contributor

coderabbitai bot commented Feb 23, 2026

✅ Actions performed

Full review triggered.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
crates/service_utils/src/middlewares/auth_n/helpers.rs (1)

21-40: ⚠️ Potential issue | 🟡 Minor

This function's manual db_pool.get() call skips the prepared statement cache disabling that is performed everywhere else.

The set_prepared_statement_cache_size(Disabled) was removed here, but:

  • The run_query! macro (crates/service_utils/src/db.rs) centralizes this and always disables cache (lines 41-44)
  • Other manual db_pool.get() calls, including ones querying the same superposition schema (crates/service_utils/src/service/types.rs:223), still explicitly disable cache
  • This function is the sole exception

Either add conn.set_prepared_statement_cache_size(diesel::connection::CacheSize::Disabled) after acquiring the connection, or migrate to run_query! if the return type can be changed from Result<Vec<String>, &'static str> to superposition_types::result::Result<Vec<String>>.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/service_utils/src/middlewares/auth_n/helpers.rs` around lines 21 - 40,
The function currently calls app_state.db_pool.get() directly which skips
disabling the Diesel prepared statement cache used elsewhere; either call
conn.set_prepared_statement_cache_size(diesel::connection::CacheSize::Disabled)
immediately after acquiring the connection (after app_state.db_pool.get() and
before running the organisations query) or refactor the body to use the
run_query! macro instead and change the return type from Result<Vec<String>,
&'static str> to superposition_types::result::Result<Vec<String>> so it matches
the centralized behavior; update error handling to the chosen path accordingly.
🧹 Nitpick comments (6)
crates/service_utils/src/db.rs (1)

58-68: Subtle ? operator asymmetry between raw and tx modes.

In raw mode, the body is expanded inline (no closure), so ? inside the body returns from the enclosing function, bypassing .map_err(Into::into). In tx mode, the body is inside a closure passed to diesel::Connection::transaction, so ? returns from the closure and the transaction handles rollback.

This means:

  • In raw mode with ?, the caller's function return type must accept the error directly, and .map_err(Into::into) only applies to the final expression (never reached on ?-triggered early return).
  • In tx mode with ?, all errors are captured by the transaction, enabling proper rollback.

This works correctly in practice, but consider documenting this distinction in the macro's doc comment to prevent misuse (e.g., someone using raw mode with multiple queries expecting atomicity on ?-failure).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/service_utils/src/db.rs` around lines 58 - 68, The macro's two
dispatch arms (`@dispatch` raw and `@dispatch` tx) behave differently with the ?
operator (raw expands inline so ? returns from the caller bypassing
.map_err(Into::into), while tx runs the body in a closure so ? returns from the
closure and triggers transaction rollback); update the macro-level doc comment
to clearly describe this asymmetry, warn that using ? in raw mode will not
provide transactional rollback or guaranteed .map_err mapping, and recommend
using tx mode (or explicit error mapping) for multi-step atomic operations so
callers won't be surprised by differing control-flow and error-handling
semantics.
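The asymmetry described above can be reproduced with plain Rust control flow, independent of the macro and of diesel. The sketch below is illustrative only: inline_mode mimics the raw arm (body expanded in place, so `?` returns from the enclosing function and skips the trailing map_err), while closure_mode mimics the tx arm (body in a closure, so the caller can still intercept the Err, as a transaction would on rollback).

```rust
fn inline_mode() -> Result<i32, String> {
    let step: Result<i32, String> = Err("boom".into());
    // Expanded "inline": this `?` returns from inline_mode directly,
    // so any error mapping placed after the block is never reached.
    let v = { step? };
    Ok(v + 1)
}

fn closure_mode() -> Result<i32, String> {
    let step: Result<i32, String> = Err("boom".into());
    // Inside a closure (like the tx arm): `?` only exits the closure,
    // so the caller still sees the Err and can react to it.
    let result = (|| {
        let v = step?;
        Ok::<i32, String>(v + 1)
    })();
    result.map_err(|e| format!("rolled back: {e}"))
}

fn main() {
    // Raw-style: the error escapes unmapped.
    assert_eq!(inline_mode(), Err("boom".to_string()));
    // Tx-style: the error is intercepted and mapped.
    assert_eq!(closure_mode(), Err("rolled back: boom".to_string()));
    println!("ok");
}
```

This is why documenting the distinction in the macro's doc comment matters: the same `?` in a body behaves differently depending on which arm is used.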
crates/superposition/src/resolve/handlers.rs (2)

105-109: Unnecessary run_query! for a non-query operation.

add_audit_id_to_header needs a DB connection but the run_query! raw mode forces the body to return a Result, leading to the boilerplate Ok::<(), superposition::AppError>(()). This works but is a code smell — the macro wasn't designed for side-effect-only operations.

Consider either:

  1. Having add_audit_id_to_header return a Result internally, or
  2. Getting the connection directly for this single side-effect call.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition/src/resolve/handlers.rs` around lines 105 - 109, The
current use of run_query! around add_audit_id_to_header is unnecessary and
forces a dummy Ok return; replace it by either (A) changing
add_audit_id_to_header to return a Result<() , superposition::AppError> and
propagate that Result directly so you no longer need run_query!, or (B) obtain a
DB connection directly from state.db_pool (the same pool used in run_query!),
pass that conn plus &mut resp and &workspace_context.schema_name into
add_audit_id_to_header, and remove the run_query! wrapper and the Ok::<(),
superposition::AppError>(()) boilerplate; reference add_audit_id_to_header,
run_query!, state.db_pool, conn, resp, and workspace_context.schema_name when
making the change.

46-101: Five separate pool acquisitions per request — acknowledged by TODOs.

Each run_query! call acquires and releases a connection. Five round-trips to the pool per resolve request adds latency overhead (lock contention, connection validation). The TODO comments indicate this is a transitional state.

One concern worth noting now: lines 67-73 pass &mut config_version and lines 91-101 pass &mut config into run_query! bodies. This works because raw mode expands inline (no closure), so the mutations are visible after the macro. This is correct but fragile — if raw mode were ever refactored to use a closure (like tx mode), these mutations would silently stop propagating to the caller.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition/src/resolve/handlers.rs` around lines 46 - 101, Multiple
run_query! invocations each acquire a DB connection and the current pattern
relies on raw-mode macro expansion to let mutations to config_version and config
propagate; fix by acquiring one connection up-front and reusing it for all calls
(get_max_created_at, generate_config_from_version,
get_applicable_variants_helper, resolve) so there is a single pool checkout and
the same &mut conn is passed into each run_query! invocation; locate the calls
using the symbols run_query!, get_max_created_at, generate_config_from_version,
get_applicable_variants_helper, and resolve, create a single let mut conn =
state.db_pool.get().await? (or the existing conn acquisition helper), and
replace each run_query! invocation to use that conn so &mut config_version and
&mut config mutations remain correct and you avoid multiple pool acquisitions.
crates/superposition/src/organisation/handlers.rs (1)

124-144: Two separate pool acquisitions for count + data in paginated list.

This is the expected pattern per the PR's goals (shorter connection hold times), but note that between the count query (line 124) and the data query (line 144), the dataset could change, causing a mismatch between total_items and the actual page data. This is generally acceptable for list endpoints, just flagging it since this is a behavioral change from the previous single-connection approach.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition/src/organisation/handlers.rs` around lines 124 - 144,
The count and the paginated load are executed using separate pool acquisitions
which can cause total_items to diverge from the returned page; to fix, acquire a
single DB connection (reuse the same conn) or run both queries inside the same
transaction so total_items and the builder.load(conn) use the same connection
(references: run_query!, total_items, data, builder, organisations::table,
filters.page); alternatively, if eventual consistency is acceptable, add a short
comment by the count+data block documenting this intentional race to make the
behavioral change explicit.
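The mismatch window flagged above can be shown with a toy in-memory "table" (no diesel, names invented for illustration): a writer commits between the count checkout and the page checkout, so total_items no longer matches the data a client receives.

```rust
fn main() {
    // Toy stand-in for organisations::table.
    let mut table: Vec<&str> = vec!["org-a", "org-b"];

    // First pool checkout: SELECT COUNT(*).
    let total_items = table.len();

    // Another request commits between the two checkouts.
    table.push("org-c");

    // Second pool checkout: SELECT ... LIMIT 10 OFFSET 0.
    let page: Vec<&str> = table.iter().take(10).cloned().collect();

    assert_eq!(total_items, 2);
    assert_eq!(page.len(), 3); // page disagrees with total_items
    println!("total_items={} page_len={}", total_items, page.len());
}
```

As the comment notes, this is usually acceptable for list endpoints; the fix, if wanted, is to run both queries on one connection inside one transaction.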
crates/superposition/src/workspace/handlers.rs (1)

197-213: tx conn is unnecessary for a single UPDATE.

The transaction block contains exactly one Diesel operation. BEGIN/COMMIT adds overhead for no atomicity benefit here.

♻️ Suggested simplification (after the TOCTOU fix above is applied)
-    let updated_workspace = run_query!(state.db_pool, tx conn, {
-        let updated_workspace = diesel::update(workspaces::table)
-            ...
-            .get_result::<Workspace>(conn)
-            .map_err(|err| { ... })?;
-
-        Ok::<Workspace, superposition::AppError>(updated_workspace)
-    })?;
+    let updated_workspace = run_query!(
+        state.db_pool,
+        conn,
+        diesel::update(workspaces::table)
+            .filter(workspaces::organisation_id.eq(&org_id.0))
+            .filter(workspaces::workspace_name.eq(workspace_name))
+            .set((
+                request,
+                workspaces::last_modified_by.eq(user.email),
+                workspaces::last_modified_at.eq(timestamp),
+            ))
+            .get_result::<Workspace>(conn)
+    )?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition/src/workspace/handlers.rs` around lines 197 - 213, The
block opens a transaction via run_query!(state.db_pool, tx conn, { ... }) but
performs a single diesel::update, so remove the unnecessary transaction: change
the macro invocation to call run_query! without the `tx` transaction qualifier
(e.g., run_query!(state.db_pool, conn, { ... })), keeping the same update, set,
get_result::<Workspace>(conn) and error mapping logic intact; this removes
BEGIN/COMMIT overhead while preserving the existing update semantics for
updated_workspace.
crates/superposition/src/webhooks/handlers.rs (1)

44-54: TODO: complete granularization once validate_change_reason is migrated.

The validate_change_reason call is wrapped in its own run_query! block as a bridge (passes conn directly to the legacy function), leaving validation and the subsequent DML as separate pool acquisitions. The TODO acknowledges this is intentional but should be tracked.

Would you like me to open a follow-up issue to track migrating validate_change_reason (and similar legacy-conn-taking helpers) to a pool-based signature so these can be folded into a single run_query! block?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition/src/webhooks/handlers.rs` around lines 44 - 54, The TODO
notes that validate_change_reason is intentionally called inside its own
run_query! with a raw conn; create a follow-up issue tracking migration of
validate_change_reason (and any helpers taking legacy conn) to a pool-based
signature so they can be invoked inside a single run_query! block; specifically,
plan to change validate_change_reason(&WorkspaceContext, &ChangeReason, &mut
PgPool) -> Result<()> (or similar), update its callers to accept a pool instead
of conn, and then fold the current separate run_query!(state.db_pool, conn,
validate_change_reason(...)) and the subsequent DML into one run_query!
invocation using state.db_pool and state.master_encryption_key.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/service_utils/src/db.rs`:
- Around line 10-22: The doctest examples for the run_query macro fail because
the macro isn't in scope; update both examples to either (a) add an explicit
import line like `use service_utils::run_query;` (and minimal scaffolding
comments) before each `run_query!(...)` example so the macro is found, or (b)
mark the examples as doctest-ignored (`no_run` or `ignore`) if providing full
test scaffolding is impractical; locate references to the run_query macro in the
doc comments above the macro definition and apply the chosen fix consistently to
both examples.

In `@crates/superposition/src/webhooks/handlers.rs`:
- Around line 81-88: The insert currently uses
diesel::insert_into(webhooks::table)...execute(conn) which returns a row count
and then returns the locally-built webhook_data, so DB-side defaults/triggers
(ids, timestamps) are lost; change the insert to use
.get_result::<Webhook>(conn) (via the same run_query! invocation and
schema_name(&workspace_context.schema_name)) to fetch the persisted Webhook
struct from the DB and return that authoritative object (replace returning
webhook_data with the DB-returned Webhook).
- Around line 212-224: The delete handler currently ignores the delete result so
DELETE returns 204 even when no row matched; change the block using
run_query!(state.db_pool, tx conn, { ... diesel::delete(...).execute(conn) }) to
capture the integer result of execute(conn), check if it is 0, and if so return
a 404 response instead of proceeding to success; keep the same filters
(webhooks::name.eq(&w_name)), schema injection (workspace_context.schema_name)
and the preceding update of last_modified_at/last_modified_by, but after
execute(conn) inspect the returned affected-rows count and map 0 to a NotFound
error/HTTP 404, otherwise continue to return 204.

In `@crates/superposition/src/workspace/handlers.rs`:
- Around line 382-397: The workspace is fetched with get_workspace outside the
transaction, which can cause stale workspace.encryption_key to be used in
rotate_workspace_encryption_key_helper; change the code to fetch the workspace
inside the same transaction passed to run_query! (i.e. call get_workspace using
the transaction connection/conn within the run_query! tx conn block), then
construct WorkspaceContext from that in-transaction workspace before calling
rotate_workspace_encryption_key_helper; apply the same pattern to
migrate_schema_handler so any checks or uses of workspace.encryption_key use the
workspace read from the transaction rather than a previously released
connection.
- Around line 188-213: The config_version existence check (the select on
config_versions::config_versions using run_query!(state.db_pool, conn, {...}))
must be moved into the same transaction that performs the workspace update (the
run_query!(state.db_pool, tx conn, {...}) block that updates workspaces::table
and returns updated_workspace) to avoid a TOCTOU race; modify the transaction
block to first query/config_versions::id with the provided version inside the tx
conn closure, then proceed to the diesel::update(...) and
get_result::<Workspace>(conn) so both validation and update execute atomically
within the same run_query! transaction.

---

Outside diff comments:
In `@crates/service_utils/src/middlewares/auth_n/helpers.rs`:
- Around line 21-40: The function currently calls app_state.db_pool.get()
directly which skips disabling the Diesel prepared statement cache used
elsewhere; either call
conn.set_prepared_statement_cache_size(diesel::connection::CacheSize::Disabled)
immediately after acquiring the connection (after app_state.db_pool.get() and
before running the organisations query) or refactor the body to use the
run_query! macro instead and change the return type from Result<Vec<String>,
&'static str> to superposition_types::result::Result<Vec<String>> so it matches
the centralized behavior; update error handling to the chosen path accordingly.

---

Nitpick comments:
In `@crates/service_utils/src/db.rs`:
- Around line 58-68: The macro's two dispatch arms (`@dispatch` raw and `@dispatch`
tx) behave differently with the ? operator (raw expands inline so ? returns from
the caller bypassing .map_err(Into::into), while tx runs the body in a closure
so ? returns from the closure and triggers transaction rollback); update the
macro-level doc comment to clearly describe this asymmetry, warn that using ? in
raw mode will not provide transactional rollback or guaranteed .map_err mapping,
and recommend using tx mode (or explicit error mapping) for multi-step atomic
operations so callers won't be surprised by differing control-flow and
error-handling semantics.

In `@crates/superposition/src/organisation/handlers.rs`:
- Around line 124-144: The count and the paginated load are executed using
separate pool acquisitions which can cause total_items to diverge from the
returned page; to fix, acquire a single DB connection (reuse the same conn) or
run both queries inside the same transaction so total_items and the
builder.load(conn) use the same connection (references: run_query!, total_items,
data, builder, organisations::table, filters.page); alternatively, if eventual
consistency is acceptable, add a short comment by the count+data block
documenting this intentional race to make the behavioral change explicit.

In `@crates/superposition/src/resolve/handlers.rs`:
- Around line 105-109: The current use of run_query! around
add_audit_id_to_header is unnecessary and forces a dummy Ok return; replace it
by either (A) changing add_audit_id_to_header to return a Result<() ,
superposition::AppError> and propagate that Result directly so you no longer
need run_query!, or (B) obtain a DB connection directly from state.db_pool (the
same pool used in run_query!), pass that conn plus &mut resp and
&workspace_context.schema_name into add_audit_id_to_header, and remove the
run_query! wrapper and the Ok::<(), superposition::AppError>(()) boilerplate;
reference add_audit_id_to_header, run_query!, state.db_pool, conn, resp, and
workspace_context.schema_name when making the change.
- Around line 46-101: Multiple run_query! invocations each acquire a DB
connection and the current pattern relies on raw-mode macro expansion to let
mutations to config_version and config propagate; fix by acquiring one
connection up-front and reusing it for all calls (get_max_created_at,
generate_config_from_version, get_applicable_variants_helper, resolve) so there
is a single pool checkout and the same &mut conn is passed into each run_query!
invocation; locate the calls using the symbols run_query!, get_max_created_at,
generate_config_from_version, get_applicable_variants_helper, and resolve,
create a single let mut conn = state.db_pool.get().await? (or the existing conn
acquisition helper), and replace each run_query! invocation to use that conn so
&mut config_version and &mut config mutations remain correct and you avoid
multiple pool acquisitions.

In `@crates/superposition/src/webhooks/handlers.rs`:
- Around line 44-54: The TODO notes that validate_change_reason is intentionally
called inside its own run_query! with a raw conn; create a follow-up issue
tracking migration of validate_change_reason (and any helpers taking legacy
conn) to a pool-based signature so they can be invoked inside a single
run_query! block; specifically, plan to change
validate_change_reason(&WorkspaceContext, &ChangeReason, &mut PgPool) ->
Result<()> (or similar), update its callers to accept a pool instead of conn,
and then fold the current separate run_query!(state.db_pool, conn,
validate_change_reason(...)) and the subsequent DML into one run_query!
invocation using state.db_pool and state.master_encryption_key.

In `@crates/superposition/src/workspace/handlers.rs`:
- Around line 197-213: The block opens a transaction via
run_query!(state.db_pool, tx conn, { ... }) but performs a single
diesel::update, so remove the unnecessary transaction: change the macro
invocation to call run_query! without the `tx` transaction qualifier (e.g.,
run_query!(state.db_pool, conn, { ... })), keeping the same update, set,
get_result::<Workspace>(conn) and error mapping logic intact; this removes
BEGIN/COMMIT overhead while preserving the existing update semantics for
updated_workspace.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a1474e5 and e587b95.

📒 Files selected for processing (11)
  • crates/experimentation_platform/src/api/experiments/handlers.rs
  • crates/service_utils/src/db.rs
  • crates/service_utils/src/helpers.rs
  • crates/service_utils/src/middlewares/auth_n/helpers.rs
  • crates/service_utils/src/middlewares/workspace_context.rs
  • crates/superposition/src/organisation/handlers.rs
  • crates/superposition/src/resolve/handlers.rs
  • crates/superposition/src/webhooks/handlers.rs
  • crates/superposition/src/webhooks/helper.rs
  • crates/superposition/src/workspace/handlers.rs
  • flake.nix

Comment on lines +212 to +224
run_query!(state.db_pool, tx conn, {
    diesel::update(webhooks::table)
        .filter(webhooks::name.eq(&w_name))
        .set((
            webhooks::last_modified_at.eq(Utc::now()),
            webhooks::last_modified_by.eq(user.get_email()),
        ))
        .schema_name(&workspace_context.schema_name)
        .execute(conn)?;
    diesel::delete(webhooks.filter(webhooks::name.eq(&w_name)))
        .schema_name(&workspace_context.schema_name)
        .execute(conn)
})?;

⚠️ Potential issue | 🟡 Minor

delete_handler returns 204 even when no webhook matched — missing 404.

diesel::delete(...).execute(conn) returns Ok(0) (not an error) when no row matches the filter. The run_query!(...)? call propagates Err variants but silently passes Ok(0), so a DELETE /webhooks/nonexistent responds with 204 No Content instead of 404 Not Found.

🛡️ Proposed fix
-    run_query!(state.db_pool, tx conn, {
-        diesel::update(webhooks::table)
-            .filter(webhooks::name.eq(&w_name))
-            .set((
-                webhooks::last_modified_at.eq(Utc::now()),
-                webhooks::last_modified_by.eq(user.get_email()),
-            ))
-            .schema_name(&workspace_context.schema_name)
-            .execute(conn)?;
-        diesel::delete(webhooks.filter(webhooks::name.eq(&w_name)))
-            .schema_name(&workspace_context.schema_name)
-            .execute(conn)
-    })?;
+    let deleted_count = run_query!(state.db_pool, tx conn, {
+        diesel::update(webhooks::table)
+            .filter(webhooks::name.eq(&w_name))
+            .set((
+                webhooks::last_modified_at.eq(Utc::now()),
+                webhooks::last_modified_by.eq(user.get_email()),
+            ))
+            .schema_name(&workspace_context.schema_name)
+            .execute(conn)?;
+        diesel::delete(webhooks.filter(webhooks::name.eq(&w_name)))
+            .schema_name(&workspace_context.schema_name)
+            .execute(conn)
+    })?;
+    if deleted_count == 0 {
+        return Err(bad_argument!("Webhook '{}' not found", w_name));
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition/src/webhooks/handlers.rs` around lines 212 - 224, The
delete handler currently ignores the delete result so DELETE returns 204 even
when no row matched; change the block using run_query!(state.db_pool, tx conn, {
... diesel::delete(...).execute(conn) }) to capture the integer result of
execute(conn), check if it is 0, and if so return a 404 response instead of
proceeding to success; keep the same filters (webhooks::name.eq(&w_name)),
schema injection (workspace_context.schema_name) and the preceding update of
last_modified_at/last_modified_by, but after execute(conn) inspect the returned
affected-rows count and map 0 to a NotFound error/HTTP 404, otherwise continue
to return 204.

Comment on lines +382 to +397
let workspace = get_workspace(&schema_name, &state.db_pool)?;
let workspace_context = WorkspaceContext {
    schema_name,
    organisation_id: org_id,
    workspace_id: WorkspaceId(workspace.workspace_name.clone()),
    settings: workspace,
};

let total_secrets_re_encrypted = conn
    .transaction::<i64, superposition::AppError, _>(|conn| {
        rotate_workspace_encryption_key_helper(
            &workspace_context,
            conn,
            master_encryption_key,
            &user.get_username(),
        )
    })?;
let total_secrets_re_encrypted = run_query!(state.db_pool, tx conn, {
    rotate_workspace_encryption_key_helper(
        &workspace_context,
        conn,
        master_encryption_key,
        &user.get_username(),
    )
})?;

⚠️ Potential issue | 🟠 Major

get_workspace is outside the transaction — stale key material can reach rotate_workspace_encryption_key_helper.

let workspace = get_workspace(&schema_name, &state.db_pool)?;   // conn #1 released here
// …
let total_secrets_re_encrypted = run_query!(state.db_pool, tx conn, {  // conn #2
    rotate_workspace_encryption_key_helper(
        &workspace_context,   // carries workspace.encryption_key from conn #1
        conn,
        master_encryption_key,
        &user.get_username(),
    )
})?;

A concurrent rotation request can commit a new encryption_key on the workspace between the get_workspace call and the run_query! tx conn block. rotate_workspace_encryption_key_helper would then decrypt secrets with the wrong (already-rotated-away) key, silently corrupting them.

The workspace should be read inside the same transaction used for the rotation:

🛡️ Proposed fix – read workspace inside the transaction
-    let workspace = get_workspace(&schema_name, &state.db_pool)?;
-    let workspace_context = WorkspaceContext {
-        schema_name,
-        organisation_id: org_id,
-        workspace_id: WorkspaceId(workspace.workspace_name.clone()),
-        settings: workspace,
-    };
-
-    let total_secrets_re_encrypted = run_query!(state.db_pool, tx conn, {
-        rotate_workspace_encryption_key_helper(
-            &workspace_context,
-            conn,
-            master_encryption_key,
-            &user.get_username(),
-        )
-    })?;
+    let schema_name_clone = schema_name.clone();
+    let total_secrets_re_encrypted = run_query!(state.db_pool, tx conn, {
+        let workspace = workspaces::dsl::workspaces
+            .filter(workspaces::workspace_schema_name.eq(schema_name_clone.to_string()))
+            .get_result::<Workspace>(conn)?;
+        let workspace_context = WorkspaceContext {
+            schema_name: schema_name_clone,
+            organisation_id: org_id,
+            workspace_id: WorkspaceId(workspace.workspace_name.clone()),
+            settings: workspace,
+        };
+        rotate_workspace_encryption_key_helper(
+            &workspace_context,
+            conn,
+            master_encryption_key,
+            &user.get_username(),
+        )
+    })?;

The same concern applies more mildly to migrate_schema_handler (Line 323), where workspace.encryption_key is checked inside the transaction but the workspace object itself was fetched outside it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/superposition/src/workspace/handlers.rs` around lines 382 - 397, The
workspace is fetched with get_workspace outside the transaction, which can cause
stale workspace.encryption_key to be used in
rotate_workspace_encryption_key_helper; change the code to fetch the workspace
inside the same transaction passed to run_query! (i.e. call get_workspace using
the transaction connection/conn within the run_query! tx conn block), then
construct WorkspaceContext from that in-transaction workspace before calling
rotate_workspace_encryption_key_helper; apply the same pattern to
migrate_schema_handler so any checks or uses of workspace.encryption_key use the
workspace read from the transaction rather than a previously released
connection.

@ayushjain17 ayushjain17 force-pushed the feat/granular/connection/1 branch 4 times, most recently from bf7f84e to ec0ffb5 Compare February 23, 2026 11:50
@ayushjain17 ayushjain17 force-pushed the feat/granular/connection/1 branch from ec0ffb5 to a0b1e77 Compare February 24, 2026 13:33
