feat(core): accept shorthand enum names in policy API requests #3401
marythought wants to merge 10 commits into `main`
Conversation
Developers can now use short, readable enum values (e.g. "IN", "AND", "ALL_OF", "ACTIVE") instead of verbose proto names (e.g. "SUBJECT_MAPPING_OPERATOR_ENUM_IN") in JSON API requests. Full canonical names continue to work for backward compatibility.

Adds an HTTP middleware that normalizes shorthand enum strings to their canonical proto form before ConnectRPC deserializes the request. The middleware is scoped to specific RPC paths and only processes JSON content types.

Enums covered:

- SubjectMappingOperatorEnum: IN, NOT_IN, IN_CONTAINS
- ConditionBooleanTypeEnum: AND, OR
- AttributeRuleTypeEnum: ALL_OF, ANY_OF, HIERARCHY
- ActiveStateEnum: ACTIVE, INACTIVE, ANY

Closes #3338

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Mary Dickson <mary.dickson@virtru.com>
📝 Walkthrough

This pull request introduces enum normalization functionality that transforms shorthand enum values to their canonical fully-qualified protobuf enum names in JSON request bodies. It includes a new

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Client
    participant Middleware as HTTP<br/>Middleware
    participant ConnectRPC as ConnectRPC<br/>Handler
    participant Service as Backend<br/>Service
    Client->>Middleware: POST JSON<br/>(shorthand enums)
    Note over Middleware: Check path & content-type<br/>against config
    Middleware->>Middleware: Read & parse JSON
    Middleware->>Middleware: normalizeJSON()<br/>Traverse objects/arrays<br/>Prepend enum prefixes
    Middleware->>ConnectRPC: Forward request<br/>(canonical enum names)
    ConnectRPC->>Service: Deserialized protobuf<br/>with canonical enums
    Service-->>ConnectRPC: Process response
    ConnectRPC-->>Middleware: HTTP response
    Middleware-->>Client: HTTP response
```

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (warning)
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request improves the developer experience for the platform's JSON API by allowing the use of concise, readable shorthand values for enum fields. By implementing an HTTP-layer middleware, the system now transparently maps these shorthand inputs to the verbose canonical names required by the underlying protobuf deserialization process, effectively bridging the gap between user-friendly API requests and strict protocol requirements.

Highlights
Short names for enums, clean and bright,
Making our JSON requests feel right.
Middleware runs before the parse,
Fixing the names without a farce.
Code Review
This pull request introduces a new enumnormalize package and associated HTTP middleware to support shorthand enum string values in JSON request bodies for specific RPC paths. The middleware automatically prepends the required protobuf prefixes to enum fields, allowing for more concise API requests. The review feedback identifies several important improvements: preventing precision loss for large integers by using json.Decoder with UseNumber(), mitigating potential Denial of Service (DoS) vulnerabilities by enforcing request body size limits with http.MaxBytesReader, and optimizing performance by pre-building lookup maps and reducing string allocations during recursive JSON traversal.
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@service/internal/enumnormalize/enumnormalize.go`:
- Around line 28-32: The lookup map built from rules uses
strings.ToLower(r.JSONField) so snake_case incoming names like
"boolean_operator" won't match "booleanOperator"; update enumnormalize.go to
normalize field names by stripping underscores and lowercasing when constructing
lookup (e.g., replace/remove '_' on r.JSONField before ToLower) and use the same
normalization when performing the lookup in normalizeValue (introduce or use a
normKey function that removes underscores and lowercases keys so both
"booleanOperator" and "boolean_operator" resolve to the same lookup entry).
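The key-folding the comment describes could look like the following standalone sketch; `normKey` is the hypothetical helper name used in the comment, not code taken from the PR.

```go
package main

import (
	"fmt"
	"strings"
)

// normKey folds a JSON field name so camelCase and snake_case spellings
// resolve to the same lookup key: strip underscores, then lowercase.
func normKey(field string) string {
	return strings.ToLower(strings.ReplaceAll(field, "_", ""))
}

func main() {
	fmt.Println(normKey("booleanOperator"))  // booleanoperator
	fmt.Println(normKey("boolean_operator")) // booleanoperator
}
```

Applying the same folding when building the lookup map and when looking up incoming keys makes both spellings hit the same rule.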
In `@service/internal/enumnormalize/middleware_test.go`:
- Around line 98-110: The test should also assert that the middleware updated
the request Content-Length to match the normalized body: after calling
handler.ServeHTTP, check the captured request (from captureHandler used by
TestMiddleware_ContentLengthUpdated / capture) and assert request.ContentLength
== int64(len(capture.body)) and that request.Header.Get("Content-Length") equals
the decimal string of len(capture.body); this ensures NewMiddleware actually
syncs the Content-Length header/value with the rewritten body rather than
diverging.
In `@service/internal/enumnormalize/middleware.go`:
- Around line 54-56: The current isJSON function uses strings.Contains on the
Content-Type which can mis-match values like "application/jsonl"; update isJSON
to parse the media type using mime.ParseMediaType and then explicitly accept
either "application/json" or "application/connect+json" (or alternatively check
that the parsed mediatype equals "application/json" or has the "+json" suffix if
you want to allow vendor types). Replace the existing strings.Contains logic in
the isJSON function with this precise MIME parsing and comparison to avoid false
positives.
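A sketch of the suggested `mime.ParseMediaType`-based check (illustrative; the PR's actual `isJSON` takes an `*http.Request`):

```go
package main

import (
	"fmt"
	"mime"
	"strings"
)

// isJSONContentType parses the Content-Type as a MIME media type instead
// of substring matching, so "application/jsonl" is rejected while
// "application/json; charset=utf-8" and "application/connect+json" pass.
func isJSONContentType(contentType string) bool {
	mediaType, _, err := mime.ParseMediaType(contentType)
	if err != nil {
		return false
	}
	return mediaType == "application/json" || strings.HasSuffix(mediaType, "+json")
}

func main() {
	fmt.Println(isJSONContentType("application/json; charset=utf-8")) // true
	fmt.Println(isJSONContentType("application/connect+json"))        // true
	fmt.Println(isJSONContentType("application/jsonl"))               // false
}
```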
- Around line 29-33: The middleware currently uses io.ReadAll(r.Body) without a
size cap; change NewMiddleware to accept a maxBytes int64 (plumbed from
GRPCConfig.MaxCallRecvMsgSizeBytes in service/internal/server/server.go) and in
middleware.go wrap the request body with http.MaxBytesReader(w, r.Body,
maxBytes) before reading; on ReadAll error detect a too-large read (body too
large) and return HTTP 413, otherwise on other read errors or when you choose
not to normalize, pass through to next.ServeHTTP(w, r) without calling
NormalizeJSON, ensuring NormalizeJSON still operates on the safely bounded
buffer.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 2f571996-872b-47cb-8020-cea6eed88ca1
📒 Files selected for processing (5)
- service/internal/enumnormalize/enumnormalize.go
- service/internal/enumnormalize/enumnormalize_test.go
- service/internal/enumnormalize/middleware.go
- service/internal/enumnormalize/middleware_test.go
- service/internal/server/server.go
Benchmark results (collapsed in the PR UI): authorization.GetDecisions, authorization.v2.GetMultiResourceDecision, benchmark statistics, bulk benchmarks, TDF3.
- Use json.Decoder with UseNumber() to preserve numeric precision for large int64 values (avoids float64 conversion)
- Add http.MaxBytesReader (1 MB cap) to prevent DoS via oversized request bodies
- Pre-build field lookup map once at middleware init instead of per-request
- Use exact key matching for JSON field names (protojson always emits camelCase) instead of case-insensitive comparison

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Mary Dickson <mary.dickson@virtru.com>
QA: End-to-end curl testing against running platform

Ran 12 curl tests against a local platform instance (Colima + Docker Compose, Keycloak auth via
All responses return full canonical enum names regardless of shorthand input. No regressions observed with existing full-name usage.
Verify that numeric enum values (e.g., operator: 1, booleanOperator: 3) pass through the normalization middleware unchanged. These are valid protojson and must continue to work alongside shorthand string names. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
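The pass-through behaviour this commit verifies can be illustrated with a minimal sketch; `normalizeEnum` is a hypothetical stand-in for the middleware's value normalization, not the PR's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeEnum rewrites only shorthand string values. Numeric enum
// values (valid protojson, e.g. operator: 1) and already-canonical
// names pass through untouched.
func normalizeEnum(v any, prefix string) any {
	s, ok := v.(string)
	if !ok {
		return v // numbers, bools, nested values: leave unchanged
	}
	if strings.HasPrefix(s, prefix) {
		return s // already canonical
	}
	return prefix + s
}

func main() {
	const p = "SUBJECT_MAPPING_OPERATOR_ENUM_"
	fmt.Println(normalizeEnum("IN", p))       // SUBJECT_MAPPING_OPERATOR_ENUM_IN
	fmt.Println(normalizeEnum(float64(1), p)) // 1
}
```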
📝 Companion docs PR: opentdf/docs#308 — updates subject mapping docs to use shorthand enum names.
Can we please add some end-to-end test coverage that this works as expected? I don't currently have enough knowledge of the proto/ConnectRPC internals to be certain this middleware runs before the server has processed request body bytes as serialized protos.

Update: I just saw your manual test results matrix above. I'd like to see automated tests, but it's good to know it works e2e.
Actionable comments posted: 1
♻️ Duplicate comments (2)
service/internal/enumnormalize/enumnormalize.go (1)
Lines 49-56: ⚠️ Potential issue | 🟠 Major: Guard against trailing JSON tokens before re-marshaling.

A single `decoder.Decode(&parsed)` does not ensure the entire body is valid JSON. Trailing tokens can be silently dropped, and the output then re-marshals only the first value, causing data loss. If the body contains multiple JSON values (e.g., `{"a":1}{"b":2}`), the decoder succeeds on the first one but ignores the rest. Since `normalizeJSON` currently returns no error in this case, the middleware sends the corrupted output downstream, preventing ConnectRPC from detecting the malformed request. Add an EOF check with a second decode before marshaling to detect this condition and return the original body instead, allowing ConnectRPC to surface the validation error.

💡 Proposed fix

```diff
 import (
 	"bytes"
 	"encoding/json"
+	"io"
 	"strings"
 )
@@
 	var parsed any
 	if err := decoder.Decode(&parsed); err != nil {
 		// Not valid JSON — pass through and let ConnectRPC surface the error.
 		return body, nil //nolint:nilerr // intentional: invalid JSON is not our error to report
 	}
+	var trailing any
+	if err := decoder.Decode(&trailing); err != io.EOF {
+		// Multiple JSON values / trailing garbage: preserve original body.
+		return body, nil
+	}
 	normalizeValue(parsed, lookup)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@service/internal/enumnormalize/enumnormalize.go` around lines 49-56: the decoder.Decode(&parsed) call can leave trailing JSON tokens unconsumed; add a second decode to ensure EOF before modifying and re-marshaling. After the initial decoder.Decode(&parsed), perform a second decode into a throwaway variable (e.g., var extra interface{}) and check that it returns io.EOF; if the second decode returns nil or any error other than io.EOF, return the original body (body, nil) so ConnectRPC can surface the malformed JSON; update imports to include io as needed.

service/internal/enumnormalize/middleware.go (1)
Lines 37-40: ⚠️ Potential issue | 🟠 Major: Return HTTP error responses instead of forwarding consumed request bodies.

The error handling at lines 38-40 forwards the request to `next.ServeHTTP()` after `io.ReadAll()` fails. When `MaxBytesReader` hits its limit or encounters a read error, the request body is consumed and cannot be re-read. Forwarding to downstream results in an empty or truncated body, causing misleading errors in the handler. Return appropriate HTTP error responses: `413 (Request Entity Too Large)` for `*http.MaxBytesError`, and `400 (Bad Request)` for other read failures.

Suggested fix

```diff
 import (
 	"bytes"
+	"errors"
 	"io"
 	"net/http"
 	"strconv"
 	"strings"
 )

 // maxBodySize is the upper bound on request bodies the middleware will read
 // into memory for normalization. Policy API request bodies are small (typically
 // under 10 KB); this cap prevents abuse while being generous enough for any
 // legitimate request. ConnectRPC enforces its own message size limits downstream.
 const maxBodySize = 1 << 20 // 1 MB

 // NewMiddleware returns HTTP middleware that normalises shorthand enum string
 // values in JSON request bodies for the given RPC paths. Requests that do not
 // match (wrong content-type, wrong path) are forwarded unchanged with zero
 // overhead.
 func NewMiddleware(rules []EnumFieldRule, paths []string) func(http.Handler) http.Handler {
 	lookup := buildLookup(rules)
 	pathSet := make(map[string]struct{}, len(paths))
 	for _, p := range paths {
 		pathSet[p] = struct{}{}
 	}
 	return func(next http.Handler) http.Handler {
 		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
 			// Only rewrite JSON bodies on matching RPC paths.
 			if !isJSON(r) || !matchesPath(r, pathSet) {
 				next.ServeHTTP(w, r)
 				return
 			}
 			body, err := io.ReadAll(http.MaxBytesReader(w, r.Body, maxBodySize))
 			if err != nil {
-				next.ServeHTTP(w, r)
+				var maxErr *http.MaxBytesError
+				if errors.As(err, &maxErr) {
+					http.Error(w, http.StatusText(http.StatusRequestEntityTooLarge), http.StatusRequestEntityTooLarge)
+					return
+				}
+				http.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest)
 				return
 			}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@service/internal/enumnormalize/middleware.go` around lines 37-40: the middleware currently calls io.ReadAll(http.MaxBytesReader(w, r.Body, maxBodySize)) and on error forwards to next.ServeHTTP, which passes a consumed/empty body; change the error handling to return an HTTP error instead. If the error is an *http.MaxBytesError respond with http.Error(..., http.StatusRequestEntityTooLarge) and return, otherwise respond with http.Error(..., http.StatusBadRequest) and return; keep references to io.ReadAll, http.MaxBytesReader, maxBodySize, next.ServeHTTP and use http.StatusRequestEntityTooLarge / http.StatusBadRequest to form the responses.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@service/internal/enumnormalize/middleware_test.go`:
- Around line 113-125: Add a regression test that exercises the MaxBytesReader
branch by sending a request whose normalized body exceeds the middleware's
maxBodySize and asserting the middleware returns HTTP 413 and does not invoke
the downstream handler; update or add a test (alongside
TestMiddleware_ContentLengthUpdated) that constructs a body that will grow when
normalized, uses NewMiddleware with the same testRules/testPath, sends the
request through handler.ServeHTTP, then assert the response recorder status is
413 and that captureHandler (capture or capture.called) was not invoked and
capture.body remains empty.
---
Duplicate comments:
In `@service/internal/enumnormalize/enumnormalize.go`:
- Around line 49-56: The decoder.Decode(&parsed) call can leave trailing JSON
tokens unconsumed; add a second decode to ensure EOF before modifying and
re-marshaling: after the initial decoder.Decode(&parsed) (the call near
normalizeValue(parsed, lookup)), perform a second decode into a throwaway
variable (e.g., var extra interface{}) and check that it returns io.EOF; if the
second decode returns nil or any error other than io.EOF, return the original
body (body, nil) so ConnectRPC can surface the malformed JSON; update imports to
include io as needed.
In `@service/internal/enumnormalize/middleware.go`:
- Around line 37-40: The middleware currently calls
io.ReadAll(http.MaxBytesReader(w, r.Body, maxBodySize)) and on error forwards to
next.ServeHTTP which passes a consumed/empty body; change the error handling to
return an HTTP error instead: if the error is an *http.MaxBytesError respond
with http.Error(..., http.StatusRequestEntityTooLarge) and return, otherwise
respond with http.Error(..., http.StatusBadRequest) and return; keep references
to io.ReadAll, http.MaxBytesReader, maxBodySize, next.ServeHTTP and use
http.StatusRequestEntityTooLarge / http.StatusBadRequest to form the responses.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: c4d90446-348c-4d7b-8831-4c4c8d132fbe
📒 Files selected for processing (4)
- service/internal/enumnormalize/enumnormalize.go
- service/internal/enumnormalize/enumnormalize_test.go
- service/internal/enumnormalize/middleware.go
- service/internal/enumnormalize/middleware_test.go
Add end-to-end tests that send raw HTTP POST requests with shorthand enum strings (bypassing the SDK) to verify the normalization middleware works through the full ConnectRPC stack. Tests cover:

- Shorthand operator/boolean enums (IN, IN_CONTAINS, AND)
- Shorthand rule type enum (ANY_OF)
- Mixed shorthand and canonical names in the same request

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Verify that when a request body exceeds the MaxBytesReader limit (1 MB), the middleware skips normalization and forwards the request unchanged. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests-bdd/cukes/steps_enum_shorthand.go`:
- Around line 134-140: The code currently checks scs["id"] != nil but directly
asserts scs["id"].(string) which can panic for non-string IDs; update the block
that handles result and scs in steps_enum_shorthand.go to use the comma-ok form
when converting scs["id"] to a string (e.g., id, ok := scs["id"].(string)) and
if !ok return a clear error (including respBody or the actual value/type)
instead of calling scenarioContext.RecordObject with a forced assertion; this
change should be made where scs is extracted and before calling
scenarioContext.RecordObject("shorthand_scs_id", ...).
- Around line 84-97: The three step methods repeat the same setup
(GetPlatformScenarioContext → ClearError → reading
ScenarioOptions.PlatformEndpoint → PlatformConfiguration.TokenEndpoint() →
getAccessToken); extract this into a helper (e.g., prepareHTTPRequest) that
takes ctx and returns the PlatformScenarioContext, endpoint string, bearer token
string and an error, and update
iCreateASubjectConditionSetViaHTTPWithShorthandEnums and the other two step
methods to call prepareHTTPRequest, return on error, and use the returned
sc/endpoint/token instead of duplicating the boilerplate.
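The comma-ok pattern recommended in the first comment above can be shown as a standalone sketch; `extractID` is a hypothetical helper for illustration, not code from the PR.

```go
package main

import "fmt"

// extractID pulls the string "id" out of a decoded JSON object using the
// comma-ok assertion, returning an error for non-string ids instead of
// panicking the way a forced scs["id"].(string) would.
func extractID(scs map[string]any) (string, error) {
	id, ok := scs["id"].(string)
	if !ok {
		return "", fmt.Errorf("unexpected id value %v (type %T)", scs["id"], scs["id"])
	}
	return id, nil
}

func main() {
	id, err := extractID(map[string]any{"id": "abc-123"})
	fmt.Println(id, err) // abc-123 <nil>

	_, err = extractID(map[string]any{"id": 42}) // would panic with a forced assertion
	fmt.Println(err != nil)                      // true
}
```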
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 7feed891-e536-4399-be4f-8b4e55031278
📒 Files selected for processing (4)
- service/internal/enumnormalize/middleware_test.go
- tests-bdd/cukes/steps_enum_shorthand.go
- tests-bdd/features/shorthand-enums.feature
- tests-bdd/platform_test.go
- Remove unnecessary TLS config (endpoints are plain HTTP)
- Extract prepareAuthenticatedRequest helper to reduce duplication
- Use http.NewRequestWithContext to satisfy noctx linter
- Fix errcheck on type assertion and perfsprint on static error string
- Revert stray io import in enumnormalize.go

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…PMiddleware

Move enumnormalize from internal/ to pkg/ so downstream consumers (e.g. DSP) can import EnumFieldRule and NewMiddleware.

Add ParentField to EnumFieldRule for disambiguating shared field names (e.g. DSP's tagging enums all use "type" but with different prefixes under different parent keys).

Add WithHTTPMiddleware server option so downstream servers can inject HTTP middleware into the handler chain for their own enum normalization rules.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Mary Dickson <mary.dickson@virtru.com>
Added three changes in 57921d8 to enable DSP (DSPX-3006) to use this same shorthand enum pattern for its own enums:

1. Exported
2. Added
3. Added

All existing tests pass + 5 new tests for parent-scoped rules.
Fields are at the top level of CreateAttributeRequest, not nested under an "attribute" key. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@service/pkg/enumnormalize/enumnormalize.go`:
- Around line 65-76: The current code decodes only the first JSON value (using
decoder.Decode into parsed) which allows inputs like `obj1obj2` to be accepted;
after the first Decode succeeds, perform a second Decode into a temporary
variable (e.g., var extra any) and require that it returns io.EOF; if the second
Decode returns anything other than io.EOF, return the original body unchanged.
Keep the same json.Decoder (with UseNumber) so it consumes any trailing bytes,
reference decoder.Decode, parsed, and normalizeValue when making the change.
In `@service/pkg/enumnormalize/middleware.go`:
- Around line 11-15: The middleware currently uses a hardcoded maxBodySize
(const maxBodySize = 1<<20) which truncates normalization for allowed ConnectRPC
messages up to the server's max (e.g., 4 MB); change the design so the size is
configurable: add a size parameter to enumnormalize.NewMiddleware and
remove/replace the hardcoded const maxBodySize in middleware.go (and other uses
in that file) to use the injected limit, and update server.newHTTPServer
(service/internal/server/server.go) to pass the server's configured max receive
size (e.g., int64(c.GRPC.MaxCallRecvMsgSizeBytes)) when calling
enumnormalize.NewMiddleware so normalization will honor the actual RPC limit.
In `@tests-bdd/cukes/steps_enum_shorthand.go`:
- Around line 34-35: The HTTP calls create a client with zero timeout (client :=
&http.Client{}) which can hang; replace those with a bounded timeout derived
from the request context: get ctx.Deadline (if present) and set client :=
&http.Client{Timeout: time.Until(deadline)} or else use a sane default (e.g.,
10s), then perform the request (client.Do(req)); ensure you import time and/or
use req = req.WithContext(ctx) so the request respects cancellation. Apply the
same change to both occurrences (around the client := &http.Client{} / resp, err
:= client.Do(req) sites).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 69ed74b0-49d9-413f-913d-174547b64bc3
📒 Files selected for processing (8)
- service/internal/server/server.go
- service/pkg/enumnormalize/enumnormalize.go
- service/pkg/enumnormalize/enumnormalize_test.go
- service/pkg/enumnormalize/middleware.go
- service/pkg/enumnormalize/middleware_test.go
- service/pkg/server/options.go
- service/pkg/server/start.go
- tests-bdd/cukes/steps_enum_shorthand.go
- Guard against trailing JSON tokens: add EOF check after first decode to prevent silently dropping concatenated JSON values
- Make maxBodySize configurable: NewMiddleware now accepts maxBodyBytes parameter (0 = default 1 MB) so callers can align with their own limits
- Add HTTP client timeouts to BDD test helpers to prevent CI hangs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Addressed all three CodeRabbit suggestions in 5e1ce05:

- Added EOF check after first decode. If trailing tokens exist (e.g.,
- Added
Use variadic parameter so callers can omit it entirely. Defaults to 1 MB. Callers that need a custom limit can pass it explicitly. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
closing in lieu of #3408
Summary
- Short, readable enum values (`IN`, `AND`, `ALL_OF`, `ACTIVE`) instead of verbose proto names (e.g. `SUBJECT_MAPPING_OPERATOR_ENUM_IN`) in JSON API requests
- New HTTP middleware (`service/internal/enumnormalize/`) that normalizes shorthand enum strings to their canonical proto form before ConnectRPC deserializes the request

Enums covered
| Enum | JSON field | Shorthand values |
|---|---|---|
| SubjectMappingOperatorEnum | operator | IN, NOT_IN, IN_CONTAINS |
| ConditionBooleanTypeEnum | booleanOperator | AND, OR |
| AttributeRuleTypeEnum | rule | ALL_OF, ANY_OF, HIERARCHY |
| ActiveStateEnum | state | ACTIVE, INACTIVE, ANY |

Before / After
Design decisions
Why HTTP middleware?
ConnectRPC uses `protojson.Unmarshal` internally to deserialize JSON request bodies. `protojson` only accepts canonical proto enum names — sending `"operator": "IN"` fails deserialization before any service code or interceptor runs. The normalization must happen before ConnectRPC processes the request, which means intercepting at the HTTP layer.

Alternatives considered
- (`option allow_alias = true`): `IN`, `AND`, `OR` violate the buf lint `ENUM_VALUE_PREFIX` rule, pollute the proto package namespace, trigger `WIRE_JSON` breaking change detection, and require SDK regeneration across all languages
The middleware only processes JSON request bodies for specific RPC paths rather than all requests. This avoids false positives where a JSON field name (e.g. `"state"`) could collide with a non-enum field on a different endpoint.

`MetadataUpdateEnum` was intentionally left out of this PR because it spans 15+ update RPCs across all policy services — it can be added as a follow-up once we decide on the path-scoping strategy for it.

Designed for reuse
The normalization engine is configurable via `EnumFieldRule` structs (JSON field name + proto prefix). Adding new enums (e.g. for DSPX-3006's Config Service enums) requires only appending entries — no structural changes.
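As a rough sketch of what appending rules looks like (the struct layout is an approximation from the PR description; prefixes other than `SUBJECT_MAPPING_OPERATOR_ENUM_` are inferred from the enum names and should be verified against the protos):

```go
package main

import "fmt"

// EnumFieldRule approximates the rule shape described in the PR: a JSON
// field name plus the canonical proto prefix to prepend to shorthand values.
type EnumFieldRule struct {
	JSONField string
	Prefix    string
}

func main() {
	rules := []EnumFieldRule{
		{JSONField: "operator", Prefix: "SUBJECT_MAPPING_OPERATOR_ENUM_"},
		{JSONField: "booleanOperator", Prefix: "CONDITION_BOOLEAN_TYPE_ENUM_"}, // inferred prefix
		{JSONField: "rule", Prefix: "ATTRIBUTE_RULE_TYPE_ENUM_"},               // inferred prefix
		{JSONField: "state", Prefix: "ACTIVE_STATE_ENUM_"},                     // inferred prefix
		// Supporting a new enum is one more entry; no structural changes needed.
	}
	fmt.Println(rules[0].Prefix + "IN") // SUBJECT_MAPPING_OPERATOR_ENUM_IN
}
```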
- (`IN`, `AND`, `ALL_OF`, `ACTIVE`) expanded to canonical names
- (`in`, `and`, `all_of`) works
- (`CreateSubjectConditionSetRequest` shape) handled
- `golangci-lint` passes with 0 issues
- `go build ./service/...` succeeds

Closes #3338
🤖 Generated with Claude Code
Summary by CodeRabbit
Release Notes
New Features
- (`IN`, `AND`) that are automatically normalized to canonical forms for policy-related operations

Tests