feat(storage): add conversation memory store substrate #1065
zhoug9127 wants to merge 13 commits into lightseekorg:main
Conversation
No actionable comments were generated in the recent review. 🎉
📝 Walkthrough
Adds a StorageBundle and create_storage_bundle with an optional ConversationMemoryWriter; implements in-memory and no-op memory writers; introduces memory header parsing, runtime config, and execution context; and threads the memory execution context and optional memory writer through AppContext, middleware, and OpenAI routing.
Sequence Diagram

```mermaid
sequenceDiagram
    autonumber
    participant Client as HTTP Request
    participant Middleware as build_memory_execution_context
    participant Headers as MemoryHeaderView
    participant ExecCtx as MemoryExecutionContext
    participant Factory as create_storage_bundle
    participant Storage as StorageBundle
    participant AppCtx as AppContext
    participant Router as OpenAI Router
    Client->>Middleware: headers + RouterConfig
    Middleware->>Headers: MemoryHeaderView::from_http_headers(headers)
    Headers-->>Middleware: normalized header view
    Middleware->>ExecCtx: MemoryExecutionContext::from_headers(view, runtime)
    ExecCtx-->>Middleware: exec context (requested vs active flags)
    Factory->>Storage: create_storage_bundle(storage config)
    Storage-->>AppCtx: response/conversation/item storages + optional conversation_memory_writer
    AppCtx->>Router: inject router_config + conversation_memory_writer + ExecCtx
    Router->>Storage: components receive conversation_memory_writer (if present)
    Client->>Router: request handled using ExecCtx and Storage handles
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 passed
Code Review
This pull request introduces a new ConversationMemoryWriter trait and its implementations (Memory and NoOp) to the data connector, along with a new StorageBundle structure to manage storage backends. It also adds a MemoryRuntimeConfig and MemoryExecutionContext to the model gateway to handle memory-related HTTP headers and runtime configuration, including validation logic. My feedback highlights that header parsing for policies and boolean flags should be case-insensitive for robustness, and that the HeaderMap clone in RequestContext::refresh_memory_execution_context should be avoided to improve efficiency.
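The case-insensitivity point from this review can be sketched as follows. This is a minimal illustration with simplified stand-ins for the gateway's policy type and flag parser, not the PR's actual signatures.

```rust
// Hypothetical sketch: case-insensitive parsing of memory header values.
// `Policy` and the accepted strings are illustrative stand-ins.
#[derive(Debug, PartialEq)]
enum Policy {
    StoreAndRecall,
    StoreOnly,
}

fn parse_policy(raw: &str) -> Option<Policy> {
    // Normalize before matching so "Store_And_Recall" and " store_only "
    // are accepted alongside the lowercase forms.
    match raw.trim().to_ascii_lowercase().as_str() {
        "store_and_recall" => Some(Policy::StoreAndRecall),
        "store_only" => Some(Policy::StoreOnly),
        _ => None,
    }
}

fn parse_bool_flag(raw: &str) -> bool {
    matches!(
        raw.trim().to_ascii_lowercase().as_str(),
        "1" | "true" | "yes" | "enabled"
    )
}

fn main() {
    assert_eq!(parse_policy(" Store_And_Recall "), Some(Policy::StoreAndRecall));
    assert!(parse_bool_flag("TRUE"));
    assert!(!parse_bool_flag("off"));
    println!("ok");
}
```

Normalizing once (trim plus ASCII lowercase) keeps every header comparison robust without scattering `eq_ignore_ascii_case` calls.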
Force-pushed d7f018a to b344599
Signed-off-by: zhoug9127 <zhoug9127@gmail.com>
Force-pushed b344599 to 0996098
Signed-off-by: zhoug9127 <zhoug9127@gmail.com>
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
model_gateway/src/routers/openai/context.rs (1)
125-170: 🛠️ Refactor suggestion | 🟠 Major
Build `memory_execution_context` in the constructors, not as a separate refresh step.
`RequestContext::for_chat()` and `RequestContext::for_responses()` already have both the request headers and the router config, but they still seed `memory_execution_context` with `Default` and rely on a later `refresh_memory_execution_context()` call. That makes header parsing opt-in and easy to miss on new code paths; forgetting the extra call silently drops the memory controls for that request. Compute the context once during construction, and keep a refresh method only if headers can actually change after the `RequestContext` is built.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@model_gateway/src/routers/openai/context.rs` around lines 125 - 170, The constructors RequestContext::for_responses and RequestContext::for_chat currently set memory_execution_context to MemoryExecutionContext::default() and rely on refresh_memory_execution_context() later; instead, call middleware::build_memory_execution_context(self.components.router_config(), &headers) inside each constructor using the same headers extraction (the provided headers param or HeaderMap::default()) to initialize memory_execution_context at construction time, leaving refresh_memory_execution_context() only for cases where headers may change; update the fields in for_responses and for_chat to compute and assign memory_execution_context accordingly (referencing for_responses, for_chat, memory_execution_context, middleware::build_memory_execution_context, and refresh_memory_execution_context).
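The construction-time wiring this comment asks for can be sketched with simplified stand-ins. The `RequestContext`, header map, and `build_memory_execution_context` below are illustrative, not the gateway's real types.

```rust
// Hypothetical sketch: compute the execution context inside the constructor
// instead of defaulting it and relying on a later refresh call.
use std::collections::HashMap;

#[derive(Default, Debug)]
struct MemoryExecutionContext {
    store_ltm_requested: bool,
}

// Stand-in for middleware::build_memory_execution_context.
fn build_memory_execution_context(headers: &HashMap<String, String>) -> MemoryExecutionContext {
    MemoryExecutionContext {
        store_ltm_requested: headers
            .get("x-smg-memory-ltm-store-enabled")
            .map(|v| v.trim().eq_ignore_ascii_case("true"))
            .unwrap_or(false),
    }
}

struct RequestContext {
    memory_execution_context: MemoryExecutionContext,
}

impl RequestContext {
    // The constructor parses headers up front, so no code path can forget
    // a separate refresh step.
    fn for_chat(headers: &HashMap<String, String>) -> Self {
        Self {
            memory_execution_context: build_memory_execution_context(headers),
        }
    }
}

fn main() {
    let mut headers = HashMap::new();
    headers.insert(
        "x-smg-memory-ltm-store-enabled".to_string(),
        "true".to_string(),
    );
    let ctx = RequestContext::for_chat(&headers);
    assert!(ctx.memory_execution_context.store_ltm_requested);
    println!("ok");
}
```

The design point is that the invariant "context matches headers" is established once, at the only place that sees both inputs.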
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/data_connector/src/factory.rs`:
- Around line 168-185: The hook path currently wraps
response/conversation/conversation_item storages but leaves
bundle.conversation_memory_writer unwrapped, letting
ConversationMemoryWriter::create_memory bypass hooks; either wrap it in a new
HookedConversationMemoryWriter (create HookedConversationMemoryWriter that
implements the same trait as conversation_memory_writer and invokes
StorageHook::before/after/current_request_context() like
HookedResponseStorage/HookedConversationStorage/HookedConversationItemStorage)
and return
Arc::new(HookedConversationMemoryWriter::new(bundle.conversation_memory_writer,
hook.clone())) in the StorageBundle, or explicitly reject combinations by
returning an Err when config.hook.is_some() &&
bundle.conversation_memory_writer.is_some() until the wrapper exists; update
create_storage_bundle() to use the chosen approach and add a test asserting
hook-enabled bundles either wrap the memory writer or are rejected.
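The wrapper option described above can be sketched as follows. The trait and hook shapes are simplified stand-ins for the crate's `ConversationMemoryWriter` and `StorageHook`, not the real signatures.

```rust
// Hypothetical sketch: a hook-aware wrapper so create_memory no longer
// bypasses storage hooks, mirroring the HookedResponseStorage pattern.
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

trait ConversationMemoryWriter: Send + Sync {
    fn create_memory(&self, content: &str) -> String;
}

trait StorageHook: Send + Sync {
    fn before(&self, op: &str);
    fn after(&self, op: &str);
}

struct HookedConversationMemoryWriter {
    inner: Arc<dyn ConversationMemoryWriter>,
    hook: Arc<dyn StorageHook>,
}

impl ConversationMemoryWriter for HookedConversationMemoryWriter {
    fn create_memory(&self, content: &str) -> String {
        // Run the hook around the delegated call, like the other
        // Hooked* storage wrappers do.
        self.hook.before("create_memory");
        let id = self.inner.create_memory(content);
        self.hook.after("create_memory");
        id
    }
}

// Minimal test doubles.
struct CountingHook(AtomicUsize);
impl StorageHook for CountingHook {
    fn before(&self, _op: &str) {
        self.0.fetch_add(1, Ordering::SeqCst);
    }
    fn after(&self, _op: &str) {
        self.0.fetch_add(1, Ordering::SeqCst);
    }
}

struct FakeWriter;
impl ConversationMemoryWriter for FakeWriter {
    fn create_memory(&self, _content: &str) -> String {
        "mem_1".to_string()
    }
}

fn main() {
    let hook = Arc::new(CountingHook(AtomicUsize::new(0)));
    let wrapped = HookedConversationMemoryWriter {
        inner: Arc::new(FakeWriter),
        hook: hook.clone(),
    };
    assert_eq!(wrapped.create_memory("hello"), "mem_1");
    // before + after each fired exactly once around the delegated call.
    assert_eq!(hook.0.load(Ordering::SeqCst), 2);
    println!("ok");
}
```

Because the wrapper implements the same trait, `create_storage_bundle()` can return it in place of the raw writer with no caller changes.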
In `@model_gateway/src/app_context.rs`:
- Around line 559-564: After awaiting create_storage_bundle(storage_config), add
a fail-fast validation that if memory_runtime.ltm_store_enabled (the runtime
config flag) is true but bundle.conversation_memory_writer is None, return an
error (or propagate a Config/Init error) instead of continuing; update the code
path that sets self.conversation_memory_writer to only accept the Some(writer)
case (or fail when absent). Reference create_storage_bundle,
bundle.conversation_memory_writer, and memory_runtime.ltm_store_enabled so the
gateway will refuse to start when store-enabled is requested but the selected
backend exposes no memory writer.
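The fail-fast check this comment asks for can be sketched as below. The config and bundle types are simplified stand-ins for the gateway's `MemoryRuntimeConfig` and `StorageBundle`.

```rust
// Hypothetical sketch: refuse to start when the LTM store is enabled but
// the selected backend exposes no conversation memory writer.
struct MemoryRuntimeConfig {
    ltm_store_enabled: bool,
}

struct StorageBundle {
    // Stand-in for Option<Arc<dyn ConversationMemoryWriter>>.
    conversation_memory_writer: Option<String>,
}

fn validate_memory_writer(
    runtime: &MemoryRuntimeConfig,
    bundle: &StorageBundle,
) -> Result<(), String> {
    if runtime.ltm_store_enabled && bundle.conversation_memory_writer.is_none() {
        return Err(
            "ltm_store_enabled is set but the selected storage backend \
             exposes no conversation memory writer"
                .to_string(),
        );
    }
    Ok(())
}

fn main() {
    let runtime = MemoryRuntimeConfig { ltm_store_enabled: true };

    let empty = StorageBundle { conversation_memory_writer: None };
    assert!(validate_memory_writer(&runtime, &empty).is_err());

    let ready = StorageBundle {
        conversation_memory_writer: Some("writer".to_string()),
    };
    assert!(validate_memory_writer(&runtime, &ready).is_ok());
    println!("ok");
}
```

Running this right after `create_storage_bundle(storage_config)` resolves turns a silent runtime no-op into a startup error.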
In `@model_gateway/src/config/validation.rs`:
- Around line 1168-1170: The test currently only checks that
ConfigValidator::validate(&config) returns an Err, which is too broad; update
the assertion to match the specific error variant ConfigError::ValidationFailed
(and optionally inspect its reason/message) by pattern-matching on result (e.g.,
let err = result.expect_err(...); match err {
ConfigError::ValidationFailed(reason) => { /* assert reason contains expected
text */ }, _ => panic!("unexpected error: {:?}", err) }); this ensures the
failure is the new validation rule rather than some unrelated error.
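The tighter assertion style described above looks roughly like this. `ConfigError` and `validate` here are simplified stand-ins for the gateway's types.

```rust
// Hypothetical sketch: assert on the specific error variant instead of
// accepting any Err, so an unrelated failure can't mask the test.
#[derive(Debug)]
enum ConfigError {
    ValidationFailed(String),
    #[allow(dead_code)]
    Io(String),
}

// Stand-in for ConfigValidator::validate on a bad config.
fn validate() -> Result<(), ConfigError> {
    Err(ConfigError::ValidationFailed(
        "ltm_store_enabled requires ltm_enabled to be true".to_string(),
    ))
}

fn main() {
    let err = validate().expect_err("validation should fail");
    match err {
        ConfigError::ValidationFailed(reason) => {
            // Also inspect the message so the test pins the new rule.
            assert!(reason.contains("ltm_store_enabled"));
        }
        other => panic!("unexpected error: {:?}", other),
    }
    println!("ok");
}
```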
In `@model_gateway/src/memory/context.rs`:
- Around line 12-43: MemoryExecutionContext currently drops the parsed Policy so
callers cannot distinguish store_and_recall vs store_only; update
MemoryExecutionContext (and its constructors from_headers/from_http_headers) to
preserve the policy intent by adding explicit fields (e.g., recall_requested and
recall_active) or keeping the parsed Policy, and set them based on
Policy::from_value(headers.policy.as_deref()) and runtime flags; ensure
store_ltm_requested, store_ltm_active logic remains but also compute
recall_requested = policy.allows_recall() and recall_active = recall_requested
&& runtime.recall_enabled (or equivalent), populate
subject_id/recall_method/embedding_model/extraction_model as before, and
propagate these new fields so downstream hooks can observe recall intent in
MemoryExecutionContext and from_headers.
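Preserving recall intent alongside the store flags can be sketched as below. The `Policy` and runtime shapes are simplified stand-ins for the PR's types.

```rust
// Hypothetical sketch: keep both the requested and the active recall flag
// so downstream hooks can observe intent even when the runtime gates it off.
#[derive(Clone, Copy)]
enum Policy {
    StoreAndRecall,
    StoreOnly,
}

impl Policy {
    fn allows_recall(self) -> bool {
        matches!(self, Policy::StoreAndRecall)
    }
}

struct Runtime {
    recall_enabled: bool,
}

#[derive(Debug)]
struct MemoryExecutionContext {
    recall_requested: bool,
    recall_active: bool,
}

impl MemoryExecutionContext {
    fn from_policy(policy: Policy, runtime: &Runtime) -> Self {
        let recall_requested = policy.allows_recall();
        Self {
            recall_requested,
            // Requested intent survives even when the runtime disables
            // recall; only the active flag is gated.
            recall_active: recall_requested && runtime.recall_enabled,
        }
    }
}

fn main() {
    let runtime = Runtime { recall_enabled: false };
    let ctx = MemoryExecutionContext::from_policy(Policy::StoreAndRecall, &runtime);
    assert!(ctx.recall_requested);
    assert!(!ctx.recall_active);
    println!("ok");
}
```

The requested/active split mirrors the existing `store_ltm_requested` / `store_ltm_active` pair, so callers can distinguish `store_and_recall` from `store_only` after parsing.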
---
Outside diff comments:
In `@model_gateway/src/routers/openai/context.rs`:
- Around line 125-170: The constructors RequestContext::for_responses and
RequestContext::for_chat currently set memory_execution_context to
MemoryExecutionContext::default() and rely on refresh_memory_execution_context()
later; instead, call
middleware::build_memory_execution_context(self.components.router_config(),
&headers) inside each constructor using the same headers extraction (the
provided headers param or HeaderMap::default()) to initialize
memory_execution_context at construction time, leaving
refresh_memory_execution_context() only for cases where headers may change;
update the fields in for_responses and for_chat to compute and assign
memory_execution_context accordingly (referencing for_responses, for_chat,
memory_execution_context, middleware::build_memory_execution_context, and
refresh_memory_execution_context).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: eefa691b-bd5d-491a-8137-761a1dd03f8e
📒 Files selected for processing (17)
crates/data_connector/src/factory.rs
crates/data_connector/src/lib.rs
crates/data_connector/src/memory.rs
crates/data_connector/src/noop.rs
model_gateway/src/app_context.rs
model_gateway/src/config/builder.rs
model_gateway/src/config/types.rs
model_gateway/src/config/validation.rs
model_gateway/src/lib.rs
model_gateway/src/memory/context.rs
model_gateway/src/memory/headers.rs
model_gateway/src/memory/mod.rs
model_gateway/src/middleware.rs
model_gateway/src/routers/openai/context.rs
model_gateway/src/routers/openai/responses/route.rs
model_gateway/src/routers/openai/router.rs
model_gateway/src/service_discovery.rs
Signed-off-by: zhoug9127 <zhoug9127@gmail.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 2c035f5398
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
model_gateway/src/app_context.rs (1)
286-360: ⚠️ Potential issue | 🟠 Major
Enforce the memory-writer invariant in `build()` too.
The new readiness check only runs in `with_storage()`. Any caller that uses `AppContextBuilder` directly can still build an `AppContext` with `memory_runtime.ltm_store_enabled = true` and `conversation_memory_writer = None`, which recreates the missing-writer state this PR is trying to fail fast on.
Suggested fix:

```diff
 pub fn build(self) -> Result<AppContext, AppContextBuildError> {
     let router_config = self
         .router_config
         .ok_or(AppContextBuildError::MissingField("router_config"))?;
+    validate_memory_writer_configuration(
+        &router_config,
+        self.conversation_memory_writer.as_ref(),
+    )
+    .map_err(AppContextBuildError::InvalidConfig)?;
+
     let configured_reasoning_parser = router_config.reasoning_parser.clone();
     let configured_tool_parser = router_config.tool_call_parser.clone();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@model_gateway/src/app_context.rs` around lines 286 - 360, The build() method must enforce the same invariant as with_storage(): if router_config.memory_runtime.ltm_store_enabled (or equivalent flag on the RouterConfig) is true then conversation_memory_writer must be Some; add an early check in AppContextBuilder::build that reads router_config (or self.router_config) and returns Err(AppContextBuildError::InvalidConfig(...)) when ltm_store_enabled is true but self.conversation_memory_writer.is_none(), mirroring the with_storage() readiness check to fail fast; reference the build() function, with_storage(), RouterConfig.memory_runtime.ltm_store_enabled, and conversation_memory_writer when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/data_connector/src/factory.rs`:
- Around line 169-175: The current check in factory.rs unconditionally rejects
any config.hook when bundle.conversation_memory_writer.is_some(), which wrongly
blocks HistoryBackend::Memory and HistoryBackend::None even if the LTM store is
disabled; change the guard so it only errors when hooks are enabled AND the
runtime will actually use the conversation writer (e.g., check
memory_runtime.ltm_store_enabled or equivalent runtime flag) or move this
validation out of the factory into the runtime-aware layer that has access to
memory_runtime.ltm_store_enabled; specifically modify the logic around
config.hook and bundle.conversation_memory_writer to defer or conditionalize the
rejection (do not unconditionally reject for HistoryBackend::Memory/None).
---
Outside diff comments:
In `@model_gateway/src/app_context.rs`:
- Around line 286-360: The build() method must enforce the same invariant as
with_storage(): if router_config.memory_runtime.ltm_store_enabled (or equivalent
flag on the RouterConfig) is true then conversation_memory_writer must be Some;
add an early check in AppContextBuilder::build that reads router_config (or
self.router_config) and returns Err(AppContextBuildError::InvalidConfig(...))
when ltm_store_enabled is true but self.conversation_memory_writer.is_none(),
mirroring the with_storage() readiness check to fail fast; reference the build()
function, with_storage(), RouterConfig.memory_runtime.ltm_store_enabled, and
conversation_memory_writer when making the change.
📒 Files selected for processing (5)
crates/data_connector/src/factory.rs
model_gateway/src/app_context.rs
model_gateway/src/config/validation.rs
model_gateway/src/memory/context.rs
model_gateway/src/routers/openai/context.rs
Signed-off-by: zhoug9127 <zhoug9127@gmail.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@model_gateway/src/memory/context.rs`:
- Around line 95-108: The is_enabled helper currently treats "1", "true", "yes",
and "enabled" as truthy; update is_enabled to also accept "on" as a truthy value
by adding a check for value.eq_ignore_ascii_case("on") in the boolean expression
that compares the normalized value (function: is_enabled, uses normalize to trim
input). Ensure normalize(&str) remains used so values like " on " are correctly
handled.
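A minimal sketch of the extended helper pair described above, assuming the `normalize`/`is_enabled` shape matches the description:

```rust
// Sketch of the truthy-value helper extended with "on"; the gateway's
// actual helper may differ in signature.
fn normalize(value: &str) -> String {
    value.trim().to_ascii_lowercase()
}

fn is_enabled(value: &str) -> bool {
    matches!(
        normalize(value).as_str(),
        "1" | "true" | "yes" | "enabled" | "on"
    )
}

fn main() {
    // normalize() trims first, so padded values like " On " are accepted.
    assert!(is_enabled(" On "));
    assert!(is_enabled("TRUE"));
    assert!(!is_enabled("off"));
    println!("ok");
}
```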
📒 Files selected for processing (4)
crates/data_connector/src/factory.rs
model_gateway/src/app_context.rs
model_gateway/src/memory/context.rs
model_gateway/src/routers/openai/context.rs
Signed-off-by: zhoug9127 <zhoug9127@gmail.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@model_gateway/src/config/validation.rs`:
- Around line 82-94: Change the validation error text in validate_memory_runtime
(the ConfigError::ValidationFailed returned when
memory_runtime.ltm_store_enabled is true but memory_runtime.ltm_enabled is
false) to a more explicit constraint wording such as stating that
"ltm_store_enabled requires ltm_enabled to be true" so the message reads clearly
and directly conveys the dependency between memory_runtime.ltm_store_enabled and
memory_runtime.ltm_enabled.
📒 Files selected for processing (1)
model_gateway/src/config/validation.rs
Signed-off-by: zhoug9127 <zhoug9127@gmail.com>
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/data_connector/src/memory.rs`:
- Around line 280-283: Update the doc comment on MemoryConversationMemoryWriter
to accurately describe which backend it serves: replace the misleading
"memory/noop history backends" text with a clear note that
MemoryConversationMemoryWriter is used only for HistoryBackend::Memory (and that
HistoryBackend::None uses NoOpConversationMemoryWriter). Locate the struct
MemoryConversationMemoryWriter and its doc comment and change the description to
reference HistoryBackend::Memory and NoOpConversationMemoryWriter explicitly so
readers aren’t confused about backend wiring.
In `@model_gateway/src/memory/headers.rs`:
- Around line 5-10: The header-name constants (MEMORY_POLICY_HEADER,
MEMORY_LTM_STORE_ENABLED_HEADER, MEMORY_SUBJECT_ID_HEADER,
MEMORY_RECALL_METHOD_HEADER, MEMORY_EMBEDDING_MODEL_HEADER,
MEMORY_EXTRACTION_MODEL_HEADER) should be made public so other modules and tests
can reuse the single source of truth: change each const to pub const in
model_gateway/src/memory/headers.rs and update any callers (e.g., tests in
model_gateway/src/routers/openai/context.rs) to import and reference these
constants instead of hardcoding "x-smg-memory-ltm-store-enabled" (and other
header strings).
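The single-source-of-truth change reads roughly as below; the constant values mirror the placeholder headers listed in the PR description (a subset is shown).

```rust
// Sketch: header names exported as public constants so tests and other
// modules stop hardcoding the strings.
pub const MEMORY_POLICY_HEADER: &str = "x-smg-memory-policy";
pub const MEMORY_LTM_STORE_ENABLED_HEADER: &str = "x-smg-memory-ltm-store-enabled";
pub const MEMORY_SUBJECT_ID_HEADER: &str = "x-smg-memory-subject-id";

fn main() {
    // A test can now reference the constant instead of the raw string.
    assert_eq!(
        MEMORY_LTM_STORE_ENABLED_HEADER,
        "x-smg-memory-ltm-store-enabled"
    );
    println!("ok");
}
```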
In `@model_gateway/src/routers/openai/context.rs`:
- Around line 301-359: Add a regression test that follows the Responses path
through RequestContext::for_responses and then calls into_streaming_context() to
ensure the new fields are preserved; specifically, construct a RequestContext
via RequestContext::for_responses (using
shared_components_with_memory_enabled()), call ctx.into_streaming_context(), and
assert that the resulting StorageHandles contains a non-none
conversation_memory_writer and that its memory_execution_context has
store_ltm_active set (i.e., verify both conversation_memory_writer and
memory_execution_context survive into StorageHandles).
📒 Files selected for processing (6)
crates/data_connector/src/memory.rs
crates/data_connector/src/noop.rs
model_gateway/src/config/builder.rs
model_gateway/src/config/validation.rs
model_gateway/src/memory/headers.rs
model_gateway/src/routers/openai/context.rs
Actionable comments posted: 1
♻️ Duplicate comments (2)
crates/data_connector/src/memory.rs (1)
282-286: ⚠️ Potential issue | 🟡 Minor
Fix the backend description in this doc comment.
The doc comment states "memory/noop history backends", but `MemoryConversationMemoryWriter` is only wired for `HistoryBackend::Memory`. The `HistoryBackend::None` variant uses `NoOpConversationMemoryWriter` instead.
📝 Suggested fix:

```diff
 #[derive(Default, Clone)]
-/// In-memory conversation memory writer used by memory/noop history backends.
+/// In-memory conversation memory writer used by the Memory history backend.
 pub struct MemoryConversationMemoryWriter {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/data_connector/src/memory.rs` around lines 282 - 286, The doc comment for MemoryConversationMemoryWriter incorrectly says it's used by "memory/noop history backends" — update the comment to correctly state that MemoryConversationMemoryWriter is used for the HistoryBackend::Memory (in-memory) backend only (and not for HistoryBackend::None, which uses NoOpConversationMemoryWriter); adjust the text on the MemoryConversationMemoryWriter struct to reflect that it is an in-memory conversation memory writer wired for HistoryBackend::Memory.

model_gateway/src/routers/openai/context.rs (1)
368-427: 🧹 Nitpick | 🔵 Trivial
Add a regression test for the streaming handoff.
These tests verify `RequestContext` initialization, but the new fields are consumed later in `into_streaming_context()` (lines 316-352). A test using `ResponsesComponents` should verify that `conversation_memory_writer` and `memory_execution_context` survive into `StorageHandles`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@model_gateway/src/routers/openai/context.rs` around lines 368 - 427, Add a regression test that ensures RequestContext fields used by the streaming handoff survive into StorageHandles: create a test that builds a RequestContext (using ResponsesRequest and ResponsesComponents/SharedComponents similar to the existing tests), call RequestContext::into_streaming_context() (or the method that produces StorageHandles) and assert that the returned StorageHandles contain a non-None conversation_memory_writer and that memory_execution_context (e.g., store_ltm_active) is preserved; reference RequestContext::into_streaming_context, ResponsesComponents, StorageHandles, conversation_memory_writer, and memory_execution_context when locating where to add the test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/data_connector/src/memory.rs`:
- Around line 297-307: Add unit tests for MemoryConversationMemoryWriter that
validate create_memory behavior: write tests that call
MemoryConversationMemoryWriter::create_memory with a sample
NewConversationMemory and assert the returned ConversationMemoryId starts with
"mem_" and is unique across multiple calls (use ulid-based format expectations),
inspect the writer's inner store (via its .inner RwLock) to confirm the created
entry was inserted under that id and contains the expected NewConversationMemory
content, and add a clone-sharing-state test similar to
test_conversation_item_storage_clone_shares_state that clones
MemoryConversationMemoryWriter and verifies mutations (creating a memory) on one
instance are visible through the other. Use the types ConversationMemoryId,
NewConversationMemory, and MemoryConversationMemoryWriter to locate targets.
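The clone-shares-state test described above can be sketched with a simplified stand-in for the writer (a shared `RwLock`'d map; a counter stands in for the ULID-based id).

```rust
// Hypothetical sketch of MemoryConversationMemoryWriter and the suggested
// tests; the real type wraps different storage and ULID ids.
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

#[derive(Default, Clone)]
struct MemoryConversationMemoryWriter {
    // Arc makes clones share one store, like the other memory storages.
    inner: Arc<RwLock<HashMap<String, String>>>,
}

impl MemoryConversationMemoryWriter {
    fn create_memory(&self, content: &str) -> String {
        let mut store = self.inner.write().unwrap();
        // Counter-based suffix as a stand-in for a ULID.
        let id = format!("mem_{}", store.len() + 1);
        store.insert(id.clone(), content.to_string());
        id
    }
}

fn main() {
    let writer = MemoryConversationMemoryWriter::default();
    let clone = writer.clone();

    let id = writer.create_memory("hello");
    assert!(id.starts_with("mem_"));

    // Mutations through one instance are visible through the clone.
    let store = clone.inner.read().unwrap();
    assert_eq!(store.get(&id).map(String::as_str), Some("hello"));
    println!("ok");
}
```

A second call should produce a distinct id, which is the uniqueness property the suggested test pins down.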
---
Duplicate comments:
In `@crates/data_connector/src/memory.rs`:
- Around line 282-286: The doc comment for MemoryConversationMemoryWriter
incorrectly says it's used by "memory/noop history backends" — update the
comment to correctly state that MemoryConversationMemoryWriter is used for the
HistoryBackend::Memory (in-memory) backend only (and not for
HistoryBackend::None, which uses NoOpConversationMemoryWriter); adjust the text
on the MemoryConversationMemoryWriter struct to reflect that it is an in-memory
conversation memory writer wired for HistoryBackend::Memory.
In `@model_gateway/src/routers/openai/context.rs`:
- Around line 368-427: Add a regression test that ensures RequestContext fields
used by the streaming handoff survive into StorageHandles: create a test that
builds a RequestContext (using ResponsesRequest and
ResponsesComponents/SharedComponents similar to the existing tests), call
RequestContext::into_streaming_context() (or the method that produces
StorageHandles) and assert that the returned StorageHandles contain a non-None
conversation_memory_writer and that memory_execution_context (e.g.,
store_ltm_active) is preserved; reference
RequestContext::into_streaming_context, ResponsesComponents, StorageHandles,
conversation_memory_writer, and memory_execution_context when locating where to
add the test.
📒 Files selected for processing (4)
crates/data_connector/src/memory.rs
crates/data_connector/src/noop.rs
model_gateway/src/config/validation.rs
model_gateway/src/routers/openai/context.rs
Signed-off-by: zhoug9127 <zhoug9127@gmail.com>
Description
Problem
SMG already supports response, conversation, and conversation-item persistence, but it does not yet expose a generic conversation-memory store substrate that later request hooks can consume. We need that generic substrate in upstream first, without introducing specific planners, row semantics, or persistence behavior changes.
Header Note (Placeholder)
Current memory request headers are placeholders and are still under discussion.
They are intentionally treated as provisional and may be renamed in a follow-up PR:
- x-smg-memory-policy
- x-smg-memory-ltm-store-enabled
- x-smg-memory-subject-id
- x-smg-memory-recall-method
- x-smg-memory-embedding-model
- x-smg-memory-extraction-model

Solution
Add a generic conversation-memory store substrate and normalized request memory execution context.
This PR:
- `ConversationMemoryWriter` exposure through the storage factory and app context (the `None` history backend gets a no-op writer)
- `memory_runtime` config plus validation for store readiness
- a normalized `MemoryExecutionContext`

Non-goals in this PR: no specific planners, row semantics, or persistence behavior changes.
Changes
- `create_storage_bundle` plumbing and optional `conversation_memory_writer`
- `memory_runtime` config and validation
- `model_gateway::memory` module for normalized request memory context

Test Plan
cargo test -p data-connector test_create_storage_bundle_memory_exposes_memory_writer -- --nocapture
cargo test -p data-connector test_create_storage_bundle_none_exposes_memory_writer -- --nocapture
cargo test -p smg test_builder_memory_runtime_flags_round_trip -- --nocapture
cargo test -p smg test_reject_ltm_store_when_ltm_runtime_disabled -- --nocapture
cargo test -p smg header_policy_store_and_recall_enables_store_request -- --nocapture
cargo test -p smg missing_headers_produce_inactive_context -- --nocapture
cargo test -p smg store_headers_become_active_when_runtime_is_ready -- --nocapture
git diff --check