fix: model fallback to config.llm when resolveAgentPrimaryModelRef returns undefined (PR #618 continuation)#713

Open
jlin53882 wants to merge 8 commits into CortexReach:master from jlin53882:fix/pr618-clean-v2

Conversation

@jlin53882
Contributor

Summary

This PR continues the work from PR #618 with a clean diff.

Changes

  1. index.ts - Add inferProviderFromBaseURL() function:

    • Uses URL hostname with .endsWith(".suffix") to prevent subdomain spoofing
    • Supports minimax.io, openai.com, anthropic.com
  2. generateReflectionText() - Model fallback:

    • When resolveAgentPrimaryModelRef returns undefined, fall back to config.llm.model
    • When the provider is undefined, infer it from config.llm.baseURL
  3. CI manifest - Add regression test entry
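The hostname-suffix approach described in item 1 can be sketched as below. The suffix table and the returned provider identifiers ("minimax", "openai", "anthropic") are assumptions for illustration, not the PR's literal code:

```typescript
// Hedged sketch of inferProviderFromBaseURL(): match on the parsed URL
// hostname with an exact-or-dot-suffix check so hosts like
// "evil-openai.com" or "openai.com.attacker.net" cannot spoof a provider.
function inferProviderFromBaseURL(baseURL: string | undefined): string | undefined {
  if (!baseURL) return undefined;
  let hostname: string;
  try {
    hostname = new URL(baseURL).hostname;
  } catch {
    return undefined; // not a parseable URL
  }
  // Assumed suffix -> provider table; the real list lives in index.ts.
  const providers: Array<[suffix: string, provider: string]> = [
    ["minimax.io", "minimax"],
    ["openai.com", "openai"],
    ["anthropic.com", "anthropic"],
  ];
  for (const [suffix, provider] of providers) {
    if (hostname === suffix || hostname.endsWith("." + suffix)) return provider;
  }
  return undefined;
}
```

The exact-match-or-`.suffix` check is what blocks subdomain spoofing: a bare `endsWith("openai.com")` would also accept `evil-openai.com`.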

Fixes from Original PR #618

  • Fixed CRLF line endings (now LF-only)
  • Clean diff: 29 insertions, 2 deletions
  • Added stub-based test instead of copying production code

Original PR

Please close PR #618 as this PR supersedes it.

/cc @app3apps


@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b76f5afa6c

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread on index.ts (outdated), lines +1215 to +1216:

    ?? (llmConfig?.model as string | undefined);
    const split = modelRef ? splitProviderModel(modelRef) : { provider: undefined, model: undefined };

P2: Validate llm.model type before splitting provider/model

generateReflectionText now pulls llmConfig?.model through a type cast and then calls splitProviderModel(modelRef) without runtime type checks. If a user config provides any non-string truthy llm.model (for example a number/object), splitProviderModel will call .trim() on it and throw, which makes the embedded reflection path fail and forces CLI/fallback execution. Please guard this fallback with typeof === "string"/asNonEmptyString before splitting.

Useful? React with 👍 / 👎.
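The guard the review asks for can be sketched as follows; `asNonEmptyString` and the shape of `splitProviderModel` here are illustrative assumptions, not the file's actual definitions:

```typescript
// Assumed helper: only pass through non-empty strings, rejecting
// numbers/objects a user config might supply for llm.model.
function asNonEmptyString(value: unknown): string | undefined {
  return typeof value === "string" && value.trim().length > 0 ? value : undefined;
}

// Assumed shape of splitProviderModel: "provider/model" or bare "model".
function splitProviderModel(ref: string): { provider?: string; model: string } {
  const slash = ref.indexOf("/");
  if (slash === -1) return { model: ref.trim() };
  return { provider: ref.slice(0, slash).trim(), model: ref.slice(slash + 1).trim() };
}

// Guarded fallback chain: a non-string llm.model never reaches
// splitProviderModel, so its .trim() call cannot throw.
function resolveModelRef(primaryRef: string | undefined, llmConfig?: { model?: unknown }) {
  const modelRef = primaryRef ?? asNonEmptyString(llmConfig?.model);
  return modelRef
    ? splitProviderModel(modelRef)
    : { provider: undefined, model: undefined };
}
```

With the guard in place, a config like `{ model: 42 }` degrades to the undefined/undefined branch instead of crashing the embedded reflection path.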

…romBaseURL

- Add inferProviderFromBaseURL() using hostname.endsWith() to prevent subdomain spoofing
- Modify generateReflectionText to fallback to config.llm.model when resolveAgentPrimaryModelRef returns undefined
- Add regression test to CI manifest
@jlin53882 force-pushed the fix/pr618-clean-v2 branch from b76f5af to ea52a90 on April 28, 2026 at 10:54
jlin53882 and others added 5 commits April 28, 2026 19:04
…iderModel

Address Codex review feedback:
- Guard modelRefFromConfig with typeof === 'string' to prevent runtime errors
  when user config provides non-string values (number/object)
…icts

Conflict resolution:
- scripts/ci-test-manifest.mjs: keep both to-import-specifier-windows.test.mjs
  (removed in master f194d79 rebase, preserved here) and infer-provider-from-baseurl.test.mjs
- scripts/verify-ci-test-manifest.mjs: same resolution

Both test entries belong in core-regression group.
…oduction in test, add fallback comments

- Export inferProviderFromBaseURL for unit testing (matches other exported utilities)
- Rewrite test to jiti-import the actual function instead of duplicating logic
- Add inline comments explaining model/provider resolution chain in generateReflectionText
- Fix test header: PR CortexReach#618 -> PR CortexReach#713
- Add URL path variation test cases (deep path, no path)
Conflicts resolved:
- index.ts: combine HEAD model fallback + inferProviderFromBaseURL with master passing params.api
- ci-test-manifest.mjs: take master additions
- verify-ci-test-manifest.mjs: take master additions
jlin53882 added a commit to jlin53882/memory-lancedb-pro that referenced this pull request May 5, 2026
…onMode + PendingRecall hooks), sync CI manifest, add issue606 to verify baseline
@jlin53882 force-pushed the fix/pr618-clean-v2 branch from ca596f4 to be4ba28 on May 5, 2026 at 12:59
