mirror of https://github.com/danny-avila/LibreChat.git
synced 2026-05-13 15:58:48 +00:00
501 commits
c67e2b54dc
🔐 feat: Mint Code API Auth Tokens (#13028)
* feat: Mint CodeAPI auth tokens
* style: Format CodeAPI download route
* fix: Prune CodeAPI token cache
* fix: Propagate CodeAPI managed auth
* test: Mock CodeAPI auth in traversal suite
* fix: Pass auth context to invoked skill cache
* feat: Mint CodeAPI plan context
* chore: Refresh CodeAPI auth guidance
* fix: Guard OpenID JWT fallback
* fix: Default CodeAPI JWT tenant in single-tenant mode
* chore: Update @librechat/agents to version 3.1.84 in package-lock.json and package.json files
* chore: Standardize references to Code API in comments and tests
d90567204e
🛟 fix: persist Vertex Gemini 3 thoughtSignatures across DB round-trips (#13026)
When a tool round-trip is interrupted between the tool result and the
model's text reply (user aborted, network drop, pod restart, ...) and
LibreChat persists the partial assistant message, the next conversation
turn reconstructs an `AIMessage` from `formatAgentMessages` that has
`tool_calls` populated but no `additional_kwargs.signatures`. Vertex
Gemini 3 rejects the resumed request with 400 because the most recent
historical functionCall has no `thought_signature`.
## Storage shape
Capture as `Record<tool_call_id, signature>` rather than a flat array.
This addresses the codex P1 review:
> When an assistant turn contains multiple sequential tool-call batches,
> this restoration path writes all persisted thoughtSignatures onto only
> the last tool-bearing AIMessage. Vertex/Gemini validates signatures
> for each step in the current tool-calling turn, so earlier
> functionCall steps reconstructed without their signature can still
> fail with 400.
A single agent run can fire multiple `chat_model_end` events when the
loop cycles the LLM with intervening tool results — each cycle owns a
distinct `tool_call_id`. Per-id storage maps each signature back onto
the right reconstructed `AIMessage`, not just the last one.
## Mapping
`additional_kwargs.signatures` is a flat array indexed by *response part*
(text + functionCall interleaved). `tool_calls` is just the function
calls in their original order. Non-empty signatures correspond 1:1 with
tool_calls in order — see `partsToSignatures` in
`@langchain/google-common`. Single-pass walk maps `signatures[i]` (when
non-empty) onto the i-th `tool_call.id`.
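The single-pass walk described above can be sketched roughly as follows (helper name hypothetical; the actual capture lives in `ModelEndHandler` in callbacks.js):

```javascript
// Hypothetical sketch of the capture-side walk described above.
// `signatures` is the flat per-response-part array (empty entries for
// text parts); non-empty entries line up 1:1 with `toolCalls` in order.
function mapSignaturesToToolCalls(signatures, toolCalls, collected = {}) {
  let toolCallIndex = 0;
  for (const signature of signatures) {
    if (signature == null || signature === '') {
      continue; // text part: no functionCall, nothing to record
    }
    const toolCall = toolCalls[toolCallIndex++];
    if (toolCall?.id) {
      collected[toolCall.id] = signature; // per-id, not positional
    }
  }
  return collected;
}
```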
## Pipeline
| Stage | File | Change |
|---|---|---|
| Capture | callbacks.js | `ModelEndHandler` accepts `Record<string,string>` map; walks signatures + tool_calls in tandem to record per-id. Gated on the map being provided — non-Vertex flows are no-op (and also no-op even when provided, since they don't emit signatures). |
| Plumbing | initialize.js | Allocate `collectedThoughtSignatures = {}`, share with handler + client. Always allocated; the JSDoc explicitly documents that it stays empty for non-Vertex providers. |
| Surface | client.js | `sendCompletion` returns `metadata.thoughtSignatures` when the map has entries; falls through unchanged when empty. |
| Persist | (existing BaseClient.handleRespCompletion) | Writes `metadata` from `sendCompletion` onto `responseMessage.metadata`. Mongoose `Mixed` — no migration. |
| Restore | formatMessages.js | Track every tool-bearing AIMessage produced from a TMessage. For each, build a position-aligned `additional_kwargs.signatures` array (empty placeholders for tool_calls without a stored sig). Agents' `fixThoughtSignatures` dispatches non-empty entries to functionCall parts in order. |
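The restore stage in the table above (a position-aligned array with empty placeholders) can be sketched as a minimal helper (name hypothetical; the real logic lives in formatMessages.js):

```javascript
// Hypothetical sketch of the restore side: rebuild a position-aligned
// `signatures` array from the persisted per-id map, with empty-string
// placeholders for tool_calls that have no stored signature.
function restoreSignatures(aiMessage, thoughtSignatures) {
  const toolCalls = aiMessage.tool_calls ?? [];
  if (toolCalls.length === 0 || !thoughtSignatures) {
    return aiMessage; // nothing to restore
  }
  const signatures = toolCalls.map(({ id }) => thoughtSignatures[id] ?? '');
  if (signatures.every((s) => s === '')) {
    return aiMessage; // no stored ids matched: leave the message untouched
  }
  aiMessage.additional_kwargs = { ...aiMessage.additional_kwargs, signatures };
  return aiMessage;
}
```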
## Live verification
- **Single-step:** real Vertex `gemini-3.1-flash-lite-preview` resume-after-tool case. With fix ✅ / without ❌ 400.
- **Multi-step (codex case):** real two-step agent loop (list /tmp → echo done). Each step's signature attaches to its own reconstructed AIMessage. With fix ✅ / without ❌ 400.
- **Cross-provider:** Anthropic Claude haiku-4.5 + OpenAI gpt-5-mini accept the persisted/restored shape unchanged.
## Tests
`modelEndHandler.spec.js` (new) — 6 tests:
- maps non-empty signatures onto tool_call_ids in order
- accumulates per-id across multiple `model_end` events (multi-step)
- no-op when `collectedThoughtSignatures` is null
- no-op when `signatures` field missing (non-Vertex)
- no-op when `tool_calls` missing
- preserves existing `collectedUsage` array contract
`formatAgentMessages.spec.js` — 6 new tests:
- restores onto the AIMessage that owns the tool_call
- per-step attachment for multi-step turns (codex review case)
- preserves tool_call ordering when signatures are partial
- no-op when metadata.thoughtSignatures absent
- no-op when assistant has no tool_calls
- no-op when stored ids don't match any current tool_call
37 passing across 3 suites; 15 existing formatAgentMessages tests unchanged.
## Compatibility
- Backward-compatible — restore gated on `metadata.thoughtSignatures` being a populated object; capture gated on the map being provided.
- No schema migration — uses `Message.metadata: Mixed` already in place.
- Cross-provider safe — non-Vertex providers tolerate the field (verified live against Anthropic + OpenAI converters).
- Pairs with [agents#159](https://github.com/danny-avila/agents/pull/159) for full coverage on histories that mix plain-text and toolcall AIMessages.
93c4ef4ba8
🧱 refactor: typed CodeEnvRef + kind discriminator + principal-aware sandbox cache (#12960)
* 🧱 refactor: typed CodeEnvRef + kind discriminator + tenant-aware sandbox cache

Final cutover for the LibreChat ↔ codeapi sandbox file identity. Replaces the magic string `${session_id}/${file_id}?entity_id=...` with a typed, discriminated `CodeEnvRef`. Pre-release lockstep deploy with codeapi #1455 and agents #148; no legacy aliases retained.

## Final shape

```ts
type CodeEnvRef =
  | { kind: 'skill'; id: string; storage_session_id: string; file_id: string; version: number }
  | { kind: 'agent'; id: string; storage_session_id: string; file_id: string }
  | { kind: 'user'; id: string; storage_session_id: string; file_id: string };
```

`kind` drives codeapi's sessionKey: `<tenant>:<kind>:<id>[✌️<version>]` for shared kinds, `<tenant>:user:<userId>` for user-private (auth context provides `userId`). `version` is statically required for `kind: 'skill'` and forbidden otherwise via the discriminated union — the constraint holds at compile time for every consumer, not just codeapi's runtime validator. `id` is sessionKey-meaningful for `'skill'` / `'agent'`; informational only for `'user'` (codeapi resolves user identity from auth context).

## What changed
- `packages/data-provider/src/codeEnvRef.ts` — discriminated union + `CODE_ENV_KINDS` const-tuple keeps the runtime list and the TS union locked together.
- Schemas: `metadata.codeEnvRef` and `SkillFile.codeEnvRef` enums tightened to `['skill', 'agent', 'user']`.
- `primeSkillFiles` writes `kind: 'skill'`, `id: skill._id`, `version: skill.version`. The cache-hit path reads `codeEnvRef` directly. Bumping `skill.version` on edit naturally invalidates the prior cache entry under the new sessionKey.
- `processCodeOutput` writes `kind: 'user'`, `id: req.user.id`. The output bucket is always user-scoped, regardless of which skill the execution invoked. A new regression test pins the asymmetry.
- `primeFiles` reupload preserves `kind`/`id`/`version?` from the existing ref so a skill-cache-miss reupload doesn't silently demote to the user bucket.
- `crud.js` upload functions (`uploadCodeEnvFile` / `batchUploadCodeEnvFiles`) thread `kind`/`id`/`version?` to the multipart form (codeapi #1455 option α). Without these on the wire, codeapi falls back to user bucketing and skill-cache invalidation never fires. Client-side validation mirrors codeapi's validator.
- `Files/process.js` — chat attachments use `kind: 'user'`; agent setup files use `kind: 'agent'`.
- Drops `entity_id` everywhere (struct, schema sub-docs, write paths, upload form fields). Drops `'system'` from the kind enum (no emitter ever existed).

## Test plan
- [x] `cd packages/data-provider && npx jest src/codeEnvRef.spec` — 4 / 4
- [x] `cd packages/data-schemas && npx jest` — 1447 / 1447
- [x] `cd packages/api && npx jest src/agents` — 81 / 81 in skillFiles + handlers + resources
- [x] `cd api && npx jest server/services/Files server/controllers/agents` — 436 / 436
- [x] `cd api && npx jest server/services/Files/Code` — 98 / 98 (incl. new "outputs are user-scoped regardless of which skill the execution invoked" regression and "reupload forwards kind/id/version from existing ref")
- [x] `npx tsc --noEmit -p packages/data-{provider,schemas}/tsconfig.json && npx tsc --noEmit -p packages/api/tsconfig.json` — clean (only pre-existing unrelated dev errors in storage/balance, untouched here)

## Deploy notes
- **24h cache-miss burst** on first deploy. Inputs (skill caches re-prime under the new sessionKey shape) and outputs (any pre-Phase C skill-output cached files become unreadable). Bounded by codeapi's 24h TTL.
- **Lockstep with codeapi #1455 and agents #148.** Either repo can land first since there are no aliases to drain, but the three deploys must overlap within the same maintenance window.
- **`@librechat/agents` bump to `3.1.79-dev.0`** required after agents #148 lands and is published.
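The sessionKey rule described above can be sketched as follows (helper name hypothetical; codeapi derives the real key server-side from its auth context):

```javascript
// Hypothetical sketch of codeapi's sessionKey derivation for a CodeEnvRef.
// Shared kinds key on the ref's own id (plus version for skills);
// user-private refs key on the authenticated userId, not ref.id.
function deriveSessionKey(tenantId, ref, userId) {
  if (ref.kind === 'user') {
    return `${tenantId}:user:${userId}`; // ref.id is informational only
  }
  if (ref.kind === 'skill') {
    return `${tenantId}:skill:${ref.id}✌️${ref.version}`;
  }
  return `${tenantId}:agent:${ref.id}`;
}
```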
## What this enables
Auth bridge work (JWT-based tenant/user identity between LC and codeapi) — codeapi now derives the sessionKey purely from `req.codeApiAuthContext.{ tenantId, userId }`, so the next chapter is replacing the header-asserted user identity with a verified-claim path.

* 🩹 fix: persist execute_code uploads under codeEnvRef metadata key

Codex review P1 (chatgpt-codex-connector). `Files/process.js` was storing the upload result under `metadata.fileIdentifier` even though:
- `uploadCodeEnvFile` now returns `{ storage_session_id, file_id }`, not the legacy magic string.
- The post-cutover schema (`File.metadata.codeEnvRef`) only declares `codeEnvRef` — mongoose strict mode silently strips unknown keys.
- All readers (`primeFiles`, `getCodeFilesByIds`, `categorizeFileForToolResources`, controller filtering) check `metadata.codeEnvRef`.

Net effect of the bug: chat-attached and agent-setup execute_code files would lose their sandbox reference on save, and `primeFiles` would skip them on subsequent code-execution turns — the file blob would still be available locally but never re-mounted in the sandbox.

Fix: construct the full `CodeEnvRef` (`{ kind, id, storage_session_id, file_id }`) at the write site and persist it under `metadata.codeEnvRef`. `BaseClient`'s "is this a code-env file" presence check accepts the new shape alongside the legacy `fileIdentifier` for back-compat with any pre-cutover records still in the database. Mirrors the same change in `processAttachments.spec.ts` (which re-implements the BaseClient logic for testability).

New regression tests in `process.spec.js` cover three cases:
- chat attachments (`messageAttachment=true`) → `kind: 'user'`
- agent setup (`messageAttachment=false`) → `kind: 'agent'`
- the legacy `fileIdentifier` key is NOT persisted (it would be schema-stripped)

* 🩹 fix: read storage_session_id on primed file refs (Codex P1)

Codex review (chatgpt-codex-connector).
After Phase B's per-file `session_id` → `storage_session_id` rename, `primeFiles` emits the new field — but `seedCodeFilesIntoSessions` was still reading `files[0].session_id` for the representative session and `f.session_id` for the dedupe key. In runs with only primed attachments (no skill seed), `representativeSessionId` was `undefined`, the function returned the map unchanged, and `seedCodeFilesIntoSessions` silently dropped the entire batch. The first `execute_code` call then started without `_injected_files` and the agent couldn't see prior-turn artifacts.

Fix:
- `codeFilesSession.ts`: read `f.storage_session_id` for both the dedupe key and the representative session id. JSDoc updated to match the new field name.
- `callbacks.js`: the two output-file persistence paths read `file.session_id` to pass to `processCodeOutput` — switched to `file.storage_session_id`. The original comment explicitly says this should be the STORAGE session, which is exactly the field Phase B renamed.
- `codeFilesSession.spec.ts`: the fixture builder uses `storage_session_id` and `kind: 'user'` to match the post-cutover `CodeEnvFile` shape.

Lockstep coordination: this matches the post-bump shape of `@librechat/agents` 3.1.79+. CI tsc errors against the currently-pinned 3.1.78 are expected and resolve when the dep bumps in this PR before merge.

* 📦 chore: Bump `@librechat/agents` to version 3.1.80-dev.0 in package-lock and package.json files

* 🪪 fix: thread kind/id/version through codeapi /download URLs (Phase C α)

Symmetric fix for the upload-side wire change in 537725a. Codeapi's `sessionAuth` middleware now requires `kind`/`id`/`version?` on every download/freshness URL — without them it 400s with "kind must be one of: skill, agent, user" before serving the file.

Three sites construct codeapi-side URLs that go through `sessionAuth`:
- `processCodeOutput` (`Files/Code/process.js`): `/download/<sess>/<id>` for freshly-generated sandbox outputs. Always `kind: 'user'` + `id: req.user.id` — code-output files are always user-private, regardless of which skill the run invoked.
- `getSessionInfo` (`Files/Code/process.js`): `/sessions/<sess>/objects/<id>` for the 23h freshness check. Pulls kind/id/version straight off the `codeEnvRef` already in scope — skill files stay skill-bucketed, user files stay user-bucketed.
- the `/code/download/:session_id/:fileId` LC route (`routes/files/files.js`): proxies to codeapi for manual downloads. Only code-output files travel this route, so `kind: 'user'` + `id: req.user.id`.

The `getCodeOutputDownloadStream` helper in `crud.js` now takes an `identity` param, validated by a `buildCodeEnvDownloadQuery` helper that mirrors `appendCodeEnvFileIdentity`'s shape rules: kind required from the closed `{skill, agent, user}` set, version required for 'skill' and forbidden otherwise. Bad callers fail fast on the client instead of round-tripping a 400.

Also cleans up two log-noise sources reported alongside the 400:
- `logAxiosError` in `packages/api/src/utils/axios.ts` was dumping `error.response.data` raw. With `responseType: 'arraybuffer'` that's a `Buffer` (~4 chars per byte after JSON serialization); with `responseType: 'stream'` it's a `Readable` whose internal state serializes the entire ring buffer + socket. A new `renderResponseData` decodes small buffers as UTF-8 (truncated past 2KB) and stubs streams as `'[stream]'`. Diagnostics stay useful; log lines stop being megabytes.
- The `/code/download` route's catch was a bare `logger.error('...', error)`, bypassing the redactor. Switched to `logAxiosError` so it benefits from the same buffer/stream handling.

Tests updated to match the new contract:
- crud.spec: `getCodeOutputDownloadStream` fixtures pass `userIdentity`; new cases cover skill identity (with version), bad-kind rejection, and skill-without-version rejection.
- process.spec: the `getSessionInfo` test passes a full `codeEnvRef` object.
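The validation rules the download-query helper enforces (closed kind set, version required for 'skill' and forbidden otherwise) amount to roughly the following sketch (simplified; the real `buildCodeEnvDownloadQuery` lives in the LC codebase and may differ in shape):

```javascript
// Hypothetical sketch of the client-side identity validation that
// mirrors codeapi's sessionAuth rules, failing fast before any request.
const CODE_ENV_KINDS = ['skill', 'agent', 'user'];

function buildDownloadQuery(identity) {
  const { kind, id, version } = identity ?? {};
  if (!CODE_ENV_KINDS.includes(kind)) {
    throw new Error(`kind must be one of: ${CODE_ENV_KINDS.join(', ')}`);
  }
  if (kind === 'skill' && version == null) {
    throw new Error('version is required for kind "skill"');
  }
  if (kind !== 'skill' && version != null) {
    throw new Error(`version is forbidden for kind "${kind}"`);
  }
  const params = new URLSearchParams({ kind, id });
  if (version != null) {
    params.set('version', String(version));
  }
  return params.toString();
}
```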
* ♻️ refactor: extract codeEnv identity helpers into packages/api

Per the project convention that new backend code lives in TypeScript under `packages/api`, moves `appendCodeEnvFileIdentity` and `buildCodeEnvDownloadQuery` from `api/server/services/Files/Code/crud.js` into a new `packages/api/src/files/code/identity.ts` module. Both helpers are pure validators that mirror codeapi's `parseUploadSessionKeyInput` server-side rules (closed kind set, `version` required for `'skill'` and forbidden otherwise) — they deserve TS support and a dedicated spec rather than living as JSDoc-typed helpers in the legacy `/api` workspace.

The new module:
- Exports a `CodeEnvIdentity` interface using the `librechat-data-provider` `CodeEnvKind` discriminated union.
- Adds 13 unit tests in `identity.spec.ts` covering the validation matrix (skill+version, agent, user, and every rejection path) plus URL encoding for the download query.
- Is re-exported from `packages/api/src/files/code/index.ts` alongside `classify`, `extract`, and `form`.

Consumer updates:
- `api/server/services/Files/Code/crud.js`: drops the local helpers and imports them from `@librechat/api`. Net -64 lines.
- `api/server/services/Files/Code/process.js`: same.
- Test mocks for `@librechat/api` in three spec files now stub the helpers' validation behavior locally rather than pulling them through `requireActual` (which would drag in provider-config init-time side effects). The package's `exports` field only surfaces the root barrel, so leaf imports aren't reachable from the legacy `/api` test setup.

No runtime behavior change. Identity validation rules and emitted form/query shapes are byte-for-byte identical pre/post.

* 🪪 fix: emit resource_id alongside id on _injected_files (skill 403 fix)

Companion to the codeapi #1455 fix and agents 3.1.80-dev.1 — the wire shape for shared-kind files now requires a `resource_id` distinct from the storage `id`.
Without this LC change, codeapi's sessionKey re-derivation on every shared-kind /exec rejects with 403 session_key_mismatch:

    cached:  legacy:skill:69dcf561...✌️59 (signed at upload, skill _id)
    derived: legacy:skill:ysPwEURuPk-...✌️59 (storage nanoid)

Emit sites updated:
- `primeInvokedSkills` cache-hit path: `resource_id: ref.id` (the persisted skill `_id` from `codeEnvRef.id`); `id: ref.file_id` unchanged (storage uuid).
- `primeInvokedSkills` fresh-upload path: `resource_id: skill._id.toString()` on every primed file (the `allPrimedFiles` builder type now carries the field).
- `processCodeOutput`'s `pushFile` (Code/process.js): `resource_id: ref.id` — for `kind: 'user'` this is informational (codeapi derives the sessionKey from auth context) but is emitted for shape uniformity with shared kinds.

Bumps `@librechat/agents` to `^3.1.80-dev.1` (the version that ships the matching `CodeEnvFile.resource_id` field).

## Test plan
- [x] `cd packages/api && npx jest src/agents` — 67 / 67 pass (skillFiles fixtures updated to assert `resource_id` on the emitted CodeSessionContext.files).
- [x] `cd api && npx jest server/services/Files server/controllers/agents` — 445 / 445 pass (process.spec fixtures updated for the reupload + cache-hit emission).
- [x] `npx tsc --noEmit -p packages/api/tsconfig.json` — clean.

* fix(skill-tool-call): carry resource_id through primeSkillFiles → artifact

Codeapi was 400ing every /exec following a `handle_skill` tool call with `resource_id is invalid` (`type: 'undefined'`). Both code paths in `primeSkillFiles` (cache-hit + fresh-upload) returned files without `resource_id`/`kind`/`version`, and the artifact in `handlers.ts` forwarded the stripped shape into `tc.codeSessionContext.files` → `_injected_files`. `primeInvokedSkills` (the NL-detected loader) had already been fixed end-to-end; this commit aligns the tool-invoked path with the same contract: `resource_id` = `skill._id.toString()`, `kind: 'skill'`, `version` = the skill's monotonic counter.
Tests added to `skillFiles.spec.ts` lock the contract on `primeSkillFiles` directly so future refactors can't silently drop the resource identity again.

* fix(handlers.spec): align session_id → storage_session_id rename + kind discriminator

Pre-existing TS errors against the post-rename `CodeEnvFile` shape: the test file still used `session_id` on per-file objects (renamed to `storage_session_id` in agents Phase B/C) and was missing the `kind` discriminator the discriminated union requires. Both the inputs and the matching `expect.toEqual(...)` mirrors are updated together so the runtime equality check still holds. Lines 723-732 stay as-is — they sit behind `as unknown as ToolCallRequest` and TS already skipped them.

* chore: fix `@librechat/agents`, correct version to 3.1.80-dev.0 in package.json files
* chore: bump `@librechat/agents` to version 3.1.80-dev.1 in package.json and package-lock.json
* chore: bump `@librechat/agents` to version 3.1.80-dev.2

* feat(observability): trace file priming chain from primeCodeFiles to _injected_files

Diagnosing the user-upload "files=[] on first /exec" bug requires seeing where in the LC chain a file ref disappears. Prior to this patch the chain (primeCodeFiles → primedCodeFiles → initialSessions → CodeSessionContext → _injected_files) was opaque end-to-end:
- primeCodeFiles silently dropped files without `metadata.codeEnvRef`
- reuploadFile catches all errors and continues with no signal
- the handlers.ts handoff to codeapi never logged what it was sending

After this patch, a single grep on `[primeCodeFiles]` plus `[code-env:inject]` shows the full per-file path:

    [primeCodeFiles] in: file_ids=N resourceFiles=M
    [primeCodeFiles] file=<id> path=skip reason=no-codeenvref filename=...
    [primeCodeFiles] file=<id> path=cache-hit-by-session storage_session_id=...
    [primeCodeFiles] file=<id> path=reupload reason=no-uploadtime ...
    [primeCodeFiles] file=<id> path=reupload reason=stale ...
    [primeCodeFiles] file=<id> path=reupload-success oldSession=... newSession=... newFileId=...
    [primeCodeFiles] file=<id> path=reupload-failed session=...
    [primeCodeFiles] file=<id> path=fresh-active storage_session_id=...
    [primeCodeFiles] out: returned=N skippedNoRef=M reuploadFailures=K
    [code-env:inject] tool=<name> files=N missingResourceId=K (debug)
    [code-env:inject] M/N files missing resource_id ... (warn)
    [code-env:inject] tool=<name> _injected_files=0 ... (warn)

The boundary log warns when LC sends zero injected files on a code-execution tool call — that's the user's actual symptom showing up on the LC side, instead of having to be correlated against codeapi's `Request received { files: [] }`. The tag is `[code-env:inject]` rather than `[handoff:exec]` to avoid collision with the app-level "handoff" semantic (subagent handoff workflow).

Structural cleanup in primeFiles: replaced the `if (ref) { ... }` nesting with an early `if (!ref) continue` so the per-path instrumentation hooks land at top-level scope instead of indented inside a conditional. Behavior unchanged; pushFile / reuploadFile identical.

Spec fixtures (handlers.spec.ts, codeFilesSession.spec.ts) updated to include `resource_id` on `CodeEnvFile` literals — required by the post-3.1.80-dev.2 type now installed.
## Test plan
- [x] `cd packages/api && npx jest src/agents/handlers.spec.ts src/agents/codeFilesSession.spec.ts src/agents/skillFiles.spec.ts` — 69/69 pass
- [x] `cd api && npx jest server/services/Files/Code/process.spec.js` — 84/84 pass
- [x] `npx tsc --noEmit -p packages/api` — clean
- [x] `npx eslint` on all four touched files — clean

* chore: add CONSOLE_JSON_STRING_LENGTH to .env.example for JSON log string length configuration

* fix(files): align codeapi upload filename with LC's sanitized DB filename

User-attached files for code execution were uploading to codeapi under `file.originalname` (the raw upload filename, which may contain spaces / special chars) while LC's DB record stored the sanitized form (`sanitizeFilename(file.originalname)`, underscores). Codeapi preserves whatever filename the upload sent, so the sandbox saw `/mnt/data/<originalname>` while LC's `primeFiles` toolContext text + `_injected_files.name` referenced `file.filename` (sanitized).

Visible failure: the agent gets a system prompt saying

    /mnt/data/librechat_code_api_-_active_customer_-_2025-11-05.xlsx

…tries that path, hits `FileNotFoundError`, then notices the sandbox's actual `Available files` line says

    /mnt/data/librechat code api - active customer - 2025-11-05.xlsx

…retries with spaces, and succeeds. This wastes a tool call per upload and leaks raw filenames into model context.

Fix: sanitize once and use the sanitized form in both the codeapi upload AND the LC DB record. Sandbox path = LC toolContext text = in-memory ref name. No drift. The reupload path (`Code/process.js` line 867, `filename: file.filename`) already uses the sanitized DB name, so it stays consistent with the fresh-upload path after this change.

## Test plan
- [x] `cd api && npx jest server/services/Files/process` — 32/32 pass
- [x] `npx eslint` on the touched file — clean

* chore: bump `@librechat/agents` to version 3.1.80-dev.3 in package.json and package-lock.json
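The filename fix above boils down to sanitizing once and using that single value everywhere. A minimal sketch, with a deliberately simplified stand-in for LC's `sanitizeFilename` (the real helper's rules may differ; the point is that upload and DB record must share one value):

```javascript
// Hypothetical, simplified sanitizer: collapse runs of characters other
// than word chars, '.', and '-' into a single underscore.
function sanitizeFilename(name) {
  return name.replace(/[^\w.-]+/g, '_');
}

// Sketch of the fix: one sanitized value feeds BOTH the codeapi upload
// (which decides the /mnt/data/ path) and the LC DB record (which feeds
// primeFiles toolContext and _injected_files.name). No drift possible.
function prepareUpload(file) {
  const filename = sanitizeFilename(file.originalname);
  return {
    uploadFilename: filename, // sent to codeapi → sandbox path
    dbFilename: filename,     // persisted on the LC File record
  };
}
```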
9c81792d25
🔐 feat: Add Signed CloudFront File Downloads (#12970)
* feat: add signed CloudFront downloads
* fix: preserve local IdP avatar paths
* fix: address signed download review findings
* fix: harden CloudFront cookie scope validation
* fix: preserve URL save API compatibility
* fix: store CDN SSO avatars under shared prefix
* fix: Harden CloudFront tenant file access
* fix: Preserve CloudFront download compatibility
* fix: Address CloudFront review follow-ups
* fix: Preserve file URL fallback user paths
* fix: Address download review hardening
* fix: Use file owner for S3 RAG cleanup
* fix: Address final download review nits
* fix: Clear stale avatar CloudFront cookies
* fix: Align download filename helpers with dev
* fix: Address final CloudFront review follow-ups
* fix: Stream S3 URL uploads
* fix: Set S3 stream upload length
* fix: Preserve download metadata filepath
* fix: Avoid remote content length for stream uploads
* fix: Use bounded multipart URL uploads
* fix: Harden S3 filename boundaries
1b79e0b785
🧬 chore: Align LibreChat With Agents LangChain Upgrade (#12922)
* 🔧 chore: Update dependencies in package-lock.json and package.json
  - Bump version of @librechat/agents to 3.1.75-dev.0 in multiple package.json files.
  - Upgrade various AWS SDK and Smithy dependencies to their latest versions in package-lock.json for improved stability and performance.
* 🔧 chore: Update AWS SDK and Smithy dependencies in package-lock.json
  - Bump version of @aws-sdk/client-bedrock-runtime to 3.1041.0 and update related dependencies for improved performance and stability.
  - Upgrade various AWS SDK and Smithy packages to their latest versions, ensuring compatibility and enhanced functionality.
* chore: Align LibreChat with agents LangChain upgrade
  - Route LangChain imports through @librechat/agents facade exports
  - Update @librechat/agents to 3.1.75-dev.1 and remove direct LangChain deps
  - Normalize nullable agent model params and API key override typing
  - Update Google thinking config typing for newer LangChain packages
  - Refresh targeted audit-related dependency overrides
* chore: Add Jest types for API specs
* test: Fix LangChain upgrade CI specs
* test: Exercise agents env facade
* fix: Clean up TS preview diagnostics
* fix: Address Codex review feedback
f3e1201ae7
📌 fix: Stabilize Agent Prompt Cache Prefix (#12907)
* fix: stabilize agent prompt cache prefix
* chore: refresh agents sdk lockfile integrity
* test: format agent memory assertion
* test: type agent context fixtures
* fix: preserve MCP instruction precedence
* fix: reuse resolved conversation anchor
* fix: keep resumable startup immediate
24e29aa8cb
🌱 fix: Inject Code-Tool Files Into Graph Sessions on First Call (+ read_file Sandbox Fallback) (#12831)
* 🌱 fix: Seed Code Tool Files Into Graph Sessions on First Call
Files attached to an agent's `tool_resources.execute_code` (user uploads
or generated artifacts from a prior turn) were silently dropped on the
first `execute_code` invocation of a turn. The agents-side `ToolNode`
populates `_injected_files` only when its `sessions` map already has an
`EXECUTE_CODE` entry — but that entry is only written by a previous
successful execution, so call #1 had nothing to inject. CodeExecutor
then fell back to a `/files/{session_id}` fetch, but `session_id` was
also empty on call #1, leaving the sandbox without the primed files.
Mirror the existing skill-priming pattern (`primeInvokedSkills` →
`initialSessions`) for code-resource files: eagerly call `primeFiles`
before `createRun` and merge the result into `initialSessions` via a
new `seedCodeFilesIntoSessions` helper. Skill files and code-resource
files now share the same `EXECUTE_CODE` entry; the prior representative
`session_id` is preserved on merge.
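The merge described above can be sketched roughly as follows (field names follow the message; the real `seedCodeFilesIntoSessions` may differ in shape):

```javascript
// Hypothetical sketch of seedCodeFilesIntoSessions: merge primed
// code-resource files into the shared EXECUTE_CODE seed entry,
// deduping by file id and preserving the prior representative session.
function seedCodeFilesIntoSessions(initialSessions, primedFiles) {
  if (!primedFiles?.length) {
    return initialSessions;
  }
  const existing = initialSessions.EXECUTE_CODE ?? { files: [] };
  const seen = new Set(existing.files.map((f) => f.file_id));
  const merged = [...existing.files];
  for (const file of primedFiles) {
    if (!seen.has(file.file_id)) {
      seen.add(file.file_id);
      merged.push(file);
    }
  }
  return {
    ...initialSessions,
    EXECUTE_CODE: {
      // keep the prior representative session id when one exists
      session_id: existing.session_id ?? primedFiles[0].storage_session_id,
      files: merged,
    },
  };
}
```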
* 🔬 chore: Add Diagnostic Logging for Code-Files Seeding
Temporary debug logs to diagnose why first-call file injection is not
firing in real agent runs. Logs `wantsCodeExec`, available tool-resource
keys, primed file count, and the seeded EXECUTE_CODE entry. Will revert
once the failure mode is identified.
* 🪛 refactor: Capture primedCodeFiles per-agent at init, merge across run
Replace the client.js eager `primeFiles` call with a per-agent capture at
initialization time so every agent in a multi-agent run (primary +
handoff + addedConvo) contributes its `tool_resources.execute_code`
files to the shared `Graph.sessions` seed.
- handleTools.js (eager loadTools): the `execute_code` factory closes
over a `primedCodeFiles` slot and surfaces it in the return.
- ToolService.js loadToolDefinitionsWrapper (event-driven): captures
`files` from the existing `primeCodeFiles` call (was dropping them
while only keeping `toolContext`) and surfaces them.
- packages/api initialize.ts: the loadTools callback contract now
includes `primedCodeFiles`, threaded onto `InitializedAgent`.
- client.js: iterate `[primary, ...agentConfigs.values()]` and merge
each agent's `primedCodeFiles` into `initialSessions`. Drop the
primary-only `primeCodeFiles` call and diagnostic logs from the prior
attempt — wrong layer (single-agent), wrong gate (`agent.tools`
contained Tool instances after init, so the `.includes("execute_code")`
string check always failed).
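The per-agent merge in client.js described above can be sketched as (names hypothetical):

```javascript
// Hypothetical sketch: every agent in the run (primary + handoff +
// addedConvo) contributes its primedCodeFiles to one shared seed list.
function collectPrimedCodeFiles(primaryAgent, agentConfigs) {
  const allFiles = [];
  for (const agent of [primaryAgent, ...agentConfigs.values()]) {
    if (Array.isArray(agent.primedCodeFiles)) {
      allFiles.push(...agent.primedCodeFiles);
    }
  }
  return allFiles;
}
```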
* 🔬 chore: Add per-agent diagnostic logs for code-files seeding
Logs `tool_resources` keys + file counts inside loadToolDefinitionsWrapper
and per-agent `primedCodeFiles` + final initialSessions inside
AgentClient. Will revert once the failure mode is confirmed.
* 🔬 chore: Add file-lookup diagnostics inside initializeAgent
Logs the inputs and intermediate counts of the conversation-file lookup
chain (convo file ids, thread message ids, code-generated and
user-code file counts) so we can pinpoint why `tool_resources.execute_code`
is arriving empty at `loadToolDefinitionsWrapper` despite the agent
having `execute_code` in its tools list.
* 🔬 chore: Probe execute_code files without messageId filter
Adds a relaxed `getFiles({conversationId, context: execute_code})` probe
that runs only when `getCodeGeneratedFiles` returns empty. Lists what's
actually in the DB for this conversation so we can confirm whether the
file is missing entirely or whether the messageId filter is rejecting it.
* 🔬 chore: Fix probe getFiles arg order (sort vs projection)
Probe was passing a projection object as the sort arg, which mongoose
rejected with `Invalid sort value`. Move it to the third arg
(selectFields) so the probe actually runs.
* 🪢 fix: Preserve Original messageId on Code-Output File Update
Each `processCodeOutput` call was overwriting the persisted file's
`messageId` with the *current* run's id. When a turn re-creates an
existing file (filename + conversationId match → `claimCodeFile`
returns the existing record, `isUpdate=true`), the file's link to
the assistant message that originally produced it gets clobbered.
`initializeAgent` later runs `getCodeGeneratedFiles({messageId: $in: <thread>})`
to seed `tool_resources.execute_code` from prior-turn artifacts. With a
stale `messageId` (e.g. from a failed read attempt that re-shelled the
same filename), the file no longer matches the parent-walk thread, so
`tool_resources` arrives empty at agent init, the new
`primedCodeFiles` channel has nothing to seed, and the LLM can't see
its own prior-turn artifacts on the next turn — defeating the
just-added Graph-sessions seeding fix.
Preserve the existing `claimed.messageId` on update; first-creation
behavior is unchanged. The runtime return value still includes the
current run's `messageId` (via `Object.assign(file, { messageId })`)
so the artifact is correctly attributed to the live tool_call.
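The preserve-on-update rule above amounts to choosing two different ids: one for the DB write, one for the runtime return. A minimal sketch (helper name hypothetical):

```javascript
// Hypothetical sketch of the messageId fix: updates keep the persisted
// messageId (the message that originally produced the file), while the
// runtime return value is still attributed to the live tool_call.
function resolveCodeOutputMessageIds(claimed, isUpdate, currentMessageId) {
  // value written to the DB: updates preserve the original producer's id
  const persistedMessageId = isUpdate ? claimed.messageId : currentMessageId;
  // value returned to the live run: always the current turn's message
  return { persistedMessageId, runtimeMessageId: currentMessageId };
}
```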
* 🧹 chore: Remove diagnostic logs from code-files seeding path
Drops the temporary debug logs added to trace the empty-tool_resources
failure mode. Production code paths (loadToolDefinitionsWrapper,
client.js seed loop, initializeAgent file lookup) are left as the
permanent shape: capture primedCodeFiles, merge across agents, seed
initialSessions before run start.
* 🪛 feat: read_file Sandbox Fallback for /mnt/data + Non-Skill Paths
When the model called `read_file` with a code-execution path (e.g.
`/mnt/data/sentinel.txt`), the handler returned a misleading
`Use format: {skillName}/{path}` error. Adds a sandbox-aware fallback:
- Short-circuit `/mnt/data/...` (can never be a skill reference) →
route to a sandbox `cat` via the new host-provided `readSandboxFile`
callback, which POSTs to the codeapi `/exec` endpoint.
- Skip the skill resolver entirely when `accessibleSkillIds` is empty
— the resolved-output of `resolveAgentScopedSkillIds` already
collapses the admin capability + ephemeral badge + persisted
`skills_enabled` chain, so an empty value is the authoritative
"skills aren't in scope for this agent" signal.
- For `{firstSegment}/...` paths, consult the catalog-derived
`activeSkillNames` Set (no DB read) to detect non-skill names and
fall through to the sandbox before the model has to retry with
`bash_tool`.
`activeSkillNames` is captured from `injectSkillCatalog`, threaded onto
`InitializedAgent`, into `agentToolContexts`, then through
`enrichWithSkillConfigurable` into `mergedConfigurable` for the handler.
The host implementation of `readSandboxFile` lives in
`api/server/services/Files/Code/process.js` and shells `cat <path>`
through the seeded sandbox session — `tc.codeSessionContext`
(emitted by ToolNode for `read_file` calls in `@librechat/agents`
v3.1.72+) provides the `session_id` + `_injected_files` so the read
lands in the same sandbox that holds prior-turn artifacts. When the
seeded context isn't available (older agents version, no codeapi
configured), the handler returns a model-visible error pointing at
`bash_tool` instead of silently failing.
Tests: 8 new `handleReadFileCall` cases cover the new short-circuits,
the skills-not-enabled gate, the activeSkillNames lookup, the
sandbox-fallback success path, and the bash_tool retry hint on
fallback failure. Existing `read_file` tests now opt into "skills are
in scope" via a `skillsInScope()` fixture (production wouldn't reach
the skill lookup with empty `accessibleSkillIds`).
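The dispatch order described above can be sketched like this (`routeReadFile` is a hypothetical illustration mirroring the prose, not the real handler signature):

```javascript
// Hedged sketch of the read_file routing described above (assumed names).
function routeReadFile(filePath, { accessibleSkillIds, activeSkillNames }) {
  if (filePath.startsWith('/mnt/data/')) {
    return 'sandbox'; // can never be a skill reference
  }
  if (accessibleSkillIds.length === 0) {
    return 'sandbox'; // skills aren't in scope for this agent at all
  }
  const firstSegment = filePath.split('/')[0];
  if (!activeSkillNames.has(firstSegment)) {
    return 'sandbox'; // non-skill first segment: fall through, no retry needed
  }
  return 'skill'; // {skillName}/{path} resolves against the catalog
}
```

Each `'sandbox'` branch corresponds to routing through `readSandboxFile`; only the final branch consults the skill resolver.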
* 🔧 chore: Update @librechat/agents dependency to version 3.1.72
Bumps the version of the @librechat/agents package across package-lock.json and relevant package.json files to ensure compatibility with the latest features and fixes.
* 🪛 refactor: Centralize Tool-Session Seed in buildInitialToolSessions Helper
Addresses review feedback on the per-agent merge in client.js:
- **Run-wide semantics, named explicitly.** The merge into a single
`Graph.sessions[EXECUTE_CODE]` was a deliberate match to the
agents-library design (`Graph.sessions` is shared across every
`ToolNode` in the run), but the inline `for (const a of agents)`
loop in `AgentClient.chatCompletion` made it look per-agent. Move
the logic to a TS helper `buildInitialToolSessions` that documents
the run-wide-by-design contract in one place. The CJS controller
now contains a single call site, no business logic.
- **Subagent walk (P2).** The previous loop only iterated
`[primary, ...agentConfigs.values()]`. Pure subagents are pruned
out of `agentConfigs` after init and retained on each parent's
`subagentAgentConfigs`, so their primed code files were silently
dropped from the seed. The helper now walks recursively, with a
visited-Set keyed on object identity that terminates safely on a
malformed agent graph (cycle).
- **`jest.setup.cjs` polyfill for undici `File`.** Reviewer hit
`ReferenceError: File is not defined` running the targeted spec on
WSL — a known Node 18 issue where `globalThis.File` from
`node:buffer` isn't auto-exposed. Polyfill it inside a Jest setup
file so the suite boots regardless of Node patch version.
Helper test coverage (8 new): skill-only / agent-only / both,
recursive subagent walk, cycle-safe walk, primary+subagent
deduplication, undefined/null entries in the agents iterable, and
representative session_id preservation across the merge.
16 tests pass total in `codeFilesSession.spec.ts` (8 prior + 8 new).
No behavior change vs. the previous commit for the existing
primary+agentConfigs case — subagent inclusion is the only new
behavior, and it matches what the existing seeding logic would have
done if subagents had been in `agentConfigs`.
* 🪛 fix: FIFO Walk Order in buildInitialToolSessions (P3 review)
The traversal used `Array.pop()` (LIFO), which visited the LAST
top-level agent first. The docstring says "primary first"; the code
contradicted it. When no skill seed exists the first-visited agent's
first file supplies the representative `session_id` written to
`Graph.sessions[EXECUTE_CODE]` — so a LIFO walk silently flipped which
agent it came from. `ToolNode` ultimately uses per-file `session_id`s
for runtime injection (so behavior was indistinguishable for current
callers), but the discrepancy was a footgun for any future consumer
that read the representative.
Switch to FIFO via `Array.shift()` to match both the docstring and the
existing `loadSubagentsFor` walk pattern in
`Endpoints/agents/initialize.js`. Add a regression test that asserts
the primary's `session_id` is the representative (and that all three
agents' files still contribute, with per-file `session_id`s preserved).
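A minimal sketch of the FIFO, cycle-safe walk, assuming the field names `primedCodeFiles` and `subagentAgentConfigs` from the prose (not the real `buildInitialToolSessions` signature):

```javascript
// Sketch of the FIFO walk with an object-identity visited Set, so a
// malformed agent graph (cycle) still terminates.
function collectPrimedCodeFiles(agents) {
  const visited = new Set();
  const files = [];
  const queue = [...agents];
  while (queue.length > 0) {
    const agent = queue.shift(); // FIFO: the primary (first entry) wins
    if (agent == null || visited.has(agent)) continue;
    visited.add(agent);
    files.push(...(agent.primedCodeFiles ?? []));
    for (const sub of agent.subagentAgentConfigs ?? []) queue.push(sub);
  }
  // The first collected file supplies the representative session_id.
  return { files, representativeSessionId: files[0]?.session_id };
}
```

With `Array.pop()` in place of `shift()`, the last top-level agent would supply the representative, which is exactly the docstring mismatch the commit fixes.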
* 🔬 test: Lock In Code-Files Bug Fixes Per Comprehensive Review
Addresses MAJOR + MINOR + NIT findings from the multi-pass review:
**Finding #4 (MINOR) — empty relativePath misses sandbox fallback.**
A model calling `read_file("output/")` where "output" isn't a skill
name dead-ended with `Missing file path after skill name` instead of
being routed to the sandbox like every other malformed-path branch.
Add the same `codeEnvAvailable → handleSandboxFileFallback` pattern,
plus two regression tests.
**Finding #7 (NIT) — duplicate `skillsInScope()` helper.**
Hoist the identical helper out of two nested describe blocks to
module scope. Single source of truth.
**Finding #1 (MAJOR) — `persistedMessageId` had zero test coverage.**
The fix preserves a file's original `messageId` on update so
`getCodeGeneratedFiles` can still match it on subsequent turns. A
regression in the `isUpdate ? (claimed.messageId ?? messageId) : messageId`
ternary would silently re-introduce the original cross-turn priming
bug. Five new tests cover:
- UPDATE preserves `claimed.messageId` in the persisted record
- UPDATE falls back to current run id when `claimed.messageId` is
absent (legacy records predating the field)
- CREATE uses current run id (no claimed record exists)
- The runtime return value uses the LIVE id (artifact attribution)
even when the persisted record kept the original
- The image branch follows the same contract (would silently regress
if the ternary diverged across the two file-build branches)
The tests use a `snapshotCreateFileArgs()` helper because
`processCodeOutput` mutates the file object after `createFile`
returns (`Object.assign(file, { messageId, toolCallId })`) and a
naive `createFile.mock.calls[0][0]` would reflect the post-mutation
state instead of what was actually persisted.
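The by-reference pitfall that motivates the snapshot helper can be shown with a hand-rolled mock (no jest dependency; the real `snapshotCreateFileArgs()` is analogous): a mock records the argument object itself, so a post-call `Object.assign` rewrites the "recorded" call.

```javascript
// Hand-rolled mock that records both the raw reference and a
// call-time shallow copy, demonstrating the pitfall described above.
function makeSnapshottingMock() {
  const rawCalls = [];
  const snapshots = [];
  const fn = (arg) => {
    rawCalls.push(arg); // stored by reference: later mutation shows up here
    snapshots.push({ ...arg }); // shallow copy taken at call time
  };
  fn.rawCalls = rawCalls;
  fn.snapshots = snapshots;
  return fn;
}

const createFile = makeSnapshottingMock();
const file = { messageId: 'original-turn-id' };
createFile(file);
Object.assign(file, { messageId: 'live-run-id' }); // post-persist mutation
```

A naive `createFile.mock.calls[0][0]` read in jest behaves like `rawCalls` here.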
**Finding #2 (MAJOR) — `readSandboxFile` had no direct tests.**
The model-controlled `file_path` flows through a POSIX single-quote
escape into a shell `cat` command, making this a security boundary.
A quoting regression would let a malicious filename break out of the
quoted argument and inject arbitrary shell. 20 new tests across:
- Shell quoting (7): plain filenames, embedded `'`, `$()`, backticks,
newlines, shell metachars, multiple consecutive single-quotes
- Payload shape (6): /exec URL, bash language, conditional
session_id / files inclusion, dedicated keepAlive:false agents
- Response handling (6): `{content}` on success, null on missing
base URL or absent stdout, throws on stderr-only, partial-success
returns stdout, transport errors are logged then rethrown
- Timeout (1): matches processCodeOutput's 15s SLA
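The quoting technique under test is the classic POSIX single-quote escape: close the quote, emit an escaped literal quote, reopen. Everything else (`$`, backticks, backslashes, newlines) is inert inside single quotes. A sketch (illustrative helper name; the exact implementation in `process.js` isn't reproduced here):

```javascript
// Classic POSIX single-quote escaping: ' becomes '\'' and the whole
// value is wrapped in single quotes, so the shell sees one argument.
function posixSingleQuote(value) {
  return `'${String(value).replace(/'/g, `'\\''`)}'`;
}

// A hostile model-supplied path stays a single inert argument:
const cmd = `cat ${posixSingleQuote("/mnt/data/it's-file.txt")}`;
```

A regression that dropped the `'\''` rewrite would let a filename containing `'; rm -rf ~'` break out of the quoted argument, which is why the spec treats this as a security boundary.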
Audited findings #5 (acknowledged tech debt — readSandboxFile in JS
workspace), #6 (pre-existing positional-args debt on
enrichWithSkillConfigurable), and #8 (cosmetic JSDoc style) — no
action taken per the reviewer's own assessment.
Audited finding #3 (walk order vs docstring) — already addressed in
commit
|
|
|
|
35bf04b26c
🧰 refactor: Unify code-execution tools (#12767)
* 🛠️ feat: Add registerCodeExecutionTools helper

Idempotently registers `bash_tool` + `read_file` in the run's tool
registry and tool-definition list via a registry `.has()` dedupe. Sets
up the single code-execution tool path shared by:
- `initializeAgent` (when an agent has `execute_code` in its tools and
  the capability is enabled for the run)
- `injectSkillCatalog` (when skills are active; unconditional
  read_file, bash_tool follows `codeEnvAvailable`)

Both callers reach the helper in the same initialization sequence, so
the second call becomes a no-op and exactly one copy of each tool
reaches the LLM — no more double registration for agents that combine
`execute_code` capability with active skills.

Unit-tested on a fresh run, idempotence (second call, overlap with
prior tooldefs, partial overlap), and the no-registry variant.

* 🔀 refactor: Route injectSkillCatalog bash_tool + read_file through registerCodeExecutionTools

The `skill` tool is still registered inline (it's skill-path-specific),
but `bash_tool` + `read_file` now flow through the shared idempotent
helper so a prior registration from the execute_code path doesn't
produce a duplicate copy later in the same run.

Behavior preserved:
- `read_file` always registers when any active skill is in scope —
  manually-primed `disable-model-invocation: true` skills still need it
  to load `references/*` from storage.
- `bash_tool` follows `codeEnvAvailable` exactly as before.

Adds a test pinning the cross-call dedupe: when `injectSkillCatalog`
runs AFTER `registerCodeExecutionTools` has already seeded the registry
and tool definitions with bash_tool/read_file, the resulting
`toolDefinitions` still contains exactly one copy of each.
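The `.has()`-dedupe contract the two commits above share can be sketched as (assumed shapes, not the real helper, which also handles a registry-less variant):

```javascript
// Idempotent registration sketch: a Map-backed registry dedupes on
// tool name, so a second caller in the same run becomes a no-op and
// exactly one copy of each tool definition reaches the LLM.
function registerCodeExecutionTools(registry, toolDefinitions, defs) {
  const missing = defs.filter((def) => !registry.has(def.name));
  if (missing.length === 0) {
    return toolDefinitions; // full no-op: return the input by reference
  }
  for (const def of missing) registry.set(def.name, def);
  return [...toolDefinitions, ...missing];
}
```

In the common sequence, the `initializeAgent` call registers both tools and the later `injectSkillCatalog` call hits the no-op path.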
* 🪄 feat: Expand `execute_code` tool name into bash_tool + read_file at initialize-time

When an agent's `tools` include `execute_code` and the `execute_code`
capability is enabled for the run, `initializeAgent` now registers
`bash_tool` + `read_file` via `registerCodeExecutionTools` before
`injectSkillCatalog`. The legacy `execute_code` tool definition is no
longer handed to the LLM — `execute_code` remains on the agent document
as a capability-trigger marker, but the runtime expands it into the
skill-flavored tool pair.

Call ordering matters: the `execute_code` registration runs BEFORE
`injectSkillCatalog`, so the skill path's own
`registerCodeExecutionTools` call inside `injectSkillCatalog` becomes a
no-op via the registry's `.has()` check. Exactly one copy of each tool
reaches the LLM whether the agent has:
- only `execute_code` (legacy path)
- only skills
- both

No data migration needed — `agent.tools: ['execute_code']` stays in the
DB unchanged; the expansion is a runtime operation.

Three tests cover the matrix: execute_code + capability on → bash_tool
+ read_file registered; execute_code + capability off → neither
registered; no execute_code + capability on → neither registered.

* 🗑️ refactor: Drop CodeExecutionToolDefinition from the builtin registry

Removes the legacy `execute_code` entry from `agentToolDefinitions` and
the corresponding import. With the initialize-time expansion in place,
nothing consults `getToolDefinition('execute_code')` for a tool schema
any more — the capability gate still filters on the string
`execute_code`, but the actual tool definitions the LLM sees come from
`registerCodeExecutionTools` (i.e. `bash_tool` + `read_file`).

`loadToolDefinitions` in `packages/api/src/tools/definitions.ts`
silently drops `execute_code` when it no longer resolves in the
registry — that's the expected path and is now covered by an updated
test. No caller of `getToolDefinition('execute_code')` expects a
non-undefined result after this change.
* 🔌 refactor: Read CODE_API_KEY from env for primeCodeFiles + PTC

Finishes the Phase 4 server-env-keyed rollout on the two remaining
`loadAuthValues({ authFields: [EnvVar.CODE_API_KEY] })` sites in
`ToolService.js`:
- `primeCodeFiles` (user-attached file priming on execute_code agents)
- Programmatic Tool Calling (`createProgrammaticToolCallingTool`)

Both now read `process.env[EnvVar.CODE_API_KEY]` directly, matching
`bash_tool`'s pattern. The per-user plugin-auth path is no longer
consulted for code-env credentials anywhere in the hot path — the
agents library owns the actual tool-call execution and also reads the
env var internally.

Priming still fires for existing user-file workflows so the legacy
`toolContextMap[execute_code]` hint ("files available at
/mnt/data/...") stays in the prompt; only the key lookup changed.

* 🔧 fix: Type the pre-seeded dedupe-test tools as LCTool

CI TypeScript type checks caught `{ parameters: {} }` in the new
cross-call dedupe test: `LCTool.parameters` is a `JsonSchemaType`, not
`{}`. Use `{ type: 'object', properties: {} }` and type the local
registry Map through the parameter-derived shape so the pre-seeded
values match what `toolRegistry.set` expects.

* 🛡️ fix: Run execute_code expansion before GOOGLE_TOOL_CONFLICT gate

Codex review caught a latent regression: the original Phase 8 placement
ran `registerCodeExecutionTools` after `hasAgentTools` was computed, so
an execute-code-only agent on Google/Vertex with provider-specific
`options.tools` populated would no longer trip `GOOGLE_TOOL_CONFLICT` —
the legacy `CodeExecutionToolDefinition` used to populate
`toolDefinitions` before the guard, but after dropping it from the
registry, `toolDefinitions` stayed empty until the expansion ran
downstream of the guard. Mixed provider + agent tools would silently
flow through to the LLM.

The fix moves the `execute_code` expansion to BEFORE the
`hasAgentTools` computation.
`bash_tool` + `read_file` now contribute to the check the same way the
legacy `execute_code` def did. Covered by a new test that pins the
Google + execute_code + provider-tools scenario — the
`rejects.toThrow(/google_tool_conflict/)` path would have silently
passed on the prior placement.

* 🔗 fix: Thread codeEnvAvailable through handoff sub-agents

Round-2 codex review caught the other half of the execute_code
expansion gap: `discoverConnectedAgents` omitted `codeEnvAvailable`
from its forwarded `initializeAgent` params, so handoff sub-agents with
`agent.tools: ['execute_code']` lost the `bash_tool` + `read_file`
registration (pre-Phase 8 the legacy `CodeExecutionToolDefinition`
would have landed in their `toolDefinitions` via the registry).

- Add `codeEnvAvailable?` to `DiscoverConnectedAgentsParams` and
  forward it verbatim on every sub-agent `initializeAgent` call.
- Update the three JS call sites that construct the primary's
  `codeEnvAvailable` (`services/Endpoints/agents/initialize.js`,
  `controllers/agents/openai.js`, `controllers/agents/responses.js`) to
  pass the same flag into `discoverConnectedAgents` — one authoritative
  source per request.
- Two regression tests in `discovery.spec.ts` pin the true/false
  passthrough so a future refactor that drops the param forwarding
  surfaces immediately.

Left intentionally unchanged:
`packages/api/src/agents/openai/service.ts` (public API helper with no
in-repo caller). External consumers of `createAgentChatCompletion` who
want code execution should pass a `codeEnvAvailable`-aware
`initializeAgent` via `deps` — documenting the full public-API surface
is out of scope for this Phase 8 PR.

* 🔗 fix: Thread codeEnvAvailable through addedConvo + memory-agent paths

Round-3 codex review caught the last two production `initializeAgent`
callers missing the Phase-8 capability flag:
- `api/server/services/Endpoints/agents/addedConvo.js` (multi-convo
  parallel agent execution).
  Added `codeEnvAvailable` to `processAddedConvo`'s destructured params
  and forwarded it into the per-added-agent `initializeAgent` call. The
  caller in `api/server/services/Endpoints/agents/initialize.js` passes
  the same `codeEnvAvailable` it computed for the primary.
- `api/server/controllers/agents/client.js` (`useMemory` — memory
  extraction agent). Computes its own `codeEnvAvailable` from
  `appConfig?.endpoints?.[EModelEndpoint.agents]?.capabilities` and
  forwards it into `initializeAgent`. Memory agents rarely list
  `execute_code`, but if one does, pre-Phase 8 they got the legacy
  `execute_code` tool registered unconditionally — the passthrough
  restores parity.

With this, every production caller of `initializeAgent` explicitly
resolves the capability: main chat flow (primary + handoff), OpenAI
chat completions (primary + handoff), Responses API (primary +
handoff), added-convo parallel agents, and memory agents. The one
remaining caller,
`packages/api/src/agents/openai/service.ts::createAgentChatCompletion`,
is a public API helper with no in-repo consumer (external callers must
pass a capability-aware `initializeAgent` via `deps`).

* 🪤 fix: Remove duplicate appConfig declaration causing TDZ ReferenceError

The Responses API controller had TWO `const appConfig = req.config;`
bindings inside `createResponse`: one at the top of the function (added
by the Phase 4 `bash_tool` decouple) and one inside the try block
(added by the polish PR #12760). Because `const` is block-scoped with a
temporal dead zone, the inner redeclaration put `appConfig` in TDZ for
the entire try block, so any earlier reference inside the try — notably
`appConfig?.endpoints?.[EModelEndpoint.agents]?.allowedProviders` at
line 348 — threw `ReferenceError: Cannot access 'appConfig' before
initialization`. The error was silently swallowed by the outer
try/catch, leaving `recordCollectedUsage` unreached and the six
`responses.unit.spec.js` token-usage tests failing.
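The TDZ behavior behind that failure reproduces in a few lines (hypothetical function name, shapes simplified): the inner `const` shadows the outer binding for the whole block, so an earlier reference in that block throws even though an outer binding exists.

```javascript
// Minimal TDZ reproduction: the inner const below owns the name for
// the entire try block, so the earlier access throws.
const appConfig = { endpoints: {} };

function createResponseLike() {
  try {
    void appConfig.endpoints; // ReferenceError: TDZ of the const below
    const appConfig = { endpoints: {} }; // duplicate inner declaration
    return 'reached';
  } catch (e) {
    return e.constructor.name; // what a swallowing catch would hide
  }
}
```

Deleting the inner declaration makes the early reference resolve to the outer function-scoped binding, which is exactly the fix the commit describes.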
Removing the inner redeclaration fixes the six failing tests (verified:
11/11 pass locally post-fix, 0 regressions elsewhere). The outer
function-scoped binding already provides `appConfig` to every
downstream reference.

* 🔗 fix: Thread codeEnvAvailable through the OpenAI chat-completion public API

Round-4 codex review (legitimate on the type-safety angle, even though
the runtime concern was already covered): the
`createAgentChatCompletion` helper defines its own narrower
`InitializeAgentParams` interface locally, and the type was missing
`codeEnvAvailable`. External consumers who supply a capability-aware
`deps.initializeAgent` couldn't route `codeEnvAvailable` through
without a type-cast workaround.

- Widen the local `InitializeAgentParams` interface to include
  `codeEnvAvailable?: boolean` (matches the real
  `packages/api/src/agents/initialize.ts` type).
- Derive `codeEnvAvailable` inside `createAgentChatCompletion` from
  `deps.appConfig?.endpoints?.agents?.capabilities` (the same source
  the in-repo controllers use) and forward to `deps.initializeAgent`.
  Uses a string-literal `'execute_code'` lookup so this file stays free
  of a `librechat-data-provider` import — keeping the dependency
  surface of the public helper minimal.

With this, external consumers of `createAgentChatCompletion` who pass
`appConfig` with the agents capabilities get `bash_tool` + `read_file`
registration automatically; consumers who don't pass `appConfig` retain
the existing "explicit opt-in" semantics (the flag stays `undefined`,
expansion is skipped).

* 🧹 chore: Review-driven polish — observability log, JSDoc DRY, test gaps, no-op allocation

Addresses the comprehensive review of PR #12767:
- **Finding #1** (MINOR, observability): `initializeAgent` now emits a
  debug log when an agent lists `execute_code` in its tools but the
  runtime gate is off (`params.codeEnvAvailable` !== true).
  The event-driven `loadToolDefinitionsWrapper` path doesn't log
  capability-disabled warnings, so without this the tool silently
  vanishes from the LLM's definitions with zero trace. Operators
  debugging "why isn't code interpreter working?" now get a signal at
  the initialize layer.
- **Finding #5** (NIT, allocation): `registerCodeExecutionTools` now
  returns the input `toolDefinitions` array by reference on the no-op
  path (both tools already registered by a prior caller in the same
  run) instead of allocating a fresh spread array every time. The
  common dual-call scenario — `initializeAgent` then
  `injectSkillCatalog` — saves one O(n) copy per request.
- **Finding #4** (NIT, DRY): Collapsed the duplicated 6-line JSDoc
  comment in `openai.js`, `responses.js`, and `addedConvo.js` into
  either a one-line
  `@see DiscoverConnectedAgentsParams.codeEnvAvailable` pointer (the
  two JS call sites) or a compact 3-line block referring back to the
  canonical source (addedConvo's @param).
- **Finding #2** (MINOR, test gap): Added
  `api/server/services/Endpoints/agents/addedConvo.spec.js` with three
  cases covering `codeEnvAvailable=true`, `codeEnvAvailable=false`, and
  omitted (undefined) passthrough. A future refactor that drops the
  param from destructuring now surfaces here instead of silently
  regressing multi-convo parallel agents with `execute_code`.
- **Finding #3** (MINOR, test gap): Added
  `api/server/controllers/agents/__tests__/client.memory.spec.js`
  pinning the capability-flag derivation that `AgentClient::useMemory`
  uses — six cases covering present/absent/null/undefined config shapes
  plus an enum-literal pin (`'execute_code'` / `'agents'`). Catches
  enum renames or config-path shifts that would otherwise silently
  strip `bash_tool` + `read_file` from memory agents.
Finding #7 (jest.mock scoping, confidence 40) left as-is: the
reviewer's own risk assessment noted `buildToolSet` doesn't touch the
mocked exports, and restructuring a file-level `jest.mock` to
`jest.doMock` + dynamic `import()` introduces more complexity than the
speculative risk justifies. The existing mock is scoped to the test
file and contains the same stubs the adjacent `skills.test.ts` already
uses.

Finding #6 (PR description commit count) addressed out-of-band via PR
description update.

All existing tests pass, typecheck clean, lint clean across touched
files. New tests: 9 cases across 2 new spec files.

* 🧽 refactor: Replace hardcoded 'execute_code' string with AgentCapabilities enum in service.ts

A follow-up review (conf 55) caught that `openai/service.ts`'s Phase 8
`codeEnvAvailable` derivation used the literal `'execute_code'` while
every in-repo controller uses `AgentCapabilities.execute_code` from
`librechat-data-provider`. The file deliberately uses local type
interfaces to keep the public API helper's type surface small, but that
pattern was never a ban on single-value imports from the data provider
— `packages/api` already depends on it. Importing the enum value means
a future rename of `AgentCapabilities.execute_code` propagates to this
file automatically, matching the in-repo controllers' behavior.

Other follow-up findings left as-is per the reviewer's own verdict:
- #2 (memory spec mirrors the production expression rather than calling
  `AgentClient::useMemory` directly): reviewer flagged as "not
  blocking" / "design-philosophy observation." The test file's JSDoc
  already explicitly documents the tradeoff and pins the enum literals
  to catch the most likely drift vector. Standing up `AgentClient` plus
  all its mocks for a one-line regression guard is disproportionate.
- #3 (`addedConvo.spec.js` mock signature vs. underlying
  `loadAddedAgent` arity): the reviewer's own confidence 25 noted the
  mock matches the wrapper's actual call pattern in the production
  file.
  Not a real gap.
- #4 was self-retracted as a false alarm.

* 🗑️ refactor: Fully deprecate CODE_API_KEY — remove all LibreChat-side references

The code-execution sandbox no longer authenticates via a per-run
`CODE_API_KEY` (frontend or backend). Auth moved server-side into the
agents library / sandbox service, so LibreChat drops every reference:

**Backend plumbing:**
- `api/server/services/Files/Code/crud.js`:
  `getCodeOutputDownloadStream`, `uploadCodeEnvFile`,
  `batchUploadCodeEnvFiles` no longer accept `apiKey` or send the
  `X-API-Key` header.
- `api/server/services/Files/Code/process.js`: `processCodeOutput`,
  `getSessionInfo`, `primeFiles` drop the `apiKey` param throughout.
- `api/server/services/ToolService.js`: stop reading
  `process.env[EnvVar.CODE_API_KEY]` for `primeCodeFiles` and PTC; the
  agents library handles auth internally. Remove the now-dead
  `loadAuthValues` + `EnvVar` imports. Drop the misleading
  "LIBRECHAT_CODE_API_KEY" hint from the bash_tool error log.
- `api/server/services/Files/process.js`: remove the `loadAuthValues`
  call around `uploadCodeEnvFile`.
- `api/server/routes/files/files.js`: code-env file download no longer
  fetches a per-user key.
- `api/server/controllers/tools.js`: `execute_code` is no longer a tool
  that needs verifyToolAuth with `[EnvVar.CODE_API_KEY]` — the endpoint
  always reports system-authenticated so the client skips the key-entry
  dialog. `processCodeOutput` is called without `apiKey`.
- `api/server/controllers/agents/callbacks.js`: `processCodeOutput`
  invoked without the loadAuthValues round trip, for both LegacyHandler
  and Responses-API handlers.
- `api/app/clients/tools/util/handleTools.js`:
  `createCodeExecutionTool` called with just `user_id` + files.

**packages/api:**
- `packages/api/src/agents/skillFiles.ts`: `PrimeSkillFilesParams`,
  `PrimeInvokedSkillsDeps`, `primeSkillFiles`, `primeInvokedSkills` all
  drop the `apiKey` param; the gate is purely `codeEnvAvailable`.
- `packages/api/src/agents/handlers.ts`: `handleSkillToolCall` drops
  the `process.env[EnvVar.CODE_API_KEY]` read; skill-file priming is
  now gated solely on `codeEnvAvailable`. `ToolExecuteOptions`
  signatures drop apiKey from `batchUploadCodeEnvFiles` and
  `getSessionInfo`.
- `packages/api/src/agents/skillConfigurable.ts`: JSDoc no longer
  references the env var.
- `packages/api/src/tools/classification.ts`: PTC creation no longer
  gated on `loadAuthValues`; `buildToolClassification` drops the
  `loadAuthValues` dep entirely (no LibreChat-side callers need it for
  this path anymore).
- `packages/api/src/tools/definitions.ts`: `LoadToolDefinitionsDeps`
  drops the `loadAuthValues` field.

**Frontend:**
- Delete `client/src/hooks/Plugins/useAuthCodeTool.ts`,
  `useCodeApiKeyForm.ts`, and
  `client/src/components/SidePanel/Agents/Code/ApiKeyDialog.tsx` — the
  install/revoke dialogs for CODE_API_KEY are fully dead.
- `BadgeRowContext.tsx`: drop `codeApiKeyForm` from the context type
  and provider. `codeInterpreter` toggle treated as always
  authenticated (sandbox auth is server-side).
- `ToolsDropdown.tsx`, `ToolDialogs.tsx`, `CodeInterpreter.tsx`,
  `RunCode.tsx`, `SidePanel/Agents/Code/Action.tsx` + `Form.tsx`: all
  API-key dialog trigger refs, "Configure code interpreter" gear
  buttons, and auth-verification plumbing removed. The "Code
  Interpreter" toggle is now a plain `AgentCapabilities.execute_code`
  checkbox — no key-entry gate.
- `client/src/locales/en/translation.json`: drop the three
  `com_ui_librechat_code_api*` keys and
  `com_ui_add_code_interpreter_api_key`. Other locales are externally
  automated per CLAUDE.md.

**Config:**
- `.env.example`: remove the `# LIBRECHAT_CODE_API_KEY=your-key`
  section and its header.

**Tests:**
- `crud.spec.js`: assertions flipped to pin "no X-API-Key header" and
  "no apiKey param".
- `skillFiles.spec.ts`: removed env-var save/restore; tests now pin
  that the batch-upload path is gated solely on `codeEnvAvailable` and
  that no apiKey is threaded through.
- `handlers.spec.ts`: same — just the `codeEnvAvailable` gate pins
  remain.
- `classification.spec.ts`: remove the two tests that asserted
  `loadAuthValues` was (not) called for PTC.
- `definitions.spec.ts`: drop every
  `loadAuthValues: mockLoadAuthValues` entry from the deps shape.
- `process.spec.js`: strip the mock of `EnvVar.CODE_API_KEY`.

**Comment hygiene:**
- `tools.ts`, `initialize.ts`, `registry/definitions.ts`: shortened
  stale comment references to "legacy `execute_code` tool" without
  naming the retired env var.

Tests verified: 678 packages/api tests pass, 836 backend api tests
pass. Typecheck clean, lint clean.

The only remaining CODE_API_KEY mentions in the code are two
regression-guard assertions:
- `crud.spec.js`: pins that no `X-API-Key` header is sent.
- `skillConfigurable.spec.ts`: pins that `configurable` never grows a
  `codeApiKey` field.

* 🧹 chore: Remove the last two CODE_API_KEY name mentions in LibreChat

Follow-up to the prior full-deprecation commit: two tests still named
the retired identifier in their regression-guard assertions.
- `packages/api/src/agents/skillConfigurable.spec.ts`: drop the "does
  not inject a codeApiKey key" test. The `codeApiKey` field is gone
  from the production configurable shape, so an absence-assertion
  naming it re-introduces the retired identifier in code.
- `api/server/services/Files/Code/crud.spec.js`: rename the "without an
  X-API-Key header" case back to "should request stream response from
  the correct URL" and drop the
  `expect(headers).not.toHaveProperty('X-API-Key')` assertion. The
  surrounding request-shape checks (URL, timeout, responseType) still
  pin the behavior; the explicit header-absence line was named after
  the deprecated contract.

Result: `grep -rn "CODE_API_KEY\|codeApiKey\|LIBRECHAT_CODE_API_KEY"`
against the LibreChat source tree returns zero hits.
The only remaining `X-API-Key` strings in this repo are on unrelated
OpenAPI Action + MCP server auth configurations, where the string is
user-facing config, not a LibreChat-owned identifier.

Tests: 677 packages/api pass (2 pre-existing summarization e2e failures
unrelated); 126 api-workspace controller/service tests pass. Typecheck
and lint clean.

* 🎯 fix: Narrow codeEnvAvailable to per-agent (admin cap AND agent.tools)

Before this commit, `codeEnvAvailable` was computed in the three JS
controllers as the admin-level capability flag only
(`enabledCapabilities.has(AgentCapabilities.execute_code)`) and passed
through `initializeAgent` → `injectSkillCatalog` / `primeInvokedSkills`
/ `enrichWithSkillConfigurable` unchanged. A skills-only agent whose
`tools` array didn't include `execute_code` still got `bash_tool`
registered (via `injectSkillCatalog`) and skill files re-primed to the
sandbox on every turn — wrong, because the agent never opted in to code
execution.

**Fix:** `initializeAgent` now computes the per-agent effective value
once as
`params.codeEnvAvailable === true && agent.tools.includes(Tools.execute_code)`
and reuses the same boolean for:
1. The `execute_code` → `bash_tool + read_file` expansion gate
   (previously already consulted `agent.tools`; now shares the single
   `effectiveCodeEnvAvailable` binding).
2. The `injectSkillCatalog` call (previously got the raw admin flag).
3. The returned `InitializedAgent.codeEnvAvailable` field (new, typed
   as a required boolean).

**Controllers (initialize.js, openai.js, responses.js):** store
`primaryConfig.codeEnvAvailable` in
`agentToolContexts.set(primaryId, ...)`, capture
`config.codeEnvAvailable` in every handoff `onAgentInitialized`
callback, and read it from the per-agent ctx inside the
`toolExecuteOptions.loadTools` runtime closure. The hoisted
`const codeEnvAvailable = enabledCapabilities.has(...)` locals in the
two OpenAI-compat controllers are gone — they were shadowing the
narrowed per-agent value.
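The narrowing formula reduces to a one-line predicate, sketched here standalone (the string literal `'execute_code'` stands in for the `Tools.execute_code` enum value):

```javascript
// Per-agent narrowing: admin capability AND the agent's own opt-in.
const effectiveCodeEnvAvailable = (adminCapabilityOn, agent) =>
  adminCapabilityOn === true && (agent.tools ?? []).includes('execute_code');
```

A skills-only agent (no `execute_code` in `tools`) therefore stays `false` even when the admin capability is globally on, which is the behavior change the commit describes.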
**primeInvokedSkills:** `handlePrimeInvokedSkills` in
`services/Endpoints/agents/initialize.js` now uses
`primaryConfig.codeEnvAvailable` (per-agent, narrowed) instead of the
raw admin flag. A skills-only primary agent won't re-prime historical
skill files to the sandbox even when the admin enabled the capability
globally.

**Efficiency:** one extra `&&` in `initializeAgent`. No runtime
hot-path cost — the `includes()` scan on `agent.tools` was already
happening for the `execute_code` expansion gate; it's now just bound to
a local. Tool-execution closures read `ctx.codeEnvAvailable === true`
(property access + strict equality, O(1)).

**Ephemeral-agent note:** per-agent narrowing is authoritative for both
persisted and ephemeral flows. The ephemeral toggle
(`ephemeralAgent.execute_code`) is reconciled into `agent.tools`
upstream in `packages/api/src/agents/added.ts`, so
`agent.tools.includes('execute_code')` is the single source of truth by
the time `initializeAgent` runs.

**Tests:** two new regression tests pin the narrowing contract:
- `initialize.test.ts` — four-quadrant matrix on
  `InitializedAgent.codeEnvAvailable` (cap on × agent asks, cap on ×
  doesn't ask, cap off × asks, neither). Catches future refactors that
  drop either half of the AND.
- `skills.test.ts` — `injectSkillCatalog` with
  `codeEnvAvailable: false` against an active skill catalog must NOT
  register `bash_tool` even though it still registers `read_file` +
  `skill`. This is the state a skills-only agent gets post-narrowing.

All 191 affected packages/api tests pass + 836 backend api tests pass.
Typecheck clean, lint clean.

* 🧽 refactor: Comprehensive-review polish — hoist tool defs, pin verifyToolAuth contract, doc appConfig

Addresses the comprehensive review of Phase 8.
Findings mapped:

**#1 (MINOR): `verifyToolAuth` unconditional auth for execute_code**

- Added doc comment explicitly stating the deployment contract (admin capability → reachable sandbox; no per-check health probe to keep UI-gate queries O(1)).
- New `api/server/controllers/__tests__/tools.verifyToolAuth.spec.js` with 4 regression tests pinning the contract:
  1. `authenticated: true` + `SYSTEM_DEFINED` for execute_code.
  2. 404 for unknown tool IDs.
  3. `loadAuthValues` is never consulted (catches a future revert that would resurface the per-user key-entry dialog).
  4. Response `message` is never `USER_PROVIDED`.

**#2 (MINOR): `openai/service.ts` undocumented `appConfig` dependency**

- Expanded the `ChatCompletionDependencies.appConfig` JSDoc to spell out that omitting it silently disables code execution for agents with `execute_code` in their tools. External consumers of `createAgentChatCompletion` now have the contract documented at the type boundary.

**#5 (NIT): `registerCodeExecutionTools` re-allocates tool defs**

- Hoisted `READ_FILE_DEF` and `BASH_TOOL_DEF` to module-level `Object.freeze`d constants. The shapes derive entirely from static `@librechat/agents` exports, so a single frozen object per tool is safe to share across every agent init. Eliminates the ~4-property allocations on every call (including the common second-call no-op path).

**#6 (NIT): Verbose history-priming comment in initialize.js**

- Trimmed the 16-line `handlePrimeInvokedSkills` block to a 5-line summary with `@see InitializedAgent.codeEnvAvailable` pointer. The canonical narrowing explanation lives on the type; the controller comment is just the ACL-vs-capability rationale.

**Skipped:**

- #3 (memory spec tests a mirror function): reviewer self-dismissed as a design tradeoff; the enum-literal pin already catches the highest-risk drift vector.
- #4 (cross-repo contract for `createCodeExecutionTool`): user will explicitly install the latest `@librechat/agents` dev version once the companion PR publishes, so the version pin will be authoritative.
- #7 (migration/deprecation note for self-hosters): out of scope per user direction — release notes handle this.

Tests verified: 679 packages/api + 840 backend api tests pass. Typecheck + lint clean.

* 🔧 chore: Update @librechat/agents version to 3.1.68-dev.1 across package-lock and package.json files

This commit updates the version of the `@librechat/agents` package from `3.1.68-dev.0` to `3.1.68-dev.1` in the `package-lock.json` and relevant `package.json` files. This change ensures consistency across the project and incorporates any updates or fixes from the new version. |
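The hoist in finding #5 follows a common pattern: static definitions become frozen module-level constants shared across calls. A sketch under assumed shapes — `ToolDef`, the two definition bodies, and the `Map`-based registry are all illustrative, not the real `@librechat/agents` exports:

```typescript
// Illustrative sketch of hoisting static tool definitions to frozen
// module-level constants instead of re-allocating them on every call.
interface ToolDef {
  name: string;
  description: string;
}

// One frozen object per tool, allocated once at module load. Freezing
// guards against accidental per-agent mutation of the shared defs.
const READ_FILE_DEF: Readonly<ToolDef> = Object.freeze({
  name: 'read_file',
  description: 'Read a file from the sandbox',
});

const BASH_TOOL_DEF: Readonly<ToolDef> = Object.freeze({
  name: 'bash_tool',
  description: 'Execute a shell command in the sandbox',
});

function registerCodeExecutionTools(registry: Map<string, ToolDef>): void {
  // Every agent init shares the same two objects; no per-call allocation.
  registry.set(READ_FILE_DEF.name, READ_FILE_DEF);
  registry.set(BASH_TOOL_DEF.name, BASH_TOOL_DEF);
}
```

Sharing is safe here precisely because the shapes derive from static exports; a definition computed per-agent could not be hoisted this way.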
||
|
|
91cd3f7b7c |
🧽 refactor: Skills polish: precedence-aware body validation, controller drop logs, SkillPills rename (#12760)
Post-merge sanity-review cleanup on top of #12746:

- `createSkill` / `updateSkill` now parse SKILL.md body's always-apply status once and reuse it for both validation and derivation (was parsing the same YAML block twice per call).
- Body-inline `always-apply:` validation becomes precedence-aware: a caller sending an explicit top-level `alwaysApply` or a structured `frontmatter['always-apply']` no longer gets rejected for a typo in the body — the body value is never consulted at derivation time when a higher-precedence source wins. New tests cover the three relevant interactions (explicit+body-typo, frontmatter+body-typo, body-only typo still rejects).
- OpenAI and Responses controllers now emit a `logger.warn` when `injectSkillPrimes` drops always-apply primes to stay under `MAX_PRIMED_SKILLS_PER_TURN`. `injectSkillPrimes` already logs internally; the controller-level warn adds endpoint context so operators can identify which path hit the cap at a glance. Mirrors AgentClient's existing log.
- Rename `ManualSkillPills` → `SkillPills` (component + type + file + test + all JSDoc references). The component handles both manual and always-apply pills now; the original name was carried over from the manual-only Phase 3 and misleads new readers.
- Drive-by fix: declare `appConfig = req.config` at the top of `createResponse` in `responses.js` — it was used unqualified on lines 381/396, which silently evaluated to `undefined` (via optional chaining) and disabled the skills-capability check on the Responses endpoint. Pre-existing, surfaced by lint on the touched file. |
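The precedence-aware derivation above can be sketched as follows. Names and the `BodyParse` discriminated union are assumptions for illustration, not the real LibreChat helpers — the point is that the body is parsed once and only validated when no higher-precedence source is present:

```typescript
// Hedged sketch of the three-source precedence:
// explicit top-level alwaysApply > structured frontmatter['always-apply']
// > body-inline `always-apply:`. A typo in the body only rejects when the
// body is actually the winning source.
type BodyParse =
  | { kind: 'absent' }
  | { kind: 'invalid' } // e.g. a typo like `always-apply: ture`
  | { kind: 'value'; value: boolean };

function deriveAlwaysApply(
  explicit: boolean | undefined,
  frontmatter: boolean | undefined,
  body: BodyParse,
): boolean {
  if (explicit !== undefined) {
    return explicit; // highest precedence: explicit top-level field wins
  }
  if (frontmatter !== undefined) {
    return frontmatter; // structured frontmatter beats the body-inline key
  }
  // Only now is the body consulted — and only now does a typo reject.
  if (body.kind === 'invalid') {
    throw new Error('invalid always-apply value in SKILL.md body');
  }
  return body.kind === 'value' ? body.value : false;
}
```

This matches the three tested interactions: explicit+body-typo and frontmatter+body-typo both succeed, while a body-only typo still rejects.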
||
|
|
dfc3dfa57f |
📍 feat: always-apply frontmatter: auto-prime skills every turn (#12746)
* 🔁 refactor: Rebase always-apply work onto merged structured-frontmatter columns

Phase 6 (disable-model-invocation / user-invocable / allowed-tools) landed first on feat/agent-skills. Reconcile this branch with the new mainline:

- Thread alwaysApplySkillPrimes through unionPrimeAllowedTools alongside manualSkillPrimes, applying the combined MAX_PRIMED_SKILLS_PER_TURN ceiling before loading tools.
- Add `_id` to ResolvedAlwaysApplySkill to match Phase 6's ResolvedManualSkill shape (read_file name-collision protection).
- Register 'always-apply' in ALLOWED_FRONTMATTER_KEYS / FRONTMATTER_KIND so Phase 6's validator recognizes it.
- Drop frontmatter from the listSkillsByAccess projection; the backfill helper remains as defensive code but its read path is no longer exercised on summary rows (no legacy rows exist — the branch never shipped), saving ~200KB per page.
- Retire the corresponding "backfills legacy on summaries" test.
- Plumb listAlwaysApplySkills through the JS controllers + endpoint initializer so the always-apply resolver sees a real DB method.

* 🧹 fix: Dedupe manual/always-apply overlap, share YAML util, tidy comments

Addresses review findings:

- Cross-list dedup: when a user $-invokes a skill that is also marked always-apply, the always-apply copy is now dropped so the same SKILL.md body never primes twice in one turn. Manual wins (explicit intent, closer to the user message). Dedup runs in both initializeAgent (so persisted user-bubble pills stay in sync) and injectSkillPrimes (defense-in-depth at splice time). New test cases cover single-overlap, partial-overlap, and dedup-before-cap.
- DRY: extract stripYamlTrailingComment to packages/data-schemas/src/utils/yaml.ts; packages/api/src/skills/import.ts now imports the shared helper. Also drop the redundant inner stripYamlTrailingComment call inside parseBooleanScalar — the call site already strips.
- Mark injectManualSkillPrimes as @deprecated in favor of injectSkillPrimes (kept for external consumers of @librechat/api).
- Document SKILL_TRIGGER_MODEL as forward-looking plumbing for the model-invoked path rather than leaving it as a bare unused export.
- Replace the stale "frontmatter is included" comment on listSkillsByAccess with an accurate explanation of why it was intentionally excluded.

* 🔒 fix: Include always-apply primes in skillPrimedIdsByName + clear alwaysApply on body opt-out

Two bugs flagged by Codex review:

P1 (read_file): `manualSkillPrimedIdsByName` only carried manual-invocation primes, so an always-apply skill with `disable-model-invocation: true` was blocked from reading its own bundled files, and same-name collisions could resolve to a different doc than the one whose body got primed.

- Rename `buildManualSkillPrimedIdsByName` → `buildSkillPrimedIdsByName` (accepts both manual + always-apply prime arrays).
- Rename the configurable field `manualSkillPrimedIdsByName` → `skillPrimedIdsByName` throughout the plumbing (skillConfigurable.ts, handlers.ts, CJS callers, tests).
- Overlap resolution: manual wins on the rare edge case where the same name appears in both arrays (upstream dedup should prevent this, but defensive merging treats manual as authoritative).
- New tests: (1) gate-relaxation fires for always-apply primes, (2) `_id` pinning works for always-apply same-name collisions.

P2 (updateSkill): when a body update had no `always-apply:` key, `extractAlwaysApplyFromBody` returned `absent` and the column was left untouched. A skill that was once `alwaysApply: true` would keep auto-priming even after its SKILL.md no longer declared the flag.

- Treat `absent` as a positive "not always-apply" declaration when the body is explicitly submitted; flip the column to `false`.
- Explicit top-level `alwaysApply` still wins (three-source precedence unchanged).
- New tests: body removes key → false, body has no frontmatter at all → false, explicit + body-without-key → explicit wins.

* 🧵 refactor: Collapse duplicate prime types + tighten parse + test hygiene

Sanity-check review follow-ups:

- Collapse `ResolvedManualSkill` / `ResolvedAlwaysApplySkill` into a single `ResolvedSkillPrime` canonical interface with two backward-compatible type aliases. Both resolvers feed the same pipeline stages (injectSkillPrimes, unionPrimeAllowedTools, buildSkillPrimedIdsByName); the per-source distinction lives on `additional_kwargs.trigger`, not on the resolver output.
- Move the `always-apply` branch in `parseFrontmatter` to operate on the raw post-colon text. The outer `unquoteYaml` was fine today because it's idempotent on non-quoted strings, but running it twice (once per line, once after stripping the inline comment) would be fragile if the unquoter ever grows richer YAML-escape handling.
- Add the missing `alwaysApplyDedupedFromManual: 0` field to the `injectSkillPrimes` mocks in `openai.spec.js` and `responses.unit.spec.js` so they match the full `InjectSkillPrimesResult` contract.
- Insert the blank line between the `unionPrimeAllowedTools` and `resolveAlwaysApplySkills` describe blocks.

* 🔧 fix(tsc): Cast mock.calls via `unknown` for strict tuple destructure

`getSkillByName.mock.calls[0]` is typed as `[]` by jest's generic default; a direct cast to `[string, ..., ...]` fails TS2352 under `--noEmit` even though the runtime shape matches. Go through `as unknown as [...]` like the earlier test in the same file so CI's type-check step stays green.

* 🪢 fix: Propagate skillPrimedIdsByName into handoff agent tool context

Handoff agents go through the same `initializeAgent` flow as the primary (with `listAlwaysApplySkills` now plumbed), so they resolve their own `manualSkillPrimes` and `alwaysApplySkillPrimes` — but the `agentToolContexts.set(...)` for handoff agents didn't carry `skillPrimedIdsByName` into the per-agent context.
That meant `handleReadFileCall` fell back to the full ACL set + a `prefer*` flag for handoff agents: same-name collisions could resolve to a different doc than the one whose body got primed, and a `disable-model-invocation: true` skill primed via manual `$` or always-apply inside the handoff flow would be blocked from reading its own bundled files.

Build the map via `buildSkillPrimedIdsByName(config.manualSkillPrimes, config.alwaysApplySkillPrimes)` for every handoff tool context so `read_file` behaves identically across primary and handoff agents. |
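The manual-wins merge described for `buildSkillPrimedIdsByName` can be sketched as follows. The `SkillPrime` shape is a reduced stand-in for the real resolver output; only the collision semantics are the point:

```typescript
// Illustrative sketch of merging manual and always-apply primes into a
// name → _id map. Manual entries win on same-name collisions (explicit
// user intent is authoritative); upstream dedup should prevent overlap,
// but the merge is defensive regardless.
interface SkillPrime {
  name: string;
  _id: string;
}

function buildSkillPrimedIdsByName(
  manualPrimes: SkillPrime[] = [],
  alwaysApplyPrimes: SkillPrime[] = [],
): Record<string, string> {
  const byName: Record<string, string> = {};
  // Write always-apply first, then manual: later writes overwrite
  // earlier ones, so manual is the winner on any collision.
  for (const prime of alwaysApplyPrimes) {
    byName[prime.name] = prime._id;
  }
  for (const prime of manualPrimes) {
    byName[prime.name] = prime._id;
  }
  return byName;
}
```

Feeding the same map to both the primary and every handoff tool context is what makes `read_file` resolve the exact doc whose body was primed, regardless of which agent is executing.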
||
|
|
539c4c7e4d |
🎬 feat: Prime Manually-Invoked Skills via $ Popover (#12709)
* 🎬 feat: Prime Manually-Invoked Skills via $ Popover

Lands the backend for manual skill invocation, making the $ popover deterministically prime SKILL.md before the LLM turn instead of leaving the model to discover the skill via the catalog. Flow: popover drains pendingManualSkillsByConvoId on submit, attaches names to the ask payload, controllers forward to initializeAgent, and initialize resolves each name to its body (ACL + active-state filtered, reusing the same rules as catalog injection). AgentClient splices the primes as meta HumanMessages before the user's current message.

- Extract primeManualSkill / resolveManualSkills in packages/api/src/agents/skills.ts and reuse primeManualSkill inside handleSkillToolCall for a single shape source.
- Thread manualSkills + getSkillByName through InitializeAgentParams / DbMethods and all three initializeAgent call sites (initialize.js, responses.js, openai.js).
- Splice HumanMessage primes in client.js chatCompletion after formatAgentMessages, shifting indexTokenCountMap so hydrate still fills fresh positions correctly.
- Carry isMeta / source / skillName in additional_kwargs for downstream filtering.

* 🛡️ fix: Scope manual skill primes to single-agent + cap resolver input

Two follow-ups to the Phase 3 priming path flagged in Codex review.

Multi-agent runs: skip the splice when agentConfigs is non-empty. `initialMessages` is shared across every agent in `createRun`, so splicing a skill body there would bypass Phase 1's per-agent `scopeSkillIds` contract — a handoff / added-convo agent with a different skill scope would see content its configuration excludes. Warn + skip is the minimal correct behavior; lifting this to per-agent initial state is a follow-up.

Input bounding: `resolveManualSkills` now truncates to `MAX_MANUAL_SKILLS` (10) after dedup, with a warn listing the dropped tail.
Controllers only validate `Array.isArray(req.body.manualSkills)`, so a crafted payload could otherwise fan out into an unbounded `Promise.all` of concurrent `getSkillByName` DB lookups. The cap lives in the resolver so every caller (including future `always-apply` in Phase 5) inherits it.

* 🧪 refactor: Testable Helpers + Payload Validation for Manual Skill Primes

Follow-ups from the comprehensive review. No behavior change for the happy path — these are architectural and defensive improvements that shrink the JS surface in /api, tighten the request-body contract, and cover the delicate splice logic with proper unit tests.

- Extract `injectManualSkillPrimes` into packages/api/src/agents/skills.ts so the message-array splice and `indexTokenCountMap` shift are unit-testable in TS. client.js now calls the helper. Tests pin the `>=` vs `>` boundary condition — a regression here would silently corrupt token accounting for every message after the insertion point.
- Extract `extractManualSkills(body)` and use in all three controllers (initialize.js, responses.js, openai.js). Replaces copy-pasted `Array.isArray(...) ? ... : undefined` with a helper that also filters non-string / empty elements — closes a type-safety gap where a crafted payload like `{"manualSkills": [123, {"$gt":""}]}` would otherwise reach `getSkillByName` and waste DB round-trips.
- Rename `primeManualSkill` → `buildSkillPrimeMessage`. The helper serves three invocation modes (`$` popover, `always-apply`, model-invoked); the old name misled readers coming from `handleSkillToolCall`.
- Add `loadable.state === 'hasValue'` guard in `drainPendingManualSkills` — defensive, since the atom has a synchronous `[]` default, but the previous `.contents` cast would have been unsound under loading/error.
- Document why `resolveManualSkills` honors the active-state filter even for explicit `$` selections (Phase 2 popover filter + API-direct hardening).
- Remove stray `void Types;` in initialize.test.ts — `Types` is already consumed elsewhere in that test.

* 🔖 refactor: Single source for the skill-message source marker

Export `SKILL_MESSAGE_SOURCE = 'skill'` and use it in both construction paths that stamp skill-primed messages — `buildSkillPrimeMessage` (for the model-invoked tool path) and `injectManualSkillPrimes` (for the user-invoked splice path). Downstream filtering and telemetry read this marker, so the two paths must agree; keeping the literal in one place removes the risk of them drifting when Phase 5's `always-apply` adds a third caller.

* ♻️ refactor: Drop Multi-Agent Guard + Review Polish

- Remove the multi-agent skip in `AgentClient.chatCompletion`. Leaking primes to handoff / added-convo agents via shared `initialMessages` is the agents SDK's concern to scope; this layer should just inject and let the graph handle agent-scoped state. The guard was well-intended but produced a silent-drop UX where `$skill` in a multi-agent run did nothing.
- Bound the `[resolveManualSkills] Truncating ...` warn output to the first 5 dropped names plus a count suffix. A malicious payload of 1000 names was previously spilling all ~990 names into the log line.
- Remove dead `?? []` from the `hasValue`-guarded loadable read in `drainPendingManualSkills` — the atom always yields a string[] when resolved, so the nullish fallback was unreachable.
- Reorder skills.ts imports to follow the style guide: value imports shortest-to-longest (`data-schemas` → `langchain/core/messages` → multi-line `@librechat/agents`), type imports longest-to-shortest.

* 🧠 fix: Strip Skill Primes from Memory Window + Unbreak CI Mocks

Two fixes after the last push.

CI unbreak: `responses.unit.spec.js` and `openai.spec.js` mock `@librechat/api` and the mock didn't expose the new `extractManualSkills` symbol, so every test in those files crashed before reaching the `recordCollectedUsage` assertion.
Added `extractManualSkills: jest.fn()` returning `undefined` to both mocks; the controllers now no-op on manualSkills as the tests expect.

Codex P2: `runMemory` passes `messages` straight through to the memory processor, so after the splice in `injectManualSkillPrimes`, SKILL.md bodies ride along as if they were real user chat. That pollutes memory extraction with synthetic instruction content and crowds out real turns from the window.

- Export `isSkillPrimeMessage(msg)` from `packages/api/src/agents/skills.ts` — a predicate keyed on the shared `SKILL_MESSAGE_SOURCE` marker.
- Filter `chatMessages = messages.filter(m => !isSkillPrimeMessage(m))` at the top of `runMemory` before the window-sizing logic. Keeps the primes visible to the LLM (they still ride in `initialMessages`) but invisible to the memory layer.
- 5 new tests for the predicate covering marker-present, plain messages, different source, non-object inputs, and array filter integration.

* 📜 feat: Show Skill-Loaded Cards for Manually-Invoked Skills

The $ popover was priming SKILL.md bodies into the turn but leaving no visible trace on the assistant response — from the user's view it looked like the `$name ` cosmetic text did nothing. Now each manually-invoked skill renders the same "Skill X loaded" tool-call card that model-invoked skills already produce via PR #12684's SkillCall renderer.

Approach: post-run prepend to `this.contentParts`. The aggregator owns per-step indices during the run, so pre-seeding collides; waiting until `await runAgents(...)` returns lets the graph settle before synthetic parts slot in at the front.

- Export `buildSkillPrimeContentParts(primes, { runId })` from `packages/api/src/agents/skills.ts`. Returns completed tool_call parts (`progress: 1`, args JSON-encoded with `{skillName}`, output matching the model-invoked path's wording) that the existing `SkillCall.tsx` renderer draws identically.
- In `AgentClient.chatCompletion`, prepend the built parts to `this.contentParts` immediately after `await runAgents`. Persistence and the final-event reconcile come for free — `sendCompletion` already reads `this.contentParts` verbatim.
- Card ordering: skills appear first in the assistant message, reflecting that priming ran before the LLM's turn. Live-during-streaming cards are a separate follow-up — the graph's index-based aggregator makes that a bigger lift and this change delivers the core UX win without fighting the stream ordering.

6 new unit tests covering part shape, args JSON contract, output text, unique IDs, empty input, and startOffset ID differentiation.

* ⚡ feat: Emit Optimistic Skill Cards + Wire Primes in OpenAI/Responses

Two follow-ups from testing.

Optimistic card emit: the main chat path was only showing "Skill X loaded" cards at final-reconcile time, so the user saw nothing happen until the stream finished. Now emit synthetic ON_RUN_STEP + ON_RUN_STEP_COMPLETED events right before `runAgents` starts — same pattern the MCP OAuth flow uses in `ToolService` — so the cards appear immediately. The graph's content at index 0 may overwrite them during streaming, but the post-run `contentParts` prepend (unchanged) restores them on final reconcile.

OpenAI + Responses parity: both controllers were resolving `manualSkillPrimes` via `initializeAgent` but never injecting them into `formattedMessages` before the run. Manual invocation silently did nothing on `/v1/chat/completions` and the Responses API path. Now both call `injectManualSkillPrimes` on the formatted messages so the model sees SKILL.md bodies on every path. LibreChat-style card SSE events don't apply to these OpenAI-shaped responses, so the live-emit is chat-path-only.

- Export `buildSkillPrimeStepEvents(primes, { runId })` from `packages/api/src/agents/skills.ts`.
  Uses `Constants.USE_PRELIM_RESPONSE_MESSAGE_ID` by default so the frontend maps events to the in-flight preliminary response message, matching the OAuth emitter.
- In `AgentClient.chatCompletion`, emit via `sendEvent` (or `GenerationJobManager.emitChunk` in resumable mode) after `injectManualSkillPrimes` runs, before the LLM turn begins.
- Wire `injectManualSkillPrimes` into `openai.js` + `responses.js` after `formatAgentMessages`. Refactored the destructure to `let` on `indexTokenCountMap` so the injector's returned map is usable.
- 8 new unit tests covering the step-event builder: pair cardinality, default/custom runId, TOOL_CALLS shape + JSON args, progress:1 on completion, index ordering, stepId/toolCallId pairing, empty input.

* 🎯 fix: Route Skill Prime Events to the Real Response + Sparse-Array Offset

Two bugs in the optimistic-card emit from the last pass.

1. Wrong runId. The events used `USE_PRELIM_RESPONSE_MESSAGE_ID` (the MCP OAuth pattern), but OAuth emits DURING tool loading — before the real response messageId exists. By the time skill priming fires, the graph is about to emit with `this.responseMessageId`, so the PRELIM runId orphaned every card onto the client's placeholder response entry in `messageMap`, separate from the one the LLM's events were building. Net effect: cards never rendered mid-stream. Now passing `this.responseMessageId` — the same ID `createRun` receives — so synthetic and real steps land on the same `messageMap` entry.

2. Index 0 collision. With the runId fixed, card-at-0 would have hit `updateContent`'s type-mismatch guard when the LLM's text delta arrived at the same index, suppressing the whole text stream. New `SKILL_PRIME_INDEX_OFFSET` = 100 placed on both the live SSE emit and the server-side `contentParts` assignment. Sparse array during streaming renders as `[llm_text, ..., card]` (skip-holes via `Array#filter` / `Array#map`).
`filterMalformedContentParts` from `sendCompletion` compacts to dense `[text, card]` before persistence, so streaming UI and saved message agree on order — no finalize reorder jank. Post-run switches from `contentParts.unshift` to `contentParts[OFFSET + i] = part` to mirror the live placement.

- Add `startIndex` option to `buildSkillPrimeStepEvents` with `SKILL_PRIME_INDEX_OFFSET` default. Export the constant from `@librechat/api` so `client.js` can reuse it for the post-run splice.
- Update the existing index-ordering test to the new default and add a new test for the explicit `startIndex` override.

* 🎗️ feat: Replace $skill-name Text with Pills on the User Message

The `$skill-name ` cosmetic text the popover was inserting into the textarea had two problems: it lingered in the user message forever (the card is a more meaningful marker), and it implied that free-form text invocation like "$foo help me" should work — which it doesn't, and supporting it would mean another parsing layer nobody asked for.

Dropped the textarea insertion. Visual confirmation after submit now comes from a compact `ManualSkillPills` row on the user bubble that self-extinguishes once the backend's live skill-card stream (`buildSkillPrimeStepEvents` from the last commit) populates the sibling assistant response. Multiple skills render as multiple pills — the atom was already a string array, so multi-select works for free.

- `SkillsCommand.tsx`: select handler no longer writes to the textarea. Still drops the trigger `$` via `removeCharIfLast`, still pushes to `pendingManualSkillsByConvoId`, still flips `ephemeralAgent.skills`.
- `families.ts`: new `attachedSkillsByMessageId` atomFamily keyed by user messageId. `useChatFunctions.ask` writes the drained skill list here on every fresh submit (regenerate/continue/edit still skip).
- `ManualSkillPills.tsx` renders pills conditionally: hidden when the message isn't a user message, when no skills are attached, or when the sibling assistant response already carries a `skill` tool_call content part (the live card took over). Reads messages via React Query so we don't re-render on every message-state keystroke.
- `Container.tsx` mounts the pills above the user message text, parallel to the existing `Files` slot.
- Updated the SkillsCommand select-flow spec to assert the textarea is cleared of `$` instead of populated with `$name `.

5 new tests for `ManualSkillPills` covering empty state, non-user message guard, multi-skill rendering, the skill-card hide condition, and the text-only-content-doesn't-hide case.

* 🎛️ feat: Manual Skills as Persisted Message Field + Compose-Time Chips

Three problems with the previous pass:

1. Cards rendered BELOW the LLM text on the assistant message (and stayed there on reload) because the sparse index-100 offset put them after the model's content. Now back to `unshift` — cards at the top, same as before the live-emit detour.
2. Pills on the user message disappeared the moment the live card arrived, so users barely saw them. The live-emit channel also added meaningful complexity and relied on a per-message Recoil atom that had no clean cleanup story.
3. No visual cue at all during new-chat compose — the `$name ` text was removed, the submitted-message pills weren't there yet, and the popover closes after selection. User had no way to see what they'd queued up before sending.

New architecture: `manualSkills` is a first-class field on `TMessage`, persisted by the backend on the user message. `ManualSkillPills` reads straight from `message.manualSkills` — no atom, no sibling-lookup — so pills survive reload, show in history, and stay for the lifetime of the message. Compose-time chips above the textarea read the existing `pendingManualSkillsByConvoId` atom and let users × skills out before submitting.
Backend reverts:

- `client.js`: dropped the `ON_RUN_STEP` live-emit loop, restored `this.contentParts.unshift(...primeParts)` so cards sit at the top of the persisted assistant response.
- `skills.ts`: removed `buildSkillPrimeStepEvents` and `SKILL_PRIME_INDEX_OFFSET` (both unused now). `GraphEvents`, `StepTypes`, and `Constants` imports went with them. Removed 8 tests.

Field persistence:

- `tMessageSchema` gains `manualSkills: z.array(z.string()).optional()`.
- Mongoose message schema gains `manualSkills: { type: [String] }` with matching `IMessage` TS field.
- `BaseClient.js` reads `req.body.manualSkills` on user-message save, filters to non-empty strings, pins onto `userMessage` before `saveMessageToDatabase`. Mirrors the existing `files` pattern right above it. Runtime resolution still reads top-level `req.body.manualSkills` — persistence and resolution are separate concerns.

Frontend:

- `useChatFunctions.ask` sets `currentMsg.manualSkills` directly; the drained atom value goes onto the message, not a separate atom. Removed the `attachSkillsToMessage` Recoil callback.
- `ManualSkillPills`: pure render of `message.manualSkills`. No more `useQueryClient`, no sibling scan, no atom read. Loses the auto-hide-when-card-arrives behavior — pills stay on the user bubble, cards live on the assistant bubble, both are informative.
- Dropped the `attachedSkillsByMessageId` atomFamily and its export.
- New `PendingManualSkillsChips` above the textarea reads the compose-time atom and renders chips with × to remove. Mounted in `ChatForm` right after `TextareaHeader`. Naturally hides on submit when the atom drains.

Tests: updated `ManualSkillPills` suite to the new field-based reads (5 passing). New `PendingManualSkillsChips` suite covering empty state, multi-chip render, single × removal, and full-clear (4 passing). Backend suite trimmed to 89 (was 97) from the step-events test removal — no regressions on the remaining helpers.
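The request-body hardening described for `extractManualSkills` can be sketched as follows. This is an illustrative reimplementation under assumed semantics (returning `undefined` when nothing valid survives is an assumption), not the actual LibreChat helper:

```typescript
// Hedged sketch: accept only an array of non-empty strings, so crafted
// payloads like {"manualSkills": [123, {"$gt": ""}]} never reach the
// getSkillByName DB-lookup layer.
function extractManualSkills(
  body: Record<string, unknown> | undefined,
): string[] | undefined {
  const raw = body?.manualSkills;
  if (!Array.isArray(raw)) {
    return undefined; // non-array payloads are rejected outright
  }
  // Type-predicate filter: drops numbers, objects, and empty/blank strings.
  const names = raw.filter(
    (item): item is string => typeof item === 'string' && item.trim().length > 0,
  );
  // Assumed behavior: treat an all-invalid array the same as no field.
  return names.length > 0 ? names : undefined;
}
```

Centralizing this in one helper (instead of per-controller `Array.isArray` checks) is what closes the gap where a NoSQL-operator object could reach the DB layer.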
* 🧪 feat: Assistant-Side Skill-Loading Chips + Pill Padding

Two small UX fixes on top of the field-on-message architecture.

Pill padding: bumped the user-side `ManualSkillPills` from `py-0.5` to `py-1` on each chip and added `py-0.5` to the wrapper so the row breathes a little without feeling tall.

Mid-stream indicator: new `InvokingSkillsIndicator` mirrors the parent user message's `manualSkills` onto the assistant bubble as transient "Running X" chips while the real card is in flight. Renders above `ContentParts` in `MessageParts`. Hides itself when the assistant's own `content` grows a `skill` tool_call — the authoritative card from `buildSkillPrimeContentParts.unshift` is showing, so the placeholder steps aside. No SSE emit, no aggregator injection, no index collision with the LLM's streaming content: just a render slot keyed off the parent's field.

Why not stream the cards live: whichever content index we'd choose either blocks the LLM's text stream (`updateContent` type-mismatch at index 0) or lands below the response after sparse compaction (index 100+). Mirroring the parent field sidesteps the aggregator entirely and gives the user an immediate "skill is loading" signal that naturally gives way to the real card at finalize.

Covers the gap the user flagged: pills on the user message said "I asked for these" but nothing on the assistant side said "we're working on it" until the stream finished.

5 new tests for the indicator: user-msg guard, missing parent-field guard, multi-chip render, hides-on-card-landing, orphan-parent guard.

* 🔁 fix: Indicator Visibility + Carry Manual Skills Through Regenerate/Edit

Two bugs.
Indicator never rendered: `InvokingSkillsIndicator` looked up the parent user message via `queryClient.getQueryData([QueryKeys.messages, convoId])`, but on a new chat the React Query cache is keyed by `"new"` (the URL `paramId`) until the server assigns a real conversation ID — while `message.conversationId` on the assistant message is already the server ID. Lookup missed, `skills.length === 0`, nothing rendered. Switched to `useChatContext().getMessages()`, which reads from the same `paramId` the rest of the UI uses, so new-chat and existing-chat cases both resolve to the correct message list.

Regenerate / save-and-submit dropped manual skills: the compose-time `pendingManualSkillsByConvoId` atom is drained on the first submit, so replaying that turn later found an empty atom and sent `manualSkills: []`. The pills were still on the user bubble, so from the user's point of view the model was running primed — but the backend saw nothing and produced an unprimed response.

- Added `overrideManualSkills?: string[]` to `TOptions`. Callers with a reference message pass its persisted `manualSkills`; `useChatFunctions.ask` uses the override verbatim when present, otherwise falls back to the existing drain-or-empty logic.
- `regenerate` in `useChatFunctions` passes `parentMessage.manualSkills` — the user message being regenerated has the field persisted by the backend, so the second turn primes the same skills as the first.
- `EditMessage.resubmitMessage` covers both edit branches:
  - User-message save-and-submit: forwards the edited message's own `manualSkills` so the new sibling turn primes identically.
  - Assistant-response edit: forwards the parent user message's `manualSkills` for the same reason.

Indicator test suite converted from `@tanstack/react-query` harness to a jest-mocked `useChatContext().getMessages()`. 6 tests (was 5), added a cache-miss case.
* 🧭 fix: Drive Mid-Stream Skill Chips from Submission Atom, Not Message Lookup

Message-ID-keyed lookups kept racing the stream: the user message flips from its client-side intermediate UUID to the server-assigned ID mid-run, conversation IDs flip from the URL `paramId="new"` to the real convo ID on brand-new chats, and the React Query cache splits briefly between the two. Previous attempts — direct `queryClient.getQueryData` and then `useChatContext().getMessages()` — each missed a different window.

`TSubmission.manualSkills` is already populated at `ask()` time and the submission atom (`store.submissionByIndex(index)`) is the single stable anchor across the whole lifecycle: set once at submit, lives through every SSE event, cleared when the stream ends. No ID lookups, no cache timing.

- `InvokingSkillsIndicator` now reads `submissionByIndex(index)` via Recoil. Shows chips when:
  - the message is assistant-side,
  - a submission is in flight with non-empty `manualSkills`,
  - the assistant's `parentMessageId` matches `submission.userMessage.messageId` (so chips appear only on the bubble for the current turn, never on siblings),
  - the assistant's own content doesn't yet carry a `skill` tool_call (real card takes over from the server's post-run `contentParts.unshift`).
- Drops the `useChatContext().getMessages()` dependency and the `useQueryClient` dependency before that. No more lookups by conversationId or messageId.

Test suite now mocks `useChatContext` to supply `index: 0` and seeds the `submissionByIndex(0)` atom via Recoil initializer. 6 cases cover user-side, no-submission history, empty `manualSkills`, multi-chip render, hides-on-card-landing, and wrong-turn guard.
* 🌱 fix: Seed Response manualSkills in createdHandler, Indicator Becomes Pure The mid-stream indicator kept getting wired off state I don't own: first `queryClient.getQueryData` (raced the new-chat paramId flip), then `useChatContext().getMessages()` (same cache, same race), then `useRecoilValue(submissionByIndex)` (pulled every message into the submission subscription — re-renders all indicators on any submission change, exactly the "limit hooks in rendering" concern). Cleanest path is the one the user pointed at: the submission owns the data, `useSSE` / `useEventHandlers` owns the save points, so seed the field ONTO the response message at the save site and let the indicator be a pure prop-read. - `createdHandler` now writes `manualSkills` onto the initial response from `submission.manualSkills` at the moment the placeholder enters the messages array. The field rides through the normal mutation pipeline via spreads (`useStepHandler` response creation, `updateContent` result returns) — no special handling needed. - `InvokingSkillsIndicator` drops the Recoil / context / queryClient reads. Pure function of `message`: if assistant, has `manualSkills`, and `content` hasn't grown a `skill` tool_call yet, render chips. Only `useLocalize` left, which was already unavoidable for the i18n string. - Renders decouple: no single state change (`submissionByIndex` flip, React Query cache update) forces every indicator in the message list to re-render anymore. Only the message whose prop changed re-runs. Finalize story unchanged: server's `responseMessage` doesn't carry the frontend-only `manualSkills` field, so `finalHandler`'s replacement drops it — but by then the real `skill` tool_call is in `content` and the indicator's content-scan hides itself anyway. Test suite back to pure prop mocks: 7 cases covering user-guard, no-seed, multi-chip render, skill-card-hide, non-skill-tool-call-keeps, text-only-keeps, and missing message. 
* 🪞 fix: Render Skill Indicator Inside ContentParts, Adjacent to Parts The indicator still wasn't showing because even though MessageParts mounted it as a sibling of ContentParts, ContentParts is a `memo`'d component that owns the only rendering path that refreshes in lockstep with content deltas. Mounting above it put the indicator one layer further out — reachable, but not exercised on the same render cycle that processes the streaming `message` prop. Moved the indicator into ContentParts itself, rendered at the top of both the sequential and parallel branches. Reads the `message` prop (newly threaded through as an optional prop alongside `content`), so: - Same render cycle as Parts — updates from the SSE pipeline flow through the same pathway. - Lives outside the `content.map`, so delta-driven content reshuffles never wipe it. - Still a pure prop-read inside the indicator itself (no Recoil, queryClient, context hooks). The only dep is `useLocalize`. Thread: - `ContentPartsProps` gains `message?: TMessage`. - `MessageParts` passes `message={message}` through, drops its own indicator mount + import. - `ContentParts` renders `<InvokingSkillsIndicator message={message} />` in both the parallel-content and sequential-content branches, right under `MemoryArtifacts` and before the empty-cursor / parts map. Companion data flow (unchanged): `createdHandler` seeds `initialResponse.manualSkills` from `submission.manualSkills`; the field rides through `useStepHandler` via spreads; indicator hides on `skill` tool_call landing in `content`. * 🔎 refactor: Narrow Skill Components to Scalar skills Prop, Kill Memo Churn Passing the full `message` object into presentational components busts `React.memo` shallow comparisons every time the message reference changes for unrelated reasons. 
Swap to scalar `skills?: string[]` throughout: - `InvokingSkillsIndicator`: props-only (`skills?: string[]`); visibility logic (user-vs-assistant, skill tool_call arrival) now lives in the caller so this stays pure presentational. - `ManualSkillPills`: props-only (`skills?: string[]`). - `ContentParts`: takes `manualSkills?: string[]` scalar, computes `showInvokingSkills` once per render from `manualSkills` + content scan for the `skill` tool_call, then mounts the indicator with `skills=` prop in both parallel and sequential branches. - `MessageParts`: passes `manualSkills={message.manualSkills}` through to `ContentParts`. - `Container`: passes `skills={message.manualSkills}` to `ManualSkillPills`. - Tests updated to exercise the narrowed prop surface. * 📜 feat: Mid-Stream Skill Cards via SkillCall, Drop Custom Indicator Instead of a separate `InvokingSkillsIndicator` chip component, render pending skill placeholders through the existing `SkillCall` renderer — same component the backend's finalized prime part uses. The loading visual (`progress < 1` + empty output → pulsing "Running X") and the completed visual ("Ran X") now come from one source of truth. `ContentParts` computes `pendingSkillNames` from `manualSkills` minus any `skill` tool_call already in `content` (dedupe by `args.skillName` since the synthetic's id differs from the real one). Those names render through a separate slot ABOVE the Parts iteration — not prepended to the content array, which would shift React keys on every downstream streaming text / tool part and force unmount/remount mid-stream. When the real prime `tool_call` lands at finalize (backend unshifts to content[0..]), `collectExistingSkillNames` picks it up, the pending set empties, and the real part takes over rendering in the Parts iteration. Layout is identical either way because primes are always at the top of content. 
- `InvokingSkillsIndicator.tsx` + test deleted (no longer referenced) - `ContentParts.tsx` renders `<SkillCall .../>` directly for pending names, mirrors `Part.tsx`'s usage of the same component - `createdHandler` doc comment updated to reflect the new flow * ✂️ fix: Render Interim Skill Cards From manualSkills Only, Leave Content Untouched Previous revision read `content` to de-dupe pending cards against real `skill` tool_calls, so any optimistic skill part streamed from the backend would race our placeholder off the screen mid-turn — exactly the "getting overridden" symptom. Now: interim `SkillCall` cards are driven purely by the response message's `manualSkills` field. `content` is never inspected here, so no backend delta can pull the cards down. The field is now seeded directly onto the assistant placeholder in `useChatFunctions` (not only in `createdHandler`) so the cards appear from the first render, before the `created` SSE event round-trip. Lifecycle: - `useChatFunctions` puts `manualSkills` on the freshly-minted `initialResponse` — cards render the instant the placeholder lands. - `createdHandler` keeps its own re-seed (idempotent; safe) so a regenerate / save-and-submit flow that hits that path still works. - `useStepHandler` spread operations preserve the field through every content update. - `finalHandler` replaces the message with the server-backed `responseMessage` (no `manualSkills`) — cards disappear, and the real `skill` tool_call part in `content` takes over. ContentParts changes: - Drop `collectExistingSkillNames` / `parseJsonField` dedupe path. - `renderPendingSkills` reads only `manualSkills` + `isCreatedByUser`. - Simpler control flow — one boolean (`hasPendingSkills`) gates the early return, one function renders. 
* 🩹 fix: Codex Review Resolutions — Localization, Guards, Tests, Docs Addresses seven findings from comprehensive code review: Finding 1 (MAJOR) — Document sticky re-priming as intentional - `buildSkillPrimeContentParts`: expanded doc comment explaining synthetic `skill` tool_calls persist and get re-primed on every subsequent turn via `extractInvokedSkillsFromPayload` (shape parity with model-invoked skills). This matches the UX: the assistant skill card is a visible, persistent signal that the skill is active for the conversation. Not a bug — called out explicitly so future maintainers don't mistake it for one. Finding 2 (MAJOR) — Add ContentParts render tests - New `ContentParts.test.tsx` with 7 cases covering the interim skill card logic: assistant-only rendering, user-message suppression, undefined-content safety, parallel+sequential branch integration, progress<1 (pending) state. Child components mocked so the test exercises only the branching and prop wiring ContentParts owns. Finding 3 (MINOR) — Localize hardcoded aria-labels - Added `com_ui_skills_manual_invoked` + `com_ui_skills_queued` keys. - Reused existing `com_ui_remove_skill_var` for the remove-button aria-label. - `PendingManualSkillsChips` and `ManualSkillPills` now call `useLocalize()`. Test mocks updated to the label-echo pattern. Finding 4 (MINOR) — Max-length guard in `extractManualSkills` - New `MAX_SKILL_NAME_LENGTH = 200` constant and filter. Blocks a crafted payload like `{ manualSkills: ['a'.repeat(100000)] }` from reaching `getSkillByName` / Mongo's query planner. Finding 5 (NIT) — `BaseClient.js` comment contradicted itself - Rewrote to call the filter what it is: defense-in-depth on top of Mongoose schema validation, not a redundant second layer. Finding 6 (NIT) — `ManualSkillPills` now wrapped in `React.memo` - Consistent with peer components (`PendingManualSkillsChips`, `ContentParts`). 
Rendered inside `Container`, which re-renders on every content update, so the memo is a real cycle savings. Finding 7 (NIT) — Redundant guard in `ContentParts.renderPendingSkills` - Collapsed the duplicate null-check by computing `pendingSkills` as a `useMemo`'d array (`[]` when not applicable), and mapping directly. `hasPendingSkills` now derives from the array length — one source of truth, no redundant gate inside the render function. * 🔧 fix: Update ParallelContent to Handle Optional Content Prop Modified the `ParallelContentRendererProps` to make the `content` prop optional, ensuring safer access within the component. Adjusted the calculation of `lastContentIdx` to handle cases where `content` may be undefined, preventing potential runtime errors. This change enhances the robustness of the component when dealing with varying message structures. * 🎯 fix: Thread manualSkills Through ContentRender — The Real Renderer This is why the interim skill cards never appeared across many rounds of iteration: `ContentRender.tsx` (the memo'd renderer used by most paths, including the agents endpoint) was calling `ContentParts` without the `manualSkills` prop. Only `MessageParts.tsx` had it wired up — and that's not the component that actually renders the assistant response in production. Two fixes: 1. Pass `manualSkills={msg.manualSkills}` to the `ContentParts` call. 2. Extend the `areContentRenderPropsEqual` memo comparator to include `manualSkills.length`, otherwise a message update that adds the field (seeded by `useChatFunctions` on the initialResponse) would be bailed out by the memo and never re-render. Verified the two ContentParts call sites are now consistent; Container usages for `ManualSkillPills` on the user side were already correct. * 🧹 polish: Address Audit Follow-Up (F1/F3/F6) F1 — Clarify sticky re-priming opt-out path. 
The previous comment said "regenerate without the pick" as one opt-out, but `useChatFunctions.regenerate` forwards the original picks via `overrideManualSkills`, so regeneration alone keeps the skill sticky. Updated to: edit the originating message to remove the pills and resubmit, or start a new conversation. F3 — Add DOM-order assertions to the parallel + sequential tests. The two "alongside" tests verified both elements existed but didn't pin the ordering contract. Both now use `compareDocumentPosition` to assert the pending SkillCall precedes the real content, matching the backend semantic (`contentParts.unshift(...primeParts)` puts primes at the top). F6 — Fix package import order in PendingManualSkillsChips. `recoil` (58 chars) was listed before `lucide-react` (45 chars) which violates the "shortest to longest after react" rule in AGENTS.md. Swapped order; no behavior change. F2 / F4 / F5 from the audit were confirmed as non-issues (React-safe empty map, cosmetic test-mock artifact, accepted memo tradeoff) and require no change. * ✨ feat: Dedicated PendingSkillCall + Running→Ran Transition on Real Content UX polish on the interim skill card now that it's actually rendering: 1. New `PendingSkillCall` component (mirrors `SkillCall` visually but drops the expand affordance). `SkillCall`'s underlying `ProgressText` always renders a chevron + clickable button when any input is present, which on a card with empty output points at nothing — misleading cursor:pointer and a no-op toggle. The pending variant has only the icon + label, no button wrapper, no chevron. 2. "Running X" → "Ran X" transition when real content lands. `ContentParts` computes `hasRealContent` (any non-text part, or a text part with non-empty content — placeholder empty-text parts don't count) and passes `loaded={hasRealContent}` to `PendingSkillCall`. Matches what users see for model-invoked skills as they finish priming: pulsing shimmer → static icon. 3. 
Cleanup: - Dropped direct `SkillCall` import from `ContentParts` (replaced by `PendingSkillCall`). `SkillCall` is still used by `Part` for real `skill` tool_call content parts — no behavior change there. - Removed the now-redundant explicit `manualSkills` assignment in `createdHandler`. `useChatFunctions` seeds the field on `initialResponse` at construction, so the `...submission.initialResponse` spread already carries it through — the re-assignment was defensive belt-and-suspenders doing the same work twice. Comment rewritten to describe the actual lifecycle. Tests updated to the new component (12/12 pass): two new cases pin the loaded-state transition (unloaded when content has no real parts, flips to loaded once a non-empty text part lands). |
||
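The memo-comparator extension named in the "Thread manualSkills Through ContentRender" commit above can be sketched as follows. `areContentRenderPropsEqual` is the comparator named in the commit; the `Msg` shape and the other fields compared here are illustrative assumptions, not the repo's actual implementation:

```typescript
// Illustrative comparator sketch: in addition to whatever fields the real
// comparator checks, compare manualSkills.length so that seeding the field
// on the initialResponse is not bailed out by React.memo.
type Msg = { messageId: string; text?: string; manualSkills?: string[] };

function areContentRenderPropsEqual(
  prev: { msg: Msg },
  next: { msg: Msg },
): boolean {
  return (
    prev.msg.messageId === next.msg.messageId &&
    prev.msg.text === next.msg.text &&
    // A change in manualSkills length counts as a prop change,
    // forcing the memo'd renderer to re-run.
    (prev.msg.manualSkills?.length ?? 0) === (next.msg.manualSkills?.length ?? 0)
  );
}
```

Comparing the array's length rather than its identity keeps the comparator cheap while still catching the one transition that matters here: the field being seeded onto a message that previously lacked it.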
|
|
6ecd1b510f
|
📎 fix: Route Unrecognized File Types via supportedMimeTypes Config (#12508)
* fix: check supportedMimeTypes before routing unrecognized file types
In processAttachments, files not matching the hardcoded mime type categories (image, PDF, video, audio) were silently dropped. Now resolves the endpoint's file config and checks the file type against supportedMimeTypes before routing to the documents pipeline. Files not matching any config are still skipped (original behavior).
Closes #12482
* feat: encode generic document types for supported providers
Remove restrictive mime type filter in encodeAndFormatDocuments that only allowed PDFs and application/* types. Add a generic encoding path for non-PDF, non-Bedrock files using the provider's native format (Anthropic base64 document, OpenAI file block, Google media block). Files are already validated upstream by supportedMimeTypes.
* fix: guard file.type and cache file config in processAttachments
- Add file.type truthiness check before checkType to prevent coercion of null/undefined to string 'null'/'undefined'
- Cache mergedFileConfig and endpointFileConfig on the instance so addPreviousAttachments doesn't recompute per message
* refactor: harden generic document encoding with validation and tests
- Extract formatDocumentBlock helper to eliminate ~30 lines of duplicate provider-dispatch code between PDF and generic paths
- Add size validation in generic encoding path using configuredFileSizeLimit (was fetched but unused)
- Guard Bedrock from generic path — non-bedrockDocumentFormats types are now skipped instead of silently tracking metadata
- Only push metadata to result.files when a document block was actually created, preventing silent inconsistent state
- Enable Anthropic citations for text/plain, text/html, text/markdown (supported by Anthropic's document API)
- Fix != to !== for Providers.AZURE comparison
- Add 9 tests covering all four provider branches, Bedrock exclusion, size limit enforcement, and unhandled provider
* fix: resolve filename type mismatch in formatDocumentBlock
filename parameter is string | undefined but OpenAIFileBlock and OpenAIInputFileBlock require string. Default to 'document' when filename is undefined.
* fix: use endpoint name for file config lookup in processAttachments
Agent runs can have agent.provider set to a base provider (e.g., openAI) while agent.endpoint is a custom endpoint name. Using provider for the getEndpointFileConfig lookup bypassed custom endpoint supportedMimeTypes config. Now uses agent.endpoint, matching the pattern in addDocuments.
* perf: filter non-Bedrock files before fetching streams
Bedrock only supports types in bedrockDocumentFormats. Previously, getFileStream was called for all files and unsupported types were discarded after download. Now pre-filters the file list for Bedrock to avoid unnecessary network and memory overhead for large unsupported attachments.
* refactor: clean up processAttachments file config handling
- Remove redundant ?? null intermediaries; use instance properties directly in the else-if condition
- Add JSDoc @type annotations for _mergedFileConfig and _endpointFileConfig in the constructor
* refactor: harden document encoding and add routing tests
- Hoist configuredFileSizeLimit above the loop to avoid recomputing mergeFileConfig per file
- Replace Buffer.from decode with base64 length formula in the generic size check to avoid unnecessary heap allocation
- Use nullish coalescing (??) for filename fallback
- Clean up test: remove unnecessary type cast, use createMockRequest helper for size-limit test
- Add 14 tests for processAttachments categorization logic covering supportedMimeTypes routing, null/undefined guards, standard type passthrough, and edge cases
* fix: use optional chaining for checkType in routing tests
FileConfig.checkType is typed as optional. Use optional chaining to satisfy strict type checking.
* fix: skip stream fetches for unsupported providers, block Bedrock generic routing
- Return early from encodeAndFormatDocuments when the provider is neither document-supported nor Bedrock, avoiding unnecessary getFileStream calls for providers that would discard all results
- Add !isBedrock guard to the supportedMimeTypes fallback branch in processAttachments so permissive patterns like '.*' don't route non-Bedrock types into documents that would be silently dropped
- Add test for Bedrock + non-Bedrock-document-type skipping
* fix: respect supportedMimeTypes config for Bedrock endpoints
Remove !isBedrock guard from the generic supportedMimeTypes routing branch. If a user configures permissive supportedMimeTypes for a Bedrock endpoint, the upload validation already accepted the file. The encoding layer pre-filters to Bedrock-supported types before fetching streams, so unsupported types are handled there without silently dropping files the user explicitly allowed.
|
||
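The routing decision described in the commit above — an unrecognized file type is kept only if the endpoint's supportedMimeTypes patterns accept it, with a truthiness guard so null/undefined never coerces to the strings 'null'/'undefined' — can be sketched as below. The `checkType` helper mirrors the behavior the commit attributes to `FileConfig.checkType`; the `routeAttachment` wrapper and its return values are illustrative assumptions:

```typescript
// Hedged sketch of the processAttachments routing branch. checkType here
// stands in for the data-provider helper the commit names; its exact
// signature in the repo may differ.
const checkType = (fileType: string, supportedTypes: RegExp[]): boolean =>
  supportedTypes.some((regex) => regex.test(fileType));

function routeAttachment(
  fileType: string | undefined | null,
  supportedMimeTypes: RegExp[],
): 'documents' | 'skip' {
  // Truthiness guard: without it, String(null) === 'null' could
  // accidentally match a permissive pattern like /.*/
  if (!fileType) {
    return 'skip';
  }
  return checkType(fileType, supportedMimeTypes) ? 'documents' : 'skip';
}
```

Files matching a hardcoded category (image, PDF, video, audio) never reach this branch; only the leftovers are tested against the endpoint config, preserving the original skip behavior when nothing matches.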
|
|
fd01dfc083
|
💰 fix: Lazy-Initialize Balance Record at Check Time for Overrides (#12474)
* fix: Lazy-initialize balance record when missing at check time
When balance is configured via admin panel DB overrides, users with existing sessions never pass through the login middleware that creates their balance record. This causes checkBalanceRecord to find no record and return balance: 0, blocking the user.
Add optional balanceConfig and upsertBalanceFields deps to CheckBalanceDeps. When no balance record exists but startBalance is configured, lazily create the record instead of returning canSpend: false. Pass the new deps from BaseClient, chatV1, and chatV2 callers.
* test: Add checkBalance lazy initialization tests
Cover lazy balance init scenarios: successful init with startBalance, insufficient startBalance, missing config fallback, undefined startBalance, missing upsertBalanceFields dep, and startBalance of 0.
* fix: Address review findings for lazy balance initialization
- Use canonical BalanceConfig and IBalanceUpdate types from @librechat/data-schemas instead of inline type definitions
- Include auto-refill fields (autoRefillEnabled, refillIntervalValue, refillIntervalUnit, refillAmount, lastRefill) during lazy init, mirroring the login middleware's buildUpdateFields logic
- Add try/catch around upsertBalanceFields with graceful fallback to canSpend: false on DB errors
- Read balance from DB return value instead of raw startBalance constant
- Fix misleading test names to describe observable throw behavior
- Add tests: upsertBalanceFields rejection, auto-refill field inclusion, DB-returned balance value, and logViolation assertions
* fix: Address second review pass findings
- Fix import ordering: package type imports before local type imports
- Remove misleading comment on DB-fallback test, rename for clarity
- Add logViolation assertion to insufficient-balance lazy-init test
- Add test for partial auto-refill config (autoRefillEnabled without required dependent fields)
* refactor: Replace createMockReqRes factory with describe-scoped consts
Replace zero-argument factory with plain const declarations using direct type casts instead of double-cast through unknown.
* fix: Sort local type imports longest-first, add missing logViolation assertion
- Reorder local type imports in spec file per AGENTS.md (longest to shortest within sub-group)
- Add logViolation assertion to startBalance: 0 test for consistent violation payload coverage across all throw paths
|
||
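The lazy-init-at-check-time flow from the commits above can be sketched as follows. This is a minimal synchronous stand-in — the real code is async, lives behind CheckBalanceDeps, and uses the canonical BalanceConfig/IBalanceUpdate types; every name below is illustrative:

```typescript
// Sketch: when no balance record exists but startBalance is configured
// and the upsert dependency is wired in, create the record lazily
// instead of denying the spend outright.
interface BalanceRecord { tokenCredits: number }
interface BalanceConfig { startBalance?: number }
type UpsertBalanceFields = (fields: BalanceRecord) => BalanceRecord;

function checkBalance(
  record: BalanceRecord | null,
  amountNeeded: number,
  balanceConfig?: BalanceConfig,
  upsertBalanceFields?: UpsertBalanceFields,
): { canSpend: boolean; balance: number } {
  if (record == null) {
    // Missing config or missing dep: fall back to the old deny behavior
    if (balanceConfig?.startBalance == null || upsertBalanceFields == null) {
      return { canSpend: false, balance: 0 };
    }
    try {
      const created = upsertBalanceFields({ tokenCredits: balanceConfig.startBalance });
      // Read the balance from the DB return value, not the raw constant
      return { canSpend: created.tokenCredits >= amountNeeded, balance: created.tokenCredits };
    } catch {
      // Graceful fallback on DB errors
      return { canSpend: false, balance: 0 };
    }
  }
  return { canSpend: record.tokenCredits >= amountNeeded, balance: record.tokenCredits };
}
```

Note the `== null` check deliberately lets `startBalance: 0` through to the upsert: a zero starting balance creates the record but still denies the spend, matching the startBalance-of-0 test case listed above.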
|
|
1455f15b7b
|
📄 feat: Model-Aware Bedrock Document Size Validation (#12467)
* 📄 fix: Model-Aware Bedrock Document Size Validation
Remove the hard 4.5MB clamp on Bedrock document uploads so that
Claude 4+ (PDF) and Nova (PDF/DOCX) models can accept larger files
per AWS documentation. The default 4.5MB limit is preserved for
other models/formats, and fileConfig can now override it in either
direction—consistent with every other provider.
* address review: restore Math.min for non-exempt docs, tighten regexes, add tests
- Restore Math.min clamp for non-exempt Bedrock documents (fileConfig can
only lower the hard 4.5 MB API limit, not raise it); only exempt models
(Claude 4+ PDF, Nova PDF/DOCX) use ?? to allow fileConfig override
- Replace copied isBedrockClaude4Plus regex with cleaner anchored pattern
that correctly handles multi-digit version numbers (e.g. sonnet-40)
and removes dead Alt 1 branch matching no real Bedrock model IDs
- Tighten isBedrockNova from includes() to startsWith() to prevent
substring matching in unexpected positions
- Add integration test verifying model is threaded to validateBedrockDocument
- Add boundary tests for exempt + low configuredFileSizeLimit, non-exempt
+ high configuredFileSizeLimit, and exempt model accepting files up to 32 MB
- Revert two tests that were incorrectly inverted to prove wrong behavior
- Fix inaccurate JSDoc and misleading test name
* simplify: allow fileConfig to override Bedrock limit in either direction
Make Bedrock consistent with all other providers — fileConfig sets the
effective limit unconditionally via ?? rather than clamping with Math.min.
The model-aware defaults (4.5 MB for non-exempt, 32 MB for exempt) remain
as sensible fallbacks when no fileConfig is set.
* fix: handle cross-region inference profile IDs in Bedrock model matchers
Bedrock cross-region inference profiles prepend a region code to the
model ID (e.g. "us.amazon.nova-pro-v1:0", "eu.anthropic.claude-sonnet-4-...").
Both isBedrockNova and isBedrockClaude4Plus would miss these prefixed IDs,
silently falling back to the 4.5 MB default for eligible models.
Switch both matchers to use (?:^|\.) to anchor the vendor segment so the
pattern matches with or without a leading region prefix.
|
||
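The matcher behavior described in the Bedrock commit above — vendor segment anchored with `(?:^|\.)` so cross-region inference profile IDs like `us.amazon.nova-pro-v1:0` still match — can be sketched like this. The function names follow the commit text, but the exact patterns in the repo may differ; these only reproduce the described behavior (region-prefix tolerance, multi-digit version support, Claude 3.x exclusion):

```typescript
// Sketch of the region-prefix-tolerant Bedrock model matchers.
// (?:^|\.) lets the vendor segment sit either at the start of the ID or
// after a region prefix such as "us." or "eu.".
const isBedrockNova = (modelId: string): boolean =>
  /(?:^|\.)amazon\.nova-/.test(modelId);

// Matches Claude 4+ IDs (claude-sonnet-4-..., claude-opus-4-1-...),
// including multi-digit versions (claude-sonnet-40), while rejecting
// Claude 3.x IDs, whose version digit precedes the family name.
const isBedrockClaude4Plus = (modelId: string): boolean =>
  /(?:^|\.)anthropic\.claude-(?:opus|sonnet|haiku)-(?:[4-9]|\d{2,})\b/.test(modelId);
```

The Claude 3.x exclusion falls out of the ID format itself: `anthropic.claude-3-5-sonnet-...` never contains `claude-sonnet-<digit>`, so no explicit version comparison is needed.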
|
|
0d94881c2d
|
🧹 refactor: Tighten Config Schema Typing and Remove Deprecated Fields (#12452)
* refactor: Remove deprecated and unused fields from endpoint schemas
- Remove summarize, summaryModel from endpointSchema and azureEndpointSchema
- Remove plugins from azureEndpointSchema
- Remove customOrder from endpointSchema and azureEndpointSchema
- Remove baseURL from all and agents endpoint schemas
- Type paramDefinitions with full SettingDefinition-based schema
- Clean up summarize/summaryModel references in initialize.ts and config.spec.ts
* refactor: Improve MCP transport schema typing
- Add defaults to transport type discriminators (stdio, websocket, sse)
- Type stderr field as IOType union instead of z.any()
* refactor: Add narrowed preset schema for model specs
- Create tModelSpecPresetSchema omitting system/DB/deprecated fields
- Update tModelSpecSchema to use the narrowed preset schema
* test: Add explicit type field to MCP test fixtures
Add transport type discriminator to test objects that construct MCPOptions/ParsedServerConfig directly, required after type field changed from optional to default in schema definitions.
* chore: Bump librechat-data-provider to 0.8.404
* refactor: Tighten z.record(z.any()) fields to precise value types
- Type headers fields as z.record(z.string()) in endpoint, assistant, and azure schemas
- Type addParams as z.record(z.union([z.string(), z.number(), z.boolean(), z.null()]))
- Type azure additionalHeaders as z.record(z.string())
- Type memory model_parameters as z.record(z.union([z.string(), z.number(), z.boolean()]))
- Type firecrawl changeTrackingOptions.schema as z.record(z.string())
* refactor: Type supportedMimeTypes schema as z.array(z.string())
Replace z.array(z.any()).refine() with z.array(z.string()) since config input is always strings that get converted to RegExp via convertStringsToRegex() after parsing. Destructure supportedMimeTypes from spreads to avoid string[]/RegExp[] type mismatch.
* refactor: Tighten enum, role, and numeric constraint schemas
- Type engineSTT as enum ['openai', 'azureOpenAI']
- Type engineTTS as enum ['openai', 'azureOpenAI', 'elevenlabs', 'localai']
- Constrain playbackRate to 0.25–4 range
- Type titleMessageRole as enum ['system', 'user', 'assistant']
- Add int().nonnegative() to MCP timeout and firecrawl timeout
* chore: Bump librechat-data-provider to 0.8.405
* fix: Accept both string and RegExp in supportedMimeTypes schema
The schema must accept both string[] (config input) and RegExp[] (post-merge runtime) since tests validate merged output against the schema. Use z.union([z.string(), z.instanceof(RegExp)]) to handle both.
* refactor: Address review findings for schema tightening PR
- Revert changeTrackingOptions.schema to z.record(z.unknown()) (JSON Schema is nested, not flat strings)
- Remove dead contextStrategy code from BaseClient.js and cleanup.js
- Extract paramDefinitionSchema to named exported constant
- Add .int() constraint to columnSpan and columns
- Apply consistent .int().nonnegative() to initTimeout, sseReadTimeout, scraperTimeout
- Update stale stderr JSDoc to match actual accepted types
- Add comprehensive tests for paramDefinitionSchema, tModelSpecPresetSchema, endpointSchema deprecated field stripping, and azureEndpointSchema
* fix: Address second review pass findings
- Revert supportedMimeTypesSchema to z.array(z.string()) and remove as string[] casts — fix tests to not validate merged RegExp[] output against the config input schema
- Remove unused tModelSpecSchema import from test file
- Consolidate duplicate '../src/schemas' imports
- Add expiredAt coverage to tModelSpecPresetSchema test
- Assert plugins is absent in azureEndpointSchema test
- Add sync comments for engineSTT/engineTTS enum literals
* refactor: Omit preset-management fields from tModelSpecPresetSchema
Omit conversationId, presetId, title, defaultPreset, and order from the model spec preset schema — these are preset-management fields that don't belong in model spec configuration.
|
||
|
|
935288f841
|
🏗️ feat: 3-Tier MCP Server Architecture with Config-Source Lazy Init (#12435)
* feat: add MCPServerSource type, tenantMcpPolicy schema, and source-based dbSourced wiring
- Add `tenantMcpPolicy` to `mcpSettings` in YAML config schema with
`enabled`, `maxServersPerTenant`, `allowedTransports`, and `allowedDomains`
- Add `MCPServerSource` type ('yaml' | 'config' | 'user') and `source`
field to `ParsedServerConfig`
- Change `dbSourced` determination from `!!config.dbId` to
`config.source === 'user'` across MCPManager, ConnectionsRepository,
UserConnectionManager, and MCPServerInspector
- Set `source: 'user'` on all DB-sourced servers in ServerConfigsDB
* feat: three-layer MCPServersRegistry with config cache and lazy init
- Add `configCacheRepo` as third repository layer between YAML cache and
DB for admin-defined config-source MCP servers
- Implement `ensureConfigServers()` that identifies config-override servers
from resolved `getAppConfig()` mcpConfig, lazily inspects them, and
caches parsed configs with `source: 'config'`
- Add `lazyInitConfigServer()` with timeout, stub-on-failure, and
concurrent-init deduplication via `pendingConfigInits` map
- Extend `getAllServerConfigs()` with optional `configServers` param for
three-way merge: YAML → Config → User
- Add `getServerConfig()` lookup through config cache layer
- Add `invalidateConfigCache()` for clearing config-source inspection
results on admin config mutations
- Tag `source: 'yaml'` on CACHE-stored servers and `source: 'user'` on
DB-stored servers in `addServer()` and `addServerStub()`
* feat: wire tenant context into MCP controllers, services, and cache invalidation
- Resolve config-source servers via `getAppConfig({ role, tenantId })`
in `getMCPTools()` and `getMCPServersList()` controllers
- Pass `ensureConfigServers()` results through `getAllServerConfigs()`
for three-way merge of YAML + Config + User servers
- Add tenant/role context to `getMCPSetupData()` and connection status
routes via `getTenantId()` from ALS
- Add `clearMcpConfigCache()` to `invalidateConfigCaches()` so admin
config mutations trigger re-inspection of config-source MCP servers
* feat: enforce tenantMcpPolicy on admin config mcpServers mutations
- Add `validateMcpServerPolicy()` helper that checks mcpServers against
operator-defined `tenantMcpPolicy` (enabled, maxServersPerTenant,
allowedTransports, allowedDomains)
- Wire validation into `upsertConfigOverrides` and `patchConfigField`
handlers — rejects with 403 when policy is violated
- Infer transport type from config shape (command → stdio, url protocol
→ websocket/sse, type field → streamable-http)
- Validate server domains against policy allowlist when configured
* revert: remove tenantMcpPolicy schema and enforcement
The existing admin config CRUD routes already provide the mechanism
for granular MCP server prepopulation (groups, roles, users). The
tenantMcpPolicy gating adds unnecessary complexity that can be
revisited if needed in the future.
- Remove tenantMcpPolicy from mcpSettings Zod schema
- Remove validateMcpServerPolicy helper and TenantMcpPolicy interface
- Remove policy enforcement from upsertConfigOverrides and
patchConfigField handlers
* test: update test assertions for source field and config-server wiring
- Use objectContaining in MCPServersRegistry reset test to account for
new source: 'yaml' field on CACHE-stored configs
- Add getTenantId and ensureConfigServers mocks to MCP route tests
- Add getAppConfig mock to route test Config service mock
- Update getMCPSetupData assertion to expect second options argument
- Update getAllServerConfigs assertions for new configServers parameter
* fix: disconnect active connections when config-source servers are evicted
When admin config overrides change and config-source MCP servers are
removed, the invalidation now proactively disconnects active connections
for evicted servers instead of leaving them lingering until timeout.
- Return evicted server names from invalidateConfigCache()
- Disconnect app-level connections for evicted servers in
clearMcpConfigCache() via MCPManager.appConnections.disconnect()
* fix: address code review findings (CRITICAL, MAJOR, MINOR)
CRITICAL fixes:
- Scope configCacheRepo keys by config content hash to prevent
cross-tenant cache poisoning when two tenants define the same
server name with different configurations
- Change dbSourced checks from `source === 'user'` to
`source !== 'yaml' && source !== 'config'` so undefined source
(pre-upgrade cached configs) fails closed to restricted mode
MAJOR fixes:
- Derive OAuth servers from already-computed mcpConfig instead of
calling getOAuthServers() separately — config-source OAuth servers
are now properly detected
- Add parseInt radix (10) and NaN guard with fallback to 30_000
for CONFIG_SERVER_INIT_TIMEOUT_MS
- Add CONFIG_CACHE_NAMESPACE to aggregate-key branch in
ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Remove `if (role || tenantId)` guard in getMCPSetupData — config
servers now always resolve regardless of tenant context
MINOR fixes:
- Extract resolveAllMcpConfigs() helper in mcp controller to
eliminate 3x copy-pasted config resolution boilerplate
- Distinguish "not initialized" from real errors in
clearMcpConfigCache — log actual failures instead of swallowing
- Remove narrative inline comments per style guide
- Remove dead try/catch inside Promise.allSettled in
ensureConfigServers (inner method never throws)
- Memoize YAML server names to avoid repeated cacheConfigsRepo.getAll()
calls per request
Test updates:
- Add ensureConfigServers mock to registry test fixtures
- Update getMCPSetupData assertions for inline OAuth derivation
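The radix/NaN guard from the MAJOR fixes above can be sketched as follows. The env var name and the 30_000 fallback come from the commit message; the helper name is hypothetical:

```javascript
// Hypothetical helper sketching the parseInt radix + NaN guard described above.
// CONFIG_SERVER_INIT_TIMEOUT_MS and the 30_000 fallback are from the commit.
function resolveInitTimeout(env = process.env) {
  const parsed = parseInt(env.CONFIG_SERVER_INIT_TIMEOUT_MS ?? '', 10);
  // parseInt('', 10) and parseInt('abc', 10) both yield NaN -> use the default
  return Number.isNaN(parsed) ? 30_000 : parsed;
}
```

Without the radix, strings like `'010'` could be parsed inconsistently on older engines, and without the NaN guard a typo in the env var would propagate `NaN` into timeout math.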
* fix: address code review findings (CRITICAL, MAJOR, MINOR)
CRITICAL fixes:
- Break circular dependency: move CONFIG_CACHE_NAMESPACE from
MCPServersRegistry to ServerConfigsCacheFactory
- Fix dbSourced fail-closed: use source field when present, fall back to
legacy dbId check when absent (backward-compatible with pre-upgrade
cached configs that lack source field)
MAJOR fixes:
- Add CONFIG_CACHE_NAMESPACE to aggregate-key set in
ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Add comprehensive test suite (ensureConfigServers.test.ts, 18 tests)
covering lazy init, stub-on-failure, cross-tenant isolation via config
hash keys, concurrent deduplication, merge order, and cache invalidation
MINOR fixes:
- Update MCPServerInspector test assertion for dbSourced change
* fix: restore getServerConfig lookup for config-source servers (NEW-1)
Add configNameToKey map that indexes server name → hash-based cache key
for O(1) lookup by name in getServerConfig. This restores the config
cache layer that was dropped when hash-based keys were introduced.
Without this fix, config-source servers appeared in tool listings
(via getAllServerConfigs) but getServerConfig returned undefined,
breaking all connection and tool call paths.
- Populate configNameToKey in ensureSingleConfigServer
- Clear configNameToKey in invalidateConfigCache and reset
- Clear stale read-through cache entries after lazy init
- Remove dead code in invalidateConfigCache (config.title, key parsing)
- Add getServerConfig tests for config-source server lookup
* fix: eliminate configNameToKey race via caller-provided configServers param
Replace the process-global configNameToKey map (last-writer-wins under
concurrent multi-tenant load) with a configServers parameter on
getServerConfig. Callers pass the pre-resolved config servers map
directly — no shared mutable state, no cross-tenant race.
- Add optional configServers param to getServerConfig; when provided,
returns matching config directly without any global lookup
- Remove configNameToKey map entirely (was the source of the race)
- Extract server names from cache keys via lastIndexOf in
invalidateConfigCache (safe for names containing colons)
- Use mcpConfig[serverName] directly in getMCPTools instead of a
redundant getServerConfig call
- Add cross-tenant isolation test for getServerConfig
* fix: populate read-through cache after config server lazy init
After lazyInitConfigServer succeeds, write the parsed config to
readThroughCache keyed by serverName so that getServerConfig calls
from ConnectionsRepository, UserConnectionManager, and
MCPManager.callTool find the config without needing configServers.
Without this, config-source servers appeared in tool listings but
every connection attempt and tool call returned undefined.
* fix: user-scoped getServerConfig fallback to server-only cache key
When getServerConfig is called with a userId (e.g., from callTool or
UserConnectionManager), the cache key is serverName::userId. Config-source
servers are cached under the server-only key (no userId). Add a fallback
so user-scoped lookups find config-source servers in the read-through cache.
* fix: configCacheRepo fallback, isUserSourced DRY, cross-process race
CRITICAL: Add findInConfigCache fallback in getServerConfig so
config-source servers remain reachable after readThroughCache TTL
expires (5s). Without this, every tool call after 5s returned
undefined for config-source servers.
MAJOR: Extract isUserSourced() helper to mcp/utils.ts and replace
all 5 inline dbSourced ternary expressions (MCPManager x2,
ConnectionsRepository, UserConnectionManager, MCPServerInspector).
MAJOR: Fix cross-process Redis race in lazyInitConfigServer — when
configCacheRepo.add throws (key exists from another process), fall
back to reading the existing entry instead of returning undefined.
MINOR: Parallelize invalidateConfigCache awaits with Promise.all.
Remove redundant .catch(() => {}) inside Promise.allSettled.
Tighten dedup test assertion to toBe(1).
Add TTL-expiry tests for getServerConfig (with and without userId).
* feat: thread configServers through getAppToolFunctions and formatInstructionsForContext
Add optional configServers parameter to getAppToolFunctions,
getInstructions, and formatInstructionsForContext so config-source
server tools and instructions are visible to agent initialization
and context injection paths.
Existing callers (boot-time init, tests) pass no argument and
continue to work unchanged. Agent runtime paths can now thread
resolved config servers from request context.
* fix: stale failure stubs retry after 5 min, upsert for cross-process races
- Add CONFIG_STUB_RETRY_MS (5 min) — stale failure stubs are retried
instead of permanently disabling config-source servers after transient
errors (DNS outage, cold-start race)
- Extract upsertConfigCache() helper that tries add then falls back to
update, preventing cross-process Redis races where a second instance's
successful inspection result was discarded
- Add test for stale-stub retry after CONFIG_STUB_RETRY_MS
* fix: stamp updatedAt on failure stubs, null-guard callTool config, test cleanup
- Add updatedAt: Date.now() to failure stubs in lazyInitConfigServer so
CONFIG_STUB_RETRY_MS (5 min) window works correctly — without it, stubs
were always considered stale (updatedAt ?? 0 → epoch → always expired)
- Add null guard for rawConfig in MCPManager.callTool before passing to
preProcessGraphTokens — prevents unsafe `as` cast on undefined
- Log double-failure in upsertConfigCache instead of silently swallowing
- Replace module-scope Date.now monkey-patch with jest.useFakeTimers /
jest.setSystemTime / jest.useRealTimers in ensureConfigServers tests
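The `updatedAt` stamping fix above boils down to a staleness check like the sketch below. `CONFIG_STUB_RETRY_MS` and the 5-minute window are from the commits; the stub shape and helper name are assumptions:

```javascript
// Sketch of the stub-staleness check fixed above (stub shape is assumed).
const CONFIG_STUB_RETRY_MS = 5 * 60 * 1000;

function isStubStale(stub, now = Date.now()) {
  // Before the fix, failure stubs lacked `updatedAt`, so `?? 0` resolved to
  // the epoch: every stub looked expired and the retry window never applied.
  return now - (stub.updatedAt ?? 0) > CONFIG_STUB_RETRY_MS;
}
```

A freshly stamped failure stub is not retried for 5 minutes; a stub missing the timestamp (the pre-fix state) is always considered stale.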
* fix: server-only readThrough fallback only returns truthy values
Prevents a cached undefined from a prior no-userId lookup from
short-circuiting the DB query on a subsequent userId-scoped lookup.
* fix: remove findInConfigCache to eliminate cross-tenant config leakage
The findInConfigCache prefix scan (serverName:*) could return any
tenant's config after readThrough TTL expires, violating tenant
isolation. Config-source servers are now ONLY resolvable through:
1. The configServers param (callers with tenant context from ALS)
2. The readThrough cache (populated by ensureSingleConfigServer,
5s TTL, repopulated on every HTTP request via resolveAllMcpConfigs)
Connection/tool-call paths without tenant context rely exclusively on
the readThrough cache. If it expires before the next HTTP request
repopulates it, the server is not found — which is correct because
there is no tenant context to determine which config to return.
- Remove findInConfigCache method and its call in getServerConfig
- Update server-only readThrough fallback to only return truthy values
(prevents cached undefined from short-circuiting user-scoped DB lookup)
- Update tests to document tenant isolation behavior after cache expiry
* style: fix import order per AGENTS.md conventions
Sort package imports shortest-to-longest, local imports longest-to-shortest
across MCPServersRegistry, ConnectionsRepository, MCPManager,
UserConnectionManager, and MCPServerInspector.
* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures
Thread pre-resolved serverConfig from tool creation context into
callTool, removing dependency on the readThrough cache for config-source
servers. This fixes two issues:
- Cross-tenant contamination: the readThrough cache key was unscoped
(just serverName), so concurrent multi-tenant requests for same-named
servers would overwrite each other's entries
- TTL expiry: tool calls happening >5s after config resolution would
fail with "Configuration not found" because the readThrough entry
had expired
Changes:
- Add optional serverConfig param to MCPManager.callTool — uses
provided config directly, falling back to getServerConfig lookup
for YAML/user servers
- Thread serverConfig from createMCPTool through createToolInstance
closure to callTool
- Remove readThrough write from ensureSingleConfigServer — config-source
servers are only accessible via configServers param (tenant-scoped)
- Remove server-only readThrough fallback from getServerConfig
- Increase config cache hash from 8 to 16 hex chars (64-bit)
- Add isUserSourced boundary tests for all source/dbId combinations
- Fix double Object.keys call in getMCPTools controller
- Update test assertions for new getServerConfig behavior
* fix: cache base configs for config-server users; narrow upsertConfigCache error handling
- Refactor getAllServerConfigs to separate base config fetch (YAML + DB)
from config-server layering. Base configs are cached via readThroughCacheAll
regardless of whether configServers is provided, eliminating uncached
MongoDB queries per request for config-server users
- Narrow upsertConfigCache catch to duplicate-key errors only;
infrastructure errors (Redis timeouts, network failures) now propagate
instead of being silently swallowed, preventing inspection storms
during outages
* fix: restore correct merge order and document upsert error matching
- Restore YAML → Config → User DB precedence in getAllServerConfigs
(user DB servers have highest precedence, matching the JSDoc contract)
- Add source comment on upsertConfigCache duplicate-key detection
linking to the two cache implementations that define the error message
* feat: complete config-source server support across all execution paths
Wire configServers through the entire agent execution pipeline so
config-source MCP servers are fully functional — not just visible in
listings but executable in agent sessions.
- Thread configServers into handleTools.js agent tool pipeline: resolve
config servers from tenant context before MCP tool iteration, pass to
getServerConfig, createMCPTools, and createMCPTool
- Thread configServers into agent instructions pipeline:
applyContextToAgent → getMCPInstructionsForServers →
formatInstructionsForContext, resolved in client.js before agent
context application
- Add configServers param to createMCPTool and createMCPTools for
reconnect path fallback
- Add source field to redactServerSecrets allowlist for client UI
differentiation of server tiers
- Narrow invalidateConfigCache to only clear readThroughCacheAll (merged
results), preserving YAML individual-server readThrough entries
- Update context.spec.ts assertions for new configServers parameter
* fix: add missing mocks for config-source server dependencies in client.test.js
Mock getMCPServersRegistry, getAppConfig, and getTenantId that were added
to client.js but not reflected in the test file's jest.mock declarations.
* fix: update formatInstructionsForContext assertions for configServers param
The test assertions expected formatInstructionsForContext to be called with
only the server names array, but it now receives configServers as a second
argument after the config-source server feature wiring.
* fix: move configServers resolution before MCP tool loop to avoid TDZ
configServers was declared with `let` after the first tool loop but
referenced inside it via getServerConfig(), causing a ReferenceError
temporal dead zone. Move declaration and resolution before the loop,
using tools.some(mcpToolPattern) to gate the async resolution.
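The reordering described above looks roughly like this simplified sketch; the variable and function names are stand-ins for the real handleTools.js code:

```javascript
// Simplified illustration of the TDZ fix above: `configServers` is declared
// and resolved BEFORE the loop that reads it. With the old `let configServers`
// placed after the loop, reading it inside the loop threw a ReferenceError.
async function buildToolConfigs(tools, resolveConfigServers) {
  const mcpToolPattern = /_mcp_/;
  // Gate the async resolution on whether any MCP tools are present at all
  const configServers = tools.some((t) => mcpToolPattern.test(t))
    ? await resolveConfigServers()
    : {};
  const out = [];
  for (const tool of tools) {
    out.push({ tool, config: configServers[tool] });
  }
  return out;
}
```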
* fix: address review findings — cache bypass, discoverServerTools gap, DRY
- #2: getAllServerConfigs now always uses getBaseServerConfigs (cached via
readThroughCacheAll) instead of bypassing it when configServers is present.
Extracts user-DB entries from cached base by diffing against YAML keys
to maintain YAML → Config → User DB merge order without extra MongoDB calls.
- #3: Add configServers param to ToolDiscoveryOptions and thread it through
discoverServerTools → getServerConfig so config-source servers are
discoverable during OAuth reconnection flows.
- #6: Replace inline import() type annotations in context.ts with proper
import type { ParsedServerConfig } per AGENTS.md conventions.
- #7: Extract resolveConfigServers(req) helper in MCP.js and use it from
handleTools.js and client.js, eliminating the duplicated 6-line config
resolution pattern.
- #10: Restore removed "why" comment explaining getLoaded() vs getAll()
choice in getMCPSetupData — documents non-obvious correctness constraint.
- #11: Fix incomplete JSDoc param type on resolveAllMcpConfigs.
* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case
- Merge duplicate @librechat/data-schemas requires in MCP.js into one
- Move resolveConfigServers after module-level constants
- Fix getAllServerConfigs edge case where user-DB entry overriding a
YAML entry with the same name was excluded from userDbConfigs; now
uses reference equality check to detect DB-overwritten YAML keys
* fix: replace fragile string-match error detection with proper upsert method
Add upsert() to IServerConfigsRepositoryInterface and all implementations
(InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle
error message string match ('already exists in cache') in upsertConfigCache
that was the only thing preventing cross-process init races from silently
discarding inspection results.
Each implementation handles add-or-update atomically:
- InMemory: direct Map.set()
- Redis: direct cache.set()
- RedisAggregateKey: read-modify-write under write lock
- DB: delegates to update() (DB servers use explicit add() with ACL setup)
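A minimal in-memory sketch of the `upsert()` shape described above; the real interface (`IServerConfigsRepositoryInterface`) has more members, and the error message text here is only illustrative:

```javascript
// Minimal sketch of the in-memory repository with an atomic add-or-update.
class InMemoryServerConfigs {
  constructor() {
    this.store = new Map();
  }
  add(name, config) {
    if (this.store.has(name)) {
      throw new Error(`Server '${name}' already exists in cache`);
    }
    this.store.set(name, config);
  }
  update(name, config) {
    this.store.set(name, config);
  }
  // Add-or-update in one step: callers no longer string-match the add() error
  upsert(name, config) {
    this.store.set(name, config);
  }
  get(name) {
    return this.store.get(name);
  }
}
```

With `upsert()` in the interface, the cross-process init race resolves to "last successful inspection wins" instead of silently discarding results when `add()` throws.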
* fix: wire configServers through remaining HTTP endpoints
- getMCPServerById: use resolveAllMcpConfigs instead of bare getServerConfig
- reinitialize route: resolve configServers before getServerConfig
- auth-values route: resolve configServers before getServerConfig
- getOAuthHeaders: accept configServers param, thread from callers
- Update mcp.spec.js tests to mock getAllServerConfigs for GET by name
* fix: thread serverConfig through getConnection for config-source servers
Config-source servers exist only in configCacheRepo, not in YAML cache or
DB. When callTool → getConnection → getUserConnection → getServerConfig
runs without configServers, it returns undefined and throws. Fix by
threading the pre-resolved serverConfig (providedConfig) from callTool
through getConnection → getUserConnection → createUserConnectionInternal,
using it as a fallback before the registry lookup.
* fix: thread configServers through reinit, reconnect, and tool definition paths
Wire configServers through every remaining call chain that creates or
reconnects MCP server connections:
- reinitMCPServer: accepts serverConfig and configServers, uses them for
getServerConfig fallback, getConnection, and discoverServerTools
- reconnectServer: accepts and passes configServers to reinitMCPServer
- createMCPTools/createMCPTool: pass configServers to reconnectServer
- ToolService.loadToolDefinitionsWrapper: resolves configServers from req,
passes to both reinitMCPServer call sites
- reinitialize route: passes serverConfig and configServers to reinitMCPServer
* fix: address review findings — simplify merge, harden error paths, fix log labels
- Simplify getAllServerConfigs merge: replace fragile reference-equality
loop with direct spread { ...yamlConfigs, ...configServers, ...base }
- Guard upsertConfigCache in lazyInitConfigServer catch block so cache
failures don't mask the original inspection error
- Deduplicate getYamlServerNames cold-start with promise dedup pattern
- Remove dead `if (!mcpConfig)` guard in getMCPSetupData
- Fix hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error
messages — now uses this.namespace for correct Config/App labeling
- Remove misleading OAuth callback comment about readThrough cache
- Move resolveConfigServers after module-level constants in MCP.js
* fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label
- Clear yamlServerNamesPromise on rejection so transient cache errors
don't permanently prevent ensureConfigServers from working
- Skip reinspectServer for config-source servers (source: 'config') in
reinitMCPServer — they lack a CACHE/DB storage location; retry is
handled by CONFIG_STUB_RETRY_MS in ensureConfigServers
- Use source field instead of dbId for storageLocation derivation
- Fix remaining hardcoded "App" in reset() leaderCheck message
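The promise-dedup pattern with clear-on-rejection from the two commits above can be sketched as below; `fetchNames` stands in for the real `cacheConfigsRepo.getAll()` call:

```javascript
// Sketch of memoizing an in-flight promise and clearing it on rejection.
function makeYamlServerNamesLoader(fetchNames) {
  let yamlServerNamesPromise = null;
  return function getYamlServerNames() {
    if (!yamlServerNamesPromise) {
      yamlServerNamesPromise = fetchNames().catch((err) => {
        // Clear the memo so a transient cache error does not permanently
        // poison every later call
        yamlServerNamesPromise = null;
        throw err;
      });
    }
    return yamlServerNamesPromise;
  };
}
```

Concurrent cold-start callers share one fetch; a rejected fetch is forgotten so the next caller retries.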
* fix: persist oauthHeaders in flow state for config-source OAuth servers
The OAuth callback route has no JWT auth context and cannot resolve
config-source server configs. Previously, getOAuthHeaders would silently
return {} for config-source servers, dropping custom token exchange headers.
Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow
initiation (which has auth context), and the callback reads them from
the stored flow state with a fallback to the registry lookup for
YAML/user-DB servers.
* fix: update tests for getMCPSetupData null guard removal and ToolService mock
- MCP.spec.js: update test to expect graceful handling of null mcpConfig
instead of a throw (getAllServerConfigs always returns an object)
- MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of
null from test mocks
- ToolService.spec.js: add missing mock for ~/server/services/MCP
(resolveConfigServers)
* fix: address review findings — DRY, naming, logging, dead code, defensive guards
- #1: Simplify getAllServerConfigs to single getBaseServerConfigs call,
eliminating redundant double-fetch of cacheConfigsRepo.getAll()
- #2: Add warning log when oauthHeaders absent from OAuth callback flow state
- #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller
imports shared helper instead of reimplementing
- #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider
in createToolInstance — these are actively used, not unused
- #5: Log rejected results from ensureConfigServers Promise.allSettled
so cache errors are visible instead of silently dropped
- #6: Remove dead 'MCP config not found' error handlers from routes
- #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache
- #8: Remove logger.error from withTimeout to prevent double-logging timeouts
- #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message
- #12: Use spread instead of mutation in addServer for immutability consistency
- Add upsert mock to ensureConfigServers.test.ts DB mock
- Update route tests for resolveAllMcpConfigs import change
* fix: restore correct merge priority, use immutable spread, fix test mock
- getAllServerConfigs: { ...configServers, ...base } so userDB wins over
configServers, matching documented "User DB (highest)" priority
- lazyInitConfigServer: use immutable spread instead of direct mutation
for parsedConfig.source, consistent with addServer fix
- Fix test to mock getAllServerConfigs as {} instead of null, remove
unnecessary || {} defensive guard in getMCPSetupData
* fix: error handling, stable hashing, flatten nesting, remove dead param
- Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with
graceful {} fallback so transient DB/cache errors don't crash tool pipeline
- Sort keys in configCacheKey JSON.stringify for deterministic hashing
regardless of object property insertion order
- Flatten clearMcpConfigCache from 3 nested try-catch to early returns;
document that user connections are cleaned up lazily (accepted tradeoff)
- Remove dead configServers param from getAppToolFunctions (never passed)
- Add security rationale comment for source field in redactServerSecrets
* fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision
The array replacer in JSON.stringify acts as a property allowlist at
every nesting depth, silently dropping nested keys like headers['X-API-Key'],
oauth.client_secret, etc. Two configs with different nested values but
identical top-level structure produced the same hash, causing cross-tenant
cache hits and potential credential contamination.
Switch to a function replacer that recursively sorts keys at all depths
without dropping any properties.
Also document the known gap in getOAuthServers: config-source OAuth
servers are not covered by auto-reconnection or uninstall cleanup
because callers lack request context.
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency
The function only depends on MCPServersRegistry and MCPManager, both of
which live in packages/api. Import it directly from @librechat/api in
the CJS layer instead of using dynamic require('~/config').
* chore: imports/fields ordering
* fix: address review findings — error handling, targeted lookup, test gaps
- Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so
getAppConfig/getAllServerConfigs failures propagate instead of masking
infrastructure errors as empty server lists.
- Use targeted getServerConfig in getMCPServerById instead of fetching
all server configs for a single-server lookup.
- Forward configServers to inner createMCPTool calls so reconnect path
works for config-source servers.
- Update getAllServerConfigs JSDoc to document disjoint-key design.
- Add OAuth callback oauthHeaders fallback tests (flow state present
vs registry fallback).
- Add resolveConfigServers/resolveAllMcpConfigs unit tests covering
happy path and error propagation.
* fix: add getOAuthReconnectionManager mock to OAuth callback tests
* chore: imports ordering
f277b32030
📸 fix: Snapshot Options to Prevent Mid-Await Client Disposal Crash (#12398)
* 🐛 fix: Prevent crash when token balance exhaustion races with client disposal
Add null guard in saveMessageToDatabase to handle the case where disposeClient
nullifies this.options while a userMessagePromise is still pending from a prior
async save operation.
* 🐛 fix: Snapshot this.options to prevent mid-await disposal crash
The original guard at function entry was insufficient — this.options is always
valid at entry. The crash occurs after the first await (db.saveMessage) when
disposeClient nullifies client.options while the promise is suspended.
Fix: capture this.options into a local const before any await. The local
reference is immune to client.options = null set by disposeClient.
Also add .catch on userMessagePromise in sendMessage to prevent unhandled
rejections when checkBalance throws before the promise is awaited, and add
two regression tests.
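The snapshot-before-await fix described in this commit can be sketched as below; the client and save function are simplified stand-ins for BaseClient and db.saveMessage:

```javascript
// Minimal sketch: capture the reference before the first await, because
// disposeClient may set client.options = null while the function is suspended.
async function saveMessageToDatabase(client, saveMessage) {
  const options = client.options; // snapshot taken before any await
  await saveMessage();
  // The local reference is immune to the disposal that may have happened above
  return options.user;
}
```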
b5c097e5c7
⚗️ feat: Agent Context Compaction/Summarization (#12287)
* chore: imports/types
Add summarization config and package-level summarize handler contracts
Register summarize handlers across server controller paths
Port cursor dual-read/dual-write summary support and UI status handling
Selectively merge cursor branch files for BaseClient summary content
block detection (last-summary-wins), dual-write persistence, summary
block unit tests, and on_summarize_status SSE event handling with
started/completed/failed branches.
Co-authored-by: Cursor <cursoragent@cursor.com>
refactor: type safety
feat: add localization for summarization status messages
refactor: optimize summary block detection in BaseClient
Updated the logic for identifying existing summary content blocks to use a reverse loop for improved efficiency. Added a new test case to ensure the last summary content block is updated correctly when multiple summary blocks exist.
chore: add runName to chainOptions in AgentClient
refactor: streamline summarization configuration and handler integration
Removed the deprecated summarizeNotConfigured function and replaced it with a more flexible createSummarizeFn. Updated the summarization handler setup across various controllers to utilize the new function, enhancing error handling and configuration resolution. Improved overall code clarity and maintainability by consolidating summarization logic.
feat(summarization): add staged chunk-and-merge fallback
feat(usage): track summarization usage separately from messages
feat(summarization): resolve prompt from config in runtime
fix(endpoints): use @librechat/api provider config loader
refactor(agents): import getProviderConfig from @librechat/api
chore: code order
feat(app-config): auto-enable summarization when configured
feat: summarization config
refactor(summarization): streamline persist summary handling and enhance configuration validation
Removed the deprecated createDeferredPersistSummary function and integrated a new createPersistSummary function for MongoDB persistence. Updated summarization handlers across various controllers to utilize the new persistence method. Enhanced validation for summarization configuration to ensure provider, model, and prompt are properly set, improving error handling and overall robustness.
refactor(summarization): update event handling and remove legacy summarize handlers
Replaced the deprecated summarization handlers with new event-driven handlers for summarization start and completion across multiple controllers. This change enhances the clarity of the summarization process and improves the integration of summarization events in the application. Additionally, removed unused summarization functions and streamlined the configuration loading process.
refactor(summarization): standardize event names in handlers
Updated event names in the summarization handlers to use constants from GraphEvents for consistency and clarity. This change improves maintainability and reduces the risk of errors related to string literals in event handling.
feat(summarization): enhance usage tracking for summarization events
Added logic to track summarization usage in multiple controllers by checking the current node type. If the node indicates a summarization task, the usage type is set accordingly. This change improves the granularity of usage data collected during summarization processes.
feat(summarization): integrate SummarizationConfig into AppSummarizationConfig type
Enhanced the AppSummarizationConfig type by extending it with the SummarizationConfig type from librechat-data-provider. This change improves type safety and consistency in the summarization configuration structure.
test: add end-to-end tests for summarization functionality
Introduced a comprehensive suite of end-to-end tests for the summarization feature, covering the full LibreChat pipeline from message creation to summarization. This includes a new setup file for environment configuration and a Jest configuration specifically for E2E tests. The tests utilize real API keys and ensure proper integration with the summarization process, enhancing overall test coverage and reliability.
refactor(summarization): include initial summary in formatAgentMessages output
Updated the formatAgentMessages function to return an initial summary alongside messages and index token count map. This change is reflected in multiple controllers and the corresponding tests, enhancing the summarization process by providing additional context for each agent's response.
refactor: move hydrateMissingIndexTokenCounts to tokenMap utility
Extracted the hydrateMissingIndexTokenCounts function from the AgentClient and related tests into a new tokenMap utility file. This change improves code organization and reusability, allowing for better management of token counting logic across the application.
refactor(summarization): standardize step event handling and improve summary rendering
Refactored the step event handling in the useStepHandler and related components to utilize constants for event names, enhancing consistency and maintainability. Additionally, improved the rendering logic in the Summary component to conditionally display the summary text based on its availability, providing a better user experience during the summarization process.
feat(summarization): introduce baseContextTokens and reserveTokensRatio for improved context management
Added baseContextTokens to the InitializedAgent type to calculate the context budget based on agentMaxContextNum and maxOutputTokensNum. Implemented reserveTokensRatio in the createRun function to allow configurable context token management. Updated related tests to validate these changes and ensure proper functionality.
feat(summarization): add minReserveTokens, context pruning, and overflow recovery configurations
Introduced new configuration options for summarization, including minReserveTokens, context pruning settings, and overflow recovery parameters. Updated the createRun function to accommodate these new options and added a comprehensive test suite to validate their functionality and integration within the summarization process.
feat(summarization): add updatePrompt and reserveTokensRatio to summarization configuration
Introduced an updatePrompt field for updating existing summaries with new messages, enhancing the flexibility of the summarization process. Additionally, added reserveTokensRatio to the configuration schema, allowing for improved management of token allocation during summarization. Updated related tests to validate these new features.
feat(logging): add on_agent_log event handler for structured logging
Implemented an on_agent_log event handler in both the agents' callbacks and responses to facilitate structured logging of agent activities. This enhancement allows for better tracking and debugging of agent interactions by logging messages with associated metadata. Updated the summarization process to ensure proper handling of log events.
fix: remove duplicate IBalanceUpdate interface declaration
perf(usage): single-pass partition of collectedUsage
Replace two Array.filter() passes with a single for-of loop that
partitions message vs. summarization usages in one iteration.
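The single-pass partition above can be sketched as follows; the usage entry shape (a `type` field marking summarization usage) is an assumption based on the surrounding commits:

```javascript
// Sketch: one for-of loop replacing two Array.filter() passes.
function partitionCollectedUsage(collectedUsage) {
  const messageUsages = [];
  const summarizationUsages = [];
  for (const usage of collectedUsage) {
    if (usage.type === 'summarization') {
      summarizationUsages.push(usage);
    } else {
      messageUsages.push(usage);
    }
  }
  return { messageUsages, summarizationUsages };
}
```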
fix(BaseClient): shallow-copy message content before mutating and preserve string content
Avoid mutating the original message.content array in-place when
appending a summary block. Also convert string content to a text
content part instead of silently discarding it.
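The copy-before-mutate fix above can be sketched as below; the summary block and text-part shapes are illustrative:

```javascript
// Sketch: shallow-copy array content so the original message is untouched,
// and convert legacy string content into a text part instead of dropping it.
function appendSummaryBlock(content, summaryBlock) {
  const parts =
    typeof content === 'string' ? [{ type: 'text', text: content }] : [...content];
  parts.push(summaryBlock);
  return parts;
}
```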
fix(ui): fix Part.tsx indentation and useStepHandler summarize-complete handling
- Fix SUMMARY else-if branch indentation in Part.tsx to match chain level
- Guard ON_SUMMARIZE_COMPLETE with didFinalize flag to avoid unnecessary
re-renders when no summarizing parts exist
- Protect against undefined completeData.summary instead of unsafe spread
fix(agents): use strict enabled check for summarization handlers
Change summarizationConfig?.enabled !== false to === true so handlers
are not registered when summarizationConfig is undefined.
chore: fix initializeClient JSDoc and move DEFAULT_RESERVE_RATIO to module scope
refactor(Summary): align collapse/expand behavior with Reasoning component
- Single render path instead of separate streaming vs completed branches
- Use useMessageContext for isSubmitting/isLatestMessage awareness so
the "Summarizing..." label only shows during active streaming
- Default to collapsed (matching Reasoning), user toggles to expand
- Add proper aria attributes (aria-hidden, role, aria-controls, contentId)
- Hide copy button while actively streaming
feat(summarization): default to self-summarize using agent's own provider/model
When no summarization config is provided (neither in librechat.yaml nor
on the agent), automatically enable summarization using the agent's own
provider and model. The agents package already provides default prompts,
so no prompt configuration is needed.
Also removes the dead resolveSummarizationLLMConfig in summarize.ts
(and its spec) — run.ts buildAgentContext is the single source of truth
for summarization config resolution. Removes the duplicate
RuntimeSummarizationConfig local type in favor of the canonical
SummarizationConfig from data-provider.
chore: schema and type cleanup for summarization
- Add trigger field to summarizationAgentOverrideSchema so per-agent
trigger overrides in librechat.yaml are not silently stripped by Zod
- Remove unused SummarizationStatus type from runs.ts
- Make AppSummarizationConfig.enabled non-optional to reflect the
invariant that loadSummarizationConfig always sets it
refactor(responses): extract duplicated on_agent_log handler
refactor(run): use agents package types for summarization config
Import SummarizationConfig, ContextPruningConfig, and
OverflowRecoveryConfig from @librechat/agents and use them to
type-check the translation layer in buildAgentContext. This ensures
the config object passed to the agent graph matches what it expects.
- Use `satisfies AgentSummarizationConfig` on the config object
- Cast contextPruningConfig and overflowRecoveryConfig to agents types
- Properly narrow trigger fields from DeepPartial to required shape
feat(config): add maxToolResultChars to base endpoint schema
Add maxToolResultChars to baseEndpointSchema so it can be configured
on any endpoint in librechat.yaml. Resolved during agent initialization
using getProviderConfig's endpoint resolution: custom endpoint config
takes precedence, then the provider-specific endpoint config, then the
shared `all` config.
Passed through to the agents package ToolNode, which uses it to cap
tool result length before it enters the context window. When not
configured, the agents package computes a sensible default from
maxContextTokens.
fix(summarization): forward agent model_parameters in self-summarize default
When no explicit summarization config exists, the self-summarize
default now forwards the agent's model_parameters as the
summarization parameters. This ensures provider-specific settings
(e.g. Bedrock region, credentials, endpoint host) are available
when the agents package constructs the summarization LLM.
fix(agents): register summarization handlers by default
Change the enabled gate from === true to !== false so handlers
register when no explicit summarization config exists. This aligns
with the self-summarize default where summarization is always on
unless explicitly disabled via enabled: false.
refactor(summarization): let agents package inherit clientOptions for self-summarize
Remove model_parameters forwarding from the self-summarize default.
The agents package now reuses the agent's own clientOptions when the
summarization provider matches the agent's provider, inheriting all
provider-specific settings (region, credentials, proxy, etc.)
automatically.
refactor(summarization): use MessageContentComplex[] for summary content
Unify summary content to always use MessageContentComplex[] arrays,
matching the pattern used by on_message_delta. No more string | array
unions — content is always an array of typed blocks ({ type: 'text',
text: '...' } for text, { type: 'reasoning_content', ... } for
reasoning).
Agents package:
- SummaryContentBlock.content: MessageContentComplex[] (was string)
- tokenCount now optional (not sent on deltas)
- Removed reasoning field — reasoning is now a content block type
- streamAndCollect normalizes all chunks to content block arrays
- Delta events pass content blocks directly
LibreChat:
- SummaryContentPart.content: Agents.MessageContentComplex[]
- Updated Part.tsx, Summary.tsx, useStepHandler.ts, BaseClient.js
- Summary.tsx derives display text from content blocks via useMemo
- Aggregator uses simple array spread
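The normalization described above can be illustrated with a small sketch (the function name and block shapes are assumptions, not the actual streamAndCollect internals):

```javascript
// Illustrative normalizer: every chunk becomes an array of typed content
// blocks, so downstream code never branches on string | array unions.
function normalizeChunk(chunk) {
  if (typeof chunk === 'string') {
    return [{ type: 'text', text: chunk }];
  }
  return Array.isArray(chunk) ? chunk : [chunk];
}
```

With this in place, an aggregator can always use a plain array spread, as the commit notes.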
refactor(summarization): enhance summary handling and text extraction
- Updated BaseClient.js to improve summary text extraction, accommodating both legacy and new content formats.
- Modified summarization logic to ensure consistent handling of summary content across different message formats.
- Adjusted test cases in summarization.e2e.spec.js to utilize the new summary text extraction method.
- Refined SSE useStepHandler to initialize summary content as an array.
- Updated configuration schema by removing unused minReserveTokens field.
- Cleaned up SummaryContentPart type by removing rangeHash property.
These changes streamline the summarization process and ensure compatibility with various content structures.
refactor(summarization): streamline usage tracking and logging
- Removed direct checks for summarization nodes in ModelEndHandler and replaced them with a dedicated markSummarizationUsage function for better readability and maintainability.
- Updated OpenAIChatCompletionController and responses handlers to utilize the new markSummarizationUsage function for setting usage types.
- Enhanced logging functionality by ensuring the logger correctly handles different log levels.
- Introduced a new useCopyToClipboard hook in the Summary component to encapsulate clipboard copy logic, improving code reusability and clarity.
These changes improve the overall structure and efficiency of the summarization handling and logging processes.
refactor(summarization): update summary content block documentation
- Removed outdated comment regarding the last summary content block in BaseClient.js.
- Added a new comment to clarify the purpose of the findSummaryContentBlock method, ensuring consistency in documentation.
These changes enhance code clarity and maintainability by providing accurate descriptions of the summarization logic.
refactor(summarization): update summary content structure in tests
- Modified the summarization content structure in e2e tests to use an array format for text, aligning with recent changes in summary handling.
- Updated test descriptions to clarify the behavior of context token calculations, ensuring consistency and clarity in the tests.
These changes enhance the accuracy and maintainability of the summarization tests by reflecting the updated content structure.
refactor(summarization): remove legacy E2E test setup and configuration
- Deleted the e2e-setup.js and jest.e2e.config.js files, which contained legacy configurations for E2E tests using real API keys.
- Introduced a new summarization.e2e.ts file that implements comprehensive E2E backend integration tests for the summarization process, utilizing real AI providers and tracking summaries throughout the run.
These changes streamline the testing framework by consolidating E2E tests into a single, more robust file while removing outdated configurations.
refactor(summarization): enhance E2E tests and error handling
- Added a cleanup step to force exit after all tests to manage Redis connections.
- Updated the summarization model to 'claude-haiku-4-5-20251001' for consistency across tests.
- Improved error handling in the processStream function to capture and return processing errors.
- Enhanced logging for cross-run tests and tight context scenarios to provide better insights into test execution.
These changes improve the reliability and clarity of the E2E tests for the summarization process.
refactor(summarization): enhance test coverage for maxContextTokens behavior
- Updated run-summarization.test.ts to include a new test case ensuring that maxContextTokens does not exceed user-defined limits, even when calculated ratios suggest otherwise.
- Modified summarization.e2e.ts to replace legacy UsageMetadata type with a more appropriate type for collectedUsage, improving type safety and clarity in the test setup.
These changes improve the robustness of the summarization tests by validating context token constraints and refining type definitions.
feat(summarization): add comprehensive E2E tests for summarization process
- Introduced a new summarization.e2e.test.ts file that implements extensive end-to-end integration tests for the summarization pipeline, covering the full flow from LibreChat to agents.
- The tests utilize real AI providers and include functionality to track summaries during and between runs.
- Added necessary cleanup steps to manage Redis connections post-tests and ensure proper exit.
These changes enhance the testing framework by providing robust coverage for the summarization process, ensuring reliability and performance under real-world conditions.
fix(service): import logger from winston configuration
- Removed the import statement for logger from '@librechat/data-schemas' and replaced it with an import from '~/config/winston'.
- This change ensures that the logger is correctly sourced from the updated configuration, improving consistency in logging practices across the application.
refactor(summary): simplify Summary component and enhance token display
- Removed the unused `meta` prop from the `SummaryButton` component to streamline its interface.
- Updated the token display logic to use a localized string for better internationalization support.
- Adjusted the rendering of the `meta` information to improve its visibility within the `Summary` component.
These changes enhance the clarity and usability of the Summary component while ensuring better localization practices.
feat(summarization): add maxInputTokens configuration for summarization
- Introduced a new `maxInputTokens` property in the summarization configuration schema to control the amount of conversation context sent to the summarizer, with a default value of 10000.
- Updated the `createRun` function to utilize the new `maxInputTokens` setting, allowing for more flexible summarization based on agent context.
These changes enhance the summarization capabilities by providing better control over input token limits, improving the overall summarization process.
refactor(summarization): simplify maxInputTokens logic in createRun function
- Updated the logic for the `maxInputTokens` property in the `createRun` function to directly use the agent's base context tokens when the resolved summarization configuration does not specify a value.
- This change streamlines the configuration process and enhances clarity in how input token limits are determined for summarization.
These modifications improve the maintainability of the summarization configuration by reducing complexity in the token calculation logic.
feat(summary): enhance Summary component to display meta information
- Updated the SummaryContent component to accept an optional `meta` prop, allowing for additional contextual information to be displayed above the main content.
- Adjusted the rendering logic in the Summary component to utilize the new `meta` prop, improving the visibility of supplementary details.
These changes enhance the user experience by providing more context within the Summary component, making it clearer and more informative.
refactor(summarization): standardize reserveRatio configuration in summarization logic
- Replaced instances of `reserveTokensRatio` with `reserveRatio` in the `createRun` function and related tests to unify the terminology across the codebase.
- Updated the summarization configuration schema to reflect this change, ensuring consistency in how the reserve ratio is defined and utilized.
- Removed the per-agent override logic for summarization configuration, simplifying the overall structure and enhancing clarity.
These modifications improve the maintainability and readability of the summarization logic by standardizing the configuration parameters.
* fix: circular dependency of `~/models`
* chore: update logging scope in agent log handlers
Changed log scope from `[agentus:${data.scope}]` to `[agents:${data.scope}]` in both the callbacks and responses controllers to ensure consistent logging format across the application.
* feat: calibration ratio
* refactor(tests): update summarizationConfig tests to reflect changes in enabled property
Modified tests to check for the new `summarizationEnabled` property instead of the deprecated `enabled` field in the summarization configuration. This change ensures that the tests accurately validate the current configuration structure and behavior of the agents.
* feat(tests): add markSummarizationUsage mock for improved test coverage
Introduced a mock for the markSummarizationUsage function in the responses unit tests to enhance the testing of summarization usage tracking. This addition supports better validation of summarization-related functionalities and ensures comprehensive test coverage for the agents' response handling.
* refactor(tests): simplify event handler setup in createResponse tests
Removed redundant mock implementations for event handlers in the createResponse unit tests, streamlining the setup process. This change enhances test clarity and maintainability while ensuring that the tests continue to validate the correct behavior of usage tracking during on_chat_model_end events.
* refactor(agents): move calibration ratio capture to finally block
Reorganized the logic for capturing the calibration ratio in the AgentClient class to ensure it is executed in the finally block. This change guarantees that the ratio is captured even if the run is aborted, enhancing the reliability of the response message persistence. Removed redundant code and improved clarity in the handling of context metadata.
* refactor(agents): streamline bulk write logic in recordCollectedUsage function
Removed redundant bulk write operations and consolidated document handling in the recordCollectedUsage function. The logic now combines all documents into a single bulk write operation, improving efficiency and reducing error handling complexity. Updated logging to provide consistent error messages for bulk write failures.
* refactor(agents): enhance summarization configuration resolution in createRun function
Streamlined the summarization configuration logic by introducing a base configuration and allowing for overrides from agent-specific settings. This change improves clarity and maintainability, ensuring that the summarization configuration is consistently applied while retaining flexibility for customization. Updated the handling of summarization parameters to ensure proper integration with the agent's model and provider settings.
* refactor(agents): remove unused tokenCountMap and streamline calibration ratio handling
Eliminated the unused tokenCountMap variable from the AgentClient class to enhance code clarity. Additionally, streamlined the logic for capturing the calibration ratio by using optional chaining and a fallback value, ensuring that context metadata is consistently defined. This change improves maintainability and reduces potential confusion in the codebase.
* refactor(agents): extract agent log handler for improved clarity and reusability
Refactored the agent log handling logic by extracting it into a dedicated function, `agentLogHandler`, enhancing code clarity and reusability across different modules. Updated the event handlers in both the OpenAI and responses controllers to utilize the new handler, ensuring consistent logging behavior throughout the application.
* test: add summarization event tests for useStepHandler
Implemented a series of tests for the summarization events in the useStepHandler hook. The tests cover scenarios for ON_SUMMARIZE_START, ON_SUMMARIZE_DELTA, and ON_SUMMARIZE_COMPLETE events, ensuring proper handling of summarization logic, including message accumulation and finalization. This addition enhances test coverage and validates the correct behavior of the summarization process within the application.
* refactor(config): update summarizationTriggerSchema to use enum for type validation
Changed the type of the `type` field in the summarizationTriggerSchema from a string to an enum with a single value 'token_count'. This modification enhances type safety and ensures that only valid types are accepted in the configuration, improving overall clarity and maintainability of the schema.
* test(usage): add bulk write tests for message and summarization usage
Implemented tests for the bulk write functionality in the recordCollectedUsage function, covering scenarios for combined message and summarization usage, summarization-only usage, and message-only usage. These tests ensure correct document handling and token rollup calculations, enhancing test coverage and validating the behavior of the usage tracking logic.
* refactor(Chat): enhance clipboard copy functionality and type definitions in Summary component
Updated the Summary component to improve the clipboard copy functionality by handling clipboard permission errors. Refactored type definitions for SummaryProps to use a more specific type, enhancing type safety. Adjusted the SummaryButton and FloatingSummaryBar components to accept isCopied and onCopy props, promoting better separation of concerns and reusability.
* chore(translations): remove unused "Expand Summary" key from English translations
Deleted the "Expand Summary" key from the English translation file to streamline the localization resources and improve clarity in the user interface. This change helps maintain an organized and efficient translation structure.
* refactor: adjust token counting for Claude model to account for API discrepancies
Implemented a correction factor for token counting when using the Claude model, addressing discrepancies between Anthropic's API and local tokenizer results. This change ensures accurate token counts by applying a scaling factor, improving the reliability of token-related functionalities.
* refactor(agents): implement token count adjustment for Claude model messages
Added a method to adjust token counts for messages processed by the Claude model, applying a correction factor to align with API expectations. This enhancement improves the accuracy of token counting, ensuring reliable functionality when interacting with the Claude model.
* refactor(agents): token counting for media content in messages
Introduced a new method to estimate token costs for image and document blocks in messages, improving the accuracy of token counting. This enhancement ensures that media content is properly accounted for, particularly for the Claude model, by integrating additional token estimation logic for various content types. Updated the token counting function to utilize this new method, enhancing overall reliability and functionality.
* chore: fix missing import
* fix(agents): clamp baseContextTokens and document reserve ratio change
Prevent negative baseContextTokens when maxOutputTokens exceeds the
context window (misconfigured models). Document the 10%→5% default
reserve ratio reduction introduced alongside summarization.
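The clamp itself is a one-liner; this sketch (names assumed) shows the failure mode it prevents:

```javascript
// Sketch: a misconfigured model whose maxOutputTokens exceeds its context
// window must not yield a negative input-token budget.
function computeBaseContextTokens(maxContextTokens, maxOutputTokens) {
  return Math.max(0, maxContextTokens - maxOutputTokens);
}
```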
* fix(agents): include media tokens in hydrated token counts
Add estimateMediaTokensForMessage to createTokenCounter so the hydration
path (used by hydrateMissingIndexTokenCounts) matches the precomputed
path in AgentClient.getTokenCountForMessage. Without this, messages
containing images or documents were systematically undercounted during
hydration, risking context window overflow.
Add 34 unit tests covering all block-type branches of
estimateMediaTokensForMessage.
* fix(agents): include summarization output tokens in usage return value
The returned output_tokens from recordCollectedUsage now reflects all
billed LLM calls (message + summarization). Previously, summarization
completions were billed but excluded from the returned metadata, causing
a discrepancy between what users were charged and what the response
message reported.
* fix(tests): replace process.exit with proper Redis cleanup in e2e test
The summarization E2E test used process.exit(0) to work around a Redis
connection opened at import time, which killed the Jest runner and
bypassed teardown. Use ioredisClient.quit() and keyvRedisClient.disconnect()
for graceful cleanup instead.
* fix(tests): update getConvo imports in OpenAI and response tests
Refactor test files to import getConvo from the main models module instead of the Conversation submodule. This change ensures consistency across tests and simplifies the import structure, enhancing maintainability.
* fix(clients): improve summary text validation in BaseClient
Refactor the summary extraction logic to ensure that only non-empty summary texts are considered valid. This change enhances the robustness of the message processing by utilizing a dedicated method for summary text retrieval, improving overall reliability.
* fix(config): replace z.any() with explicit union in summarization schema
Model parameters (temperature, top_p, etc.) are constrained to
primitive types rather than the policy-violating z.any().
* refactor(agents): deduplicate CLAUDE_TOKEN_CORRECTION constant
Export from the TS source in packages/api and import in the JS client,
eliminating the static class property that could drift out of sync.
* refactor(agents): eliminate duplicate selfProvider in buildAgentContext
selfProvider and provider were derived from the same expression with
different type casts. Consolidated to a single provider variable.
* refactor(agents): extract shared SSE handlers and restrict log levels
- buildSummarizationHandlers() factory replaces triplicated handler
blocks across responses.js and openai.js
- agentLogHandlerObj exported from callbacks.js for consistent reuse
- agentLogHandler restricted to an allowlist of safe log levels
(debug, info, warn, error) instead of accepting arbitrary strings
* fix(SSE): batch summarize deltas, add exhaustiveness check, conditional error announcement
- ON_SUMMARIZE_DELTA coalesces rapid-fire renders via requestAnimationFrame
instead of calling setMessages per chunk
- Exhaustive never-check on TStepEvent catches unhandled variants at
compile time when new StepEvents are added
- ON_SUMMARIZE_COMPLETE error announcement only fires when a summary
part was actually present and removed
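The delta-coalescing pattern can be sketched generically — here with the scheduler injected so the logic is testable outside a browser (in the app it is `requestAnimationFrame`); names are illustrative:

```javascript
// Sketch: buffer rapid-fire deltas and flush them once per scheduled frame,
// so state updates happen once per frame instead of once per chunk.
function createDeltaBatcher(flush, schedule = requestAnimationFrame) {
  let pending = [];
  let scheduled = false;
  return (delta) => {
    pending.push(delta);
    if (!scheduled) {
      scheduled = true;
      schedule(() => {
        scheduled = false;
        const batch = pending;
        pending = [];
        flush(batch); // one flush per frame, however many deltas arrived
      });
    }
  };
}
```

Any pending frame must also be cancelled on unmount (as a later commit in this PR does), otherwise a stale flush can write into a dead component context.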
* feat(agents): persist instruction overhead in contextMeta and seed across runs
Extend contextMeta with instructionOverhead and toolCount so the
provider-observed instruction overhead is persisted on the response message
and seeded into the pruner on subsequent runs. This enables the pruner to
use a calibrated budget from the first call instead of waiting for a
provider observation, preventing the ratio collapse caused by local
tokenizer overestimating tool schema tokens.
The seeded overhead is only used when encoding and tool count match
between runs, ensuring stale values from different configurations
are discarded.
* test(agents): enhance OpenAI test mocks for summarization handlers
Updated the OpenAI test suite to include additional mock implementations for summarization handlers, including buildSummarizationHandlers, markSummarizationUsage, and agentLogHandlerObj. This improves test coverage and ensures consistent behavior during testing.
* fix(agents): address review findings for summarization v2
Cancel rAF on unmount to prevent stale Recoil writes from dead
component context. Clear orphaned summarizing:true parts when
ON_SUMMARIZE_COMPLETE arrives without a summary payload. Add null
guard and safe spread to agentLogHandler. Handle Anthropic-format
base64 image/* documents in estimateMediaTokensForMessage. Use
role="region" for expandable summary content. Add .describe() to
contextMeta Zod fields. Extract duplicate usage loop into helper.
* refactor: simplify contextMeta to calibrationRatio + encoding only
Remove instructionOverhead and toolCount from cross-run persistence —
instruction tokens change too frequently between runs (prompt edits,
tool changes) for a persisted seed to be reliable. The intra-run
calibration in the pruner still self-corrects via provider observations.
contextMeta now stores only the tokenizer-bias ratio and encoding,
which are stable across instruction changes.
* test(SSE): enhance useStepHandler tests for ON_SUMMARIZE_COMPLETE behavior
Updated the test for ON_SUMMARIZE_COMPLETE to clarify that it finalizes the existing part with summarizing set to false when the summary is undefined. Added assertions to verify the correct behavior of message updates and the state of summary parts.
* refactor(BaseClient): remove handleContextStrategy and truncateToolCallOutputs functions
Eliminated the handleContextStrategy method from BaseClient to streamline message handling. Also removed the truncateToolCallOutputs function from the prompts module, simplifying the codebase and improving maintainability.
* refactor: add AGENT_DEBUG_LOGGING option and refactor token count handling in BaseClient
Introduced AGENT_DEBUG_LOGGING to .env.example for enhanced debugging capabilities. Refactored token count handling in BaseClient by removing the handleTokenCountMap method and simplifying token count updates. Updated AgentClient to log detailed token count recalculations and adjustments, improving traceability during message processing.
* chore: update dependencies in package-lock.json and package.json files
Bumped versions of several dependencies, including @librechat/agents to ^3.1.62 and various AWS SDK packages to their latest versions. This ensures compatibility and incorporates the latest features and fixes.
* chore: imports order
* refactor: extract summarization config resolution from buildAgentContext
* refactor: rename and simplify summarization configuration shaping function
* refactor: replace AgentClient token counting methods with single-pass pure utility
Extract getTokenCount() and getTokenCountForMessage() from AgentClient
into countFormattedMessageTokens(), a pure function in packages/api that
handles text, tool_call, image, and document content types in one loop.
- Decompose estimateMediaTokensForMessage into block-level helpers
(estimateImageDataTokens, estimateImageBlockTokens, estimateDocumentBlockTokens)
shared by both estimateMediaTokensForMessage and the new single-pass function
- Remove redundant per-call getEncoding() resolution (closure captures once)
- Remove deprecated gpt-3.5-turbo-0301 model branching
- Drop this.getTokenCount guard from BaseClient.sendMessage
* refactor: streamline token counting in createTokenCounter function
Simplified the createTokenCounter function by removing the media token estimation and directly calculating the token count. This change enhances clarity and performance by consolidating the token counting logic into a single pass, while maintaining compatibility with Claude's token correction.
* refactor: simplify summarization configuration types
Removed the AppSummarizationConfig type and directly used SummarizationConfig in the AppConfig interface. This change streamlines the type definitions and enhances consistency across the codebase.
* chore: import order
* fix: summarization event handling in useStepHandler
- Cancel pending summarizeDeltaRaf in clearStepMaps to prevent stale
frames firing after map reset or component unmount
- Move announcePolite('summarize_completed') inside the didFinalize
guard so screen readers only announce when finalization actually occurs
- Remove dead cleanup closure returned from stepHandler useCallback body
that was never invoked by any caller
* fix: estimate tokens for non-PDF/non-image base64 document blocks
Previously estimateDocumentBlockTokens returned 0 for unrecognized MIME
types (e.g. text/plain, application/json), silently underestimating
context budget. Fall back to character-based heuristic or countTokens.
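A hedged sketch of the fallback heuristic (the real helper's signature is not shown here; the ~4-chars-per-token ratio is a common rule of thumb, assumed for illustration):

```javascript
// Sketch: estimate tokens for a base64 document of unrecognized MIME type
// instead of silently returning 0.
function estimateUnknownDocumentTokens(base64Data) {
  // a base64 payload decodes to roughly 3/4 of its encoded length
  const approxChars = Math.floor(base64Data.length * 0.75);
  // ~4 characters per token is a rough, conservative heuristic
  return Math.ceil(approxChars / 4);
}
```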
* refactor: return cloned usage from markSummarizationUsage
Avoid mutating LangChain's internal usage_metadata object by returning
a shallow clone with the usage_type tag. Update all call sites in
callbacks, openai, and responses controllers to use the returned value.
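The non-mutating tag is simple to sketch (the `usage_type` value mirrors the commit text; the metadata shape is an assumption):

```javascript
// Sketch: tag usage metadata with a shallow clone rather than writing
// into LangChain's internal usage_metadata object.
function markSummarizationUsage(usageMetadata) {
  return { ...usageMetadata, usage_type: 'summarization' };
}
```

Call sites then use the returned value; the original object LangChain owns is never touched.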
* refactor: consolidate debug logging loops in buildMessages
Merge the two sequential O(n) debug-logging passes over orderedMessages
into a single pass inside the map callback where all data is available.
* refactor: narrow SummaryContentPart.content type
Replace broad Agents.MessageContentComplex[] with the specific
Array<{ type: ContentTypes.TEXT; text: string }> that all producers
and consumers already use, improving compile-time safety.
* refactor: use single output array in recordCollectedUsage
Have processUsageGroup append to a shared array instead of returning
separate arrays that are spread into a third, reducing allocations.
* refactor: use for...in in hydrateMissingIndexTokenCounts
Replace Object.entries with for...in to avoid allocating an
intermediate tuple array during token map hydration.
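The difference can be sketched as follows (names assumed; `for...in` is safe here because the token map is a plain object with no inherited enumerable keys):

```javascript
// Sketch: walk the map's own keys with for...in, avoiding the
// [key, value] tuple array that Object.entries would allocate.
function hydrateMissingTokenCounts(tokenMap, countFn) {
  for (const key in tokenMap) {
    if (tokenMap[key] == null) {
      // == null catches both null and undefined placeholders
      tokenMap[key] = countFn(key);
    }
  }
  return tokenMap;
}
```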
8ba2bde5c1
📦 refactor: Consolidate DB models, encapsulating Mongoose usage in data-schemas (#11830)
* chore: move database model methods to /packages/data-schemas
* chore: add TypeScript ESLint rule to warn on unused variables
* refactor: model imports to streamline access
  - Consolidated model imports across various files to improve code organization and reduce redundancy.
  - Updated imports for models such as Assistant, Message, Conversation, and others to a unified import path.
  - Adjusted middleware and service files to reflect the new import structure, ensuring functionality remains intact.
  - Enhanced test files to align with the new import paths, maintaining test coverage and integrity.
* chore: migrate database models to packages/data-schemas and refactor all direct Mongoose Model usage outside of data-schemas
* test: update agent model mocks in unit tests
  - Added `getAgent` mock to `client.test.js` to enhance test coverage for agent-related functionality.
  - Removed redundant `getAgent` and `getAgents` mocks from `openai.spec.js` and `responses.unit.spec.js` to streamline test setup and reduce duplication.
  - Ensured consistency in agent mock implementations across test files.
* fix: update types in data-schemas
* refactor: enhance type definitions in transaction and spending methods
  - Updated type definitions in `checkBalance.ts` to use specific request and response types.
  - Refined `spendTokens.ts` to utilize a new `SpendTxData` interface for better clarity and type safety.
  - Improved transaction handling in `transaction.ts` by introducing `TransactionResult` and `TxData` interfaces, ensuring consistent data structures across methods.
  - Adjusted unit tests in `transaction.spec.ts` to accommodate new type definitions and enhance robustness.
* refactor: streamline model imports and enhance code organization
  - Consolidated model imports across various controllers and services to a unified import path, improving code clarity and reducing redundancy.
  - Updated multiple files to reflect the new import structure, ensuring all functionalities remain intact.
  - Enhanced overall code organization by removing duplicate import statements and optimizing the usage of model methods.
* feat: implement loadAddedAgent and refactor agent loading logic
  - Introduced `loadAddedAgent` function to handle loading agents from added conversations, supporting multi-convo parallel execution.
  - Created a new `load.ts` file to encapsulate agent loading functionalities, including `loadEphemeralAgent` and `loadAgent`.
  - Updated the `index.ts` file to export the new `load` module instead of the deprecated `loadAgent`.
  - Enhanced type definitions and improved error handling in the agent loading process.
  - Adjusted unit tests to reflect changes in the agent loading structure and ensure comprehensive coverage.
* refactor: enhance balance handling with new update interface
  - Introduced `IBalanceUpdate` interface to streamline balance update operations across the codebase.
  - Updated `upsertBalanceFields` method signatures in `balance.ts`, `transaction.ts`, and related tests to utilize the new interface for improved type safety.
  - Adjusted type imports in `balance.spec.ts` to include `IBalanceUpdate`, ensuring consistency in balance management functionalities.
  - Enhanced overall code clarity and maintainability by refining type definitions related to balance operations.
* feat: add unit tests for loadAgent functionality and enhance agent loading logic
  - Introduced comprehensive unit tests for the `loadAgent` function, covering various scenarios including null and empty agent IDs, loading of ephemeral agents, and permission checks.
  - Enhanced the `initializeClient` function by moving `getConvoFiles` to the correct position in the database method exports, ensuring proper functionality.
  - Improved test coverage for agent loading, including handling of non-existent agents and user permissions.
* chore: reorder memory method exports for consistency
  - Moved `deleteAllUserMemories` to the correct position in the exported memory methods, ensuring a consistent and logical order of method exports in `memory.ts`.
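The loadAgent scenarios the tests above cover (null/empty IDs, ephemeral agents, persisted agents) can be pictured as a small dispatcher. Everything here is a hypothetical illustration, not LibreChat's exact API: the `ephemeral_` prefix and the injected loader functions are assumptions.

```javascript
// Illustrative sketch of the agent-loading dispatch described above.
// The `ephemeral_` prefix and injected loaders are assumptions, not
// LibreChat's real identifiers.
const EPHEMERAL_PREFIX = 'ephemeral_';

function loadAgent({ agentId, loadEphemeralAgent, getAgentById }) {
  if (agentId == null || agentId === '') {
    // Null/empty IDs short-circuit instead of hitting the database.
    return null;
  }
  if (agentId.startsWith(EPHEMERAL_PREFIX)) {
    // Ephemeral agents are built from request context, not persisted state.
    return loadEphemeralAgent(agentId);
  }
  return getAgentById(agentId);
}
```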

a88bfae4dd
🖼️ fix: Correct ToolMessage Response Format for Agent-Mode Image Tools (#12310)

* fix: Set response format for agent tools in DALLE3, FluxAPI, and StableDiffusion classes
  - Added logic to set `responseFormat` to 'content_and_artifact' when `isAgent` is true in DALLE3.js, FluxAPI.js, and StableDiffusion.js.
* test: Add regression tests for image tool agent mode in imageTools-agent.spec.js
  - Introduced a new test suite for DALLE3, FluxAPI, and StableDiffusion classes to verify that the invoke() method returns a ToolMessage with base64 in artifact.content, ensuring it is not serialized into content.
  - Validated that responseFormat is set to 'content_and_artifact' when isAgent is true, and confirmed the correct handling of base64 data in the response.
* fix: handle agent error paths and generateFinetunedImage in image tools
  - StableDiffusion._call() was returning a raw string on API error, bypassing returnValue() and breaking the content_and_artifact contract when isAgent is true
  - FluxAPI.generateFinetunedImage() had no isAgent branch; it would call processFileURL (unset in agent context) instead of fetching and returning the base64 image as an artifact tuple
  - Add JSDoc to all three responseFormat assignments clarifying why LangChain requires this property for correct ToolMessage construction
* test: expand image tool agent mode regression suite
  - Add env var save/restore in beforeEach/afterEach to prevent test pollution
  - Add error path tests for all three tools verifying ToolMessage content and artifact are correctly populated when the upstream API fails
  - Add generate_finetuned action test for FluxAPI covering the new agent branch in generateFinetunedImage
* chore: fix lint errors in FluxAPI and imageTools-agent spec
* chore: fix import ordering in imageTools-agent spec
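The 'content_and_artifact' contract these fixes enforce is LangChain's tuple convention: in agent mode the tool returns `[content, artifact]`, so the binary payload rides in `ToolMessage.artifact` rather than being serialized into `content`. A minimal sketch of that contract, with `returnImageValue` as a hypothetical stand-in for the tools' `returnValue()` helpers (the artifact shape is an assumption):

```javascript
// Sketch of the [content, artifact] tuple behind responseFormat
// 'content_and_artifact'. Names and shapes are illustrative.
function returnImageValue({ isAgent, displayText, base64 }) {
  if (!isAgent) {
    // Non-agent mode: plain string response, as before.
    return displayText;
  }
  // Agent mode: content goes to the model; the base64 image travels
  // in the artifact slot instead of being serialized into content.
  const artifact = {
    content: [{ type: 'image_url', image_url: { url: base64 } }],
  };
  return [displayText, artifact];
}
```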

9a5d7eaa4e
⚡ refactor: Replace tiktoken with ai-tokenizer (#12175)

* chore: Update dependencies by adding ai-tokenizer and removing tiktoken
  - Added ai-tokenizer version 1.0.6 to package.json and package-lock.json across multiple packages.
  - Removed tiktoken version 1.0.15 from package.json and package-lock.json in the same locations, streamlining dependency management.
* refactor: replace js-tiktoken with ai-tokenizer
  - Added support for 'claude' encoding in the AgentClient class to improve model compatibility.
  - Updated Tokenizer class to utilize 'ai-tokenizer' for both 'o200k_base' and 'claude' encodings, replacing the previous 'tiktoken' dependency.
  - Refactored tests to reflect changes in tokenizer behavior and ensure accurate token counting for both encoding types.
  - Removed deprecated references to 'tiktoken' and adjusted related tests for improved clarity and functionality.
* chore: remove tiktoken mocks from DALLE3 tests
  - Eliminated mock implementations of 'tiktoken' from DALLE3-related test files to streamline test setup and align with recent dependency updates.
  - Adjusted related test structures to ensure compatibility with the new tokenizer implementation.
* chore: Add distinct encoding support for Anthropic Claude models
  - Introduced a new method `getEncoding` in the AgentClient class to handle the specific BPE tokenizer for Claude models, ensuring compatibility with the distinct encoding requirements.
  - Updated documentation to clarify the encoding logic for Claude and other models.
* docs: Update return type documentation for getEncoding method in AgentClient
  - Clarified the return type of the getEncoding method to specify that it can return an EncodingName or undefined, enhancing code readability and type safety.
* refactor: Tokenizer class and error handling
  - Exported the EncodingName type for broader usage.
  - Renamed encodingMap to encodingData for clarity.
  - Improved error handling in getTokenCount method to ensure recovery attempts are logged and return 0 on failure.
  - Updated countTokens function documentation to specify the use of 'o200k_base' encoding.
* refactor: Simplify encoding documentation and export type
  - Updated the getEncoding method documentation to clarify the default behavior for non-Anthropic Claude models.
  - Exported the EncodingName type separately from the Tokenizer module for improved clarity and usage.
* test: Update text processing tests for token limits
  - Adjusted test cases to handle smaller text sizes, changing scenarios from ~120k tokens to ~20k tokens for both the real tokenizer and countTokens functions.
  - Updated token limits in tests to reflect new constraints, ensuring tests accurately assess performance and call reduction.
  - Enhanced console log messages for clarity regarding token counts and reductions in the updated scenarios.
* refactor: Update Tokenizer imports and exports
  - Moved Tokenizer and countTokens exports to the tokenizer module for better organization.
  - Adjusted imports in memory.ts to reflect the new structure, ensuring consistent usage across the codebase.
  - Updated memory.test.ts to mock the Tokenizer from the correct module path, enhancing test accuracy.
* refactor: Tokenizer initialization and error handling
  - Introduced an async `initEncoding` method to preload tokenizers, improving performance and accuracy in token counting.
  - Updated `getTokenCount` to handle uninitialized tokenizers more gracefully, ensuring proper recovery and logging on errors.
  - Removed deprecated synchronous tokenizer retrieval, streamlining the overall tokenizer management process.
* test: Enhance tokenizer tests with initialization and encoding checks
  - Added `beforeAll` hooks to initialize tokenizers for 'o200k_base' and 'claude' encodings before running tests, ensuring proper setup.
  - Updated tests to validate the loading of encodings and the correctness of token counts for both 'o200k_base' and 'claude'.
  - Improved test structure to deduplicate concurrent initialization calls, enhancing performance and reliability.
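The encoding-selection rule this commit introduces (Claude models use a distinct BPE tokenizer, everything else falls back to 'o200k_base') can be sketched roughly as below. The signature and the substring check are assumptions; the real `AgentClient.getEncoding` may key off different fields.

```javascript
// Hedged sketch of the encoding-selection rule described above; not
// the actual AgentClient.getEncoding implementation.
function getEncoding(provider, model = '') {
  // Anthropic Claude models require their own BPE tokenizer.
  if (provider === 'anthropic' || /claude/i.test(model)) {
    return 'claude';
  }
  // Default for all other models.
  return 'o200k_base';
}
```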

490ad30427
🧩 fix: Expand Toolkit Definitions to Include Child Tools in Event-Driven Mode (#12066)

* chore: Update logging format for tool execution handler to improve clarity
* fix: Expand toolkit tools in loadToolDefinitions for event-driven mode
  - The image_gen_oai toolkit contains both image_gen_oai and image_edit_oai tools, but the definitions-only path only returned image_gen_oai. This adds toolkit expansion so child tools are included in definitions, and resolves child tool names to their parent toolkit constructor at runtime.
* chore: Remove toolkit flag from gemini_image_gen
  - gemini_image_gen only has a single tool, so it is not a true toolkit.
* refactor: Address review findings for toolkit expansion
  - Guard against duplicate constructor calls when parent and child tools are both in the tools array (Finding 2)
  - Consolidate image tool descriptions/schemas: registry now derives from toolkit objects (oaiToolkit, geminiToolkit) instead of duplicating them, so env var overrides are respected everywhere (Finding 5)
  - Move toolkitExpansion/toolkitParent to toolkits/mapping.ts with immutable types (Findings 6, 9)
  - Add tests for toolkit expansion, deduplication, and mapping invariants (Finding 1)
  - Fix log format to quote each tool individually (Finding 8)
* fix: Correct toolkit constructor lookup to store under requested tool name
  - The previous dedup guard stored the factory under toolKey (parent name) instead of tool (requested name), causing the promise loop to miss child tools like image_edit_oai. Now stores under both the parent key (for dedup) and the requested name (for lookup), with a memoized factory to ensure the constructor runs only once.
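The dedup fix above amounts to one memoized factory registered under both the parent toolkit key and each requested child tool name, so lookups by either name succeed while the constructor runs at most once. `registerToolkitFactories` and its arguments are illustrative names, not the LibreChat functions:

```javascript
// Sketch of the memoized-factory dedup described above. Requesting
// both a parent toolkit and its child tools constructs the toolkit
// exactly once, and both names resolve to the same instance.
function registerToolkitFactories(requestedTools, toolkitParent, constructToolkit) {
  const factories = new Map();
  for (const tool of requestedTools) {
    // Child tools resolve to their parent toolkit key; parents map to themselves.
    const parentKey = toolkitParent[tool] ?? tool;
    if (!factories.has(parentKey)) {
      let instance;
      // Memoize so the toolkit constructor runs at most once.
      const factory = () => (instance ??= constructToolkit(parentKey));
      factories.set(parentKey, factory);
    }
    // Also store under the requested name so child lookups succeed.
    factories.set(tool, factories.get(parentKey));
  }
  return factories;
}
```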

a0bcb44b8f
🎨 chore: Update Agent Tool with new SVG assets (#12065)

- Replaced external icon URLs in manifest.json with local SVG assets for Google Search, DALL-E-3, Tavily Search, Calculator, Stable Diffusion, Azure AI Search, and Flux.
- Added new SVG files for Google Search, DALL-E-3, Tavily, Calculator, Stable Diffusion, and Azure AI Search to the assets directory, enhancing performance and reliability by using local resources.

43ff3f8473
💸 fix: Model Identifier Edge Case in Agent Transactions (#11988)

* 🔧 fix: Add skippedAgentIds tracking in initializeClient error handling
  - Enhanced error handling in the initializeClient function to track agent IDs that are skipped during processing. This addition improves the ability to monitor and debug issues related to agent initialization failures.
* 🔧 fix: Update model assignment in BaseClient to use instance model
  - Modified the model assignment in BaseClient to use `this.model` instead of `responseMessage.model`, clarifying that when using agents, the model refers to the agent ID rather than the model itself. This change improves code clarity and correctness in the context of agent usage.
* 🔧 test: Add tests for recordTokenUsage model assignment in BaseClient
  - Introduced new test cases in BaseClient to ensure that the correct model is passed to the recordTokenUsage method, verifying that it uses this.model instead of the agent ID from responseMessage.model. This enhances the accuracy of token usage tracking in agent scenarios.
  - Improved error handling in the initializeClient function to log errors when processing agents, ensuring that skipped agent IDs are tracked for better debugging.

8b159079f5
🪙 feat: Add messageId to Transactions (#11987)

* feat: Add messageId to transactions
* chore: field order
* feat: Enhance token usage tracking by adding messageId parameter
  - Updated `recordTokenUsage` method in BaseClient to accept a new `messageId` parameter for improved tracking.
  - Propagated `messageId` in the AgentClient when recording usage.
  - Added tests to ensure `messageId` is correctly passed and handled in various scenarios, including propagation across multiple usage entries.
* chore: Correct field order in createGeminiImageTool function
  - Moved the conversationId field to the correct position in the object being passed to the recordTokenUsage method, ensuring proper parameter alignment for improved functionality.
* refactor: Update OpenAIChatCompletionController and createResponse to use responseId instead of requestId
  - Replaced instances of requestId with responseId in the OpenAIChatCompletionController for improved clarity in logging and tracking.
  - Updated createResponse to include responseId in the requestBody, ensuring consistency across the handling of message identifiers.
* test: Add messageId to agent client tests
  - Included messageId in the agent client tests to ensure proper handling and propagation of message identifiers during transaction recording.
  - This update enhances the test coverage for scenarios involving messageId, aligning with recent changes in the tracking of message identifiers.
* fix: Update OpenAIChatCompletionController to use requestId for context
  - Changed the context object in OpenAIChatCompletionController to use `requestId` instead of `responseId` for improved clarity and consistency in handling request identifiers.
* chore: field order

6169d4f70b
🚦 fix: 404 JSON Responses for Unmatched API Routes (#11976)

* feat: Implement 404 JSON response for unmatched API routes
  - Added middleware to return a 404 JSON response with a message for undefined API routes.
  - Updated SPA fallback to serve index.html for non-API unmatched routes.
  - Ensured the error handler is positioned correctly as the last middleware in the stack.
* fix: Enhance logging in BaseClient for better token usage tracking
  - Updated `getTokenCountForResponse` to log the messageId of the response for improved debugging.
  - Enhanced userMessage logging to include messageId, tokenCount, and conversationId for clearer context during token count mapping.
* chore: Improve logging in processAddedConvo for better debugging
  - Updated the logging structure in the processAddedConvo function to provide clearer context when processing added conversations.
  - Removed redundant logging and enhanced the output to include model, agent ID, and endpoint details for improved traceability.
* chore: Enhance logging in BaseClient for improved token usage tracking
  - Added debug logging in the BaseClient to track response token usage, including messageId, model, promptTokens, and completionTokens for better debugging and traceability.
* chore: Enhance logging in MemoryAgent for improved context
  - Updated logging in the MemoryAgent to include userId, conversationId, messageId, and provider details for better traceability during memory processing.
  - Adjusted log messages to provide clearer context when content is returned or not, aiding in debugging efforts.
* chore: Refactor logging in initializeClient for improved clarity
  - Consolidated multiple debug log statements into a single message that provides a comprehensive overview of the tool context being stored for the primary agent, including the number of tools and the size of the tool registry. This enhances traceability and debugging efficiency.
* feat: Implement centralized 404 handling for unmatched API routes
  - Introduced a new middleware function `apiNotFound` to standardize 404 JSON responses for undefined API routes.
  - Updated the server configuration to utilize the new middleware, enhancing code clarity and maintainability.
  - Added tests to ensure correct 404 responses for various non-GET methods and the `/api` root path.
* fix: Enhance logging in apiNotFound middleware for improved safety
  - Updated the `apiNotFound` function to sanitize the request path by replacing problematic characters and limiting its length, ensuring safer logging of 404 errors.
* refactor: Move apiNotFound middleware to a separate file for better organization
  - Extracted the `apiNotFound` function from the error middleware into its own file, enhancing code organization and maintainability.
  - Updated the index file to export the new `notFound` middleware, ensuring it is included in the middleware stack.
* docs: Add comment to clarify usage of unsafeChars regex in notFound middleware
  - Included a comment in the notFound middleware file to explain that the unsafeChars regex is safe to reuse with .replace() at the module scope, as it does not retain lastIndex state.
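A minimal sketch of an `apiNotFound` middleware like the one described above, assuming an Express-style `(req, res)` signature; the message text and 200-character cap are assumptions, but the module-scope regex is safe because `String.prototype.replace` with a global regex does not retain `lastIndex` state between calls.

```javascript
// Sketch of a centralized 404 JSON handler for unmatched /api routes,
// mounted after all real API routes and before the SPA fallback.
// Safe at module scope: .replace() does not mutate regex lastIndex.
const unsafeChars = /[\r\n\t]/g;

function apiNotFound(req, res) {
  // Sanitize before logging: strip control characters, cap the length.
  const safePath = String(req.originalUrl || req.path || '')
    .replace(unsafeChars, '_')
    .slice(0, 200);
  console.warn(`404 for unmatched API route: ${safePath}`);
  res.status(404).json({ message: `API route not found: ${safePath}` });
}
```

In an Express app this would be registered as `app.use('/api', apiNotFound)` so non-API routes still fall through to `index.html`.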

3a079b980a
📌 fix: Populate userMessage.files Before First DB Save (#11939)

* fix: populate userMessage.files before first DB save
* fix: ESLint error fixed
* fix: deduplicate file-population logic and add test coverage
  - Extract `buildMessageFiles` helper into `packages/api/src/utils/message` to replace three near-identical loops in BaseClient and both agent controllers. Fixes set poisoning from undefined file_id entries, moves file population inside the skipSaveUserMessage guard to avoid wasted work, and adds full unit test coverage for the new behavior.
* chore: reorder import statements in openIdJwtStrategy.js for consistency

Co-authored-by: Danny Avila <danny@librechat.ai>
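The helper's contract can be sketched as follows. The attachment and return shapes are assumptions for illustration; the key points from the commit are the `file_id` guard (so an `undefined` never poisons the dedup set) and the deduplication itself.

```javascript
// Sketch of a buildMessageFiles-style helper: build the files array
// for a message from attachments, skipping entries without a file_id
// and deduplicating by file_id. Shapes are illustrative.
function buildMessageFiles(attachments = []) {
  const seen = new Set();
  const files = [];
  for (const file of attachments) {
    // Guard: an undefined file_id must not enter the dedup set.
    if (!file || file.file_id == null || seen.has(file.file_id)) {
      continue;
    }
    seen.add(file.file_id);
    files.push({ file_id: file.file_id });
  }
  return files;
}
```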

f3eb197675
💎 fix: Gemini Image Gen Tool Vertex AI Auth and File Storage (#11923)

* chore: saveToCloudStorage function and enhance error handling
  - Removed unnecessary parameters and streamlined the logic for saving images to cloud storage.
  - Introduced buffer handling for base64 image data and improved the integration with file strategy functions.
  - Enhanced error handling during local image saving to ensure robustness.
  - Updated the createGeminiImageTool function to reflect changes in the saveToCloudStorage implementation.
* refactor: streamline image persistence logic in GeminiImageGen
  - Consolidated image saving functionality by renaming and refactoring the saveToCloudStorage function to persistGeneratedImage.
  - Improved error handling and logging for image persistence operations.
  - Enhanced the replaceUnwantedChars function to better sanitize input strings.
  - Updated createGeminiImageTool to reflect changes in image handling and ensure consistent behavior across storage strategies.
* fix: clean up GeminiImageGen by removing unused functions and improving logging
  - Removed the getSafeFormat and persistGeneratedImage functions to streamline image handling.
  - Updated logging in createGeminiImageTool for clarity and consistency.
  - Consolidated imports by eliminating unused dependencies, enhancing code maintainability.
* chore: update environment configuration and manifest for unused GEMINI_VERTEX_ENABLED
  - Removed the Vertex AI configuration option from .env.example to simplify setup.
  - Updated the manifest.json to reflect the removal of the Vertex AI dependency in the authentication field.
  - Cleaned up the createGeminiImageTool function by eliminating unused fields related to Vertex AI, streamlining the code.
* fix: update loadAuthValues call in loadTools function for GeminiImageGen tool
  - Modified the loadAuthValues function call to include throwError: false, preventing exceptions on authentication failures.
  - Removed the unused processFileURL parameter from the tool context object, streamlining the code.
* refactor: streamline GoogleGenAI initialization in GeminiImageGen
  - Removed unused file system access check for Google application credentials, simplifying the environment setup.
  - Added googleAuthOptions to the GoogleGenAI instantiation, enhancing the configuration for authentication.
* fix: update Gemini API Key label and description in manifest.json
  - Changed the label to indicate that the Gemini API Key is optional.
  - Revised the description to clarify usage with Vertex AI and service accounts, enhancing user guidance.
* fix: enhance abort signal handling in createGeminiImageTool
  - Introduced derivedSignal to manage abort events during image generation, improving responsiveness to cancellation requests.
  - Added an abortHandler to log when image generation is aborted, enhancing debugging capabilities.
  - Ensured proper cleanup of event listeners in the finally block to prevent memory leaks.
* fix: update authentication handling for plugins to support optional fields
  - Added support for optional authentication fields in the manifest and PluginAuthForm.
  - Updated the checkPluginAuth function to correctly validate plugins with optional fields.
  - Enhanced tests to cover scenarios with optional authentication fields, ensuring accurate validation logic.
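The derived-signal pattern from the abort-handling fix uses only the standard `AbortController`/`AbortSignal` API (global in Node 15+). `withDerivedSignal` and its callback are illustrative names; the real tool inlines this logic and calls the cleanup in a `finally` block.

```javascript
// Sketch of deriving an abort signal from a parent request signal,
// with a handler for logging and explicit listener cleanup.
function withDerivedSignal(parentSignal, onAbort) {
  const controller = new AbortController();
  const abortHandler = () => {
    onAbort?.(); // e.g. log that image generation was aborted
    controller.abort();
  };
  if (parentSignal.aborted) {
    // Parent already aborted: propagate immediately.
    abortHandler();
  } else {
    parentSignal.addEventListener('abort', abortHandler, { once: true });
  }
  return {
    signal: controller.signal,
    // Call from a finally block to avoid leaking the listener.
    cleanup: () => parentSignal.removeEventListener('abort', abortHandler),
  };
}
```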

1d0a4c501f
🪨 feat: AWS Bedrock Document Uploads (#11912)

* feat: add aws bedrock upload to provider support
* chore: address copilot comments
* feat: add shared Bedrock document format types and MIME mapping
  - Bedrock Converse API accepts 9 document formats beyond PDF. Add BedrockDocumentFormat union type, MIME-to-format mapping, and helpers in data-provider so both client and backend can reference them.
* refactor: generalize Bedrock PDF validation to support all document types
  - Rename validateBedrockPdf to validateBedrockDocument with MIME-aware logic: 4.5MB hard limit applies to all types, PDF header check only runs for application/pdf. Adds test coverage for non-PDF documents.
* feat: support all Bedrock document formats in encoding pipeline
  - Widen file type gates to accept csv, doc, docx, xls, xlsx, html, txt, md for Bedrock. Uses shared MIME-to-format map instead of hardcoded 'pdf'. Other providers' PDF-only paths remain unchanged.
* feat: expand Bedrock file upload UI to accept all document types
  - Add 'image_document_extended' upload type for Bedrock with accept filters for all 9 supported formats. Update drag-and-drop validation to use isBedrockDocumentType helper.
* fix: route Bedrock document types through provider pipeline
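A hedged sketch of the MIME-to-format map and the `validateBedrockDocument` logic described above. The map below lists only a subset of the supported formats, and the return shape is an assumption; the points carried over from the commit are the 4.5MB hard limit for all types and the `%PDF-` header check that runs only for `application/pdf`.

```javascript
// Illustrative subset of the Bedrock MIME-to-format mapping; the real
// map in data-provider covers all supported formats (pdf, csv, doc,
// docx, xls, xlsx, html, txt, md).
const bedrockMimeToFormat = {
  'application/pdf': 'pdf',
  'text/csv': 'csv',
  'text/plain': 'txt',
  'text/html': 'html',
  'text/markdown': 'md',
};

const BEDROCK_MAX_BYTES = 4.5 * 1024 * 1024; // 4.5MB hard limit, all types

function validateBedrockDocument(buffer, mimeType) {
  if (!(mimeType in bedrockMimeToFormat)) {
    return { ok: false, reason: 'unsupported type' };
  }
  if (buffer.length > BEDROCK_MAX_BYTES) {
    return { ok: false, reason: 'exceeds 4.5MB limit' };
  }
  // The magic-byte header check only applies to PDFs.
  if (mimeType === 'application/pdf' && !buffer.subarray(0, 5).toString().startsWith('%PDF-')) {
    return { ok: false, reason: 'invalid PDF header' };
  }
  return { ok: true, format: bedrockMimeToFormat[mimeType] };
}
```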

24625f5693
🧩 refactor: Tool Context Builders for Web Search & Image Gen (#11644)

* fix: Web Search + Image Gen Tool Context
  - Added `buildWebSearchContext` function to create a structured context for web search tools, including citation format instructions.
  - Updated `loadTools` and `loadToolDefinitionsWrapper` functions to utilize the new web search context, improving tool initialization and response handling.
  - Introduced logic to handle image editing tools with `buildImageToolContext`, enhancing the overall tool management capabilities.
  - Refactored imports in `ToolService.js` to include the new context builders for better organization and maintainability.
* fix: Trim critical output escape sequence instructions in web toolkit
  - Updated the critical output escape sequence instructions in the web toolkit to include a `.trim()` method, ensuring that unnecessary whitespace is removed from the output. This change enhances the consistency and reliability of the generated output.

5af1342dbb
🦥 refactor: Event-Driven Lazy Tool Loading (#11588)

* refactor: json schema tools with lazy loading
  - Added LocalToolExecutor class for lazy loading and caching of tools during execution.
  - Introduced ToolExecutionContext and ToolExecutor interfaces for better type management.
  - Created utility functions to generate tool proxies with JSON schema support.
  - Added ExtendedJsonSchema type for enhanced schema definitions.
  - Updated existing toolkits to utilize the new schema and executor functionalities.
  - Introduced a comprehensive tool definitions registry for managing various tool schemas.
* chore: update @librechat/agents to version 3.1.2
* refactor: enhance tool loading optimization and classification
  - Improved the loadAgentToolsOptimized function to utilize a proxy pattern for all tools, enabling deferred execution and reducing overhead.
  - Introduced caching for tool instances and refined tool classification logic to streamline tool management.
  - Updated the handling of MCP tools to improve logging and error reporting for missing tools in the cache.
  - Enhanced the structure of tool definitions to support better classification and integration with existing tools.
* refactor: modularize tool loading and enhance optimization
  - Moved the loadAgentToolsOptimized function to a new service file for better organization and maintainability.
  - Updated the ToolService to utilize the new service for optimized tool loading, improving code clarity.
  - Removed legacy tool loading methods and streamlined the tool loading process to enhance performance and reduce complexity.
  - Introduced feature flag handling for optimized tool loading, allowing for easier toggling of this functionality.
* refactor: replace loadAgentToolsWithFlag with loadAgentTools in tool loader
* refactor: enhance MCP tool loading with proxy creation and classification
* refactor: optimize MCP tool loading by grouping tools by server
  - Introduced a Map to group cached tools by server name, improving the organization of tool data.
  - Updated the createMCPProxyTool function to accept server name directly, enhancing clarity.
  - Refactored the logic for handling MCP tools, streamlining the process of creating proxy tools for classification.
* refactor: enhance MCP tool loading and proxy creation
  - Added functionality to retrieve MCP server tools and reinitialize servers if necessary, improving tool availability.
  - Updated the tool loading logic to utilize a Map for organizing tools by server, enhancing clarity and performance.
  - Refactored the createToolProxy function to ensure a default response format, streamlining tool creation.
* refactor: update createToolProxy to ensure consistent response format
  - Modified the createToolProxy function to await the executor's execution and validate the result format.
  - Ensured that the function returns a default response structure when the result is not an array of two elements, enhancing reliability in tool proxy creation.
* refactor: ToolExecutionContext with toolCall property
  - Added toolCall property to ToolExecutionContext interface for improved context handling during tool execution.
  - Updated LocalToolExecutor to include toolCall in the runnable configuration, allowing for more flexible tool invocation.
  - Modified createToolProxy to pass toolCall from the configuration, ensuring consistent context across tool executions.
* refactor: enhance event-driven tool execution and logging
  - Introduced ToolExecuteOptions for improved handling of event-driven tool execution, allowing for parallel execution of tool calls.
  - Updated getDefaultHandlers to include support for ON_TOOL_EXECUTE events, enhancing the flexibility of tool invocation.
  - Added detailed logging in LocalToolExecutor to track tool loading and execution metrics, improving observability and debugging capabilities.
  - Refactored initializeClient to integrate event-driven tool loading, ensuring compatibility with the new execution model.
* chore: update @librechat/agents to version 3.1.21
* refactor: remove legacy tool loading and executor components
  - Eliminated the loadAgentToolsWithFlag function, simplifying the tool loading process by directly using loadAgentTools.
  - Removed the LocalToolExecutor and related executor components to streamline the tool execution architecture.
  - Updated ToolService and related files to reflect the removal of deprecated features, enhancing code clarity and maintainability.
* refactor: enhance tool classification and definitions handling
  - Updated the loadAgentTools function to return toolDefinitions alongside toolRegistry, improving the structure of tool data returned to clients.
  - Removed the convertRegistryToDefinitions function from the initialize.js file, simplifying the initialization process.
  - Adjusted the buildToolClassification function to ensure toolDefinitions are built and returned simultaneously with the toolRegistry, enhancing efficiency in tool management.
  - Updated type definitions in initialize.ts to include toolDefinitions, ensuring consistency across the codebase.
* refactor: implement event-driven tool execution handler
  - Introduced createToolExecuteHandler function to streamline the handling of ON_TOOL_EXECUTE events, allowing for parallel execution of tool calls.
  - Updated getDefaultHandlers to utilize the new handler, simplifying the event-driven architecture.
  - Added handlers.ts file to encapsulate tool execution logic, improving code organization and maintainability.
  - Enhanced OpenAI handlers to integrate the new tool execution capabilities, ensuring consistent event handling across the application.
* refactor: integrate event-driven tool execution options
  - Added toolExecuteOptions to support event-driven tool execution in OpenAI and responses controllers, enhancing flexibility in tool handling.
  - Updated handlers to utilize createToolExecuteHandler, allowing for streamlined execution of tools during agent interactions.
  - Refactored service dependencies to include toolExecuteOptions, ensuring consistent integration across the application.
* refactor: enhance tool loading with definitionsOnly parameter
  - Updated createToolLoader and loadAgentTools functions to include a definitionsOnly parameter, allowing for the retrieval of only serializable tool definitions in event-driven mode.
  - Adjusted related interfaces and documentation to reflect the new parameter, improving clarity and flexibility in tool management.
  - Ensured compatibility across various components by integrating the definitionsOnly option in the initialization process.
* refactor: improve agent tool presence check in initialization
  - Added a check for tool presence using a new hasAgentTools variable, which evaluates both structuredTools and toolDefinitions.
  - Updated the conditional logic in the agent initialization process to utilize the hasAgentTools variable, enhancing clarity and maintainability in tool management.
* refactor: enhance agent tool extraction to support tool definitions
  - Updated the extractMCPServers function to handle both tool instances and serializable tool definitions, improving flexibility in agent tool management.
  - Added a new property toolDefinitions to the AgentWithTools type for better integration of event-driven mode.
  - Enhanced documentation to clarify the function's capabilities in extracting unique MCP server names from both tools and tool definitions.
* refactor: enhance tool classification and registry building
  - Added serverName property to ToolDefinition for improved tool identification.
  - Introduced buildToolRegistry function to streamline the creation of tool registries based on MCP tool definitions and agent options.
  - Updated buildToolClassification to utilize the new registry building logic, ensuring basic definitions are returned even when advanced classification features are not allowed.
  - Enhanced documentation and logging for clarity in tool classification processes.
* refactor: update @librechat/agents dependency to version 3.1.22
* fix: expose loadTools function in ToolService
  - Added loadTools function to the exported module in ToolService.js, enhancing the accessibility of tool loading functionality.
* chore: remove configurable options from tool execute options in OpenAI controller
* refactor: enhance tool loading mechanism to utilize agent-specific context
* chore: update @librechat/agents dependency to version 3.1.23
* fix: simplify result handling in createToolExecuteHandler
* refactor: loadToolDefinitions for efficient tool loading in event-driven mode
* refactor: replace legacy tool loading with loadToolsForExecution in OpenAI and responses controllers
  - Updated OpenAIChatCompletionController and createResponse functions to utilize loadToolsForExecution for improved tool loading.
  - Removed deprecated loadToolsLegacy references, streamlining the tool execution process.
  - Enhanced tool loading options to include agent-specific context and configurations.
* refactor: enhance tool loading and execution handling
  - Introduced loadActionToolsForExecution function to streamline loading of action tools, improving organization and maintainability.
  - Updated loadToolsForExecution to handle both regular and action tools, optimizing the tool loading process.
  - Added detailed logging for missing tools in createToolExecuteHandler, enhancing error visibility.
  - Refactored tool definitions to normalize action tool names, improving consistency in tool management.
* refactor: enhance built-in tool definitions loading
  - Updated loadToolDefinitions to include descriptions and parameters from the tool registry for built-in tools, improving the clarity and usability of tool definitions.
  - Integrated getToolDefinition to streamline the retrieval of tool metadata, enhancing the overall tool management process.
* feat: add action tool definitions loading to tool service
  - Introduced getActionToolDefinitions function to load action tool definitions based on agent ID and tool names, enhancing the tool loading process.
  - Updated loadToolDefinitions to integrate action tool definitions, allowing for better management and retrieval of action-specific tools.
  - Added comprehensive tests for action tool definitions to ensure correct loading and parameter handling, improving overall reliability and functionality.
* chore: update @librechat/agents dependency to version 3.1.26
* refactor: add toolEndCallback to handle tool execution results
* fix: tool definitions and execution handling
  - Introduced native tools (execute_code, file_search, web_search) to the tool service, allowing for better integration and management of these tools.
  - Updated isBuiltInTool function to include native tools in the built-in check, improving tool recognition.
  - Added comprehensive tests for loading parameters of native tools, ensuring correct functionality and parameter handling.
  - Enhanced tool definitions registry to include new agent tool definitions, streamlining tool retrieval and management.
* refactor: enhance tool loading and execution context
  - Added toolRegistry to the context for OpenAIChatCompletionController and createResponse functions, improving tool management.
  - Updated loadToolsForExecution to utilize toolRegistry for better integration of programmatic tools and tool search functionalities.
  - Enhanced the initialization process to include toolRegistry in agent context, streamlining tool access and configuration.
  - Refactored tool classification logic to support event-driven execution, ensuring compatibility with new tool definitions.
* chore: add request duration logging to OpenAI and Responses controllers
  - Introduced logging for request start and completion times in OpenAIChatCompletionController and createResponse functions.
  - Calculated and logged the duration of each request, enhancing observability and performance tracking.
  - Improved debugging capabilities by providing detailed logs for both streaming and non-streaming responses.
* chore: update @librechat/agents dependency to version 3.1.27
* refactor: implement buildToolSet function for tool management
  - Introduced buildToolSet function to streamline the creation of tool sets from agent configurations, enhancing tool management across various controllers.
  - Updated AgentClient, OpenAIChatCompletionController, and createResponse functions to utilize buildToolSet, improving consistency in tool handling.
  - Added comprehensive tests for buildToolSet to ensure correct functionality and edge case handling, enhancing overall reliability.
* refactor: update import paths for ToolExecuteOptions and createToolExecuteHandler
* fix: update GoogleSearch.js description for maximum search results
  - Changed the default maximum number of search results from 10 to 5 in the Google Search JSON schema description, ensuring accurate documentation of the expected behavior.
* chore: remove deprecated Browser tool and associated assets
  - Deleted the Browser tool definition from manifest.json, which included its name, plugin key, description, and authentication configuration.
  - Removed the web-browser.svg asset as it is no longer needed following the removal of the Browser tool.
* fix: ensure tool definitions are valid before processing
  - Added a check to verify the existence of tool definitions in the registry before accessing their properties, preventing potential runtime errors.
  - Updated the loading logic for built-in tool definitions to ensure that only valid definitions are pushed to the built-in tool definitions array.
* fix: extend ExtendedJsonSchema to support 'null' type and nullable enums
  - Updated the ExtendedJsonSchema type to include 'null' as a valid type option.
  - Modified the enum property to accept an array of values that can include strings, numbers, booleans, and null, enhancing schema flexibility.
* test: add comprehensive tests for tool definitions loading and registry behavior
  - Implemented tests to verify the handling of built-in tools without registry definitions, ensuring they are skipped correctly.
  - Added tests to confirm that built-in tools include descriptions and parameters in the registry.
  - Enhanced tests for action tools, checking for proper inclusion of metadata and handling of tools without parameters in the registry.
* test: add tests for mixed-type and number enum schema handling
  - Introduced tests to validate the parsing of mixed-type enum values, including strings, numbers, booleans, and null.
  - Added tests for number enum schema values to ensure correct parsing of numeric inputs, enhancing schema validation coverage.
* fix: update mock implementation for @librechat/agents
  - Changed the mock for @librechat/agents to spread the actual module's properties, ensuring that all necessary functionalities are preserved in tests.
  - This adjustment enhances the accuracy of the tests by reflecting the real structure of the module.
* fix: change max_results type in GoogleSearch schema from number to integer
  - Updated the type of max_results in the Google Search JSON schema to 'integer' for better type accuracy and validation consistency.
* fix: update max_results description and type in GoogleSearch schema
  - Changed the type of max_results from 'number' to 'integer' for improved type accuracy.
  - Updated the description to reflect the new default maximum number of search results, changing it from 10 to 5.
* refactor: remove unused code and improve tool registry handling
  - Eliminated outdated comments and conditional logic related to event-driven mode in the ToolService.
  - Enhanced the handling of the tool registry by ensuring it is configurable for better integration during tool execution.
* feat: add definitionsOnly option to buildToolClassification for event-driven mode - Introduced a new parameter, definitionsOnly, to the BuildToolClassificationParams interface to enable a mode that skips tool instance creation. - Updated the buildToolClassification function to conditionally add tool definitions without instantiating tools when definitionsOnly is true. - Modified the loadToolDefinitions function to pass definitionsOnly as true, ensuring compatibility with the new feature. * test: add unit tests for buildToolClassification with definitionsOnly option - Implemented tests to verify the behavior of buildToolClassification when definitionsOnly is set to true or false. - Ensured that tool instances are not created when definitionsOnly is true, while still adding necessary tool definitions. - Confirmed that loadAuthValues is called appropriately based on the definitionsOnly parameter, enhancing test coverage for this new feature. |
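The `definitionsOnly` flow described in these commits can be sketched roughly as follows. The function and field names (`buildToolClassification`, `toolRegistry`, `createInstance`) echo the commit messages, but the bodies are illustrative assumptions, not LibreChat's actual implementation:

```javascript
// Illustrative sketch of a definitionsOnly mode: collect tool definitions
// for the model without instantiating tool objects, so event-driven
// execution can advertise tools cheaply. `toolRegistry` and the shape of
// its entries are hypothetical stand-ins.
function buildToolClassification({ toolNames, toolRegistry, definitionsOnly = false }) {
  const toolDefinitions = [];
  const toolInstances = [];

  for (const name of toolNames) {
    const definition = toolRegistry[name];
    if (!definition) {
      // Guarded access: skip tools without a registry definition.
      continue;
    }
    toolDefinitions.push({
      name,
      description: definition.description,
      parameters: definition.parameters,
    });
    if (!definitionsOnly) {
      // Only create tool instances when actually executing, not when
      // merely advertising definitions.
      toolInstances.push(definition.createInstance());
    }
  }
  return { toolDefinitions, toolInstances };
}
```

A `loadToolDefinitions`-style caller would then pass `definitionsOnly: true` and use only the returned `toolDefinitions`.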
||
|
|
774f1f2cc2
|
🗑️ chore: Remove YouTube API integration (#11331)
* 🗑️ refactor: Remove YouTube API integration and related configurations, as it's broken and should be integrated via MCP instead. Currently there does not seem to be a single MCP with working get_transcript methods; the API seems to have changed months ago and there are no maintainers on these projects. We will work out an MCP solution soon.
- Deleted YouTube API key and related configurations from .env.example.
- Removed YouTube tools and their references from the API client, including the manifest and structured files.
- Updated package.json to remove YouTube-related dependencies.
- Cleaned up toolkit exports by removing YouTube toolkit references.
* chore: revert package removal to properly remove packages
* 🗑️ refactor: Remove YouTube API and related dependencies due to integration issues
---------
Co-authored-by: Danny Avila <danny@librechat.ai>
|
||
|
|
fc6f127b21
|
🌉 fix: Add Proxy Support to Gemini Image Gen Tool (#11302)
* ✨ feat: Add proxy support for Google APIs in GeminiImageGen
- Implemented a proxy wrapper for globalThis.fetch to route requests to googleapis.com through a specified proxy.
- Added tests to verify the proxy configuration behavior, ensuring correct dispatcher application for Google API calls and preserving existing options.
Co-authored-by: [Your Name] <your.email@example.com>
* chore: remove comment
---------
Co-authored-by: [Your Name] <your.email@example.com>
Co-authored-by: Danny Avila <danacordially@gmail.com>
|
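The proxy idea from this fix can be sketched as a small helper. In Node 18+, undici's `ProxyAgent` is typically passed to `fetch` via the non-standard `dispatcher` option; the helper and name below (`applyGoogleProxy`) are a simplified assumption, not the PR's actual code:

```javascript
// Simplified sketch: only requests to googleapis.com hosts receive a proxy
// dispatcher; other requests (and all existing options) pass through
// untouched. In the real tool, `dispatcher` would be an undici ProxyAgent
// built from a proxy environment variable.
function applyGoogleProxy(input, options = {}, dispatcher) {
  const url = typeof input === 'string' ? new URL(input) : new URL(input.url);
  const isGoogleApi =
    url.hostname === 'googleapis.com' || url.hostname.endsWith('.googleapis.com');
  if (!isGoogleApi || !dispatcher) {
    return options;
  }
  // Preserve caller options; only add the dispatcher (undici's fetch extension).
  return { ...options, dispatcher };
}
```

Wrapping `globalThis.fetch` would then look like `globalThis.fetch = (input, init) => originalFetch(input, applyGoogleProxy(input, init, proxyAgent));`.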
||
|
|
200098d992
|
🍌 feat: Gemini Image Generation Tool (Nano Banana) (#10676)
* Added fully functioning Agent Tool supporting Google's Nano Banana
* 🔧 refactor: Update Google credentials handling in GeminiImageGen.js
* Refactored the credentials path to follow a consistent pattern with other Google service integrations, allowing for an environment variable override.
* Updated documentation in README-GeminiNanoBanana.md to reflect the new credentials handling approach and removed references to hardcoded paths.
* 🛠️ refactor: Remove unnecessary whitespace in handleTools.js
* 🔧 feat: Update Gemini Image Generation Tool
- Bump @google/genai package version to ^1.19.0 for improved functionality.
- Refactor GeminiImageGen to createGeminiImageTool for better clarity and consistency.
- Enhance manifest.json for Gemini Image Tools with updated descriptions and icon.
- Add SVG icon for Gemini Image Tools.
- Implement progress tracking for Gemini image generation in the UI.
- Introduce new toolkit and context handling for image generation tools.
This update improves the Gemini image generation capabilities and user experience.
* 🗑️ chore: Remove outdated Gemini image generation PNG and update SVG icon
- Deleted the obsolete PNG file for Gemini image generation.
- Updated the SVG icon with a new design featuring a gradient and shadow effect, enhancing visual appeal and consistency.
* fix: ESLint formatting and unused variable in GeminiImageGen
* fix: Update default model to gemini-2.5-flash-image
* ✨ feat: Enhance Gemini Image Generation Configuration
- Updated .env.example to include new environment variables for Google Cloud region, service account configuration, and Gemini API key options.
- Modified GeminiImageGen.js to support both user-provided API keys and Vertex AI service accounts, improving flexibility in client initialization.
- Updated manifest.json to reflect changes in authentication methods for the Gemini Image Tools.
- Bumped @google/genai package version to 1.19.0 in package-lock.json for compatibility with new features.
* 🔧 fix: Format Default Service Key Path in GeminiImageGen.js
- Adjusted the return statement in getDefaultServiceKeyPath function for improved readability by formatting it across multiple lines. This change enhances code clarity without altering functionality.
* ✨ feat: Enhance Gemini Image Generation with Token Usage Tracking
- Added `recordTokenUsage` function to track token usage for balance management.
- Integrated token recording into the image generation process.
- Updated Gemini image generation tool to accept optional `aspectRatio` and `imageSize` parameters for improved image customization.
- Updated token values for new Gemini models in the transaction model.
- Improved documentation for image generation tool descriptions and parameters.
* ✨ feat: Add new Gemini models for image generation token limits
- Introduced token limits for 'gemini-3-pro-image' and 'gemini-2.5-flash-image' models.
- Updated token values to enhance the Gemini image generation capabilities.
* 🔧 fix: Update Google Service Key Path for Consistency in Initialization (#11001)
* 🔧 refactor: Update GeminiImageGen for improved file handling and path resolution
- Changed the default service key path to use process.cwd() for better compatibility.
- Replaced synchronous file system operations with asynchronous promises for mkdir and writeFile, enhancing performance and error handling.
- Added error handling for credential file access to prevent crashes when the file does not exist.
* 🔧 refactor: Update GeminiImageGen to streamline API key handling
- Refactored API key checks to improve clarity and consistency.
- Removed redundant checks for user-provided keys, enhancing code readability.
- Ensured proper logging for API key usage across different configurations.
* 🔧 fix: Update GeminiImageGen to handle imageSize support conditionally
- Added a check to ensure imageSize is only applied if the gemini model does not include 'gemini-2.5-flash-image', improving compatibility.
- Enhanced the logic for setting imageConfig to prevent potential issues with unsupported configurations.
* 🔧 refactor: Simplify local storage condition in createGeminiImageTool function
* 🔧 feat: Enhance image format handling in GeminiImageGen with conversion support
* 🔧 refactor: Streamline API key initialization in GeminiImageGen
- Simplified the handling of API keys by removing redundant checks for user-provided keys.
- Updated logging to reflect the new priority order for API key usage, enhancing clarity and consistency.
- Improved code readability by consolidating key retrieval logic.
---------
Co-authored-by: Dev Bhanushali <dev.bhanushali@hingehealth.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
|
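The conditional `imageSize` handling described in these commits ("imageSize is only applied if the gemini model does not include 'gemini-2.5-flash-image'") can be sketched as a small helper. `buildImageConfig` is a hypothetical name for illustration, not a function from the PR:

```javascript
// Hypothetical helper mirroring the conditional config described above:
// aspectRatio always passes through when given, while imageSize is dropped
// for models whose name includes 'gemini-2.5-flash-image', which (per the
// commit message) does not support it.
function buildImageConfig({ model, aspectRatio, imageSize }) {
  const imageConfig = {};
  if (aspectRatio) {
    imageConfig.aspectRatio = aspectRatio;
  }
  if (imageSize && !model.includes('gemini-2.5-flash-image')) {
    imageConfig.imageSize = imageSize;
  }
  // Return undefined when nothing was set, so callers can omit the field
  // from the request entirely.
  return Object.keys(imageConfig).length > 0 ? imageConfig : undefined;
}
```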
||
|
|
e452c1a8d9
|
🔀 refactor: Conditional Mapping Support for Multi-Convo (Parallel) Messages (#11180)
* refactor: message handling with addedConvo support
- Introduced `addedConvo` property in message schema to track conversation additions.
- Updated `BaseClient` to conditionally include `addedConvo` in saved messages based on request body.
- Enhanced `AgentClient` to apply mapping logic for messages with the `addedConvo` flag, improving message processing.
- Updated documentation to reflect new optional `mapCondition` parameter for message mapping functions, enhancing flexibility in message handling.
* test: Add comprehensive tests for getMessagesForConversation method
- Introduced a suite of tests for the `getMessagesForConversation` method in the `AgentClient` to validate mapping logic based on `mapMethod` and `mapCondition`.
- Covered various scenarios including applying mapping to all messages, conditional mapping based on `addedConvo`, handling of empty messages, and preserving message order.
- Ensured robust handling of edge cases such as null `mapMethod` and undefined `mapCondition`, enhancing overall test coverage and reliability of message processing.
|
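The `mapMethod`/`mapCondition` behavior described in this commit can be sketched as follows. The function name follows the commit message, but the body is an illustrative assumption about the semantics (undefined `mapCondition` maps everything, preserving prior behavior):

```javascript
// Sketch of conditional message mapping: mapMethod transforms messages,
// and the optional mapCondition restricts which messages it applies to
// (e.g. only those flagged with addedConvo). Unmatched messages pass
// through unchanged; message order is preserved.
function getMessagesForConversation({ messages, mapMethod, mapCondition }) {
  if (!mapMethod) {
    // No mapping requested: return a shallow copy in original order.
    return [...messages];
  }
  return messages.map((message) => {
    // An undefined mapCondition means "map all messages".
    if (mapCondition && !mapCondition(message)) {
      return message;
    }
    return mapMethod(message);
  });
}
```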
||
|
|
b7ea340769
|
🏞️ feat: Modifiable OpenAI Image Gen Model Environment Variable (#11082) | ||
|
|
439bc98682
|
⏸ refactor: Improve UX for Parallel Streams (Multi-Convo) (#11096)
* 🌊 feat: Implement multi-conversation feature with added conversation context and payload adjustments
* refactor: Replace isSubmittingFamily with isSubmitting across message components for consistency
* feat: Add loadAddedAgent and processAddedConvo for multi-conversation agent execution
* refactor: Update ContentRender usage to conditionally render PlaceholderRow based on isLast and isSubmitting
* WIP: first pass, sibling index
* feat: Enhance multi-conversation support with agent tracking and display improvements
* refactor: Introduce isEphemeralAgentId utility and update related logic for agent handling
* refactor: Implement createDualMessageContent utility for sibling message display and enhance useStepHandler for added conversations
* refactor: duplicate tools for added agent if ephemeral and primary agent is also ephemeral
* chore: remove deprecated multimessage rendering
* refactor: enhance dual message content creation and agent handling for parallel rendering
* refactor: streamline message rendering and submission handling by removing unused state and optimizing conditional logic
* refactor: adjust content handling in parallel mode to utilize existing content for improved agent display
* refactor: update @librechat/agents dependency to version 3.0.53
* refactor: update @langchain/core and @librechat/agents dependencies to latest versions
* refactor: remove deprecated @langchain/core dependency from package.json
* chore: remove unused SearchToolConfig and GetSourcesParams types from web.ts
* refactor: remove unused message properties from Message component
* refactor: enhance parallel content handling with groupId support in ContentParts and useStepHandler
* refactor: implement parallel content styling in Message, MessageRender, and ContentRender components. use explicit model name
* refactor: improve agent ID handling in createDualMessageContent for dual message display
* refactor: simplify title generation in AddedConvo by removing unused sender and preset logic
* refactor: replace string interpolation with cn utility for className in HoverButtons component
* refactor: enhance agent ID handling by adding suffix management for parallel agents and updating related components
* refactor: enhance column ordering in ContentParts by sorting agents with suffix management
* refactor: update @librechat/agents dependency to version 3.0.55
* feat: implement parallel content rendering with metadata support
- Added `ParallelContentRenderer` and `ParallelColumns` components for rendering messages in parallel based on groupId and agentId.
- Introduced `contentMetadataMap` to store metadata for each content part, allowing efficient parallel content detection.
- Updated `Message` and `ContentRender` components to utilize the new metadata structure for rendering.
- Modified `useStepHandler` to manage content indices and metadata during message processing.
- Enhanced `IJobStore` interface and its implementations to support storing and retrieving content metadata.
- Updated data schemas to include `contentMetadataMap` for messages, enabling multi-agent and parallel execution scenarios.
* refactor: update @librechat/agents dependency to version 3.0.56
* refactor: remove unused EPHEMERAL_AGENT_ID constant and simplify agent ID check
* refactor: enhance multi-agent message processing and primary agent determination
* refactor: implement branch message functionality for parallel responses
* refactor: integrate added conversation retrieval into message editing and regeneration processes
* refactor: remove unused isCard and isMultiMessage props from MessageRender and ContentRender components
* refactor: update @librechat/agents dependency to version 3.0.60
* refactor: replace usage of EPHEMERAL_AGENT_ID constant with isEphemeralAgentId function for improved clarity and consistency
* refactor: standardize agent ID format in tests for consistency
* chore: move addedConvo property to the correct position in payload construction
* refactor: rename agent_id values in loadAgent tests for clarity
* chore: reorder props in ContentParts component for improved readability
* refactor: rename variable 'content' to 'result' for clarity in RedisJobStore tests
* refactor: streamline useMessageActions by removing duplicate handleFeedback assignment
* chore: revert placeholder rendering logic MessageRender and ContentRender components to original
* refactor: implement useContentMetadata hook for optimized content metadata handling
* refactor: remove contentMetadataMap and related logic from the codebase and revert back to agentId/groupId in content parts
- Eliminated contentMetadataMap from various components and services, simplifying the handling of message content.
- Updated functions to directly access agentId and groupId from content parts instead of relying on a separate metadata map.
- Adjusted related hooks and components to reflect the removal of contentMetadataMap, ensuring consistent handling of message content.
- Updated tests and documentation to align with the new structure of message content handling.
* refactor: remove logging from groupParallelContent function to clean up output
* refactor: remove model parameter from TBranchMessageRequest type for simplification
* refactor: enhance branch message creation by stripping metadata for standalone content
* chore: streamline branch message creation by simplifying content filtering and removing unnecessary metadata checks
* refactor: include attachments in branch message creation for improved content handling
* refactor: streamline agent content processing by consolidating primary agent identification and filtering logic
* refactor: simplify multi-agent message processing by creating a dedicated mapping method and enhancing content filtering
* refactor: remove unused parameter from loadEphemeralAgent function for cleaner code
* refactor: update groupId handling in metadata to only set when provided by the server
|
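The parallel-content grouping this refactor converged on (agentId/groupId directly on content parts, rather than a separate contentMetadataMap) can be sketched as below. The field names follow the commit messages; the grouping function itself is an illustrative assumption, not the actual `groupParallelContent` implementation:

```javascript
// Illustrative grouping: content parts optionally carry agentId/groupId.
// Parts with a groupId are split into per-agent columns for side-by-side
// rendering; parts without one render sequentially as before.
function groupParallelContent(contentParts) {
  const sequential = [];
  const columns = new Map(); // agentId -> parts within the parallel group

  for (const part of contentParts) {
    if (part.groupId == null) {
      sequential.push(part);
      continue;
    }
    const key = part.agentId ?? 'primary';
    if (!columns.has(key)) {
      columns.set(key, []);
    }
    columns.get(key).push(part);
  }
  return { sequential, columns };
}
```

A renderer would then lay out `columns` side by side (one column per agent) and `sequential` parts in the normal single-column flow.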
||
|
|
0ae3b87b65
|
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926)
* ✨ feat: Implement Resumable Generation Jobs with SSE Support
- Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections.
- Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress.
- Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling.
- Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings.
- Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections.
* WIP: resuming
* WIP: resumable stream
* feat: Enhance Stream Management with Abort Functionality
- Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId.
- Introduced a new mutation hook `useAbortStreamMutation` for client-side integration.
- Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations.
- Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation.
- Improved `useResumableSSE` to handle stream errors and token refresh seamlessly.
- Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately.
* fix: Update query parameter handling in useChatHelpers
- Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations.
* fix: improve syncing when switching conversations
* fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup
* fix: Improve content type mismatch handling in useStepHandler
- Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates.
* fix: Allow dynamic content creation in useChatFunctions
- Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text.
* fix: Refine response message handling in useStepHandler
- Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow.
* refactor: Enhance GenerationJobManager with In-Memory Implementations
- Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling.
- Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance.
- Enhanced job metadata handling to support user messages and response IDs for resumable functionality.
- Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage.
* refactor: Enhance GenerationJobManager with improved subscriber handling
- Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count.
- Refined createJob and subscribe methods to ensure generation starts only when the first real client connects.
- Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness.
- Improved logging for subscriber checks and event handling to facilitate debugging and monitoring.
* chore: Adjust timeout for subscriber readiness in ResumableAgentController
- Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process.
* refactor: Update GenerationJobManager documentation and structure
- Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design.
- Updated comments to reflect the potential for Redis integration and the need for async refactoring.
- Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code.
* refactor: Convert GenerationJobManager methods to async for improved performance
- Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management.
- Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling.
- Increased timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management.
* refactor: Simplify initial response handling in useChatFunctions
- Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow.
* refactor: Clarify content handling logic in useStepHandler
- Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios.
- Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability.
* refactor: Improve message handling logic in useStepHandler
- Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized.
- Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow.
* fix: remove unnecessary content length logging in the chat stream response, simplifying the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context.
* refactor: Integrate streamId handling for improved resumable functionality for attachments
- Added streamId parameter to various functions to support resumable mode in tool loading and memory processing.
- Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience.
- Improved logging and attachment management to accommodate both standard and resumable modes.
* refactor: Streamline abort handling and integrate GenerationJobManager for improved job management
- Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager.
- Updated abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency.
- Simplified cleanup processes and improved error handling during abort operations.
- Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management.
* refactor: Unify streamId and conversationId handling for improved job management
- Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency.
- Simplified job creation and metadata management by removing redundant conversationId updates from callbacks.
- Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling.
- Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability.
* refactor: Enhance resumable SSE handling with improved UI state management and error recovery
- Added UI state restoration on successful SSE connection to indicate ongoing submission.
- Implemented detailed error handling for network failures, including retry logic with exponential backoff.
- Introduced abort event handling to reset UI state on intentional stream closure.
- Enhanced debugging capabilities for testing reconnection and clean close scenarios.
- Updated generation function to retry on network errors, improving resilience during submission processes.
* refactor: Consolidate content state management into IJobStore for improved job handling
- Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management.
- Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy.
- Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks.
- Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations.
* feat: Introduce Redis-backed stream services for enhanced job management
- Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options.
- Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios.
- Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations.
- Implemented RedisEventTransport for real-time event delivery across instances, enhancing scalability and responsiveness.
- Updated InMemoryJobStore to align with new async patterns for content and run step retrieval, ensuring consistent behavior across storage options.
* refactor: Remove redundant debug logging in GenerationJobManager and RedisEventTransport
- Eliminated unnecessary debug statements in GenerationJobManager related to subscriber actions and job updates, enhancing log clarity.
- Removed debug logging in RedisEventTransport for subscription and subscriber disconnection events, streamlining the logging output.
- Cleaned up debug messages in RedisJobStore to focus on essential information, improving overall logging efficiency.
* refactor: Enhance job state management and TTL configuration in RedisJobStore
- Updated the RedisJobStore to allow customizable TTL values for job states, improving flexibility in job management.
- Refactored the handling of job expiration and cleanup processes to align with new TTL configurations.
- Simplified the response structure in the chat status endpoint by consolidating state retrieval, enhancing clarity and performance.
- Improved comments and documentation for better understanding of the changes made.
* refactor: add cleanupOnComplete option to GenerationJobManager for flexible resource management
- Introduced a new configuration option, cleanupOnComplete, allowing immediate cleanup of event transport and job resources upon job completion.
- Updated completeJob and abortJob methods to respect the cleanupOnComplete setting, enhancing memory management.
- Improved cleanup logic in the cleanup method to handle orphaned resources effectively.
- Enhanced documentation and comments for better clarity on the new functionality.
* refactor: Update TTL configuration for completed jobs in InMemoryJobStore
- Changed the TTL for completed jobs from 5 minutes to 0, allowing for immediate cleanup.
- Enhanced cleanup logic to respect the new TTL setting, improving resource management.
- Updated comments for clarity on the behavior of the TTL configuration.
* refactor: Enhance RedisJobStore with local graph caching for improved performance
- Introduced a local cache for graph references using WeakRef to optimize reconnects for the same instance.
- Updated job deletion and cleanup methods to manage the local cache effectively, ensuring stale entries are removed.
- Enhanced content retrieval methods to prioritize local cache access, reducing Redis round-trips for same-instance reconnects.
- Improved documentation and comments for clarity on the caching mechanism and its benefits.
* feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore, add Redis Cluster support
- Introduced comprehensive integration tests for GenerationJobManager, covering both in-memory and Redis modes to ensure consistent job management and event handling.
- Added tests for RedisEventTransport to validate pub/sub functionality, including cross-instance event delivery and error handling.
- Implemented integration tests for RedisJobStore, focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior.
- Enhanced test setup and teardown processes to ensure a clean environment for each test run, improving reliability and maintainability.
* fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers
- Enhanced the error handling logic when retrieving content parts for allSubscribersLeft handlers, ensuring that any failures are logged appropriately.
- Updated the promise chain to catch errors from getContentParts, improving robustness and clarity in error reporting.
* ci: Improve Redis client disconnection handling in integration tests
- Updated the afterAll cleanup logic in integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore to use `quit()` for graceful disconnection of the Redis client.
- Added fallback to `disconnect()` if `quit()` fails, enhancing robustness in resource management during test teardown.
- Improved comments for clarity on the disconnection process and error handling.
* refactor: Enhance GenerationJobManager and event transports for improved resource management
- Updated GenerationJobManager to prevent immediate cleanup of eventTransport upon job completion, allowing final events to transmit fully before cleanup.
- Added orphaned stream cleanup logic in GenerationJobManager to handle streams without corresponding jobs.
- Introduced getTrackedStreamIds method in both InMemoryEventTransport and RedisEventTransport for better management of orphaned streams.
- Improved comments for clarity on resource management and cleanup processes.
* refactor: Update GenerationJobManager and ResumableAgentController for improved event handling
- Modified GenerationJobManager to resolve readyPromise immediately, eliminating startup latency and allowing early event buffering for late subscribers.
- Enhanced event handling logic to replay buffered events when the first subscriber connects, ensuring no events are lost due to race conditions.
- Updated comments for clarity on the new event synchronization mechanism and its benefits in both Redis and in-memory modes.
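The buffer-and-replay mechanism described above can be sketched as a small event stream — names here (`BufferedStream`, `emit`, `subscribe`) are illustrative, not the actual transport API:

```javascript
// Sketch: record events until the first subscriber attaches, then replay the
// buffer so nothing is lost to the race between job start and SSE connection.
class BufferedStream {
  constructor() {
    this.buffer = [];
    this.subscriber = null;
  }
  emit(event) {
    if (this.subscriber) {
      this.subscriber(event);
    } else {
      this.buffer.push(event); // no subscriber yet: hold the event
    }
  }
  subscribe(fn) {
    this.subscriber = fn;
    for (const event of this.buffer) {
      fn(event); // replay everything emitted before the subscriber arrived
    }
    this.buffer = [];
  }
}
```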
* fix: Update cache integration test command for stream to ensure proper execution
- Modified the test command for cache integration related to streams by adding the --forceExit flag to prevent hanging tests.
- This change enhances the reliability of the test suite by ensuring all tests complete as expected.
* feat: Add active job management for user and show progress in conversation list
- Implemented a new endpoint to retrieve active generation job IDs for the current user, enhancing user experience by allowing visibility of ongoing tasks.
- Integrated active job tracking in the Conversations component, displaying generation indicators based on active jobs.
- Optimized job management in the GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup.
- Updated relevant components and hooks to utilize the new active jobs feature, improving overall application responsiveness and user feedback.
* feat: Implement active job tracking by user in RedisJobStore
- Added functionality to retrieve active job IDs for a specific user, enhancing user experience by allowing visibility of ongoing tasks.
- Implemented self-healing cleanup for stale job entries, ensuring accurate tracking of active jobs.
- Updated job creation, update, and deletion methods to manage user-specific job sets effectively.
- Enhanced integration tests to validate the new user-specific job management features.
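The self-healing lookup described above can be illustrated with an in-memory analogue (the Redis version would prune set members whose job keys have expired). The names `userJobSets`, `jobs`, and `getActiveJobIds` are hypothetical:

```javascript
// Sketch: return a user's active job IDs, pruning entries whose job record
// no longer exists so the tracking set cannot drift out of sync.
function getActiveJobIds(userJobSets, jobs, userId) {
  const ids = userJobSets.get(userId) ?? new Set();
  for (const jobId of [...ids]) {
    if (!jobs.has(jobId)) {
      ids.delete(jobId); // stale entry: job completed, expired, or was deleted
    }
  }
  return [...ids];
}
```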
* refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore
* WIP: Add backend inspect script for easier debugging in production
* refactor: title generation logic
- Changed the title generation endpoint from POST to GET, allowing for more efficient retrieval of titles based on conversation ID.
- Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load.
- Introduced a queuing mechanism for title generation, ensuring titles are generated only after job completion.
- Updated relevant components and hooks to utilize the new title generation logic, enhancing user experience and application performance.
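The retry shape described above can be sketched as follows. The route path and field names are assumptions for illustration; only the exponential-backoff pattern itself is the point:

```javascript
// Delay doubles each attempt: 500ms, 1s, 2s, 4s, ...
function backoffDelay(attempt, baseDelayMs = 500) {
  return baseDelayMs * 2 ** attempt;
}

// Poll a (hypothetical) GET title endpoint until a title exists or we give up.
async function fetchTitleWithBackoff(conversationId, fetchTitle, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const title = await fetchTitle(conversationId);
    if (title) {
      return title;
    }
    await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
  }
  return null; // title was never generated in time
}
```

Capping attempts while growing the delay is what reduces server load: a slow title generation costs a handful of requests instead of a tight polling loop.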
* feat: Enhance updateConvoInAllQueries to support moving conversations to the top
* chore: temp. remove added multi convo
* refactor: Update active jobs query integration for optimistic updates on abort
- Introduced a new interface for active jobs response to standardize data handling.
- Updated query keys for active jobs to ensure consistency across components.
- Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness.
* refactor: useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints
- Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away.
- Updated ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type.
- Refactored imports in ChatView for better organization.
* refactor: streamline conversation title generation handling
- Removed unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase.
- Integrated queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application.
* feat: Add USE_REDIS_STREAMS configuration for stream job storage
- Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set.
- Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration.
- Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides.
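The defaulting rule described above — follow `USE_REDIS` unless `USE_REDIS_STREAMS` is explicitly set — can be sketched like this (the exact string parsing is an assumption; only the precedence is taken from the description):

```javascript
// Resolve whether Redis should back resumable stream job storage.
function resolveUseRedisStreams(env) {
  if (env.USE_REDIS_STREAMS !== undefined) {
    // Explicit setting always wins, even when USE_REDIS is enabled.
    return env.USE_REDIS_STREAMS === 'true';
  }
  // Not explicitly set: default to whatever USE_REDIS says.
  return env.USE_REDIS === 'true';
}
```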
* fix: title generation queue management for assistants
- Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams.
- Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete.
- Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow.
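The listener mechanism above can be sketched as a tiny notify pattern — in the real hook the listener would bump a `queueVersion` state value; the names below are illustrative:

```javascript
// Queue of conversation IDs awaiting title generation, plus change listeners.
const titleQueue = new Set();
const queueListeners = new Set();

// Subscribe to queue changes; returns an unsubscribe function.
function onQueueChange(listener) {
  queueListeners.add(listener);
  return () => queueListeners.delete(listener);
}

// Adding a conversation notifies every listener (e.g. a hook's setState).
function queueTitleGeneration(conversationId) {
  titleQueue.add(conversationId);
  for (const listener of queueListeners) {
    listener();
  }
}
```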
* refactor: streamline agent controller and remove legacy resumable handling
- Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic.
- Deprecated the legacy non-resumable path, providing a clear migration path for future use.
- Adjusted setHeaders middleware to remove unnecessary checks for resumable mode.
- Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance.
* feat: Add USE_REDIS_STREAMS configuration to .env.example
- Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams.
- Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management.
* refactor: remove unused setHeaders middleware from chat route
- Eliminated the setHeaders middleware from the chat route, streamlining the request handling process.
- This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks.
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth)
* fix(flow): add immediate abort handling and fix intervalId initialization
- Add immediate abort handler that responds instantly to abort signal
- Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error
- Consolidate cleanup logic into single function to avoid duplicate cleanup
- Properly remove abort event listener on cleanup
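The ordering fix and consolidated cleanup described above follow a common pattern, sketched here with hypothetical names (`startPolling`, `onTick`):

```javascript
// Declare intervalId BEFORE defining cleanup so the closure never hits a
// "Cannot access before initialization" (temporal dead zone) error, and route
// abort handling and normal teardown through one idempotent cleanup function.
function startPolling(signal, onTick) {
  let intervalId; // declared first: cleanup() below closes over it safely
  let cleaned = false;

  const cleanup = () => {
    if (cleaned) return; // guard against duplicate cleanup
    cleaned = true;
    clearInterval(intervalId);
    signal.removeEventListener('abort', cleanup); // drop the abort listener
  };

  // Immediate abort handling: respond as soon as the signal fires.
  signal.addEventListener('abort', cleanup);
  intervalId = setInterval(onTick, 1000);
  return { cleanup, isCleaned: () => cleaned };
}
```

Had `intervalId` been declared with `let`/`const` after `cleanup`, an abort firing early would throw inside the handler; declaring it first makes the closure safe regardless of timing.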
* fix(mcp): clean up OAuth flows on abort and simplify flow handling
- Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows
- Update createAbortHandler to clean up both flow types on tool call abort
- Pass abort signal to createFlow in returnOnOAuth path
- Simplify handleOAuthRequired to always cancel existing flows and start fresh
- This ensures user always gets a new OAuth URL instead of waiting for stale flows
* fix(agents): handle 'new' conversationId and improve abort reliability
- Treat 'new' as placeholder that needs UUID in request controller
- Send JSON response immediately before tool loading for faster SSE connection
- Use job's abort controller instead of prelimAbortController
- Emit errors to stream if headers already sent
- Skip 'new' as valid ID in abort endpoint
- Add fallback to find active jobs by userId when conversationId is 'new'
* fix(stream): detect early abort and prevent navigation to non-existent conversation
- Abort controller on job completion to signal pending operations
- Detect early abort (no content, no responseMessageId) in abortJob
- Set conversation and responseMessage to null for early aborts
- Add earlyAbort flag to final event for frontend detection
- Remove unused text field from AbortResult interface
- Frontend handles earlyAbort by staying on/navigating to new chat
* test(mcp): update test to expect signal parameter in createFlow
fix(agents): include 'new' conversationId in newConvo check for title generation
When the frontend sends 'new' as the conversationId, it should still trigger
title generation, since it is a new conversation. Renamed the boolean variable for clarity.
fix(agents): check abort state before completeJob for title generation
completeJob now triggers abort signal for cleanup, so we need to
capture the abort state beforehand to correctly determine if title
generation should run.
95a69df70e
🔒 feat: Add MCP server domain restrictions for remote transports (#11013)
* 🔒 feat: Add MCP server domain restrictions for remote transports
* 🔒 feat: Implement comprehensive MCP error handling and domain validation
- Added `handleMCPError` function to centralize error responses for domain restrictions and inspection failures.
- Introduced custom error classes: `MCPDomainNotAllowedError` and `MCPInspectionFailedError` for better error management.
- Updated MCP server controllers to utilize the new error handling mechanism.
- Enhanced domain validation logic in `createMCPTools` and `createMCPTool` functions to prevent operations on disallowed domains.
- Added tests for runtime domain validation scenarios to ensure correct behavior.
* chore: import order
* 🔒 feat: Enhance domain validation in MCP tools with user role-based restrictions
- Integrated `getAppConfig` to fetch allowed domains based on user roles in `createMCPTools` and `createMCPTool` functions.
- Removed the deprecated `getAllowedDomains` method from `MCPServersRegistry`.
- Updated tests to verify domain restrictions are applied correctly based on user roles.
- Ensured that domain validation logic is consistent and efficient across tool creation processes.
* 🔒 test: Refactor MCP tests to utilize configurable app settings
- Introduced a mock for `getAppConfig` to enhance test flexibility.
- Removed redundant mock definition to streamline test setup.
- Ensured tests are aligned with the latest domain validation logic.
---------
Co-authored-by: Atef Bellaaj <slalom.bellaaj@external.daimlertruck.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
f9060fa25f
🔧 chore: Update ESLint Config & Run Linter (#10986)
b5ab32c5ae
🎯 refactor: Centralize Agent Model Handling Across Conversation Lifecycle (#10956)
* refactor: Implement clearModelForNonEphemeralAgent utility for improved agent handling
- Introduced clearModelForNonEphemeralAgent function to manage model state for non-ephemeral agents across various components.
- Updated ModelSelectorContext to initialize model based on agent type.
- Enhanced useNavigateToConvo, useQueryParams, and useSelectMention hooks to clear model for non-ephemeral agents.
- Refactored buildDefaultConvo and endpoints utility to ensure proper handling of agent_id and model state.
- Improved overall conversation logic and state management for better performance and reliability.
* refactor: Enhance useNewConvo hook to improve agent conversation handling
- Added logic to skip access checks for existing agent conversations, utilizing localStorage to restore conversations after refresh.
- Improved handling of default endpoints for agents based on user access and existing conversation state, ensuring more reliable conversation initialization.
* refactor: Update ChatRoute and useAuthRedirect to include user roles
- Enhanced ChatRoute to utilize user roles for improved conversation state management.
- Modified useAuthRedirect to return user roles alongside authentication status, ensuring roles are available for conditional logic in ChatRoute.
- Adjusted conversation initialization logic to depend on the loaded roles, enhancing the reliability of the conversation setup.
* refactor: Update BaseClient to handle non-ephemeral agents in conversation logic
- Added a check for non-ephemeral agents in BaseClient, modifying the exceptions set to include 'model' when applicable.
- Enhanced conversation handling to improve flexibility based on agent type.
* test: Add mock for clearModelForNonEphemeralAgent in useQueryParams tests
- Introduced a mock for clearModelForNonEphemeralAgent to enhance testing of query parameters related to non-ephemeral agents.
- This addition supports improved test coverage and ensures proper handling of model state in relevant scenarios.
* refactor: Simplify mocks in useQueryParams tests for improved clarity
- Updated the mocking strategy for utilities in useQueryParams tests to use actual implementations where possible, while still suppressing test output for the logger.
- Enhanced the mock for tQueryParamsSchema to minimize complexity and avoid unnecessary validation during tests, improving test reliability and maintainability.
* refactor: Enhance agent identification logic in BaseClient for improved clarity
* chore: Import Constants in families.ts for enhanced functionality
04a4a2aa44
🧵 refactor: Migrate Endpoint Initialization to TypeScript (#10794)
* refactor: move endpoint initialization methods to typescript
* refactor: move agent init to packages/api
- Introduced `initialize.ts` for agent initialization, including file processing and tool loading.
- Updated `resources.ts` to allow optional appConfig parameter.
- Enhanced endpoint configuration handling in various initialization files to support model parameters.
- Added new artifacts and prompts for React component generation.
- Refactored existing code to improve type safety and maintainability.
* refactor: streamline endpoint initialization and enhance type safety
- Updated initialization functions across various endpoints to use a consistent request structure, replacing `unknown` types with `ServerResponse`.
- Simplified request handling by directly extracting keys from the request body.
- Improved type safety by ensuring user IDs are safely accessed with optional chaining.
- Removed unnecessary parameters and streamlined model options handling for better clarity and maintainability.
* refactor: moved ModelService and extractBaseURL to packages/api
- Added comprehensive tests for the models fetching functionality, covering scenarios for OpenAI, Anthropic, Google, and Ollama models.
- Updated existing endpoint index to include the new models module.
- Enhanced utility functions for URL extraction and model data processing.
- Improved type safety and error handling across the models fetching logic.
* refactor: consolidate utility functions and remove unused files
- Merged `deriveBaseURL` and `extractBaseURL` into the `@librechat/api` module for better organization.
- Removed redundant utility files and their associated tests to streamline the codebase.
- Updated imports across various client files to utilize the new consolidated functions.
- Enhanced overall maintainability by reducing the number of utility modules.
* refactor: replace ModelService references with direct imports from @librechat/api and remove ModelService file
* refactor: move encrypt/decrypt methods and key db methods to data-schemas, use `getProviderConfig` from `@librechat/api`
* chore: remove unused 'res' from options in AgentClient
* refactor: file model imports and methods
- Updated imports in various controllers and services to use the unified file model from '~/models' instead of '~/models/File'.
- Consolidated file-related methods into a new file methods module in the data-schemas package.
- Added comprehensive tests for file methods including creation, retrieval, updating, and deletion.
- Enhanced the initializeAgent function to accept dependency injection for file-related methods.
- Improved error handling and logging in file methods.
* refactor: streamline database method references in agent initialization
* refactor: enhance file method tests and update type references to IMongoFile
* refactor: consolidate database method imports in agent client and initialization
* chore: remove redundant import of initializeAgent from @librechat/api
* refactor: move checkUserKeyExpiry utility to @librechat/api and update references across endpoints
* refactor: move updateUserPlugins logic to user.ts and simplify UserController
* refactor: update imports for user key management and remove UserService
* refactor: remove unused Anthropics and Bedrock endpoint files and clean up imports
* refactor: consolidate and update encryption imports across various files to use @librechat/data-schemas
* chore: update file model mock to use unified import from '~/models'
* chore: import order
* refactor: remove migrated to TS agent.js file and its associated logic from the endpoints
* chore: add reusable function to extract imports from source code in unused-packages workflow
* chore: enhance unused-packages workflow to include @librechat/api dependencies and improve dependency extraction
* chore: improve dependency extraction in unused-packages workflow with enhanced error handling and debugging output
* chore: add detailed debugging output to unused-packages workflow for better visibility into unused dependencies and exclusion lists
* chore: refine subpath handling in unused-packages workflow to correctly process scoped and non-scoped package imports
* chore: clean up unused debug output in unused-packages workflow and reorganize type imports in initialize.ts
ad6ba4b6d1
🧬 refactor: Wire Database Methods into MCP Package via Registry Pattern (#10715)
* Refactor: MCPServersRegistry Singleton Pattern with Dependency Injection for DB methods consumption
* refactor: error handling in MCP initialization and improve logging for MCPServersRegistry instance creation.
- Added checks for mongoose instance in ServerConfigsDB constructor and refined error messages for clarity.
- Reorder and use type imports
---------
Co-authored-by: Atef Bellaaj <slalom.bellaaj@external.daimlertruck.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
ef1b7f0157
🧩 refactor: Decouple MCP Config from Startup Config (#10689)
* Decouple mcp config from start up config
* Chore: Work on AI Review and Copilot Comments
- setRawConfig is not needed since the private raw config is not needed any more
- !!serversLoading bug fixed
- added unit tests for route /api/mcp/servers
- copilot comments addressed
* chore: remove comments
* chore: rename data-provider dir for MCP
* chore: reorganize mcp specific query hooks
* fix: consolidate imports for MCP server manager
* chore: add dev-staging branch to frontend review workflow triggers
* feat: add GitHub Actions workflow for building and pushing Docker images to GitHub Container Registry and Docker Hub
* fix: update label for tag input in BookmarkForm tests to improve clarity
---------
Co-authored-by: Atef Bellaaj <slalom.bellaaj@external.daimlertruck.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
656e1abaea
🪦 refactor: Remove Legacy Code (#10533)
* 🗑️ chore: Remove unused Legacy Provider clients and related helpers
- Deleted OpenAIClient and GoogleClient files along with their associated tests.
- Removed references to these clients in the clients index file.
- Cleaned up typedefs by removing the OpenAISpecClient export.
- Updated chat controllers to use the OpenAI SDK directly instead of the removed client classes.
* chore/remove-openapi-specs
* 🗑️ chore: Remove unused mergeSort and misc utility functions
- Deleted mergeSort.js and misc.js files as they are no longer needed.
- Removed references to cleanUpPrimaryKeyValue in messages.js and adjusted related logic.
- Updated mongoMeili.ts to eliminate local implementations of removed functions.
* chore: remove legacy endpoints
* chore: remove all plugins endpoint related code
* chore: remove unused prompt handling code and clean up imports
- Deleted handleInputs.js and instructions.js files as they are no longer needed.
- Removed references to these files in the prompts index.js.
- Updated docker-compose.yml to simplify reverse proxy configuration.
* chore: remove unused LightningIcon import from Icons.tsx
* chore: clean up translation.json by removing deprecated and unused keys
* chore: update Jest configuration and remove unused mock file
- Simplified the setupFiles array in jest.config.js by removing the fetchEventSource mock.
- Deleted the fetchEventSource.js mock file as it is no longer needed.
* fix: simplify endpoint type check in Landing and ConversationStarters components
- Updated the endpoint type check to use strict equality for better clarity and performance.
- Ensured consistency in the handling of the azureOpenAI endpoint across both components.
* chore: remove unused dependencies from package.json and package-lock.json
* chore: remove legacy EditController, associated routes and imports
* chore: update banResponse logic to refine request handling for banned users
* chore: remove unused validateEndpoint middleware and its references
* chore: remove unused 'res' parameter from initializeClient in multiple endpoint files
* chore: remove unused 'isSmallScreen' prop from BookmarkNav and NewChat components; clean up imports in ArchivedChatsTable and useSetIndexOptions hooks; enhance localization in PromptVersions
* chore: remove unused import of Constants and TMessage from MobileNav; retain only necessary QueryKeys import
* chore: remove unused TResPlugin type and related references; clean up imports in types and schemas
03c9d5f79f
📑 refactor: File Search Citations Dual-Format Unicode Handling (#10888)
* 🔖 refactor: citation handling with support for both literal and Unicode formats
* refactor: file search messages for edge cases in documents
* 🔧 refactor: Enhance citation handling with detailed regex patterns for literal and Unicode formats
* 🔧 refactor: Simplify file search query handling by removing unnecessary parameters and improving result formatting
* ✨ test: Add comprehensive integration tests for citation processing flow with support for literal and Unicode formats
* 🔧 refactor: Improve regex match handling and add performance tests for citation processing
a07cc11cd6
🖇️ refactor: Improve prompt for Better Citation Formatting (#10858)
* Improve prompt for better citation formatting
* Provide format example
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* refactor: Simplify citation guidelines and response structure in tool loading
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
8bdc808074
⚡ refactor: Optimize & Standardize Tokenizer Usage (#10777)
* refactor: Token Limit Processing with Enhanced Efficiency
- Added a new test suite for `processTextWithTokenLimit`, ensuring comprehensive coverage of various scenarios including under, at, and exceeding token limits.
- Refactored the `processTextWithTokenLimit` function to utilize a ratio-based estimation method, significantly reducing the number of token counting function calls compared to the previous binary search approach.
- Improved handling of edge cases and variable token density, ensuring accurate truncation and performance across diverse text inputs.
- Included direct comparisons with the old implementation to validate correctness and efficiency improvements.
* refactor: Remove Tokenizer Route and Related References
- Deleted the tokenizer route from the server and removed its references from the routes index and server files, streamlining the API structure.
- This change simplifies the routing configuration by eliminating unused endpoints.
* refactor: Migrate countTokens Utility to API Module
- Removed the local countTokens utility and integrated it into the @librechat/api module for centralized access.
- Updated various files to reference the new countTokens import from the API module, ensuring consistent usage across the application.
- Cleaned up unused references and imports related to the previous countTokens implementation.
* refactor: Centralize escapeRegExp Utility in API Module
- Moved the escapeRegExp function from local utility files to the @librechat/api module for consistent usage across the application.
- Updated imports in various files to reference the new centralized escapeRegExp function, ensuring cleaner code and reducing redundancy.
- Removed duplicate implementations of escapeRegExp from multiple files, streamlining the codebase.
* refactor: Enhance Token Counting Flexibility in Text Processing
- Updated the `processTextWithTokenLimit` function to accept both synchronous and asynchronous token counting functions, improving its versatility.
- Introduced a new `TokenCountFn` type to define the token counting function signature.
- Added comprehensive tests to validate the behavior of `processTextWithTokenLimit` with both sync and async token counting functions, ensuring consistent results.
- Implemented a wrapper to track call counts for the `countTokens` function, optimizing performance and reducing unnecessary calls.
- Enhanced existing tests to compare the performance of the new implementation against the old one, demonstrating significant improvements in efficiency.
* chore: documentation for Truncation Safety Buffer in Token Processing
- Added a safety buffer multiplier to the character position estimates during text truncation to prevent overshooting token limits.
- Updated the `processTextWithTokenLimit` function to utilize the new `TRUNCATION_SAFETY_BUFFER` constant, enhancing the accuracy of token limit processing.
- Improved documentation to clarify the rationale behind the buffer and its impact on performance and efficiency in token counting.
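The ratio-based estimation with a safety buffer can be sketched as follows. This is a hedged, simplified version — the real `processTextWithTokenLimit` has a different signature and more edge handling; the function and constant names below are illustrative:

```javascript
// Estimate a cut position from the observed tokens-per-character ratio
// instead of binary-searching, then verify with one more count and shrink
// further only if still over the limit.
const TRUNCATION_SAFETY_BUFFER = 0.9; // undershoot to avoid overshooting

function truncateToTokenLimit(text, maxTokens, countTokens) {
  const total = countTokens(text);
  if (total <= maxTokens) {
    return text; // already within the limit: no truncation needed
  }
  // chars-per-token ratio gives a first estimate of where to cut
  let end = Math.floor(text.length * (maxTokens / total) * TRUNCATION_SAFETY_BUFFER);
  let candidate = text.slice(0, end);
  // Rarely loops more than once thanks to the safety buffer.
  while (end > 0 && countTokens(candidate) > maxTokens) {
    end = Math.floor(end * TRUNCATION_SAFETY_BUFFER);
    candidate = text.slice(0, end);
  }
  return candidate;
}
```

Compared with a binary search over cut positions, this needs only a couple of token-count calls in the common case, which matters when each count walks the whole string through a tokenizer.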
1477da4987
🖥️ feat: Add Proxy Support for Tavily API Tool (#10770)
* 🖥️ feat: Add Proxy Support for Tavily API Tool
- Integrated ProxyAgent from undici to enable proxy support for API requests in TavilySearch and TavilySearchResults.
- Updated fetch options to conditionally include the proxy configuration based on the environment variable, enhancing flexibility for network requests.
* ci: TavilySearchResults with Proxy Support Tests
- Added tests to verify the integration of ProxyAgent for API requests in TavilySearchResults.
- Implemented conditional logic to check for the PROXY environment variable, ensuring correct usage of ProxyAgent based on its presence.
- Updated test setup to clear mocks before each test for improved isolation and reliability.
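The conditional proxy wiring can be sketched like this. In the actual tool, `ProxyAgent` comes from `undici` and the proxy URL from the `PROXY` environment variable; here the constructor is injected so the sketch stays dependency-free, and the request shape is an assumption:

```javascript
// Build fetch options, attaching an undici-style ProxyAgent dispatcher only
// when a proxy URL is configured (undici's fetch honors `dispatcher`).
function buildTavilyFetchOptions(body, { proxyUrl, ProxyAgentCtor } = {}) {
  const options = {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  };
  if (proxyUrl && ProxyAgentCtor) {
    options.dispatcher = new ProxyAgentCtor(proxyUrl);
  }
  return options;
}
```

With no proxy configured, the options object carries no `dispatcher` and the request goes out directly, which matches the conditional behavior the tests above verify.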
5b8f0cba04
📄 refactor: Add Provider Fallback for Media Encoding using Client Endpoint (#10656)
When using direct endpoints (e.g., Google) instead of Agents, `this.options.agent` is undefined, causing provider detection to fail. This resulted in "Unknown content type document" errors for Google/Gemini PDF uploads. Added `?? this.options.endpoint` fallback in addDocuments(), addVideos(), and addAudios() methods to ensure correct provider detection for all endpoint types.
When using direct endpoints (e.g., Google) instead of Agents, `this.options.agent` is undefined, causing provider detection to fail. This resulted in "Unknown content type document" errors for Google/Gemini PDF uploads. Added `?? this.options.endpoint` fallback in addDocuments(), addVideos(), and addAudios() methods to ensure correct provider detection for all endpoint types. |