Commit graph

178 commits

Danny Avila
7129b1b1e4
📜 refactor: Improve Skill Handling Logs (#13057)
* refactor: Streamline batch upload error handling in `uploadCodeEnvFile`
* refactor: Enhance session info error logging in `getSessionInfo`
* refactor: Update error logging to use `logAxiosError` in various agent handlers and skill file processing functions
* refactor: Consolidate missing resource checks in `createToolExecuteHandler` for better clarity
2026-05-11 02:15:51 -04:00
Danny Avila
c67e2b54dc
🔐 feat: Mint Code API Auth Tokens (#13028)
* feat: Mint CodeAPI auth tokens

* style: Format CodeAPI download route

* fix: Prune CodeAPI token cache

* fix: Propagate CodeAPI managed auth

* test: Mock CodeAPI auth in traversal suite

* fix: Pass auth context to invoked skill cache

* feat: Mint CodeAPI plan context

* chore: Refresh CodeAPI auth guidance

* fix: Guard OpenID JWT fallback

* fix: Default CodeAPI JWT tenant in single-tenant mode

* chore: Update @librechat/agents to version 3.1.84 in package-lock.json and package.json files

* chore: Standardize references to Code API in comments and tests
2026-05-09 16:09:10 -04:00
Danny Avila
ac3600cdd7
🗂️ fix: Remove Generated Code Files From Prompt Context (#13037) 2026-05-09 11:38:53 -04:00
Danny Avila
93c4ef4ba8
🧱 refactor: typed CodeEnvRef + kind discriminator + principal-aware sandbox cache (#12960)
* 🧱 refactor: typed CodeEnvRef + kind discriminator + tenant-aware sandbox cache

Final cutover for the LibreChat ↔ codeapi sandbox file identity. Replaces
the magic string `${session_id}/${file_id}?entity_id=...` with a typed,
discriminated `CodeEnvRef`. Pre-release lockstep deploy with codeapi
#1455 and agents #148; no legacy aliases retained.

## Final shape

```ts
type CodeEnvRef =
  | { kind: 'skill'; id: string; storage_session_id: string; file_id: string; version: number }
  | { kind: 'agent'; id: string; storage_session_id: string; file_id: string }
  | { kind: 'user';  id: string; storage_session_id: string; file_id: string };
```

`kind` drives codeapi's sessionKey: `<tenant>:<kind>:<id>[✌️<version>]`
for shared kinds, `<tenant>:user:<userId>` for user-private (auth context
provides `userId`). `version` is statically required for `kind: 'skill'`
and forbidden otherwise via discriminated union — constraint holds at
compile time on every consumer, not just codeapi's runtime validator.

`id` is sessionKey-meaningful for `'skill'` / `'agent'`; informational
only for `'user'` (codeapi resolves user identity from auth context).
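The derivation rules above can be sketched as a small switch over the discriminated union. This is an illustrative reconstruction from the commit text, not codeapi's actual implementation; the `:` separator before `version` is an assumption (the original separator character did not survive extraction), and `deriveSessionKey` is a hypothetical name.

```ts
// Hypothetical sketch of the sessionKey derivation described above.
// Separator before version is assumed ':'; names are illustrative.
type CodeEnvRef =
  | { kind: 'skill'; id: string; storage_session_id: string; file_id: string; version: number }
  | { kind: 'agent'; id: string; storage_session_id: string; file_id: string }
  | { kind: 'user'; id: string; storage_session_id: string; file_id: string };

function deriveSessionKey(tenantId: string, userId: string, ref: CodeEnvRef): string {
  switch (ref.kind) {
    case 'skill':
      // version is part of the key, so bumping skill.version naturally
      // invalidates the prior cache entry
      return `${tenantId}:skill:${ref.id}:${ref.version}`;
    case 'agent':
      return `${tenantId}:agent:${ref.id}`;
    case 'user':
      // ref.id is informational; identity comes from the auth context
      return `${tenantId}:user:${userId}`;
  }
}
```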

## What changed

- `packages/data-provider/src/codeEnvRef.ts` — discriminated union +
  `CODE_ENV_KINDS` const-tuple keeps the runtime list and TS union
  locked together.
- Schemas: `metadata.codeEnvRef` and `SkillFile.codeEnvRef` enums
  tightened to `['skill', 'agent', 'user']`.
- `primeSkillFiles` writes `kind: 'skill'`, `id: skill._id`,
  `version: skill.version`. Cache-hit path reads `codeEnvRef`
  directly. Bumping `skill.version` on edit naturally invalidates
  the prior cache entry under the new sessionKey.
- `processCodeOutput` writes `kind: 'user'`, `id: req.user.id`. Output
  bucket is always user-scoped, regardless of which skill the
  execution invoked. New regression test pins the asymmetry.
- `primeFiles` reupload preserves `kind`/`id`/`version?` from the
  existing ref so a skill-cache-miss reupload doesn't silently demote
  to user bucket.
- `crud.js` upload functions (`uploadCodeEnvFile` /
  `batchUploadCodeEnvFiles`) thread `kind`/`id`/`version?` to the
  multipart form (codeapi #1455 option α). Without these on the wire,
  codeapi falls back to user bucketing and skill-cache invalidation
  never fires. Client-side validation mirrors codeapi's validator.
- `Files/process.js` — chat attachments use `kind: 'user'`; agent
  setup files use `kind: 'agent'`.
- Drops `entity_id` everywhere (struct, schema sub-docs, write paths,
  upload form fields). Drops `'system'` from the kind enum (no emitter
  ever existed).
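The `CODE_ENV_KINDS` const-tuple pattern mentioned in the first bullet can be sketched as follows; a minimal illustration of the technique, not the exact contents of `codeEnvRef.ts`:

```ts
// One source of truth for both the runtime list and the compile-time union,
// so a runtime validator and the TS union cannot drift apart.
const CODE_ENV_KINDS = ['skill', 'agent', 'user'] as const;
type CodeEnvKind = (typeof CODE_ENV_KINDS)[number]; // 'skill' | 'agent' | 'user'

// Runtime guard derived from the same tuple:
function isCodeEnvKind(value: string): value is CodeEnvKind {
  return (CODE_ENV_KINDS as readonly string[]).includes(value);
}
```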

## Test plan

- [x] `cd packages/data-provider && npx jest src/codeEnvRef.spec` — 4 / 4
- [x] `cd packages/data-schemas && npx jest` — 1447 / 1447
- [x] `cd packages/api && npx jest src/agents` — 81 / 81 in skillFiles +
  handlers + resources
- [x] `cd api && npx jest server/services/Files server/controllers/agents` —
  436 / 436
- [x] `cd api && npx jest server/services/Files/Code` — 98 / 98 (incl.
  new "outputs are user-scoped regardless of which skill the execution
  invoked" regression and "reupload forwards kind/id/version from
  existing ref")
- [x] `npx tsc --noEmit -p packages/data-{provider,schemas}/tsconfig.json
  && npx tsc --noEmit -p packages/api/tsconfig.json` — clean (only
  pre-existing unrelated dev errors in storage/balance, untouched here)

## Deploy notes

- **24h cache-miss burst** on first deploy: inputs (skill caches re-prime
  under the new sessionKey shape) and outputs (any pre-Phase C skill-output
  cached files become unreadable). Bounded by codeapi's 24h TTL.
- **Lockstep with codeapi #1455 and agents #148.** Either repo can land
  first since no aliases to drain, but the three deploys must overlap
  within the same maintenance window.
- **`@librechat/agents` bump to `3.1.79-dev.0`** required after agents
  #148 lands and is published.

## What this enables

Auth bridge work (JWT-based tenant/user identity between LC and codeapi)
— codeapi now derives sessionKey purely from `req.codeApiAuthContext.{
tenantId, userId}`, so the next chapter is replacing the header-asserted
user identity with a verified-claim path.

* 🩹 fix: persist execute_code uploads under codeEnvRef metadata key

Codex review P1 (chatgpt-codex-connector). `Files/process.js` was
storing the upload result under `metadata.fileIdentifier` even though:
- `uploadCodeEnvFile` now returns `{ storage_session_id, file_id }`,
  not the legacy magic string.
- The post-cutover schema (`File.metadata.codeEnvRef`) only declares
  `codeEnvRef` — mongoose strict mode silently strips unknown keys.
- All readers (`primeFiles`, `getCodeFilesByIds`,
  `categorizeFileForToolResources`, controller filtering) check
  `metadata.codeEnvRef`.

Net effect of the bug: chat-attached and agent-setup execute_code files
would lose their sandbox reference on save, and primeFiles would skip
them on subsequent code-execution turns — the file blob would still be
available locally but never re-mounted in the sandbox.

Fix: construct the full `CodeEnvRef` (`{ kind, id, storage_session_id,
file_id }`) at the write site and persist under `metadata.codeEnvRef`.
`BaseClient`'s "is this a code-env file" presence check accepts the new
shape alongside the legacy `fileIdentifier` for back-compat with any
pre-cutover records still in the database. Mirrors the same change in
`processAttachments.spec.ts` (which re-implements the BaseClient logic
for testability).

New regression tests in `process.spec.js` cover three cases:
- chat attachments (`messageAttachment=true`) → `kind: 'user'`
- agent setup (`messageAttachment=false`) → `kind: 'agent'`
- legacy `fileIdentifier` key is NOT persisted (would be schema-stripped)

* 🩹 fix: read storage_session_id on primed file refs (Codex P1)

Codex review (chatgpt-codex-connector). After Phase B's per-file
`session_id` → `storage_session_id` rename, `primeFiles` emits the
new field — but `seedCodeFilesIntoSessions` was still reading
`files[0].session_id` for the representative session and `f.session_id`
for the dedupe key. In runs with only primed attachments (no skill
seed), `representativeSessionId` was `undefined`, the function
returned the unchanged map, and `seedCodeFilesIntoSessions` silently
dropped the entire batch. The first `execute_code` call then started
without `_injected_files` and the agent couldn't see prior-turn
artifacts.

Fix:
- `codeFilesSession.ts`: read `f.storage_session_id` for both the
  dedupe key and the representative session id. JSDoc updated to
  match the new field name.
- `callbacks.js`: the two output-file persistence paths read
  `file.session_id` to pass to `processCodeOutput` — switch to
  `file.storage_session_id`. The original comment explicitly says
  this should be the STORAGE session, which is exactly the field
  Phase B renamed.
- `codeFilesSession.spec.ts`: fixture builder uses `storage_session_id`
  and `kind: 'user'` to match the post-cutover `CodeEnvFile` shape.

Lockstep coordination: this matches the post-bump shape of
`@librechat/agents` 3.1.79+. CI tsc errors against the currently-pinned
3.1.78 are expected and resolve when the dep bumps in this PR before
merge.

* 📦 chore: Bump `@librechat/agents` to version 3.1.80-dev.0 in package-lock and package.json files

* 🪪 fix: thread kind/id/version through codeapi /download URLs (Phase C α)

Symmetric fix for the upload-side wire change in 537725a. Codeapi's
`sessionAuth` middleware now requires `kind`/`id`/`version?` on every
download/freshness URL — without them it 400s with "kind must be one
of: skill, agent, user" before serving the file.

Three sites construct codeapi-side URLs that go through `sessionAuth`:

- `processCodeOutput` (`Files/Code/process.js`): `/download/<sess>/<id>`
  for freshly-generated sandbox outputs. Always `kind: 'user'` +
  `id: req.user.id` — code-output files are always user-private,
  regardless of which skill the run invoked.
- `getSessionInfo` (`Files/Code/process.js`): `/sessions/<sess>/objects/<id>`
  for the 23h freshness check. Pulls kind/id/version straight off the
  `codeEnvRef` already in scope — skill files stay skill-bucketed,
  user files stay user-bucketed.
- `/code/download/:session_id/:fileId` LC route (`routes/files/files.js`):
  proxies to codeapi for manual downloads. Code-output files only on
  this route, so `kind: 'user'` + `id: req.user.id`.

The `getCodeOutputDownloadStream` helper in `crud.js` now takes an
`identity` param, validated by a `buildCodeEnvDownloadQuery` helper
that mirrors `appendCodeEnvFileIdentity`'s shape rules: kind required
from the closed `{skill, agent, user}` set, version required for
'skill' and forbidden otherwise. Bad callers fail fast on the client
instead of round-tripping a 400.
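The shape rules described above (closed kind set, `version` required for `'skill'` and forbidden otherwise) can be sketched like this; the function and field names follow the commit text, but the body is an assumed reconstruction, not the verified helper:

```ts
// Hedged sketch of buildCodeEnvDownloadQuery's validation rules.
interface CodeEnvIdentity {
  kind: 'skill' | 'agent' | 'user';
  id: string;
  version?: number;
}

function buildCodeEnvDownloadQuery(identity: CodeEnvIdentity): string {
  const { kind, id, version } = identity;
  // Mirror the server-side rules so bad callers fail fast on the client
  // instead of round-tripping a 400.
  if (kind === 'skill' && version === undefined) {
    throw new Error('version is required for kind "skill"');
  }
  if (kind !== 'skill' && version !== undefined) {
    throw new Error(`version is forbidden for kind "${kind}"`);
  }
  const params = new URLSearchParams({ kind, id });
  if (version !== undefined) {
    params.set('version', String(version));
  }
  return params.toString();
}
```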

Also cleans up two log-noise sources reported alongside the 400:

- `logAxiosError` in `packages/api/src/utils/axios.ts` was dumping
  `error.response.data` raw. With `responseType: 'arraybuffer'` that's
  a `Buffer` (~4 chars per byte after JSON-serialization); with
  `responseType: 'stream'` it's a `Readable` whose internal state
  serializes the entire ring buffer + socket. New `renderResponseData`
  decodes small buffers as UTF-8 (truncated past 2KB) and stubs streams
  as `'[stream]'`. Diagnostics stay useful, log lines stop being
  megabytes.
- `/code/download` route's catch was bare `logger.error('...', error)`,
  bypassing the redactor. Switched to `logAxiosError` so it benefits
  from the same buffer/stream handling.
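The `renderResponseData` behavior described above can be sketched as follows; this is an illustrative reconstruction (decode small buffers as UTF-8, truncate past 2KB, stub streams), and the actual helper in `packages/api/src/utils/axios.ts` may differ in detail:

```ts
import { Readable } from 'node:stream';

const MAX_LOGGED_BYTES = 2048;

// Sketch: keep diagnostics useful without multi-megabyte log lines.
function renderResponseData(data: unknown): unknown {
  if (data instanceof Readable) {
    // Never JSON-serialize a live stream's internal ring buffer + socket.
    return '[stream]';
  }
  if (Buffer.isBuffer(data)) {
    const text = data.subarray(0, MAX_LOGGED_BYTES).toString('utf8');
    return data.length > MAX_LOGGED_BYTES ? `${text}… [truncated]` : text;
  }
  return data;
}
```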

Tests updated to match the new contract:
- crud.spec: `getCodeOutputDownloadStream` fixtures pass `userIdentity`;
  new cases cover skill identity (with version), bad kind rejection,
  skill-without-version rejection.
- process.spec: `getSessionInfo` test passes a full `codeEnvRef` object.

* ♻️ refactor: extract codeEnv identity helpers into packages/api

Per the project convention that new backend code lives in TypeScript
under `packages/api`, moves `appendCodeEnvFileIdentity` and
`buildCodeEnvDownloadQuery` from `api/server/services/Files/Code/crud.js`
into a new `packages/api/src/files/code/identity.ts` module.

Both helpers are pure validators that mirror codeapi's
`parseUploadSessionKeyInput` server-side rules (closed kind set,
`version` required for `'skill'` and forbidden otherwise) — they
deserve TS support and a dedicated spec rather than living as
JSDoc-typed helpers in the legacy `/api` workspace. The new module:

- Exports a `CodeEnvIdentity` interface using the
  `librechat-data-provider` `CodeEnvKind` discriminated union.
- Adds 13 unit tests in `identity.spec.ts` covering the validation
  matrix (skill+version, agent, user, and every rejection path) plus
  URL encoding for the download query.
- Re-exported from `packages/api/src/files/code/index.ts` alongside
  `classify`, `extract`, and `form`.

Consumer updates:
- `api/server/services/Files/Code/crud.js`: drops the local helpers
  and imports them from `@librechat/api`. Net -64 lines.
- `api/server/services/Files/Code/process.js`: same.
- Test mocks for `@librechat/api` in three spec files now stub the
  helpers' validation behavior locally rather than pulling them
  through `requireActual` (which would drag in provider-config
  init-time side effects). The package's `exports` field only
  surfaces the root barrel, so leaf imports aren't reachable from
  legacy `/api` test setup.

No runtime behavior change. Identity validation rules and emitted
form/query shapes are byte-for-byte identical pre/post.

* 🪪 fix: emit resource_id alongside id on _injected_files (skill 403 fix)

Companion to codeapi #1455 fix and agents 3.1.80-dev.1 — the wire
shape for shared-kind files now requires `resource_id` distinct from
the storage `id`. Without this LC change, codeapi's sessionKey
re-derivation on every shared-kind /exec rejects with 403
session_key_mismatch:

    cached:  legacy:skill:69dcf561...✌️59  (signed at upload, skill _id)
    derived: legacy:skill:ysPwEURuPk-...✌️59  (storage nanoid)

Emit sites updated:

- `primeInvokedSkills` cache-hit path: `resource_id: ref.id` (the
  persisted skill `_id` from `codeEnvRef.id`); `id: ref.file_id`
  unchanged (storage uuid).
- `primeInvokedSkills` fresh-upload path: `resource_id: skill._id.toString()`
  on every primed file (the `allPrimedFiles` builder type now carries
  the field).
- `processCodeOutput`'s `pushFile` (Code/process.js): `resource_id: ref.id`
  — for `kind: 'user'` this is informational (codeapi derives
  sessionKey from auth context) but emitted for shape uniformity
  with shared kinds.

Bumps `@librechat/agents` to `^3.1.80-dev.1` (the version that
ships the matching `CodeEnvFile.resource_id` field).

## Test plan

- [x] `cd packages/api && npx jest src/agents` — 67 / 67 pass
  (skillFiles fixtures updated to assert `resource_id` on the
  emitted CodeSessionContext.files).
- [x] `cd api && npx jest server/services/Files server/controllers/agents` —
  445 / 445 pass (process.spec fixtures updated for the reupload
  + cache-hit emission).
- [x] `npx tsc --noEmit -p packages/api/tsconfig.json` — clean.

* fix(skill-tool-call): carry resource_id through primeSkillFiles → artifact

Codeapi was 400ing every /exec following a `handle_skill` tool call
with `resource_id is invalid` (`type: 'undefined'`). Both code paths
in `primeSkillFiles` (cache-hit + fresh-upload) returned files
without `resource_id`/`kind`/`version`, and the artifact in
`handlers.ts` forwarded the stripped shape into
`tc.codeSessionContext.files` → `_injected_files`.

`primeInvokedSkills` (the NL-detected loader) had already been fixed
end-to-end; this commit aligns the tool-invoked path with the same
contract: `resource_id` = `skill._id.toString()`, `kind: 'skill'`,
`version` = the skill's monotonic counter.

Tests added to `skillFiles.spec.ts` lock the contract on
`primeSkillFiles` directly so future refactors can't silently drop
the resource identity again.

* fix(handlers.spec): align session_id → storage_session_id rename + kind discriminator

Pre-existing TS errors against the post-rename `CodeEnvFile` shape:
the test file still used `session_id` on per-file objects (renamed to
`storage_session_id` in agents Phase B/C) and was missing the `kind`
discriminator the discriminated union requires. Both inputs and the
matching `expect.toEqual(...)` mirrors updated together so the
runtime equality check still holds.

Lines 723-732 stay as-is — they sit behind `as unknown as
ToolCallRequest` and TS already skipped them.

* chore: fix `@librechat/agents`, correct version to 3.1.80-dev.0 in package.json files

* chore: bump `@librechat/agents` to version 3.1.80-dev.1 in package.json and package-lock.json

* chore: bump `@librechat/agents` to version 3.1.80-dev.2

* feat(observability): trace file priming chain from primeCodeFiles to _injected_files

Diagnosing the user-upload "files=[] on first /exec" bug requires
seeing where in the LC chain a file ref disappears. Prior to this
patch the chain (primeCodeFiles → primedCodeFiles → initialSessions
→ CodeSessionContext → _injected_files) was opaque end-to-end:
  - primeCodeFiles silently dropped files without `metadata.codeEnvRef`
  - reuploadFile catches all errors and continues with no signal
  - the handlers.ts handoff to codeapi never logged what it was sending

After this patch, a single grep on `[primeCodeFiles]` plus
`[code-env:inject]` shows the full per-file path:

  [primeCodeFiles] in: file_ids=N resourceFiles=M
  [primeCodeFiles] file=<id> path=skip reason=no-codeenvref filename=...
  [primeCodeFiles] file=<id> path=cache-hit-by-session storage_session_id=...
  [primeCodeFiles] file=<id> path=reupload reason=no-uploadtime ...
  [primeCodeFiles] file=<id> path=reupload reason=stale ...
  [primeCodeFiles] file=<id> path=reupload-success oldSession=... newSession=... newFileId=...
  [primeCodeFiles] file=<id> path=reupload-failed session=...
  [primeCodeFiles] file=<id> path=fresh-active storage_session_id=...
  [primeCodeFiles] out: returned=N skippedNoRef=M reuploadFailures=K

  [code-env:inject] tool=<name> files=N missingResourceId=K     (debug)
  [code-env:inject] M/N files missing resource_id ...           (warn)
  [code-env:inject] tool=<name> _injected_files=0 ...           (warn)

The boundary log warns when LC sends zero injected files on a
code-execution tool call — the user's actual symptom now surfaces on
the LC side instead of having to be correlated against codeapi's
`Request received { files: [] }`.

Tag chosen as `[code-env:inject]` rather than `[handoff:exec]` to
avoid collision with the app-level "handoff" semantic (subagent
handoff workflow).

Structural cleanup in primeFiles: replaced the `if (ref) { ... }`
nesting with an early `if (!ref) continue` so the per-path
instrumentation hooks land at top-level scope instead of indented
inside a conditional. Behavior unchanged; pushFile / reuploadFile
identical.

Spec fixtures (handlers.spec.ts, codeFilesSession.spec.ts) updated
to include `resource_id` on `CodeEnvFile` literals — required by
the post-3.1.80-dev.2 type now installed.

## Test plan

- [x] `cd packages/api && npx jest src/agents/handlers.spec.ts src/agents/codeFilesSession.spec.ts src/agents/skillFiles.spec.ts` — 69/69 pass
- [x] `cd api && npx jest server/services/Files/Code/process.spec.js` — 84/84 pass
- [x] `npx tsc --noEmit -p packages/api` — clean
- [x] `npx eslint` on all four touched files — clean

* chore: add CONSOLE_JSON_STRING_LENGTH to .env.example for JSON log string length configuration

* fix(files): align codeapi upload filename with LC's sanitized DB filename

User-attached files for code execution were uploading to codeapi
under `file.originalname` (raw upload filename, may contain spaces /
special chars) while LC's DB record stored the sanitized form
(`sanitizeFilename(file.originalname)`, underscores). Codeapi
preserves whatever filename the upload sent, so the sandbox saw
`/mnt/data/<originalname>` while LC's `primeFiles` toolContext text
+ `_injected_files.name` referenced `file.filename` (sanitized).

Visible failure: agent gets system prompt saying

    /mnt/data/librechat_code_api_-_active_customer_-_2025-11-05.xlsx

…tries that path, hits `FileNotFoundError`, then notices the
sandbox's actual `Available files` line says

    /mnt/data/librechat code api - active customer - 2025-11-05.xlsx

…retries with spaces, succeeds. Wastes a tool call per upload and
leaks raw filenames into model context.

Fix: sanitize once and use the sanitized form in both the codeapi
upload AND the LC DB record. Sandbox path = LC toolContext text =
in-memory ref name. No drift.

Reupload path (`Code/process.js` line 867 `filename: file.filename`)
already uses the sanitized DB name, so it stays consistent with the
fresh-upload path after this change.
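The "sanitize once" fix can be sketched as below. `sanitizeFilename`'s exact replacement rules are an assumption (spaces and special characters mapped to underscores, per the example paths above), and `prepareCodeEnvUpload` is a hypothetical name for the write site:

```ts
// Assumed sanitizer: collapse anything outside [word, dot, dash] to '_'.
function sanitizeFilename(name: string): string {
  return name.replace(/[^\w.\-]+/g, '_');
}

// Sanitize once; the same name feeds BOTH the codeapi upload and the LC DB
// record, so the sandbox path, toolContext text, and in-memory ref name
// can no longer drift apart.
function prepareCodeEnvUpload(originalname: string) {
  const filename = sanitizeFilename(originalname);
  return { uploadFilename: filename, dbFilename: filename };
}
```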

## Test plan

- [x] `cd api && npx jest server/services/Files/process` — 32/32 pass
- [x] `npx eslint` on the touched file — clean

* chore: bump `@librechat/agents` to version 3.1.80-dev.3 in package.json and package-lock.json
2026-05-08 12:29:43 -04:00
Danny Avila
8f92ec012c
🧭 fix: Navigate Signed CDN Downloads (#12998)
* fix(files): navigate signed CDN downloads

* fix(files): avoid popup target for signed downloads

* test(files): restore download URL mock
2026-05-07 13:36:57 -04:00
Danny Avila
1bc2692a15
🌥️ feat: Add Optional Region-aware S3/CloudFront Storage Keys (#12987)
* feat(files): add optional region-aware storage keys

* test(files): fix region storage CI fixtures

* feat(files): finalize inline CloudFront asset namespaces

* fix(files): allow wildcard region CloudFront cookies

* fix(files): preserve legacy storage key compatibility

* fix(files): align CloudFront clear cookie cleanup

* fix(files): clear legacy CloudFront cookie scopes

* chore(files): clean up storage review nits

* fix(files): keep inline namespaces CloudFront-only
2026-05-06 23:16:56 -04:00
Danny Avila
5c338a4642
🛂 fix: Harden Agent File Preview Access (#12981)
* fix: harden agent file access

* style: format agent file query

* fix: prune agent file refs on alternate writes

* test: fix agent pruning specs
2026-05-06 19:56:04 -04:00
Danny Avila
9c81792d25
🔐 feat: Add Signed CloudFront File Downloads (#12970)
* feat: add signed CloudFront downloads

* fix: preserve local IdP avatar paths

* fix: address signed download review findings

* fix: harden CloudFront cookie scope validation

* fix: preserve URL save API compatibility

* fix: store CDN SSO avatars under shared prefix

* fix: Harden CloudFront tenant file access

* fix: Preserve CloudFront download compatibility

* fix: Address CloudFront review follow-ups

* fix: Preserve file URL fallback user paths

* fix: Address download review hardening

* fix: Use file owner for S3 RAG cleanup

* fix: Address final download review nits

* fix: Clear stale avatar CloudFront cookies

* fix: Align download filename helpers with dev

* fix: Address final CloudFront review follow-ups

* fix: Stream S3 URL uploads

* fix: Set S3 stream upload length

* fix: Preserve download metadata filepath

* fix: Avoid remote content length for stream uploads

* fix: Use bounded multipart URL uploads

* fix: Harden S3 filename boundaries
2026-05-06 19:48:30 -04:00
Danny Avila
6c6c72def7
🚀 feat: Decouple File Attachment Persistence from Preview Rendering (#12957)
* 🗂️ feat: add `status` lifecycle to file records for two-phase previews

Schema and model foundation for decoupling the agent's final response
from CPU-heavy office-format HTML extraction.

- `MongoFile.status: 'pending' | 'ready' | 'failed'` (indexed) and
  `previewError?: string` mirror the lifecycle: phase-1 emits the file
  record at `pending` so the response is unblocked; phase-2 transitions
  to `ready` (with text/textFormat) or `failed` (with previewError) in
  the background. Absent for legacy records — clients treat that as
  `ready` for back-compat.
- Mirror types added to `TFile` in data-provider so frontend cache
  consumers see the new fields.
- New `sweepOrphanedPreviews(maxAgeMs)` method on the file model
  recovers stale `pending` records left behind by a process restart
  mid-extraction; transitions them to `failed` with
  `previewError: 'orphaned'`. Cheap because `status` is indexed.
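The sweep described in the last bullet amounts to one indexed `updateMany`. A minimal sketch under assumed mongoose-style model shapes (field names follow the commit text; the real model method may differ):

```ts
interface FileModelLike {
  updateMany: (filter: object, update: object) => Promise<unknown>;
}

// Transition stale 'pending' records (older than maxAgeMs) to 'failed'.
// Cheap because status is indexed; updatedAt cutoff is an assumption here.
async function sweepOrphanedPreviews(FileModel: FileModelLike, maxAgeMs: number) {
  const cutoff = new Date(Date.now() - maxAgeMs);
  return FileModel.updateMany(
    { status: 'pending', updatedAt: { $lt: cutoff } },
    { $set: { status: 'failed', previewError: 'orphaned' } },
  );
}
```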

* feat: two-phase code-execution preview flow (unblocks final response)

The agent's final response no longer waits on CPU-heavy office HTML
extraction. Phase-1 (download + storage save + DB record at
`status: 'pending'`) is awaited as before; phase-2 (extract +
`updateFile`) runs in the background with a hard 60s ceiling.

Three flows, all funneling through `processCodeOutput` and updated to
the new `{ file, finalize? }` return shape:

- `callbacks.js` (chat-completions + Open Responses streaming): emit
  the phase-1 attachment immediately (carries `status: 'pending'` for
  office buckets so the UI shows "preparing preview…"), then
  fire-and-forget `finalize()`. If the SSE stream is still open when
  phase-2 lands, push an `attachment` update event with the same
  `file_id` so the client merges over the placeholder in place.

- `tools.js` direct endpoint: same split — return the phase-1
  metadata immediately, run extraction in the background. Client
  polls for the resolved record.

`finalize()` wraps the existing 12s per-render timeout in a 60s outer
`withTimeout`. The HTML-or-null contract from #12934 is preserved:
office types that fail extraction transition to `status: 'failed'`
with `previewError: 'parser-error' | 'timeout'` rather than falling
back to plain text (would be an XSS vector).
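A `withTimeout` wrapper of the kind implied above can be sketched as a `Promise.race` against a timer; this is an assumed shape, not the project's actual helper:

```ts
// Race the work against a hard ceiling; clear the timer either way so the
// process isn't kept alive by a stale timeout.
function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  let timer!: ReturnType<typeof setTimeout>;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```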

Promises continue running after the HTTP response closes (Node
doesn't kill them). The boot-time orphan sweep covers the only case
that loses progress — actual process restart mid-extraction.

`primeFiles` annotates the agent's `toolContext` line for prior-turn
files: `(preview not yet generated)` for pending, `(preview
unavailable: <reason>)` for failed. The model can volunteer "you can
still download it" instead of pretending the preview is fine.

`hasOfficeHtmlPath` exported from `@librechat/api` so `processCodeOutput`
can decide whether a file expects a preview at all.

* 🔍 feat: `GET /api/files/:file_id/preview` endpoint and boot orphan sweep

- New `GET /api/files/:file_id/preview` route returns
  `{ status, text?, textFormat?, previewError? }`. The frontend's
  `useFilePreview` React Query hook polls this while phase-2 is in
  flight, then auto-stops on terminal status. ACL identical to the
  download route (reuses `fileAccess` middleware). Defaults `status`
  to `'ready'` for legacy records so back-compat is implicit.
  `text` only included when `status === 'ready'` and non-null —
  preserves the HTML-or-null security contract from #12934.

- `sweepOrphanedPreviews()` invoked on boot in both `server/index.js`
  and `server/experimental.js`. Recovers any `pending` records left
  behind by a process restart mid-extraction (the only case the
  in-process two-phase flow can't handle on its own). Fire-and-forget
  so a transient sweep failure doesn't block startup.
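The response-shaping rules for the preview route (default legacy records to `'ready'`; include `text` only when ready and non-null) can be sketched as a pure projection; field names follow the commit text, the helper name is hypothetical:

```ts
interface FileRecord {
  status?: 'pending' | 'ready' | 'failed';
  text?: string | null;
  textFormat?: string;
  previewError?: string;
}

function toPreviewResponse(file: FileRecord) {
  // Legacy records predate the status field; treat them as ready.
  const status = file.status ?? 'ready';
  const body: Record<string, unknown> = { status };
  // text only when ready AND non-null — preserves the HTML-or-null contract.
  if (status === 'ready' && file.text != null) {
    body.text = file.text;
    if (file.textFormat) body.textFormat = file.textFormat;
  }
  if (status === 'failed' && file.previewError) {
    body.previewError = file.previewError;
  }
  return body;
}
```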

* 🖥️ feat: frontend two-phase preview consumer (polling + UI states)

Wires the React side to the new lifecycle so the user sees what's
happening with their file while phase-2 extraction runs in the
background and after the response stream closes.

- `useAttachmentHandler` upserts by `file_id` (was append-only) so
  the phase-2 SSE update event merges over the pending placeholder
  in place. Lightweight attachments without a `file_id`
  (web_search / file_search citations) keep the legacy append path.

- `useFilePreview(file_id)` React Query hook with
  `refetchInterval: (data) => data?.status === 'pending' ? 2500 : false`
  so polling auto-stops on the first terminal response without the
  caller having to flip `enabled`.

- `useAttachmentPreviewSync(attachment)` bridges polled data into
  `messageAttachmentsMap`. Polling enabled iff
  `status === 'pending' && isAnySubmitting` — per the design ask:
  active polling while the LLM is still generating, then quiet.
  Process-restart and post-stream cases are covered by polling on
  the next interaction.

- `Attachment.tsx` renders a small `PreviewStatusIndicator` (spinner +
  "Preparing preview…" for pending, alert icon + "Preview unavailable"
  for failed) inside `FileAttachment`. Download button stays fully
  functional in both states. Two new English locale keys.

- Data-provider scaffolding: `TFilePreview` type, `endpoints.filePreview`,
  `dataService.getFilePreview`, `QueryKeys.filePreview`.
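The polling gate in the second bullet is just a function of the last payload, which React Query accepts as `refetchInterval`. Extracted here as a pure function for illustration (the hook itself, with its query key and fetcher, is omitted):

```ts
type PreviewStatus = 'pending' | 'ready' | 'failed';

// Returns a poll interval while the preview is still pending, and false on
// any terminal (or missing) status — so polling auto-stops without the
// caller having to flip `enabled`.
function previewRefetchInterval(data?: { status?: PreviewStatus }): number | false {
  return data?.status === 'pending' ? 2500 : false;
}
```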

* 🧪 fix: stub `useAttachmentPreviewSync` in pre-existing Attachment test mocks

The new `useAttachmentPreviewSync` hook is called unconditionally inside
`FileAttachment` (added in the prior commit). Two pre-existing test
files mock `~/hooks` to provide `useLocalize` only — the un-mocked
preview hook reference resolved to undefined and crashed render with
`(0 , _hooks.useAttachmentPreviewSync) is not a function` on the
Ubuntu/Windows CI runners.

Fix is local to the test mocks: add a no-op stub that returns
`{ status: 'ready' }` so the component renders the legacy chip path.
The two-phase preview behavior itself has its own dedicated suites
(`useAttachmentHandler.spec.tsx`, `useAttachmentPreviewSync.spec.tsx`).

* 🐛 fix: route phase-2 attachment update to current-run messageId

Codex P1 review on PR #12957. `processCodeOutput` intentionally
preserves the original DB `messageId` across cross-turn filename reuse
so `getCodeGeneratedFiles` can still trace a file back to the
assistant message that originally produced it. The phase-1 SSE emit
already routes by the current run's messageId — `processCodeOutput`
runtime-overlays it via `Object.assign(file, { messageId, toolCallId })`
and the callback writes `result.file` directly.

Phase-2 was passing the raw `updateFile` return through
`attachmentFromFileMetadata`, which read `messageId` straight off the
DB record. On a turn-N run that re-emitted a filename from turn-1
(e.g. agent writes `output.csv` again), the phase-2 SSE update
routed to `turn-1-msg` instead of `turn-N-msg`. Frontend's
`useAttachmentHandler` upserts under the wrong messageAttachmentsMap
slot — turn-N's pending chip stays stuck at "preparing preview…"
while turn-1's already-resolved attachment gets re-merged.

Fix: thread `runtimeMessageId` through `attachmentFromFileMetadata`
and pass `metadata.run_id` from the phase-2 emit site. Mirrors how
phase-1 sources its messageId. Tests cover the cross-turn reuse case
plus the writableEnded / null-finalize / no-finalize paths to lock
in the broader phase-2 emit contract.

* 🛠️ refactor: address codex audit findings (wire-shape parity, DRY, defensive catch)

Comprehensive audit on PR #12957. Resolves all valid findings:

- **MAJOR #1 — Wire-shape parity**: phase-1 ships the full `fileMetadata`
  record over SSE; phase-2 was using a tight `attachmentFromFileMetadata`
  projection. Drop the projection and have phase-2 spread `{...updated,
  messageId, toolCallId}` so both events match the long-standing
  legacy phase-1 shape clients depend on.

- **MAJOR #2 — DRY**: extract `runPhase2Finalize({ finalize, fileId,
  onResolved })` into `process.js` (alongside `processCodeOutput` whose
  contract it pairs with). Both `callbacks.js` paths and `tools.js`
  now flow through it. A single catch path eliminates the divergence
  surface — the fix that landed in 01704d4f0 (cross-turn messageId
  routing) patched a symptom of this duplication risk.

- **MINOR #3 — JSDoc accuracy**: `finalizePreview`'s buffer is bounded
  by `fileSizeLimit`, not the 1MB extractor cap. Updated and added a
  note about peak heap from queued buffers.

- **MINOR #4 — Defensive catch**: `runPhase2Finalize`'s catch attempts
  a best-effort `updateFile({ status: 'failed', previewError:
  'unexpected' })` for the file_id, so a programming bug in
  `finalizePreview` doesn't leave the record stuck `'pending'` until
  the next boot-time orphan sweep.

- **NIT #6 — Stale PR refs**: 12952 → 12957 in 3 places.

- **NIT #7 — Schema bound**: `previewError` capped at `maxlength: 200`
  to prevent a future codepath from accidentally persisting a stack
  trace.

Skipped per audit verdict (non-blocking):
- #5 (memory pressure): documented in JSDoc; impl change was reviewer's
  "consider", not actionable.
- #8 (double DB query per poll): low cost, indexed by_id, polling is
  gated narrow.
- #9 (TAttachment cast): the union type is intentional; the casts are
  safe widening, refactoring TAttachment is invasive and out of scope.

Tests: 11 new (7 `runPhase2Finalize` unit tests covering happy path,
null-finalize, throws, double-fail, no-fileId, no-onResolved; +4
wire-shape parity assertions in the existing cross-turn test). 328
backend tests pass; 528 frontend tests pass; lint and typecheck clean.

* 🛡️ refactor: address codex P1+P2 + rename to drop phase-1/2 jargon

Codex round 2 review on PR #12957 caught two race conditions and one
recovery gap, all triggered by cross-turn filename reuse (`claimCodeFile`
intentionally returns the same `file_id` for the same
`(filename, conversationId)` across turns). Plus the naming cleanup the
user requested — internal "phase 1 / phase 2" vocabulary leaks across
sprints, so replace it everywhere with terms describing what's actually
happening.

P1 — stale render overwrites newer revision (process.js)
  Two turns reusing `output.csv` share a `file_id`. If turn-1's
  background render resolves AFTER turn-2's persist step, the
  unconditional `updateFile` writes turn-1's stale text/status over
  turn-2's pending placeholder. Fix: stamp a fresh `previewRevision`
  UUID on every emit, thread it through `finalizePreview`, and make
  the commit conditional via a new optional `extraFilter` argument
  on `updateFile` (`{ previewRevision: <expected> }`). The defensive
  `updateFile` in `runPreviewFinalize`'s catch uses the same guard
  so a programming error from an older render also can't override a
  newer turn.
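
A minimal in-memory sketch of the conditional-commit semantics (the real `updateFile` merges `extraFilter` into a MongoDB filter document; the record shape and helper names here are assumptions drawn from this message):

```typescript
interface FileRecord {
  file_id: string;
  status: string;
  text?: string;
  previewRevision?: string;
}

/**
 * Applies `update` only when the record exists AND every key in `extraFilter`
 * matches the stored value — so a stale render carrying an old
 * `previewRevision` cannot overwrite a newer turn's placeholder.
 */
function updateFile(
  store: Map<string, FileRecord>,
  fileId: string,
  update: Partial<FileRecord>,
  extraFilter: Partial<FileRecord> = {},
): boolean {
  const record = store.get(fileId);
  if (record == null) {
    return false;
  }
  for (const [key, value] of Object.entries(extraFilter)) {
    if (record[key as keyof FileRecord] !== value) {
      return false; // filter miss: the write is a no-op
    }
  }
  store.set(fileId, { ...record, ...update });
  return true;
}
```

The defensive catch path can use the same guard, so even a programming error in an older render can't clobber a newer revision.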

P1 — stale React Query cache on pending remount (queries.ts)
  Same root cause from the frontend side. Cache key
  `[QueryKeys.filePreview, file_id]` may hold a prior turn's `'ready'`
  payload; with `refetchOnMount: false` and the polling gate on
  `pending`, polling never starts for the new placeholder. Fix:
  `useAttachmentHandler` invalidates that query whenever an attachment
  with a `file_id` arrives. Both initial-emit and update events
  trigger invalidation — uniform gate.

P2 — quick-restart orphans skipped by boot sweep (files.js)
  Boot `sweepOrphanedPreviews` uses a 5-min cutoff for multi-instance
  safety. A crash + restart inside the cutoff leaves `pending` records
  that never get touched again. Fix: lazy sweep inside the preview
  endpoint — if a polled record is `pending` and `updatedAt` is older
  than 5 min, mark it `failed:orphaned` on the spot before responding.
  Conditional on the same `updatedAt` we observed so a concurrent
  legitimate update wins. Cheap, bounded by user activity.
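
The lazy-sweep decision reduces to a small predicate (names illustrative, not the actual implementation; a later commit in this series tightens the per-request cutoff to 2 minutes):

```typescript
const LAZY_SWEEP_CUTOFF_MS = 5 * 60 * 1000;

/**
 * A polled record that is still 'pending' past the cutoff is orphaned: the
 * render ceiling has long expired, so no legitimate finalize is coming.
 */
function isOrphanedPending(
  record: { status: string; updatedAt: Date },
  now: Date,
  cutoffMs: number = LAZY_SWEEP_CUTOFF_MS,
): boolean {
  return (
    record.status === 'pending' &&
    now.getTime() - record.updatedAt.getTime() > cutoffMs
  );
}
```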

Naming cleanup
  - `runPhase2Finalize` → `runPreviewFinalize`
  - `PHASE_TWO_TIMEOUT_MS` → `PREVIEW_FINALIZE_TIMEOUT_MS`
  - All `phase-1` / `phase-2` / `two-phase` prose replaced with
    "the immediate emit", "the deferred render", "the persist step",
    "the deferred preview", etc. Skill-feature `phase 1/2` references
    (different feature) left alone.

Tests: 10 new (4 lazy-sweep × preview endpoint, 3 cache-invalidation ×
useAttachmentHandler, 3 extraFilter × updateFile data-schemas).
Backend 332/332, frontend 531/531, data-schemas 37/37, lint clean.

* 🛠️ refactor: address comprehensive review (round 3) — stale-cache MAJOR + 3 minors

Comprehensive review on PR #12957 caught a P1 follow-on bug from the
prior `invalidateQueries` fix, plus 3 maintainability findings.

MAJOR: stale React Query cache not actually fixed by `invalidateQueries`
  The previous fix called `invalidateQueries` to flush stale cached
  preview data on cross-turn filename reuse. But `useFilePreview` had
  `refetchOnMount: false`, which made the new observer read the
  stale-marked 'ready' data without refetching. The polling
  `refetchInterval` then evaluated against stale 'ready' → returned
  `false` → polling never started → user stuck on stale content.

  Fix (belt-and-suspenders):
    a) `useAttachmentHandler` switched to `removeQueries` — drops the
       cache entry entirely so the next mount has nothing to read and
       must fetch.
    b) `useFilePreview` no longer sets `refetchOnMount: false`, so the
       React Query default (`true`) kicks in — second line of defense
       if any future codepath observes stale data before the handler
       has a chance to evict.
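
The eviction step, duck-typed against the React Query client so it can be shown standalone (the query-key shape and hook wiring are assumptions from this message):

```typescript
interface PreviewQueryClient {
  removeQueries: (queryKey: readonly unknown[]) => void;
}

/**
 * On every attachment event carrying a file_id, drop the cached preview
 * entirely — unlike invalidateQueries, removeQueries leaves nothing for the
 * next mount to read, so it must fetch fresh.
 */
function evictStalePreview(
  queryClient: PreviewQueryClient,
  fileId: string | undefined,
): boolean {
  if (!fileId) {
    return false;
  }
  queryClient.removeQueries(['filePreview', fileId]);
  return true;
}
```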

MINOR: `finalizePreview` JSDoc missing `previewRevision` param
  Added with explanation of the conditional update guard.

MINOR: asymmetric stream-writable guard between SSE protocols
  Chat-completions delegated the gate to `writeAttachmentUpdate`;
  Open Responses inlined `!res.writableEnded && res.headersSent`.
  Extracted `isStreamWritable(res, streamId)` predicate; both paths
  + `writeAttachmentUpdate` now share the single source of truth.
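
Reconstructed from the branches this series describes (the exact signature is an assumption): a stream-id transport has its own delivery path; otherwise the response must exist, have sent headers, and not have ended.

```typescript
interface WritableLike {
  headersSent: boolean;
  writableEnded: boolean;
}

function isStreamWritable(res: WritableLike | null, streamId?: string): boolean {
  if (streamId) {
    return true; // stream-id transport: delivery doesn't depend on `res`
  }
  return res != null && res.headersSent && !res.writableEnded;
}
```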

NIT: `(data as Partial<TFile>).file_id` cast repeated 4 times
  Extracted to a `fileId` local at the top of the handler.

Tests: existing 9 invalidate-tests rewritten as remove-tests; +1 new
lock-in test asserts removeQueries is called and invalidateQueries
is NOT (regression guard against round-3 finding). 332 backend pass,
532 frontend pass, lint clean.

Skipped findings (deferred / acceptable):
- MINOR: post-submission pending state has no auto-recovery — the
  `isAnySubmitting` polling gate was the user's explicit design;
  LLM context surfaces failed/pending so the model can volunteer it.
  Worth a follow-up if real users hit it.
- NIT: double DB query per preview poll — reviewer marked acceptable;
  changing `fileAccess` middleware is out of scope.

* 🛡️ test: address comprehensive review NITs (initial-emit guard + isStreamWritable coverage)

NIT — chat-completions initial emit skips writableEnded check
  The Open Responses initial emit was switched to use the new
  `isStreamWritable` predicate in the round-3 commit, but the
  chat-completions initial emit kept the older narrower check
  (`streamId || res.headersSent`). On a client disconnect mid-stream
  (`writableEnded === true`) it would still hit `res.write` and
  raise `ERR_STREAM_WRITE_AFTER_END` — caught by the outer IIFE
  catch but logged as noise. Switch this site to `isStreamWritable`
  too so both initial-emit paths share the same gate as the
  deferred update emits.

NIT — `isStreamWritable` not directly unit-tested
  The predicate was only covered indirectly via the deferred-preview
  SSE tests (writableEnded skip, headersSent check). Export from
  `callbacks.js` and add 5 parametric tests pinning down each branch
  (streamId truthy, res null, !headersSent, writableEnded, happy
  path) so a future condition addition can't silently regress.

* 🐛 fix: stuck "Preparing preview…" + inline the chip subtitle

Two related fixes for a stuck-spinner bug a user reported in manual
testing of PR #12957.

**Stuck spinner (the bug)**
The deferred preview render can complete a few seconds AFTER the SSE
stream closes (typical case: PPTX render finishes ~3s after the LLM
emits FINAL). When that happens, the SSE update is silently dropped
(`isStreamWritable` returns false on a closed stream) and polling is
the only recovery path.

The earlier polling gate was `status === 'pending' && isAnySubmitting`,
which mirrored the original design intent ("only query while the LLM
is still generating"). But `isAnySubmitting` flips false the moment
the model emits FINAL — milliseconds before the deferred render
commits. Polling never runs, and the chip stays "Preparing preview…"
forever even though the DB has `status: 'ready'` with valid HTML.

Drop the `isAnySubmitting` part of the gate. `useFilePreview`'s
`refetchInterval` is already a function-form that returns `false` on
the first terminal response, so polling auto-stops within one tick of
resolution. The server-side render ceiling (60s) plus the lazy sweep
in the preview endpoint cap the worst case to ~24 polls per pending
attachment. Polling itself never blocks UX — the gate's purpose was
"don't waste cycles", and capping by terminal status is the correct
expression of that.

**Inline the chip subtitle (the visual)**
The previous design rendered "Preparing preview…" as a loose-feeling
spinner+text BELOW the file chip. The chip itself looked done while a
floating annotation said it wasn't.

`FileContainer` gains an optional `subtitle?: ReactNode` prop that
overrides the default file-type label. `Attachment.tsx` passes a
`PreviewStatusSubtitle` (spinner + "Preparing preview…" / alert +
"Preview unavailable") into that slot when the file's preview is
pending or failed. The chip footprint stays identical to its `'ready'`
form — just the second row swaps from "PowerPoint Presentation" to
the status indicator. No floating element, no layout shift.

Tests: regression test pinning down "polling stays enabled after the
LLM finishes" so a future revert can't reintroduce the stuck-spinner
bug. Existing FileContainer tests pass unchanged (subtitle override
is opt-in). 522 frontend tests pass; lint clean.

* 🐛 fix: deferred-preview survives reload + matches artifact card chrome

Fixes the remaining stuck-pending case after the polling gate fix: on
a reloaded conversation, message.attachments come from the DB frozen at
the immediate-persist `status: 'pending'`, but `messageAttachmentsMap`
is empty because no SSE handler ever fired for that messageId. Polling
now INSERTS a new live entry when no record matches the file_id, and
`useAttachments` merges live entries onto DB entries by file_id so the
resolved text/textFormat reach `artifactTypeForAttachment` and the
chip routes through the proper PanelArtifact card.

Also replaces the small file chip used during the pending state with
a PreviewPlaceholderCard that mirrors ToolArtifactCard chrome, so the
transition to the resolved PanelArtifact no longer reshapes the UI.

* ✨ feat: auto-open panel when deferred preview resolves pending→ready

The legacy auto-open path is gated only on `isSubmitting`, so an
office-file preview that resolves *after* the SSE stream closes would
render in place but never auto-open the panel — even though that's
exactly the moment the result becomes meaningful to the user. Adds a
per-file_id one-shot signal that `useAttachmentPreviewSync` flips on
the pending→ready edge; `ToolArtifactCard` consumes it on mount and
auto-opens regardless of submission state. The signal is *only* set on
the actual transition (history loads of pre-resolved files don't
trigger it) and is consumed once (panel close + reopen on the same
card stays user-controlled).
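
The one-shot contract in miniature (a sketch; the real wiring lives in `useAttachmentPreviewSync` and `ToolArtifactCard`): the signal is set only when the transition is observed, and consuming it clears it.

```typescript
// Module-level signal set, keyed by file_id.
const previewJustResolved = new Set<string>();

/** Called by the polling hook when it observes pending → ready. */
function signalPreviewResolved(fileId: string): void {
  previewJustResolved.add(fileId);
}

/** Called by the card on mount; returns true exactly once per signal. */
function consumePreviewResolved(fileId: string): boolean {
  const fired = previewJustResolved.has(fileId);
  previewJustResolved.delete(fileId);
  return fired;
}
```

Because history loads never call the signal side, pre-resolved files can't auto-open the panel on revisit.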

* 🐛 fix: drop placeholder Terminal overlay + scope auto-open to fresh resolutions

Two fixes for issues spotted in manual testing of the deferred-preview
auto-open feature:

1. PreviewPlaceholderCard was passing `file={attachment}` to FilePreview,
   which triggered SourceIcon's Terminal overlay (`metadata.fileIdentifier`
   is set on every code-execution file). The artifact card itself doesn't
   show that overlay; the placeholder shouldn't either, so the
   pending→resolved transition is visually seamless.

2. The `previewJustResolved` flag flipped on every pending→ready
   transition observed by the polling hook — including stale-pending
   DB records that resolve via the first poll on a *history load*.
   Conversations whose immediate-persist snapshot left attachments at
   `status: 'pending'` would yank the panel open every revisit.
   Adds `mountedDuringStreamRef` to the hook (mirroring ToolArtifactCard)
   so the flag fires only when the hook itself was mounted during an
   active turn — preserving the pre-PR contract that the panel only
   auto-opens for results the user is actively waiting on, never for
   history.

* 🐛 fix: don't downgrade preview to failed when only the SSE emit throws

Codex P2 finding on PR #12957: the original chain placed `.catch` after
`.then(onResolved)`, so a throw inside `onResolved` (transport-side
errors — SSE write race after stream close, an emitter listener
throwing) would propagate into the finalize catch and persist
`status: 'failed'` / `previewError: 'unexpected'`. That surfaced
"preview unavailable" in the UI for a perfectly valid file, and
degraded next-turn LLM context to reflect a non-existent failure.

Wraps `onResolved` in its own try/catch so emit errors are logged but
do not affect the file's persisted status. Extraction success and
emit success are now independent: if extraction succeeds and
`finalizePreview` writes the terminal status, the polling layer / next
page load surfaces the resolved preview even if this turn's SSE emit
didn't land.

* 🛡️ fix: run boot-time orphan sweep under system tenant context

Codex P2 finding on PR #12957: `File` is tenant-isolated, so under
`TENANT_ISOLATION_STRICT=true` the boot-time `sweepOrphanedPreviews`
threw `[TenantIsolation] Query attempted without tenant context in
strict mode` and the recovery path silently failed every restart.
Stale `status: 'pending'` records would be stuck until a user happened
to poll the preview endpoint and trigger the lazy sweep — which only
covers the file the user is currently looking at, not the bulk
candidate set the boot sweep is designed to recover.

Wraps the sweep in `runAsSystem(...)` in both boot paths
(`api/server/index.js` and `api/server/experimental.js`) and pins the
contract with regression tests in `file.spec.ts` — one test asserts
the bare call throws under strict mode, the other asserts the
`runAsSystem`-wrapped call succeeds.

* 🧹 chore: trim verbose comments from previous commit

* 🧹 chore: address review findings (dead branch, lazy-sweep cutoff, stale JSDoc)

- finalizePreview: drop unreachable !isOfficeBucket branch (caller
  already gates on hasOfficeHtmlPath, so this path is always office)
- preview endpoint: drop lazy-sweep cutoff from 5min to 2min — anything
  past the 60s render ceiling is definitively orphaned, and per-request
  sweep can be tighter than the per-instance boot sweep
- strip stale `isSubmitting` references from JSDoc in 3 spots (the
  client-side gate was removed in 9a65840)

Skipped: function-length (#3) and client-side polling cap (#4) —
refactors without correctness/perf wins; remaining NITs.

* 🧹 fix: trim 1 query off pending polls + clear stale lifecycle on cross-shape updates

- Preview endpoint: reuse fileAccess middleware's record for the
  lifecycle check; only re-fetch with text on the terminal ready
  response. Cuts the typical poll lifecycle from 2(N+1) to N+1
  queries, since the vast majority of polls hit while pending and
  don't need text at all.
- processCodeOutput non-office branch: explicitly null out status,
  previewError, previewRevision (codex P2). Without this, an update at
  the same (filename, conversationId) where the prior emit was an
  office file leaves stale lifecycle fields and the client renders
  the wrong state for the now non-office artifact.
- Tests: rewire preview.spec mocks for the new shape, add boundary
  test pinning the 2min cutoff, add regression test for the
  cross-shape update.

* 🐛 fix: keep polling on transient errors but cap permanently-broken endpoint

Codex P2: the previous `data?.status === 'pending' ? 2500 : false` gate
killed polling on the first transient error. With `retry: false`, a 500
left `data` undefined, the callback returned false, and the chip was
stuck "Preparing preview…" forever — exactly the bug the polling layer
was supposed to recover from.

Inverts the gate: stop on terminal success (`ready`/`failed`) or after
5 consecutive errors. Transient errors keep retrying; a permanently
broken endpoint caps at ~12.5s instead of polling forever. Predicate
extracted as `previewRefetchInterval` for direct unit testing without
fighting React Query's timer machinery.
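
The inverted gate, sketched with the consecutive-error count passed in explicitly (a later commit in this series moves the count into a module-level Map because React Query's own `fetchFailureCount` resets on each fetch dispatch):

```typescript
const PREVIEW_POLL_INTERVAL_MS = 2500;
const MAX_CONSECUTIVE_PREVIEW_ERRORS = 5;

function previewRefetchInterval(
  data: { status: string } | undefined,
  consecutiveErrors: number,
): number | false {
  if (data?.status === 'ready' || data?.status === 'failed') {
    return false; // terminal: stop polling
  }
  if (consecutiveErrors >= MAX_CONSECUTIVE_PREVIEW_ERRORS) {
    return false; // permanently broken endpoint: cap at ~12.5s
  }
  return PREVIEW_POLL_INTERVAL_MS; // pending or transient error: keep going
}
```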

* ✨ feat: render pending-preview files in their own row

Pending deferred-preview chips now bucket into a separate row above
the resolved attachments — reads as "this is still happening" rather
than mixing with completed downloads. Once status flips to ready, the
chip re-buckets into panelArtifacts; failed re-buckets into the file
row alongside other downloads.

* 🎨 fix: render pending-preview chips in the panel-artifact row, not the file row

Previous bucketing put pending chips in the file row (since
`artifactTypeForAttachment` returns null for empty-text records). The
pending placeholder is a future panel artifact — sharing the row keeps
the chip in place when it resolves instead of jumping rows.

Plain files still get their own row.

* 🐛 fix: phase-1 SSE replay must not regress a resolved attachment

Codex P1: useEventHandlers.finalHandler iterates
responseMessage.attachments at stream end and dispatches each through
the attachment handler. Those records are the immediate-persist
snapshot (status:pending, text:null) — if a deferred update has
already moved the same file_id to ready/failed, the existing merge
let the pending fields win and downgraded the resolved record. Result:
chip flickers back to pending and polling restarts until the lazy
sweep corrects.

Pin the terminal lifecycle fields (status, text, textFormat,
previewError) when existing is ready/failed and incoming is pending.
Other field updates still go through.
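
A sketch of the merge guard (field names from this message; the attachment shape is otherwise assumed): when the existing record is terminal and the incoming replay is the pending snapshot, the lifecycle fields are pinned while everything else merges normally.

```typescript
interface PreviewAttachment {
  file_id: string;
  status: 'pending' | 'ready' | 'failed';
  text?: string | null;
  textFormat?: string | null;
  previewError?: string | null;
  [key: string]: unknown;
}

function mergeAttachment(
  existing: PreviewAttachment,
  incoming: PreviewAttachment,
): PreviewAttachment {
  const merged = { ...existing, ...incoming };
  const existingTerminal = existing.status === 'ready' || existing.status === 'failed';
  if (existingTerminal && incoming.status === 'pending') {
    // Pin the terminal lifecycle fields so the replay can't downgrade them.
    merged.status = existing.status;
    merged.text = existing.text;
    merged.textFormat = existing.textFormat;
    merged.previewError = existing.previewError;
  }
  return merged;
}
```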

* 🐛 fix: track preview-poll error cap outside React Query state

Codex P2: the previous cap relied on `query.state.fetchFailureCount`,
but React Query v4's reducer resets that to 0 on every fetch dispatch
(the `'fetch'` action). With `retry: false`, each failed poll left
count at 1 and the next dispatch reset it back to 0, so the `>= 5`
branch never fired and a permanently-broken endpoint polled forever.

Track consecutive errors in a module-level Map keyed by file_id,
incremented in a thin `fetchFilePreview` wrapper around the data
service call. The Map is cleared on success and on cap-stop, so
memory is bounded by in-flight pending file_ids per session.
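
The counter contract in isolation (the fetch wiring around the data-service call is elided; helper names are illustrative):

```typescript
// Module-level, keyed by file_id — outside React Query state so the
// reducer's reset-on-fetch behavior can't zero it between polls.
const previewErrorCounts = new Map<string, number>();

/** Called when a poll fails; returns the new consecutive-error count. */
function recordPreviewError(fileId: string): number {
  const next = (previewErrorCounts.get(fileId) ?? 0) + 1;
  previewErrorCounts.set(fileId, next);
  return next;
}

/** Called on success and on cap-stop, so the Map stays bounded. */
function clearPreviewErrors(fileId: string): void {
  previewErrorCounts.delete(fileId);
}
```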
2026-05-06 03:04:19 -04:00
Danny Avila
9efd61d57d
🔐 fix: Forward per-file entity_id through code-env priming (#12958)
* 🔐 fix: Forward per-file `entity_id` through code-env priming

Skill files and persisted code-env files now carry their `entity_id` on
the in-memory file refs that seed `Graph.sessions`. Without this, an
execute call that mixes a skill file (uploaded with `entity_id=skillId`)
and a user attachment (uploaded with no `entity_id`) collapses onto a
single request-level entity at the codeapi authorization step and one
side 403s. With per-file `entity_id`, codeapi resolves sessionKey per
file and both authorize.

- `primeSkillFiles` / `primeInvokedSkills`: thread `entity_id` through
  fresh-upload, cache-hit, and per-skill-batch paths in
  `packages/api/src/agents/skillFiles.ts`.
- `primeFiles` (Code/process.js): parse `entity_id` from the persisted
  `codeEnvIdentifier` query string once per iteration; forward through
  `pushFile`, including the reupload path which re-parses the fresh
  identifier returned by codeapi.
- Tests: extend `skillFiles.spec.ts` with two cases — fresh-upload
  propagation and cached-hot-path parsing.

Companion PRs in flight on `@librechat/agents` (forward `entity_id`
through `_injected_files`) and codeapi (per-file authorization). All
three are wire-back-compat: an absent `entity_id` falls back to the
existing request-level resolution.
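
Parsing the per-file `entity_id` off a persisted identifier of the legacy shape `${session_id}/${file_id}?entity_id=...` can be sketched as (helper name is illustrative):

```typescript
function parseEntityId(codeEnvIdentifier: string): string | undefined {
  const queryStart = codeEnvIdentifier.indexOf('?');
  if (queryStart === -1) {
    return undefined; // e.g. a user attachment uploaded with no entity_id
  }
  const params = new URLSearchParams(codeEnvIdentifier.slice(queryStart + 1));
  return params.get('entity_id') ?? undefined;
}
```

An `undefined` result is the wire-back-compat path: the request-level entity resolution applies instead.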

* 🔧 chore: Update dependencies in package-lock.json and package.json

- Bump `@librechat/agents` to version `3.1.78-dev.0` across multiple package files.
- Upgrade `@langchain/langgraph-checkpoint` to version `1.0.2` and update its peer dependency for `@langchain/core` to `^1.1.44`.
- Update `axios` to version `1.16.0` and `follow-redirects` to version `1.16.0`.
- Add `@types/diff` as a new dependency at version `7.0.2` and include `diff` at version `9.0.0` in the `@librechat/agents` module.
- Introduce optional peer dependency `@anthropic-ai/sandbox-runtime` for `@librechat/agents` with metadata indicating it is optional.

* 🐛 fix: Make skill code-env cache persistence observable

Two changes to surface the skill-bundle re-upload issue without
behavioral changes to tenant scoping (root cause to be confirmed via
the new warn log):

1. `primeSkillFiles` now awaits `updateSkillFileCodeEnvIds` instead of
   firing-and-forgetting it. The prior shape could race with the next
   prime (read-before-write) even when the bulkWrite itself succeeds,
   producing a silent cache miss. Latency cost: ~10–50ms on first
   prime; in exchange every subsequent prime can rely on the
   identifier being persisted by the time it reads.

2. `updateSkillFileCodeEnvIds` now returns `{matchedCount, modifiedCount}`
   from the underlying bulkWrite. `primeSkillFiles` warn-logs when
   `modifiedCount < updates.length`, making any silent drop visible —
   whether the cause is tenant filtering, a `relativePath` mismatch,
   schema-plugin scoping, or something else. Prior shape returned
   `Promise<void>` so any zero-modification result was invisible.

Tests:
- `skill.spec.ts`: real-MongoDB happy path (counts match), no-match
  case (modifiedCount=0), and empty-input contract.
- `skillFiles.spec.ts`: deferred-promise harness proving the call
  site awaits the persist (prime stays pending until the persist
  resolves) and forwards partial-write counts.

Deliberately narrower than the original draft of this commit, which
also bypassed `tenantSafeBulkWrite` for the codeEnvIdentifier write
on the speculative diagnosis that tenant filtering was the cause.
That change was a behavior shift on tenant scoping without
confirmation; reverted pending real-world signal from the new warn
log.

* 🐛 fix: Justify await for skill code-env persistence under concurrency

The await on `updateSkillFileCodeEnvIds` isn't a defensive nicety —
it's load-bearing for cache effectiveness under concurrent priming.

Verified with an out-of-tree harness (`config/test-skill-cache.ts`,
not committed) that wires `primeSkillFiles` against a real codeapi
stack:

- With fire-and-forget (prior shape after this branch's revert):
  back-to-back primes for the same skill miss the cache. Call N+1
  reads SkillFile docs before Call N's write commits, sees no
  `codeEnvIdentifier`, re-uploads, and fires its own fire-and-forget
  write that Call N+2 also races. Steady state stays a cache miss for
  the full burst.

- With await: the prime that does the upload commits its persist
  before resolving, so the next concurrent prime observes the cache
  pointer instead of racing the read. Latency cost ~10–50ms on the
  upload prime; subsequent concurrent primes save an entire batch
  upload.

In production with primes seconds apart this race is rare; at scale
with many users hitting the same skill in the same second it's the
difference between M and N×M uploads.

Updates the regression test to assert the await contract (deferred
persist promise → prime stays pending until persist resolves).
Comment in `skillFiles.ts` rewritten to document the concurrency
rationale rather than the weaker "race-with-next-prime" framing the
prior commit used.
2026-05-05 18:35:09 -04:00
Atef Bellaaj
187ab787da
🌩️ feat: CloudFront CDN File Strategy (#12193)
* 🌩️ feat: CloudFront CDN File Strategy + signed cookies

Squashed from PR #12193:
- feat(storage): add CloudFront CDN file strategy
- feat(auth): add CloudFront signed cookie support

Note: package.json/package-lock.json dependency additions are intentionally
omitted from this commit and will be re-added via `npm install` after rebase
to avoid lock-file merge conflicts. The two new peer deps that need to be
re-installed are:
  - @aws-sdk/client-cloudfront@^3.1032.0
  - @aws-sdk/cloudfront-signer@^3.1012.0

Also fixes 4 missing destructured names in AuthService.spec.js
(getUserById, generateToken, generateRefreshToken, createSession) that
were referenced in tests but not imported from the mocked '~/models'.

* 📦 chore: install CloudFront SDK deps for PR #12193

Adds the two AWS CloudFront packages required by the rebased
CloudFront CDN strategy:
  - @aws-sdk/client-cloudfront
  - @aws-sdk/cloudfront-signer

Following the @aws-sdk/client-s3 pattern:
  - api/package.json: regular dependency (runtime resolution)
  - packages/api/package.json: peerDependency

Generated by `npm install` against the freshly rebased lock file
to avoid the merge conflicts that came from the original PR's
lock-file edits being made against an older base of dev.

* 🐛 fix: CI failures + review findings on CloudFront PR #12193

CI fixes
- Rename packages/data-provider/src/__tests__/cloudfront-config.test.ts
  → src/cloudfront-config.spec.ts. Jest's default testMatch picks up
  __tests__/ directories even inside dist/, so the compiled .d.ts shell
  was being executed as an empty test suite. Moving to .spec.ts (matching
  the rest of the package) avoids the dist/ pickup.
- Add cookieExpiry: 1800 to CloudFront crud.test makeConfig: the schema
  applies a default so CloudFrontFullConfig requires it.

Review findings addressed
- #1 (Codex + comprehensive): Normalize CloudFront domain with /\/+$/
  regex (and key with /^\/+/ regex) in buildCloudFrontUrl, matching the
  cookie code so resource policy and file URLs stay aligned even when
  the configured domain has multiple trailing slashes. Added tests.
- #2: Move DEFAULT_BASE_PATH out of s3Config into shared
  packages/api/src/storage/constants.ts. ImageService no longer imports
  S3-specific config.
- #3: getCloudFrontConfig() returns Readonly<CloudFrontFullConfig> | null
  to discourage mutation of the cached signing config.
- #4: Add cross-field refinement tests for cloudfrontConfigSchema
  (invalidateOnDelete-without-distributionId,
  imageSigning="cookies"-without-cookieDomain).
- #6: Revert unrelated MCP comment re-indentation in
  librechat.example.yaml.
- #7: Add azure_blob to the strategy list comment.
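
The normalization from finding #1, sketched standalone (the real `buildCloudFrontUrl` takes the configured domain and object key): trailing slashes come off the domain and leading slashes off the key, so the signed-cookie resource policy and the file URL can't diverge.

```typescript
function buildCloudFrontUrl(domain: string, key: string): string {
  const cleanDomain = domain.replace(/\/+$/, '');
  const cleanKey = key.replace(/^\/+/, '');
  return `${cleanDomain}/${cleanKey}`;
}
```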

Skipped
- #5 (extractKeyFromS3Url with CloudFront URLs): existing
  deleteFileFromCloudFront tests already cover the path-equivalence
  assumption; renaming the helper is real refactor work beyond this
  PR's scope.
- #8, #9 (NIT, low confidence): leaving for author judgement.

* 🧹 chore: drop dead DEFAULT_BASE_PATH from s3Config test mock

After moving DEFAULT_BASE_PATH to ~/storage/constants, crud.ts no longer
reads it from s3Config — so the entry in the s3Config jest mock was
misleading dead config. The tests still pass because the unmocked real
constants module provides the value.

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2026-05-05 13:21:05 -04:00
Danny Avila
f20419d0b7
📄 feat: Rich File Artifact Previews for DOCX, CSV, XLSX, PPTX (#12934)
* 📄 feat: Rich File Artifact Previews for DOCX, CSV, XLSX, PPTX

Render office files emitted by tools as interactive previews in the
artifact panel instead of raw extracted text. The backend produces a
sanitized HTML document via mammoth (DOCX), SheetJS (CSV/XLSX/XLS/ODS),
or yauzl-based slide extraction (PPTX) and ships it through the
existing SSE attachment payload; the client routes it through the
Sandpack `static` template's `index.html` slot — no new browser deps,
no client-side blob fetch, no React renderer components.

* 🔐 fix: Restrict data: URLs to <img> in office HTML sanitizer

Codex review on #12934 caught that `data:` lived in the global
`allowedSchemes`, which meant a smuggled `<a href="data:text/html,
<script>...</script>">` would survive sanitization. The Sandpack
iframe sandbox does not gate `target="_blank"` navigations, so a
click would open attacker-controlled HTML in a new tab.

Scope `data:` to `<img src>` only via `allowedSchemesByTag` (mammoth
inlines DOCX images as base64 `data:image/...` URIs — that path still
works). Add a regression suite (`sanitizeOfficeHtml security`) with
8 cases covering: <script> stripping, event-handler removal,
javascript:/data: rejection on anchors, data:image preservation in
<img>, http/https/mailto allowance, target=_blank rel=noopener
enforcement, and <iframe> stripping.
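
The scoping rule in miniature — the real code expresses this through sanitize-html–style `allowedSchemes` / `allowedSchemesByTag` options; this standalone check mirrors that semantics (per-tag list, when present, overrides the global one):

```typescript
const GLOBAL_SCHEMES = new Set(['http', 'https', 'mailto']);
const SCHEMES_BY_TAG: Record<string, Set<string>> = {
  // data: is only acceptable on <img> (mammoth's inlined DOCX images),
  // never on <a href>.
  img: new Set(['http', 'https', 'data']),
};

function isSchemeAllowed(tag: string, scheme: string): boolean {
  const perTag = SCHEMES_BY_TAG[tag.toLowerCase()];
  if (perTag != null) {
    return perTag.has(scheme.toLowerCase());
  }
  return GLOBAL_SCHEMES.has(scheme.toLowerCase());
}
```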

* 🔧 fix: Route extensionless office files by MIME alone

Codex review on #12934 caught that the office-render gate in
`extractCodeArtifactText` only fired when the extension was in
`OFFICE_HTML_EXTENSIONS` or the category was `document`/`pptx`. A tool
emitting `data` with `text/csv` (no extension) classifies as
`utf8-text`, so the gate was skipped and raw CSV text shipped to the
client — but the client routes by MIME to the SPREADSHEET bucket
expecting a full HTML document, so the panel rendered broken text.

Extract a shared `officeHtmlBucket(name, mime)` predicate from
`html.ts` (returns the bucket name or null). Both `bufferToOfficeHtml`
(the dispatcher) and the upstream gate in `extract.ts` now go through
this single source of truth, so they can never drift apart again. The
predicate already mirrors the dispatcher's extension/MIME logic
(extension wins; MIME is the fallback for extensionless inputs).

Adds:
- 14 cases for the new `officeHtmlBucket` predicate covering the
  positive paths (each bucket via extension OR MIME) and the negative
  paths (txt, py, json, jpg, pdf, zip, odt, plain noext).
- A direct regression test in `extract.spec.ts` for the Codex catch:
  `data` with `text/csv` + utf8-text category routes through the
  office HTML producer.
- Parameterized cases for extensionless DOCX/XLSX/XLS/ODS/PPTX files
  identified by MIME alone.

* 🛡️ fix: Enforce extension-wins precedence in officeHtmlBucket

Codex review on #12934 caught that the predicate's if-chain interleaved
extension and MIME checks for each bucket — e.g. CSV's branch was
`ext === 'csv' || CSV_MIME_PATTERN.test(mimeType)`. A `deck.pptx`
shipped with `text/csv` (sandboxed tools sometimes ship generic MIMEs)
matched the CSV branch BEFORE the PPTX extension branch was reached,
so a binary PPTX would have been handed to `csvToHtml` to parse as
text — yielding garbage or a parse exception.

Restructure to a strict two-pass dispatch: an exhaustive extension
table first (one lookup, all known extensions), then MIME-only
fallback for extensionless / unknown-ext inputs. The doc comment's
"extension wins" claim is now actually enforced by the implementation.

Add 7 regression cases covering the conflicting-MIME footgun for each
bucket: deck.pptx + text/csv → pptx; workbook.xlsx + text/csv →
spreadsheet; legacy.xls + pptx-MIME → spreadsheet; report.docx +
text/csv → docx; data.csv + docx-MIME → csv; etc.
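
The two-pass dispatch, sketched: the extension table is consulted first, and MIME is only a fallback for extensionless or unknown-extension inputs, so "extension wins" is structural rather than incidental. (Bucket names follow this message; the MIME patterns are simplified stand-ins for the real ones.)

```typescript
type OfficeBucket = 'docx' | 'spreadsheet' | 'csv' | 'pptx';

// Pass 1 table: one lookup covers every known extension.
const EXTENSION_BUCKETS: Record<string, OfficeBucket> = {
  docx: 'docx',
  csv: 'csv',
  xlsx: 'spreadsheet',
  xls: 'spreadsheet',
  ods: 'spreadsheet',
  pptx: 'pptx',
};

function officeHtmlBucket(name: string, mimeType: string): OfficeBucket | null {
  const dot = name.lastIndexOf('.');
  const ext = dot === -1 ? '' : name.slice(dot + 1).toLowerCase();
  if (ext in EXTENSION_BUCKETS) {
    return EXTENSION_BUCKETS[ext];
  }
  // Pass 2: MIME-only fallback for extensionless / unknown-ext inputs.
  const mime = mimeType.toLowerCase();
  if (mime === 'text/csv' || mime === 'application/csv' || mime === 'text/comma-separated-values') {
    return 'csv';
  }
  if (mime === 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet') {
    return 'spreadsheet';
  }
  if (mime === 'application/vnd.openxmlformats-officedocument.wordprocessingml.document') {
    return 'docx';
  }
  if (mime === 'application/vnd.openxmlformats-officedocument.presentationml.presentation') {
    return 'pptx';
  }
  return null;
}
```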

* 🛡️ fix: Reject zip-bomb office files before in-process parsing (SEC)

Addresses pre-existing availability vulnerability validated by
SEC review (Codex finding 275344c5...) and made worse by this PR's
HTML rendering path. A sub-1MiB compressed XLSX/DOCX/PPTX (highly
compressed run-of-zeros) inflates to 200+ MiB of XML when handed
to mammoth/xlsx — blocking the Node event loop for 10+ seconds and
spiking RSS to ~1 GiB. The existing 8s `withTimeout` wrapper uses
`Promise.race`, which can only return early; it cannot interrupt
synchronous parser CPU/RAM consumption. PoC ran an authenticated
execute_code call to OOM the API process.

Add `assertSafeZipSize(buffer)` — a yauzl-based pre-flight that
streams every entry with mid-inflate byte counting and bails on
either a per-entry or total decompressed-size cap. Mid-inflate
counting cannot be bypassed by falsifying the central directory's
`uncompressedSize` field (the technique the PoC used). Defaults:
25 MiB per entry, 100 MiB total — generous headroom for legitimate
image-heavy office files, well below the attack profile.

Hook the check into every path that hands a buffer to mammoth/xlsx
/yauzl:
- New HTML producers (`wordDocToHtml`, `excelSheetToHtml`,
  `pptxToSlideListHtml`) — added by this PR
- Legacy RAG text extractors (`wordDocToText`, `excelSheetToText`
  in `crud.ts`) — pre-existing path, also vulnerable
Errors propagate as a tag-distinct `ZipBombError` so callers can
distinguish a refused bomb from generic parse failures. The outer
`extractCodeArtifactText` swallows the error and returns null,
falling back to the regular download UI.

`.xls` (BIFF/CFB binary, not ZIP) is detected by magic bytes and
skipped — yauzl would reject it as malformed anyway.
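
The bail condition in isolation — the real `assertSafeZipSize` streams entries via yauzl and feeds each inflated chunk through exactly this kind of running budget (class name and structure are illustrative; the caps are the defaults named above):

```typescript
const PER_ENTRY_CAP = 25 * 1024 * 1024; // 25 MiB
const TOTAL_CAP = 100 * 1024 * 1024; // 100 MiB

class DecompressionBudget {
  private entryBytes = 0;
  private totalBytes = 0;

  /** Call per inflated chunk; returns false the moment either cap is blown. */
  addChunk(byteLength: number): boolean {
    this.entryBytes += byteLength;
    this.totalBytes += byteLength;
    return this.entryBytes <= PER_ENTRY_CAP && this.totalBytes <= TOTAL_CAP;
  }

  /** Call at each entry boundary; the per-entry counter resets, the total doesn't. */
  nextEntry(): void {
    this.entryBytes = 0;
  }
}
```

Because the count accrues mid-inflate, a forged central-directory `uncompressedSize` (the PoC's technique) never enters the decision.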

Adds 15 tests:
- `zipSafety.spec.ts` (9): benign passes, per-entry cap, total cap,
  ZipBombError type-tagging, malformed-zip distinction, directory-
  entry handling, named-error surfacing, and the SEC-PoC pattern
  (sub-1 MiB compressed → 50 MiB inflated rejected on default caps).
- `html.spec.ts` zip-bomb suite (5): each producer rejects a bomb;
  dispatcher propagates correctly; legitimate fixtures still render.
- `extract.spec.ts` (1): outer extractor swallows ZipBombError and
  returns null so the download UI fallback fires.

* 🧹 fix: Normalize MIME parameters; add legacy CSV MIME variant

Two related Codex catches on PR #12934 — both about MIME-routing
inconsistencies between backend and client that would cause
extensionless CSV files to render as broken (raw text under an HTML
slot) or skip the artifact panel entirely.

P2 — backend MIME normalization:
`officeHtmlBucket` matched MIME strings exactly, so a real-world
`text/csv; charset=utf-8` Content-Type slipped through and the
backend returned raw CSV text. The client's `baseMime` helper
strips parameters before its own MIME lookup, so it routed the
same file to the SPREADSHEET bucket expecting an HTML body that
never arrived. Mirror the client's normalization on the backend
(strip everything from `;` onward, lowercase) before bucket
matching.
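The shared normalization is one line (sketch mirroring the client's `baseMime` helper, now applied on both sides):

```typescript
/** Strip MIME parameters (everything from ';' onward) and lowercase before bucket lookup. */
function baseMime(contentType: string): string {
  return contentType.split(';')[0].trim().toLowerCase();
}
```

With this applied before `officeHtmlBucket` matching, `text/csv; charset=utf-8` routes identically to `text/csv` on both backend and client.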

P3 — client legacy CSV MIME:
Backend's `CSV_MIME_PATTERN` accepts three variants (`text/csv`,
`application/csv`, `text/comma-separated-values`); the client's
`MIME_TO_TOOL_ARTIFACT_TYPE` only had the first two. An
extensionless file with `text/comma-separated-values` would have
backend HTML produced but the client would skip the artifact
panel entirely. Add the missing variant.

Tests:
- 9 new parameterized-MIME cases on backend covering charset/
  boundary/case variants for every bucket.
- 1 new client routing case for `text/comma-separated-values`.

* 🩹 fix: Try office HTML before short-circuiting on category=other

Codex review on #12934 caught that the early `category === 'other'`
return short-circuited before `hasOfficeHtmlPath` was checked. The
classifier returns 'other' for inputs the new dispatcher can still
route — extensionless `application/csv` (CSV MIMEs aren't in the
classifier's text-MIME set and don't start with `text/`), and
extensionless office MIMEs with parameters like `application/vnd...
spreadsheetml.sheet; charset=binary` (the classifier's `isDocumentMime`
exact-matches these MIMEs without parameter normalization). Both would
route correctly through `officeHtmlBucket` but never reached it.

Move the office-HTML attempt above the 'other' early return, and drop
the `|| category === 'document' || category === 'pptx'` shortcut now
that `hasOfficeHtmlPath` covers the same surface (with parameter
normalization) and a wider one. ODT still routes through `extractDocument`
unchanged — `hasOfficeHtmlPath` returns false for it and the
`category === 'document'` branch below handles it.

Adds 3 regression tests:
- extensionless `application/csv` + category='other' → office HTML
- extensionless parameterized office MIME + category='other' → office HTML
- defense check: actual binary 'other' (image/jpeg) still returns null
  without invoking the office producer

* 🛡️ fix: Office types are HTML-or-null (no text fallback → XSS)

Codex P1 review on #12934 caught that when `renderOfficeHtml` failed
(timeout, malformed file, zip-bomb rejection) for an office type, the
extractor fell through to `extractDocument` and returned plain text.
The client routes by extension/MIME to the office preview buckets and
feeds `attachment.text` straight into the Sandpack iframe's
`index.html`. A spreadsheet cell or document body containing the
literal string `<script>alert(1)</script>` would have been injected
as executable markup — direct XSS.

The contract for office types is now HTML-or-null with no text
fallback. Failed render returns null, the client's empty-text gate
keeps the artifact off the panel, and the file falls back to the
regular download UI (matching what PPTX already did). PDF and ODT
still go through `extractDocument` because the client routes them to
PLAIN_TEXT (which the markdown viewer escapes) or no artifact at all,
so plain text is safe there.

Test reshuffle:
- `document` describe block now uses ODT/PDF for the legacy
  parseDocument-path tests (DOCX/XLSX/XLS/ODS bypass that path).
- New "does NOT call parseDocument for office HTML types" test locks
  in the SEC contract for all four office HTML buckets.
- "falls back to ..." tests rewritten as "returns null when ..." with
  explicit `parseDocumentCalls.length === 0` assertions to prove no
  text leaks back to the client.
- New XSS regression test for the XLSX failure path.
- Mock parseDocument failure-name match relaxed to `includes()` so
  ODT-named tests can use the same trigger.

* 🧽 chore: Address follow-up review findings on PR #12934

Wraps up the 10-finding follow-up review. Two MAJOR + four MINOR + two
NIT addressed; one NIT skipped after verifying it was a misread of the
package.json structure.

MAJOR
- #1: Rewrite `renderOfficeHtml` JSDoc to document the HTML-or-null
  contract explicitly. The pre-fix doc described a text-fallback path
  that was the original XSS vector (commit b06f08a). A future
  maintainer trusting the stale doc could reintroduce the fallback.
- #2: Replace byte-truncation of office HTML with a small "preview too
  large" banner document. Cutting at a UTF-8 boundary lands mid-tag
  (`<table><tr><td>con\n…[truncated]`) and ships malformed markup to
  the iframe — unpredictable rendering, occasional broken layouts on
  DOCX with embedded images / wide spreadsheets.

MINOR
- #4: Wrap `readSlidesFromZip`'s `zipfile.close()` in try/catch so a
  close-time exception (mid-flight stream) doesn't replace the
  original error. Mirrors the defensive pattern in zipSafety.ts.
- #5: Refactor PPTX extraction to use `yauzl.fromBuffer` directly,
  eliminating the temp-file write/unlink the safety pre-flight already
  proved unnecessary. Removes 4 unused imports (os, path, fs/promises,
  randomUUID).
- #6: Extract `isPreviewOnlyArtifact(type)` to `client/src/utils/
  artifacts.ts` so the membership check is unit-testable without
  mounting the full Artifacts component (Recoil + Sandpack + media
  query). 15 new test cases covering positive types, negative types,
  null/undefined, and unknown strings.

NIT
- #3: Remove dead `stripColorStyles` / `COLOR_PROPERTY_PATTERN` —
  unused (sanitizer's `allowedStyles` config handles color implicitly).
- #7: Remove dead `!_lc_csv_label` worksheet property write.
- #9: Remove no-op `exclusiveFilter: () => false` sanitize-html config.
- #10: Type-narrow `PREVIEW_ONLY_ARTIFACT_TYPES` to
  `ReadonlySet<ToolArtifactType>` so the membership table is
  compile-time checked against the enum.

SKIPPED
- #8: Reviewer flagged `sanitize-html` as duplicated in devDeps and
  dependencies. The package has no `dependencies` section — only
  `devDependencies` and `peerDependencies`. Existing convention
  (mammoth, xlsx, yauzl, pdfjs-dist) is to appear in BOTH. Removing
  the devDep entry would break local test runs.

Tests: packages/api 4406/4406, client artifacts 128/128.

* 🪞 chore: Fix isPreviewOnlyArtifact test description parameter order

Follow-up review nit on PR #12934. Jest's `it.each` substitutes `%s`
positionally, and the table rows were `[type, expected]` while the
description template read `'returns %s for type %s'` — outputting
"returns application/vnd.librechat.docx-preview for type true"
instead of the intended "type ... returns true".

Reorder the template to match the column order. Test runner output
now reads naturally: "type application/vnd.librechat.docx-preview
returns true". Pure cosmetic — runtime behavior unchanged.

* feat: Improve DOCX rendering and surface filename in panel header

Two UX improvements based on hands-on use of the office preview pipeline.

DOCX rendering — mammoth strips the navy banners, cell shading, and
column layouts that direct-formatted docs apply (python-docx-style
output is a common case). The flat `<p><strong>X</strong></p>` and
bare `<table><tr><td>` it emits look washed out next to the source.
Three targeted compensations:

- Style map promotes `Title`, `Subtitle`, `Heading 1` through `Heading 6`,
  and `Quote` paragraphs to their semantic HTML equivalents (mammoth's
  default only handles Heading 1-6, missing Title/Subtitle/Quote).
- Extra CSS scoped to `.lc-docx` gives the first table row sticky-
  looking header styling regardless of `<thead>` (mammoth never emits
  `<thead>`), adds zebra striping, and treats the python-docx
  `<p><strong>X</strong></p>` section-heading idiom as a pseudo-h2 with
  a thin accent left border so document structure survives the round
  trip. Headings get a left accent or underline so they read as
  headings instead of just bold paragraphs.
- Sanitizer's `allowedAttributes` opens `class` on the heading and
  block tags the styleMap and CSS heuristics rely on. `<script>`,
  event handlers, javascript: URLs, etc. are still stripped — the
  existing security regression suite catches any drift.

Panel header — `Artifacts.tsx` showed a generic "Preview" pill for
preview-only artifacts. Single-tab Radio is a no-op; surfacing the
document filename there gives the user something useful in the chrome
without taking real estate. `displayFilename` handles the sandbox
dotfile suffix the upload pipeline applies.

Tests: html.spec.ts +1 (new CSS-emission lock), 71/71. Backend files
suite 428/428. Client 308/308.

* feat: High-fidelity DOCX preview via docx-preview in iframe

Switch the default DOCX render path from server-side mammoth → flat
HTML to client-side `docx-preview` loaded inside the Sandpack iframe.
Mammoth becomes the fallback for files above the cap.

Why
---
The Sandpack iframe is a real browser DOM. The server-side rendering
ceiling for DOCX→HTML is well below the source's visual fidelity —
mammoth strips cell shading, run colors, banners, and column layouts
because Word's layout model doesn't fit HTML's flow model. Pushing the
render into the iframe lifts that ceiling without paying the
server-side cost of jsdom or LibreOffice.

What
----
- New `wordDocToHtmlViaCdn(buffer)` builds a self-contained HTML doc
  that embeds the binary as base64 and lets `docx-preview@0.3.7`
  render it on load. CSS preserves dark/light mode handoff via
  `prefers-color-scheme`. Bootstrap script falls back to a "preview
  unavailable, please download" message if the CDN is unreachable or
  the parse throws.
- `docx-preview` and its `jszip` peer dep are pinned to specific
  versions on jsdelivr with SRI sha384 integrity hashes and
  `crossorigin="anonymous"`. Refresh: re-fetch the file, run
  `openssl dgst -sha384 -binary FILE | openssl base64 -A`.
- CSP locked down on the iframe: `default-src 'none'`, scripts only
  from jsdelivr (no eval), `connect-src 'none'` so a parser bug in
  docx-preview can't be turned into exfiltration of the embedded
  document, `base-uri 'none'`, `form-action 'none'`. Defense in depth
  on top of the Sandpack cross-origin sandbox.
- `wordDocToHtml` dispatches by size: ≤ 350 KB binary → CDN path
  (high fidelity), larger → mammoth fallback (preserves the size cap
  on `attachment.text`). 350 KB chosen so worst-case base64-inflated
  output (~478 KB) plus wrapper overhead (~5 KB) fits under
  MAX_TEXT_CACHE_BYTES (512 KB) with 40 KB headroom.
- Internal renderers exported as `_internal` for tests. Public API
  unchanged — callers still go through `wordDocToHtml`.
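The wrapper shape can be sketched as follows (`buildPreviewDocument` is an illustrative name for what `wordDocToHtmlViaCdn` assembles; the carrier-element shape and exact CSP/script wiring are assumptions — the commit pins the directive list, not the markup):

```typescript
/** Build a self-contained preview page: pinned CDN renderer + embedded base64 payload. */
function buildPreviewDocument(binary: Buffer, scriptUrl: string, sri: string): string {
  const csp = [
    "default-src 'none'",
    'script-src https://cdn.jsdelivr.net', // pinned version + SRI below; no eval
    "connect-src 'none'",                  // a parser bug can't exfiltrate the document
    "base-uri 'none'",
    "form-action 'none'",
  ].join('; ');
  return [
    '<!doctype html><html><head>',
    `<meta http-equiv="Content-Security-Policy" content="${csp}">`,
    `<script src="${scriptUrl}" integrity="${sri}" crossorigin="anonymous"></script>`,
    '</head><body>',
    // The binary travels inside the page; the on-load bootstrap decodes and renders it.
    `<script type="text/plain" id="lc-doc-data">${binary.toString('base64')}</script>`,
    '<div id="lc-render"></div>',
    '</body></html>',
  ].join('\n');
}
```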

PPTX intentionally NOT switched
-------------------------------
Surveyed the available client-side PPTX libraries:
- `pptx-preview@1.0.7` ships an ESM-only main entry plus a 1.36 MB
  UMD that references `require("stream"/"events"/"buffer"/"util")` —
  bundled for Node, not browser-clean. Could work but the runtime
  references to undefined Node globals are a fragility risk worth
  more validation than this PR can absorb.
- `pptxjs` is jQuery-era, requires four separate UMD scripts in a
  specific order, less actively maintained.
- The honest answer for PPTX is the LibreOffice sidecar (DOCX/XLSX/
  PPTX → PDF → PDF.js), which is the architecture every major
  product (Google Drive, Claude.ai, ChatGPT) effectively uses and
  the only path to ~5/5 fidelity for arbitrary user decks.

PPTX stays on the existing slide-list extraction for now. Open a
follow-up issue for the LibreOffice/Gotenberg sidecar.

Tests
-----
- 6 new in CDN-rendered describe block: wrapper structure, base64
  round-trip, SRI integrity + crossorigin, CSP locks
  (connect-src/eval/base-uri/form-action), fallback message wiring,
  size-threshold lock.
- Adjusted 2 existing tests that asserted on mammoth-path artifacts
  (literal document text in `<article class="lc-docx">`) — those
  assertions move to the mammoth-fallback test that calls
  `_internal.wordDocToHtmlViaMammoth` directly. Dispatcher tests now
  assert CDN-path signatures instead.

packages/api files: 434/434, full unit suite 4473/4473.

* 🧷 fix: Address Codex P1 (MIME aliases) + P2 (CDN dependency)

Two follow-up review findings on PR #12934, both real.

P1 — Spreadsheet MIME aliases on client
----------------------------------------
Backend's `officeHtmlBucket` uses the broad `excelMimeTypes` regex from
`librechat-data-provider` (covers `application/x-ms-excel`,
`application/x-msexcel`, `application/msexcel`, `application/x-excel`,
`application/x-dos_ms_excel`, `application/xls`, `application/x-xls`,
plus the canonical sheet MIMEs). The client's exact-match
`MIME_TO_TOOL_ARTIFACT_TYPE` only had three of those, so an
extensionless XLS upload with a legacy MIME would have backend HTML
produced but the client would fail to route the artifact at all —
preview chip never registers.

Fix: import the same regex on the client and add it as a fallback in
`detectArtifactTypeFromFile` after the exact-match map miss. Stays in
lock-step with the backend automatically.

7 new test cases — one per legacy alias.

P2 — Hard CDN dependency on jsdelivr
-------------------------------------
Air-gapped / corporate-filtered networks where jsdelivr is unreachable
would see DOCX previews permanently degrade to "Preview unavailable"
because the iframe could never load the renderer scripts. Mammoth was
sitting right there on the server but the dispatcher always preferred
the CDN path for files under 350 KB.

Fix: `OFFICE_PREVIEW_DISABLE_CDN` env var. When truthy (`1`, `true`,
`yes`, case-insensitive, whitespace-trimmed), `wordDocToHtml`
short-circuits to the mammoth path regardless of file size. Operators
on filtered networks set the env var; default behavior is unchanged.

Read at function-call time (not module load) so jest can flip it in
`beforeEach` without `jest.resetModules()`. The cost is one property
access per render.
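The gate reduces to a trimmed, case-folded membership test (sketch; the parameterized `env` argument is for testability — the real code reads `process.env` at call time):

```typescript
/** Truthy matrix from the commit: '1' | 'true' | 'yes', case-insensitive, whitespace-trimmed. */
function cdnDisabled(env: Record<string, string | undefined> = process.env): boolean {
  const raw = (env.OFFICE_PREVIEW_DISABLE_CDN ?? '').trim().toLowerCase();
  return raw === '1' || raw === 'true' || raw === 'yes';
}
```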

12 new test cases: env-unset uses CDN (default), all five truthy
forms force mammoth, five non-truthy forms (`false`/`0`/`no`/empty/
arbitrary string) leave CDN active.

Tests
-----
packages/api/src/files: 446/446 (was 434, +12 from env-var matrix).
client artifact suites: 235/235 (was 228, +7 from MIME aliases).

* feat: High-fidelity PPTX preview via pptx-preview in iframe

Mirrors the DOCX CDN architecture for PPTX: small files (≤350 KB
binary) embed as base64 and render via `pptx-preview` loaded from
jsdelivr inside the Sandpack iframe. Larger files and air-gapped
deployments fall back to the existing slide-list extraction.

Why
---
PPTX is the format where the gap between LibreChat's preview and
Claude.ai-style previews was most visible (slide-list of bullet
points vs. rendered slide layouts). LibreOffice → PDF → PDF.js is
still the eventual gold-standard answer for PPTX fidelity, but
client-side rendering inside the Sandpack iframe gets us a
meaningful intermediate step (~1.5/5 → ~3.5/5) without a sidecar.

What
----
- `pptx-preview@1.0.7` (ISC license, ~1.36 MB UMD bundle that
  includes its echarts/lodash/uuid/jszip/tslib deps inline). Pinned
  to a specific version on jsdelivr with SHA-384 SRI and
  `crossorigin="anonymous"`.
- `buildPptxCdnDocument` mirrors the DOCX wrapper: same CSP locks
  (`default-src 'none'`, `connect-src 'none'`, no eval, no base/form
  tampering), same `id="lc-doc-data"` base64 slot, same fallback
  message wiring (`typeof pptxPreview === 'undefined'` →
  "Preview unavailable").
- New public `pptxToHtml(buffer)` dispatcher; `bufferToOfficeHtml`
  switches its `'pptx'` case to call it. `pptxToSlideListHtml` stays
  exported as the slide-list-only path (still hit by tests directly
  and by the dispatcher fallback).
- `OFFICE_PREVIEW_DISABLE_CDN=true` env-var hatch applies to PPTX
  too — air-gapped operators get the slide-list path. Same env-var
  read at call time, same matrix of truthy values (`1` / `true` /
  `yes` / case-insensitive / whitespace-trimmed).
- `_internal` re-exports moved to after the PPTX section since the
  PPTX internals live further down in the file. Adds
  `pptxToHtmlViaCdn`, `MAX_PPTX_CDN_BINARY_BYTES`,
  `PPTX_PREVIEW_CDN`.

Honest caveats
--------------
- The 1.36 MB UMD bundle has `require("stream"/"events"/"buffer"/
  "util")` references in its outer wrapper. Those are bundled-dep
  artifacts (likely from `tslib` / Node-shim transforms) and don't
  appear to execute on the browser code paths, but I haven't done
  manual e2e on a wide range of decks. If a class of files turns up
  that breaks rendering, the iframe-side fallback message catches it
  and operators have `OFFICE_PREVIEW_DISABLE_CDN=true` as the bail.
- First-render CDN fetch is ~1.36 MB (browser-cached after).
- PPTX with embedded media easily exceeds the 350 KB binary cap;
  those files take the slide-list path. Lifting the cap is a
  follow-up (tied to the broader self-hosting work).

Tests
-----
11 new in two new describe blocks:
- `pptxToHtml dispatcher`: routing predicate (small → CDN, env-set
  → slide-list).
- `CDN-rendered path`: base64 round-trip, SRI integrity +
  crossorigin, CSP locks (connect/eval/base/form), fallback message,
  size-threshold lock at 350 KB.
- `OFFICE_PREVIEW_DISABLE_CDN escape hatch`: env-var matrix for
  truthy values.

packages/api/src/files: 457/457 (was 446, +11).

* 🪟 fix: DOCX preview fills the artifact panel width

docx-preview defaults to rendering at the document's native page
width (8.5in for letter, 21cm for A4). In a wide artifact panel
that left whitespace on either side; in a narrow one it forced
horizontal scroll.

Two changes:
- Pass `ignoreWidth: true` to `docx.renderAsync` so the library skips
  the document's pageSize width and uses its container's width.
- Defensive CSS overrides on `.docx-wrapper` and `.docx-wrapper > section.docx`
  in case a future library version regresses on the option, plus
  `padding: 0` on the wrapper to drop the page-edge whitespace
  docx-preview otherwise reserves.

`renderHeaders`/`renderFooters`/etc. stay enabled — those still
appear in the rendered output, just inside a container that fills
the panel instead of a fixed-width "page."

Tests unchanged (100/100); manual e2e ahead of merge.

* 🩹 fix: PPTX black screen — allow blob: workers + harden bootstrap

Manual e2e of the PPTX CDN renderer surfaced a black screen with
"Could not establish connection. Receiving end does not exist."
unhandled-rejection — characteristic of a Web Worker that couldn't
start.

Root cause: pptx-preview's bundled echarts dep spins up Web Workers
via blob: URLs for chart rendering. Our CSP had `default-src 'none'`
and no `worker-src`, so workers fell back to default → blocked. The
async failure deep inside echarts didn't surface through the outer
`previewer.preview()` promise, so my bootstrap's `.catch` never fired,
the loading state was removed, and the iframe sat with the body
background showing through (dark navy in dark mode = "black screen").

Three changes:
- Add `worker-src blob:` to the PPTX CSP. Allows blob:-only worker
  creation without permitting arbitrary worker URLs.
- Bootstrap: window-level `unhandledrejection` and `error` listeners
  so rejections from inside bundled-dep async pipelines surface as
  the user-facing "Preview unavailable" fallback instead of going
  silent.
- Bootstrap: 8-second timeout that checks `container.children.length`
  — if the renderer hasn't appended anything visible by then, assume
  silent failure and show the fallback.

Also wipe `container.innerHTML` when showing the fallback so a partial
render doesn't compete with the message.

DOCX wrapper unchanged: docx-preview doesn't use workers, so the
worker-src directive doesn't apply, and the existing fallback path
already covers its failure modes.

Tests
-----
- Existing PPTX CSP test now also asserts `worker-src blob:` is present.
- Existing fallback-message test extended to cover the new
  unhandledrejection/error/timeout listeners.

packages/api/src/files: 467/467.

* 🔒 fix: gate office HTML routing on backend trust flag (textFormat)

Codex P1 review on PR #12934: routing .docx/.csv/.xlsx/.xls/.ods/.pptx
into the office preview buckets assumed `attachment.text` was already
sanitized full-document HTML, but that guarantee only existed for the
new code-output extractor path. Existing stored attachments and other
non-code paths can still carry plain extracted text — `useArtifactProps`
would then inject that as `index.html` inside the Sandpack iframe.

Adds a `textFormat: 'html' | 'text' | null` trust flag persisted on
the file record by the code-output extractor, surfaced over the SSE
attachment payload and the TFile API type. The client's routing in
`detectArtifactTypeFromFile` requires `textFormat === 'html'` before
landing on an office HTML bucket; everything else (legacy attachments,
RAG-extracted plain text from `parseDocument`, explicitly-marked
'text' entries) falls back to the PLAIN_TEXT bucket where the
markdown viewer escapes content rather than executing it.

Tests: new `getExtractedTextFormat` helper has 14 cases covering all
office paths, legacy XLS MIME aliases, parseDocument fallthroughs,
and null-input. Client `artifacts.test.ts` adds three security-gate
tests proving downgrade behavior for missing/null/'text' textFormat,
plus a `fileToArtifact` test that legacy office attachments without
the flag end up in PLAIN_TEXT with their content escaped.

* 🌐 fix: air-gapped DOCX preview — embed mammoth fallback in CDN doc

Codex P2 review on PR #12934: the CDN-rendered DOCX path always pulled
docx-preview + jszip from cdn.jsdelivr.net. Air-gapped or corporate-
filtered networks where jsdelivr is blocked would degrade to a static
"Preview unavailable" message even though the server already had a
local mammoth renderer that could produce readable output.

Now the dispatcher renders mammoth first and embeds the sanitized
output inside the CDN document as a hidden `#lc-fallback` block. The
iframe's existing `typeof docx === 'undefined'` check (which fires
when the CDN scripts can't load) un-hides the fallback so the user
sees a real preview. CDN-success path is unchanged: high-fidelity
docx-preview output owns the viewport, mammoth fallback stays hidden.

Two new safeguards in the dispatcher:
- Size budget: if base64(binary) + mammoth body + wrapper > 512 KB
  (the `attachment.text` cache cap), drop to mammoth-only so a giant
  document still renders. The `OFFICE_HTML_OUTPUT_CAP` constant
  mirrors `MAX_TEXT_CACHE_BYTES` from extract.ts (separate constant
  to avoid a circular import; pinned by a unit test).
- `lc-render` is hidden when fallback shows so the empty padded slot
  doesn't sit above the mammoth content.
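The budget check is simple arithmetic (sketch; `fitsCdnBudget` is an illustrative name, the cap constant is the commit's `OFFICE_HTML_OUTPUT_CAP`):

```typescript
/** Mirrors MAX_TEXT_CACHE_BYTES — the attachment.text cache cap (kept as a separate constant). */
const OFFICE_HTML_OUTPUT_CAP = 512 * 1024;

/** Can base64(binary) + embedded mammoth fallback + wrapper overhead fit under the cap? */
function fitsCdnBudget(binaryBytes: number, mammothBody: string, wrapperBytes: number): boolean {
  const base64Bytes = Math.ceil(binaryBytes / 3) * 4; // base64 inflates ~33%
  return base64Bytes + Buffer.byteLength(mammothBody, 'utf8') + wrapperBytes <= OFFICE_HTML_OUTPUT_CAP;
}
```

When this returns false, the dispatcher drops to mammoth-only output so the giant document still renders.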

Tests: existing CDN-path tests updated for the new
`wordDocToHtmlViaCdn(buffer, mammothBody)` signature; new test for
the embedded fallback structure (`#lc-fallback`, mammoth body
content, "High-fidelity renderer unavailable" notice, render-slot
hide); new constant pin and per-fixture cap-respect assertion.

* 🧪 feat: LibreOffice → PDF preview path (POC, opt-in via env)

Per the plan-mode discussion: prove out a LibreOffice subprocess
pipeline as an alternative to the docx-preview / pptx-preview CDN
renderers. LibreOffice handles every office format Microsoft and
LibreOffice itself can open (DOCX, PPTX, XLSX, ODT, ODP, ODS, RTF,
many more), produces a PDF, and the host browser's built-in PDF
viewer renders it inside the Sandpack iframe via a `data:` URI.
No client-side JS dependency, no CDN dependency, true high
fidelity for any feature LibreOffice supports.

Off by default. Operators opt in by setting both:
  - `OFFICE_PREVIEW_LIBREOFFICE=true`
  - LibreOffice (`soffice` or `libreoffice`) on the server's `$PATH`

When either is missing, the dispatcher falls through to the
existing CDN/mammoth/slide-list pipeline so a misconfiguration
doesn't break previews.

Hardening (`packages/api/src/files/documents/libreoffice.ts`):
- Fresh subprocess per call with isolated temp dir, stripped env
  (PATH/HOME/TMPDIR only), and `-env:UserInstallation` so concurrent
  conversions can't collide on shared `~/.config/libreoffice` locks
- 30-second wall-time cap; SIGKILL on timeout
- 50 MB PDF output cap to bound disk pressure
- 512 KB output cap on the wrapped HTML so the SSE/cache contract
  stays intact (base64 inflates ~33%, effective PDF cap ~380 KB)
- Macros disabled via the default launch flags (`--norestore --invisible
  --nodefault --nofirststartwizard --nolockcheck`)
- Tag-distinct `LibreOfficeUnavailableError` /
  `LibreOfficeConversionError` so callers can swallow appropriately

Iframe wrapper (`buildPdfEmbedDocument`):
- Native browser PDF viewer via `<iframe src="data:application/pdf;
  base64,...">` — works in Chrome, Edge, Safari, Firefox
- CSP locks the iframe to `default-src 'none'; frame-src data:;
  connect-src 'none'; script-src 'unsafe-inline'` — no outbound
  network, no eval, no external scripts
- `#view=FitH` for first-paint sizing
- 4-second heuristic timer that swaps to a "Preview unavailable"
  fallback when the browser's PDF viewer is disabled (kiosk mode,
  Brave Shields, etc.)
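The wrapper shape can be sketched as follows (the `buildPdfEmbedDocument` name, CSP directive list, and `#view=FitH` fragment are from the commit; the surrounding markup is an assumption):

```typescript
/** Wrap a converted PDF as a self-contained page driving the browser's native viewer. */
function buildPdfEmbedDocument(pdf: Buffer): string {
  const src = `data:application/pdf;base64,${pdf.toString('base64')}#view=FitH`;
  const csp =
    "default-src 'none'; frame-src data:; connect-src 'none'; script-src 'unsafe-inline'";
  return (
    `<!doctype html><html><head>` +
    `<meta http-equiv="Content-Security-Policy" content="${csp}">` +
    `</head><body style="margin:0">` +
    `<iframe src="${src}" style="border:0;width:100vw;height:100vh"></iframe>` +
    `</body></html>`
  );
}
```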

Wired into `wordDocToHtml` and `pptxToHtml` as the first branch —
returns null when disabled / unavailable / oversized so the existing
pipeline takes over. XLSX intentionally NOT routed through this
path: SheetJS's HTML output is already excellent for spreadsheets
(sortable, sticky headers) and PDF rendering of sheets is awkward.

Tests (`libreoffice.spec.ts`, 30 cases — 25 always run, 5 conditional
on the binary): env-gating parser semantics matching
`OFFICE_PREVIEW_DISABLE_CDN`, fallthrough contract (never throws,
returns null on any failure), CSP lock-down, fallback structure,
binary probe caching + missing-binary path, error tagging, and
integration tests that engage when `soffice`/`libreoffice` is on
PATH (DOCX→PDF, PPTX→PDF, output-cap fallthrough). Integration
tests skip cleanly on bare CI.

* 🩹 fix: CI — preserve legacy download path for empty-text office attachments

Two regressions surfaced after the textFormat security gate landed.

1. **Client** (`LogContent.test.tsx` "falls back to the legacy download
   branch for an office file with no extracted text"):

   When the security gate downgraded an office type without
   `textFormat: 'html'` to PLAIN_TEXT, the lenient empty-text gate on
   PLAIN_TEXT then accepted a missing `text` field and rendered a
   half-empty panel card. The historical contract is "office type +
   no text → legacy download UI"; the downgrade should only fire when
   there's actual plain text that needs safe-escaping.

   Fix in `detectArtifactTypeFromFile`: short-circuit to null when the
   office type lands in the security-gate branch with no text. The
   PLAIN_TEXT downgrade still fires for legacy attachments that DO
   carry plain text.

2. **API** (`process.spec.js` + `process-traversal.spec.js`): the
   `@librechat/api` mocks didn't expose `getExtractedTextFormat`, so
   `processCodeOutput` called `undefined(...)` → TypeError → tests got
   undefined results. Added the helper to both mocks with a faithful
   default (returns 'text' for non-null extractor output, null
   otherwise).

Tests: new regression in `artifacts.test.ts` pinning the empty-text
+ no-textFormat → null contract for all four office types
(.docx/.csv/.xlsx/.pptx), so a future refactor can't silently
re-introduce the half-empty card.
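The combined gate after this fix can be sketched as a three-way routing decision (illustrative helper; the real logic lives inline in `detectArtifactTypeFromFile`):

```typescript
type TextFormat = 'html' | 'text' | null | undefined;

/** Route an office-typed attachment based on the backend trust flag and text presence. */
function routeOfficeAttachment(
  textFormat: TextFormat,
  hasText: boolean,
): 'office-html' | 'plain-text' | null {
  if (textFormat === 'html') return 'office-html'; // backend-sanitized full-document HTML
  if (!hasText) return null;                       // no text at all → legacy download UI
  return 'plain-text';                             // escaped by the markdown viewer, never executed
}
```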

* 🩹 fix: PPTX slides scale to fit panel width (no horizontal scroll)

Manual e2e on PR #12934: pptx-preview rendered slides at their native
init dimensions (960×540 default). The artifact panel is much narrower
than that, so the iframe got a horizontal scrollbar and only a corner
of each slide showed at any time — the user had to drag-scroll across
each slide to read it.

Fix: keep pptx-preview's init at 960×540 so its internal layout math
stays correct, then post-process each rendered slide:
- Cache the slide's native width/height on its dataset BEFORE
  applying any transform (so subsequent re-fits don't measure the
  already-transformed box).
- Wrap the slide in `.lc-slide-wrap` with explicit width/height set
  inline to the scaled dimensions; the wrap shrinks the layout space
  the slide occupies.
- Apply `transform: scale(panel_width / 960)` to the slide itself
  with `transform-origin: top left` so the rendered output shrinks
  from the top-left corner into the wrap.
- Cap the scale at 1.0 so small slides don't upscale and get blurry.

Streaming + resize:
- `MutationObserver` watches the container for slide insertions so
  streaming renders get scaled on arrival rather than waiting for
  the entire `previewer.preview` promise to settle.
- `ResizeObserver` re-fits all wrapped slides when the iframe
  resizes (panel drag, window resize).

Tests: new "bootstrap wraps + scales each slide" lock in the wrap
class, scale computation, observer setup, and native-size caching
so a future refactor can't silently re-introduce the overflow.

* 🩹 fix: PPTX wrap+scale runs after preview, not during streaming

Manual e2e on PR #12934: regenerated PPTX showed "Preview unavailable"
in the iframe. Root cause: the MutationObserver I added in the
previous commit fired during pptx-preview's render and moved slides
out from under the library's references. pptx-preview's async
pipeline raised an unhandled rejection, the iframe's window-level
listener caught it, and the fallback message replaced the partial
render.

Fix: drop the MutationObserver. Apply the wrap+scale ONCE in a
`finalize` step that runs:
  - On `previewer.preview().then` (the happy path)
  - On the 8-second timeout safety net IF the container has children
    (silent-failure path — pptx-preview emitted slides but never
    resolved its outer promise)

To prevent the user from seeing an unscaled flash while pptx-preview
renders into the 960px-wide canvas, the container is set to
`visibility: hidden` at init and only revealed inside `finalize`
after wrap+scale completes.

Resize handling stays via `ResizeObserver` on `document.body`,
installed AFTER the wrap pass so it doesn't fire during the wrap
itself.

Tests: regression assertion now also locks in:
  - `container.style.visibility = 'hidden' / 'visible'` (the flash-
    prevention contract)
  - Absence of MutationObserver (the bug we just removed — must NOT
    creep back in via a future "let's scale during streaming" idea)

* 🩹 fix: PPTX slides fill panel width (drop upscale cap, per-slide scale)

Manual e2e on PR #12934: slides rendered correctly but didn't fill the
artifact panel — whitespace on either side. Two issues:

1. The scale was capped at `Math.min(1, available / SLIDE_W)`. On
   panels wider than 960px, the cap clamped the scale to 1.0 and
   slides rendered at native size with whitespace on the sides
   instead of stretching.

2. The scale was computed against the constant `SLIDE_W = 960`, but
   pptx-preview can emit slides whose `offsetWidth` differs from the
   init param if the source PPTX has a non-16:9 layout. Per-slide
   division of `available / nativeW` handles that case.

Fix: replace `computeScale()` with two helpers — `availableWidth()`
returns the panel content-box width and `scaleFor(nativeW)` returns
the per-slide scale. No upscale cap. The slide content is rendered
by pptx-preview against its 960×540 canvas using vector text /
canvas — scaling up to e.g. 1500px doesn't visibly degrade quality.

Tests: regression now also asserts:
  - `availableWidth()` and `scaleFor()` exist by name
  - The exact scale formula `availableWidth() / (nativeW || SLIDE_W)`
  - Negative assertion that `Math.min(1, ...)` is NOT present, so a
    future "let's add an upscale cap" rewrite can't silently
    re-introduce the whitespace.

* 🩹 fix: PPTX preview fills panel height (no white gap below slides)

Manual e2e on PR #12934: PPTX preview filled the panel width but left
empty space below the last slide. DOCX didn't have this issue because
its content (mammoth-rendered HTML) flows naturally and either fits
exactly or overflows; PPTX slides are fixed-aspect 16:9 and don't
grow with the panel.

Two changes:

1. **Body fills the iframe viewport** — `html, body { min-height:
   100vh }` plus `body { display: flex; flex-direction: column }`
   and `#lc-render { flex: 1 0 auto }`. The dark theme bg now fills
   the iframe even when total slide content is shorter than the
   panel, so a single-slide deck never reveals a "white below" gap.

2. **Per-slide scale honors viewport height** — `scaleFor(nativeW,
   nativeH)` now returns `min(width-fit, height-fit)` (largest
   factor that fits without overflowing either dimension). On a
   tall artifact panel with a short deck, slides grow up to the
   full panel height instead of staying at the width-bound size.
   Height-fit was always the conceptually correct behavior, but the
   previous implementation used width-fit only, leaving half the
   viewport unused per slide.

Tests: regression now also asserts `availableHeight()`, the
`Math.min(sw, sh)` formula, and `min-height: 100vh` are in the
bootstrap. Negative assertion for the old `Math.min(1, ...)` upscale
cap remains.

* 🩹 fix: revert body flex on PPTX bootstrap (caused black-screen render)

Manual e2e regression on PR #12934: the previous commit added
`body { display: flex; flex-direction: column }` plus
`#lc-render { flex: 1 0 auto }` to fill the panel height. Side effect:
pptx-preview's internal layout assumes block flow on its ancestor
elements; making body a flex container caused slides to render as
solid-black rectangles (sized correctly, but with no visible content
inside).

Fix: keep just `html, body { min-height: 100vh }` for the bg-fill
effect — that alone gives empty space below short decks the dark
theme bg without changing flow. Drop the body-flex and the
`#lc-render { flex: 1 0 auto }` directives.

The height-aware `scaleFor(nativeW, nativeH)` from the same commit
stays — it doesn't interact with pptx-preview's layout, just chooses
a per-slide scale. Each slide still grows to fit the viewport
contain-style.

Negative-assertion added to the regression test: `body { display:
flex }` must NOT appear in the bootstrap, so a future "let's flex
the body to make height work" rewrite can't silently re-introduce
this.

(Note: the user also flagged DOCX theming as faint body text; I'm
leaving that for now per their note that it may be pre-existing.
Not addressed in this commit.)

* 🩹 fix: revert PPTX height-fill changes; lock DOCX CDN to light scheme

Two fixes for separate manual e2e regressions on PR #12934.

**1. PPTX black screen (single slide rendering as solid black).**

The previous fix removed `body { display: flex }` thinking that was
the sole cause, but the regression persisted. Bisecting against the
last known-good commit (4e2d538b0, width-fit only), the actual culprit
is the COMBINATION of:
- `min-height: 100vh` on html/body
- `availableHeight()` reading viewport-derived dimensions
- `Math.min(sw, sh)` height-aware scale

pptx-preview's CSS injection step interacts unpredictably with
these. Reverting to width-only `scaleFor(nativeW)` and dropping the
viewport min-height restores reliable rendering. Vertical empty
space below short decks now shows the body's bg color (`var(--bg)`)
which still matches the panel theme — that's an acceptable trade-off
vs. the black-screen regression.

Negative assertions added: `Math.min(sw, sh)`, `availableHeight`,
`min-height: 100vh`, `body { display: flex }` must NOT appear in
the bootstrap. So a future "let's fill height" rewrite has to
demonstrate it doesn't break pptx-preview before it can land.

**2. DOCX body text rendering as faint / translucent grey.**

docx-preview emits page-style rendering with white pages and the
doc's native text colors. The CDN doc declared
`color-scheme: light dark`, so on OS dark mode the iframe's
inheritable `--fg` resolved to `#e5e7eb` (light grey). docx-preview
body text (no explicit color in the source DOCX) inherited that
light grey on the white page bg → barely-visible "translucent"
rendering.

Fix: declare `color-scheme: light` only in `buildDocxCdnDocument`,
drop the dark-mode `@media` override. docx-preview is a light-mode-
only renderer; matching that produces correct contrast regardless
of OS theme. The mammoth-only `wrapAsDocument` path is unaffected
— it owns its own bg + text colors and continues to respect the
user's OS scheme.

New regression test pins the lock: CDN doc must contain
`color-scheme: light`, must NOT contain `color-scheme: light dark`,
must NOT contain `prefers-color-scheme: dark`.

* 🩹 fix: relax connect-src to allow sourcemap fetches (silence CSP noise)

Manual e2e on PR #12934: every time DevTools is open while viewing a
DOCX or PPTX preview, the console fills with CSP violations like:

  Connecting to 'https://cdn.jsdelivr.net/npm/docx-preview@0.3.7/
  dist/docx-preview.min.js.map' violates the following Content
  Security Policy directive: "connect-src 'none'". The request has
  been blocked.

The actual rendering isn't affected (sourcemap fetches happen AFTER
the script has already loaded and executed via `script-src`), but
the noise is enough to make people suspect a real problem and
distracts from useful console output.

Fix: relax `connect-src` from `'none'` to `'self' https://cdn.
jsdelivr.net` in both DOCX and PPTX CDN docs. This allows:
  - Same-origin fetches (sandpack-static-server) — covers any
    bundler-embedded sourcemaps + same-origin runtime fetches the
    renderer might make
  - jsdelivr fetches — covers sourcemaps from the CDN where we
    loaded the script

Exfiltration risk stays minimal: the iframe is cross-origin to
LibreChat so an attacker can't read application data anyway, and
neither 'self' (sandpack-static-server) nor jsdelivr is a useful
target for exfiltrating slide content to a host the attacker
controls.

Tests updated: assertions for `connect-src 'none'` swapped to
`connect-src 'self' https://cdn.jsdelivr.net` for both DOCX + PPTX
CDN docs. Added negative assertion for wildcard `*` in connect-src
so a future "let's allow everything" rewrite can't widen the
exfiltration surface.

* 🩹 fix: surface PPTX/DOCX fallback reason (inline + console)

Manual e2e on PR #12934: "Preview unavailable" appears in the iframe
with no way to know what actually failed. The reason was tucked into
the fallback element's `title` attribute (hover-only tooltip) — easy
to miss and impossible to copy/paste.

Now surfaces three ways:
  1. Visible inline via a `<details>` element with the reason in
     monospace, folded so the friendly message stays primary but the
     diagnostic is one click away in the iframe itself.
  2. `title` attribute (preserved) for hover tooltip.
  3. `console.error('[pptx-preview] fallback fired:', reason)` so
     DevTools shows it in red — also the only reliable way to see
     the reason if the iframe is detached / re-mounted.

DOCX gets the same console mirror (as `console.warn` since the
fallback there is "high-fidelity unavailable, showing simplified
preview" — informational, not error). The DOCX fallback already
displays the mammoth-rendered content visibly, so no `<details>`
needed there.

Tests: regression assertions pin the diagnostic surfacing — the
`<details>` element, the `title` write, and the `console.error`
call must all be present in the bootstrap.

* 🩹 fix: PPTX CDN embeds slide-list fallback + detects empty renders

Manual e2e + DOM inspection on PR #12934: pptx-preview silently
produces empty `.pptx-preview-wrapper` placeholders for pptxgenjs-
generated decks. The library parses the file enough to create the
960×540 host element with a black bg, then fails to populate it.
The outer Promise resolves "successfully" — no throw, no rejection,
the bootstrap thinks rendering succeeded — and the user sees a black
rectangle with no content and no fallback message.

Fix mirrors the DOCX mammoth-fallback pattern from commit 0c0b0ce88:

1. **Server side**: `pptxToHtml` now renders the slide-list body
   (`<ol class="lc-pptx-list">...`) via the new `renderPptxSlidesBody`
   helper, then embeds it inside the CDN doc via the new
   `buildPptxCdnDocument(base64, slideListFallbackBody)` signature.
   Combined-doc size budget mirrors the DOCX pattern: if the CDN doc
   would exceed `OFFICE_HTML_OUTPUT_CAP` (512 KB), drop to slide-list
   only.

2. **Iframe bootstrap**: new `hasRenderedContent()` check after
   `wrapSlides()` walks each `.lc-slide-wrap` looking for actual
   child content inside pptx-preview's emitted slide nodes. If every
   wrap is empty, fires `showFallback('renderer-produced-empty-
   wrappers ...')` which reveals the embedded slide-list view
   instead of the previous static "Preview unavailable" message.

3. **CSS**: slide-list rules extracted to `PPTX_SLIDE_LIST_CSS`
   constant so they can be inlined into both the standalone slide-
   list document AND the CDN doc's `<style>` block (CSP `style-src`
   is `'unsafe-inline'` only — no external sheets).

`renderPptxSlidesHtml` now delegates to `renderPptxSlidesBody`
wrapped in `wrapAsDocument` — single source of truth for the slide
markup.

Tests (506 passing, +1 vs before): existing `pptxToHtmlViaCdn`
call sites updated for the new fallback-body argument; new
regression test pins `hasRenderedContent`, the
`renderer-produced-empty-wrappers` reason string, the embedded
fallback structure, and the inlined slide-list CSS.

* fix: Detect Empty PPTX Preview Slides

* 🩹 fix: LibreOffice PDF embed uses blob: URL (Chrome blocks data: PDFs)

Manual e2e on PR #12934: enabling `OFFICE_PREVIEW_LIBREOFFICE=true`
on a host with `soffice` installed surfaced "This page has been
blocked by Chrome" inside the PDF preview iframe.

Root cause: Chrome blocks `data:application/pdf;base64,...`
navigations inside sandboxed iframes (anti-phishing measure since
Chrome 76, see crbug.com/863001). The Sandpack iframe IS sandboxed
(its `sandbox="..."` attribute lacks `allow-top-navigation` for
data: URLs specifically), so when our inner `<iframe src="data:
application/pdf;...">` tries to navigate, Chrome's interstitial
fires and renders the "blocked" message.

Fix: switch from `data:` URL to `blob:` URL. The bootstrap now:
  1. Reads the base64 payload from a `<script type="application/
     octet-stream;base64">` data block (same pattern as the DOCX
     and PPTX wrappers).
  2. Decodes via `atob` + `Uint8Array.from`.
  3. Creates a `Blob` with `type: 'application/pdf'`.
  4. `URL.createObjectURL(blob)` produces a same-origin blob: URL.
  5. Sets `pdfFrame.src = url + '#view=FitH'` — Chrome treats blob:
     URLs as legitimate navigation and serves the built-in PDF
     viewer.
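The decode-and-wrap steps above reduce to a small helper (a sketch of the pipeline, not the bootstrap's exact code — the data-block read is elided and the base64 string is passed in directly):

```javascript
// base64 payload → Uint8Array → Blob → same-origin blob: URL.
// atob/Blob/URL.createObjectURL are browser globals (also in Node 18+).
function base64ToBlobUrl(base64, mime) {
  const bytes = Uint8Array.from(atob(base64), (c) => c.charCodeAt(0));
  const blob = new Blob([bytes], { type: mime });
  return URL.createObjectURL(blob);
}
```

The bootstrap then sets `pdfFrame.src = base64ToBlobUrl(payload, 'application/pdf') + '#view=FitH'`.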

CSP updated: `frame-src blob:` (was `frame-src data:`). `data:` is
now explicitly NOT allowed in `frame-src` since Chrome would block
it anyway in our context — keeping it would be misleading
documentation.

Bonus: failure paths now log to `console.error` with a
`[libreoffice-pdf]` prefix so DevTools surfaces blob-creation
failures and PDF-viewer load timeouts in red.

Tests updated:
- "emits a complete sandboxed HTML document" now asserts the
  data-block + blob URL construction (not the old data: URL).
- New CSP test "allows blob: in frame-src (NOT data:)" with both
  positive and negative assertions to lock in the change.
- Integration test for `tryLibreOfficePreview` updated to look for
  the data block + `URL.createObjectURL` instead of the data: URL.
- Large-payload test now verifies the data block round-trip rather
  than data: URL escaping (base64 alphabet has no characters that
  break out of `<script>` anyway).

* 🩹 fix: LibreOffice PDF embed renders via pdf.js (Chrome blocks blob: PDFs too)

Manual e2e on PR #12934 round 2: switching from `data:` to `blob:`
URLs (commit d90f26c11) didn't fix the "This page has been blocked
by Chrome" interstitial. Chrome blocks BOTH data: AND blob: PDF
navigations inside sandboxed iframes — the built-in PDF viewer
requires a top-level browsing context. The Sandpack host iframe is
sandboxed, so neither approach works.

Fix: switch from native browser PDF viewer to pdf.js (Mozilla's
pdfjs-dist) loaded from CDN. pdf.js renders to `<canvas>` which
works in any context — no plugin, no privileged viewer, no
top-level requirement. ~1 MB CDN load is acceptable for a path
that's already opt-in via `OFFICE_PREVIEW_LIBREOFFICE=true`.

Implementation:
- Pin pdf.js v3.11.174 (single-file UMD; v4+ uses ES modules which
  complicate the load + SRI flow)
- Worker URL pointed at the same jsdelivr origin; CSP `worker-src
  https://cdn.jsdelivr.net blob:` allows it
- DPR-aware canvas rendering: scale based on `panelWidth /
  page.viewport.width * devicePixelRatio` so retina displays get
  crisp pixels
- Sequential page rendering (Promise chain) so a many-slide PDF
  doesn't spawn N parallel render jobs
- 15 s timeout safety net (was 4 s for the native viewer; pdf.js
  with DPR=2 on a many-page PDF can take longer)
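The DPR-aware scale and the sequential Promise chain can be sketched as follows (`pdfDoc` and `renderPage` are hypothetical stand-ins for pdfjs-dist's document/page objects, so the chaining pattern itself stays testable):

```javascript
// Scale from the commit text: fit the panel width, then multiply by
// devicePixelRatio so retina displays get crisp canvas pixels.
function canvasScale(panelWidth, pageViewportWidth, devicePixelRatio) {
  return (panelWidth / pageViewportWidth) * devicePixelRatio;
}

// Sequential page rendering: each page waits for the previous one, so
// an N-page PDF never spawns N parallel render jobs.
function renderSequentially(pdfDoc, renderPage) {
  let chain = Promise.resolve();
  const order = [];
  for (let n = 1; n <= pdfDoc.numPages; n++) {
    chain = chain.then(() => renderPage(n)).then(() => order.push(n));
  }
  return chain.then(() => order);
}
```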

CSP changes:
- Added `script-src https://cdn.jsdelivr.net 'unsafe-inline'` (was
  inline-only)
- Added `worker-src https://cdn.jsdelivr.net blob:`
- Removed `frame-src` entirely (no nested iframes)
- Removed `object-src` (no `<object>`/`<embed>` either)

Same diagnostic surfacing as the other CDN paths: failure reasons
shown via `<details>` disclosure inline + `console.error` to
DevTools.

Tests updated: PDF.js script presence, GlobalWorkerOptions setup,
canvas render path, all the new failure detection paths. Negative
assertions for both `data:application/pdf` and `blob:...application
/pdf` so a future "let's just try the native viewer again" rewrite
can't silently re-introduce the Chrome block.

SRI hashes intentionally omitted (unlike docx-preview / pptx-
preview) — operator opted in by setting the env flag and trusts
the LibreOffice render pipeline. Worth adding once the path is
proven in production.

* 🧹 cleanup: trim unused _internal exports + stale JSDoc references

After the LibreOffice + pdf.js path proved out, swept the office HTML
modules for dead code and stale documentation.

**Unused `_internal` exports removed (`html.ts`):**
  - `renderMammothBody` — only called within the file (by
    `wordDocToHtmlViaMammoth` and `wordDocToHtml`), never imported by
    tests.
  - `DOCX_PREVIEW_CDN` — internal config constant, never referenced.
  - `PPTX_PREVIEW_CDN` — same, never referenced.

The remaining `_internal` surface (`wordDocToHtmlViaCdn`,
`wordDocToHtmlViaMammoth`, `pptxToHtmlViaCdn`,
`MAX_DOCX_CDN_BINARY_BYTES`, `MAX_PPTX_CDN_BINARY_BYTES`,
`OFFICE_HTML_OUTPUT_CAP`) is all actively used by the spec file.

**Stale JSDoc fixed (`libreoffice.ts`):**

Module-level header still claimed we "embed the PDF as a base64
data:application/pdf URI" and "rely on the host browser's built-in
PDF viewer". Both untrue after the pdf.js switch in commit b2cc81ad8.
Updated to:
  - Describe the actual pipeline: PPTX → soffice → PDF → pdf.js → canvas
  - Document the dead-end iterations (data: blocked, blob: also blocked,
    pdf.js works) so future readers don't re-discover the same Chrome
    PDF-viewer-in-sandboxed-iframe limitation
  - Drop "(POC)" tag — the path is production-quality, just opt-in
  - Adjust disk footprint estimate (250-350 MB with
    `--no-install-recommends` is more accurate than the 500 MB original)

No production code changes; tests still 505 passing.

*  feat: per-format LibreOffice opt-in (env value accepts format list)

Manual e2e on PR #12934: enabling `OFFICE_PREVIEW_LIBREOFFICE=true`
forces both DOCX and PPTX through the LibreOffice path. DOCX renders
~instantly via docx-preview and rarely needs the LibreOffice
treatment; paying the ~2-3 s cold-start there hurts UX without
adding much.

Solution: extend the env var to accept three forms:
  - Truthy (`true`/`1`/`yes`): all formats — backwards compatible
    with the previous behavior
  - Falsy (`false`/`0`/`no`/empty/unset): no formats — default
  - Comma-separated list (`pptx`, `pptx,docx`): just those formats

Practical guidance documented in the module header: most operators
will set `OFFICE_PREVIEW_LIBREOFFICE=pptx` — pptx-preview chokes on
pptxgenjs decks and the slide-list fallback loses formatting, so
LibreOffice is the only path that produces a faithful PPTX preview.
DOCX is well-served by docx-preview's existing CDN renderer.

API:
- New `isLibreOfficeEnabledFor(format)` is the per-format gate, used
  by `tryLibreOfficePreview` to short-circuit before doing work.
- Existing `isLibreOfficeEnabled()` retained for "any format
  enabled" diagnostic checks (returns true if at least one format
  is opted in).
- Internal `parseLibreOfficeEnablement` returns `'all' | Set | null`
  — keeps the gate future-proof: adding a new format to the
  LibreOffice route doesn't require operators to re-enumerate their
  env value.

Edge cases handled:
- Whitespace-tolerant: `  pptx  ,  docx  ` works
- Case-insensitive on both env value AND format name
- Empty list entries dropped: `pptx, ,docx` enables pptx + docx
- Empty string treated as unset (not as a valid empty list)
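The parse semantics and per-format gate above can be sketched like this (a minimal model of the documented contract, not the module's exact code — the real gate reads `process.env` itself):

```javascript
// Returns 'all' | Set<string> | null per the documented contract.
function parseLibreOfficeEnablement(raw) {
  const value = (raw ?? '').trim().toLowerCase();
  if (value === '' || ['false', '0', 'no'].includes(value)) {
    return null; // no formats enabled (default; empty string = unset)
  }
  if (['true', '1', 'yes'].includes(value)) {
    return 'all'; // truthy: every format (backwards compatible)
  }
  // Comma-separated list: whitespace-tolerant, empty entries dropped.
  const formats = value.split(',').map((s) => s.trim()).filter(Boolean);
  return formats.length > 0 ? new Set(formats) : null;
}

function isLibreOfficeEnabledFor(format, enablement) {
  if (enablement === 'all') return true;
  if (enablement === null) return false;
  return enablement.has(format.toLowerCase()); // case-insensitive
}
```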

Tests: 21 new cases pinning the parse semantics + per-format gate
(`pptx` env vs `docx` lookup → false, etc.). Existing
`isLibreOfficeEnabled` tests retained but renamed to clarify the
"any format" semantic.

Total file tests: 526 passing (+21 vs before).

* 🔒 fix: officeHtmlBucket only does MIME fallback when extension is empty

Codex P2 review on PR #12934: the server's `officeHtmlBucket` falls
back to MIME whenever the extension isn't an OFFICE extension. The
client's `detectArtifactTypeFromFile` is stricter — it routes by
extension first for ANY known extension (`.txt` → PLAIN_TEXT,
`.md` → MARKDOWN, `.py` → CODE, etc.), only falling back to MIME
when the extension is unknown.

Mismatch case: `notes.txt` shipped with `Content-Type: application/
vnd.openxmlformats-officedocument.wordprocessingml.document`. Server
runs `officeHtmlBucket` → extension `.txt` not office → MIME fallback
→ 'docx' → produces full HTML, sets `textFormat: 'html'`. Client
routes by extension to PLAIN_TEXT (extension wins), markdown viewer
escapes the HTML, user sees raw `<html>...` markup instead of the
rendered preview.

Fix: server only falls back to MIME when extension is genuinely empty
(extensionless filename). Symmetric with the client's "extension
wins for any known extension" semantic — neither will mis-route.
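The tightened predicate reduces to this shape (a sketch of the routing contract; `OFFICE_EXTS` / `OFFICE_MIMES` are illustrative stand-ins for the real lookup tables):

```javascript
const OFFICE_EXTS = new Set(['docx', 'pptx']);
const OFFICE_MIMES = new Map([
  ['application/vnd.openxmlformats-officedocument.wordprocessingml.document', 'docx'],
  ['application/vnd.openxmlformats-officedocument.presentationml.presentation', 'pptx'],
]);

function officeHtmlBucket(filename, mime) {
  const dot = filename.lastIndexOf('.');
  const ext = dot > 0 ? filename.slice(dot + 1).toLowerCase() : '';
  if (ext !== '') {
    // Extension wins for ANY extension — never fall back to MIME here.
    return OFFICE_EXTS.has(ext) ? ext : null;
  }
  // Genuinely extensionless filename: MIME fallback is allowed.
  return OFFICE_MIMES.get(mime) ?? null;
}
```

So `notes.txt` with a DOCX Content-Type routes to null (client's extension routing wins), while an extensionless `report` with the same MIME still gets the office HTML path.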

Trade-off: a true DOCX renamed to `myfile.bin` with the canonical
DOCX MIME no longer routes through office HTML on the server. The
client would have routed to the office bucket via MIME, then the
security gate (`textFormat !== 'html'`) would have downgraded to
PLAIN_TEXT anyway. So the user-visible outcome is the same (raw
bytes via PLAIN_TEXT) — the new behavior just avoids producing HTML
that the client would never use.

Long-term fix: share the extension routing table in data-provider
so both server and client query the same source of truth. Out of
scope for this PR.

Tests: new 8-case `it.each` block in `officeHtmlBucket predicate`
locks in the contract — `.txt`/`.md`/`.json`/`.py`/`.html`/`.css`
+ office MIME → null, and `.bin`/`.dat` + office MIME → null too.
Existing extension-wins tests still pass unchanged.

Total file tests: 534 (+8 vs before).
2026-05-05 12:06:10 +09:00
Danny Avila
c7f38d9621
🛡️ fix: Validate Avatar URL Before Fetch (#12928)
`resizeAvatar` previously called `node-fetch` on any string input with
no validation. When OIDC providers surface a user-controllable
`picture` claim, this could be used to make blind SSRF requests to
internal services on every social login.

Wrap the URL fetch with:
- An allowlist on the URL protocol (http/https only).
- The shared `createSSRFSafeAgents` utility, which blocks resolution to
  private, loopback, and link-local IPs at TCP connect time
  (TOCTOU-safe; works equally for hostname targets that DNS-resolve
  privately and for IP-literal targets, since Node's `net.Socket`
  always dispatches through the agent's `lookup` hook).
- `redirect: 'error'` so a public-IP redirect target cannot be used to
  bypass the agent check on a subsequent hop.
- A 5-second total request budget (node-fetch v2's `timeout` covers
  request initiation through full body receipt, bounding slow-loris
  exposure rather than just the TCP connect).
- A 10 MB response cap (`size` option + `Content-Length` pre-check +
  post-read length assertion) so a hostile payload cannot exhaust
  memory before `sharp()` rejects it.

Fetch the canonicalized `parsed.href` rather than the raw input string
to eliminate any future parser-differential between `new URL()` and
the underlying fetch implementation.

Per-call agent construction is intentional: the avatar path runs once
per social login per user, so pooling adds complexity without a
measurable benefit. Documented inline.

Comprehensive test coverage in `avatar.spec.js`:
- Rejects malformed URLs, non-http(s) schemes (file://, data:,
  javascript:).
- Asserts the happy-path canonicalization (`fetch` is called with
  `parsed.href`) and the SSRF-safe agent factory routing
  (https→httpsAgent, http→httpAgent).
- Rejects non-2xx HTTP status.
- Rejects an oversized Content-Length before reading the body, and
  asserts `.buffer()` is never invoked in that case.
- Rejects an oversized body even when the server lies about / omits
  Content-Length.
- Surfaces ESSRF, redirect, and `size` overflow errors thrown by the
  fetch layer.
- Confirms Buffer inputs bypass the fetcher entirely.
2026-05-04 11:16:40 +09:00
Danny Avila
4a5fc701d2
📂 fix: Preserve Nested Skill Paths in Code-Env Uploads (#12877)
* fix(code): preserve code env upload filepaths

* chore: Reorder import statements in crud.js
2026-04-29 08:07:46 -04:00
Danny Avila
46a86d849f
🛂 fix: Skip Inherited / Mark Skill Files Read-Only in Code-Env Pipeline (#12866)
* 🛂 fix: Skip Re-Download of Inherited Code-Env Files (No More 403 Storms)

When a bash/code-interpreter call lists or operates on inputs the user
already owns (skill files primed via primeInvokedSkills, files inherited
from a prior session), codeapi echoes those files back in the tool
result with `inherited: true`. We were treating every entry as a
generated artifact and calling processCodeOutput on each, which:

1. Hit `/api/files/code/download/<session_id>/<file_id>` with the
   user's session key. Skill files are uploaded under the skill's
   entity_id, so every download 403'd — producing dozens of
   "Unauthorized download" log lines per turn.

2. Surfaced those inputs as ghost file chips in the UI even though
   they were never generated by the run.

3. Wasted a download round-trip even when no auth boundary was
   crossed — the file is already persisted at its origin.

Fix: skip files where `file.inherited === true` in all three
artifact-files loops (`tools.js`, `createToolEndCallback`, and
`createResponsesToolEndCallback`). Skill files remain available to
subsequent calls via primeInvokedSkills / session inheritance — we
just don't redundantly re-download them.
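The skip applied in each of the three loops amounts to a one-line filter (a minimal sketch; the real loops do per-file processing rather than a bulk filter):

```javascript
// Entries echoed back with inherited === true are inputs the session
// already owns — don't re-download or surface them as new artifacts.
function selectGeneratedArtifacts(files) {
  return files.filter((file) => file?.inherited !== true);
}
```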

Pairs with codeapi-side change that adds the `inherited` flag.

* 🔒 feat: Mark Skill Files as `read_only` During Code-Env Priming

Pairs with codeapi `read_only` upload flag (ClickHouse/ai#1345). When
LibreChat primes a skill into the code-env, every file in the batch
(SKILL.md plus all bundled scripts/schemas/docs) is now uploaded with
`read_only: true`. Codeapi seals these inputs at the filesystem layer
(chmod 444) and the walker echoes the original refs as `inherited:
true` regardless of whether sandboxed code modified the bytes on disk.

Without this, the previous PR's `inherited` skip handled only the
unchanged case. A modified skill file (pip writing pyc near a .py, a
script accidentally truncating LICENSE.txt, etc.) still flowed through
the modified-input branch on codeapi, got a fresh user-owned file_id,
uploaded as a "generated" artifact, and surfaced in the UI as a chip
the user couldn't actually authorize a download for.

Changes:

- `api/server/services/Files/Code/crud.js`:
  `batchUploadCodeEnvFiles({ ..., read_only })` forwards the flag as
  a multipart form field. Default `false` preserves existing behavior
  for user-attached files and prior-session inheritance.

- `packages/api/src/agents/skillFiles.ts`: type signature gains
  `read_only?: boolean`; `primeSkillFiles` passes `true`.

- `packages/api/src/agents/skillFiles.spec.ts`: assert the upload call
  carries `read_only: true`.

The flag is intentionally not skill-specific. Any future
infrastructure-input flow (system fixtures, cached datasets, etc.) can
opt in the same way.
2026-04-29 08:26:25 +09:00
Danny Avila
c9dee962e7
📂 fix: Preserve Nested Folder Paths for Code-Execution Artifacts (#12848)
* 📂 fix: Preserve Nested Folder Paths for Code-Execution Artifacts

When codeapi reports a generated file at a nested path (`a/b/file.txt`),
`processCodeOutput` was running it through `sanitizeFilename` — which
calls `path.basename()` and then collapses `/` to `_`. The DB row ended
up with `filename: "file.txt"`, `primeFiles` shipped that flat name back
to the next sandbox session, and `cat /mnt/data/a/b/file.txt` 404'd.

Fix: split the sanitizer into two helpers in `packages/api/src/utils/files.ts`:

  - `sanitizeArtifactPath` — segment-wise sanitize while preserving
    `/`. Falls back to basename on `..` traversal, absolute paths, and
    other malformed inputs. The DB record uses this so the next prime()
    can recreate the nested path in the sandbox.

  - `flattenArtifactPath` — encode `/` as `__` for the local
    `saveBuffer` strategies, which key by single-component filename and
    would otherwise create unintended subdirectories under uploads/.

`process.js` is updated to use both: DB filename keeps the path, storage
key flattens. `claimCodeFile` is also keyed on `safeName` so the
(filename, conversationId) compound key stays consistent with the
record `createFile` writes.

Tests:
  +13 unit tests in `files.spec.ts` (sanitizeArtifactPath table,
  flattenArtifactPath round-trip).
  +1 integration test in `process.spec.js` asserting the DB-row vs
  storage-key split for a nested path.
  Updated `process-traversal.spec.js` to mock the new helpers.

64 pass / 0 fail across `Files/Code/`; 36 pass / 0 fail in
`packages/api/src/utils/files.spec.ts`.

Companion: ClickHouse/ai#1327 — the codeapi-side counterpart that stops
phantom file IDs from reaching this code path in the first place. These
two are independent but the matplotlib bug is most cleanly resolved when
both ship.

* 🛡️ fix: Re-add 255-char per-segment cap in sanitizeArtifactPath (codex review P2)

`sanitizeArtifactPath` dropped the 255-char basename cap that
`sanitizeFilename` enforces. Long artifact names then flowed unbounded
into `processCodeOutput`'s storage key (`${file_id}__${flatName}`) and
tripped `ENAMETOOLONG` on filesystems that enforce `NAME_MAX` —
saveBuffer fails, and the file falls back to a download URL instead of
persisting / priming. This was a regression specifically for flat
filenames that the original `sanitizeFilename` would have truncated
safely.

Re-add the cap as a per-path-component limit so it applies cleanly to
both flat and nested paths:

  - Leaf segment: extension-preserving truncation, matching
    `sanitizeFilename`'s shape (`<truncated-stem>-<6 hex>.<ext>`).
  - Non-leaf (directory) segments: plain truncate-and-disambiguate
    (`<truncated-name>-<6 hex>`); directory names don't carry semantic
    extensions worth preserving.
  - Defensive fallback when `path.extname` returns a pathologically long
    "extension" (e.g. `_.aaaa…aaa` after the dotfile underscore prefix
    rewrite turns a long hidden file into a non-dotfile with a 300-char
    "extension"): collapse to whole-segment truncation rather than
    leaving the cap unmet.

+6 unit tests covering: long leaf (regression case), long leaf under a
preserved directory, long non-leaf segment, deeply nested mixed-length,
exact-255 boundary (no truncation), and the dotfile + truncation
interaction.

* 🛡️ fix: Cap flattened storage key against NAME_MAX in processCodeOutput (codex review P1)

Per-segment caps on the path-preserving form aren't enough. Once segments
are joined with `__` for the storage key, deeply-nested or moderately
long paths can still produce a flat form that overflows once
`${file_id}__` is prepended — `${file_id}__a__b__c.csv` for a 3-level
100-char-each path is ~344 chars, well past filesystem NAME_MAX (255).
saveBuffer then trips ENAMETOOLONG and falls back to a download URL,
and the artifact never persists / primes.

`flattenArtifactPath` gets an optional `maxLength` parameter. When set,
the function truncates the flat form to fit, preserving the leaf
extension with the same disambiguating-hex-suffix shape sanitizeFilename
uses. Default (`undefined`) keeps existing call sites uncapped — the cap
is opt-in for callers that are actually building a filesystem key.
Pathologically long "extensions" from `path.extname` (e.g. `.aaaa…aaa`)
fall back to whole-key truncation rather than leaving the cap unmet.

processCodeOutput composes the storage key after `file_id` is known and
passes `255 - file_id.length - 2` as the budget so the full
`${file_id}__${flatName}` string fits in one filesystem path component.
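The budget arithmetic above can be sketched directly (a plain slice stands in for `flattenArtifactPath`'s extension-preserving truncation):

```javascript
// Storage key is `${file_id}__${flatName}` and must fit a single
// 255-byte NAME_MAX filesystem component.
function storageKeyBudget(fileId, nameMax = 255) {
  return nameMax - fileId.length - 2; // 2 = the '__' separator
}

function buildStorageKey(fileId, flatName) {
  const budget = storageKeyBudget(fileId);
  const capped = flatName.length > budget ? flatName.slice(0, budget) : flatName;
  return `${fileId}__${capped}`;
}
```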

+7 unit tests in files.spec.ts:
  - Pass-through when no maxLength supplied (cap is opt-in).
  - Pass-through when flat form fits within maxLength.
  - Truncation with leaf extension preserved (the regression case).
  - Leaf-only overflow with extension preservation.
  - Pathological long-extension fallback (whole-key truncation).
  - No-extension stem truncation.
  - Boundary equality (off-by-one guard).

+1 integration test in process.spec.js: processCodeOutput passes the
file_id-aware budget (`255 - file_id.length - 2`) to flattenArtifactPath.

114/114 across files.spec.ts + Files/Code (49 + 65).

* 🛡️ fix: Determinize + clamp artifact-path truncation (codex review P2 ×2)

Two follow-ups to Codex review on the path/flat-key cap:

1. **Deterministic truncation suffixes**. The previous helpers used
   `crypto.randomBytes(3)` for the disambiguator, mirroring
   `sanitizeFilename`'s shape. That made the truncated form non-
   deterministic: a re-upload of the same long filename would compute a
   *different* storage key, orphaning the previous on-disk file under
   the reused `file_id` returned by `claimCodeFile`.

   New `deterministicHexSuffix(input)` helper hashes the input with
   SHA-256 and takes the first 6 hex chars. Same input → same suffix
   (storage key stable across re-uploads); different inputs sharing a
   truncation prefix still get different suffixes (collision avoidance).
   24 bits ≈ 16M values is collision-safe for our scale (single-digit
   artifacts per turn per (filename, conversationId) bucket).

   Applied to `truncateLeafSegment`, `truncateDirSegment`, and
   `flattenArtifactPath` — every truncation site in the new helpers.
   `sanitizeFilename` (pre-existing) is intentionally left alone; its
   tests rely on the random-bytes mock and it's outside this PR's scope.

2. **Final clamp on flattenArtifactPath result**. The old `Math.max(1,
   maxLength - ext.length - 7)` floor could let the result slip past
   `maxLength` when the extension was nearly as large as the budget
   (e.g. `maxLength=5`, `ext=".txt"`: budget computed as 0, but result
   was `-<6 hex>.txt` = 11 chars). Drop the `Math.max(1, …)` floor and
   add a final `truncated.slice(0, maxLength)` so the contract holds
   for any input. Also short-circuit `maxLength <= 0` to `''` for
   pathological budgets.

Tests updated to compute the expected hash inline (the existing
`randomBytes` mock doesn't apply to the new code path), plus 4 new
regression tests:
  - sanitizeArtifactPath: same input → same output, different inputs →
    different outputs (determinism + collision avoidance).
  - flattenArtifactPath: same input → same output, different inputs
    sharing a truncation prefix → different outputs.
  - flattenArtifactPath: clamp holds when ext.length > maxLength - 7.
  - flattenArtifactPath: returns '' for maxLength <= 0.

53 unit tests pass. 65 integration tests pass.

* 🛡️ fix: Total-path cap + basename for classifier (codex P2 + comprehensive review)

Four follow-ups from the latest reviews on this PR:

1. **Codex P2: total-path cap in sanitizeArtifactPath**. Per-segment
   caps weren't enough — a deeply nested path (3+ at-cap segments) can
   still produce a joined form past Mongo's 1024-byte indexed-key limit
   (4.0 and earlier reject; later versions configurable). Added
   `ARTIFACT_PATH_TOTAL_MAX = 512` and a leaf-only fallback when the
   joined form exceeds it. Same shape as the absolute-path /
   `..`-traversal fallbacks above; the leaf is already segment-capped to
   ≤255, so the final result stays within bounds.

2. **Codex P2: pass basename to classifier/extractor in process.js**.
   With the path-preserving sanitizer, `safeName` can now be a nested
   string like `reports.v1/Makefile`. The classifier's `extensionOf`
   reads that as `v1/Makefile` (the slice after the dot in the directory
   name) and the bare-name branch rejects because it sees a `.`
   anywhere. Result: extensionless artifacts under dotted folders
   (Makefile, Dockerfile, etc.) get misclassified as `other` and skip
   text extraction. Pass `path.basename(safeName)` to both
   `classifyCodeArtifact` and `extractCodeArtifactText` so
   classification matches what the old flat-name flow produced.

3. **Review nit: drop dead `sanitizeFilename` mock in process.spec.js**.
   process.js no longer imports `sanitizeFilename`; the mock was
   misleading dead code.

4. **Review nit: rename misleading `'embedded parent traversal'` test**.
   `path.posix.normalize('a/../escape.txt')` resolves to `escape.txt`
   which goes through the normal segment-split path, not the
   `sanitizeFilename` fallback. Test name now says "resolves embedded
   parent traversal via path normalization" to match the actual code
   path.
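The total-path cap from item 1 can be sketched as follows (constant name from the commit; the fallback shape is an assumption based on the description):

```typescript
const ARTIFACT_PATH_TOTAL_MAX = 512;

// Leaf-only fallback: each segment is already capped at <= 255, so when the
// joined form would exceed the 512-char total budget, keep just the leaf.
function applyTotalPathCap(segments: string[]): string {
  const joined = segments.join('/');
  if (joined.length <= ARTIFACT_PATH_TOTAL_MAX) return joined;
  return segments[segments.length - 1];
}
```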

+3 regression tests:
  - sanitizeArtifactPath falls back to leaf-only when joined > 512.
  - sanitizeArtifactPath keeps nested path within the 512 budget.
  - process.spec: passes basename (`Makefile` from `reports.v1/Makefile`)
    to classifyCodeArtifact + extractCodeArtifactText.

Existing "caps every segment in a deeply-nested path" test now uses 2
segments (not 3) so the joined form stays under the new total cap; the
3-segment scenario is covered by the new fallback test instead.

55 unit + 66 integration = 121/121 pass.

* 📝 docs: Correct sanitizeArtifactPath JSDoc to match actual schema index

Two doc-only fixes from the latest comprehensive review (both NIT):

1. **Index field list was wrong**. JSDoc claimed the compound unique
   index was `{ file_id, filename, conversationId, context }`. The
   actual index in `packages/data-schemas/src/schema/file.ts:92-95` is
   `{ filename, conversationId, context, tenantId }` with a partial
   filter for `context: FileContext.execute_code`. The cap rationale
   (Mongo 4.0 indexed-key limit) is correct and unchanged; just the
   field list was wrong. Added the schema file path so future readers
   can find the source of truth.

2. **Trade-off acknowledgement**. The reviewer noted that the
   leaf-only fallback loses directory structure, which means the
   model's `cat /mnt/data/<deep>/<path>/file.txt` would 404 on the
   pathological-depth case — partially re-introducing the original
   flat-name bug for >512-char paths. This is intentional (DB write
   failure is strictly worse than losing structure), but the trade-off
   wasn't called out explicitly in the JSDoc. Added a paragraph
   acknowledging it and noting that the cap is monotonically better
   than the pre-PR behavior, where ALL artifacts were treated this way
   regardless of depth.

No code or test changes — pure JSDoc correction. Tests unchanged: 55 pass, 0 fail.

* 🛡️ fix: Disambiguate sanitized artifact names to keep claimCodeFile keys unique (codex P2)

`sanitizeArtifactPath` is not injective — multiple raw inputs can collapse
onto the same regex-and-normalize output. Codex's example:
`reports 2026/out.csv` and `reports_2026/out.csv` both sanitize to
`reports_2026/out.csv`. `claimCodeFile` is keyed on the schema's compound
unique `(filename, conversationId, context, tenantId)` index, so the
later upload silently matches the earlier record and overwrites the first
artifact's bytes via the reused `file_id` — a single conversation can
drop files when both names are valid in the sandbox.

This collision space isn't strictly new — pre-PR `sanitizeFilename`
(basename-only) had the same property — but the path-preserving form
gives us enough information to fix it for the first time.

**Fix.** When character-level sanitization changed something (regex
replacement, path normalization, dotfile prefix, empty-segment collapse),
embed a deterministic SHA-256 prefix of the **raw input** in the leaf
segment via the new `embedDisambiguatorInLeaf` helper. Same raw input →
same safe form (idempotent for re-uploads); different raw inputs that
would have collided → different safe forms.

**Why "character-level"** specifically:
- The disambiguator fires when `preCapJoined !== inputName` (post-regex
  + dotfile + empty-segment, BUT pre-truncation).
- Truncation alone is already disambiguated by `truncateLeafSegment`'s
  own seg-hash; firing the input-hash branch on truncation would just
  stack a second hash for no collision-avoidance benefit and clutter
  human-readable filenames.

**Three known collision shapes covered:**
1. `out 1.csv` vs `out_1.csv` (and `out@1.csv` vs `out#1.csv`, etc.)
2. `dir//file.txt` vs `dir/file.txt` (empty-segment collapse)
3. `.x` vs `_.x` (dotfile-prefix step)

**Disambiguator + truncation interaction:** for very long mutated leaves,
`truncateLeafSegment` caps at 255 first, then `embedDisambiguatorInLeaf`
re-trims to insert the input hash. The seg-hash from the first pass is
replaced by the input-hash from the second pass — that's intentional
(input-hash is the load-bearing collision-avoidance suffix; seg-hash was
only ever decorative once the input-hash exists). Final clamp ensures
the result never exceeds `ARTIFACT_PATH_SEGMENT_MAX` regardless of input.

**Disambiguator + total-cap fallback:** when joined > 512, we fall back
to the leaf-only form. The leaf has already had the disambiguator
embedded, so collision avoidance survives the pathological-depth case.

**`embedDisambiguatorInLeaf`** uses `dot <= 1` to detect "no real
extension" (covers extensionless names AND dotfile-prefixed leaves like
`_.hidden` — without this, `_.hidden` would split as stem `_` + ext
`.hidden` and produce the awkward `_-<hash>.hidden`).
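A minimal sketch of that `dot <= 1` rule (the helper name matches the commit; the body is an assumed shape):

```typescript
import { createHash } from 'node:crypto';

// dot === -1 (no dot), 0 ('.hidden'), or 1 ('_.x') all mean "no real
// extension": append the hash after the whole leaf instead of splitting.
function embedDisambiguatorInLeaf(leaf: string, rawInput: string): string {
  const hash = createHash('sha256').update(rawInput).digest('hex').slice(0, 6);
  const dot = leaf.lastIndexOf('.');
  if (dot <= 1) return `${leaf}-${hash}`;
  // Real extension: insert the hash between stem and extension.
  return `${leaf.slice(0, dot)}-${hash}${leaf.slice(dot)}`;
}
```

Because the hash is taken from the raw input, `out 1.csv` and `out_1.csv` (both sanitizing to `out_1.csv`) produce different safe forms, while re-uploads of the same raw name stay stable.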

**Updated 5 existing tests** that asserted the old collision-prone
outputs — they now verify the disambiguator-included form. The
character-level-only firing rule was load-bearing here: tests for
"clean inputs (no mutation)" and "long inputs (truncation only)" still
pass without any disambiguator clutter.

**+7 regression tests** in a new `collision avoidance (Codex review P2)`
describe block:
1. Different raw inputs sanitizing to the same form get distinct safe forms
2. Whitespace-vs-underscore in directory segment
3. Dotfile-prefix collision
4. Idempotency: same raw → same safe across calls
5. Clean inputs skip the disambiguator (cosmetic guarantee)
6. Disambiguator survives leaf truncation (long mutated leaf)
7. Disambiguator survives total-cap fallback (pathological depth)

62 unit + 66 integration = 128/128 pass.
2026-04-28 12:52:04 +09:00
Danny Avila
24e29aa8cb
🌱 fix: Inject Code-Tool Files Into Graph Sessions on First Call (+ read_file Sandbox Fallback) (#12831)
* 🌱 fix: Seed Code Tool Files Into Graph Sessions on First Call

Files attached to an agent's `tool_resources.execute_code` (user uploads
or generated artifacts from a prior turn) were silently dropped on the
first `execute_code` invocation of a turn. The agents-side `ToolNode`
populates `_injected_files` only when its `sessions` map already has an
`EXECUTE_CODE` entry — but that entry is only written by a previous
successful execution, so call #1 had nothing to inject. CodeExecutor
then fell back to a `/files/{session_id}` fetch, but `session_id` was
also empty on call #1, leaving the sandbox without the primed files.

Mirror the existing skill-priming pattern (`primeInvokedSkills` →
`initialSessions`) for code-resource files: eagerly call `primeFiles`
before `createRun` and merge the result into `initialSessions` via a
new `seedCodeFilesIntoSessions` helper. Skill files and code-resource
files now share the same `EXECUTE_CODE` entry; the prior representative
`session_id` is preserved on merge.
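The merge contract can be sketched as below (the `execute_code` key and field names are assumptions inferred from the description, not the exact source):

```typescript
type SeededFile = { session_id: string; id: string; name: string };
type ToolSession = { session_id?: string; _injected_files: SeededFile[] };

const EXECUTE_CODE = 'execute_code'; // assumed key name

function seedCodeFilesIntoSessionsSketch(
  initialSessions: Record<string, ToolSession>,
  primedFiles: SeededFile[],
): void {
  if (primedFiles.length === 0) return;
  const existing = initialSessions[EXECUTE_CODE];
  if (existing != null) {
    // Skill priming already seeded an entry: keep its representative
    // session_id and append the code-resource files alongside.
    existing._injected_files.push(...primedFiles);
  } else {
    initialSessions[EXECUTE_CODE] = {
      session_id: primedFiles[0].session_id,
      _injected_files: [...primedFiles],
    };
  }
}
```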

* 🔬 chore: Add Diagnostic Logging for Code-Files Seeding

Temporary debug logs to diagnose why first-call file injection is not
firing in real agent runs. Logs `wantsCodeExec`, available tool-resource
keys, primed file count, and the seeded EXECUTE_CODE entry. Will revert
once the failure mode is identified.

* 🪛 refactor: Capture primedCodeFiles per-agent at init, merge across run

Replace the client.js eager `primeFiles` call with a per-agent capture at
initialization time so every agent in a multi-agent run (primary +
handoff + addedConvo) contributes its `tool_resources.execute_code`
files to the shared `Graph.sessions` seed.

- handleTools.js (eager loadTools): the `execute_code` factory closes
  over a `primedCodeFiles` slot and surfaces it in the return.
- ToolService.js loadToolDefinitionsWrapper (event-driven): captures
  `files` from the existing `primeCodeFiles` call (was dropping them
  while only keeping `toolContext`) and surfaces them.
- packages/api initialize.ts: the loadTools callback contract now
  includes `primedCodeFiles`, threaded onto `InitializedAgent`.
- client.js: iterate `[primary, ...agentConfigs.values()]` and merge
  each agent's `primedCodeFiles` into `initialSessions`. Drop the
  primary-only `primeCodeFiles` call and diagnostic logs from the prior
  attempt — wrong layer (single-agent), wrong gate (`agent.tools`
  contained Tool instances after init, so the `.includes("execute_code")`
  string check always failed).

* 🔬 chore: Add per-agent diagnostic logs for code-files seeding

Logs `tool_resources` keys + file counts inside loadToolDefinitionsWrapper
and per-agent `primedCodeFiles` + final initialSessions inside
AgentClient. Will revert once the failure mode is confirmed.

* 🔬 chore: Add file-lookup diagnostics inside initializeAgent

Logs the inputs and intermediate counts of the conversation-file lookup
chain (convo file ids, thread message ids, code-generated and
user-code file counts) so we can pinpoint why `tool_resources.execute_code`
is arriving empty at `loadToolDefinitionsWrapper` despite the agent
having `execute_code` in its tools list.

* 🔬 chore: Probe execute_code files without messageId filter

Adds a relaxed `getFiles({conversationId, context: execute_code})` probe
that runs only when `getCodeGeneratedFiles` returns empty. Lists what's
actually in the DB for this conversation so we can confirm whether the
file is missing entirely or whether the messageId filter is rejecting it.

* 🔬 chore: Fix probe getFiles arg order (sort vs projection)

Probe was passing a projection object as the sort arg, which mongoose
rejected with `Invalid sort value`. Move it to the third arg
(selectFields) so the probe actually runs.

* 🪢 fix: Preserve Original messageId on Code-Output File Update

Each `processCodeOutput` call was overwriting the persisted file's
`messageId` with the *current* run's id. When a turn re-creates an
existing file (filename + conversationId match → `claimCodeFile`
returns the existing record, `isUpdate=true`), the file's link to
the assistant message that originally produced it gets clobbered.

`initializeAgent` later runs `getCodeGeneratedFiles({messageId: $in: <thread>})`
to seed `tool_resources.execute_code` from prior-turn artifacts. With a
stale `messageId` (e.g. from a failed read attempt that re-shelled the
same filename), the file no longer matches the parent-walk thread, so
`tool_resources` arrives empty at agent init, the new
`primedCodeFiles` channel has nothing to seed, and the LLM can't see
its own prior-turn artifacts on the next turn — defeating the
just-added Graph-sessions seeding fix.

Preserve the existing `claimed.messageId` on update; first-creation
behavior is unchanged. The runtime return value still includes the
current run's `messageId` (via `Object.assign(file, { messageId })`)
so the artifact is correctly attributed to the live tool_call.

* 🧹 chore: Remove diagnostic logs from code-files seeding path

Drops the temporary debug logs added to trace the empty-tool_resources
failure mode. Production code paths (loadToolDefinitionsWrapper,
client.js seed loop, initializeAgent file lookup) are left as the
permanent shape: capture primedCodeFiles, merge across agents, seed
initialSessions before run start.

* 🪛 feat: read_file Sandbox Fallback for /mnt/data + Non-Skill Paths

When the model called `read_file` with a code-execution path (e.g.
`/mnt/data/sentinel.txt`), the handler returned a misleading
`Use format: {skillName}/{path}` error. Adds a sandbox-aware fallback:

- Short-circuit `/mnt/data/...` (can never be a skill reference) →
  route to a sandbox `cat` via the new host-provided `readSandboxFile`
  callback, which POSTs to the codeapi `/exec` endpoint.
- Skip the skill resolver entirely when `accessibleSkillIds` is empty
  — the resolved-output of `resolveAgentScopedSkillIds` already
  collapses the admin capability + ephemeral badge + persisted
  `skills_enabled` chain, so an empty value is the authoritative
  "skills aren't in scope for this agent" signal.
- For `{firstSegment}/...` paths, consult the catalog-derived
  `activeSkillNames` Set (no DB read) to detect non-skill names and
  fall through to the sandbox before the model has to retry with
  `bash_tool`.

`activeSkillNames` is captured from `injectSkillCatalog`, threaded onto
`InitializedAgent`, into `agentToolContexts`, then through
`enrichWithSkillConfigurable` into `mergedConfigurable` for the handler.

The host implementation of `readSandboxFile` lives in
`api/server/services/Files/Code/process.js` and shells `cat <path>`
through the seeded sandbox session — `tc.codeSessionContext`
(emitted by ToolNode for `read_file` calls in `@librechat/agents`
v3.1.72+) provides the `session_id` + `_injected_files` so the read
lands in the same sandbox that holds prior-turn artifacts. When the
seeded context isn't available (older agents version, no codeapi
configured), the handler returns a model-visible error pointing at
`bash_tool` instead of silently failing.

Tests: 8 new `handleReadFileCall` cases cover the new short-circuits,
the skills-not-enabled gate, the activeSkillNames lookup, the
sandbox-fallback success path, and the bash_tool retry hint on
fallback failure. Existing `read_file` tests now opt into "skills are
in scope" via a `skillsInScope()` fixture (production wouldn't reach
the skill lookup with empty `accessibleSkillIds`).

* 🔧 chore: Update @librechat/agents dependency to version 3.1.72

Bumps the version of the @librechat/agents package across package-lock.json and relevant package.json files to ensure compatibility with the latest features and fixes.

* 🪛 refactor: Centralize Tool-Session Seed in buildInitialToolSessions Helper

Addresses review feedback on the per-agent merge in client.js:

- **Run-wide semantics, named explicitly.** The merge into a single
  `Graph.sessions[EXECUTE_CODE]` was a deliberate match to the
  agents-library design (`Graph.sessions` is shared across every
  `ToolNode` in the run), but the inline `for (const a of agents)`
  loop in `AgentClient.chatCompletion` made it look per-agent. Move
  the logic to a TS helper `buildInitialToolSessions` that documents
  the run-wide-by-design contract in one place. The CJS controller
  now contains a single call site, no business logic.

- **Subagent walk (P2).** The previous loop only iterated
  `[primary, ...agentConfigs.values()]`. Pure subagents are pruned
  out of `agentConfigs` after init and retained on each parent's
  `subagentAgentConfigs`, so their primed code files were silently
  dropped from the seed. The helper now walks recursively, with a
  visited-Set keyed on object identity that terminates safely on a
  malformed agent graph (cycle).

- **`jest.setup.cjs` polyfill for undici `File`.** Reviewer hit
  `ReferenceError: File is not defined` running the targeted spec on
  WSL — a known Node 18 issue where `globalThis.File` from
  `node:buffer` isn't auto-exposed. Polyfill it inside a Jest setup
  file so the suite boots regardless of Node patch version.

Helper test coverage (8 new): skill-only / agent-only / both,
recursive subagent walk, cycle-safe walk, primary+subagent
deduplication, undefined/null entries in the agents iterable, and
representative session_id preservation across the merge.

16 tests pass total in `codeFilesSession.spec.ts` (8 prior + 8 new).
No behavior change vs. the previous commit for the existing
primary+agentConfigs case — subagent inclusion is the only new
behavior, and it matches what the existing seeding logic would have
done if subagents had been in `agentConfigs`.

* 🪛 fix: FIFO Walk Order in buildInitialToolSessions (P3 review)

The traversal used `Array.pop()` (LIFO), which visited the LAST
top-level agent first. The docstring says "primary first"; the code
contradicted it. When no skill seed exists the first-visited agent's
first file supplies the representative `session_id` written to
`Graph.sessions[EXECUTE_CODE]` — so a LIFO walk silently flipped which
agent that came from. `ToolNode` ultimately uses per-file `session_id`s
for runtime injection (so behavior was indistinguishable for current
callers), but the discrepancy was a footgun for any future consumer
that read the representative.

Switch to FIFO via `Array.shift()` to match both the docstring and the
existing `loadSubagentsFor` walk pattern in
`Endpoints/agents/initialize.js`. Add a regression test that asserts
the primary's `session_id` is the representative (and that all three
agents' files still contribute, with per-file `session_id`s preserved).
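The FIFO walk with a cycle-safe visited set can be sketched as (illustrative types; field names taken from the commit messages):

```typescript
type AgentLike = {
  primedCodeFiles?: Array<{ session_id: string; id: string }>;
  subagentAgentConfigs?: AgentLike[];
};

// queue.shift() gives FIFO order, so the primary (first top-level agent) is
// visited first; the identity-keyed Set terminates on a malformed cycle.
function walkAgentsFifo(roots: AgentLike[]): AgentLike[] {
  const queue: Array<AgentLike | undefined> = [...roots];
  const visited = new Set<AgentLike>();
  const order: AgentLike[] = [];
  while (queue.length > 0) {
    const agent = queue.shift();
    if (agent == null || visited.has(agent)) continue;
    visited.add(agent);
    order.push(agent);
    for (const sub of agent.subagentAgentConfigs ?? []) queue.push(sub);
  }
  return order;
}
```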

* 🔬 test: Lock In Code-Files Bug Fixes Per Comprehensive Review

Addresses MAJOR + MINOR + NIT findings from the multi-pass review:

**Finding #4 (MINOR) — empty relativePath misses sandbox fallback.**
A model calling `read_file("output/")` where "output" isn't a skill
name dead-ended with `Missing file path after skill name` instead of
being routed to the sandbox like every other malformed-path branch.
Add the same `codeEnvAvailable → handleSandboxFileFallback` pattern,
plus two regression tests.

**Finding #7 (NIT) — duplicate `skillsInScope()` helper.**
Hoist the identical helper out of two nested describe blocks to
module scope. Single source of truth.

**Finding #1 (MAJOR) — `persistedMessageId` had zero test coverage.**
The fix preserves a file's original `messageId` on update so
`getCodeGeneratedFiles` can still match it on subsequent turns. A
regression in the `isUpdate ? (claimed.messageId ?? messageId) : messageId`
ternary would silently re-introduce the original cross-turn priming
bug. Five new tests cover:
- UPDATE preserves `claimed.messageId` in the persisted record
- UPDATE falls back to current run id when `claimed.messageId` is
  absent (legacy records predating the field)
- CREATE uses current run id (no claimed record exists)
- The runtime return value uses the LIVE id (artifact attribution)
  even when the persisted record kept the original
- The image branch follows the same contract (would silently regress
  if the ternary diverged across the two file-build branches)

The tests use a `snapshotCreateFileArgs()` helper because
`processCodeOutput` mutates the file object after `createFile`
returns (`Object.assign(file, { messageId, toolCallId })`) and a
naive `createFile.mock.calls[0][0]` would reflect the post-mutation
state instead of what was actually persisted.

**Finding #2 (MAJOR) — `readSandboxFile` had no direct tests.**
The model-controlled `file_path` flows through a POSIX single-quote
escape into a shell `cat` command, making this a security boundary.
A quoting regression would let a malicious filename break out of the
quoted argument and inject arbitrary shell. 20 new tests across:
- Shell quoting (7): plain filenames, embedded `'`, `$()`, backticks,
  newlines, shell metachars, multiple consecutive single-quotes
- Payload shape (6): /exec URL, bash language, conditional
  session_id / files inclusion, dedicated keepAlive:false agents
- Response handling (6): `{content}` on success, null on missing
  base URL or absent stdout, throws on stderr-only, partial-success
  returns stdout, transport errors are logged then rethrown
- Timeout (1): matches processCodeOutput's 15s SLA
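The quoting invariant those tests lock in is the standard POSIX single-quote escape; a sketch (helper name illustrative):

```typescript
// Wrap the whole argument in single quotes; rewrite each embedded quote as
// '\'' (close quote, backslash-escaped quote, reopen quote). Inside single
// quotes, $(), backticks, and metacharacters are inert to the shell.
function shellQuote(arg: string): string {
  return `'${arg.replace(/'/g, "'\\''")}'`;
}

// Assumed command shape for the sandbox read:
const catCommand = (filePath: string) => `cat ${shellQuote(filePath)}`;
```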

Audited findings #5 (acknowledged tech debt — readSandboxFile in JS
workspace), #6 (pre-existing positional-args debt on
enrichWithSkillConfigurable), and #8 (cosmetic JSDoc style) — no
action taken per the reviewer's own assessment.

Audited finding #3 (walk order vs docstring) — already addressed in
commit 007f32341 which converted to FIFO via `queue.shift()` plus a
regression test. The audit was performed against an earlier PR head.

Tests: 152 packages/api + 195 api JS = 347 pass. Typecheck clean.

* 🪛 fix: Pure-Subagent codeEnv + Primed-Skill Routing + ToolService Early Returns

Three findings from the second-pass review:

**P2 — Pure subagents missed `codeEnvAvailable`** (initialize.js).
The pure-subagent init path didn't forward the endpoint-level
`codeEnvAvailable` flag to `initializeAgent`, unlike the primary,
handoff, and addedConvo paths. A code-enabled subagent loaded only
through `subagentAgentConfigs` initialized with
`codeEnvAvailable: false`, so even though the recursive seed walk
found its primed code files, the subagent's own `bash_tool` /
`read_file` sandbox fallback were silently gated off. Forward the
flag and add `codeEnvAvailable: config.codeEnvAvailable` to the
`agentToolContexts.set` for symmetry with the other paths.

**P2 — Primed skills outside the catalog cap were misrouted to
sandbox** (handlers.ts). Manual ($-popover) and always-apply primes
are intentionally resolved off the wider `accessibleSkillIds` ACL
set BEFORE catalog injection — see `resolveManualSkills` for why a
skill outside the `SKILL_CATALOG_LIMIT` cap can still be authorized
for direct manual invocation. The `activeSkillNames` shortcut ran
before reading `skillPrimedIdsByName`, so a primed skill not in the
catalog would fall through to the sandbox instead of resolving via
the pinned `_id`. Read the primed map first and bypass the shortcut
for primed names. New regression test asserts a primed-but-not-
cataloged skill resolves through the existing skill path with
`getSkillByName` invoked and `readSandboxFile` NOT called.

**P3 — `loadAgentTools` early returns dropped `primedCodeFiles`**
(ToolService.js). The non-`definitionsOnly` path captures the field
correctly, but two early-return branches (no-action-tools fast path,
no-action-sets fast path) omitted it. Any traditional
`loadAgentTools(..., definitionsOnly: false)` caller using
execute_code without action tools would have its first-call session
seed silently empty. Add `primedCodeFiles` to both early returns
for consistency with the final return shape.

Tests: 153 packages/api + 195 api JS = 348 pass.

* 🧹 chore: Document jest.mock arrow-indirection pattern in process.spec.js

Per the second-pass review's Finding #2 (NIT, "would help future
readers"): the mock setup mixes direct `jest.fn()` references with
arrow-function indirection (`(...args) => mockX(...args)`). The
indirection isn't stylistic — it's required because `jest.mock(...)`
is hoisted above the outer `const` declarations at parse time, so a
direct reference would capture `undefined`. Inline comment explains
the pattern so the next reader doesn't have to reverse-engineer it
or accidentally "simplify" the mocks and break per-test
`mockReturnValueOnce` / `mockImplementationOnce` overrides.
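The hazard the comment documents can be reproduced without Jest — a factory that runs before the module-level binding is assigned (mirroring `jest.mock` hoisting; names here are illustrative):

```javascript
// makeModule stands in for the hoisted jest.mock factory: it executes
// immediately, before the mock binding below has been assigned.
function makeModule(factory) {
  return factory();
}

var mockImpl; // hoisted declaration; still undefined when the factory runs

// BAD: captures the current (undefined) value at factory time.
const badModule = makeModule(() => ({ getFiles: mockImpl }));

// GOOD: arrow indirection defers the lookup to call time.
const goodModule = makeModule(() => ({ getFiles: (...args) => mockImpl(...args) }));

mockImpl = (id) => `file:${id}`; // later, the real mock is installed
```

`badModule.getFiles` is permanently `undefined`, while `goodModule.getFiles` resolves the mock on every call — which is also what lets per-test `mockReturnValueOnce` overrides work.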

* 🪛 fix: Five Issues from Pass-N + Codex Review (incl. 404 root cause)

Five real bugs surfaced by another review pass + Codex PR comments
+ the codeapi-side logs we collected during manual testing:

**1) `processCodeOutput` 404 root cause (`callbacks.js`).**
The codeapi worker emits TWO distinct `session_id`s on a tool result:
  - `artifact.session_id` is the EXEC session — the sandbox VM that
    ran the bash command. Files don't live there; it's torn down
    post-execution.
  - `file.session_id` is the STORAGE session — the file-server
    bucket prefix where artifacts actually live.
`callbacks.js` was passing the EXEC id to `processCodeOutput`, which
builds `/download/{session_id}/{id}` and 404s because the file-server
doesn't know about that path. This explains every "Error
downloading/processing code environment file" we saw during testing.
Use `file.session_id ?? output.artifact.session_id` (per-file id with
artifact-level fallback for older worker payloads).

**2) `primeFiles` reupload pushed STALE sandbox ids (`process.js`).**
When `getSessionInfo` returns null (file expired/missing in sandbox),
`reuploadFile` re-uploads via `handleFileUpload`, gets a NEW
`fileIdentifier`, and persists it on the DB record. But `pushFile`
was a closure capturing the OLD `(session_id, id)` parsed at the top
of the loop, so the in-memory `files[]` array (now consumed by
`buildInitialToolSessions` to seed `Graph.sessions`) silently
referenced a sandbox object that no longer existed. The first tool
call would 404 trying to mount it; only the next turn's metadata
re-read would correct course. Parameterize `pushFile` with optional
`(session_id, id)` overrides; in `reuploadFile` parse the new
identifier and pass through. 2 regression tests.

**3) Codex P2 — Cap sandbox fallback output before line-numbering
(`handlers.ts`).** The new `handleSandboxFileFallback` returned
`addLineNumbers(result.content)` without a size guard, so reading a
multi-MB `/mnt/data/*` artifact materialized the file twice in
memory (raw + line-numbered) before downstream truncation. Match the
existing skill-file path's `MAX_READABLE_BYTES` (256KB): truncate
raw first, then number, surface the truncation to the model so it
can use `bash_tool` (`head` / `tail`) for the rest. 2 tests
(oversized truncates with hint, in-cap doesn't).

**4) Codex P2 — Dedupe seeded code files by `(session_id, id)`
(`codeFilesSession.ts`).** Multiple agents in a run commonly carry
the same primed execute-code resources (shared conversation files);
without dedupe, `_injected_files` grows proportionally to agent
count and bloats every `/exec` POST. Use a `(session_id, id)`
identity key so first-seen wins (preserves source ordering); name
alone isn't sufficient because two distinct primed uploads can
share a filename across different sessions. 4 tests covering dedup
across iterations, against pre-existing entries, name-collision
distinct-session preservation, and the multi-agent realistic case
in `buildInitialToolSessions`.

**5) Pass-N P2 — Polyfill `globalThis.File` in api Jest setup
(`api/test/jestSetup.js`).** `packages/api/jest.setup.cjs` had the
polyfill; the legacy api workspace's Jest config has its own
`setupFiles` that didn't, so on Node 18 / WSL the api focused tests
still failed at import time with `ReferenceError: File is not
defined` from undici. Mirror the polyfill.
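The `(session_id, id)` identity dedupe from item 4 can be sketched as (illustrative shape):

```typescript
type SeededFile = { session_id: string; id: string; name: string };

// First-seen wins, preserving source ordering. The key must include both
// session_id and id: two distinct primed uploads can share a filename
// across different sessions, so name alone would over-collapse.
function dedupeSeededFiles(files: SeededFile[]): SeededFile[] {
  const seen = new Set<string>();
  const out: SeededFile[] = [];
  for (const f of files) {
    const key = `${f.session_id}/${f.id}`;
    if (seen.has(key)) continue;
    seen.add(key);
    out.push(f);
  }
  return out;
}
```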

Tests: 159 packages/api + 206 api JS = 365 pass. Typecheck clean.

* 🔧 chore: Update @librechat/agents dependency to version 3.1.73

Bumps the version of the @librechat/agents package across package-lock.json and relevant package.json files to ensure compatibility with the latest features and fixes.
2026-04-27 08:56:39 +09:00
Danny Avila
8c073b4400
📄 feat: Auto-render Text-Based Code Execution Artifacts Inline (#12829)
* 📄 feat: Auto-render Text-Based Code Execution Artifacts Inline

Eagerly extract text content from non-image artifacts produced by code
execution tools and render it inline in the message instead of behind a
click-to-download file card. Reuses the SkillFiles binary-detection
helper and the existing parseDocument dispatcher so docx, xlsx, csv,
html, code, and other text-renderable formats land directly under the
tool call.

PPTX is intentionally classified but not yet extracted — follow-up.

* 🌐 chore: Remove unused com_download_expires locale key

Removed in en/translation.json so the detect-unused-i18n-keys CI check
passes. The only reference was a commented-out localize() call in
LogContent.tsx that was deleted in the previous commit.

* 🩹 fix: Address PR review on code artifact text extraction

- extract.ts: build the temp document path from a randomUUID and pass
  path.basename(name) as originalname so a malicious artifact name
  cannot escape os.tmpdir() (P1 traversal flagged by codex/Copilot).
- process.js: classify and extract using safeName, not the raw name —
  defense in depth alongside the temp-path fix.
- classify.ts: add a bare-name lookup so extensionless text artifacts
  (Makefile, Dockerfile, …) classify as utf8-text instead of falling
  through to other.
- Attachment.tsx: wire aria-expanded / aria-controls on the show-all
  toggle for screen reader support.
- LogContent.tsx: restore a download chip (LogLink) on inline-text
  attachments so users can still pull down the underlying file.
- Tests: cover extensionless filenames and the temp-path traversal
  invariant.

* 🩹 fix: Address comprehensive PR review on code artifact extraction

- extract.ts: walk back to a UTF-8 code-point boundary before truncating
  so cuts cannot land mid-multibyte and emit U+FFFD (CJK/emoji concern).
  truncate() now accepts the original buffer to skip a redundant encode.
- extract.ts: add an 8s timeout around parseDocument via Promise.race so
  a pathological docx/xlsx cannot stall the response path.
- process.js: always set `text` (string or null) on the file payload —
  createFile uses findOneAndUpdate with $set semantics, so omitting the
  field leaves a stale value behind when an artifact's content changes.
- Attachment.tsx: switch the show-all toggle from char-count threshold
  to a useLayoutEffect ref measurement on scrollHeight, and use
  overflow-hidden when collapsed (overflow-auto when expanded) so the
  collapsed box has a single clear interaction model.
- Attachment.tsx + LogContent.tsx: lift `isImageAttachment` /
  `isTextAttachment` into a shared attachmentTypes module. LogContent
  keeps its looser image check (no width/height required) because the
  legacy log surface receives attachments without dimensions.
- Tests: cover multi-byte boundary, the always-set-text contract on
  updates, and the new shared predicates.
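The boundary walk-back in the first bullet can be sketched as (assumed helper shape): UTF-8 continuation bytes all match `0b10xxxxxx`, so backing the cut point past them guarantees the slice ends on a code-point boundary and never decodes to U+FFFD.

```typescript
import { Buffer } from 'node:buffer';

function truncateUtf8(buf: Buffer, maxBytes: number): string {
  if (buf.length <= maxBytes) return buf.toString('utf8');
  let end = maxBytes;
  // Walk back while the byte at the cut is a continuation byte (10xxxxxx).
  while (end > 0 && (buf[end] & 0b11000000) === 0b10000000) {
    end--;
  }
  return buf.subarray(0, end).toString('utf8');
}
```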

* 🧪 test: Component test for TextAttachment + direct withTimeout coverage

- Attachment.tsx: re-order local imports longest-to-shortest per
  AGENTS.md (attachmentTypes ahead of FileContainer/Image).
- extract.ts: export withTimeout so it can be unit-tested directly
  (it's also used internally — exporting carries no runtime cost).
- extract.spec.ts: three small unit tests on withTimeout that cover
  resolve, propagated rejection, and timeout rejection paths with
  real timers.
- TextAttachment.test.tsx: ten cases for the new React component —
  text rendering in <pre>, download chip presence/absence, ref-based
  collapse measurement (with scrollHeight stubbed via prototype),
  aria-expanded toggle, fall-through to FileAttachment for missing
  and empty text, and AttachmentGroup routing.

* 🩹 fix: Canonicalize document MIME by extension before parseDocument

When the classifier puts a file on the document path via its extension
(.docx, .xlsx, …) but the buffer sniffer returned a generic value like
application/zip or application/octet-stream, we previously forwarded
that generic MIME to parseDocument, which dispatches strictly by MIME
and silently rejected it — exactly defeating the extension-first
classification this PR added.

extractDocument now remaps the MIME from the extension (falling back
to the original sniffed MIME if the extension is unrecognized, so files
that reached the document branch via MIME detection still work). Adds
a parameterized test across docx/xlsx/xls/ods/odt against zip/octet
sniffs to guard the regression.
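
The remap amounts to an extension-first lookup with a sniffed-MIME fallback. A minimal sketch, assuming a small illustrative table (`canonicalDocumentMime` is a hypothetical name; the real map in extract.ts may cover more formats):

```typescript
// Canonical MIME types keyed by filename extension. Illustrative subset.
const EXT_TO_MIME: Record<string, string> = {
  docx: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
  xlsx: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
  xls: 'application/vnd.ms-excel',
  odt: 'application/vnd.oasis.opendocument.text',
  ods: 'application/vnd.oasis.opendocument.spreadsheet',
};

// Prefer the MIME implied by the extension; fall back to the sniffed MIME
// so files that reached the document branch via MIME detection still work.
export function canonicalDocumentMime(filename: string, sniffedMime: string): string {
  const ext = filename.split('.').pop()?.toLowerCase() ?? '';
  return EXT_TO_MIME[ext] ?? sniffedMime;
}
```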

* 🩹 fix: Reuse existing withTimeout from utils/promise

The previous commit's local withTimeout export collided with the
already-exported `withTimeout` from `~/utils/promise`, breaking the
@librechat/api tsc job (TS2308 ambiguous re-export).

Drops the duplicate, imports from `~/utils/promise`, and removes the
now-redundant unit tests (the helper has its own coverage in
utils/promise.spec.ts). The third argument shifts from a label to the
fully-formed timeout error message that the existing helper expects.
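
For reference, a helper in the spirit of `withTimeout` from `~/utils/promise` can be sketched like this (the real implementation may differ; the third argument mirrors the fully-formed error-message contract described above):

```typescript
// Race the wrapped promise against a rejection timer; whichever settles
// first wins, and the timer is cleared either way so nothing keeps the
// event loop alive after the race settles.
export function withTimeout<T>(
  promise: Promise<T>,
  ms: number,
  message: string,
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(message)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```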

* 🧹 chore: TextAttachment test polish (NITs)

- Use the conventional `import Attachment, { AttachmentGroup }` form
  rather than `default as Attachment`.
- Save the original `scrollHeight` property descriptor and restore it
  in afterAll, so the prototype patch never leaks past this suite.
2026-04-26 02:04:00 -07:00
Danny Avila
35bf04b26c 🧰 refactor: Unify code-execution tools (#12767)
* 🛠️ feat: Add registerCodeExecutionTools helper

Idempotently registers `bash_tool` + `read_file` in the run's tool
registry and tool-definition list via a registry `.has()` dedupe. Sets
up the single code-execution tool path shared by:

- `initializeAgent` (when an agent has `execute_code` in its tools and
  the capability is enabled for the run)
- `injectSkillCatalog` (when skills are active; unconditional read_file,
  bash_tool follows `codeEnvAvailable`)

Both callers reach the helper in the same initialization sequence, so
the second call becomes a no-op and exactly one copy of each tool
reaches the LLM — no more double registration for agents that combine
`execute_code` capability with active skills.

Unit-tested on a fresh run, idempotence (second call, overlap with
prior tooldefs, partial overlap), and the no-registry variant.
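
The `.has()` dedupe can be sketched with simplified shapes (hypothetical types, not the actual @librechat/api signatures; the sketch also reflects the later no-op optimization of returning the input array by reference when nothing was added):

```typescript
type ToolDef = { name: string; description: string };

// Idempotently register tool definitions: a tool already present in the
// run's registry is skipped, so a second call in the same run is a no-op
// and exactly one copy of each tool reaches the LLM-visible list.
export function registerCodeExecutionTools(
  registry: Map<string, ToolDef>,
  toolDefinitions: ToolDef[],
  defs: ToolDef[],
): ToolDef[] {
  const added: ToolDef[] = [];
  for (const def of defs) {
    if (!registry.has(def.name)) {
      registry.set(def.name, def);
      added.push(def);
    }
  }
  // No-op path: return the input array by reference, no fresh allocation.
  return added.length === 0 ? toolDefinitions : [...toolDefinitions, ...added];
}
```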

* 🔀 refactor: Route injectSkillCatalog bash_tool + read_file through registerCodeExecutionTools

The `skill` tool is still registered inline (it's skill-path-specific),
but `bash_tool` + `read_file` now flow through the shared idempotent
helper so a prior registration from the execute_code path doesn't
produce a duplicate copy later in the same run. Behavior preserved:

- `read_file` always registers when any active skill is in scope —
  manually-primed `disable-model-invocation: true` skills still need it
  to load `references/*` from storage.
- `bash_tool` follows `codeEnvAvailable` exactly as before.

Adds a test pinning the cross-call dedupe: when `injectSkillCatalog`
runs AFTER `registerCodeExecutionTools` has already seeded the registry
+ tool definitions with bash_tool/read_file, the resulting
`toolDefinitions` still contains exactly one copy of each.

* 🪄 feat: Expand `execute_code` tool name into bash_tool + read_file at initialize-time

When an agent's `tools` include `execute_code` and the `execute_code`
capability is enabled for the run, `initializeAgent` now registers
`bash_tool` + `read_file` via `registerCodeExecutionTools` before
`injectSkillCatalog`. The legacy `execute_code` tool definition is no
longer handed to the LLM — `execute_code` remains on the agent
document as a capability-trigger marker, but the runtime expands it
into the skill-flavored tool pair.

Call ordering matters: the `execute_code` registration runs BEFORE
`injectSkillCatalog`, so the skill path's own `registerCodeExecutionTools`
call inside `injectSkillCatalog` becomes a no-op via the registry's
`.has()` check. Exactly one copy of each tool reaches the LLM whether
the agent has:

- only `execute_code` (legacy path)
- only skills
- both

No data migration needed — `agent.tools: ['execute_code']` stays in
the DB unchanged; the expansion is a runtime operation.

Three tests cover the matrix: execute_code + capability on →
bash_tool + read_file registered; execute_code + capability off →
neither registered; no execute_code + capability on → neither
registered.

* 🗑️ refactor: Drop CodeExecutionToolDefinition from the builtin registry

Removes the legacy `execute_code` entry from `agentToolDefinitions` and
the corresponding import. With the initialize-time expansion in place,
nothing consults `getToolDefinition('execute_code')` for a tool schema
any more — the capability gate still filters on the string
`execute_code`, but the actual tool definitions the LLM sees come from
`registerCodeExecutionTools` (i.e. `bash_tool` + `read_file`).

`loadToolDefinitions` in `packages/api/src/tools/definitions.ts`
silently drops `execute_code` when it no longer resolves in the
registry — that's the expected path and is now covered by an updated
test. No caller of `getToolDefinition('execute_code')` expects a
non-undefined result after this change.

* 🔌 refactor: Read CODE_API_KEY from env for primeCodeFiles + PTC

Finishes the Phase 4 server-env-keyed rollout on the two remaining
`loadAuthValues({ authFields: [EnvVar.CODE_API_KEY] })` sites in
`ToolService.js`:

- `primeCodeFiles` (user-attached file priming on execute_code agents)
- Programmatic Tool Calling (`createProgrammaticToolCallingTool`)

Both now read `process.env[EnvVar.CODE_API_KEY]` directly, matching
`bash_tool`'s pattern. The per-user plugin-auth path is no longer
consulted for code-env credentials anywhere in the hot path — the
agents library owns the actual tool-call execution and also reads the
env var internally.

Priming still fires for existing user-file workflows so the legacy
`toolContextMap[execute_code]` hint ("files available at /mnt/data/...")
stays in the prompt; only the key lookup changed.

* 🔧 fix: Type the pre-seeded dedupe-test tools as LCTool

CI TypeScript type checks caught `{ parameters: {} }` in the new
cross-call dedupe test: `LCTool.parameters` is a `JsonSchemaType`,
not `{}`. Use `{ type: 'object', properties: {} }` and type the
local registry Map through the parameter-derived shape so the
pre-seeded values match what `toolRegistry.set` expects.

* 🛡️ fix: Run execute_code expansion before GOOGLE_TOOL_CONFLICT gate

Codex review caught a latent regression: the original Phase 8 placement
ran `registerCodeExecutionTools` after `hasAgentTools` was computed,
so an execute-code-only agent on Google/Vertex with provider-specific
`options.tools` populated would no longer trip `GOOGLE_TOOL_CONFLICT`
— the legacy `CodeExecutionToolDefinition` used to populate
`toolDefinitions` before the guard, but after dropping it from the
registry, `toolDefinitions` stayed empty until my expansion ran
downstream of the guard. Mixed provider + agent tools would silently
flow through to the LLM.

Fix moves the `execute_code` expansion to BEFORE `hasAgentTools`
computation. `bash_tool` + `read_file` now contribute to the check
the same way the legacy `execute_code` def did. Covered by a new
test that pins the Google+execute_code+provider-tools scenario —
the `rejects.toThrow(/google_tool_conflict/)` path would have
silently passed on the prior placement.

* 🔗 fix: Thread codeEnvAvailable through handoff sub-agents

Round-2 codex review caught the other half of the execute_code
expansion gap: `discoverConnectedAgents` omitted `codeEnvAvailable`
from its forwarded `initializeAgent` params, so handoff sub-agents
with `agent.tools: ['execute_code']` lost the `bash_tool` + `read_file`
registration (pre-Phase 8 the legacy `CodeExecutionToolDefinition`
would have landed in their `toolDefinitions` via the registry).

- Add `codeEnvAvailable?` to `DiscoverConnectedAgentsParams` and
  forward it verbatim on every sub-agent `initializeAgent` call.
- Update the three JS call sites that construct the primary's
  `codeEnvAvailable` (`services/Endpoints/agents/initialize.js`,
  `controllers/agents/openai.js`, `controllers/agents/responses.js`)
  to pass the same flag into `discoverConnectedAgents` — one
  authoritative source per request.
- Two regression tests in `discovery.spec.ts` pin the true/false
  passthrough so a future refactor that drops the param-forwarding
  surfaces immediately.

Left intentionally unchanged: `packages/api/src/agents/openai/service.ts`
(public API helper with no in-repo caller). External consumers of
`createAgentChatCompletion` who want code execution should pass a
`codeEnvAvailable`-aware `initializeAgent` via `deps` — documenting
the full public-API surface is out of scope for this Phase 8 PR.

* 🔗 fix: Thread codeEnvAvailable through addedConvo + memory-agent paths

Round-3 codex review caught the last two production `initializeAgent`
callers missing the Phase-8 capability flag:

- `api/server/services/Endpoints/agents/addedConvo.js` (multi-convo
  parallel agent execution). Added `codeEnvAvailable` to
  `processAddedConvo`'s destructured params and forwarded it into
  the per-added-agent `initializeAgent` call. Caller in
  `api/server/services/Endpoints/agents/initialize.js` passes the
  same `codeEnvAvailable` it computed for the primary.
- `api/server/controllers/agents/client.js` (`useMemory` — memory
  extraction agent). Computes its own `codeEnvAvailable` from
  `appConfig?.endpoints?.[EModelEndpoint.agents]?.capabilities` and
  forwards into `initializeAgent`. Memory agents rarely list
  `execute_code`, but if one does, pre-Phase 8 they got the legacy
  `execute_code` tool registered unconditionally — the passthrough
  restores parity.

With this, every production caller of `initializeAgent` explicitly
resolves the capability: main chat flow (primary + handoff), OpenAI
chat completions (primary + handoff), Responses API (primary + handoff),
added convo parallel agents, and memory agents. The one remaining
caller, `packages/api/src/agents/openai/service.ts::createAgentChatCompletion`,
is a public API helper with no in-repo consumer (external callers
must pass a capability-aware `initializeAgent` via `deps`).

* 🪤 fix: Remove duplicate appConfig declaration causing TDZ ReferenceError

The Responses API controller had TWO `const appConfig = req.config;`
bindings inside `createResponse`: one at the top of the function
(added by the Phase 4 `bash_tool` decouple) and one inside the try
block (added by the polish PR #12760). Because `const` is block-scoped
with a temporal dead zone, the inner redeclaration put `appConfig` in
TDZ for the entire try block, so any earlier reference inside the
try — notably `appConfig?.endpoints?.[EModelEndpoint.agents]?.allowedProviders`
at line 348 — threw `ReferenceError: Cannot access 'appConfig'
before initialization`. The error was silently swallowed by the
outer try/catch, leaving `recordCollectedUsage` unreached and the
six `responses.unit.spec.js` token-usage tests failing.

Removing the inner redeclaration fixes the six failing tests
(verified: 11/11 pass locally post-fix, 0 regressions elsewhere).
The outer function-scoped binding already provides `appConfig` to
every downstream reference.

* 🔗 fix: Thread codeEnvAvailable through the OpenAI chat-completion public API

Round-4 codex review (legitimate on the type-safety angle, even though the
runtime concern was already covered): the `createAgentChatCompletion`
helper defines its own narrower `InitializeAgentParams` interface locally,
and the type was missing `codeEnvAvailable`. External consumers who
supply a capability-aware `deps.initializeAgent` couldn't route
`codeEnvAvailable` through without a type-cast workaround.

- Widen the local `InitializeAgentParams` interface to include
  `codeEnvAvailable?: boolean` (matches the real
  `packages/api/src/agents/initialize.ts` type).
- Derive `codeEnvAvailable` inside `createAgentChatCompletion` from
  `deps.appConfig?.endpoints?.agents?.capabilities` (the same source
  the in-repo controllers use) and forward to `deps.initializeAgent`.
  Uses a string literal `'execute_code'` lookup so this file stays free
  of a `librechat-data-provider` import — keeping the dependency surface
  of the public helper minimal.

With this, external consumers of `createAgentChatCompletion` who pass
`appConfig` with the agents capabilities get `bash_tool` + `read_file`
registration automatically; consumers who don't pass `appConfig` retain
the existing "explicit opt-in" semantics (the flag stays `undefined`,
expansion is skipped).

* 🧹 chore: Review-driven polish — observability log, JSDoc DRY, test gaps, no-op allocation

Addresses the comprehensive review of PR #12767:

- **Finding #1** (MINOR, observability): `initializeAgent` now emits a
  debug log when an agent lists `execute_code` in its tools but the
  runtime gate is off (`params.codeEnvAvailable` !== true). The
  event-driven `loadToolDefinitionsWrapper` path doesn't log
  capability-disabled warnings, so without this the tool silently
  vanishes from the LLM's definitions with zero trace. Operators
  debugging "why isn't code interpreter working?" now get a signal at
  the initialize layer.

- **Finding #5** (NIT, allocation): `registerCodeExecutionTools` now
  returns the input `toolDefinitions` array by reference on the no-op
  path (both tools already registered by a prior caller in the same
  run) instead of allocating a fresh spread array every time. The
  common dual-call scenario — `initializeAgent` then
  `injectSkillCatalog` — saves one O(n) copy per request.

- **Finding #4** (NIT, DRY): Collapsed the duplicated 6-line JSDoc
  comment in `openai.js`, `responses.js`, and `addedConvo.js` into
  either a one-line `@see DiscoverConnectedAgentsParams.codeEnvAvailable`
  pointer (the two JS call sites) or a compact 3-line block referring
  back to the canonical source (addedConvo's @param).

- **Finding #2** (MINOR, test gap): Added
  `api/server/services/Endpoints/agents/addedConvo.spec.js` with three
  cases covering `codeEnvAvailable=true`, `codeEnvAvailable=false`,
  and omitted (undefined) passthrough. A future refactor that drops
  the param from destructuring now surfaces here instead of silently
  regressing multi-convo parallel agents with `execute_code`.

- **Finding #3** (MINOR, test gap): Added
  `api/server/controllers/agents/__tests__/client.memory.spec.js`
  pinning the capability-flag derivation that `AgentClient::useMemory`
  uses — six cases covering present/absent/null/undefined config shapes
  plus an enum-literal pin (`'execute_code'` / `'agents'`). Catches
  enum renames or config-path shifts that would otherwise silently
  strip `bash_tool` + `read_file` from memory agents.

Finding #7 (jest.mock scoping, confidence 40) left as-is: the
reviewer's own risk assessment noted `buildToolSet` doesn't touch
the mocked exports, and restructuring a file-level `jest.mock` to
`jest.doMock` + dynamic `import()` introduces more complexity than
the speculative risk justifies. The existing mock is scoped to the
test file and contains the same stubs the adjacent
`skills.test.ts` already uses.

Finding #6 (PR description commit count) addressed out-of-band via
PR description update.

All existing tests pass, typecheck clean, lint clean across touched
files. New tests: 9 cases across 2 new spec files.

* 🧽 refactor: Replace hardcoded 'execute_code' string with AgentCapabilities enum in service.ts

Follow-up review (conf 55) caught that `openai/service.ts`'s Phase 8
`codeEnvAvailable` derivation used the literal `'execute_code'` while
every in-repo controller uses `AgentCapabilities.execute_code` from
`librechat-data-provider`. The file deliberately uses local type
interfaces to keep the public API helper's type surface small, but
that pattern was never a ban on single-value imports from the data
provider — `packages/api` already depends on it. Importing the enum
value means a future rename of `AgentCapabilities.execute_code`
propagates to this file automatically, matching the in-repo
controllers' behavior.

Other follow-up findings left as-is per the reviewer's own verdict:

- #2 (memory spec mirrors the production expression rather than
  calling `AgentClient::useMemory` directly): reviewer flagged as
  "not blocking" / "design-philosophy observation." The test file's
  JSDoc already explicitly documents the tradeoff and pins the enum
  literals to catch the most likely drift vector. Standing up
  `AgentClient` + all its mocks for a one-line regression guard is
  disproportionate.
- #3 (`addedConvo.spec.js` mock signature vs. underlying
  `loadAddedAgent` arity): reviewer's own confidence 25 noted the
  mock matches the wrapper's actual call pattern in the production
  file. Not a real gap.
- #4 was self-retracted as a false alarm.

* 🗑️ refactor: Fully deprecate CODE_API_KEY — remove all LibreChat-side references

The code-execution sandbox no longer authenticates via a per-run
`CODE_API_KEY` (frontend or backend). Auth moved server-side into the
agents library / sandbox service, so LibreChat drops every reference:

**Backend plumbing:**
- `api/server/services/Files/Code/crud.js`: `getCodeOutputDownloadStream`,
  `uploadCodeEnvFile`, `batchUploadCodeEnvFiles` no longer accept
  `apiKey` or send the `X-API-Key` header.
- `api/server/services/Files/Code/process.js`: `processCodeOutput`,
  `getSessionInfo`, `primeFiles` drop the `apiKey` param throughout.
- `api/server/services/ToolService.js`: stop reading
  `process.env[EnvVar.CODE_API_KEY]` for `primeCodeFiles` and PTC; the
  agents library handles auth internally. Remove the now-dead
  `loadAuthValues` + `EnvVar` imports. Drop the misleading
  "LIBRECHAT_CODE_API_KEY" hint from the bash_tool error log.
- `api/server/services/Files/process.js`: remove the `loadAuthValues`
  call around `uploadCodeEnvFile`.
- `api/server/routes/files/files.js`: code-env file download no longer
  fetches a per-user key.
- `api/server/controllers/tools.js`: `execute_code` is no longer a
  tool that needs verifyToolAuth with `[EnvVar.CODE_API_KEY]` — the
  endpoint always reports system-authenticated so the client skips
  the key-entry dialog. `processCodeOutput` called without `apiKey`.
- `api/server/controllers/agents/callbacks.js`: `processCodeOutput`
  invoked without the loadAuthValues round trip, for both LegacyHandler
  and Responses-API handlers.
- `api/app/clients/tools/util/handleTools.js`: `createCodeExecutionTool`
  called with just `user_id` + files.

**packages/api:**
- `packages/api/src/agents/skillFiles.ts`: `PrimeSkillFilesParams`,
  `PrimeInvokedSkillsDeps`, `primeSkillFiles`, `primeInvokedSkills` all
  drop the `apiKey` param; the gate is purely `codeEnvAvailable`.
- `packages/api/src/agents/handlers.ts`: `handleSkillToolCall` drops
  the `process.env[EnvVar.CODE_API_KEY]` read; skill-file priming is
  now gated solely on `codeEnvAvailable`. `ToolExecuteOptions`
  signatures drop apiKey from `batchUploadCodeEnvFiles` and
  `getSessionInfo`.
- `packages/api/src/agents/skillConfigurable.ts`: JSDoc no longer
  references the env var.
- `packages/api/src/tools/classification.ts`: PTC creation no longer
  gated on `loadAuthValues`; `buildToolClassification` drops the
  `loadAuthValues` dep entirely (no LibreChat-side callers need it for
  this path anymore).
- `packages/api/src/tools/definitions.ts`: `LoadToolDefinitionsDeps`
  drops the `loadAuthValues` field.

**Frontend:**
- Delete `client/src/hooks/Plugins/useAuthCodeTool.ts`,
  `useCodeApiKeyForm.ts`, and
  `client/src/components/SidePanel/Agents/Code/ApiKeyDialog.tsx` —
  the install/revoke dialogs for CODE_API_KEY are fully dead.
- `BadgeRowContext.tsx`: drop `codeApiKeyForm` from the context type and
  provider. `codeInterpreter` toggle treated as always authenticated
  (sandbox auth is server-side).
- `ToolsDropdown.tsx`, `ToolDialogs.tsx`, `CodeInterpreter.tsx`,
  `RunCode.tsx`, `SidePanel/Agents/Code/Action.tsx` + `Form.tsx`: all

  API-key dialog trigger refs, "Configure code interpreter" gear
  buttons, and auth-verification plumbing removed. The
  "Code Interpreter" toggle is now a plain `AgentCapabilities.execute_code`
  checkbox — no key-entry gate.
- `client/src/locales/en/translation.json`: drop the three
  `com_ui_librechat_code_api*` keys and `com_ui_add_code_interpreter_api_key`.
  Other locales are externally automated per CLAUDE.md.

**Config:**
- `.env.example`: remove the `# LIBRECHAT_CODE_API_KEY=your-key` section
  and its header.

**Tests:**
- `crud.spec.js`: assertions flipped to pin "no X-API-Key header" and
  "no apiKey param".
- `skillFiles.spec.ts`: removed env-var save/restore; tests now pin
  that the batch-upload path is gated solely on `codeEnvAvailable` and
  that no apiKey is threaded through.
- `handlers.spec.ts`: same — just the `codeEnvAvailable` gate pins
  remain.
- `classification.spec.ts`: remove the two tests that asserted
  `loadAuthValues` was (not) called for PTC.
- `definitions.spec.ts`: drop every `loadAuthValues: mockLoadAuthValues`
  entry from the deps shape.
- `process.spec.js`: strip the mock of `EnvVar.CODE_API_KEY`.

**Comment hygiene:**
- `tools.ts`, `initialize.ts`, `registry/definitions.ts`: shortened
  stale comment references to "legacy `execute_code` tool" without
  naming the retired env var.

Tests verified: 678 packages/api tests pass, 836 backend api tests
pass. Typecheck clean, lint clean. Only remaining CODE_API_KEY
mentions in the code are two regression-guard assertions:
- `crud.spec.js`: pins "no X-API-Key header" stays absent.
- `skillConfigurable.spec.ts`: pins `configurable` never grows a
  `codeApiKey` field.

* 🧹 chore: Remove the last two CODE_API_KEY name mentions in LibreChat

Follow-up to the prior full deprecation commit: two tests still named
the retired identifier in their regression-guard assertions.

- `packages/api/src/agents/skillConfigurable.spec.ts`: drop the
  "does not inject a codeApiKey key" test. The `codeApiKey` field is
  gone from the production configurable shape, so an absence-assertion
  naming it re-introduces the retired identifier in code.
- `api/server/services/Files/Code/crud.spec.js`: rename the
  "without an X-API-Key header" case back to "should request stream
  response from the correct URL" and drop the
  `expect(headers).not.toHaveProperty('X-API-Key')` assertion. The
  surrounding request-shape checks (URL, timeout, responseType) still
  pin the behavior; the explicit header-absence line was named after
  the deprecated contract.

Result: `grep -rn "CODE_API_KEY\|codeApiKey\|LIBRECHAT_CODE_API_KEY"`
against the LibreChat source tree returns zero hits. The only
remaining `X-API-Key` strings in this repo are on unrelated OpenAPI
Action + MCP server auth configurations, where the string is
user-facing config, not a LibreChat-owned identifier.

Tests: 677 packages/api pass (2 pre-existing summarization e2e failures
unrelated); 126 api-workspace controller/service tests pass.
Typecheck and lint clean.

* 🎯 fix: Narrow codeEnvAvailable to per-agent (admin cap AND agent.tools)

Before this commit, `codeEnvAvailable` was computed in the three JS
controllers as the admin-level capability flag only
(`enabledCapabilities.has(AgentCapabilities.execute_code)`) and passed
through `initializeAgent` → `injectSkillCatalog` / `primeInvokedSkills` /
`enrichWithSkillConfigurable` unchanged. A skills-only agent whose
`tools` array didn't include `execute_code` still got `bash_tool`
registered (via `injectSkillCatalog`) and skill files re-primed to the
sandbox on every turn — wrong, because the agent never opted in to
code execution.

**Fix:** `initializeAgent` now computes the per-agent effective value
once as `params.codeEnvAvailable === true && agent.tools.includes(Tools.execute_code)`,
reuses the same boolean for:

1. The `execute_code` → `bash_tool + read_file` expansion gate
   (previously already consulted `agent.tools`; now shares the single
   `effectiveCodeEnvAvailable` binding).
2. The `injectSkillCatalog` call (previously got the raw admin flag).
3. The returned `InitializedAgent.codeEnvAvailable` field (new, typed as
   required boolean).
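
The narrowing itself is a single AND, sketched here with assumed parameter names rather than `initializeAgent`'s actual signature:

```typescript
type AgentLike = { tools?: string[] };

// Per-agent effective value: the admin-level capability must be enabled
// AND the agent itself must list `execute_code` in its tools. A
// skills-only agent therefore never gets bash_tool or sandbox re-priming.
export function narrowCodeEnvAvailable(
  adminCodeEnvAvailable: boolean | undefined,
  agent: AgentLike,
): boolean {
  return adminCodeEnvAvailable === true && (agent.tools ?? []).includes('execute_code');
}
```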

**Controllers (initialize.js, openai.js, responses.js):** store
`primaryConfig.codeEnvAvailable` in `agentToolContexts.set(primaryId, ...)`,
capture `config.codeEnvAvailable` in every handoff `onAgentInitialized`
callback, and read it from the per-agent ctx inside the
`toolExecuteOptions.loadTools` runtime closure. The hoisted
`const codeEnvAvailable = enabledCapabilities.has(...)` locals in the
two OpenAI-compat controllers are gone — they were shadowing the
narrowed per-agent value.

**primeInvokedSkills:** `handlePrimeInvokedSkills` in
`services/Endpoints/agents/initialize.js` now uses
`primaryConfig.codeEnvAvailable` (per-agent, narrowed) instead of the
raw admin flag. A skills-only primary agent won't re-prime historical
skill files to the sandbox even when the admin enabled the capability
globally.

**Efficiency:** one extra `&&` in `initializeAgent`. No runtime hot-path
cost — the `includes()` scan on `agent.tools` was already happening for
the `execute_code` expansion gate; it's now just bound to a local. Tool
execution closures read `ctx.codeEnvAvailable === true` (property
access + strict equality, O(1)).

**Ephemeral-agent note:** per-agent narrowing is authoritative for both
persisted and ephemeral flows. The ephemeral toggle
(`ephemeralAgent.execute_code`) is reconciled into `agent.tools`
upstream in `packages/api/src/agents/added.ts`, so
`agent.tools.includes('execute_code')` is the single source of truth
by the time `initializeAgent` runs.

**Tests:** two new regression tests pin the narrowing contract:

- `initialize.test.ts` — four-quadrant matrix on
  `InitializedAgent.codeEnvAvailable` (cap on × agent asks, cap on ×
  doesn't ask, cap off × asks, neither). Catches future refactors that
  drop either half of the AND.
- `skills.test.ts` — `injectSkillCatalog` with `codeEnvAvailable: false`
  against an active skill catalog must NOT register `bash_tool` even
  though it still registers `read_file` + `skill`. This is the state
  a skills-only agent gets post-narrowing.

All 191 affected packages/api tests pass + 836 backend api tests pass.
Typecheck clean, lint clean.

* 🧽 refactor: Comprehensive-review polish — hoist tool defs, pin verifyToolAuth contract, doc appConfig

Addresses the comprehensive review of Phase 8. Findings mapped:

**#1 (MINOR): `verifyToolAuth` unconditional auth for execute_code**
- Added doc comment explicitly stating the deployment contract
  (admin capability → reachable sandbox; no per-check health probe
  to keep UI-gate queries O(1)).
- New `api/server/controllers/__tests__/tools.verifyToolAuth.spec.js`
  with 4 regression tests pinning the contract:
  1. `authenticated: true` + `SYSTEM_DEFINED` for execute_code.
  2. 404 for unknown tool IDs.
  3. `loadAuthValues` is never consulted (catches a future revert
     that would resurface the per-user key-entry dialog).
  4. Response `message` is never `USER_PROVIDED`.

**#2 (MINOR): `openai/service.ts` undocumented `appConfig` dependency**
- Expanded the `ChatCompletionDependencies.appConfig` JSDoc to spell
  out that omitting it silently disables code execution for agents
  with `execute_code` in their tools. External consumers of
  `createAgentChatCompletion` now have the contract documented at
  the type boundary.

**#5 (NIT): `registerCodeExecutionTools` re-allocates tool defs**
- Hoisted `READ_FILE_DEF` and `BASH_TOOL_DEF` to module-level
  `Object.freeze`d constants. The shapes derive entirely from
  static `@librechat/agents` exports, so a single frozen object per
  tool is safe to share across every agent init. Eliminates the
  ~4-property allocations on every call (including the common
  second-call no-op path).
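
A sketch of the hoisted constants with placeholder shapes (the real definitions derive from static `@librechat/agents` exports and carry full schemas):

```typescript
// Module-level frozen constants: one shared object per tool across every
// agent init, instead of a fresh allocation on each call.
export const READ_FILE_DEF = Object.freeze({
  name: 'read_file',
  description: 'Read a file from the code environment',
});

export const BASH_TOOL_DEF = Object.freeze({
  name: 'bash_tool',
  description: 'Execute a command in the sandbox',
});
```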

**#6 (NIT): Verbose history-priming comment in initialize.js**
- Trimmed the 16-line `handlePrimeInvokedSkills` block to a 5-line
  summary with `@see InitializedAgent.codeEnvAvailable` pointer.
  The canonical narrowing explanation lives on the type; the
  controller comment is just the ACL-vs-capability rationale.

**Skipped:**

- #3 (memory spec tests a mirror function): reviewer self-dismissed
  as a design tradeoff; the enum-literal pin already catches the
  highest-risk drift vector.
- #4 (cross-repo contract for `createCodeExecutionTool`): user will
  explicitly install the latest `@librechat/agents` dev version
  once the companion PR publishes, so the version pin will be
  authoritative.
- #7 (migration/deprecation note for self-hosters): out of scope
  per user direction — release notes handle this.

Tests verified: 679 packages/api + 840 backend api tests pass.
Typecheck + lint clean.

* 🔧 chore: Update @librechat/agents version to 3.1.68-dev.1 across package-lock and package.json files

Updates `@librechat/agents` from `3.1.68-dev.0` to `3.1.68-dev.1` in `package-lock.json` and the relevant `package.json` files, keeping versions consistent across the project and pulling in the fixes from the new dev release.
2026-04-25 04:02:01 -04:00
Danny Avila
64ec5f18b8 ⚙️ feat: Skill runtime integration: catalog, tools, execution, file priming (#12649)
* feat: Skill runtime integration — catalog injection, tool registration, execute handler

Wires the @librechat/agents SkillTool primitive into LibreChat's agent runtime:

**Enums:**
- Add `skills` to AgentCapabilities + defaultAgentCapabilities

**Data layer:**
- Add `getSkillByName(name, accessibleIds)` — compound query that
  combines name lookup + ACL check in one findOne
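
The compound query can be sketched with an injected `findOne` standing in for the data layer (document shape and helper names are assumptions, not the real types):

```typescript
type SkillDoc = { _id: string; name: string; body: string };
type FindOne = (query: {
  name: string;
  _id: { $in: string[] };
}) => Promise<SkillDoc | null>;

// Name lookup + ACL check in one findOne: a skill outside the caller's
// accessible set never matches, so it is indistinguishable from a
// missing skill and the ACL is enforced in the same round trip.
export const makeGetSkillByName =
  (findOne: FindOne) =>
  (name: string, accessibleIds: string[]): Promise<SkillDoc | null> =>
    findOne({ name, _id: { $in: accessibleIds } });
```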

**Agent initialization (packages/api/src/agents/initialize.ts):**
- Accept `accessibleSkillIds` param and `listSkillsByAccess` db method
- Query accessible skills, format catalog via `formatSkillCatalog()`,
  append to `additional_instructions` (appears in agent system prompt)
- Register `SkillToolDefinition` + `createSkillTool()` when catalog
  is non-empty (tool appears in model's tool list)
- Store `accessibleSkillIds` and `skillCount` on InitializedAgent

**Execute handler (packages/api/src/agents/handlers.ts):**
- Add `getSkillByName` to `ToolExecuteOptions`
- `handleSkillToolCall()` intercepts `Constants.SKILL_TOOL`:
  extracts skillName, loads body from DB with ACL check,
  substitutes $ARGUMENTS, returns ToolExecuteResult with
  injectedMessages (skill body as isMeta user message)

**Caller wiring:**
- initialize.js: query skill IDs via findAccessibleResources,
  pass to initializeAgent + store on agentToolContexts,
  add getSkillByName to toolExecuteOptions,
  pass accessibleSkillIds through loadTools configurable
- openai.js + responses.js: same pattern for their flows

Requires @librechat/agents >= 3.1.65 (PR #91 exports).

* feat: Skills toggle in tools menu + backend capability gating

Frontend:
- Add skills?: boolean to TEphemeralAgent type
- Add LAST_SKILLS_TOGGLE_ to LocalStorageKeys for persistence
- Add skillsEnabled to useAgentCapabilities hook
- Add skills useToolToggle to BadgeRowContext with localStorage init
- New Skills.tsx badge component (Scroll icon, cyan theme,
  permission-gated via PermissionTypes.SKILLS)
- Add skills entry to ToolsDropdown with toggle + pin
- Render Skills badge in BadgeRow ephemeral section

Backend:
- Extract injectSkillCatalog() into packages/api/src/agents/skills.ts
  (reduces initializeAgent module size, reusable helper)
- initializeAgent delegates to helper instead of inline block
- Capability-gate the findAccessibleResources query:
  - Agents endpoint: checks AgentCapabilities.skills in admin config
  - OpenAI/Responses controllers: checks ephemeralAgent.skills toggle
- ACL query runs once per run, result shared across all agents

* refactor: remove createSkillTool() instance from injectSkillCatalog

SkillTool is event-driven only. The tool definition in toolDefinitions
is sufficient for the LLM to see the tool schema. No tool instance is
needed since the host handler intercepts via ON_TOOL_EXECUTE before
tool.invoke() is ever called.

Removes tools from InjectSkillCatalogParams/Result, drops the
createSkillTool import.

* feat: skill file priming, bash tool, and invoked skills state

Multi-file skill support:
- New primeSkillFiles() helper (packages/api/src/agents/skillFiles.ts)
  uploads skill files + SKILL.md body to code execution environment
- handleSkillToolCall primes files on invocation when skill.fileCount > 0,
  returns session info as artifact so ToolNode stores the session
- Skill-primed files available to subsequent bash/code tool calls

Bash tool auto-registration:
- BashExecutionToolDefinition added alongside SkillToolDefinition when
  skills are enabled, giving the model a bash tool for running scripts

Conversation state:
- Add invokedSkillIds field to conversation schema (Mongoose + Zod)
- handleSkillToolCall updates conversation with $addToSet on success
- Enables re-priming skill files on subsequent runs (future)

Dependency wiring:
- Pass listSkillFiles, getStrategyFunctions, uploadCodeEnvFile,
  updateConversation through ToolExecuteOptions
- Pass req and codeApiKey through mergedConfigurable
- All three controller entry points wired (initialize.js, openai.js,
  responses.js)

* fix: load bash_tool instance in loadToolsForExecution, remove file listing

- Add createBashExecutionTool to loadToolsForExecution, following the
  PTC/ToolSearch pattern: load CODE_API_KEY, create bash tool instance on demand
- Add BASH_TOOL and SKILL_TOOL to specialToolNames set so they don't go
  through the generic loadTools path (bash is created here, skill is
  intercepted in handler before tool.invoke)
- Remove file name listing from skill content text; disclosing bundled
  files in SKILL.md is the skill author's responsibility, not the framework's

* feat: batch upload for skill files, replace sequential uploads

- Add batchUploadCodeEnvFiles() to crud.js: single POST to /upload/batch
  with all files in one multipart request, returns shared session_id
- Rewrite primeSkillFiles to collect all streams (SKILL.md + bundled files)
  then do one batch upload instead of N sequential uploads
- Replace uploadCodeEnvFile with batchUploadCodeEnvFiles across all callers
  (handlers.ts, initialize.js, openai.js, responses.js)

* refactor: remove invokedSkillIds from conversation schema

Skills aren't re-loaded between runs, so conversation-level state for
invoked skills doesn't help. Skill state will live on messages instead
(like tool_search discoveredTools and summaries), enabling in-place
re-injection on follow-up runs.

Removes invokedSkillIds from: convo Mongoose schema, IConversation
interface, Zod schema, ToolExecuteOptions.updateConversation, and
all three caller wiring points.

* feat: smart skill file re-priming with session freshness checking

Schema:
- Add codeEnvIdentifier field to ISkillFile (type + Mongoose schema)
- Add updateSkillFileCodeEnvIds batch method (uses tenantSafeBulkWrite)
- Export checkIfActive from Code/process.js

Extraction:
- Add extractInvokedSkillsFromHistory() to run.ts — scans message
  history for AIMessage tool_calls where name === 'skill', extracts
  skillName args. Follows same pattern as extractDiscoveredToolsFromHistory.
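
A simplified sketch of the scan over a reduced message shape; the real
implementation walks `@librechat/agents` AIMessage objects:

```ts
// Reduced shapes for illustration only.
interface ToolCall {
  name: string;
  args: { skillName?: string };
}
interface HistoryMessage {
  role: 'assistant' | 'user' | 'tool';
  tool_calls?: ToolCall[];
}

function extractInvokedSkills(history: HistoryMessage[]): string[] {
  const names = new Set<string>();
  for (const msg of history) {
    if (msg.role !== 'assistant' || !msg.tool_calls) continue;
    for (const call of msg.tool_calls) {
      if (call.name === 'skill' && call.args.skillName) {
        names.add(call.args.skillName); // dedup repeat invocations
      }
    }
  }
  return [...names];
}
```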

Smart re-priming in primeSkillFiles:
- Before batch uploading, checks if existing codeEnvIdentifiers are
  still active via getSessionInfo + checkIfActive (23h threshold)
- If session is still active, returns cached references (zero uploads)
- If stale or missing, batch-uploads everything and persists new
  identifiers on SkillFile documents (fire-and-forget)
- Single session check covers all files (batch shares one session_id)
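
The 23h window can be sketched as a pure timestamp check; the real
`checkIfActive` lives in Code/process.js and may differ in signature:

```ts
// Sketch of the freshness window, assuming a millisecond timestamp.
const SESSION_TTL_MS = 23 * 60 * 60 * 1000;

function isSessionActive(lastModifiedMs: number, nowMs: number = Date.now()): boolean {
  return Number.isFinite(lastModifiedMs) && nowMs - lastModifiedMs < SESSION_TTL_MS;
}
```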

Wiring:
- Pass getSessionInfo, checkIfActive, updateSkillFileCodeEnvIds
  through ToolExecuteOptions and all three controller entry points

* feat: wire skill file re-priming at run start via initialSessions

Flow:
1. initialize.js creates primeInvokedSkills callback with all deps
2. client.js calls it with message history before createRun
3. extractInvokedSkillsFromHistory scans for skill tool calls
4. For each invoked skill with files, primeSkillFiles uploads/checks
5. Returns initialSessions map passed to createRun
6. createRun passes initialSessions to Run.create (via RunConfig)
7. Run constructor seeds Graph.sessions, making skill files available
   to subsequent bash/code tool calls via ToolNode session injection

Requires @librechat/agents with initialSessions on RunConfig (PR #94).

* refactor: use CODE_EXECUTION_TOOLS set for code tool checks

Import CODE_EXECUTION_TOOLS from @librechat/agents and replace inline
constant checks in handlers.ts and callbacks.js. Fixes missing bash
tool coverage in the session context injection (handlers.ts) and code
output processing (callbacks.js).

* refactor: move primeInvokedSkills to packages/api, add skill body re-injection

Moves primeInvokedSkills from an inline closure in initialize.js (with
dynamic requires) to a proper exported function in packages/api
skillFiles.ts with explicit typed dependencies.

Key changes:
- primeInvokedSkills now returns both initialSessions (for file priming)
  AND injectedMessages (skill bodies for context continuity)
- createRun accepts invokedSkillMessages and appends skill bodies to
  systemContent so the model retains skill instructions across runs
- initialize.js calls the packaged function with all deps passed explicitly
- client.js passes both initialSessions and injectedMessages to createRun

* fix: move dynamic requires to top-level module imports

Move primeInvokedSkills, getStrategyFunctions, batchUploadCodeEnvFiles,
getSessionInfo, and checkIfActive from inline requires to top-level
module requires where they belong.

* refactor: skill body reconstruction via formatAgentMessages, not systemContent

Replaces the lazy systemContent approach with proper message-level
reconstruction:

SDK (formatAgentMessages):
- New invokedSkillBodies param (Map<string, string>)
- Reconstructs HumanMessages after skill ToolMessages at the correct
  position in the message sequence, matching where ToolNode originally
  injected them

LibreChat:
- extractInvokedSkillsFromPayload replaces extractInvokedSkillsFromHistory
  (works with raw TPayload before formatAgentMessages, not BaseMessage[])
- primeInvokedSkills now takes payload instead of messages, returns
  skillBodies Map instead of injectedMessages
- client.js calls primeInvokedSkills BEFORE formatAgentMessages, passes
  skillBodies through as the 4th param
- Removed invokedSkillMessages from createRun (no more systemContent hack)
- Single-pass: skill detection happens inside formatAgentMessages' existing
  tool_call processing loop, zero extra message iterations
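
The reconstruction pass can be sketched over a reduced message shape; the
real code runs inside formatAgentMessages on BaseMessage[], and the
`skillName` linkage on tool messages shown here is an assumption for
illustration:

```ts
interface Msg {
  type: 'ai' | 'tool' | 'human';
  skillName?: string; // assumed marker on ToolMessages from the skill tool
  content: string;
}

function reinjectSkillBodies(messages: Msg[], skills: Map<string, string>): Msg[] {
  const out: Msg[] = [];
  for (const msg of messages) {
    out.push(msg);
    // Restore the skill body right after its ToolMessage, matching the
    // position where ToolNode originally injected it.
    if (msg.type === 'tool' && msg.skillName) {
      const body = skills.get(msg.skillName);
      if (body) {
        out.push({ type: 'human', content: body });
      }
    }
  }
  return out;
}
```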

* refactor: rename skillBodies to skills for consistency with SDK param

* refactor: move auth loading into primeInvokedSkills, pass loadAuthValues as dep

The payload/accessibleSkillIds guard and CODE_API_KEY loading now live
inside primeInvokedSkills (packages/api) rather than in the CJS caller.
initialize.js passes loadAuthValues as a dependency and the callback
is only created when skillsCapabilityEnabled.

* feat: ReadFile tool + conditional bash registration + skill path namespacing

ReadFile tool (read_file):
- General-purpose file reader, event-driven (ON_TOOL_EXECUTE)
- Schema: { file_path: string } — "{skillName}/{path}" convention
- handleReadFileCall: resolves skill name from path, ACL check, reads
  from DB cache or storage, binary detection, size limits (256KB),
  lazy caching (512KB), line numbers in output
- SKILL.md special case: reads skill.body directly
- Dispatched alongside SKILL_TOOL in createToolExecuteHandler
- Added to specialToolNames in ToolService

Conditional tool registration:
- ReadFile + SkillTool: always registered when skills enabled
- BashTool: only registered when codeEnvAvailable === true
- codeEnvAvailable passed through InitializeAgentParams from caller

Skill file path namespacing:
- primeSkillFiles now uploads as "{skillName}/SKILL.md" and
  "{skillName}/{relativePath}" instead of flat names
- Prevents file collisions when multiple skills are invoked

Wiring:
- getSkillFileByPath + updateSkillFileContent passed through
  ToolExecuteOptions in all three callers

* feat: return images/PDFs as artifacts from read_file, tighten caching

Binary artifact support:
- Images (png, jpeg, gif, webp) returned as base64 in artifact.content
  with type: 'image_url', processed by existing callback attachment flow
- PDFs returned as base64 artifact similarly
- Binary size limit: 10MB (MAX_BINARY_BYTES)
- Other binary files still return metadata + bash fallback

Caching:
- Text cached only on first read (file.content == null check)
- Binary flag cached only on first detection (file.isBinary == null)
- Skill files are immutable; no redundant cache writes

Registration:
- ReadFileToolDefinition now includes responseFormat: 'content_and_artifact'

* chore: update @librechat/agents to version 3.1.66-dev.0 and add peer dependencies in package-lock.json and package.json files

* fix: resolve review findings #1,#2,#4,#5,#6,#10,#13

Critical:
- #1: primeInvokedSkills now accumulates files across all skills into
  one session entry instead of overwriting. Parallel processing via
  Promise.allSettled.
- #2: codeEnvAvailable now computed and passed in openai.js and
  responses.js (was missing, bash tool never registered in those flows)

Major:
- #4: relativePath in updateSkillFileCodeEnvIds now strips the
  {skillName}/ prefix to match SkillFile documents. SKILL.md filter
  uses endsWith instead of exact match.
- #5: File priming guarded on apiKey being non-empty (skip when not
  configured instead of failing with auth error)
- #6: Skills processed in parallel via Promise.allSettled instead of
  sequential for-of loop

Minor:
- #10: Use top-level imports in initialize.js instead of inline requires
- #13: Log warning when skill catalog reaches the 100-skill limit

* fix: resolve followup review findings N1,N2,N4

N1 (CRITICAL): Wire skill deps into responses.js non-streaming path.
Was completely missing getSkillByName, file strategy functions, etc.

N2 (MAJOR): Single batch upload for ALL skills' files. Resolves skills
in parallel (Phase 1), then collects all file streams across skills
and does ONE batchUploadCodeEnvFiles call (Phase 2). All files share
one session_id, eliminating cross-session isolation issues.

N4 (MINOR): Move inline require() to top-level in openai.js and
responses.js, consistent with initialize.js.

* fix: add mocks for new file strategy imports in controller tests

* fix: restore session freshness check, parallelize file lookups, add warnings

R1: Re-add session freshness check before batch upload. Checks any
existing codeEnvIdentifier via getSessionInfo + checkIfActive. If the
session is still active (23h window), returns cached file references
with zero re-uploads.

R2: listSkillFiles calls parallelized via Promise.all (were sequential
in the for-of loop).

R3: Log warning when skill record lookup fails during identifier
persistence (was a silent empty-string fallback).

* fix: guard freshness cache on single-session consistency

* fix: multi-session freshness check (code env handles mixed sessions natively)

The code execution environment fetches each file by its own
{session_id, fileId} pair independently — no single-session
requirement. Removed the sessionIds.size === 1 guard.

Now checks ALL distinct sessions for freshness. If every session
is still active (23h window), returns cached references with per-file
session_ids preserved. If any session expired, falls through to
re-upload everything in a single batch.

* perf: parallelize session freshness checks via Promise.all

* fix: add optional chaining for session info retrieval in primeInvokedSkills

primeInvokedSkills now uses optional chaining when calling getSessionInfo and checkIfActive, so file priming degrades safely instead of throwing when either method is undefined.

* fix: address review findings #1-#9 + Codex P1/P2 + session probe

Critical:
- #1/Codex P1: Add codeApiKey loading to openai.js and responses.js
  loadTools configurable (was missing, file priming broken in 2/3 paths)
- Codex P1: Fix cached file name prefix in primeSkillFiles cache path
  (was sf.relativePath, now ${skill.name}/${sf.relativePath})

Major:
- Codex P2: Honor ephemeral skills toggle in agents endpoint
  (check ephemeralAgent?.skills !== false alongside admin capability)
- #4: Early size check using file.bytes from DB before streaming
  (prevents full-file buffer for oversized files)

Minor:
- #5: Replace Record<string, any> with Record<string, boolean | string>
- #6: Localize Pin/Unpin aria-labels with com_ui_pin/com_ui_unpin
- #8: Parallelize stream acquisition in primeSkillFiles via
  Promise.allSettled
- #9: Log warning for partial batch upload failures with filenames

Performance:
- Session probe optimization: getSessionInfo now hits per-object
  endpoint (GET /sessions/{sid}/objects/{fid}) instead of listing
  entire session (GET /files/{sid}?detail=summary). O(1) stat vs
  O(N) list + linear scan.

* refactor: extract shared skill wiring helper + add unit tests

DRY (#3):
- New skillDeps.js exports getSkillToolDeps() with all 9 skill-related
  deps (getSkillByName, listSkillFiles, getStrategyFunctions, etc.)
- Replaces 5 identical copy-paste blocks across initialize.js, openai.js,
  responses.js (streaming + non-streaming paths)
- One place to maintain when skill deps change

Tests (#2):
- 8 unit tests for extractInvokedSkillsFromPayload covering:
  string args, object args, missing skill tool_calls, non-assistant
  messages, malformed JSON, empty skillName, empty payload, dedup

* fix: remove @jest/globals import, use global jest env

* fix: resolve round 2 review findings R2-1 through R2-7

R2-1 (toggle semantics): openai.js + responses.js now check admin
  capability (AgentCapabilities.skills) alongside ephemeral toggle.
  Aligns with initialize.js.

R2-2 (swallowed error): primeInvokedSkills now logs
  updateSkillFileCodeEnvIds failures (was .catch(() => {}))

R2-4 (test cast): Record<string, string> → Record<string, unknown>

R2-5 (DRY regression): Extract enrichWithSkillConfigurable() into
  skillDeps.js. Replaces 4 identical loadAuthValues blocks.
  Each loadTools callback is now a one-liner. JSDoc added (R2-6).

R2-7 (sequential streams): primeInvokedSkills now uses
  Promise.allSettled for parallel stream acquisition.

* fix: require explicit skills toggle + treat partial cache as miss

- initialize.js: change ephemeralSkillsToggle !== false to === true
  (unset toggle no longer enables skills)
- primeSkillFiles cache: require ALL files to have codeEnvIdentifier
  before using cache (partial persistence = cache miss = re-upload)
- primeInvokedSkills cache: same check (allFilesWithIds.length must
  equal total file count)
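
The all-or-nothing cache rule reduces to one predicate; a file missing
its codeEnvIdentifier invalidates the whole cached session:

```ts
function canUseCachedSession(files: { codeEnvIdentifier?: string }[]): boolean {
  return files.length > 0 && files.every((f) => Boolean(f.codeEnvIdentifier));
}
```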

* fix: pass entity_id=skillId on batch upload, eliminates per-user cache thrashing

primeSkillFiles now passes entity_id: skill._id.toString() to
batchUploadCodeEnvFiles. This scopes the code env session to the
skill, not the user. All users sharing a skill share the same
uploaded files — no more cache thrashing from overwriting each
other's codeEnvIdentifier.

The stored codeEnvIdentifier now includes ?entity_id= suffix so
freshness checks pass the entity_id through to the per-object
stat endpoint. Both primeSkillFiles and primeInvokedSkills
store consistent identifier formats.

* fix: pass entity_id on multi-skill batch upload, consistent identifier format

* Revert "fix: pass entity_id on multi-skill batch upload, consistent identifier format"

This reverts commit c85ce2161e.

* refactor: per-skill upload in primeInvokedSkills, eliminate multi-skill batch

Replace the monolithic multi-skill batch upload with per-skill
primeSkillFiles calls. Each skill gets its own session with
entity_id=skillId, ensuring:

- Correct session auth (entity_id matches on freshness checks)
- Per-skill freshness caching (only expired skills re-upload)
- Shared skill sessions work across users (same entity_id=skillId)
- Code env handles mixed session_ids natively

The big batch block (stream collection, single upload, identifier
mapping) is replaced by a simple loop over primeSkillFiles, which
already handles freshness caching, batch upload, and identifier
persistence per-skill.

* fix: resolve review findings #1,#3-5,#7,#9-11

Critical:
- #1: Strip ?entity_id= query string before splitting codeEnvIdentifier
  into session_id/fileId (was corrupting cached file IDs in 4 locations)
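
A sketch of the corrected parse: drop the `?entity_id=` query before
splitting, but keep the entity_id for the freshness probe. The
identifier format follows the commit text; a well-formed
`{session_id}/{file_id}` is assumed:

```ts
function parseCodeEnvIdentifier(identifier: string): {
  sessionId: string;
  fileId: string;
  entityId?: string;
} {
  const [path, query = ''] = identifier.split('?');
  const slash = path.indexOf('/');
  const sessionId = path.slice(0, slash);
  const fileId = path.slice(slash + 1);
  const entityId = new URLSearchParams(query).get('entity_id') ?? undefined;
  return { sessionId, fileId, entityId };
}
```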

Major:
- #4: Parallelize per-skill primeSkillFiles via Promise.allSettled
- #5: Add logger.warn to all empty .catch(() => {}) on cache writes

Minor:
- #7: Add logger.debug to enrichWithSkillConfigurable catch block
- #9: Use error instanceof Error guard in batchUploadCodeEnvFiles
- #10: Move enrichWithSkillConfigurable to TypeScript in packages/api
  (skillConfigurable.ts), skillDeps.js wraps with loadAuthValues dep
- #11: Reduce MAX_BINARY_BYTES from 10MB to 5MB (~11.5MB peak with b64)

* fix: forward entity_id in session probe + always register bash tool

Codex P2 (entity_id in probe): getSessionInfo now preserves and
forwards query params (including entity_id) to the per-object stat
endpoint. Without this, identifiers stored as ...?entity_id=... would
fail auth checks because the entity_id scope was dropped.

Codex P2 (bash tool availability): Remove codeEnvAvailable gate from
injectSkillCatalog. Bash tool definition is now always registered when
skills are enabled. Actual tool instance creation still happens at
execution time in loadToolsForExecution (which loads per-user
credentials). This ensures users with per-user CODE_API_KEY get
bash without requiring a global env var at init time.

Removes codeEnvAvailable from InjectSkillCatalogParams,
InitializeAgentParams, and all three controller entry points.

* fix: add debug logging to primeInvokedSkills catch, rename export alias

* fix: stub bash tool when no key + remove PDF artifact path

Codex P1 (bash tool): When CODE_API_KEY is unavailable, create a stub
tool that returns "Code execution is not available. Use read_file
instead." This prevents "tool not found" errors from the model
repeatedly calling bash_tool in no-code-env deployments while still
registering the definition for per-user credential users.

Codex P2 (PDF artifacts): Remove PDF image_url artifact path. The
host artifact pipeline processes image_url via saveBase64Image which
fails for PDFs. PDFs now fall through to the generic binary handler
("Use bash to process"). TODO comment for future document artifact
support.

Also: isImageOrPdf → isImage in early size checks (PDFs are no
longer treated as artifact candidates).

* fix: remove dead PDF_MIME constant, hoist skillToolDeps, document session_id

- #7: Remove unused PDF_MIME constant (dead code after PDF artifact removal)
- #11: Hoist skillToolDeps to module-level constant (avoid per-call allocation)
- #6: Document that CodeSessionContext.session_id is a representative value;
  ToolNode uses per-file session_id from the files array

* fix: call toolEndCallback for skill/read_file artifacts + clear codeEnvIdentifier on re-upload

Codex P1 (toolEndCallback bypass): skill and read_file handler branches
returned early, bypassing the toolEndCallback that processes artifacts
(image attachments). Now calls toolEndCallback when the result has an
artifact, using the same metadata pattern as the normal tool.invoke path.

Codex P1 (stale identifiers): upsertSkillFile now $unset's
codeEnvIdentifier alongside content and isBinary when a file is
re-uploaded. Prevents the freshness cache from returning references
to old file content after a skill file is replaced.

* fix: add session_id comment at cached path, rename skillResult to handlerResult

* fix: return content_and_artifact from bash stub so result.content is populated

* fix: deterministic skill lookup, dedup warning, and multi-session freshness check

- getSkillByName: add sort({updatedAt:-1}) so name collisions resolve
  deterministically to the most recently updated skill
- injectSkillCatalog: warn when multiple accessible skills share a name
- primeSkillFiles: check ALL distinct sessions for freshness, not just
  the first file's session, preventing stale refs after partial bulkWrite

* refactor: update icon import in Skills component

- Replaced the Scroll icon with ScrollText in the Skills component for improved clarity and consistency in the UI.

* fix: SKILL.md cache parity, gate bash_tool on code env, fix read_file too-large message

- primeSkillFiles: filter SKILL.md from returned files array on fresh
  upload so cached and non-cached paths return identical file sets
  (SKILL.md is still on disk in the session for bash access)
- injectSkillCatalog: only register bash_tool when codeEnvAvailable is
  true; thread the flag from all three CJS callers via execute_code
  capability check
- handleReadFileCall: tell the model to invoke the skill first before
  suggesting /mnt/data paths for oversized files

* fix: use EnvVar constant, deduplicate auth lookup, validate batch upload, stream byte limit

- Replace hardcoded 'LIBRECHAT_CODE_API_KEY' with EnvVar.CODE_API_KEY
  in skillConfigurable.ts and skillFiles.ts
- Resolve code API key once at run start in initialize.js and pass to
  both primeInvokedSkills and enrichWithSkillConfigurable via optional
  preResolvedCodeApiKey param, eliminating redundant loadAuthValues calls
- Add response structure validation in batchUploadCodeEnvFiles before
  accessing session_id/files to surface unexpected responses early
- Add streaming byte counter in handleReadFileCall that aborts and
  destroys the stream when accumulated bytes exceed MAX_BINARY_BYTES,
  preventing full file buffering when DB metadata is inaccurate
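
The byte counter can be sketched by modeling the stream as an iterable
of chunks for brevity; the real code destroys a Node readable mid-flight
rather than returning null:

```ts
function readWithCap(chunks: Iterable<Uint8Array>, maxBytes: number): Uint8Array | null {
  const parts: Uint8Array[] = [];
  let total = 0;
  for (const chunk of chunks) {
    total += chunk.byteLength;
    if (total > maxBytes) {
      return null; // abort: actual size exceeds what DB metadata promised
    }
    parts.push(chunk);
  }
  const out = new Uint8Array(total);
  let offset = 0;
  for (const part of parts) {
    out.set(part, offset);
    offset += part.byteLength;
  }
  return out;
}
```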

* refactor: update icon import in ToolsDropdown component

- Replaced the Scroll icon with ScrollText in the ToolsDropdown component for improved clarity and consistency in the UI.

* fix: partial upload failure detection, EnvVar in initialize.js, declaration ordering

- primeSkillFiles: return null (failure) when batch upload partially
  succeeds — missing bundled files would cause runtime bash/read
  failures with missing paths in code env
- initialize.js: replace hardcoded 'LIBRECHAT_CODE_API_KEY' with
  EnvVar.CODE_API_KEY imported from @librechat/agents
- initialize.js: move enabledCapabilities, accessibleSkillIds, and
  codeApiKey declarations before the toolExecuteOptions closure that
  references them (eliminates reliance on temporal dead zone hoisting)
2026-04-25 04:02:00 -04:00
Danny Avila
181d705579
🧹 fix: Clean Up Orphaned Agent File Stubs After Deletion (#12781)
* 🧹 fix: Prune Orphaned File References on File Deletion

Deleting a file via the Manage Files tab left its file_id in every agent's
tool_resources.*.file_ids. Stubs accumulate until the frontend dedupe keys
them as duplicates and blocks all new uploads (issue #12776).

- Add removeAgentResourceFilesFromAllAgents in packages/data-schemas: a
  single updateMany/$pullAll across every EToolResources category.
- Invoke it from processDeleteRequest after db.deleteFiles so every
  referencing agent is cleaned up, not just the one passed in req.body.
- Wrap the cleanup in try/catch so a stale agent update cannot mask a
  successful file deletion.
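
The update document behind the updateMany can be sketched as below; the
category list here is illustrative, not the exhaustive EToolResources
set:

```ts
const TOOL_RESOURCE_KEYS = ['execute_code', 'file_search', 'image_edit', 'ocr'] as const;

function buildPullAllUpdate(fileIds: string[]): { $pullAll: Record<string, string[]> } {
  const pullAll: Record<string, string[]> = {};
  for (const key of TOOL_RESOURCE_KEYS) {
    // One $pullAll clause per category removes every orphaned id at once.
    pullAll[`tool_resources.${key}.file_ids`] = fileIds;
  }
  return { $pullAll: pullAll };
}
```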

* 🧼 fix: Prune Orphaned File References on Agent Update

Already-affected agents would stay broken even after the delete-time fix:
the stubs sit on the agent document until something strips them. Heal them
on the next save (issue #12776).

- Add collectToolResourceFileIds + stripFileIdsFromToolResources helpers
  in @librechat/api — centralizing the tool_resources traversal used by
  the controller and the follow-up migration script.
- In updateAgentHandler, check the effective tool_resources against the
  files collection. When orphans are found, either strip them from the
  incoming tool_resources (if the update sets them) or run the bulk
  cleanup (if the update leaves tool_resources untouched).

* 🧰 chore: Add Migration to Clean Up Orphaned Agent File References

Complements the delete-time and save-time fixes by healing agents that
already accumulated orphan stubs before the upgrade (issue #12776). The
script is idempotent — re-running it on a clean database is a no-op.

- Add config/migrate-orphaned-agent-files.js following the existing
  migrate-*.js convention: writes by default unless --dry-run is passed,
  with a --batch-size= tuning knob. Streams agents via cursor.
- Register migrate:orphaned-agent-files and :dry-run npm scripts.
- Reuse collectToolResourceFileIds from @librechat/api so migration and
  runtime share the same traversal logic.

* 🩹 fix: Address Codex/Copilot Review on Orphaned Agent File Cleanup

Refines the #12776 fix series based on automated review feedback.

- Scope save-time pruning to the current agent only. When a PATCH
  carries tool_resources, strip orphans from the incoming payload and
  pay the DB round-trip only then. Removes the collection-wide
  updateMany previously triggered when tool_resources was absent
  (Codex P2 / Copilot).
- Wrap the orphan check in try/catch so a transient db.getFiles
  failure can't turn a good save into a 500 (comprehensive review #1).
- Replace Object.values(EToolResources) casts with an explicit list of
  agent-side categories in both orphans.ts and agent.ts. code_interpreter
  belongs to the Assistants API and isn't a key of AgentToolResources —
  including it was a type lie and generated dead MongoDB clauses
  (comprehensive review #3, #8).
- Export TOOL_RESOURCE_KEYS from @librechat/api and consume it in the
  migration script, dropping one duplicated definition (#4).
- Cap migration results.details at 50 sample entries so the memory
  footprint stays bounded on deployments with thousands of corrupted
  agents (Codex P3).
- Add migrate:orphaned-agent-files:batch npm script to match the
  convention set by migrate-agent-permissions / migrate-prompt-permissions
  (#7).
- Add controller-level tests covering the three orphan-pruning paths:
  strip from incoming tool_resources, leave alone when tool_resources
  is absent, swallow db.getFiles errors and still save (#6).
- Back pre-existing "should validate tool_resources in updates" test's
  file_ids with real File docs — the new pruning would otherwise strip
  them, and that test is about OCR conversion / schema filtering, not
  file existence. Register the File model in beforeAll so the fixture
  works.

* 🩹 fix: Tighten TOOL_RESOURCE_KEYS Type and Align Migration Sample Output

Two follow-ups from the second review pass.

- Type data-schemas' TOOL_RESOURCE_KEYS as ReadonlyArray<keyof
  AgentToolResources> instead of readonly string[]. Data-schemas depends
  on data-provider, so the import is clean. Catches typos and aligns
  with the matching export in @librechat/api — doesn't guarantee
  exhaustiveness, but that's a TypeScript limitation, not a workspace
  one.
- Align the migration's console output with DETAIL_SAMPLE_LIMIT: print
  every collected detail (up to 50) and, when more agents were affected
  than the sample size allowed, show a truncation notice. The old hard
  cap of 25 meant affected agents in the 26-50 range were collected
  but never shown.

* test: Add Integration Coverage for Orphan Cleanup Paths (#12776)

Exercise the delete-time and migration paths end-to-end against a real
in-memory Mongo. Catches integration bugs the isolated unit tests on
each layer couldn't.

- api/server/services/Files/process.integration.spec.js — the primary
  repro: seed an Agent + File, call processDeleteRequest, assert the
  file_id disappears from every referencing agent's tool_resources
  while unrelated agents stay untouched. Also covers the no-op case
  and confirms a failure in the new cleanup step cannot roll back the
  file deletion itself.
- api/test/migrate-orphaned-agent-files.spec.js — drives the migration
  module: --dry-run reports without writing, apply mode prunes across
  every tool_resource category, re-running is idempotent, and
  DETAIL_SAMPLE_LIMIT caps the in-memory sample on wide corruption.
  Mocks only the connect helper (the spec owns the mongoose instance)
  so the real migration code path — cursor, $pullAll, reduce — runs.

* 🔒 fix: Run Orphan Cleanup Migration in System Tenant Context

Codex P2 catch: under TENANT_ISOLATION_STRICT=true, the migration
throws on the very first Agent.countDocuments() because the tenant
isolation plugin fail-closes on queries without tenant context — which
makes migrate:orphaned-agent-files unusable on the exact deployments
most likely to have accumulated corruption.

- Wrap the scan/prune body in runAsSystem so queries bypass the tenant
  filter (SYSTEM_TENANT_ID sentinel). The migration legitimately needs
  cross-tenant visibility — this is the same pattern seedDatabase and
  the S3 refresh job already use.
- Add a regression test that spies on Agent.countDocuments() and
  asserts the active tenantStorage context is SYSTEM_TENANT_ID during
  the call. Pins the wrap against future regressions without the
  brittleness of toggling the strict-mode env var (which caches on
  first read).

Note: the delete-time and save-time paths already run inside an
authenticated HTTP request where tenantStorage.run is set by auth
middleware, so the cleanup naturally scopes to the current tenant —
which is the correct behavior there since file ownership is
tenant-scoped.

* 🧹 chore: Drop Unused path Import From Process Integration Spec

Leftover from an earlier iteration that resolved the migration path
via path.resolve before I switched to a relative require. The import
does nothing now — removing it.
2026-04-22 11:35:48 -07:00
Danny Avila
6183303653
🔉 fix: Normalize audio MIME types in STT format validation (#12674)
* fix: normalize audio MIME types in STT format validation

Use getFileExtensionFromMime() to normalize non-standard MIME types
(e.g. audio/x-m4a, audio/x-wav, audio/x-flac) before checking against
the accepted formats list in azureOpenAIProvider. This is the same class
of bug as #12608 (text/x-markdown), but for STT audio validation.

Only audio/ and video/ MIME prefixes are normalized to prevent
non-audio types from matching via the webm default fallback.

Export getFileExtensionFromMime for testability.

Fixes #12632

* fix: reject unknown audio subtypes in STT format validation

Use MIME_TO_EXTENSION_MAP for normalization instead of
getFileExtensionFromMime() which falls back to 'webm' for unrecognized
types. Gate raw subtype matching on audio/video prefix to prevent
non-audio types (e.g. text/webm) from passing validation.

Resolves Codex review comment about unknown subtypes silently passing.

---------

Co-authored-by: Tobias Jonas <t.jonas@innfactory.de>
2026-04-15 09:58:07 -04:00
Danny Avila
877c2efc85
🏗️ feat: bulkWrite isolation, pre-auth context, strict-mode fixes (#12445)
* fix: wrap seedDatabase() in runAsSystem() for strict tenant mode

seedDatabase() was called without tenant context at startup, causing
every Mongoose operation inside it to throw when
TENANT_ISOLATION_STRICT=true. Wrapping in runAsSystem() gives it the
SYSTEM_TENANT_ID sentinel so the isolation plugin skips filtering,
matching the pattern already used for performStartupChecks and
updateInterfacePermissions.

* fix: chain tenantContextMiddleware in optionalJwtAuth

optionalJwtAuth populated req.user but never established ALS tenant
context, unlike requireJwtAuth which chains tenantContextMiddleware
after successful auth. Authenticated users hitting routes with
optionalJwtAuth (e.g. /api/banner) had no tenant isolation.

* feat: tenant-safe bulkWrite wrapper and call-site migration

Mongoose's bulkWrite() does not trigger schema-level middleware hooks,
so the applyTenantIsolation plugin cannot intercept it. This adds a
tenantSafeBulkWrite() utility that injects the current ALS tenant
context into every operation's filter/document before delegating to
native bulkWrite.

Migrates all 8 runtime bulkWrite call sites:
- agentCategory (seedCategories, ensureDefaultCategories)
- conversation (bulkSaveConvos)
- message (bulkSaveMessages)
- file (batchUpdateFiles)
- conversationTag (updateTagsForConversation, bulkIncrementTagCounts)
- aclEntry (bulkWriteAclEntries)

systemGrant.seedSystemGrants is intentionally not migrated — it uses
explicit tenantId: { $exists: false } filters and is exempt from the
isolation plugin.
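
The wrapper's core injection step might look like this (operation shapes simplified to three op types; the real utility handles every Mongo bulk op type and reads the tenant from ALS):

```ts
// Simplified bulk op shapes; illustrative only.
type BulkOp =
  | { insertOne: { document: Record<string, unknown> } }
  | { updateOne: { filter: Record<string, unknown>; update: Record<string, unknown> } }
  | { deleteMany: { filter: Record<string, unknown> } };

/** Inject the current tenant into every operation's filter/document. */
function injectTenant(ops: BulkOp[], tenantId: string): BulkOp[] {
  return ops.map((op) => {
    if ('insertOne' in op) {
      // Inserts get tenantId stamped onto the document itself.
      return { insertOne: { document: { ...op.insertOne.document, tenantId } } };
    }
    if ('updateOne' in op) {
      // Updates get tenantId added to the match filter.
      return { updateOne: { ...op.updateOne, filter: { ...op.updateOne.filter, tenantId } } };
    }
    return { deleteMany: { filter: { ...op.deleteMany.filter, tenantId } } };
  });
}
```

The rewritten ops are then handed to native bulkWrite, which otherwise bypasses Mongoose middleware entirely.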

* feat: pre-auth tenant middleware and tenant-scoped config cache

Adds preAuthTenantMiddleware that reads X-Tenant-Id from the request
header and wraps downstream in tenantStorage ALS context. Wired onto
/oauth, /api/auth, /api/config, and /api/share — unauthenticated
routes that need tenant scoping before JWT auth runs.

The /api/config cache key is now tenant-scoped
(STARTUP_CONFIG:${tenantId}) so multi-tenant deployments serve the
correct login page config per tenant.

The middleware is intentionally minimal — no subdomain parsing, no
OIDC claim extraction. The private fork's reverse proxy or auth
gateway sets the header.
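
A minimal sketch of the middleware contract, assuming Node's AsyncLocalStorage and a hypothetical tenantStorage instance (the real implementation also rejects the system sentinel and validates the header format):

```ts
import { AsyncLocalStorage } from 'node:async_hooks';

// Hypothetical stand-in for the shared tenantStorage ALS instance.
const tenantStorage = new AsyncLocalStorage<{ tenantId: string }>();

function preAuthTenantMiddleware(
  req: { headers: Record<string, string | undefined> },
  _res: unknown,
  next: () => void,
): void {
  const tenantId = req.headers['x-tenant-id'];
  if (!tenantId) return next(); // no header: fall through unscoped
  // Wrap downstream handlers so Mongoose queries see the tenant context.
  tenantStorage.run({ tenantId }, next);
}
```

Inside `next()`, `tenantStorage.getStore()` resolves the tenant, which is what the capturedNext test pattern mentioned below verifies.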

* feat: accept optional tenantId in updateInterfacePermissions

When tenantId is provided, the function re-enters inside
tenantStorage.run({ tenantId }) so all downstream Mongoose queries
target that tenant's roles instead of the system context. This lets
the private fork's tenant provisioning flow call
updateInterfacePermissions per-tenant after creating tenant-scoped
ADMIN/USER roles.

* fix: tenant-filter $lookup in getPromptGroup aggregation

The $lookup stage in getPromptGroup() queried the prompts collection
without tenant filtering. While the outer PromptGroup aggregate is
protected by the tenantIsolation plugin's pre('aggregate') hook,
$lookup runs as an internal MongoDB operation that bypasses Mongoose
hooks entirely.

Converts from simple field-based $lookup to pipeline-based $lookup
with an explicit tenantId match when tenant context is active.
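
The two $lookup shapes could be sketched like so (collection and field names are assumptions based on the description above):

```ts
// Returns the $lookup stage for the prompt-group aggregation.
// Field/collection names here are illustrative.
function promptLookupStage(tenantId?: string) {
  if (!tenantId) {
    // No tenant context: simple field-based lookup keeps efficient index seeks.
    return {
      $lookup: { from: 'prompts', localField: '_id', foreignField: 'groupId', as: 'prompts' },
    };
  }
  return {
    $lookup: {
      from: 'prompts',
      let: { groupId: '$_id' },
      pipeline: [
        // Constant tenant match kept separate from the $expr join so the
        // server can seek a tenantId index before evaluating the join.
        { $match: { tenantId } },
        { $match: { $expr: { $eq: ['$groupId', '$$groupId'] } } },
      ],
      as: 'prompts',
    },
  };
}
```

This matters because $lookup executes inside the aggregation engine and never passes through Mongoose's pre('aggregate') hooks.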

* fix: replace field-level unique indexes with tenant-scoped compounds

Field-level unique:true creates a globally-unique single-field index in
MongoDB, which would cause insert failures across tenants sharing the
same ID values.

- agent.id: removed field-level unique, added { id, tenantId } compound
- convo.conversationId: removed field-level unique (compound at line 50
  already exists: { conversationId, user, tenantId })
- message.messageId: removed field-level unique (compound at line 165
  already exists: { messageId, user, tenantId })
- preset.presetId: removed field-level unique, added { presetId, tenantId }
  compound
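
Schematically, the index change looks like this, with plain objects standing in for the actual Mongoose schema and index definitions:

```ts
// Before: field-level unique creates a globally unique single-field index,
// so two tenants could never reuse the same agent id.
const before = { id: { type: String, unique: true } };

// After: the field-level flag is dropped and uniqueness is scoped per tenant
// via a compound index (the second element is the index options object).
const after = {
  fields: { id: { type: String } },
  indexes: [[{ id: 1, tenantId: 1 }, { unique: true }]] as const,
};
```

In single-tenant deployments the compound index still enforces uniqueness, since every document shares the same (absent) tenantId.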

* fix: scope MODELS_CONFIG, ENDPOINT_CONFIG, PLUGINS, TOOLS caches by tenant

These caches store per-tenant configuration (available models, endpoint
settings, plugin availability, tool definitions) but were using global
cache keys. In multi-tenant mode, one tenant's cached config would be
served to all tenants.

Appends :${tenantId} to cache keys when tenant context is active.
Falls back to the unscoped key when no tenant context exists (backward
compatible for single-tenant OSS deployments).

Covers all read, write, and delete sites:
- ModelController.js: get/set MODELS_CONFIG
- PluginController.js: get/set PLUGINS, get/set TOOLS
- getEndpointsConfig.js: get/set/delete ENDPOINT_CONFIG
- app.js: delete ENDPOINT_CONFIG (clearEndpointConfigCache)
- mcp.js: delete TOOLS (updateMCPTools, mergeAppTools)
- importers.js: get ENDPOINT_CONFIG
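
A sketch of the scoping rule (the real helper, later extracted as scopedCacheKey, reads the tenant from ALS; the system-sentinel passthrough is described in a later commit in this log):

```ts
const SYSTEM_TENANT_ID = '__SYSTEM__';

/** Scope a cache key by tenant; passthrough when unscoped or in system context. */
function scopedCacheKey(baseKey: string, tenantId?: string): string {
  // No tenant context (single-tenant mode) or the runAsSystem sentinel:
  // return the base key so system operations never mint tenant-shaped entries.
  if (!tenantId || tenantId === SYSTEM_TENANT_ID) return baseKey;
  return `${baseKey}:${tenantId}`;
}
```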

* fix: add getTenantId to PluginController spec mock

The data-schemas mock was missing getTenantId, causing all
PluginController tests to throw when the controller calls
getTenantId() for tenant-scoped cache keys.

* fix: address review findings — migration, strict-mode, DRY, types

Addresses all CRITICAL, MAJOR, and MINOR review findings:

F1 (CRITICAL): Add agents, conversations, messages, presets to
SUPERSEDED_INDEXES in tenantIndexes.ts so dropSupersededTenantIndexes()
drops the old single-field unique indexes that block multi-tenant inserts.

F2 (CRITICAL): Unknown bulkWrite op types now throw in strict mode
instead of silently passing through without tenant injection.

F3 (MAJOR): Replace wildcard export with named export for
tenantSafeBulkWrite, hiding _resetBulkWriteStrictCache from the
public package API.

F5 (MAJOR): Restore AnyBulkWriteOperation<IAclEntry>[] typing on
bulkWriteAclEntries — the unparameterized wrapper accepts parameterized
ops as a subtype.

F7 (MAJOR): Fix config.js tenant precedence — JWT-derived
req.user.tenantId now takes priority over the X-Tenant-Id header for
authenticated requests.

F8 (MINOR): Extract scopedCacheKey() helper into tenantContext.ts and
replace all 11 inline occurrences across 7 files.

F9 (MINOR): Use simple localField/foreignField $lookup for the
non-tenant getPromptGroup path (more efficient index seeks).

F12 (NIT): Remove redundant BulkOp type alias.
F13 (NIT): Remove debug log that leaked raw tenantId.

* fix: add new superseded indexes to tenantIndexes test fixture

The test creates old indexes to verify the migration drops them.
Missing fixture entries for agents.id_1, conversations.conversationId_1,
messages.messageId_1, and presets.presetId_1 caused the count assertion
to fail (expected 22, got 18).

* fix: restore logger.warn for unknown bulk op types in non-strict mode

* fix: block SYSTEM_TENANT_ID sentinel from external header input

CRITICAL: preAuthTenantMiddleware accepted any string as X-Tenant-Id,
including '__SYSTEM__'. The tenantIsolation plugin treats SYSTEM_TENANT_ID
as an explicit bypass — skipping ALL query filters. A client sending
X-Tenant-Id: __SYSTEM__ to pre-auth routes (/api/share, /api/config,
/api/auth, /oauth) would execute Mongoose operations without tenant
isolation.

Fixes:
- preAuthTenantMiddleware rejects SYSTEM_TENANT_ID in header
- scopedCacheKey returns the base key (not key:__SYSTEM__) in system
  context, preventing stale cache entries during runAsSystem()
- updateInterfacePermissions guards tenantId against SYSTEM_TENANT_ID
- $lookup pipeline separates $expr join from constant tenantId match
  for better index utilization
- Regression test for sentinel rejection in preAuthTenant.spec.ts
- Remove redundant getTenantId() call in config.js

* test: add missing deleteMany/replaceOne coverage, fix vacuous ALS assertions

bulkWrite spec:
- deleteMany: verifies tenant-scoped deletion leaves other tenants untouched
- replaceOne: verifies tenantId injected into both filter and replacement
- replaceOne overwrite: verifies a conflicting tenantId in the replacement
  document is overwritten by the ALS tenant (defense-in-depth)
- empty ops array: verifies graceful handling

preAuthTenant spec:
- All negative-case tests now use the capturedNext pattern to verify
  getTenantId() inside the middleware's execution context, not the
  test runner's outer frame (which was always undefined regardless)

* feat: tenant-isolate MESSAGES cache, FLOWS cache, and GenerationJobManager

MESSAGES cache (streamAudio.js):
- Cache key now uses scopedCacheKey(messageId) to prefix with tenantId,
  preventing cross-tenant message content reads during TTS streaming.

FLOWS cache (FlowStateManager):
- getFlowKey() now generates ${type}:${tenantId}:${flowId} when tenant
  context is active, isolating OAuth flow state per tenant.

GenerationJobManager:
- tenantId added to SerializableJobData and GenerationJobMetadata
- createJob() captures the current ALS tenant context (excluding
  SYSTEM_TENANT_ID) and stores it in job metadata
- SSE subscription endpoint validates job.metadata.tenantId matches
  req.user.tenantId, blocking cross-tenant stream access
- Both InMemoryJobStore and RedisJobStore updated to accept tenantId

* fix: add getTenantId and SYSTEM_TENANT_ID to MCP OAuth test mocks

FlowStateManager.getFlowKey() now calls getTenantId() for tenant-scoped
flow keys. The 4 MCP OAuth test files mock @librechat/data-schemas
without these exports, causing TypeError at runtime.

* fix: correct import ordering per AGENTS.md conventions

Package imports sorted shortest to longest line length, local imports
sorted longest to shortest — fixes ordering violations introduced by
our new imports across 8 files.

* fix: deserialize tenantId in RedisJobStore — cross-tenant SSE guard was no-op in Redis mode

serializeJob() writes tenantId to the Redis hash via Object.entries,
but deserializeJob() manually enumerates fields and omitted tenantId.
Every getJob() from Redis returned tenantId: undefined, causing the
SSE route's cross-tenant guard to short-circuit (undefined && ... → false).
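
The bug reduces to a field dropped during manual deserialization; a simplified sketch with illustrative field names:

```ts
type JobHash = Record<string, string | undefined>;

// serializeJob() wrote every field to the Redis hash via Object.entries,
// but the manual field enumeration on the way back out omitted tenantId.
function deserializeJob(hash: JobHash) {
  return {
    userId: hash.userId,
    status: hash.status,
    // The fix: read tenantId back so the SSE guard can compare against it.
    tenantId: hash.tenantId,
  };
}
```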

* test: SSE tenant guard, FlowStateManager key consistency, ALS scope docs

SSE stream tenant tests (streamTenant.spec.js):
- Cross-tenant user accessing another tenant's stream → 403
- Same-tenant user accessing own stream → allowed
- OSS mode (no tenantId on job) → tenant check skipped

FlowStateManager tenant tests (manager.tenant.spec.ts):
- completeFlow finds flow created under same tenant context
- completeFlow does NOT find flow under different tenant context
- Unscoped flows are separate from tenant-scoped flows

Documentation:
- JSDoc on getFlowKey documenting ALS context consistency requirement
- Comment on streamAudio.js scopedCacheKey capture site

* fix: SSE stream tests hang on success path, remove internal fork references

The success-path tests entered the SSE streaming code which never
closes, causing timeout. Mock subscribe() to end the response
immediately. Restructured assertions to verify non-403/non-404.

Removed "private fork" and "OSS" references from code and test
descriptions — replaced with "deployment layer", "multi-tenant
deployments", and "single-tenant mode".

* fix: address review findings — test rigor, tenant ID validation, docs

F1: SSE stream tests now mock subscribe() with correct signature
(streamId, writeEvent, onDone, onError) and assert 200 status,
verifying the tenant guard actually allows through same-tenant users.

F2: completeFlow logs the attempted key and ALS tenantId when flow
is not found, so reverse proxy misconfiguration (missing X-Tenant-Id
on OAuth callback) produces an actionable warning.

F3/F10: preAuthTenantMiddleware validates tenant ID format — rejects
colons, special characters, and values exceeding 128 chars. Trims
whitespace. Prevents cache key collisions via crafted headers.
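
The validation might be sketched as follows (the exact character class is an assumption; the commit only specifies trimming, the 128-char cap, rejecting special characters like colons, and blocking the sentinel):

```ts
const SYSTEM_TENANT_ID = '__SYSTEM__';

// Hyphen placed first in the class (idiomatic form, per a later review note).
const TENANT_ID_PATTERN = /^[-a-zA-Z0-9_.]{1,128}$/;

/** Parse the X-Tenant-Id header value; null means "ignore the header". */
function parseTenantHeader(raw: string | undefined): string | null {
  const value = raw?.trim();
  // Reject the system bypass sentinel and anything outside the safe charset,
  // preventing cache key collisions (e.g. via embedded colons).
  if (!value || value === SYSTEM_TENANT_ID) return null;
  return TENANT_ID_PATTERN.test(value) ? value : null;
}
```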

F4: Documented cache invalidation scope limitation in
clearEndpointConfigCache — only the calling tenant's key is cleared;
other tenants expire via TTL.

F7: getFlowKey JSDoc now lists all 8 methods requiring consistent
ALS context.

F8: Added dedicated scopedCacheKey unit tests — base key without
context, base key in system context, scoped key with tenant, no
ALS leakage across scope boundaries.

* fix: revert flow key tenant scoping, fix SSE test timing

FlowStateManager: Reverts tenant-scoped flow keys. OAuth callbacks
arrive without tenant ALS context (provider redirects don't carry
X-Tenant-Id), so completeFlow/failFlow would never find flows
created under tenant context. Flow IDs are random UUIDs with no
collision risk, and flow data is ephemeral (TTL-bounded).

SSE tests: Use process.nextTick for onDone callback so Express
response headers are flushed before res.write/res.end are called.

* fix: restore getTenantId import for completeFlow diagnostic log

* fix: correct completeFlow warning message, add missing flow test

The warning referenced X-Tenant-Id header consistency which was only
relevant when flow keys were tenant-scoped (since reverted). Updated
to list actual causes: TTL expiry, missing flow, or routing to a
different instance without shared Keyv storage.

Removed the getTenantId() call and import — no longer needed since
flow keys are unscoped.

Added test for the !flowState branch in completeFlow — verifies
return false and logger.warn on nonexistent flow ID.

* fix: add explicit return type to recursive updateInterfacePermissions

The recursive call (tenantId branch calls itself without tenantId)
causes TypeScript to infer circular return type 'any'. Adding
explicit Promise<void> satisfies the rollup typescript plugin.

* fix: update MCPOAuthRaceCondition test to match new completeFlow warning

* fix: clearEndpointConfigCache deletes both scoped and unscoped keys

Unauthenticated /api/endpoints requests populate the unscoped
ENDPOINT_CONFIG key. Admin config mutations clear only the
tenant-scoped key, leaving the unscoped entry stale indefinitely.
Now deletes both when in tenant context.

* fix: tenant guard on abort/status endpoints, warn logs, test coverage

F1: Add tenant guard to /chat/status/:conversationId and /chat/abort
matching the existing guard on /chat/stream/:streamId. The status
endpoint exposes aggregatedContent (AI response text) which requires
tenant-level access control.

F2: preAuthTenantMiddleware now logs warn for rejected __SYSTEM__
sentinel and malformed tenant IDs, providing observability for
bypass probing attempts.

F3: Abort fallback path (getActiveJobIdsForUser) now has tenant
check after resolving the job.

F4: Test for strict mode + SYSTEM_TENANT_ID — verifies runAsSystem
bypasses tenantSafeBulkWrite without throwing in strict mode.

F5: Test for job with tenantId + user without tenantId → 403.

F10: Regex uses idiomatic hyphen-at-start form.

F11: Test descriptions changed from "rejects" to "ignores" since
middleware calls next() (not 4xx).

Also fixes MCPOAuthRaceCondition test assertion to match updated
completeFlow warning message.

* fix: test coverage for logger.warn, status/abort guards, consistency

A: preAuthTenant spec now mocks logger and asserts warn calls for
__SYSTEM__ sentinel, malformed characters, and oversized headers.

B: streamTenant spec expanded with status and abort endpoint tests —
cross-tenant status returns 403, same-tenant returns 200 with body,
cross-tenant abort returns 403.

C: Abort endpoint uses req.user.tenantId (not req.user?.tenantId)
matching stream/status pattern — requireJwtAuth guarantees req.user.

D: Malformed header warning now includes ip in log metadata,
matching the sentinel warning for consistent SOC correlation.

* fix: assert ip field in malformed header warn tests

* fix: parallelize cache deletes, document tenant guard, fix import order

- clearEndpointConfigCache uses Promise.all for independent cache
  deletes instead of sequential awaits
- SSE stream tenant guard has inline comment explaining backward-compat
  behavior for untenanted legacy jobs
- conversation.ts local imports reordered longest-to-shortest per
  AGENTS.md

* fix: tenant-qualify userJobs keys, document tenant guard backward-compat

Job store userJobs keys now include tenantId when available:
- Redis: stream:user:{tenantId:userId}:jobs (falls back to
  stream:user:{userId}:jobs when no tenant)
- InMemory: composite key tenantId:userId in userJobMap

getActiveJobIdsByUser/getActiveJobIdsForUser accept optional tenantId
parameter, threaded through from req.user.tenantId at all call sites
(/chat/active and /chat/abort fallback).

Added inline comments on all three SSE tenant guards explaining the
backward-compat design: untenanted legacy jobs remain accessible
when the userId check passes.
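
The key scheme above can be sketched as:

```ts
/** Build the Redis userJobs key, tenant-qualified when a tenant is present. */
function userJobsKey(userId: string, tenantId?: string): string {
  // stream:user:{tenantId:userId}:jobs, falling back to the bare userId form
  // for untenanted legacy jobs and single-tenant deployments.
  const principal = tenantId ? `${tenantId}:${userId}` : userId;
  return `stream:user:${principal}:jobs`;
}
```

The in-memory store uses the same `tenantId:userId` composite as its Map key.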

* fix: parallelize cache deletes, document tenant guard, fix import order

Fix InMemoryJobStore.getActiveJobIdsByUser empty-set cleanup to use
the tenant-qualified userKey instead of bare userId — prevents
orphaned empty Sets accumulating in userJobMap for multi-tenant users.

Document cross-tenant staleness in clearEndpointConfigCache JSDoc —
other tenants' scoped keys expire via TTL, not active invalidation.

* fix: cleanup userJobMap leak, startup warning, DRY tenant guard, docs

F1: InMemoryJobStore.cleanup() now removes entries from userJobMap
before calling deleteJob, preventing orphaned empty Sets from
accumulating with tenant-qualified composite keys.

F2: Startup warning when TENANT_ISOLATION_STRICT is active — reminds
operators to configure reverse proxy to control X-Tenant-Id header.

F3: mergeAppTools JSDoc documents that tenant-scoped TOOLS keys are
not actively invalidated (matching clearEndpointConfigCache pattern).

F5: Abort handler getActiveJobIdsForUser call uses req.user.tenantId
(not req.user?.tenantId) — consistent with stream/status handlers.

F6: updateInterfacePermissions JSDoc clarifies SYSTEM_TENANT_ID
behavior — falls through to caller's ALS context.

F7: Extracted hasTenantMismatch() helper, replacing three identical
inline tenant guard blocks across stream/status/abort endpoints.

F9: scopedCacheKey JSDoc documents both passthrough cases (no context
and SYSTEM_TENANT_ID context).

* fix: clean userJobMap in evictOldest — same leak as cleanup()
2026-03-28 16:43:50 -04:00
Danny Avila
9f6d8c6e93
🧵 feat: ALS Context Middleware, Tenant Threading, and Config Cache Invalidation (#12407)
* feat: add tenant context middleware for ALS-based isolation

Introduces tenantContextMiddleware that propagates req.user.tenantId
into AsyncLocalStorage, activating the Mongoose applyTenantIsolation
plugin for all downstream DB queries within a request.

- Strict mode (TENANT_ISOLATION_STRICT=true) returns 403 if no tenantId
- Non-strict mode passes through for backward compatibility
- No-op for unauthenticated requests
- Includes 6 unit tests covering all paths
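
A minimal sketch of that contract, assuming Node's AsyncLocalStorage (the strict flag is passed in here for illustration; the real middleware reads TENANT_ISOLATION_STRICT):

```ts
import { AsyncLocalStorage } from 'node:async_hooks';

// Hypothetical stand-in for the shared tenantStorage ALS instance.
const tenantStorage = new AsyncLocalStorage<{ tenantId: string }>();

function tenantContextMiddleware(strict: boolean) {
  return (
    req: { user?: { tenantId?: string } },
    res: { status: (code: number) => { json: (body: unknown) => void } },
    next: () => void,
  ): void => {
    const tenantId = req.user?.tenantId;
    if (!tenantId) {
      if (strict && req.user) {
        // Strict mode: an authenticated user without a tenant is rejected.
        res.status(403).json({ message: 'Tenant context required' });
        return;
      }
      // Non-strict mode or unauthenticated request: pass through unscoped.
      return next();
    }
    tenantStorage.run({ tenantId }, next);
  };
}
```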

* feat: register tenant middleware and wrap startup/auth in runAsSystem()

- Register tenantContextMiddleware in Express app after capability middleware
- Wrap server startup initialization in runAsSystem() for strict mode compat
- Wrap auth strategy getAppConfig() calls in runAsSystem() since they run
  before user context is established (LDAP, SAML, OpenID, social login, AuthService)

* feat: thread tenantId through all getAppConfig callers

Pass tenantId from req.user to getAppConfig() across all callers that
have request context, ensuring correct per-tenant cache key resolution.

Also fixes getBaseConfig admin endpoint to scope to requesting admin's
tenant instead of returning the unscoped base config.

Files updated:
- Controllers: UserController, PluginController
- Middleware: checkDomainAllowed, balance
- Routes: config
- Services: loadConfigModels, loadDefaultModels, getEndpointsConfig, MCP
- Audio services: TTSService, STTService, getVoices, getCustomConfigSpeech
- Admin: getBaseConfig endpoint

* feat: add config cache invalidation on admin mutations

- Add clearOverrideCache(tenantId?) to flush per-principal override caches
  by enumerating Keyv store keys matching _OVERRIDE_: prefix
- Add invalidateConfigCaches() helper that clears base config, override
  caches, tool caches, and endpoint config cache in one call
- Wire invalidation into all 5 admin config mutation handlers
  (upsert, patch, delete field, delete overrides, toggle active)
- Add strict mode warning when __default__ tenant fallback is used
- Add 3 new tests for clearOverrideCache (all/scoped/base-preserving)

* chore: update getUserPrincipals comment to reflect ALS-based tenant filtering

The TODO(#12091) about missing tenantId filtering is resolved by the
tenant context middleware + applyTenantIsolation Mongoose plugin.
Group queries are now automatically scoped by tenantId via ALS.

* fix: replace runAsSystem with baseOnly for pre-tenant code paths

App configs are tenant-owned — runAsSystem() would bypass tenant
isolation and return cross-tenant DB overrides. Instead, add
baseOnly option to getAppConfig() that returns YAML-derived config
only, with zero DB queries.

All startup code, auth strategies, and MCP initialization now use
getAppConfig({ baseOnly: true }) to get the YAML config without
touching the Config collection.

* fix: address PR review findings — middleware ordering, types, cache safety

- Chain tenantContextMiddleware inside requireJwtAuth after passport auth
  instead of global app.use() where req.user is always undefined (Finding 1)
- Remove global tenantContextMiddleware registration from index.js
- Update BalanceMiddlewareOptions to include tenantId, remove redundant cast (Finding 4)
- Add warning log when clearOverrideCache cannot enumerate keys on Redis (Finding 3)
- Use startsWith instead of includes for cache key filtering (Finding 12)
- Use generator loop instead of Array.from for key enumeration (Finding 3)
- Selective barrel export — exclude _resetTenantMiddlewareStrictCache (Finding 5)
- Move isMainThread check to module level, remove per-request check (Finding 9)
- Move mid-file require to top of app.js (Finding 8)
- Parallelize invalidateConfigCaches with Promise.all (Finding 10)
- Remove clearOverrideCache from public app.js exports (internal only)
- Strengthen getUserPrincipals comment re: ALS dependency (Finding 2)

* fix: restore runAsSystem for startup DB ops, consolidate require, clarify baseOnly

- Restore runAsSystem() around performStartupChecks, updateInterfacePermissions,
  initializeMCPs, and initializeOAuthReconnectManager — these make Mongoose
  queries that need system context in strict tenant mode (NEW-3)
- Consolidate duplicate require('@librechat/api') in requireJwtAuth.js (NEW-1)
- Document that baseOnly ignores role/userId/tenantId in JSDoc (NEW-2)

* test: add requireJwtAuth tenant chaining + invalidateConfigCaches tests

- requireJwtAuth: 5 tests verifying ALS tenant context is set after
  passport auth, isolated between concurrent requests, and not set
  when user has no tenantId (Finding 6)
- invalidateConfigCaches: 4 tests verifying all four caches are cleared,
  tenantId is threaded through, partial failure is handled gracefully,
  and operations run in parallel via Promise.all (Finding 11)

* fix: address Copilot review — passport errors, namespaced cache keys, /base scoping

- Forward passport errors in requireJwtAuth before entering tenant
  middleware — prevents silent auth failures from reaching handlers (P1)
- Account for Keyv namespace prefix in clearOverrideCache — stored keys
  are namespaced as "APP_CONFIG:_OVERRIDE_:..." not "_OVERRIDE_:...",
  so override caches were never actually matched/cleared (P2)
- Remove role from getBaseConfig — /base should return tenant-scoped
  base config, not role-merged config that drifts per admin role (P2)
- Return tenantStorage.run() for cleaner async semantics
- Update mock cache in service.spec.ts to simulate Keyv namespacing
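
The namespacing fix reduces to matching stored keys against the full namespaced prefix; a simplified sketch (prefix format assumed from the description above):

```ts
/** Select override cache keys to delete, accounting for Keyv's namespace prefix. */
function overrideKeysToClear(storedKeys: string[], tenantId?: string): string[] {
  // Keys are stored as "APP_CONFIG:_OVERRIDE_:..." — matching on the bare
  // "_OVERRIDE_:" prefix would never hit, so overrides were never cleared.
  const prefix = tenantId
    ? `APP_CONFIG:_OVERRIDE_:${tenantId}`
    : 'APP_CONFIG:_OVERRIDE_:';
  return storedKeys.filter((key) => key.startsWith(prefix));
}
```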

* fix: address second review — cache safety, code quality, test reliability

- Decouple cache invalidation from mutation response: fire-and-forget
  with logging so DB mutation success is not masked by cache failures
- Extract clearEndpointConfigCache helper from inline IIFE
- Move isMainThread check to lazy once-per-process guard (no import
  side effect)
- Memoize process.env read in overrideCacheKey to avoid per-request
  env lookups and log flooding in strict mode
- Remove flaky timer-based parallelism assertion, use structural check
- Merge orphaned double JSDoc block on getUserPrincipals
- Fix stale [getAppConfig] log prefix → [ensureBaseConfig]
- Fix import order in tenant.spec.ts (package types before local values)
- Replace "Finding 1" reference with self-contained description
- Use real tenantStorage primitives in requireJwtAuth spec mock

* fix: move JSDoc to correct function after clearEndpointConfigCache extraction

* refactor: remove Redis SCAN from clearOverrideCache, rely on TTL expiry

Redis SCAN causes 60s+ stalls under concurrent load (see #12410).
APP_CONFIG defaults to FORCED_IN_MEMORY_CACHE_NAMESPACES, so the
in-memory store.keys() path handles the standard case. When APP_CONFIG
is Redis-backed, overrides expire naturally via overrideCacheTtl (60s
default) — an acceptable window for admin config mutations.

* fix: remove return from tenantStorage.run to satisfy void middleware signature

* fix: address second review — cache safety, code quality, test reliability

- Switch invalidateConfigCaches from Promise.all to Promise.allSettled
  so partial failures are logged individually instead of producing one
  undifferentiated error (Finding 3)
- Gate overrideCacheKey strict-mode warning behind a once-per-process
  flag to prevent log flooding under load (Finding 4)
- Add test for passport error forwarding in requireJwtAuth — the
  if (err) { return next(err) } branch now has coverage (Finding 5)
- Add test for real partial failure in invalidateConfigCaches where
  clearAppConfigCache rejects (not just the swallowed endpoint error)

* chore: reorder imports in index.js and app.js for consistency

- Moved logger and runAsSystem imports to maintain a consistent import order across files.
- Improved code readability by ensuring related imports are grouped together.
2026-03-26 17:35:00 -04:00
Marco Beretta
0c66823c26
🧩 feat: Redesign Tool Call UI with Contextual Icons, Smart Grouping, and Rich Output Rendering (#12163)
* feat: redesign tool call UI with type-specific icons, smart grouping, and rich output rendering

Replace the generic spinner/checkmark tool call UI with a modern, Cursor-inspired design:

- Add per-tool-type icons (Plug for MCP, Terminal for code, Globe for web search, etc.)
- Group 2+ consecutive tool calls into collapsible "Used N tools" sections
- Stack unique tool icons in grouped headers with overlapping circle design
- Replace raw JSON output with intelligent renderers (table, error, text)
- Restructure ToolCallInfo: output first, parameters collapsible at bottom
- Add shared useExpandCollapse hook for consistent animations
- Add CodeWindowHeader for ExecuteCode windowed view
- Remove FinishedIcon (purple checkmark) entirely

* feat: display custom MCP server icons in tool calls

Add useMCPIconMap hook to resolve MCP server names to their configured
icon paths. ToolIcon and StackedToolIcons now accept custom icon URLs,
showing actual server logos (e.g., Home Assistant, GitHub) instead of
the generic Plug icon for MCP tool calls.

* refactor: unify container styling across code blocks, mermaid, and tool output

Replace hardcoded gray colors with theme tokens throughout:
- CodeBlock: bg-gray-900/700 -> bg-surface-secondary/tertiary + border-border-light
- Mermaid dialog: bg-gray-700 -> bg-surface-secondary, text-gray-200 -> text-text-secondary
- Mermaid containers: rounded-xl -> rounded-lg, remove shadow-md for consistency
- ResultSwitcher: bg-gray-700 -> bg-surface-secondary with border separator
- RunCode: hover:bg-gray-700 -> hover:bg-surface-hover
- ErrorOutput: add border for visual consistency
- MermaidHeader/CodeWindowHeader: consistent focus outlines using border-heavy

* refactor: simplify tool output to plain text, remove custom renderers

Remove over-engineered tool output system (TableOutput, ErrorOutput,
detectOutputType) in favor of simple text extraction. Tool output now
extracts the text content from MCP content blocks and displays it as
clean readable text — no tables, no error styling, no JSON formatting.

Parameters only show key-value badges for simple objects; complex JSON
is hidden instead of dumped raw. Matches Cursor-style simplicity.

* fix: handle error messages and format JSON arrays in tool output

- Strip verbose MCP error prefixes (Error: [MCP][server][tool] tool call
  failed: Error POSTing...) and show just the meaningful error message
- Display errors in red text
- Format uniform JSON arrays as readable lists (name — path) instead
  of raw JSON dumps
- Format plain JSON objects as key: value lines

* feat: improve JSON display in tool call output and parameters

- Replace flat formatObject with recursive formatValue for proper
  indented display of nested JSON structures
- Add ComplexInput component for tool parameters with nested objects,
  arrays, or long strings (previously hidden)
- Broaden hasParams check to show parameters for all object types
- Add font-mono to output renderer for better alignment
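
A plain-string sketch of the recursive formatter (the actual component emits React elements, and the exact indentation rules here are assumptions):

```ts
/** Recursively render a JSON value as indented key: value lines. */
function formatValue(value: unknown, indent = 0): string {
  const pad = '  '.repeat(indent);
  if (Array.isArray(value)) {
    // Arrays render each element at the current depth.
    return value.map((v) => formatValue(v, indent)).join('\n');
  }
  if (value !== null && typeof value === 'object') {
    return Object.entries(value as Record<string, unknown>)
      .map(([k, v]) =>
        typeof v === 'object' && v !== null
          ? `${pad}${k}:\n${formatValue(v, indent + 1)}` // nested: recurse deeper
          : `${pad}${k}: ${String(v)}`,
      )
      .join('\n');
  }
  return `${pad}${String(value)}`;
}
```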

* feat: add localization keys for tool errors, web search, and code UI

* refactor: move Mermaid components into dedicated directory module

* refactor: extract CodeBar, FloatingCodeBar, and copy utilities from CodeBlock

* feat: replace manual SVG icons with @icons-pack/react-simple-icons

Supports 50+ programming languages with tree-shaken brand icons
instead of hand-crafted SVGs for 19 languages.

* refactor: simplify code execution UI with persistent code toggle

* refactor: use useExpandCollapse hook in Thinking and Reasoning

* feat: improve tool call error states, subtitles, and group summaries

* feat: redesign web search with inline source display

* feat: improve agent handoff with keyboard accessibility

* feat: reorganize exports order in hooks and utils

* refactor: unify CopyCodeButton with animated icon transitions and iconOnly support

* feat: add run code state machine with animated success/error feedback

* refactor: improve ResultSwitcher with lucide icons and accessibility

* refactor: update CopyButton component

* refactor: replace CopyCodeButton with CopyButton component across multiple files

* test: add ImageGen test stubs

* test: add RetrievalCall test stubs

* feat: merge ImageGen with ToolIcon and localized progress text

* feat: modernize RetrievalCall with ToolIcon and collapsible output

* test: add getToolIconType action delimiter tests

* test: add ImageGen collapsible output tests

* feat: add action ToolIcon type with Zap icon

* fix: replace AgentHandoff div with semantic button

* feat: add aria-live regions to tool components

* feat: redesign execute_code tool UI with syntax highlighting and language icons

- Remove filename labels (script.py, main.rs) and line counter from CodeWindowHeader
- Replace generic FileCode icon with language-specific LangIcon
- Add syntax highlighting via highlight.js to code blocks
- Add SquareTerminal icon to ExecuteCode progress text
- Use shared CopyButton component in CodeWindowHeader
- Remove active:scale-95 press animation from CopyButton and RunCode

* feat: dynamic tool status text sizing based on markdown font-size variable

- Add tool-status-text CSS class using calc(0.9 * --markdown-font-size)
- Update progress-text-wrapper to use dynamic sizing instead of base size
- Apply tool-status-text to WebSearch, ToolCallGroup, AgentHandoff, ImageGen
- Replace hardcoded text-sm/text-xs with dynamic class across all tools
- Animate chevron rotation in ProgressText and ToolCallGroup
- Update subtitle text color from tertiary to secondary

* fix: consistent spacing and text styles across all tool components

- Standardize tool status row spacing to my-1/my-1.5 across all components
- Update ToolCallInfo text from tertiary to secondary, add vertical padding
- Animate ToolCallInfo parameters chevron rotation
- Update OutputRenderer link colors from tertiary to secondary

* feat: unify tool call grouping for all tool types

All consecutive tool calls (MCP, execute_code, web_search, image_gen,
file_search, code_interpreter) are now grouped under a single
collapsible "Used N tools" header instead of only grouping generic
tool calls.

- Remove SPECIAL_TOOL_NAMES blacklist from groupToolCalls
- Replace getToolCallData with getToolMeta to handle all tool types
- Use renderPart callback in ToolCallGroup for proper component routing
- Add file_search and code_interpreter mappings to getToolIconType
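
The unified grouping rule can be sketched independently of React (types simplified; the real code routes each grouped part through the renderPart callback):

```ts
type Part = { type: 'tool' | 'text'; name?: string };
type Rendered = { kind: 'group'; parts: Part[] } | { kind: 'single'; part: Part };

/** Collapse any run of 2+ consecutive tool parts into one group. */
function groupToolCalls(parts: Part[]): Rendered[] {
  const out: Rendered[] = [];
  let run: Part[] = [];
  const flush = () => {
    if (run.length >= 2) out.push({ kind: 'group', parts: run });
    else run.forEach((part) => out.push({ kind: 'single', part }));
    run = [];
  };
  for (const part of parts) {
    if (part.type === 'tool') run.push(part);
    else {
      flush(); // non-tool content breaks the run
      out.push({ kind: 'single', part });
    }
  }
  flush();
  return out;
}
```

With no blacklist, a run of execute_code, web_search, and MCP calls collapses into one "Used N tools" header just like generic tool calls.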

* feat: friendly tool group labels, more icons, and output copy button

- Show friendly names in group summary (Code, Web Search, Image
  Generation) instead of raw tool names
- Display MCP server names instead of individual function names
- Deduplicate labels and show up to 3 with +N overflow
- Increase stacked icons from 3 to 4
- Add icon-only copy button to tool output (OutputRenderer)

* fix: execute_code spacing and syntax-highlighted code visibility

Match ToolCall spacing by using my-1.5 on status line and moving my-2
inside overflow-hidden. Replace broken hljs.highlight() with lowlight
(same engine used by rehype-highlight for markdown code blocks) to
render syntax-highlighted code as React elements. Handle object args
in useParseArgs to support both string and Record arg formats.

* feat: replace showCode with auto-expand tools setting

Replace the execute_code-only "Always show code when using code
interpreter" global toggle with a new "Auto-expand tool details"
setting that controls all tool types. Each tool instance now uses
independent local state initialized from the setting, so expanding
one tool no longer affects others. Applies to ToolCall, ExecuteCode,
ToolCallGroup, and CodeAnalyze components.

* fix: apply auto-expand tools setting to WebSearch and RetrievalCall

* fix: only auto-expand tools when content is available

Defer auto-expansion until tool output or content arrives, preventing
empty bordered containers from showing while tools are still running.
Uses useEffect to expand when output becomes available during streaming.

* feat: redesign file_search tool output, citations, and file preview

- Redesign RetrievalCall with per-file cards using OutputRenderer
  (truncated content with show more/less, copy button) matching MCP
  tool pattern
- Route file_search tool calls from Agents API to RetrievalCall
  instead of generic ToolCall
- Add FilePreviewDialog for viewing files (PDF iframe, text content)
  with download option, opened from clickable filenames
- Redesign file citations: FileText icon in badge, relevance and
  page numbers in hovercard, click opens file preview instead of
  downloading
- Add file preview to message file attachments (Files.tsx)
- Fix hovercard animation to slide top-to-bottom and dismiss
  instantly on file click to prevent glitching over dialog
- Add localization keys for relevance, extracted content, preview
- Add top margin to ToolCallGroup

* chore: remove leftover .planning files

* fix: polish FilePreviewDialog, CodeBlock, LangIcon, and Sources

* fix: prevent keyboard focus on collapsed tool content

Add inert attribute to all expand/collapse wrapper divs so
collapsed content is removed from tab order and hidden from
assistive technology. Skip disabled ProgressText buttons from
tab order with tabIndex={-1}.

* feat: integrate file metadata into file_search UI

Pass fileType (MIME) and fileBytes from backend file records through
to the frontend. Add file-type-specific icons, file size display,
pages sorted by relevance, multi-snippet content per file, smart
preview detection by MIME type, and copy button in file preview dialog.

* fix: review fixes — inverted type check, wrong dimension, missing import, fail-open perms, timer leaks, dead code cleanup

* fix: update CodeBlock styling for improved visual consistency

* fix(chat): open composite file citations in preview

* fix(chat): restore file previews for parsed search results

* chore(git): ignore bg-shell artifacts

* fix(chat): restore readable code content in light theme

* style(chat): align code and output surfaces by theme

* chore(i18n): remove 6 unused translation keys

* fix(deps): replace private registry URL with public npm registry in lockfile

* fix: CI lint, build, and test failures

- Add missing scaleImage utility (fixes Vite build error)
- Export scaleImage from utils/index.ts
- Remove unused imports from Part.tsx (FunctionToolCall, CodeToolCall, Agents)
- Fix prettier formatting in Part.tsx (multi-line → single-line imports, conditions)
- Remove excess blank lines in Part.tsx
- Remove unused CodeEditorRef import from Artifacts.tsx
- Add useProgress mock to OpenAIImageGen.test.tsx
- Add scaleImage mock to OpenAIImageGen.test.tsx
- Update OpenAIImageGen tests to match redesigned component structure
- Remove dead collapsible output panel tests from ImageGen.test.tsx
- Add @icons-pack/react-simple-icons to Jest transformIgnorePatterns (ESM fix)

* refactor: reorganize imports order across multiple components for consistency

* fix: add scaleImage tests, delete dead ImageGen wrapper, wire up onUIAction in ToolCallInfo

- Add 7 unit tests for scaleImage utility covering null ref, scaling,
  no-upscale, height clamping, landscape, and panoramic images
- Delete unused Content/ImageGen.tsx re-export wrapper (ImageGen is
  imported from Parts/OpenAIImageGen via the Parts barrel)
- Wire up onUIAction in ToolCallInfo to use handleUIAction + ask from
  useMessagesOperations, matching UIResourceCarousel's behavior
  (was previously a silent no-op)

* refactor: optimize imports and enhance lazy loading for language icons

* fix: address review findings for tool call UI redesign

- Fix unstable array-index keys in ToolCallGroup (streaming state corruption)
- Add plain-text fallback in InputRenderer for non-JSON tool args
- Localize FRIENDLY_NAMES via translation keys instead of hardcoded English
- Guard autoCollapse against user-initiated manual expansion
- Fix CODE_INTERPRETER hasOutput to check actual outputs instead of hardcoding true
- Add logger.warn for Citations fail-closed behavior on permission errors
- Add Terminal icon to CodeAnalyze ProgressText for visual consistency
- Fix getMCPServerName to use indexOf instead of fragile split
- Use useLayoutEffect for inert attribute in useExpandCollapse (a11y)
- Memoize style object in useExpandCollapse to avoid defeating React.memo
- Memoize groupSequentialToolCalls in ContentParts to avoid recomputation
- Use source.link as stable key instead of array index in WebSearch
- Hoist rehypePlugins outside CodeMarkdown to prevent per-render recreation

* fix: revert useMemo after conditional returns in ContentParts

The useMemo placed after early returns violated React Rules of Hooks —
hook call count would change when transitioning between edit/view mode.
Reverted to the original plain forEach which is correct and equally
performant since content changes on every streaming token anyway.

* chore: remove unused com_ui_variables_info translation key

* fix: update tests and jest config for ESM compatibility after rebase

- Add ESM-only packages to transformIgnorePatterns (@dicebear, unified
  ecosystem, react-dnd, lowlight, etc.) to fix Jest parse failures
  introduced by dev rebase
- Update ToolCall.test.tsx to match new component API (CSS
  expand/collapse instead of conditional rendering, simplified props)
- Update ToolCallInfo.test.tsx to mock OutputRenderer (avoids ESM
  chain), align with current component interface (input/output/attachments)

* refactor: replace @icons-pack/react-simple-icons with inline SVGs

Inline the 51 Simple Icons SVG paths used by LangIcon directly into
langIconPaths.ts, eliminating the runtime dependency on
@icons-pack/react-simple-icons (which requires Node >= 24).

- LangIcon now renders a plain <svg> with the path data instead of
  lazy-loading React components from the package
- Removes Suspense/React.lazy overhead for code block language icons
- SVG paths sourced from Simple Icons (CC0 1.0 license)
- Package kept in package.json for now (will be removed separately)

* fix: replace Plug icon with Wrench for MCP tools, remove unused i18n keys

- MCP tools without a custom iconPath now show Wrench instead of Plug,
  matching the generic tool fallback and avoiding the "plugin" metaphor
- Remove unused translation keys: com_assistants_action_attempt,
  com_assistants_attempt_info, com_assistants_domain_info,
  com_ui_ui_resources

* fix: address second review findings

- Combine 3x getToolMeta loop into single toolMetadata pass (ToolCallGroup)
- Extract sortPagesByRelevance to shared util (was duplicated in
  FilePreviewDialog and RetrievalCall)
- Deduplicate AGENT_STYLE_TOOLS Set (export from OpenAIImageGen/index.ts)
- Localize "source/sources" in WebSearch aria-label
- Add autoExpand useEffect to CodeAnalyze for live setting changes
- Log download errors in FilePreviewDialog instead of silently swallowing
- Replace @ts-ignore with @ts-expect-error + explanation in Code.tsx
- Remove dead currentContent alias in CodeMarkdown

* chore: remove @icons-pack/react-simple-icons dependency from package.json and package-lock.json

- Deleted the @icons-pack/react-simple-icons entry from both package.json and package-lock.json, following the previous refactor to use inline SVGs for icons.

* fix: address triage audit findings

- Remove unused gIdx variable (ESLint error)
- Fix singular/plural in web search sources aria-label
- Separate inline type import in ToolCallGroup per AGENTS.md

* fix: remove invalid placeholderDimensions prop from Image component

* chore: import order

* chore: import order

* fix: resolve TypeScript errors in PR-touched files

- Remove non-existent placeholderDimensions prop from Image in Files.tsx
- Fix localize count param type (number, not string) in WebSearch.tsx
- Pass full resource object instead of partial in UIResourceCarousel.tsx
- Add 'as const' to toggleSwitchConfigs localizationKey in General.tsx
- Fix SearchResultData type in Citation.test.tsx
- Fix TAttachment and UIResource test fixture types across test files

* docs: document formatBytes difference in FilePreviewDialog

The local formatBytes returns a human-readable string with units
("1.5 MB"), while ~/utils/formatBytes returns a raw number. They
serve different purposes, so the local copy is retained with a
JSDoc comment explaining the distinction.

* fix: address remaining review items

- Replace cancelled IIFE with documented ternary in OpenAIImageGen,
  explaining the agent vs legacy path distinction
- Add .catch() fallback to loadLowlight() in useLazyHighlight — falls
  back to plain text if the chunk fails to load
- Fix import ordering in ToolCallGroup.tsx (type imports grouped before
  local value imports per AGENTS.md)

* fix: blob URL leak and useGetFiles over-fetch

- FilePreviewDialog: add cancelledRef guard to loadPreview so blob URLs
  are never created after the dialog closes (prevents orphaned object
  URLs on unmount during async PDF fetch)
- RetrievalCall: filter useGetFiles by fileIds from fileSources instead
  of fetching the entire user file corpus for display-only name matching

* chore: fix com_nav_auto_expand_tools alphabetical order in translation.json

* fix: render non-object JSON params instead of returning null in InputRenderer

* refactor: render JSON tool output as syntax-highlighted code block

Replace the custom YAML-ish formatValue/formatObjectArray rendering
with JSON.stringify + hljs language-json styling. Structured API
responses (like GitHub search results) now display as proper
syntax-highlighted JSON with indentation instead of a flat key-value
text dump.

- Remove formatValue, formatObjectArray, isUniformObjectArray helpers
- Add isJson flag to extractText return type
- JSON output rendered in <code class="hljs language-json"> block
- Text content blocks (type: "text") still extracted and rendered
  as plain text
- Error output unchanged
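
The JSON-detection step implied above can be sketched as: attempt to parse the raw output, and only re-serialize with indentation when it is a structured object or array. Function shape and return fields are assumptions mirroring the commit message, not the component's exact code.

```javascript
// Minimal sketch: structured JSON output is pretty-printed (for the
// hljs language-json block); anything else falls back to plain text.
function extractText(output) {
  try {
    const parsed = JSON.parse(output);
    if (parsed !== null && typeof parsed === 'object') {
      return { text: JSON.stringify(parsed, null, 2), isJson: true };
    }
  } catch {
    // not JSON — fall through to plain text
  }
  return { text: String(output), isJson: false };
}
```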

* fix: extract cancelled IIFE to named function in OpenAIImageGen

Replace nested ternary with a named computeCancelled() function that
documents the agent vs legacy path branching. Resolves eslint
no-nested-ternary warning.

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2026-03-25 12:31:39 -04:00
Danny Avila
dd72b7b17e
🔄 chore: Consolidate agent model imports across middleware and tests from rebase
- Updated imports for `createAgent` and `getAgent` to streamline access from a unified `~/models` path.
- Enhanced test files to reflect the new import structure, ensuring consistency and maintainability across the codebase.
- Improved clarity by removing redundant imports and aligning with the latest model updates.
2026-03-21 14:28:55 -04:00
Atef Bellaaj
a0fed6173c
🗂️ refactor: Migrate S3 Storage to TypeScript in packages/api (#11947)
* Migrate S3 storage module with unit and integration tests

  - Migrate S3 CRUD and image operations to packages/api/src/storage/s3/
  - Add S3ImageService class with dependency injection
  - Add unit tests using aws-sdk-client-mock
  - Add integration tests against a real S3 bucket (conditioned on the presence of AWS_TEST_BUCKET_NAME)

* AI Review Findings Fixes

* chore: tests and refactor S3 storage types

- Added mock implementations for the 'sharp' library in various test files to improve image processing testing.
- Updated type references in S3 storage files from MongoFile to TFile for consistency and type safety.
- Refactored S3 CRUD operations to ensure proper handling of file types and improve code clarity.
- Enhanced integration tests to validate S3 file operations and error handling more effectively.

* chore: rename test file

* Remove duplicate import of refreshS3Url

* chore: imports order

* fix: remove duplicate imports for S3 URL handling in UserController

* fix: remove duplicate import of refreshS3FileUrls in files.js

* test: Add mock implementations for 'sharp' and '@librechat/api' in UserController tests

- Introduced mock functions for the 'sharp' library to facilitate image processing tests, including metadata retrieval and buffer conversion.
- Enhanced mocking for '@librechat/api' to ensure consistent behavior in tests, particularly for the needsRefresh and getNewS3URL functions.

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2026-03-21 14:28:55 -04:00
Danny Avila
0412f05daf
🪢 chore: Consolidate Pricing and Tx Imports After tx.js Module Removal (#12086)
* 🧹 chore: resolve imports due to rebase

* chore: Update model mocks in unit tests for consistency

- Consolidated model mock implementations across various test files to streamline setup and reduce redundancy.
- Removed duplicate mock definitions for `getMultiplier` and `getCacheMultiplier`, ensuring a unified approach in `recordCollectedUsage.spec.js`, `openai.spec.js`, `responses.unit.spec.js`, and `abortMiddleware.spec.js`.
- Enhanced clarity and maintainability of test files by aligning mock structures with the latest model updates.

* fix: Safeguard token credit checks in transaction tests

- Updated assertions in `transaction.spec.ts` to handle potential null values for `updatedBalance` by using optional chaining.
- Enhanced robustness of tests related to token credit calculations, ensuring they correctly account for scenarios where the balance may not be found.

* chore: transaction methods with bulk insert functionality

- Introduced `bulkInsertTransactions` method in `transaction.ts` to facilitate batch insertion of transaction documents.
- Updated test file `transactions.bulk-parity.spec.ts` to utilize new pricing function assignments and handle potential null values in calculations, improving test robustness.
- Refactored pricing function initialization for clarity and consistency.

* refactor: Enhance type definitions and introduce new utility functions for model matching

- Added `findMatchingPattern` and `matchModelName` utility functions to improve model name matching logic in transaction methods.
- Updated type definitions for `findMatchingPattern` to accept a more specific tokensMap structure, enhancing type safety.
- Refactored `dbMethods` initialization in `transactions.bulk-parity.spec.ts` to include the new utility functions, improving test clarity and functionality.

* refactor: Update database method imports and enhance transaction handling

- Refactored `abortMiddleware.js` to utilize centralized database methods for message handling and conversation retrieval, improving code consistency.
- Enhanced `bulkInsertTransactions` in `transaction.ts` to handle empty document arrays gracefully and added error logging for better debugging.
- Updated type definitions in `transactions.ts` to enforce stricter typing for token types, enhancing type safety across transaction methods.
- Improved test setup in `transactions.bulk-parity.spec.ts` by refining pricing function assignments and ensuring robust handling of potential null values.

* refactor: Update database method references and improve transaction multiplier handling

- Refactored `client.js` to update database method references for `bulkInsertTransactions` and `updateBalance`, ensuring consistency in method usage.
- Enhanced transaction multiplier calculations in `transaction.spec.ts` to provide fallback values for write and read multipliers, improving robustness in cost calculations across structured token spending tests.
2026-03-21 14:28:53 -04:00
Danny Avila
8ba2bde5c1
📦 refactor: Consolidate DB models, encapsulating Mongoose usage in data-schemas (#11830)
* chore: move database model methods to /packages/data-schemas

* chore: add TypeScript ESLint rule to warn on unused variables

* refactor: model imports to streamline access

- Consolidated model imports across various files to improve code organization and reduce redundancy.
- Updated imports for models such as Assistant, Message, Conversation, and others to a unified import path.
- Adjusted middleware and service files to reflect the new import structure, ensuring functionality remains intact.
- Enhanced test files to align with the new import paths, maintaining test coverage and integrity.

* chore: migrate database models to packages/data-schemas and refactor all direct Mongoose Model usage outside of data-schemas

* test: update agent model mocks in unit tests

- Added `getAgent` mock to `client.test.js` to enhance test coverage for agent-related functionality.
- Removed redundant `getAgent` and `getAgents` mocks from `openai.spec.js` and `responses.unit.spec.js` to streamline test setup and reduce duplication.
- Ensured consistency in agent mock implementations across test files.

* fix: update types in data-schemas

* refactor: enhance type definitions in transaction and spending methods

- Updated type definitions in `checkBalance.ts` to use specific request and response types.
- Refined `spendTokens.ts` to utilize a new `SpendTxData` interface for better clarity and type safety.
- Improved transaction handling in `transaction.ts` by introducing `TransactionResult` and `TxData` interfaces, ensuring consistent data structures across methods.
- Adjusted unit tests in `transaction.spec.ts` to accommodate new type definitions and enhance robustness.

* refactor: streamline model imports and enhance code organization

- Consolidated model imports across various controllers and services to a unified import path, improving code clarity and reducing redundancy.
- Updated multiple files to reflect the new import structure, ensuring all functionalities remain intact.
- Enhanced overall code organization by removing duplicate import statements and optimizing the usage of model methods.

* feat: implement loadAddedAgent and refactor agent loading logic

- Introduced `loadAddedAgent` function to handle loading agents from added conversations, supporting multi-convo parallel execution.
- Created a new `load.ts` file to encapsulate agent loading functionalities, including `loadEphemeralAgent` and `loadAgent`.
- Updated the `index.ts` file to export the new `load` module instead of the deprecated `loadAgent`.
- Enhanced type definitions and improved error handling in the agent loading process.
- Adjusted unit tests to reflect changes in the agent loading structure and ensure comprehensive coverage.

* refactor: enhance balance handling with new update interface

- Introduced `IBalanceUpdate` interface to streamline balance update operations across the codebase.
- Updated `upsertBalanceFields` method signatures in `balance.ts`, `transaction.ts`, and related tests to utilize the new interface for improved type safety.
- Adjusted type imports in `balance.spec.ts` to include `IBalanceUpdate`, ensuring consistency in balance management functionalities.
- Enhanced overall code clarity and maintainability by refining type definitions related to balance operations.

* feat: add unit tests for loadAgent functionality and enhance agent loading logic

- Introduced comprehensive unit tests for the `loadAgent` function, covering various scenarios including null and empty agent IDs, loading of ephemeral agents, and permission checks.
- Enhanced the `initializeClient` function by moving `getConvoFiles` to the correct position in the database method exports, ensuring proper functionality.
- Improved test coverage for agent loading, including handling of non-existent agents and user permissions.

* chore: reorder memory method exports for consistency

- Moved `deleteAllUserMemories` to the correct position in the exported memory methods, ensuring a consistent and logical order of method exports in `memory.ts`.
2026-03-21 14:28:53 -04:00
Danny Avila
39f5f83a8a
🔌 fix: Isolate Code-Server HTTP Agents to Prevent Socket Pool Contamination (#12311)
* 🔧 fix: Isolate HTTP agents for code-server axios requests

Prevents socket hang up after 5s on Node 19+ when code executor has
file attachments. follow-redirects (axios dep) leaks `socket.destroy`
as a timeout listener on TCP sockets; with Node 19+ defaulting to
keepAlive: true, tainted sockets re-enter the global pool and destroy
active node-fetch requests in CodeExecutor after the idle timeout.

Uses dedicated http/https agents with keepAlive: false for all axios
calls targeting CODE_BASEURL in crud.js and process.js.

Closes #12298

* ♻️ refactor: Extract code-server HTTP agents to shared module

- Move duplicated agent construction from crud.js and process.js into
  a shared agents.js module to eliminate DRY violation
- Switch process.js from raw `require('axios')` to `createAxiosInstance()`
  for proxy configuration parity with crud.js
- Fix import ordering in process.js (agent constants no longer split imports)
- Add 120s timeout to uploadCodeEnvFile (was the only code-server call
  without a timeout)

*  test: Add regression tests for code-server socket isolation

- Add crud.spec.js covering getCodeOutputDownloadStream and
  uploadCodeEnvFile (agent options, timeout, URL, error handling)
- Add socket pool isolation tests to process.spec.js asserting
  keepAlive:false agents are forwarded to axios
- Update process.spec.js mocks for createAxiosInstance() migration

* ♻️ refactor: Move code-server agents to packages/api

Relocate agents.js from api/server/services/Files/Code/ to
packages/api/src/utils/code.ts per workspace conventions. Consumers
now import codeServerHttpAgent/codeServerHttpsAgent from @librechat/api.
2026-03-19 16:16:57 -04:00
Danny Avila
8e8fb01d18
🧱 fix: Enforce Agent Access Control on Context and OCR File Loading (#12253)
* 🔏 fix: Apply agent access control filtering to context/OCR resource loading

The context/OCR file path in primeResources fetched files by file_id
without applying filterFilesByAgentAccess, unlike the file_search and
execute_code paths. Add filterFiles dependency injection to primeResources
and invoke it after getFiles to enforce consistent access control.

* fix: Wire filterFilesByAgentAccess into all agent initialization callers

Pass the filterFilesByAgentAccess function from the JS layer into the TS
initializeAgent → primeResources chain via dependency injection, covering
primary, handoff, added-convo, and memory agent init paths.

* test: Add access control filtering tests for primeResources

Cover filterFiles invocation with context/OCR files, verify filtering
rejects inaccessible files, and confirm graceful fallback when filterFiles,
userId, or agentId are absent.

* fix: Guard filterFilesByAgentAccess against ephemeral agent IDs

Ephemeral agents have no DB document, so getAgent returns null and the
access map defaults to all-false, silently blocking all non-owned files.
Short-circuit with isEphemeralAgentId to preserve the pass-through
behavior for inline-built agents (memory, tool agents).

* fix: Clean up resources.ts and JS caller import order

Remove redundant optional chain on req.user.role inside user-guarded
block, update primeResources JSDoc with filterFiles and agentId params,
and reorder JS imports to longest-to-shortest per project conventions.

* test: Strengthen OCR assertion and add filterFiles error-path test

Use toHaveBeenCalledWith for the OCR filtering test to verify exact
arguments after the OCR→context merge step. Add test for filterFiles
rejection to verify graceful degradation (logs error, returns original
tool_resources).

* fix: Correct import order in addedConvo.js and initialize.js

Sort by total line length descending: loadAddedAgent (91) before
filterFilesByAgentAccess (84), loadAgentTools (91) before
filterFilesByAgentAccess (84).

* test: Add unit tests for filterFilesByAgentAccess and hasAccessToFilesViaAgent

Cover every branch in permissions.js: ephemeral agent guard, missing
userId/agentId/files early returns, all-owned short-circuit, mixed
owned + non-owned with VIEW/no-VIEW, agent-not-found fail-closed,
author path scoped to attached files, EDIT gate on delete, DB error
fail-closed, and agent with no tool_resources.

* test: Cover file.user undefined/null in permissions spec

Files with no user field fall into the non-owned path and get run
through hasAccessToFilesViaAgent. Add two cases: attached file with
no user field is returned, unattached file with no user field is
excluded.
2026-03-15 23:02:36 -04:00
Danny Avila
ad08df4db6
🔏 fix: Scope Agent-Author File Access to Attached Files Only (#12251)
* 🛡️ fix: Scope agent-author file access to attached files only

The hasAccessToFilesViaAgent helper short-circuited for agent authors,
granting access to all requested file IDs without verifying they were
attached to the agent's tool_resources. This enabled an IDOR where any
agent author could delete arbitrary files by supplying their agent_id
alongside unrelated file IDs.

Now both the author and non-author paths check file IDs against the
agent's tool_resources before granting access.

* chore: Use Object.values/for...of and add JSDoc in getAttachedFileIds

* test: Add boundary cases for agent file access authorization

- Agent with no tool_resources denies all access (fail-closed)
- Files across multiple resource types are all reachable
- Author + isDelete: true still scopes to attached files only
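
The attached-file check both paths now share can be sketched as collecting every `file_ids` entry across the agent's `tool_resources` into a set and testing requested IDs against it. The `tool_resources` shape here is inferred from the commit message, not copied from the codebase.

```javascript
// Minimal sketch: gather file IDs attached to an agent across all
// resource types (execute_code, file_search, etc.). An agent with no
// tool_resources yields an empty set — i.e. fail-closed.
function getAttachedFileIds(toolResources = {}) {
  const ids = new Set();
  for (const resource of Object.values(toolResources)) {
    for (const fileId of resource?.file_ids ?? []) {
      ids.add(fileId);
    }
  }
  return ids;
}
```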
2026-03-15 18:54:34 -04:00
Danny Avila
f67bbb2bc5
🧹 fix: Sanitize Artifact Filenames in Code Execution Output (#12222)
* fix: sanitize artifact filenames to prevent path traversal in code output

* test: Mock sanitizeFilename function in process.spec.js to return the original filename

- Added a mock implementation for the `sanitizeFilename` function in the `process.spec.js` test file to return the original filename, ensuring that tests can run without altering the filename during the testing process.

* fix: use path.relative for traversal check, sanitize all filenames, add security logging

- Replace startsWith with path.relative pattern in saveLocalBuffer, consistent
  with deleteLocalFile and getLocalFileStream in the same file
- Hoist sanitizeFilename call before the image/non-image branch so both code
  paths store the sanitized name in MongoDB
- Log a warning when sanitizeFilename mutates a filename (potential traversal)
- Log a specific warning when saveLocalBuffer throws a traversal error, so
  security events are distinguishable from generic network errors in the catch

* test: improve traversal test coverage and remove mock reimplementation

- Remove partial sanitizeFilename reimplementation from process-traversal tests;
  use controlled mock returns to verify processCodeOutput wiring instead
- Add test for image branch sanitization
- Use mkdtempSync for test isolation in crud-traversal to avoid parallel worker
  collisions
- Add prefix-collision bypass test case (../user10/evil vs user1 directory)

* fix: use path.relative in isValidPath to prevent prefix-collision bypass

Pre-existing startsWith check without path separator had the same class
of prefix-collision vulnerability fixed in saveLocalBuffer.
2026-03-14 03:09:26 -04:00
Danny Avila
046e92217f
🧩 feat: OpenDocument Format File Upload and Native ODS Parsing (#11959)
*  feat: Add support for OpenDocument MIME types in file configuration

Updated the applicationMimeTypes regex to include support for OASIS OpenDocument formats, enhancing the file type recognition capabilities of the data provider.

* feat: document processing with OpenDocument support

Added support for OpenDocument Spreadsheet (ODS) MIME type in the file processing service and updated the document parser to handle ODS files. Included tests to verify correct parsing of ODS documents and updated file configuration to recognize OpenDocument formats.

* refactor: Enhance document processing to support additional Excel MIME types

Updated the document processing logic to utilize a regex for matching Excel MIME types, improving flexibility in handling various Excel file formats. Added tests to ensure correct parsing of new MIME types, including multiple Excel variants and OpenDocument formats. Adjusted file configuration to include these MIME types for better recognition in the file processing service.

* feat: Add support for additional OpenDocument MIME types in file processing

Enhanced the document processing service to support ODT, ODP, and ODG MIME types. Updated tests to verify correct routing through the OCR strategy for these new formats. Adjusted documentation to reflect changes in handled MIME types for improved clarity.
2026-02-26 14:39:49 -05:00
Danny Avila
7ce898d6a0
📄 feat: Local Text Extraction for PDF, DOCX, and XLS/XLSX (#11900)
* feat: Added "document parser" OCR strategy

The document parser uses libraries to parse the text out of known document types.
This lets LibreChat handle some complex document types without having to use a
secondary service (like Mistral or standing up a RAG API server).

To enable the document parser, set the ocr strategy to "document_parser" in
librechat.yaml.

We now support:

- PDFs using pdfjs
- DOCX using mammoth
- XLS/XLSX using SheetJS

(The associated packages were also added to the project.)

* fix: applied Copilot code review suggestions

- Properly calculate the length of text based on UTF-8.

- Avoid issues with loading / blocking PDF parsing.

* fix: improved docs on parseDocument()

* chore: move to packages/api for TS support

* refactor: make document processing the default ocr strategy

- Introduced support for additional document types in the OCR strategy, including PDF, DOCX, and XLS/XLSX.
- Updated the file upload handling to dynamically select the appropriate parsing strategy based on the file type.
- Refactored the document parsing functions to use asynchronous imports for improved performance and maintainability.
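
The dynamic strategy selection can be sketched as routing on the MIME type: known document types go to a local parser library, and everything else falls back to the configured OCR strategy. The regexes and parser names below are illustrative; the real routing lives in packages/api.

```javascript
// Hedged sketch of MIME-based parser selection (names are assumptions):
// PDFs → pdfjs, DOCX → mammoth, Excel/ODS variants → SheetJS.
const PDF_RE = /application\/pdf/;
const DOCX_RE = /officedocument\.wordprocessingml\.document/;
const EXCEL_RE = /(ms-excel|spreadsheetml|opendocument\.spreadsheet)/;

function selectParser(mimetype) {
  if (PDF_RE.test(mimetype)) return 'pdfjs';
  if (DOCX_RE.test(mimetype)) return 'mammoth';
  if (EXCEL_RE.test(mimetype)) return 'sheetjs';
  return null; // fall back to the configured OCR strategy
}
```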

* test: add unit tests for processAgentFileUpload functionality

- Introduced a new test suite for the processAgentFileUpload function in process.spec.js.
- Implemented various test cases to validate OCR strategy selection based on file types, including PDF, DOCX, XLSX, and XLS.
- Mocked dependencies to ensure isolated testing of file upload handling and strategy selection logic.
- Enhanced coverage for scenarios involving OCR capability checks and default strategy fallbacks.

* chore: update pdfjs-dist version and enhance document parsing tests

- Bumped pdfjs-dist dependency to version 5.4.624 in both api and packages/api.
- Refactored document parsing tests to use 'originalname' instead of 'filename' for file objects.
- Added a new test case for parsing XLS files to improve coverage of document types supported by the parser.
- Introduced a sample XLS file for testing purposes.

* feat: enforce text size limit and improve OCR fallback handling in processAgentFileUpload

- Added a check to ensure extracted text does not exceed the 15MB storage limit, throwing an error if it does.
- Refactored the OCR handling logic to improve fallback behavior when the configured OCR fails, ensuring a more robust document processing flow.
- Enhanced unit tests to cover scenarios for oversized text and fallback mechanisms, ensuring proper error handling and functionality.

* fix: correct OCR URL construction in performOCR function

- Updated the OCR URL construction to ensure it correctly appends '/ocr' to the base URL if not already present, improving the reliability of the OCR request.

---------

Co-authored-by: Dan Lew <daniel@mightyacorn.com>
2026-02-22 14:22:45 -05:00
Danny Avila
7692fa837e
🪣 fix: S3 path-style URL support for MinIO, R2, and custom endpoints (#11894)
* 🪣 fix: S3 path-style URL support for MinIO, R2, and custom endpoints

`extractKeyFromS3Url` now uses `AWS_BUCKET_NAME` to automatically detect and
strip the bucket prefix from path-style URLs, fixing `NoSuchKey` errors on URL
refresh for any S3-compatible provider using a custom endpoint (MinIO, Cloudflare
R2, Hetzner, Backblaze B2, etc.). No additional configuration required — the
bucket name is already a required env var for S3 to function.
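The bucket-prefix stripping can be sketched as below. This is an illustrative stand-in, not the exact `extractKeyFromS3Url` implementation: for a path-style URL like `https://minio.example.com/my-bucket/images/a.png` the object key is everything after the bucket segment, while for virtual-hosted-style URLs the whole pathname is the key.

```typescript
// Illustrative sketch: strip the bucket prefix from path-style S3 URLs.
function extractObjectKey(url: string, bucketName: string): string {
  const { pathname } = new URL(url);
  const parts = pathname.split('/').filter(Boolean);
  // Path-style: the first path segment is the bucket name -> strip it.
  if (parts[0] === bucketName) {
    return parts.slice(1).join('/');
  }
  // Virtual-hosted-style: the bucket lives in the hostname, so the
  // full pathname is already the key.
  return parts.join('/');
}
```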

`initializeS3` now passes `forcePathStyle: true` to the S3Client constructor
when `AWS_FORCE_PATH_STYLE=true` is set. Required for providers whose SSL
certificates do not support virtual-hosted-style bucket subdomains (e.g. Hetzner
Object Storage), which previously caused 401 / SignatureDoesNotMatch on upload.
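The config wiring this describes looks roughly like the following. The shape follows the AWS SDK v3 `S3ClientConfig` option names (`region`, `endpoint`, `forcePathStyle`), but the helper itself is a sketch, not the actual `initializeS3` code.

```typescript
// Sketch of the client options the fix adds (helper name is hypothetical).
interface S3ClientOptions {
  region?: string;
  endpoint?: string;
  forcePathStyle?: boolean;
}

function buildS3ClientConfig(env: Record<string, string | undefined>): S3ClientOptions {
  const config: S3ClientOptions = { region: env.AWS_REGION };
  if (env.AWS_ENDPOINT_URL) {
    config.endpoint = env.AWS_ENDPOINT_URL;
  }
  // Path-style addressing (https://endpoint/bucket/key) instead of
  // virtual-hosted-style (https://bucket.endpoint/key): needed when the
  // provider's TLS cert does not cover bucket subdomains.
  if (env.AWS_FORCE_PATH_STYLE === 'true') {
    config.forcePathStyle = true;
  }
  return config;
}
```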

Additional fixes:
- Suppress error log noise in `extractKeyFromS3Url` catch path: plain S3 keys
  no longer log as errors, only inputs that start with http(s):// do
- Fix test env var ordering so module-level constants pick up `AWS_BUCKET_NAME`
  and `S3_URL_EXPIRY_SECONDS` correctly before the module is required
- Add missing `deleteRagFile` mock and assertion in `deleteFileFromS3` tests
- Add `AWS_BUCKET_NAME` cleanup to `afterEach` to prevent cross-test pollution
- Add `initializeS3` unit tests covering endpoint, forcePathStyle, credentials,
  singleton, and IRSA code paths
- Document `AWS_FORCE_PATH_STYLE` in `.env.example`, `dotenv.mdx`, and `s3.mdx`

* 🪣 fix: Enhance S3 URL key extraction for custom endpoints

Updated `extractKeyFromS3Url` to support precise key extraction when using custom endpoints with path-style URLs. The logic now accounts for the `AWS_ENDPOINT_URL` and `AWS_FORCE_PATH_STYLE` environment variables, ensuring correct key handling for various S3-compatible providers.

Added unit tests to verify the new functionality, including scenarios for endpoints with base paths. This improves compatibility and reduces potential errors when interacting with S3-like services.
2026-02-21 18:36:48 -05:00
Rene Heijdens
5d2b7fa4d5
🪣 fix: Proper Key Extraction from S3 URL (#11241)
* feat: Enhance S3 URL handling and add comprehensive tests for CRUD operations


* 🔒 fix: Improve S3 URL key extraction with enhanced logging and additional test cases

* chore: removed some duplicate test cases and fixed incorrect apostrophes

* fix: Log error for malformed URLs

* test: Add additional test case for extracting keys from S3 URLs

* fix: Enhance S3 URL key extraction logic and improve error handling with additional test cases

* test: Add test case for stripping bucket from custom endpoint URLs with forcePathStyle enabled

* refactor: Update S3 path style handling and enhance environment configuration for S3-compatible services

* refactor: Remove S3_FORCE_PATH_STYLE dependency and streamline S3 URL key extraction logic

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2026-02-21 15:07:16 -05:00
ethanlaj
2513e0a423
🔧 feat: deleteRagFile utility for Consistent RAG API document deletion (#11493)
* 🔧 feat: Implement deleteRagFile utility for RAG API document deletion across storage strategies

* chore: import order

* chore: import order & remove unnecessary comments

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2026-02-14 13:57:01 -05:00
Danny Avila
9054ca9c15
🆔 fix: Atomic File Dedupe, Bedrock Tokens Fix, and Allowed MIME Types (#11675)
* feat: Add support for Apache Parquet MIME types

- Introduced 'application/x-parquet' to the full MIME types list and code interpreter MIME types list.
- Updated application MIME types regex to include 'x-parquet' and 'vnd.apache.parquet'.
- Added mapping for '.parquet' files to 'application/x-parquet' in code type mapping, enhancing file format support.

* feat: Implement atomic file claiming for code execution outputs

- Added a new `claimCodeFile` function to atomically claim a file_id for code execution outputs, preventing duplicates by using a compound key of filename and conversationId.
- Updated `processCodeOutput` to utilize the new claiming mechanism, ensuring that concurrent calls for the same filename converge on a single record.
- Refactored related tests to validate the new atomic claiming behavior and its impact on file usage tracking and versioning.
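The claiming idea above can be shown with a minimal in-memory stand-in. The real `claimCodeFile` uses an atomic database upsert rather than a `Map`, and the names here are illustrative; the point is that concurrent claims for the same `(conversationId, filename)` compound key converge on a single `file_id`.

```typescript
// Minimal in-memory sketch of atomic file claiming (illustrative only;
// the actual implementation performs an atomic upsert in the database).
const claims = new Map<string, string>();

function claimCodeFile(
  conversationId: string,
  filename: string,
  candidateFileId: string,
): string {
  const key = `${conversationId}:${filename}`;
  const existing = claims.get(key);
  if (existing) {
    return existing; // lost the race: reuse the winning record's file_id
  }
  claims.set(key, candidateFileId);
  return candidateFileId; // won the race: this id is now the canonical one
}
```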

* fix: Update image file handling to use cache-busting filepath

- Modified the `processCodeOutput` function to generate a cache-busting filepath for updated image files, improving browser caching behavior.
- Adjusted related tests to reflect the change from versioned filenames to cache-busted filepaths, ensuring accurate validation of image updates.
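A cache-busting filepath of the kind described above can be sketched as appending a version query parameter, so browsers refetch an updated image at a stable path. The helper name and parameter choice are assumptions for illustration.

```typescript
// Hypothetical sketch: append a cache-busting version param to a filepath,
// respecting any query string already present.
function cacheBustedFilepath(filepath: string, version: number = Date.now()): string {
  const sep = filepath.includes('?') ? '&' : '?';
  return `${filepath}${sep}v=${version}`;
}
```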

* fix: Update step handler to prevent undefined content for non-tool call types

- Modified the condition in useStepHandler to ensure that undefined content is only assigned for specific content types, enhancing the robustness of content handling.

* fix: Update bedrockOutputParser to handle maxTokens for adaptive models

- Modified the bedrockOutputParser logic to ensure that maxTokens is not set for adaptive models when neither maxTokens nor maxOutputTokens are provided, improving the handling of adaptive thinking configurations.
- Updated related tests to reflect these changes, ensuring accurate validation of the output for adaptive models.

* chore: Update @librechat/agents to version 3.1.38 in package.json and package-lock.json

* fix: Enhance file claiming and error handling in code processing

- Updated the `processCodeOutput` function to use a consistent file ID for claiming files, preventing duplicates and improving concurrency handling.
- Refactored the `createFileMethods` to include error handling for failed file claims, ensuring robust behavior when claiming files for conversations.
- These changes enhance the reliability of file management in the application.

* fix: Update adaptive thinking test for Opus 4.6 model

- Modified the test for configuring adaptive thinking to reflect that no default maxTokens should be set for the Opus 4.6 model.
- Updated assertions to ensure that maxTokens is undefined, aligning with the expected behavior for adaptive models.
2026-02-07 13:26:18 -05:00
Danny Avila
75c02a1a18
🗂️ feat: Better Persistence for Code Execution Files Between Sessions (#11362)
* refactor: process code output files for re-use (WIP)

* feat: file attachment handling with additional metadata for downloads

* refactor: Update directory path logic for local file saving based on basePath

* refactor: file attachment handling to support TFile type and improve data merging logic

* feat: thread filtering of code-generated files

- Introduced parentMessageId parameter in addedConvo and initialize functions to enhance thread management.
- Updated related methods to utilize parentMessageId for retrieving messages and filtering code-generated files by conversation threads.
- Enhanced type definitions to include parentMessageId in relevant interfaces for better clarity and usage.

* chore: imports/params ordering

* feat: update file model to use messageId for filtering and processing

- Changed references from 'message' to 'messageId' in file-related methods for consistency.
- Added messageId field to the file schema and updated related types.
- Enhanced file processing logic to accommodate the new messageId structure.

* feat: enhance file retrieval methods to support user-uploaded execute_code files

- Added a new method `getUserCodeFiles` to retrieve user-uploaded execute_code files, excluding code-generated files.
- Updated existing file retrieval methods to improve filtering logic and handle edge cases.
- Enhanced thread data extraction to collect both message IDs and file IDs efficiently.
- Integrated `getUserCodeFiles` into relevant endpoints for better file management in conversations.

* chore: update @librechat/agents package version to 3.0.78 in package-lock.json and related package.json files

* refactor: file processing and retrieval logic

- Added a fallback mechanism for download URLs when files exceed size limits or cannot be processed locally.
- Implemented a deduplication strategy for code-generated files based on conversationId and filename to optimize storage.
- Updated file retrieval methods to ensure proper filtering by messageIds, preventing orphaned files from being included.
- Introduced comprehensive tests for new thread data extraction functionality, covering edge cases and performance considerations.

* fix: improve file retrieval tests and handling of optional properties

- Updated tests to safely access optional properties using non-null assertions.
- Modified test descriptions for clarity regarding the exclusion of execute_code files.
- Ensured that the retrieval logic correctly reflects the expected outcomes for file queries.

* test: add comprehensive unit tests for processCodeOutput functionality

- Introduced a new test suite for the processCodeOutput function, covering various scenarios including file retrieval, creation, and processing for both image and non-image files.
- Implemented mocks for dependencies such as axios, logger, and file models to isolate tests and ensure reliable outcomes.
- Validated behavior for existing files, new file creation, and error handling, including size limits and fallback mechanisms.
- Enhanced test coverage for metadata handling and usage increment logic, ensuring robust verification of file processing outcomes.

* test: enhance file size limit enforcement in processCodeOutput tests

- Introduced a configurable file size limit for tests to improve flexibility and coverage.
- Mocked the `librechat-data-provider` to allow dynamic adjustment of file size limits during tests.
- Updated the file size limit enforcement test to validate behavior when files exceed specified limits, ensuring proper fallback to download URLs.
- Reset file size limit after tests to maintain isolation for subsequent test cases.
2026-01-28 17:44:32 -05:00
Cha
35137c21e6
🔥 fix: Firebase Support for Nano Banana Tool (#11228) 2026-01-06 11:19:38 -05:00
Danny Avila
04a4a2aa44
🧵 refactor: Migrate Endpoint Initialization to TypeScript (#10794)
* refactor: move endpoint initialization methods to typescript

* refactor: move agent init to packages/api

- Introduced `initialize.ts` for agent initialization, including file processing and tool loading.
- Updated `resources.ts` to allow optional appConfig parameter.
- Enhanced endpoint configuration handling in various initialization files to support model parameters.
- Added new artifacts and prompts for React component generation.
- Refactored existing code to improve type safety and maintainability.

* refactor: streamline endpoint initialization and enhance type safety

- Updated initialization functions across various endpoints to use a consistent request structure, replacing `unknown` types with `ServerResponse`.
- Simplified request handling by directly extracting keys from the request body.
- Improved type safety by ensuring user IDs are safely accessed with optional chaining.
- Removed unnecessary parameters and streamlined model options handling for better clarity and maintainability.

* refactor: moved ModelService and extractBaseURL to packages/api

- Added comprehensive tests for the models fetching functionality, covering scenarios for OpenAI, Anthropic, Google, and Ollama models.
- Updated existing endpoint index to include the new models module.
- Enhanced utility functions for URL extraction and model data processing.
- Improved type safety and error handling across the models fetching logic.

* refactor: consolidate utility functions and remove unused files

- Merged `deriveBaseURL` and `extractBaseURL` into the `@librechat/api` module for better organization.
- Removed redundant utility files and their associated tests to streamline the codebase.
- Updated imports across various client files to utilize the new consolidated functions.
- Enhanced overall maintainability by reducing the number of utility modules.

* refactor: replace ModelService references with direct imports from @librechat/api and remove ModelService file

* refactor: move encrypt/decrypt methods and key db methods to data-schemas, use `getProviderConfig` from `@librechat/api`

* chore: remove unused 'res' from options in AgentClient

* refactor: file model imports and methods

- Updated imports in various controllers and services to use the unified file model from '~/models' instead of '~/models/File'.
- Consolidated file-related methods into a new file methods module in the data-schemas package.
- Added comprehensive tests for file methods including creation, retrieval, updating, and deletion.
- Enhanced the initializeAgent function to accept dependency injection for file-related methods.
- Improved error handling and logging in file methods.

* refactor: streamline database method references in agent initialization

* refactor: enhance file method tests and update type references to IMongoFile

* refactor: consolidate database method imports in agent client and initialization

* chore: remove redundant import of initializeAgent from @librechat/api

* refactor: move checkUserKeyExpiry utility to @librechat/api and update references across endpoints

* refactor: move updateUserPlugins logic to user.ts and simplify UserController

* refactor: update imports for user key management and remove UserService

* refactor: remove unused Anthropics and Bedrock endpoint files and clean up imports

* refactor: consolidate and update encryption imports across various files to use @librechat/data-schemas

* chore: update file model mock to use unified import from '~/models'

* chore: import order

* refactor: remove migrated to TS agent.js file and its associated logic from the endpoints

* chore: add reusable function to extract imports from source code in unused-packages workflow

* chore: enhance unused-packages workflow to include @librechat/api dependencies and improve dependency extraction

* chore: improve dependency extraction in unused-packages workflow with enhanced error handling and debugging output

* chore: add detailed debugging output to unused-packages workflow for better visibility into unused dependencies and exclusion lists

* chore: refine subpath handling in unused-packages workflow to correctly process scoped and non-scoped package imports

* chore: clean up unused debug output in unused-packages workflow and reorganize type imports in initialize.ts
2025-12-11 16:37:16 -05:00
Danny Avila
4a2de417b6
🔧 fix: Error handling in Firebase and Local file deletion (#10894)
- Added try-catch blocks to handle errors during document deletion from the RAG API.
- Implemented logging for 404 errors indicating that the document may have already been deleted.
- Improved error logging for other deletion errors in both Firebase and Local file services.
2025-12-10 15:06:48 -05:00
Danny Avila
5879b3f518
🔊 fix: Validate language format for OpenAI STT model (#10875)
* 🔊 fix: Validate language format for OpenAI STT model

* fix: Normalize input language model assignment in STTService

* refactor: Enhance error logging and language validation in STT and TTS services

* fix: Improve language validation in getValidatedLanguageCode function
2025-12-09 22:25:45 -05:00
alfo-dev
b4892d81d3
🔊 fix: Missing Proxy config in TTS and STT Services (#10852)
* Fix TTS STT proxy

* Add STT proxy env var

* Add TTS proxy env var

* chore: import order

* chore: import order

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-12-09 20:23:03 -05:00
Danny Avila
3950b9ee53
📦 chore: Update Packages for Security & Remove Unnecessary (#10620)
* 🗑️ chore: Remove @microsoft/eslint-formatter-sarif from dependencies and update ESLint CI workflow

- Removed @microsoft/eslint-formatter-sarif from package.json and package-lock.json.
- Updated ESLint CI workflow to eliminate SARIF upload logic and related environment variables.

* chore: Remove ts-jest from dependencies in jest.config and package files

* chore: Update package dependencies to latest versions

- Upgraded @rollup/plugin-commonjs from 25.0.2 to 29.0.0 across multiple packages.
- Updated rimraf from 5.0.1 to 6.1.2 in packages/api, client, data-provider, and data-schemas.
- Added new dependencies: @isaacs/balanced-match and @isaacs/brace-expansion in package-lock.json.
- Updated glob from 8.1.0 to 13.0.0 and adjusted related dependencies accordingly.

* chore: remove prettier-eslint dependency from package.json

* chore: npm audit fix

* fix: correct `getBasePath` import
2025-11-21 14:53:58 -05:00
catmeme
7aa8d49f3a
🧭 fix: Add Base Path Support for Login/Register and Image Paths (#10116)
* fix: add basePath pattern to support login/register and image paths

* Fix linter errors

* refactor: Update import statements for getBasePath and isEnabled, and add path utility functions with tests

- Refactored imports in addImages.js and StableDiffusion.js to use getBasePath from '@librechat/api'.
- Consolidated isEnabled and getBasePath imports in validateImageRequest.js.
- Introduced new path utility functions in path.ts and corresponding unit tests in path.spec.ts to validate base path extraction logic.
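Base path extraction of the kind these utilities provide can be sketched as pulling the pathname out of a configured client URL. This is a hedged illustration of the idea, not the actual `getBasePath` from `@librechat/api`; the env-var source and edge-case handling are assumptions.

```typescript
// Illustrative sketch: derive the app's base path (e.g. '/librechat' when
// hosted at https://example.com/librechat) from a configured URL.
function getBasePath(domainClient?: string): string {
  if (!domainClient) {
    return ''; // no configured domain -> app is served from the root
  }
  try {
    const { pathname } = new URL(domainClient);
    // Root path means no base path prefix; otherwise drop a trailing slash.
    return pathname === '/' ? '' : pathname.replace(/\/$/, '');
  } catch {
    return ''; // malformed URL -> fall back to root
  }
}
```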

* fix: Update domain server base URL in MarkdownComponents and refactor authentication redirection logic

- Changed the domain server base URL in MarkdownComponents.tsx to use the API base URL.
- Refactored the useAuthRedirect hook to utilize React Router's navigate for redirection instead of window.location, ensuring a smoother SPA experience.
- Added unit tests for the useAuthRedirect hook to verify authentication redirection behavior.

* test: Mock isEnabled in validateImages.spec.js for improved test isolation

- Updated validateImages.spec.js to mock the isEnabled function from @librechat/api, ensuring that tests can run independently of the actual implementation.
- Cleared the DOMAIN_CLIENT environment variable before tests to avoid interference with basePath resolution.

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-11-21 11:25:14 -05:00
Danny Avila
937563f645
🖼️ feat: File Size and MIME Type Filtering at Agent level (#10446)
* refactor: add image file size validation as part of payload build

* feat: implement file size and MIME type filtering in endpoint configuration

* chore: import order
2025-11-10 21:36:48 -05:00
Danny Avila
2524d33362
📂 refactor: Cleanup File Filtering Logic, Improve Validation (#10414)
* feat: add filterFilesByEndpointConfig to filter disabled file processing by provider

* chore: explicit define of endpointFileConfig for better debugging

* refactor: move `normalizeEndpointName` to data-provider as used app-wide

* chore: remove overrideEndpoint from useFileHandling

* refactor: improve endpoint file config selection

* refactor: update filterFilesByEndpointConfig to accept structured parameters and improve endpoint file config handling

* refactor: replace defaultFileConfig with getEndpointFileConfig for improved file configuration handling across components

* test: add comprehensive unit tests for getEndpointFileConfig to validate endpoint configuration handling

* refactor: streamline agent endpoint assignment and improve file filtering logic

* feat: add error handling for disabled file uploads in endpoint configuration

* refactor: update encodeAndFormat functions to accept structured parameters for provider and endpoint

* refactor: streamline requestFiles handling in initializeAgent function

* fix: getEndpointFileConfig partial config merging scenarios

* refactor: enhance mergeWithDefault function to support document-supported providers with comprehensive MIME types

* refactor: user-configured default file config in getEndpointFileConfig

* fix: prevent file handling when endpoint is disabled and file is dragged to chat

* refactor: move `getEndpointField` to `data-provider` and update usage across components and hooks

* fix: prioritize endpointType based on agent.endpoint in file filtering logic

* fix: prioritize agent.endpoint in file filtering logic and remove unnecessary endpointType defaulting
2025-11-10 19:05:30 -05:00
Rakshit
772b706e20
🎙️ fix: Azure OpenAI Speech-to-Text 400 Bad Request Error (#10355) 2025-11-05 10:27:34 -05:00