Mirror of https://github.com/danny-avila/LibreChat.git (synced 2026-05-13 16:07:30 +00:00)
* 🪟 feat: Add allowedAddresses Exemption List For SSRF-Guarded Targets

  LibreChat already blocks SSRF-prone targets (private IPs, loopback, link-local, `.internal`/`.local` TLDs) at every server-side fetch site that consumes user-controllable URLs — custom-endpoint baseURLs, MCP servers, OpenAPI Actions, and OAuth endpoints. The only existing escape hatch is `allowedDomains`, but that flips the field into a strict whitelist: adding `127.0.0.1` to permit a self-hosted Ollama also blocks every public destination that isn't in the list.

  Introduce `allowedAddresses` as the orthogonal primitive: a private-IP-space exemption list. When a hostname or its resolved IP appears in the list, the SSRF block is bypassed for that target. Public destinations remain reachable. Operators can now run self-hosted LLMs / MCP servers / Action endpoints on private addresses without weakening the default-deny posture for everything else.

  Schema additions in `packages/data-provider/src/config.ts`:
  - `endpoints.allowedAddresses` (new — gates `validateEndpointURL`)
  - `mcpSettings.allowedAddresses` (parallel to `allowedDomains`)
  - `actions.allowedAddresses` (parallel to `allowedDomains`)

  Core changes in `packages/api/src/auth/`:
  - New `isAddressAllowed(hostnameOrIP, allowedAddresses)` — pure, case-insensitive, bracket-stripped literal match.
  - Threaded the list through `isSSRFTarget`, `resolveHostnameSSRF`, `isDomainAllowedCore`, `isActionDomainAllowed`, `isMCPDomainAllowed`, `isOAuthUrlAllowed`, and `validateEndpointURL`.
  - Extended `createSSRFSafeAgents` and `createSSRFSafeUndiciConnect` to accept the list, building an SSRF-safe DNS lookup that exempts matching hostnames/IPs at TCP connect time (TOCTOU-safe).

  Wiring:
  - Custom and OpenAI endpoint initialize sites pass `endpoints.allowedAddresses` to `validateEndpointURL`.
  - `MCPServersRegistry` stores `allowedAddresses` and exposes it via `getAllowedAddresses()`.
  - The factory, connection class, manager, `UserConnectionManager`, and `ConnectionsRepository` all thread it through to the SSRF utilities.
  - `MCPOAuthHandler.initiateOAuthFlow`, `refreshOAuthTokens`, and `validateOAuthUrl` accept the list and consult it on every URL validation along the OAuth chain.
  - `ToolService`, `ActionService`, and the assistants/agents action routes pass `actions.allowedAddresses` to `isActionDomainAllowed` and to `createSSRFSafeAgents` for runtime action calls.
  - `initializeMCPs.js` reads `mcpSettings.allowedAddresses` from the app config and forwards it to the registry constructor.

  Documentation:
  - `librechat.example.yaml` shows the new field next to each existing `allowedDomains` block, with a note clarifying that `allowedAddresses` is an exemption list (not a whitelist).

  Tests:
  - Unit tests for `isAddressAllowed` covering literal IPs, hostnames, IPv6 brackets, case insensitivity, and partial-match rejection.
  - Exemption tests for every entry point: `isSSRFTarget`, `resolveHostnameSSRF`, `validateEndpointURL`, `isActionDomainAllowed`, `isMCPDomainAllowed`, `isOAuthUrlAllowed`.
  - Existing tests updated to reflect the new optional parameter.

  Default behavior is unchanged: omitted = empty list = no exemptions.

* 🩹 fix: Plumb allowedAddresses Through AppConfig endpoints Type

  The initial PR added `endpoints.allowedAddresses` to the data-provider config schema and consumed it in the endpoint initialize sites, but the runtime `AppConfig.endpoints` shape in `@librechat/data-schemas` was a hand-maintained subset that didn't include the new field — so `tsc` rejected `appConfig.endpoints.allowedAddresses`.

  Add the field to `AppConfig['endpoints']` in `packages/data-schemas/src/types/app.ts` and forward it from the loaded config in `packages/data-schemas/src/app/endpoints.ts` so the runtime config carries the value.

  Update `initializeMCPs.spec.js` to expect the third positional argument (`allowedAddresses`) on the `createMCPServersRegistry` call.
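The exemption check at the heart of the feature can be sketched as follows. This is a minimal illustration, not the real implementation: the actual `isAddressAllowed` in `packages/api/src/auth/` additionally enforces the private-IP scoping added in a later commit.

```javascript
/**
 * Sketch of a case-insensitive, bracket-stripped literal match.
 * Hypothetical simplification of the behavior described above.
 */
function isAddressAllowed(hostnameOrIP, allowedAddresses = []) {
  // Normalize: trim, lowercase, and strip IPv6 brackets, e.g. "[::1]" -> "::1"
  const normalize = (value) => String(value).trim().toLowerCase().replace(/^\[|\]$/g, '');
  const candidate = normalize(hostnameOrIP);
  // Literal match only -- no partial or substring matching
  return allowedAddresses.some((entry) => normalize(entry) === candidate);
}

console.log(isAddressAllowed('127.0.0.1', ['127.0.0.1'])); // true
console.log(isAddressAllowed('[::1]', ['::1'])); // true (brackets stripped)
console.log(isAddressAllowed('LOCALHOST', ['localhost'])); // true (case-insensitive)
console.log(isAddressAllowed('127.0.0.10', ['127.0.0.1'])); // false (no partial match)
```

An omitted list defaults to empty, which matches the "no exemptions" default behavior.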
* 🩹 fix: Enforce allowedDomains Before allowedAddresses In isOAuthUrlAllowed

  The initial implementation checked the address exemption first, so a URL whose hostname appeared in `allowedAddresses` would return true even when the admin had configured `allowedDomains` as a strict bound on OAuth endpoints. A malicious MCP server could advertise OAuth metadata, token, or revocation URLs at any address the admin had permitted for an unrelated reason (a self-hosted LLM at `127.0.0.1`, for example) and pass validation, expanding SSRF reach beyond the configured domain whitelist.

  Reorder: when `allowedDomains` is set, treat it as authoritative — return true only if the URL matches a domain entry, otherwise fall through to false. The address exemption only applies when no `allowedDomains` is configured (mirroring how the downstream SSRF check in `validateOAuthUrl` consults `allowedAddresses`).

  Add a regression test asserting that an `allowedAddresses` entry does not broaden a configured `allowedDomains` list.

  Reported by chatgpt-codex-connector on PR #12933.

* 🩹 fix: Forward allowedAddresses To Remaining OAuth Callers

  Two `MCPOAuthHandler` callers still used the pre-feature signatures and were silently dropping the new `allowedAddresses` argument:
  - `api/server/routes/mcp.js` invoked `initiateOAuthFlow` with the old 5-argument shape, so OAuth flows initiated through the route handler ignored the registry's `getAllowedAddresses()` and would reject any metadata/authorization/token URL on a permitted private host.
  - `api/server/controllers/UserController.js#maybeUninstallOAuthMCP` invoked `revokeOAuthToken` without the address exemption, so uninstalling an OAuth-backed MCP server on a permitted private host would fail at the revocation step even though the rest of the MCP connection path now permits it.

  Both sites now read `allowedAddresses` from the registry alongside `allowedDomains` and forward it.

  Reported by Copilot on PR #12933.
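The reordered precedence can be condensed as follows. This is a hypothetical simplification: `matchesDomain` and the injected `helpers` parameter are stand-ins for the real internals, and the real function delegates further to `validateOAuthUrl` rather than returning a bare boolean.

```javascript
// Sketch of the precedence fix: allowedDomains, when configured, is
// authoritative; the address exemption applies only in no-whitelist mode.
function isOAuthUrlAllowed(url, allowedDomains, allowedAddresses, helpers) {
  const { matchesDomain, isAddressAllowed } = helpers; // stand-in helpers
  const { hostname } = new URL(url); // throws on invalid/schemeless input
  if (Array.isArray(allowedDomains) && allowedDomains.length > 0) {
    // Strict bound: an allowedAddresses entry must NOT widen this list.
    return allowedDomains.some((domain) => matchesDomain(hostname, domain));
  }
  // No whitelist configured: the private-address exemption may apply.
  return isAddressAllowed(hostname, allowedAddresses ?? []);
}
```

With a domain whitelist active, a URL on an exempted private address is rejected; without one, the exemption permits it.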
* 🩹 fix: Update Test Mocks And Assertions For OAuth allowedAddresses

  The previous commit started passing `allowedAddresses` to `MCPOAuthHandler.initiateOAuthFlow` from `api/server/routes/mcp.js` and to `MCPOAuthHandler.revokeOAuthToken` from `api/server/controllers/UserController.js`, but the corresponding test files mocked the registry without `getAllowedAddresses` (causing `TypeError`s) and asserted the old positional shape on `toHaveBeenCalledWith`.

  Update the mocks and assertions to match the new arity:
  - `api/server/routes/__tests__/mcp.spec.js`: add `getAllowedDomains`/`getAllowedAddresses` to the registry mock and expect the additional positional args on `initiateOAuthFlow`.
  - `api/server/controllers/__tests__/maybeUninstallOAuthMCP.spec.js`: add a `getAllowedAddresses` mock alongside the existing `getAllowedDomains` and seed it in `setupOAuthServerFound`.
  - `api/server/controllers/__tests__/UserController.mcpOAuth.spec.js`: add `getAllowedAddresses` to the registry mock and expect the trailing `null` arg on the three `revokeOAuthToken` assertions.

* 🛡️ fix: Address Comprehensive Review — Scope allowedAddresses To Private IP Space

  Major findings from the comprehensive PR review (severity → fix):

  **CRITICAL — `validateOAuthUrl` SSRF fallback bypass.** When `allowedDomains` is configured and a URL fails the whitelist, the SSRF fallback in `validateOAuthUrl` was still passing `allowedAddresses` to `isSSRFTarget` / `resolveHostnameSSRF`, letting a malicious MCP server advertise OAuth endpoints at any address the admin had permitted for an unrelated reason. Suppress `allowedAddresses` in the fallback when `allowedDomains` is active — the address exemption is opt-in for the no-whitelist mode only.
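The fallback suppression can be sketched like this. It is a condensation under stated assumptions: the real `validateOAuthUrl` performs per-field parsing and field-specific error messages, and `isSSRFTarget` is injected here purely so the sketch is self-contained.

```javascript
// Sketch: when a domain whitelist is active, the SSRF fallback must
// ignore allowedAddresses entirely.
function validateOAuthUrlSketch(url, allowedDomains, allowedAddresses, isSSRFTarget) {
  const { hostname } = new URL(url); // parse-or-throw on invalid input
  const domainsActive = Array.isArray(allowedDomains) && allowedDomains.length > 0;
  // The address exemption is opt-in for no-whitelist mode only.
  const exemptions = domainsActive ? [] : (allowedAddresses ?? []);
  if (isSSRFTarget(hostname, exemptions)) {
    throw new Error('Invalid OAuth URL: SSRF-prone target');
  }
  return true;
}
```

The same URL on an exempted private host passes without a whitelist but is rejected once `allowedDomains` is configured, which is exactly the bypass being closed.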
  **MAJOR — WebSocket transport SSRF check ignored exemptions.** The `constructTransport` WebSocket branch called `resolveHostnameSSRF(wsHostname)` without `this.allowedAddresses`, so a permitted private MCP server would pass `isMCPDomainAllowed` but be blocked at transport creation. Forward the exemption.

  **Scope `allowedAddresses` to private IP space only (operator directive).** The exemption list is for permitting private/internal targets; it must not be a back door to broaden trust to public destinations.
  - Schema (`packages/data-provider/src/config.ts`): new `allowedAddressesSchema` rejects URLs (`://`), paths/CIDR (`/`), whitespace, and public IPv4/IPv6 literals at config-load time. Wired into `endpoints`, `mcpSettings`, and `actions`.
  - Runtime (`packages/api/src/auth/domain.ts`): `isAddressAllowed` now drops public-IP candidates and public-IP entries on the match path — defense in depth, so a misconfigured runtime list never grants an exemption.
  - Hot path (`packages/api/src/auth/agent.ts`): `buildSSRFSafeLookup` pre-normalizes the list into a `Set<string>` once at construction and applies the same scoping filter, so the connect-time DNS lookup is an O(1) Set membership check instead of a full re-iterate-and-normalize on every outbound request.

  **Test coverage for the connect-time and OAuth-fallback paths.**
  - `agent.spec.ts`: new describe block exercising `buildSSRFSafeLookup` and `createSSRFSafe*` with `allowedAddresses` — hostname-literal exemption, resolved-IP exemption, public-IP scoping, URL/CIDR/whitespace rejection, and the default no-list block.
  - `handler.allowedAddresses.test.ts` (new): integration tests for `validateOAuthUrl` — covers both the no-domains-set "permit private" path and the strict-bound regression where `allowedAddresses` must NOT bypass `allowedDomains`.
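The pre-normalized hot path might look like this. This is a sketch only: the real `buildSSRFSafeLookup` wires the Set into the agent's DNS lookup, and the `isPrivateOrHostname` filter parameter is an assumption made to keep the sketch self-contained (the real functions take the list alone and use the shared private-IP helpers).

```javascript
// Build the exemption Set once at agent construction time, applying the
// same scoping filter as the schema: malformed and public-IP entries are
// dropped so a misconfigured list never grants an exemption.
function normalizeAllowedAddressesSet(list = [], isPrivateOrHostname) {
  const set = new Set();
  for (const entry of list) {
    const normalized = String(entry).trim().toLowerCase().replace(/^\[|\]$/g, '');
    if (normalized && !/\s/.test(normalized) && isPrivateOrHostname(normalized)) {
      set.add(normalized);
    }
  }
  return set;
}

// Connect-time check: O(1) Set membership per outbound request.
// (Candidate-side private-IP scoping is elided in this sketch.)
function isAddressInAllowedSet(candidate, set) {
  const normalized = String(candidate).trim().toLowerCase().replace(/^\[|\]$/g, '');
  return set.has(normalized);
}
```

Doing the normalization once up front is what turns the per-connect cost from a full list scan into a single hash lookup.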
  **Documentation & cleanup.**
  - `connection.ts` redirect SSRF check: explicit comment that `allowedAddresses` is intentionally NOT consulted for redirect targets (they are server-controlled and must not inherit the admin's exemption).
  - `MCPConnectionFactory.test.ts`: replaced an `eslint-disable` with a proper `import { getTenantId } from '@librechat/data-schemas'`. The disable had been added to quiet a pre-existing `require()` — the cleaner fix is to use the existing top-level import.
  - Updated `MCPConnectionSSRF.test.ts` WebSocket SSRF assertions to match the new two-argument call shape (`hostname, allowedAddresses`).

* 🩹 fix: Require Absolute URL Before allowedAddresses Trust Bypass In isOAuthUrlAllowed

  `parseDomainSpec` is lenient — it silently prepends `https://` to schemeless inputs so it can match patterns like bare `example.com`. That leniency leaked into `isOAuthUrlAllowed`'s new `allowedAddresses` short-circuit: a value like `10.0.0.5/oauth` (no scheme) would parse successfully via the prepended default, hit the address-exemption path, return `true`, and skip `validateOAuthUrl`'s strict `new URL(url)` parse-or-throw — only to fail later in OAuth discovery with a less clear runtime error.

  Add a strict `new URL(url)` gate at the top of `isOAuthUrlAllowed`. Schemeless inputs now fall through to `validateOAuthUrl`'s explicit "Invalid OAuth <field>" rejection.

  Tests added in both `auth/domain.spec.ts` (unit) and the OAuth handler integration spec (end-to-end).

  Reported by chatgpt-codex-connector (P2) on PR #12933.

* 🛡️ fix: Address Follow-Up Comprehensive Review — Schema Tests, Shared Normalization, host:port

  Auditing the second comprehensive review:

  **F1 MAJOR — schema validation untested.** `allowedAddressesSchema` had zero coverage, so a regression in the three refinement stages or the three wiring locations (`endpoints` / `mcpSettings` / `actions`) would silently let invalid entries reach the runtime.
  Added a dedicated `describe('allowedAddressesSchema')` block in `config.spec.ts` covering: valid private IPs (v4 + v6, including the previously-missed 192.0.0.0/24 range), accepted hostnames, all rejection categories (URLs, CIDR, paths, whitespace tabs/newlines, host:port, public IP literals), and full `configSchema.parse()` integration at each of the three nesting points.

  **F2 MINOR — `isPrivateIPv4Literal` divergence.** The schema reimplementation in `packages/data-provider` was discarding the `c` octet, so the `192.0.0.0/24` (RFC 5736 IETF protocol assignments) range that the authoritative `isPrivateIPv4` accepts was being rejected with a misleading "public IP" error. Destructure `c` and add the missing range check; covered by the new schema tests.

  **F3 MINOR — DRY violation across `domain.ts` and `agent.ts`.** Both files had independent normalization implementations with a subtle whitespace-check divergence (`/\s/` vs `.includes(' ')`). Extracted the shared logic into a new `packages/api/src/auth/allowedAddresses.ts` module that both consumers import:
  - `normalizeAddressEntry(entry)` — single-entry shape check
  - `looksLikeHostPort(entry)` — host:port detector (used by F4)
  - `normalizeAllowedAddressesSet(list)` — pre-normalized Set for the connect-time hot path
  - `isAddressInAllowedSet(candidate, set)` — membership check that enforces private-IP scoping on the candidate

  Both `isAddressAllowed` (preflight) and `buildSSRFSafeLookup` (connect) now go through the same primitives; the whitespace divergence is gone. To break the import cycle (`allowedAddresses` needs `isPrivateIP`, which `domain` previously owned), IP private-range detection was extracted into a leaf `auth/ip.ts` module. `domain.ts` re-exports `isPrivateIP` for backward compatibility with existing call sites.
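The corrected octet check can be sketched as follows. This is illustrative only: the authoritative `isPrivateIPv4` covers additional special-purpose ranges beyond the ones shown.

```javascript
// Octet-based private-range check, including the 192.0.0.0/24 (RFC 5736)
// range that the F2 fix restored -- note that the `c` octet must be kept.
function isPrivateIPv4Literal(ip) {
  const m = /^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/.exec(ip);
  if (!m) return false;
  const [a, b, c, d] = m.slice(1).map(Number);
  if ([a, b, c, d].some((octet) => octet > 255)) return false;
  return (
    a === 10 ||                          // 10.0.0.0/8
    a === 127 ||                         // 127.0.0.0/8 loopback
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) ||          // 192.168.0.0/16
    (a === 169 && b === 254) ||          // 169.254.0.0/16 link-local
    (a === 192 && b === 0 && c === 0)    // 192.0.0.0/24 -- needs `c`
  );
}
```

Dropping `c` from the destructuring is exactly what made `192.0.0.5` indistinguishable from `192.0.1.5` in the buggy version.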
  **F4 MINOR — `host:port` silently misclassified.** Entries like `localhost:8080` previously slipped through the URL/path guard, were mis-detected as IPv6, failed `isPrivateIP`, and were silently dropped with a misleading "public IP" schema error. Added an explicit `looksLikeHostPort` check with a clear error: "allowedAddresses entries must not include a port — list the bare hostname or IP only." Bare `::1`, `[::1]`, and other valid IPv6 literals are intentionally not matched (the regex distinguishes by colon count and the bracketed `[ipv6]:port` form).

  **F5 MINOR — hostname-trust documentation gap.** Hostname entries short-circuit `resolveHostnameSSRF` before any DNS lookup — that's a deliberate design (the admin trusts the name), but it means the exemption follows whatever the name resolves to at runtime. Added an explicit note in `librechat.example.yaml` for both `mcpSettings.allowedAddresses` and `endpoints.allowedAddresses`: "a hostname entry trusts whatever IP that name resolves to. Only list hostnames whose DNS you control. Prefer literal IPs when you can."

  **F6** (8 positional params) is flagged for follow-up; refactoring to an options object is a breaking API change deferred to a separate PR.

  **F7** (redirect/WebSocket asymmetry, NIT, conf 40) — skipping; the existing inline comment is sufficient.

* 🧹 chore: Address Follow-Up NITs — Import Order And Mirror-Function Naming

  Three NITs from the latest comprehensive review:

  **NIT #1 (conf 85) — local import order.** AGENTS.md requires local imports sorted longest-to-shortest. Both `domain.ts` and `agent.ts` had `./ip` (shorter) before `./allowedAddresses` (longer). Swapped.

  **NIT #2 (conf 60) — missing cross-reference.** The schema-side `isHostPortShape` in `packages/data-provider/src/config.ts` had no note pointing at the canonical runtime mirror.
  Added a JSDoc paragraph explaining the mirror relationship and why a local copy exists (the data-provider package can't import from `@librechat/api` without creating a circular dependency).

  **NIT #3 (conf 50) — naming inconsistency.** Renamed `isHostPortShape` → `looksLikeHostPort` so the schema mirror matches the runtime helper exactly. Kept as a separate function (not a shared import) for the same circular-dependency reason; the matching name makes it obvious they should stay in lockstep.
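A detector with the properties described for `looksLikeHostPort` could look like this. This is a sketch; the real implementation's regex may differ.

```javascript
// Flags "localhost:8080", "10.0.0.5:443", and "[::1]:8080", but not bare
// IPv6 literals: those contain two or more colons and are left alone.
function looksLikeHostPort(entry) {
  // Bracketed IPv6 with a port, e.g. "[::1]:8080"
  if (/^\[[^\]]+\]:\d+$/.test(entry)) return true;
  // Exactly one colon followed by digits => host:port
  const colons = (entry.match(/:/g) || []).length;
  return colons === 1 && /^[^:]+:\d+$/.test(entry);
}
```

Distinguishing by colon count is what lets `localhost:8080` produce the clear "must not include a port" error while `::1` continues through the normal IPv6 path.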
551 lines · 19 KiB · JavaScript
const mongoose = require('mongoose');
const { logger, webSearchKeys } = require('@librechat/data-schemas');
const {
  getNewS3URL,
  needsRefresh,
  MCPOAuthHandler,
  MCPTokenStorage,
  normalizeHttpError,
  extractWebSearchEnvVars,
} = require('@librechat/api');
const {
  Tools,
  CacheKeys,
  Constants,
  FileSources,
  ResourceType,
} = require('librechat-data-provider');
const { updateUserPluginAuth, deleteUserPluginAuth } = require('~/server/services/PluginService');
const { verifyOTPOrBackupCode } = require('~/server/services/twoFactorService');
const { verifyEmail, resendVerificationEmail } = require('~/server/services/AuthService');
const { getMCPManager, getFlowStateManager, getMCPServersRegistry } = require('~/config');
const { invalidateCachedTools } = require('~/server/services/Config/getCachedTools');
const { processDeleteRequest } = require('~/server/services/Files/process');
const { getAppConfig } = require('~/server/services/Config');
const { getLogStores } = require('~/cache');
const db = require('~/models');

const getUserController = async (req, res) => {
  const appConfig = await getAppConfig({ role: req.user?.role, tenantId: req.user?.tenantId });
  /** @type {IUser} */
  const userData = req.user.toObject != null ? req.user.toObject() : { ...req.user };
  /**
   * These fields should not exist due to secure field selection, but deletion
   * is done in case of alternate database incompatibility with Mongo API
   */
  delete userData.password;
  delete userData.totpSecret;
  delete userData.backupCodes;
  if (appConfig.fileStrategy === FileSources.s3 && userData.avatar) {
    const avatarNeedsRefresh = needsRefresh(userData.avatar, 3600);
    if (!avatarNeedsRefresh) {
      return res.status(200).send(userData);
    }
    const originalAvatar = userData.avatar;
    try {
      userData.avatar = await getNewS3URL(userData.avatar);
      await db.updateUser(userData.id, { avatar: userData.avatar });
    } catch (error) {
      userData.avatar = originalAvatar;
      logger.error('Error getting new S3 URL for avatar:', error);
    }
  }
  res.status(200).send(userData);
};

const getTermsStatusController = async (req, res) => {
  try {
    const user = await db.getUserById(req.user.id, 'termsAccepted');
    if (!user) {
      return res.status(404).json({ message: 'User not found' });
    }
    res.status(200).json({ termsAccepted: !!user.termsAccepted });
  } catch (error) {
    logger.error('Error fetching terms acceptance status:', error);
    res.status(500).json({ message: 'Error fetching terms acceptance status' });
  }
};

const acceptTermsController = async (req, res) => {
  try {
    const user = await db.updateUser(req.user.id, { termsAccepted: true });
    if (!user) {
      return res.status(404).json({ message: 'User not found' });
    }
    res.status(200).json({ message: 'Terms accepted successfully' });
  } catch (error) {
    logger.error('Error accepting terms:', error);
    res.status(500).json({ message: 'Error accepting terms' });
  }
};

const deleteUserFiles = async (req) => {
  try {
    const userFiles = await db.getFiles({ user: req.user.id });
    await processDeleteRequest({
      req,
      files: userFiles,
    });
  } catch (error) {
    logger.error('[deleteUserFiles]', error);
  }
};

/**
 * Deletes MCP servers solely owned by the user and cleans up their ACLs.
 * Disconnects live sessions for deleted servers before removing DB records.
 * Servers with other owners are left intact; the caller is responsible for
 * removing the user's own ACL principal entries separately.
 *
 * Also handles legacy (pre-ACL) MCP servers that only have the author field set,
 * ensuring they are not orphaned if no permission migration has been run.
 * @param {string} userId - The ID of the user.
 */
const deleteUserMcpServers = async (userId) => {
  try {
    const MCPServer = mongoose.models.MCPServer;
    const AclEntry = mongoose.models.AclEntry;
    if (!MCPServer) {
      return;
    }

    const userObjectId = new mongoose.Types.ObjectId(userId);
    const soleOwnedIds = await db.getSoleOwnedResourceIds(userObjectId, ResourceType.MCPSERVER);

    const authoredServers = await MCPServer.find({ author: userObjectId })
      .select('_id serverName')
      .lean();

    const migratedEntries =
      authoredServers.length > 0
        ? await AclEntry.find({
            resourceType: ResourceType.MCPSERVER,
            resourceId: { $in: authoredServers.map((s) => s._id) },
          })
            .select('resourceId')
            .lean()
        : [];
    const migratedIds = new Set(migratedEntries.map((e) => e.resourceId.toString()));
    const legacyServers = authoredServers.filter((s) => !migratedIds.has(s._id.toString()));
    const legacyServerIds = legacyServers.map((s) => s._id);

    const allServerIdsToDelete = [...soleOwnedIds, ...legacyServerIds];

    if (allServerIdsToDelete.length === 0) {
      return;
    }

    const aclOwnedServers =
      soleOwnedIds.length > 0
        ? await MCPServer.find({ _id: { $in: soleOwnedIds } })
            .select('serverName')
            .lean()
        : [];
    const allServersToDelete = [...aclOwnedServers, ...legacyServers];

    const mcpManager = getMCPManager();
    if (mcpManager) {
      await Promise.all(
        allServersToDelete.map(async (s) => {
          await mcpManager.disconnectUserConnection(userId, s.serverName);
          await invalidateCachedTools({ userId, serverName: s.serverName });
        }),
      );
    }

    await AclEntry.deleteMany({
      resourceType: ResourceType.MCPSERVER,
      resourceId: { $in: allServerIdsToDelete },
    });

    await MCPServer.deleteMany({ _id: { $in: allServerIdsToDelete } });
  } catch (error) {
    logger.error('[deleteUserMcpServers] General error:', error);
  }
};

const updateUserPluginsController = async (req, res) => {
  const appConfig = await getAppConfig({ role: req.user?.role, tenantId: req.user?.tenantId });
  const { user } = req;
  const { pluginKey, action, auth, isEntityTool } = req.body;
  try {
    if (!isEntityTool) {
      await db.updateUserPlugins(user._id, user.plugins, pluginKey, action);
    }

    if (auth == null) {
      return res.status(200).send();
    }

    let keys = Object.keys(auth);
    const values = Object.values(auth); // Used in 'install' block

    const isMCPTool = pluginKey.startsWith('mcp_') || pluginKey.includes(Constants.mcp_delimiter);

    // Early exit condition:
    // If keys are empty (meaning auth: {} was likely sent for uninstall, or auth was empty for install)
    // AND it's not web_search (which has special key handling to populate `keys` for uninstall)
    // AND it's NOT (an uninstall action FOR an MCP tool - we need to proceed for this case to clear all its auth)
    // THEN return.
    if (
      keys.length === 0 &&
      pluginKey !== Tools.web_search &&
      !(action === 'uninstall' && isMCPTool)
    ) {
      return res.status(200).send();
    }

    /** @type {number} */
    let status = 200;
    /** @type {string} */
    let message;
    /** @type {IPluginAuth | Error} */
    let authService;

    if (pluginKey === Tools.web_search) {
      /** @type {TCustomConfig['webSearch']} */
      const webSearchConfig = appConfig?.webSearch;
      keys = extractWebSearchEnvVars({
        keys: action === 'install' ? keys : webSearchKeys,
        config: webSearchConfig,
      });
    }

    if (action === 'install') {
      for (let i = 0; i < keys.length; i++) {
        authService = await updateUserPluginAuth(user.id, keys[i], pluginKey, values[i]);
        if (authService instanceof Error) {
          logger.error('[authService]', authService);
          ({ status, message } = normalizeHttpError(authService));
        }
      }
    } else if (action === 'uninstall') {
      // const isMCPTool was defined earlier
      if (isMCPTool && keys.length === 0) {
        // This handles the case where auth: {} is sent for an MCP tool uninstall.
        // It means "delete all credentials associated with this MCP pluginKey".
        authService = await deleteUserPluginAuth(user.id, null, true, pluginKey);
        if (authService instanceof Error) {
          logger.error(
            `[authService] Error deleting all auth for MCP tool ${pluginKey}:`,
            authService,
          );
          ({ status, message } = normalizeHttpError(authService));
        }
        try {
          // if the MCP server uses OAuth, perform a full cleanup and token revocation
          await maybeUninstallOAuthMCP(user.id, pluginKey, appConfig);
        } catch (error) {
          logger.error(
            `[updateUserPluginsController] Error uninstalling OAuth MCP for ${pluginKey}:`,
            error,
          );
        }
      } else {
        // This handles:
        // 1. Web_search uninstall (keys will be populated with all webSearchKeys if auth was {}).
        // 2. Other tools uninstall (if keys were provided).
        // 3. MCP tool uninstall if specific keys were provided in `auth` (not current frontend behavior).
        // If keys is empty for non-MCP tools (and not web_search), this loop won't run, and nothing is deleted.
        for (let i = 0; i < keys.length; i++) {
          authService = await deleteUserPluginAuth(user.id, keys[i]); // Deletes by authField name
          if (authService instanceof Error) {
            logger.error('[authService] Error deleting specific auth key:', authService);
            ({ status, message } = normalizeHttpError(authService));
          }
        }
      }
    }

    if (status === 200) {
      // If auth was updated successfully, disconnect MCP sessions as they might use these credentials
      if (pluginKey.startsWith(Constants.mcp_prefix)) {
        try {
          const mcpManager = getMCPManager();
          if (mcpManager) {
            // Extract server name from pluginKey (format: "mcp_<serverName>")
            const serverName = pluginKey.replace(Constants.mcp_prefix, '');
            logger.info(
              `[updateUserPluginsController] Attempting disconnect of MCP server "${serverName}" for user ${user.id} after plugin auth update.`,
            );
            await mcpManager.disconnectUserConnection(user.id, serverName);
            await invalidateCachedTools({ userId: user.id, serverName });
          }
        } catch (disconnectError) {
          logger.error(
            `[updateUserPluginsController] Error disconnecting MCP connection for user ${user.id} after plugin auth update:`,
            disconnectError,
          );
          // Do not fail the request for this, but log it.
        }
      }
      return res.status(status).send();
    }

    const normalized = normalizeHttpError({ status, message });
    return res.status(normalized.status).send({ message: normalized.message });
  } catch (err) {
    logger.error('[updateUserPluginsController]', err);
    return res.status(500).json({ message: 'Something went wrong.' });
  }
};

const deleteUserController = async (req, res) => {
  const { user } = req;

  try {
    const existingUser = await db.getUserById(
      user.id,
      '+totpSecret +backupCodes _id twoFactorEnabled',
    );
    if (existingUser && existingUser.twoFactorEnabled) {
      const { token, backupCode } = req.body;
      const result = await verifyOTPOrBackupCode({ user: existingUser, token, backupCode });

      if (!result.verified) {
        const msg =
          result.message ??
          'TOTP token or backup code is required to delete account with 2FA enabled';
        return res.status(result.status ?? 400).json({ message: msg });
      }
    }

    await db.deleteMessages({ user: user.id });
    await db.deleteAllUserSessions({ userId: user.id });
    await db.deleteTransactions({ user: user.id });
    await db.deleteUserKey({ userId: user.id, all: true });
    await db.deleteBalances({ user: user._id });
    await db.deletePresets(user.id);
    try {
      await db.deleteConvos(user.id);
    } catch (error) {
      logger.error('[deleteUserController] Error deleting user convos, likely no convos', error);
    }
    await deleteUserPluginAuth(user.id, null, true);
    await db.deleteUserById(user.id);
    await db.deleteAllSharedLinks(user.id);
    await deleteUserFiles(req);
    await db.deleteFiles(null, user.id);
    await db.deleteToolCalls(user.id);
    await db.deleteUserAgents(user.id);
    await db.deleteAllAgentApiKeys(user._id);
    await db.deleteAssistants({ user: user.id });
    await db.deleteConversationTags({ user: user.id });
    await db.deleteAllUserMemories(user.id);
    await db.deleteUserPrompts(user.id);
    await db.deleteUserSkills(user.id);
    await deleteUserMcpServers(user.id);
    await db.deleteActions({ user: user.id });
    await db.deleteTokens({ userId: user.id });
    await db.removeUserFromAllGroups(user.id);
    await db.deleteAclEntries({ principalId: user._id });
    logger.info(`User deleted account. Email: ${user.email} ID: ${user.id}`);
    res.status(200).send({ message: 'User deleted' });
  } catch (err) {
    logger.error('[deleteUserController]', err);
    return res.status(500).json({ message: 'Something went wrong.' });
  }
};

const verifyEmailController = async (req, res) => {
  try {
    const verifyEmailService = await verifyEmail(req);
    if (verifyEmailService instanceof Error) {
      return res.status(400).json(verifyEmailService);
    } else {
      return res.status(200).json(verifyEmailService);
    }
  } catch (e) {
    logger.error('[verifyEmailController]', e);
    return res.status(500).json({ message: 'Something went wrong.' });
  }
};

const resendVerificationController = async (req, res) => {
  try {
    const result = await resendVerificationEmail(req);
    if (result instanceof Error) {
      return res.status(400).json(result);
    } else {
      return res.status(200).json(result);
    }
  } catch (e) {
    logger.error('[resendVerificationController]', e);
    return res.status(500).json({ message: 'Something went wrong.' });
  }
};

/** Best-effort cleanup of stored MCP OAuth tokens and flow state. */
const clearStoredMCPOAuthState = async (userId, serverName) => {
  try {
    await MCPTokenStorage.deleteUserTokens({
      userId,
      serverName,
      deleteToken: async (filter) => {
        await db.deleteTokens(filter);
      },
    });
  } catch (error) {
    logger.warn(
      `[clearStoredMCPOAuthState] Failed to delete MCP OAuth tokens for ${serverName}:`,
      error,
    );
  }

  try {
    const flowsCache = getLogStores(CacheKeys.FLOWS);
    const flowManager = getFlowStateManager(flowsCache);
    const flowId = MCPOAuthHandler.generateFlowId(userId, serverName);
    const results = await Promise.allSettled([
      flowManager.deleteFlow(flowId, 'mcp_get_tokens'),
      flowManager.deleteFlow(flowId, 'mcp_oauth'),
    ]);
    for (const result of results) {
      if (result.status === 'rejected') {
        logger.warn(
          `[clearStoredMCPOAuthState] Failed to clear MCP OAuth flow state for ${serverName}:`,
          result.reason,
        );
      }
    }
  } catch (error) {
    logger.warn(
      `[clearStoredMCPOAuthState] Failed to clear MCP OAuth flow state for ${serverName}:`,
      error,
    );
  }
};

/** Revokes MCP OAuth tokens at the provider when possible, then clears local state. */
const maybeUninstallOAuthMCP = async (userId, pluginKey, appConfig) => {
  if (!pluginKey.startsWith(Constants.mcp_prefix)) {
    // this is not an MCP server, so nothing to do here
    return;
  }

  const serverName = pluginKey.replace(Constants.mcp_prefix, '');
  const serverConfig =
    (await getMCPServersRegistry().getServerConfig(serverName, userId)) ??
    appConfig?.mcpServers?.[serverName];
  const oauthServers = await getMCPServersRegistry().getOAuthServers(userId);
  if (!oauthServers.has(serverName) || !serverConfig) {
    await clearStoredMCPOAuthState(userId, serverName);
    return;
  }

  // 1. get client info used for revocation (client id, secret)
  let clientTokenData = null;
  try {
    clientTokenData = await MCPTokenStorage.getClientInfoAndMetadata({
      userId,
      serverName,
      findToken: db.findToken,
    });
  } catch (error) {
    logger.warn(
      `[maybeUninstallOAuthMCP] Unable to load OAuth client metadata for ${serverName}; clearing local MCP OAuth state only.`,
      error,
    );
    await clearStoredMCPOAuthState(userId, serverName);
    return;
  }
  if (clientTokenData == null) {
    logger.info(
      `[maybeUninstallOAuthMCP] Missing OAuth client metadata for ${serverName}; clearing local MCP OAuth state only.`,
    );
    await clearStoredMCPOAuthState(userId, serverName);
    return;
  }
  const { clientInfo, clientMetadata } = clientTokenData;

  // 2. get decrypted tokens before deletion
  let tokens = null;
  try {
    tokens = await MCPTokenStorage.getTokens({
      userId,
      serverName,
      findToken: db.findToken,
    });
  } catch (error) {
    logger.warn(
      `[maybeUninstallOAuthMCP] Unable to load OAuth tokens for ${serverName}; clearing local token state.`,
      error,
    );
  }

  // 3. revoke OAuth tokens at the provider
  const revocationEndpoint =
    serverConfig.oauth?.revocation_endpoint ?? clientMetadata.revocation_endpoint;
  const revocationEndpointAuthMethodsSupported =
    serverConfig.oauth?.revocation_endpoint_auth_methods_supported ??
    clientMetadata.revocation_endpoint_auth_methods_supported;
  const oauthHeaders = serverConfig.oauth_headers ?? {};
  const registry = getMCPServersRegistry();
  const allowedDomains = registry.getAllowedDomains();
  const allowedAddresses = registry.getAllowedAddresses();

  if (tokens?.access_token) {
    try {
      await MCPOAuthHandler.revokeOAuthToken(
        serverName,
        tokens.access_token,
        'access',
        {
          serverUrl: serverConfig.url,
          clientId: clientInfo.client_id,
          clientSecret: clientInfo.client_secret ?? '',
          revocationEndpoint,
          revocationEndpointAuthMethodsSupported,
        },
        oauthHeaders,
        allowedDomains,
        allowedAddresses,
      );
    } catch (error) {
      logger.error(
        `[maybeUninstallOAuthMCP] Error revoking OAuth access token for ${serverName}:`,
        error,
      );
    }
  }

  if (tokens?.refresh_token) {
    try {
      await MCPOAuthHandler.revokeOAuthToken(
        serverName,
        tokens.refresh_token,
        'refresh',
        {
          serverUrl: serverConfig.url,
          clientId: clientInfo.client_id,
          clientSecret: clientInfo.client_secret ?? '',
          revocationEndpoint,
          revocationEndpointAuthMethodsSupported,
        },
        oauthHeaders,
        allowedDomains,
        allowedAddresses,
      );
    } catch (error) {
      logger.error(
        `[maybeUninstallOAuthMCP] Error revoking OAuth refresh token for ${serverName}:`,
        error,
      );
    }
  }

  // 4. delete tokens from the DB and clear the flow state after revocation attempts
  await clearStoredMCPOAuthState(userId, serverName);
};

module.exports = {
  getUserController,
  getTermsStatusController,
  acceptTermsController,
  deleteUserController,
  verifyEmailController,
  updateUserPluginsController,
  resendVerificationController,
  deleteUserMcpServers,
  maybeUninstallOAuthMCP,
};