# Handler ↔ downstream lookup and handshake

Shows how `mtp_handler` locates an `mtp_down_conn` for a new client connection, and the steady-state data flow that follows.

Key actors:

- `mtp_handler` — one process per Telegram client TCP connection
- `mtp_dc_pool` — manages a pool of downstream connections for one DC
- `mtp_down_conn` — multiplexed TCP connection to a Telegram DC
- Telegram DC — the upstream Telegram data-centre server

In split mode (`node_role = front` / `back`), `mtp_handler` runs on the front node while `mtp_dc_pool` and `mtp_down_conn` run on the back node. The pool is addressed as `{mtp_dc_pool_N, BackNode}` — Erlang distribution makes the `gen_server:call` and all subsequent casts transparent across nodes. Multiple front nodes can share the same back node: the pools multiplex all upstream handlers over the downstream connections regardless of which front node they came from.
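As a rough illustration (not the actual `mtp_handler` code), the lookup described here and in the diagram's note might look like this; `dc_to_pool_name/1` is the name mapping shown in the diagram, while `back_node/0` and `default_dc/0` are hypothetical helpers standing in for the relevant config lookups:

```erlang
%% Illustrative sketch only -- not the real mtp_handler implementation.
%% back_node/0 and default_dc/0 are hypothetical config helpers;
%% error handling is omitted.
resolve_pool(DcId) ->
    PoolName = dc_to_pool_name(DcId),        % e.g. mtp_dc_pool_2
    case application:get_env(mtproto_proxy, node_role, both) of
        front ->
            BackNode = back_node(),
            %% ask the back node whether the pool is registered there
            case erpc:call(BackNode, erlang, whereis, [PoolName]) of
                Pid when is_pid(Pid) ->
                    {PoolName, BackNode};    % remote registered name
                undefined ->
                    resolve_pool(default_dc())
            end;
        _SingleNode ->
            case whereis(PoolName) of
                Pid when is_pid(Pid) -> PoolName;
                undefined -> resolve_pool(default_dc())
            end
    end.
```

Either return form can be handed to `mtp_dc_pool:get/3` as-is: a `gen_server:call` to `{mtp_dc_pool_N, BackNode}` is routed over Erlang distribution exactly like a local call.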

```mermaid
sequenceDiagram
    participant Client as Telegram client
    box LightBlue "FRONT node"
        participant Handler as mtp_handler
    end
    box LightGreen "BACK node"
        participant Pool as mtp_dc_pool
        participant Down as mtp_down_conn
    end
    participant TG as Telegram DC

    Client->>Handler: TCP connect + Hello bytes

    Note over Handler: decode protocol headers<br/>(fake-TLS / obfuscated / secure)<br/>stage: hello → tunnel

    Note over Handler: resolve pool:<br/>single-node: whereis(dc_to_pool_name(DcId))<br/>split mode:  erpc:call(BackNode, erlang, whereis, [PoolName])<br/>→ returns {PoolName, BackNode}<br/>(falls back to default DC from mtp_config if not found)
    Handler->>Pool: mtp_dc_pool:get(Pool, self(), Opts) [sync]
    Pool-->>Down: upstream_new(Handler, Opts) [cast]
    Pool->>Handler: Downstream pid

    Note over Handler: down = Downstream<br/>stage = tunnel

    loop steady-state data exchange
        Client->>Handler: TCP data
        Handler->>Down: mtp_down_conn:send(Down, Data) [sync]
        Down->>TG: TCP data (RPC-framed)
        TG->>Down: TCP data
        Down->>Handler: ok
        Down-->>Handler: {proxy_ans, Down, Data} [cast]
        Handler->>Client: TCP data
        Handler-->>Down: mtp_down_conn:ack(Down, Count, Size) [cast]
    end

    Client->>Handler: TCP close
    Handler-->>Pool: mtp_dc_pool:return(Pool, self()) [cast]
    Pool-->>Down: upstream_closed(Down, Handler) [cast]
```
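
To make the steady-state loop concrete, here is a deliberately simplified handler-side receive loop matching the diagram (a sketch, assuming a socket in binary, active mode; the real `mtp_handler` also runs the fake-TLS/obfuscated codec between the two sides, and the exact `Count`/`Size` accounting passed to `ack` is illustrative):

```erlang
%% Simplified sketch of the steady-state tunnel phase; the real handler
%% is an OTP process that also encrypts/decrypts and re-frames traffic.
tunnel_loop(Sock, Pool, Down) ->
    receive
        {tcp, Sock, Data} ->
            %% client -> Telegram: the synchronous send provides backpressure
            ok = mtp_down_conn:send(Down, Data),
            tunnel_loop(Sock, Pool, Down);
        {proxy_ans, Down, Data} ->
            %% Telegram -> client: delivered as an async cast
            ok = gen_tcp:send(Sock, Data),
            %% acknowledge so the downstream connection can do flow control
            mtp_down_conn:ack(Down, 1, byte_size(Data)),
            tunnel_loop(Sock, Pool, Down);
        {tcp_closed, Sock} ->
            %% client went away: hand our slot back to the pool
            mtp_dc_pool:return(Pool, self())
    end.
```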