Choose the right deployment path for running a Livepeer orchestrator - solo, pool node, O-T split, siphon, and dual mode workload configuration.
Key Decisions for Orchestrator Setup
Setup Type - On-chain or off-chain
Setup Path - What software is installed
Operational Mode - Whether you control and handle all operational requirements or delegate them
Workload Mode - What compute job workloads the node processes
This page is a guide to finding the Orchestrator setup path that matches your operational aims. It covers the deployment options available for Orchestrators, by category and focus:
A pool node is not a pool operator. A pool node joins someone else’s pool and contributes GPU compute. A pool operator runs the orchestrator that accepts external workers. These are different deployment types with different requirements.
The standard path. A single go-livepeer process on one machine handles protocol operations, job routing, and GPU work. The operator controls everything: pricing, stake, workloads, reward calling, and uptime.
Setup Guide
Install, configure, connect, and verify.
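As a sketch, a solo on-chain orchestrator is typically started with a single go-livepeer command. The addresses, price, and GPU selection below are placeholder assumptions; check each flag against the go-livepeer CLI reference for your own deployment:

```shell
# Hypothetical solo orchestrator launch -- the RPC URL, service
# address, and price are placeholders to adjust for your deployment.
livepeer \
  -orchestrator \
  -transcoder \
  -network arbitrum-one-mainnet \
  -ethUrl https://arb1.arbitrum.io/rpc \
  -serviceAddr 0.0.0.0:8935 \
  -pricePerUnit 70 \
  -nvidia "all"
```

Running with both `-orchestrator` and `-transcoder` keeps protocol work and GPU work in one process, which is what makes this the simplest path.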
A GPU-only process connecting to a pool operator’s orchestrator. No staking, no on-chain registration, no protocol management. The operator controls: which pool to join, GPU hardware, when to switch. The pool controls: registration, staking, pricing, reward calling, payout schedules.
Join a Pool
Evaluate pools, connect as a worker, start earning.
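As a sketch, joining a pool usually means running go-livepeer in transcoder-only mode against the pool's endpoint. The hostname and secret below are placeholders supplied by the pool operator, and some pools ship their own worker software instead, so follow the pool's own instructions:

```shell
# Hypothetical pool-worker launch -- pool.example.com and the
# secret are placeholders provided by the pool operator.
livepeer \
  -transcoder \
  -orchAddr pool.example.com:8935 \
  -orchSecret "secret-from-pool" \
  -nvidia "all"
```

Note there is no wallet, stake, or on-chain flag here: the pool's orchestrator handles all of that.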
Orchestrator and transcoder as separate processes on separate machines. The orchestrator handles protocol operations (no GPU); the transcoder handles GPU work. They are connected by a shared secret. The operator controls: everything, but responsibilities are split across machines.
O-T Split Setup
Separate protocol from GPU work. Connect multiple GPU machines.
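The split described above can be sketched as two commands, one per machine, joined by the shared secret. Hostnames, the RPC URL, and the secret are placeholder assumptions:

```shell
# Orchestrator machine (no GPU): protocol operations only, and it
# exposes the endpoint that remote transcoders connect to.
livepeer \
  -orchestrator \
  -network arbitrum-one-mainnet \
  -ethUrl https://arb1.arbitrum.io/rpc \
  -serviceAddr orch.example.com:8935 \
  -orchSecret "shared-secret"

# Transcoder machine (GPU): connects back using the same secret.
livepeer \
  -transcoder \
  -orchAddr orch.example.com:8935 \
  -orchSecret "shared-secret" \
  -nvidia "all"
```

Additional GPU machines can be attached by repeating the transcoder command on each one with the same `-orchAddr` and `-orchSecret`.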
A secure machine runs OrchestratorSiphon (Python) for keystore, reward calling, and on-chain operations. A GPU machine runs go-livepeer in transcoder mode. Reward calling continues even when the GPU machine is down. The operator controls: everything, via OrchestratorSiphon on the secure machine.
Siphon Setup
Keystore isolation and reward safety.
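As a sketch of the GPU side only: OrchestratorSiphon's own configuration on the secure machine is project-specific and not shown here, but the GPU machine runs a plain go-livepeer transcoder pointed at the orchestrator endpoint the secure machine exposes. The hostname and secret below are placeholders:

```shell
# GPU machine in a siphon deployment: transcoder mode only, with no
# keys on disk. secure-host.example.com and the secret are placeholders.
livepeer \
  -transcoder \
  -orchAddr secure-host.example.com:8935 \
  -orchSecret "shared-secret" \
  -nvidia "all"
```

Because the keystore and reward calling live entirely on the secure machine, this command can be stopped and restarted freely without touching on-chain operations.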
An orchestrator that accepts external GPU workers. Manages on-chain operations and distributes earnings to workers off-chain. An extension of the O-T split pattern. The operator controls: on-chain identity, pricing, worker acceptance, fee distribution.
Pool Operators
Worker management, fee distribution, and pool economics.
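On the protocol side, a pool operator's node looks like the orchestrator half of an O-T split: it runs without a local GPU and admits remote workers via a shared secret. The values below are placeholders, and the off-chain fee distribution to workers is handled by separate pool software, not by go-livepeer itself:

```shell
# Hypothetical pool-operator orchestrator: external GPU workers
# connect using the shared secret. All values are placeholders.
livepeer \
  -orchestrator \
  -network arbitrum-one-mainnet \
  -ethUrl https://arb1.arbitrum.io/rpc \
  -serviceAddr pool.example.com:8935 \
  -orchSecret "worker-admission-secret" \
  -pricePerUnit 70
```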
Deployment type and workload mode are independent decisions. Any deployment type above can run any workload mode below.

Dual mode is the most common production configuration. NVENC/NVDEC (video) use dedicated silicon that does not compete with CUDA cores (AI). Both workloads share VRAM. A 24 GB GPU supports video transcoding alongside one warm AI model.

For full dual mode setup instructions, see . For a detailed breakdown of all AI pipeline types, VRAM requirements, and demand data, see .
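The VRAM-sharing point above can be made concrete with a rough budget calculation. Every per-workload figure below is an illustrative assumption, not a measured value; only the 24 GB total comes from the text:

```shell
# Rough VRAM budget for dual mode on a 24 GB GPU (values in MB).
# Per-session and per-model footprints are illustrative assumptions.
total_vram_mb=24000
transcode_sessions=10
vram_per_session_mb=500     # assumed NVENC/NVDEC session footprint
warm_ai_model_mb=16000      # assumed footprint of one warm AI model

used_mb=$(( transcode_sessions * vram_per_session_mb + warm_ai_model_mb ))
headroom_mb=$(( total_vram_mb - used_mb ))
echo "used=${used_mb} MB, headroom=${headroom_mb} MB"
```

With these assumed numbers the transcoding sessions and one warm model fit with headroom to spare, which is the scenario the text describes; a larger model or more sessions would eat into that margin.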