Set operational flags for video transcoding, AI inference, or dual mode - docker-compose templates, pricing, networking, and GPU assignment.
Choose the tab that matches the intended workload. All three modes use the same binary and differ only by flags. Keep Arbitrum connection and staking flags for the connect-and-activate step.
Build the docker-compose file that will carry the node through the rest of setup. Pick one operating mode: video transcoding, AI inference, or dual mode.
The -serviceAddr flag declares the public address gateways connect to. This address must be reachable from the internet:
```bash
# IP address
-serviceAddr 203.0.113.42:8935

# Domain name (preferred - survives IP changes without re-registration)
-serviceAddr orch.yourdomain.com:8935
```
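In a docker-compose file, these flags belong in the service's `command`. A minimal sketch follows; the service name, image tag, and surrounding flags are illustrative, so adapt them to the mode chosen above:

```yaml
services:
  orchestrator:
    image: livepeer/go-livepeer:latest   # pin a specific version in production
    command: >
      -orchestrator
      -serviceAddr orch.yourdomain.com:8935
    ports:
      - "8935:8935"                      # public service port, must be reachable from the internet
    volumes:
      - ~/.lpData:/root/.lpData          # keeps the keystore across restarts
```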
Port 8935 must be open inbound on the firewall. Test reachability from a different machine before connecting on-chain:
```bash
curl -k https://YOUR_PUBLIC_IP:8935/status
```
Any response (including a JSON error) confirms the port is open. A connection timeout means the firewall is blocking it.

Port 7935 (`-cliAddr`) is for local CLI access and Prometheus metrics. Keep it bound to localhost unless external monitoring is configured.
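The distinction between "any response" and "timeout" can be scripted by reading curl's exit status (0 means a response arrived, 28 means the request timed out). A small sketch; the helper name is ours and the host argument is a placeholder:

```shell
# check_port: probe the public service port and classify the result.
# curl exit 0  -> some HTTP response arrived, so the port is open
# curl exit 28 -> timed out, firewall is likely dropping packets
# anything else (refused, TLS, DNS) needs case-by-case reading
check_port() {
  curl -k -s --max-time 10 "https://$1:8935/status" > /dev/null 2>&1
  case $? in
    0)  echo "open" ;;
    28) echo "timeout" ;;
    *)  echo "other" ;;
  esac
}

# check_port YOUR_PUBLIC_IP
```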
`-pricePerUnit` sets the video transcoding price in wei per pixel (not ETH). A typical starting range is 500-2,000 wei per pixel; start below 1,000 wei per pixel and adjust based on job volume.

For AI pricing, set `price_per_unit` per pipeline in `aiModels.json`. Values are in wei per output pixel (per millisecond for audio, per token for LLM pipelines).

For competitive positioning guidance, see the pricing docs.
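To get a feel for the unit, a quick back-of-the-envelope calculation (the price value below is illustrative, not a recommendation):

```shell
# One 1920x1080 frame contains 2,073,600 pixels.
PIXELS=$((1920 * 1080))
PRICE=1000                            # wei per pixel (illustrative)
WEI_PER_FRAME=$((PIXELS * PRICE))
echo "$WEI_PER_FRAME wei per frame"   # prints "2073600000 wei per frame"
```

At 1 ETH = 10^18 wei, that single frame works out to roughly 2 x 10^-9 ETH, which is why prices are quoted in wei rather than ETH.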
Always mount the data directory so the keystore and data survive container restarts:
```yaml
volumes:
  - ~/.lpData:/root/.lpData
```
Without this mount, go-livepeer creates a new Ethereum account on every container start, losing the previous keystore and all bonded LPT.

For AI workloads, also mount the models directory: