This page routes to the right starting point based on the operator’s current situation. Not every operator follows the same path: hardware, LPT access, and goals determine where to start.

Choose Your Starting Point

Start with the Quickstart. Verify the technology works on the hardware before committing time or money. Off-chain, no staking, no ETH: pure hardware verification.

Quickstart

Verify video transcoding and AI inference work on the GPU. Under 1 hour.

Operator Rationale

Review costs, revenue streams, and break-even analysis before committing.
Join a pool. No LPT staking, no on-chain activation, no protocol management. Connect the GPU to an existing pool operator and start processing jobs. The pool operator handles everything else.
Joining a pool requires finding a pool operator willing to accept workers. This is a social process (Discord, community pools directory) that may take days. The technical setup is fast once pool access is obtained.

Join a Pool

Evaluate pools, connect as a worker, start earning.

Community Pools

Directory of active orchestrator pools accepting workers.
AI inference does not require active set membership. Routing is capability-based, not stake-based. An operator with a capable GPU and minimal LPT can register on the AIServiceRegistry and start receiving AI jobs immediately.
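Capability advertisement is driven by the models a node declares. As a hedged illustration, an aiModels.json entry looks roughly like the following; the field names follow the go-livepeer AI worker format, but the model ID and price value here are purely illustrative:

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "ByteDance/SDXL-Lightning",
    "price_per_unit": 4768371,
    "warm": true
  }
]
```

Each entry maps a pipeline type to a model, a price, and whether the model is kept loaded ("warm") for faster first inference.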

AI Inference Operations

How AI routing works, aiModels.json configuration, and pipeline types.

Setup Guide

Full setup flow including AI configuration and on-chain registration.
Dual mode is additive: the existing video configuration does not change. Add -aiWorker, -aiModels, and an AI runner container. NVENC/NVDEC (video) run on dedicated encoder/decoder silicon and do not compete with the CUDA cores that AI inference uses; the two workloads do, however, share VRAM.
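The flag delta described above can be sketched as follows. This is a hedged example, not a canonical launch script: the existing-flags placeholder and the config path are assumptions, and the runner image tag should be checked against the current Livepeer AI releases.

```shell
# Pull an AI runner container alongside the existing node (image tag illustrative).
docker pull livepeer/ai-runner:latest

# Existing video flags stay exactly as they are; only the two AI flags are appended.
livepeer \
  ...existing orchestrator/transcoder flags unchanged... \
  -aiWorker \
  -aiModels /etc/livepeer/aiModels.json
```

Because the video flags are untouched, rolling back is just removing the two AI flags and stopping the runner container.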

Dual Mode Configuration

The exact configuration delta for adding AI to a running video node.

Model and Demand Reference

Which AI models fit the available VRAM and are in demand on the network.
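A quick back-of-envelope check helps before consulting the reference: model weights take roughly parameter count times bytes per parameter, plus a runtime overhead. This sketch uses an assumed flat 2 GB overhead and ignores KV-cache growth and batching, so treat it as a floor, not a guarantee:

```python
def fits_in_vram(params_billion: float, bytes_per_param: float,
                 vram_gb: float, overhead_gb: float = 2.0) -> bool:
    """Rough VRAM fit check: weights plus a fixed runtime overhead
    must fit on the card. Ignores KV cache and batch size."""
    weights_gb = params_billion * bytes_per_param  # 1B params @ fp16 ≈ 2 GB
    return weights_gb + overhead_gb <= vram_gb

# A ~3.5B-param diffusion model in fp16 on a 12 GB card:
print(fits_in_vram(3.5, 2.0, 12.0))   # True: ~7 GB weights + 2 GB overhead
# A 13B LLM in fp16 on a 24 GB card:
print(fits_in_vram(13.0, 2.0, 24.0))  # False: ~26 GB weights alone
```

Quantization changes the answer: the same 13B model at 4 bits per parameter (0.5 bytes) needs only about 6.5 GB of weights.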
The standard path. Full control over pricing, workloads, and protocol participation. Requires LPT for staking, ETH for gas, and ongoing operational commitment.

Setup Guide

Complete setup: install, configure, connect to Arbitrum, verify.

Requirements

Hardware, software, network, and token prerequisites by node mode.
Commercial orchestrator operation. Serve application workloads under SLAs. Per-gateway pricing. O-T split architecture for reliability. Fleet deployment patterns.

Business Case

The commercial orchestrator model: service fees, SLAs, per-gateway pricing.

Scale Operations

Fleet deployment, multi-GPU, multi-machine architecture.
Orchestrators carry governance weight. Total bonded stake (self + delegated) determines voting power on LIPs, treasury proposals, and protocol parameters. Operating a well-run node attracts delegation, which compounds governance influence.
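The weight calculation described above reduces to a simple ratio. The function below is an illustrative sketch of that arithmetic only; actual vote tallying happens on-chain and the numbers here are invented:

```python
def governance_weight(self_stake: float, delegated: float,
                      total_network_bonded: float) -> float:
    """Voting-power share = node's total bonded stake (self + delegated)
    divided by total bonded stake network-wide."""
    return (self_stake + delegated) / total_network_bonded

# Illustrative: 10k self-bonded + 90k delegated against 20M LPT bonded overall
print(governance_weight(10_000, 90_000, 20_000_000))  # 0.005
```

The compounding effect in the text shows up directly: every newly attracted delegation raises the numerator without the operator staking more of its own LPT.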

Operator Impact

Governance weight, the sovereign compute thesis, and what gets voted on.

Network Participation

How to vote on LIPs, quorum requirements, and SPE proposals.

Quick Reference

All Sections

Operator Considerations

Should I operate? Costs, revenue, business case, protocol influence.

Deployment Details

Which path? Solo, pool, O-T split, siphon, dual mode.

Workloads and AI

What to run? Video, AI diffusion, LLM, realtime, audio.

Staking and Earning

How to earn? Service fees, inflation rewards, delegation.

Config and Optimisation

How to tune? Pricing, capacity, model management.

Monitoring and Tools

How to keep running? Explorer, metrics, troubleshooting.

Advanced Operations

How to scale? Gateway relationships, pools, fleet ops.

Roadmap and Funding

What support exists? SPE grants, community, operator profiles.

Tutorials

Show me. End-to-end walkthroughs from zero to earning.
Last modified on March 17, 2026