The quickstart is an off-chain hardware verification pass: it ends before production setup, staking, or revenue configuration.
It runs two smoke tests, a video transcoding test and an AI inference test. Both run locally in Docker, with no LPT, no ETH, and no Arbitrum required. You only need your GPU and a few commands.

Choose your test

Video Transcoding Test

20-30 min. Verify GPU transcoding works. Runs an orchestrator and gateway on the same machine, sends a test stream with ffmpeg, and confirms HLS output.
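Before running the full orchestrator-plus-gateway loop, it can help to confirm that ffmpeg can produce HLS at all. The snippet below is an illustrative local sanity check, not a command from the quickstart itself: it transcodes a few seconds of a synthetic test pattern into an HLS playlist (the output path is arbitrary), and prints a hint if ffmpeg is missing.

```shell
# Local HLS sanity check (illustrative; the quickstart's own commands differ):
# encode 5 s of a synthetic test pattern to an HLS playlist with H.264.
if command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -y -v error -f lavfi -i testsrc=duration=5:size=1280x720:rate=30 \
    -c:v libx264 -hls_time 2 -f hls /tmp/hls_smoke.m3u8 \
    && echo "ok: playlist at /tmp/hls_smoke.m3u8" \
    || echo "ffmpeg ran but encoding failed (is libx264 built in?)"
else
  echo "ffmpeg not found; install it before running the video test"
fi
```

If this succeeds, the same toolchain that feeds the test stream to the gateway is known to work.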

AI Inference Test

35-65 min. Verify AI inference works. Runs an orchestrator and AI runner with one warm model, sends a test prompt, and confirms an image is returned. Requires 24 GB of VRAM for diffusion, or 8 GB for the LLM alternative.
Both tests are on the same page. Complete the video test first: the prerequisites and Docker setup are shared.

What you need

  • NVIDIA GPU — any model for the video test; 24 GB VRAM for AI diffusion (or 8-16 GB for the LLM alternative)
  • Docker Engine — with NVIDIA Container Toolkit for GPU passthrough
  • ffmpeg — for the video test only (ffmpeg -version to check)
  • Linux — required for the AI test (video test also works on WSL2 or macOS Docker)
No Arbitrum wallet, no LPT, no ETH.
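The requirements above can be checked in one pass. This is a generic PATH check, not a command from the guide; it reports each tool as found or missing without failing partway:

```shell
# Check that the quickstart prerequisites are on PATH.
# (GPU passthrough itself is verified later, when Docker runs a container.)
for tool in docker ffmpeg nvidia-smi; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
  fi
done
```

Anything reported missing should be installed before starting either test; `ffmpeg` matters only for the video test.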

After the quickstart

Setup Guide

Configure for production: on-chain registration, staking, and reward calling. The setup section takes a verified GPU to an earning node.

Operator Rationale

Still evaluating? Review the cost-benefit analysis before committing to the full setup.

Join a Pool

The fastest path to earning without a full setup: contribute GPU capacity to an existing operator pool.

Workload Options

See which workloads earn and which fit your hardware before committing.
Last modified on March 17, 2026