Step-by-step guide to building a custom AI container using the PyTrickle integration layer and deploying it as a worker on the Livepeer network.
Bring Your Own Container (BYOC) lets you run any custom AI model on the Livepeer network inside your own Docker container. Your container receives a live video (or audio) stream, processes it with your model, and returns the processed output, all over the Livepeer network's trickle streaming protocol.

BYOC was hardened to production-grade in Phase 4 (January 2026). The Embody SPE and Streamplace are currently running production BYOC workloads.

If you are building with ComfyUI workflows specifically, see Build with ComfyStream; ComfyStream is already BYOC-compatible and may be all you need.
Your BYOC container has two responsibilities:

- Exposes a REST API that the Livepeer gateway calls to start, stop, and update your processing session
- Connects to the trickle streaming layer: subscribes to an input stream URL and publishes to an output stream URL
PyTrickle handles both of these for you: you implement a single Python class (FrameProcessor), and PyTrickle takes care of the streaming, encoding, decoding, and API surface.
```python
from pytrickle import FrameProcessor, StreamServer
from pytrickle.frames import VideoFrame, AudioFrame
from typing import Optional, List

import torch


class MyAIProcessor(FrameProcessor):
    """Custom AI video processor."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.model = None

    async def initialize(self):
        """Load your model here. Called once on startup."""
        self.model = load_my_model()  # your model loading logic

    async def process_video_async(self, frame: VideoFrame) -> Optional[VideoFrame]:
        """Process a single video frame. Called once per frame."""
        tensor = frame.tensor  # PyTorch tensor: (H, W, C) or (C, H, W)
        # Run your model
        with torch.no_grad():
            processed = self.model(tensor)
        return frame.replace_tensor(processed)

    async def process_audio_async(self, frame: AudioFrame) -> Optional[List[AudioFrame]]:
        """Process audio. Return None to drop, return frame list to pass through."""
        return [frame]

    def update_params(self, params: dict):
        """Handle real-time parameter updates from the gateway or client."""
        pass  # implement if your model supports dynamic configuration


async def main():
    processor = MyAIProcessor()
    await processor.start()
    server = StreamServer(
        frame_processor=processor,
        port=8000,
        capability_name="live-video-to-video",  # must match the pipeline type expected by the gateway
    )
    await server.run_forever()
```
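The `update_params` hook is where runtime configuration lands. As an illustration, here is a minimal, self-contained sketch of the validation logic such a hook might apply before touching the model; the `ALLOWED_PARAMS` set and the `strength`/`prompt` parameter names are hypothetical examples, not part of the PyTrickle API:

```python
# Hypothetical parameter names, used only for illustration
ALLOWED_PARAMS = {"strength", "prompt"}


def merge_params(current: dict, update: dict) -> dict:
    """Return a new params dict with only recognized keys applied.

    Unknown keys are silently ignored, so a malformed update from
    the gateway cannot put the processor into an undefined state.
    """
    merged = dict(current)
    for key, value in update.items():
        if key in ALLOWED_PARAMS:
            merged[key] = value
    return merged


# Example: an update carrying one known and one unknown key
params = merge_params({"strength": 0.5}, {"strength": 0.8, "bogus": 1})
# params is now {"strength": 0.8}; "bogus" was dropped
```

Inside `update_params`, you would call something like this helper and then reconfigure the model from the merged dict, so a bad update never half-applies.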
Your BYOC container runs on an orchestrator. The orchestrator pulls your image, starts it, and routes live-video-to-video jobs to it.

To register your container with an orchestrator, you (or the orchestrator you are working with) configure go-livepeer to use BYOC mode and point it at your container image:
```shell
# Placeholder — exact flags not confirmed
# REVIEW: Verify the correct go-livepeer flags for BYOC container registration
livepeer \
  -orchestrator \
  -byoc \
  -byocImage <your-registry>/<your-image>:latest \
  # ... other orchestrator flags
```
For orchestrators currently accepting BYOC workloads, see the MuxionLabs BYOC example apps, which include working deployment configurations that other orchestrators have used.
BYOC orchestrator onboarding is actively scaling as of Phase 4 (January 2026). If you cannot find a willing orchestrator, reach out in the #developers channel of the Livepeer Discord.
ComfyStream is already integrated with PyTrickle (Phase 4). To run ComfyStream as a BYOC worker, use the muxionlabs/comfystream image instead of building from scratch:
```shell
# REVIEW: Confirm the exact muxionlabs/comfystream image name and tag
docker pull muxionlabs/comfystream:latest
```
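Once pulled, the image runs like any other BYOC worker container. A hypothetical local smoke-test invocation is sketched below; the port mapping follows the `port=8000` used by the StreamServer example earlier in this guide, but the exact flags, ports, and required environment variables for this image are assumptions and should be verified against the image's own documentation:

```shell
# REVIEW: confirm the port, GPU requirements, and any required env vars
docker run --rm --gpus all -p 8000:8000 muxionlabs/comfystream:latest
```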