OnlyFlow: Optical Flow based Motion Conditioning for Video Diffusion Models

Mathis Koroglu, Hugo Caselles-Dupré, Guillaume Jeanneret Sanmiguel, Matthieu Cord
arXiv Report | Project Page | GitHub

Quick Start:

  1. Select desired Base Model.
  2. Select a Motion Module. We recommend guoyww/animatediff-motion-adapter-v1-5-3 for the best results (a short loading sketch follows this list).
  3. Provide a Positive Prompt and a Negative Prompt. Refer to each model's page on the HuggingFace Hub or CivitAI to learn how to write prompts for it.
  4. Upload a video to extract optical flow from (a flow-extraction sketch also follows this list).
  5. Select a 'Flow Scale' to modulate the strength of the optical-flow conditioning extracted from the input video.
  6. Set 'CFG' (classifier-free guidance scale) and 'Diffusion Steps' to control prompt adherence and the quality of the generated video.
  7. Select a 'Temporal Downsample' to reduce the number of frames in the input video.
  8. If you want to use a custom dimension, check the Custom Dimension box and adjust the Width and Height sliders.
  9. If the video is too long, you can adjust the generation window offset with the Context Stride slider.
  10. Click Generate, wait a minute or so, and enjoy the result!
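
To make the base-model, motion-module, prompt, CFG, and steps controls concrete, here is a minimal sketch using the stock diffusers AnimateDiff API. It is illustrative only: OnlyFlow's own pipeline additionally injects the optical-flow conditioning, which is not shown here, and the base-model id and generation settings below are placeholders, not the demo's defaults.

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Step 2: the recommended motion module.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
)

# Step 1: any SD 1.5-style base model (placeholder id, pick your own checkpoint).
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    beta_schedule="linear",
    clip_sample=False,
    timestep_spacing="linspace",
)

# Steps 3 and 6: prompts, CFG scale, and number of diffusion steps (assumed values).
result = pipe(
    prompt="a boat sailing on a calm lake, golden hour",
    negative_prompt="low quality, blurry",
    num_frames=16,
    guidance_scale=7.5,        # 'CFG'
    num_inference_steps=25,    # 'Diffusion Steps'
)
export_to_gif(result.frames[0], "output.gif")
```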
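Likewise, here is a rough sketch of what steps 4, 5, and 7 correspond to: extracting dense optical flow from the uploaded clip (using torchvision's RAFT model as a stand-in for whichever estimator the demo uses) and applying the temporal downsample and flow scale. The file name, resolution, and slider values are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.io import read_video
from torchvision.models.optical_flow import Raft_Large_Weights, raft_large

device = "cuda" if torch.cuda.is_available() else "cpu"

# Step 4: load the clip as [T, C, H, W] frames.
frames, _, _ = read_video("input.mp4", output_format="TCHW", pts_unit="sec")

# Step 7: 'Temporal Downsample' keeps every k-th frame (assumed value).
temporal_downsample = 2
frames = frames[::temporal_downsample]

# RAFT expects spatial sizes divisible by 8; resize and scale to [0, 1].
frames = F.interpolate(frames.float() / 255.0, size=(512, 512), mode="bilinear")

weights = Raft_Large_Weights.DEFAULT
model = raft_large(weights=weights).eval().to(device)
transforms = weights.transforms()

# Pair consecutive frames and normalize them the way the weights expect.
img1, img2 = transforms(frames[:-1], frames[1:])
with torch.no_grad():
    # RAFT returns a list of iterative refinements; the last one is the final flow.
    flow = model(img1.to(device), img2.to(device))[-1]  # [T-1, 2, H, W]

# Step 5: 'Flow Scale' rescales the conditioning strength (assumed value).
flow_scale = 1.0
flow = flow * flow_scale
```

For long clips you would run the frame pairs through RAFT in chunks rather than in one batch to stay within GPU memory.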

If you encounter an error about GPU limits, please try again later once your ZeroGPU quota has reset, or use a shorter video. Alternatively, you can duplicate this Space and select a custom GPU plan.
