Presto

Create long videos with rich content and long-range coherence from text prompts.

Paper | Code (Soon)

Long Video Diffusion Generation with Segmented Cross-Attention and Content-Rich Video Data Curation

01.AI

Presto generates long videos with rich content and long-range coherence.

Abstract

We introduce Presto, a novel video diffusion model designed to generate 15-second videos with long-range coherence and rich content.

Extending video generation methods to maintain scenario diversity over long durations presents significant challenges. To address this, we propose a Segmented Cross-Attention (SCA) strategy, which splits hidden states into segments along the temporal dimension, allowing each segment to cross-attend to a corresponding sub-caption. SCA requires no additional parameters, enabling seamless incorporation into current DiT-based architectures.
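As a rough sketch of this idea (not the released implementation), the following PyTorch snippet splits the hidden states along the temporal axis and lets each segment cross-attend to its own sub-caption through one shared attention block, so the split itself adds no parameters. The module name, tensor shapes, and segment count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SegmentedCrossAttention(nn.Module):
    """Sketch: hidden states are split into temporal segments and each segment
    cross-attends to its own sub-caption, reusing a single shared attention
    block so the segmentation introduces no extra parameters."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # One shared cross-attention module serves all segments.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, hidden, sub_caption_embs):
        # hidden: (B, T, D) video tokens ordered along the temporal axis
        # sub_caption_embs: list of K tensors, each (B, L_k, D), one per sub-caption
        segments = hidden.chunk(len(sub_caption_embs), dim=1)  # split along time
        outputs = []
        for seg, cap in zip(segments, sub_caption_embs):
            # Each temporal segment attends only to its corresponding sub-caption.
            out, _ = self.attn(query=seg, key=cap, value=cap)
            outputs.append(out)
        return torch.cat(outputs, dim=1)  # (B, T, D)

# Example with five progressive sub-captions (shapes are illustrative).
hidden = torch.randn(2, 100, 512)
sub_captions = [torch.randn(2, 32, 512) for _ in range(5)]
sca = SegmentedCrossAttention(dim=512)
out = sca(hidden, sub_captions)  # (2, 100, 512)
```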

To facilitate high-quality long video generation, we build the LongTake-HD dataset, consisting of 261k content-rich videos with scenario coherence, annotated with an overall video caption and five progressive sub-captions.

We show that Presto outperforms existing video diffusion models on both automatic metrics and human evaluations, demonstrating that our method significantly enhances content richness, maintains long-range coherence, and captures intricate textual details.

Architecture


Presto utilizes LLMs to decompose the text input into multiple progressive sub-captions. We propose a Segmented Cross-Attention mechanism to incorporate these sub-captions concurrently.
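The exact decomposition prompt and LLM used by Presto are not specified here; the snippet below is only a hypothetical illustration of how a single user prompt might be rewritten into progressive sub-captions before being fed to the segmented cross-attention layers.

```python
# Hypothetical prompt template for decomposing a user prompt into five
# progressive sub-captions with an off-the-shelf LLM (illustration only;
# not the template used by Presto).
def build_decomposition_prompt(user_prompt: str, num_segments: int = 5) -> str:
    return (
        f"Expand the following video description into {num_segments} progressive "
        "sub-captions, one per temporal segment, so the scenario evolves "
        "coherently from the first segment to the last.\n"
        f"Video description: {user_prompt}\n"
        "Return one sub-caption per line."
    )

print(build_decomposition_prompt(
    "A hiker walks from a forest trail to a mountain summit at sunset."
))
```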

LongTake-HD


Our LongTake-HD dataset has 261k instances, each comprising a content-rich video filtered from public sources, annotated with an overall video caption and five progressive sub-captions. We apply rigorous filtering criteria to ensure scenario coherence and content richness in videos.
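To make the annotation structure concrete, a single instance can be pictured as an overall caption paired with five progressive sub-captions. The field names below are assumptions for illustration, not the released annotation schema.

```python
# Hypothetical layout of one LongTake-HD record (field names assumed).
example_record = {
    "video_path": "clips/000123.mp4",
    "overall_caption": "A chef prepares a multi-course meal in a busy kitchen.",
    "sub_captions": [
        "The chef washes and chops fresh vegetables on a wooden board.",
        "Ingredients are seared in a hot pan, sending up steam.",
        "The chef plates the first course with careful garnishes.",
        "A dessert is assembled with layers of cream and fruit.",
        "The finished dishes are carried out to the dining room.",
    ],
}
assert len(example_record["sub_captions"]) == 5  # five progressive sub-captions per video
```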

Long Video

By decelerating the intensified dynamics with an interpolation model, Presto can generate 15-second videos. However, traditional interpolation models may introduce artifacts, because the scenario motion is too large for them to handle well. A better approach is an image-conditioned video diffusion model, which we leave as future work.
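For intuition, the arithmetic below shows how interpolation stretches a shorter, fast-motion generation toward a 15-second clip; the frame counts, interpolation factor, and frame rate are assumptions chosen purely for illustration, not Presto's actual settings.

```python
# Rough arithmetic sketch (all numbers below are assumed, not from the paper).
base_frames = 120          # frames produced by the base diffusion model (assumed)
interpolation_factor = 4   # three extra frames inserted between each pair (assumed)
fps = 32                   # playback frame rate (assumed)

total_frames = (base_frames - 1) * interpolation_factor + 1
duration_seconds = total_frames / fps
print(f"{total_frames} frames -> {duration_seconds:.1f} s")  # ~14.9 s
```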