Parallelized Autoregressive Visual Generation

¹University of Hong Kong, ²ByteDance, ³Peking University
†Project lead, *Corresponding author

Parallelized autoregressive generation achieves significant speedup while maintaining generation quality.

Abstract

Autoregressive models have emerged as a powerful approach for visual generation but suffer from slow inference speed due to their sequential token-by-token prediction process. In this paper, we propose a simple yet effective approach for parallelized autoregressive visual generation that improves generation efficiency while preserving the advantages of autoregressive modeling. Our key insight is that parallel generation depends on visual token dependencies—tokens with weak dependencies can be generated in parallel, while strongly dependent adjacent tokens are difficult to generate together, as their independent sampling may lead to inconsistencies. Based on this observation, we develop a parallel generation strategy that generates distant tokens with weak dependencies in parallel while maintaining sequential generation for strongly dependent local tokens. Our approach can be seamlessly integrated into standard autoregressive models without modifying the architecture or tokenizer. Experiments on ImageNet and UCF-101 demonstrate that our method achieves a 3.6× speedup with comparable quality and up to 9.5× speedup with minimal quality degradation across both image and video generation tasks.

Key Insight

Parallel autoregressive generation depends on visual token dependencies.

Strongly dependent adjacent tokens are difficult to generate together.

Distant tokens with weak dependencies can be generated in parallel.
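Why strong dependencies break parallel sampling can be seen with a toy example (a hypothetical illustration, not from the paper): two adjacent tokens whose true joint distribution permits only the consistent pairs (0,0) and (1,1). Parallel decoding samples each token independently from its marginal, so an inconsistent pair appears roughly half the time:

```python
import random

# True joint distribution: all mass on the consistent pairs (0,0) and
# (1,1), so each token's marginal is uniform over {0, 1}.
consistent = {(0, 0), (1, 1)}

# Parallel decoding samples each token independently from its marginal.
trials = 10_000
bad = sum(
    (random.randint(0, 1), random.randint(0, 1)) not in consistent
    for _ in range(trials)
)
print(f"inconsistent pairs: {bad / trials:.1%}")  # ~50%
```

Distant tokens, whose joint distribution is close to the product of their marginals, avoid this mismatch, which is what the region-aligned schedule described next exploits.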

Method Overview

Comparison of different parallel generation strategies. Both strategies generate the initial tokens [1,2,3,4] sequentially, then generate multiple tokens in parallel per step, following the order [5a-5d] to [6a-6d] to [7a-7d], etc. (a) Our approach generates weakly dependent tokens across non-local regions in parallel, preserving coherent patterns and local details. (b) The naive method generates strongly dependent tokens within local regions simultaneously, where independently sampling strongly correlated tokens causes inconsistent generation and disrupted patterns, such as distorted tiger faces and fragmented zebra stripes.
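The ordering in strategy (a) can be sketched in a few lines of Python. This is a reconstruction from the figure under assumed sizes (an 8×8 token grid, a 2×2 grid of regions), not the released code:

```python
def parallel_schedule(grid=8, regions_per_side=2):
    r = grid // regions_per_side  # region side length (4 here)
    # Top-left corner of each region, in raster order: regions a, b, c, d.
    corners = [(i * r, j * r)
               for i in range(regions_per_side)
               for j in range(regions_per_side)]
    # Stage 1: one initial token per region, generated sequentially.
    steps = [[c] for c in corners]
    # Stage 2: remaining intra-region offsets in raster order; each step
    # groups the SAME offset across all regions (e.g. [5a, 5b, 5c, 5d]).
    for di in range(r):
        for dj in range(r):
            if (di, dj) == (0, 0):
                continue
            steps.append([(ci + di, cj + dj) for ci, cj in corners])
    return steps

schedule = parallel_schedule()
print(len(schedule))   # 19 steps: 4 sequential + 15 parallel (4 tokens each)
print(schedule[4])     # step "5": aligned positions across the 4 regions
```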

Approach

Generation Process

Illustration of our parallel generation process. Stage 1: sequential generation of the initial tokens (1-4) for each region (separated by dotted lines) to establish the global structure. Stage 2: parallel generation at aligned positions across different regions (e.g., 5a-5d), then moving to the next aligned positions (6a-6d, 7a-7d, etc.) for parallel generation. The same number indicates tokens generated in the same step, and the letter suffix (a, b, c, d) denotes different regions.
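Assuming the `parallel_schedule` helper from the sketch above, the two-stage decode loop could look as follows; the `model` stub and its calling convention are illustrative assumptions, not the paper's API:

```python
import torch

def generate(model, schedule, vocab_size=16):
    tokens = {}  # position -> sampled token id
    for group in schedule:
        # One forward pass per step; every position in `group` sees the
        # same visible context but is sampled independently.
        logits = model(tokens, group)          # (len(group), vocab_size)
        probs = torch.softmax(logits, dim=-1)
        draws = torch.multinomial(probs, num_samples=1).squeeze(-1)
        for pos, tok in zip(group, draws.tolist()):
            tokens[pos] = tok
    return tokens

class DummyModel:
    """Stand-in that returns random logits, just to make the loop run."""
    def __init__(self, vocab_size=16):
        self.vocab_size = vocab_size
    def __call__(self, tokens, group):
        return torch.randn(len(group), self.vocab_size)

out = generate(DummyModel(), parallel_schedule())
print(len(out))  # 64: every position of the 8x8 grid is filled
```

Each parallel step costs a single forward pass, which is where the reduction in decoding steps, and hence the speedup, comes from.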

Model Architecture

Overview of our parallel autoregressive generation framework. (a) Model implementation. The model first generates the initial tokens sequentially [1,2,3,4], then uses learnable tokens [M1,M2,M3] to transition into parallel prediction mode. (b) Comparison of the visible context between our parallel prediction approach (left) and traditional single-token prediction (right).
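A minimal sketch of how the learnable transition tokens might be wired in; the class name (`ParallelPrefix`), dimensions, and exact placement are assumptions for illustration, and the released implementation may differ:

```python
import torch
import torch.nn as nn

class ParallelPrefix(nn.Module):
    """Appends learnable tokens [M1, M2, M3] after the initial tokens."""
    def __init__(self, dim=256, n_parallel=4):
        super().__init__()
        # One learnable embedding per extra position predicted in the
        # first parallel step (M1..M3 for 4-way parallelism).
        self.mode_tokens = nn.Parameter(torch.randn(n_parallel - 1, dim))

    def forward(self, initial_embeds):  # (batch, 4, dim) for tokens 1-4
        b = initial_embeds.size(0)
        m = self.mode_tokens.unsqueeze(0).expand(b, -1, -1)
        # Transformer input becomes [1, 2, 3, 4, M1, M2, M3]; the outputs
        # at the last 4 positions can then predict the next parallel group.
        return torch.cat([initial_embeds, m], dim=1)

prefix = ParallelPrefix()
print(prefix(torch.randn(2, 4, 256)).shape)  # torch.Size([2, 7, 256])
```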

Visual Comparison

Visual Comparison of Different Generation Strategies

Comparison of different generation strategies. Top: Our method with sequential initial tokens followed by parallel distant token prediction produces high-quality and coherent images. Middle: Direct parallel prediction without sequential initial tokens leads to inconsistent global structures. Bottom: Parallel prediction of adjacent tokens results in distorted local patterns and broken details.

Qualitative Results

Image Generation Results

Visual comparison of our parallel generation (PAR) and traditional autoregressive generation (LlamaGen). Our approach achieves a 3.6-9.5× speedup over LlamaGen with comparable quality, reducing the generation time per image from 12.41s to 3.46s (PAR-4×) and 1.31s (PAR-16×). Time measurements are conducted with a batch size of 1 on a single A100 GPU.
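These factors follow directly from the reported per-image times: 12.41 s / 3.46 s ≈ 3.6× for PAR-4× and 12.41 s / 1.31 s ≈ 9.5× for PAR-16×.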

Video Generation Results

Video generation results on UCF-101. Each row shows sampled frames from a 17-frame sequence at 128×128 resolution, generated by PAR-1×, PAR-4×, and PAR-16×, respectively, across different action categories.

Quantitative Results

Image Generation Quantitative Results

Class-conditional image generation on the ImageNet 256×256 benchmark. PAR-4× and PAR-16× denote generating 4 and 16 tokens in parallel per step, respectively.

Video Generation Quantitative Results

Comparison of class-conditional video generation methods on the UCF-101 benchmark.

More Visualizations

4× Speedup Generation Results

Additional image generation results of PAR-4× across different ImageNet categories.

16× Speedup Generation Results

Additional image generation results of PAR-16× across different ImageNet categories.