SimpleAR: Autoregressive Visual Generation at 1024×1024 Using Only 0.5B Parameters

SimpleAR is a minimalist autoregressive visual generation framework. With only 0.5 B parameters, it achieves competitive 1024×1024 image synthesis through a three-stage pipeline of large-scale pretraining, supervised fine-tuning, and GRPO-based reinforcement learning, and it demonstrates significant inference speedups using a KV cache, vLLM, and speculative decoding.

AIWalker

Overview of SimpleAR

SimpleAR is a compact autoregressive (AR) visual generation framework that fully covers pretraining, supervised fine‑tuning (SFT), and reinforcement learning (RL). Despite using only a 0.5 B‑parameter vanilla AR model, it can generate high‑fidelity 1024×1024 images and attain competitive scores on text‑to‑image benchmarks such as GenEval (0.59) and DPG‑Bench (79.66).

Autoregressive Image Generation

AR models treat image synthesis as a sequential token‑generation process, predicting each visual token conditioned on previously generated tokens and the text prompt. SimpleAR employs a unified decoder‑only transformer that jointly models text and visual tokens, eliminating the need for a separate text encoder used by many diffusion models.
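The token-by-token loop described above can be sketched as follows. Here `logits_fn` is a hypothetical stand-in for the decoder-only transformer (it maps the current token sequence to logits over the visual codebook); this is not the actual SimpleAR API.

```python
import numpy as np

def sample_image_tokens(logits_fn, prompt_ids, num_tokens, temperature=1.0, rng=None):
    """Sample visual tokens one at a time, conditioned on the text prompt
    tokens and all previously generated visual tokens."""
    rng = rng or np.random.default_rng(0)
    seq = list(prompt_ids)                 # text tokens condition the generation
    for _ in range(num_tokens):
        logits = np.asarray(logits_fn(seq), dtype=float) / temperature
        probs = np.exp(logits - logits.max())   # softmax with stability shift
        probs /= probs.sum()
        seq.append(int(rng.choice(len(probs), p=probs)))
    return seq[len(prompt_ids):]           # return only the visual tokens
```

In the real model the sampled visual tokens are finally detokenized back to pixels by the visual tokenizer's decoder.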

Three‑Stage Training Strategy

Large‑scale pretraining on diverse visual datasets (CC3M, CC12M, OpenImages, SAM1B, MegaImage) to learn generalizable visual token representations.

Supervised fine‑tuning (SFT) on curated datasets (JourneyDB, synthetic 1M, 10M internal) to improve fidelity and instruction alignment.

Reinforcement learning (RL) using Group Relative Policy Optimization (GRPO) to further refine multimodal alignment and reduce bias.

Pretraining and SFT Details

Both stages use a language-modeling loss: each next-token prediction is conditioned on the text prompt tokens and all preceding visual tokens. The visual tokenizer is a Cosmos-Tokenizer with a 64K-entry codebook and a down-sampling factor of 16. Training hyper-parameters: pretraining uses a 1e-4 learning rate; SFT uses 2e-5 with batch size 256; RL uses 1e-5 with batch size 28 and no warm-up or LR decay. All experiments run on 32 NVIDIA A100 GPUs.
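The language-modeling loss shared by both stages is a standard next-token cross-entropy. A generic sketch (not SimpleAR's exact implementation):

```python
import numpy as np

def next_token_nll(logits, targets):
    """Mean negative log-likelihood of the next-token targets.

    `logits` has shape (seq_len, vocab_size); `targets` holds the
    ground-truth token ids, shifted by one position relative to the input.
    """
    logits = np.asarray(logits, dtype=float)
    logits = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return float(-log_probs[np.arange(len(targets)), targets].mean())
```

In practice the loss is computed over the concatenated text-plus-visual token sequence, so the same objective trains both modalities jointly.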

RL with GRPO

After SFT, a trainable policy model and a frozen reference model are initialized from the same checkpoint. For each text prompt, the policy samples a group of candidate outputs, and the objective maximizes a CLIP-based reward while a KL term penalizes divergence from the reference model. Two reward modules were compared: CLIP-ViT-H-14 and its fine-tuned variant HPSv2. CLIP yielded the larger GenEval gain (+0.06) for the 0.5 B model.
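The "group-relative" part of GRPO normalizes each sampled output's reward against its own group, avoiding a learned value network. A minimal sketch of the advantage computation (the KL penalty against the frozen reference model enters the full objective separately):

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: standardize each candidate's reward by the
    mean and std of its sampling group. `rewards` would come from the CLIP
    (or HPSv2) scorer over the group of outputs for one prompt."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)
```

Candidates scoring above the group mean get positive advantages and are reinforced; below-mean candidates are suppressed.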

Inference and Acceleration Techniques

AR inference is inherently sequential, leading to high latency. SimpleAR adopts several LLM‑inspired optimizations:

KV Cache: stores key-value embeddings from previous attention steps to avoid recomputation.

vLLM Serving: leverages paged attention and efficient memory management for high-throughput inference.

Speculative Jacobi Decoding (SJD): generates multiple candidate token sequences with a draft model and validates them with the target model, reducing the number of autoregressive steps.

With KV‑cache, inference time drops by ~34 %. Using vLLM, generating a 1024×1024 image takes about 13.55 s on an A100 node.
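The KV-cache idea can be illustrated with a toy single-head attention step: past keys and values are stored once and reused at every subsequent step, so only the new token's projections are computed. This is a minimal sketch, not vLLM's paged implementation.

```python
import numpy as np

class KVCache:
    """Cache keys/values for past tokens so each decoding step attends over
    the full history without recomputing earlier projections."""
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, q, k, v):
        # Append this step's key/value, then attend over the whole history.
        self.keys.append(np.asarray(k, dtype=float))
        self.values.append(np.asarray(v, dtype=float))
        K = np.stack(self.keys)                  # (t, d)
        V = np.stack(self.values)                # (t, d)
        scores = K @ np.asarray(q, dtype=float) / np.sqrt(len(q))
        w = np.exp(scores - scores.max())        # softmax over history
        w /= w.sum()
        return w @ V                             # attention output for this step
```

Without the cache, step t would recompute keys and values for all t previous tokens, which is exactly the redundant work the reported ~34% time saving comes from.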

Experimental Evaluation

Setup

The transformer backbone follows the Qwen architecture and is initialized with LLM weights. Experiments evaluate GenEval and DPG‑Bench scores, as well as inference throughput.

Results

SimpleAR outperforms other AR models and many diffusion methods despite its small size. Scaling to 1.5 B parameters improves GenEval by +0.04 and DPG‑Bench by +1.85, showing predictable scaling behavior similar to large language models.

RL ablation shows that both CLIP‑ViT‑H‑14 and HPSv2 improve performance, with CLIP providing the larger boost. Reward values increase steadily during training and correlate positively with GenEval scores.

Inference speed experiments demonstrate that KV‑cache saves 34 % of time, vLLM reduces generation time to ~14 s, and SJD cuts the number of decoding steps by roughly half while maintaining comparable quality.
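The step savings from draft-and-verify decoding can be sketched with a simplified greedy variant: accept draft tokens while they match the target model's argmax and stop at the first mismatch. SJD's parallel Jacobi iteration is more involved; this only illustrates the core accept/reject idea, with `target_logits_fn` a hypothetical stand-in for the target model.

```python
import numpy as np

def verify_draft(target_logits_fn, seq, draft_tokens):
    """Greedy speculative verification: accept draft tokens that agree with
    the target model; on the first mismatch, keep the target's correction
    and stop. Several tokens can be committed per target-model pass."""
    accepted = []
    cur = list(seq)
    for tok in draft_tokens:
        target_tok = int(np.argmax(target_logits_fn(cur)))
        if tok != target_tok:
            accepted.append(target_tok)    # keep the target's correction
            break
        accepted.append(tok)
        cur.append(tok)
    return accepted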

Qualitative Analysis

Generated samples exhibit high fidelity, aesthetic quality, and strong instruction following. Failure cases reveal limitations in handling complex poses, objects, or text, and occasional violations of physical laws due to limited data and model capacity.

Figures (images omitted in this summary):
- SimpleAR 1024×1024 generation result
- Pretraining and SFT loss diagram
- GRPO objective diagram
- GenEval and DPG benchmark comparison
- SFT/RL GenEval performance
- Inference speed with KV Cache and vLLM
- Speculative Jacobi Decoding
- DPG prompt generation result
- GenEval prompt generation result
- Failure cases

Conclusion

SimpleAR demonstrates that a minimalist autoregressive architecture can rival diffusion models in high‑resolution text‑to‑image synthesis while remaining parameter‑efficient. The three‑stage training pipeline, combined with RL fine‑tuning and modern inference optimizations, provides a viable path for scalable, fast, and controllable visual generation.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: inference acceleration, benchmark, reinforcement learning, pretraining, autoregressive generation, visual AI