Fundamentals of AI‑Generated Image Creation: Diffusion Models and Stable Diffusion

This article provides a comprehensive overview of AI‑generated content (AIGC) for image creation, explaining the role of diffusion models, the architecture of Stable Diffusion—including CLIP, UNet, and VAE—and the underlying mathematical concepts such as Markov chains, Langevin dynamics, and Gaussian distributions.

Nightwalker Tech

What is AIGC?

AIGC (AI‑Generated Content) refers to content produced by artificial intelligence rather than human creators, covering text, images, and, increasingly, video. In the image domain, tools like Midjourney, DALL·E, and Stable Diffusion enable users to generate pictures from textual prompts.

Capabilities of Image AIGC

Image AIGC can turn creative ideas into visual artifacts without requiring artistic skill. Users supply a prompt, and the model synthesizes an image that matches the described concept.

Basic AI Principles

Modern AIGC relies on machine‑learning techniques, especially deep learning. The core pipeline is: train on massive datasets to learn a model, then use that learned model for inference (prediction on new inputs).
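The train-then-infer loop described above can be illustrated with a deliberately tiny model; the data and the least-squares fit below are toy stand-ins for the massive datasets and deep networks the article refers to, not anything Stable Diffusion actually uses:

```python
import numpy as np

# Toy "training set": noisy observations of y = 3x + 1.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(0, 0.1, size=100)

# Training: learn model parameters from data (closed-form least squares).
A = np.hstack([X, np.ones((100, 1))])        # add a bias column
theta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Inference: apply the learned parameters to unseen input.
def predict(x):
    return theta[0] * x + theta[1]
```

The same two-phase structure (expensive training once, cheap inference many times) carries over to diffusion models, just with billions of parameters instead of two.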

Diffusion Models Overview

Among generative models, diffusion models have become dominant for high‑quality image synthesis. They work by progressively adding Gaussian noise to training images (the forward process) and then learning to reverse that noising step by step (the reverse process), so that new samples can be generated starting from pure noise.
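The forward process has a convenient closed form: x_t can be sampled directly from x_0 without simulating every intermediate step. A minimal NumPy sketch, assuming a linear beta schedule (the schedule values below are illustrative choices, not the article's):

```python
import numpy as np

def make_alpha_bars(T=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative product of (1 - beta_t) for a linear beta schedule."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def forward_diffuse(x0, t, alpha_bars, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    ab = alpha_bars[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps

rng = np.random.default_rng(0)
alpha_bars = make_alpha_bars()
x0 = np.ones((4, 4))                                  # toy "image"
x_early = forward_diffuse(x0, 10, alpha_bars, rng)    # still close to x0
x_late = forward_diffuse(x0, 999, alpha_bars, rng)    # nearly pure noise
```

Because alpha_bar shrinks toward zero as t grows, early timesteps barely perturb the image while late timesteps destroy nearly all of its signal; the reverse process is trained to undo exactly these perturbations.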

Stable Diffusion Architecture

Stable Diffusion builds on latent diffusion and consists of three main components:

Step 1: Text encoding – a CLIP text encoder converts the prompt into a 77×768 embedding.
Step 2: Denoising – a modified UNet (with attention and residual blocks) iteratively removes noise from a latent representation, guided by the text embedding. Various schedulers (PNDM, DDIM, K‑LMS, etc.) control the number of denoising steps.
Step 3: Decoding – a VAE decoder transforms the final latent representation back into a pixel‑space image.
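The three steps can be sketched end to end with dummy stand-ins for the real networks. The functions below are placeholders, not the actual CLIP/UNet/VAE; the 77×768 embedding shape comes from the text above, and the 4×64×64 latent (an 8× spatial downscale of a 512×512 image) is the shape used by Stable Diffusion v1:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_prompt(prompt):
    """Stand-in for the CLIP text encoder: prompt -> 77x768 embedding."""
    return rng.standard_normal((77, 768))

def denoise_step(latent, text_emb, t):
    """Stand-in for one UNet call: predict and subtract a bit of noise.
    (t would index the noise schedule; unused in this dummy.)"""
    predicted_noise = rng.standard_normal(latent.shape) * 0.1
    return latent - predicted_noise

def decode_latent(latent):
    """Stand-in for the VAE decoder: 4x64x64 latent -> 3x512x512 image
    via the 8x spatial upscale used by Stable Diffusion v1."""
    return np.repeat(np.repeat(latent[:3], 8, axis=1), 8, axis=2)

text_emb = encode_prompt("a photo of an astronaut riding a horse")
latent = rng.standard_normal((4, 64, 64))    # start from pure noise
for t in reversed(range(20)):                # the scheduler picks the steps
    latent = denoise_step(latent, text_emb, t)
image = decode_latent(latent)
```

The point of the sketch is the data flow: text is encoded once, the UNet runs many times on a small latent, and the VAE decoder runs once at the end — which is why latent diffusion is so much cheaper than diffusing in pixel space.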
[Figure: Stable Diffusion workflow diagram]

Key Sub‑Modules

CLIP (Contrastive Language‑Image Pre‑training) learns joint text‑image embeddings by maximizing similarity of matching pairs and minimizing it for mismatched pairs.
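This contrastive objective can be written down in a few lines. The following is a minimal NumPy sketch of the symmetric InfoNCE loss (the batch size, embedding width, and temperature below are illustrative, not CLIP's actual training configuration):

```python
import numpy as np

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive (InfoNCE) loss: row i of each matrix is a
    matching image-text pair, so the diagonal holds the positives."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature       # scaled cosine similarities

    def ce_diag(l):
        # mean cross-entropy with the diagonal as the correct class
        l = l - l.max(axis=1, keepdims=True)
        log_softmax = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_softmax))

    # Average the image->text and text->image directions.
    return 0.5 * (ce_diag(logits) + ce_diag(logits.T))

rng = np.random.default_rng(0)
emb = rng.standard_normal((8, 32))
aligned = clip_loss(emb, emb)                       # matching pairs line up
shuffled = clip_loss(emb, np.roll(emb, 1, axis=0))  # every pair mismatched
```

Perfectly aligned pairs give a near-zero loss, while mismatched pairs are penalized heavily — exactly the "maximize matching, minimize mismatched" behavior described above.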

UNet serves as the denoising network; its encoder‑decoder structure with skip connections and attention blocks enables efficient high‑resolution generation.
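The encoder-decoder-with-skips shape can be seen in a toy pass where pooling and repetition stand in for learned convolutions (everything below is a structural sketch with no trained weights):

```python
import numpy as np

def downsample(x):
    """Halve spatial resolution by 2x2 average pooling."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample(x):
    """Double spatial resolution by nearest-neighbour repetition."""
    return np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)

def tiny_unet(x):
    """Skeleton of a UNet pass: the encoder keeps its activations, and the
    decoder concatenates them back in via skip connections."""
    skip1 = x                        # full-resolution features
    d1 = downsample(x)
    skip2 = d1                       # half-resolution features
    d2 = downsample(d1)              # bottleneck
    u1 = np.concatenate([upsample(d2), skip2], axis=0)
    # The slice stands in for a conv that would mix and reduce channels.
    u2 = np.concatenate([upsample(u1[: d1.shape[0]]), skip1], axis=0)
    return u2

x = np.random.default_rng(0).standard_normal((4, 64, 64))
out = tiny_unet(x)
```

The skip connections are what let the decoder recover fine spatial detail that the bottleneck has discarded — important when the network must predict per-pixel noise.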

VAE (Variational Auto‑Encoder) encodes images into a latent space for diffusion and decodes latents back to images, providing a probabilistic framework.
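Two pieces of that probabilistic framework fit in a few lines: the reparameterization trick that makes sampling differentiable, and the closed-form KL term that regularizes the latent space (the numbers below are illustrative inputs, not values from any trained model):

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    """z = mu + sigma * eps: sampling stays differentiable w.r.t. mu, sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL(q(z|x) || N(0, I)) -- the VAE's regularization term."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

rng = np.random.default_rng(0)
mu, logvar = np.array([0.5, -0.2]), np.array([0.1, 0.0])
z = reparameterize(mu, logvar, rng)      # latent sample fed to the decoder
kl = kl_to_standard_normal(mu, logvar)
```

The KL term is zero exactly when the encoder outputs a standard normal, which is what keeps the latent space well-behaved enough for diffusion to operate in it.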

Mathematical Foundations

The diffusion process is modeled as a Markov chain that adds Gaussian noise at each timestep. The reverse process learns to estimate the noise added at each step, typically with a KL‑divergence‑based (variational) training objective. Langevin dynamics describes the stochastic differential equation underlying the sampling procedure.
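Langevin dynamics can be demonstrated on a one-dimensional toy target. Taking the standard normal as the target distribution, its score (the gradient of the log density) is simply -x; the step size and chain counts below are illustrative choices:

```python
import numpy as np

def langevin_step(x, score, step_size, rng):
    """One unadjusted Langevin update:
    x <- x + (step/2) * grad log p(x) + sqrt(step) * noise."""
    return (x + 0.5 * step_size * score(x)
            + np.sqrt(step_size) * rng.standard_normal(x.shape))

# Target: standard normal N(0, 1), whose score is -x.
score = lambda x: -x

rng = np.random.default_rng(0)
x = np.full(500, 5.0)            # 500 chains, all started far from the mode
for _ in range(1000):
    x = langevin_step(x, score, step_size=0.01, rng=rng)
```

Even though every chain starts at 5, the drift-plus-noise updates converge to samples with mean near 0 and standard deviation near 1. Score-based diffusion samplers do the same thing in image space, with a neural network supplying the score.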

[Figure: Mathematical illustration of diffusion]

Practical Implications

Diffusion models allow a trade‑off between generation quality and speed by adjusting the number of denoising steps. Faster samplers (e.g., DPM‑Solver) achieve high quality with far fewer steps than early DDPM implementations.

References and Resources

For deeper study, see the original papers:
- Latent Diffusion (arXiv:2112.10752)
- Diffusion Models (arXiv:1503.03585)
- PNDM (arXiv:2202.09778)
- DDPM (arXiv:2006.11239)
- CLIP (arXiv:2103.00020)
- VAE (arXiv:1312.6114)

Open‑source implementations are available on GitHub (e.g., Stability‑AI/stablediffusion, OpenAI/CLIP).

Tags: AI, diffusion model, Stable Diffusion, AIGC
Written by Nightwalker Tech

[Nightwalker Tech] is the tech sharing channel of "Nightwalker", focusing on AI and large model technologies, internet architecture design, high‑performance networking, and server‑side development (Golang, Python, Rust, PHP, C/C++).
