Inside MIT’s Deep Generative Models Course: Topics, Schedule, and Resources

MIT’s 6.S978 Deep Generative Models seminar, taught by Associate Professor Kaiming He, offers graduate students a 15‑week deep dive into VAEs, autoregressive models, GANs, diffusion techniques, and cross‑disciplinary applications, with detailed weekly topics, required assignments, and publicly available lecture PDFs.


MIT's 6.S978 Deep Generative Models course, taught by Associate Professor Kaiming He, who joined MIT in February 2024, is a 15‑week graduate‑level seminar that explores the theory and practice of deep generative modeling.

Target audience: graduate students who are conducting or planning research on deep generative models.

Core topics covered: Variational Autoencoders (VAE), Autoregressive (AR) models, Generative Adversarial Networks (GAN), Diffusion models, and their applications in computer vision, robotics, biology, materials science, and other domains.
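To give a flavor of the material, a VAE, the first model family on the syllabus, trains an encoder via the reparameterization trick and a closed‑form KL term. The following is a toy NumPy sketch (my own illustration with made‑up values, not course material):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps, keeping the path differentiable in mu, sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Hypothetical encoder outputs for a 4-dimensional latent.
mu = np.zeros(4)
log_var = np.zeros(4)
z = reparameterize(mu, log_var, rng)
print(kl_to_standard_normal(mu, log_var))  # 0.0 -- the posterior already equals the prior
```

The KL term is one half of the ELBO; the other half, the reconstruction likelihood, would come from a decoder network, which is omitted here.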

Course format: lecturer talks, guest lectures, and student seminars (paper reading, presentations, and discussions). Course requirements:

Attend all lectures and seminars

Complete a problem set every two weeks

Present a paper (20 min talk + 10 min discussion/QA)

Finish a final project and demonstration

Weekly schedule (up to week 10):

Week 1 – Introduction to deep generative models

Week 2 – Modeling image priors, Variational Autoencoders (VAE)

Week 3 – Normalizing flows, Autoregressive (AR) models

Week 4 – AR models and tokenizers

Week 5 – AR, diffusion, Generative Adversarial Networks (GAN)

Week 6 – GANs and diffusion

Week 7 – Energy‑based models, score matching, diffusion models

Week 8 – Diffusion models, denoising diffusion

Week 9 – Discrete diffusion, flow matching (part 1)

Week 10 – Flow matching (part 2), plus a guest lecture by CMU Assistant Professor Jun-Yan Zhu on “Ensuring Data Ownership in Generative Models”
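The diffusion weeks (7–9) center on denoising diffusion, whose forward process gradually replaces data with Gaussian noise. A minimal NumPy sketch of the DDPM forward (noising) step, with an illustrative linear schedule of my own choosing rather than anything from the course:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule over T steps (illustrative values).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # abar_t = prod_{s<=t} (1 - beta_s)

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N( sqrt(abar_t) * x0, (1 - abar_t) * I )."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return x_t, eps

x0 = rng.standard_normal(8)          # toy "image" of 8 pixels
x_t, eps = q_sample(x0, t=T - 1, rng=rng)
# At large t, abar_t is close to 0, so x_t is nearly pure Gaussian noise.
```

Training a denoiser then amounts to regressing `eps` from `x_t` and `t`; sampling reverses the chain step by step. Flow matching (weeks 9–10) replaces this stochastic chain with a learned deterministic interpolation between noise and data.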

The first five lecture slides are publicly available:

Lecture 1: https://mit-6s978.github.io/assets/pdfs/lec1_intro.pdf

Lecture 2: https://mit-6s978.github.io/assets/pdfs/lec2_vae.pdf

Lecture 3: https://mit-6s978.github.io/assets/pdfs/lec3_ar.pdf

Lecture 4: https://mit-6s978.github.io/assets/pdfs/lec4_gan.pdf

Lecture 5: https://mit-6s978.github.io/assets/pdfs/lec5_diffusion.pdf

Future weeks will cover video, 3D geometry, robotics, materials science, protein folding, and biology, and will feature a guest lecture by Yang Song, who leads OpenAI’s Strategic Explorations team, on “Consistency Models”.

Course website: https://mit-6s978.github.io/

Tags: GAN, Diffusion Models, Variational Autoencoder, MIT, Deep Generative Models, Kaiming He
Written by

NewBeeNLP

Always insightful, always fun
