Code DAO
Author

We publish AI algorithm tutorials and the latest news, curated by a team of researchers from Peking University, Shanghai Jiao Tong University, Central South University, and leading AI companies such as Huawei, Kuaishou, and SenseTime. Join us in the AI alchemy and make life better!

100 Articles · 0 Likes · 0 Views · 0 Comments
Latest from Code DAO

Code DAO
Dec 11, 2021 · Artificial Intelligence

Using DCGAN to Generate Synthetic Marine Plastic Images

This article explains how to apply a Deep Convolutional GAN in PyTorch to create realistic synthetic images of marine plastic: it addresses dataset scarcity, details the network architecture and training procedure, and shows loss curves and generated samples.

DCGAN · GAN · Image Generation
0 likes · 13 min read
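
For a taste of what the article covers, here is a minimal PyTorch sketch of a DCGAN-style generator; the sizes (100-d noise, 64×64 RGB output, base width 64) are illustrative assumptions, not the article's exact architecture.

    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """DCGAN generator: upsample a noise vector to a 64x64 RGB image
        with strided transposed convolutions, BatchNorm, and ReLU."""
        def __init__(self, z_dim=100, feat=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),    # 1x1 -> 4x4
                nn.BatchNorm2d(feat * 8), nn.ReLU(True),
                nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),  # -> 8x8
                nn.BatchNorm2d(feat * 4), nn.ReLU(True),
                nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),  # -> 16x16
                nn.BatchNorm2d(feat * 2), nn.ReLU(True),
                nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),      # -> 32x32
                nn.BatchNorm2d(feat), nn.ReLU(True),
                nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),             # -> 64x64
                nn.Tanh(),  # matches images normalized to [-1, 1]
            )

        def forward(self, z):
            return self.net(z)

    fake = Generator()(torch.randn(16, 100, 1, 1))  # torch.Size([16, 3, 64, 64])
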
Code DAO
Dec 10, 2021 · Artificial Intelligence

Understanding Variational Autoencoders: From Dimensionality Reduction to Generative Modeling

This article explains the principles of variational autoencoders, starting with dimensionality reduction techniques such as PCA and standard autoencoders, highlighting their limitations for data generation, and then detailing VAE's regularized latent space, variational inference, re‑parameterization, and loss formulation.

Deep Learning · KL divergence · VAE
0 likes · 18 min read
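
A minimal PyTorch sketch of the two VAE ingredients the summary names, the re-parameterization trick and the regularized loss; encoder and decoder are omitted, and the Bernoulli reconstruction term is an assumption that suits binarized inputs.

    import torch
    import torch.nn.functional as F

    def reparameterize(mu, logvar):
        """z = mu + sigma * eps keeps sampling differentiable w.r.t. mu, logvar."""
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def vae_loss(recon_x, x, mu, logvar):
        """ELBO objective: reconstruction term + KL(q(z|x) || N(0, I))."""
        recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl
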
Code DAO
Dec 8, 2021 · Artificial Intelligence

Understanding Compact Transformers: Build and Train Vision & NLP Models on a Personal PC

This article walks through the design of Compact Transformers, explaining scaled dot‑product self‑attention, positional embeddings, multi‑head attention, and Vision Transformer architecture, and provides full PyTorch code so readers can train lightweight CV and NLP classifiers on a single PC.

Compact Transformers · Multi-Head Attention · Patch Embedding
0 likes · 19 min read
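
The core operation the article builds on is scaled dot-product attention; a self-contained PyTorch sketch, with illustrative tensor shapes:

    import math
    import torch

    def scaled_dot_product_attention(q, k, v):
        """softmax(Q K^T / sqrt(d_k)) V, computed independently per head."""
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        return torch.softmax(scores, dim=-1) @ v

    q = k = v = torch.randn(2, 8, 16, 32)        # (batch, heads, seq_len, head_dim)
    out = scaled_dot_product_attention(q, k, v)  # same shape as v
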
Code DAO
Dec 8, 2021 · Artificial Intelligence

Optimizers and Schedulers in Neural Network Architecture: A Detailed Guide

This article explains how optimizers and learning‑rate schedulers work, how to configure their hyperparameters and parameter groups, and how to apply differential learning rates and adaptive schedules in PyTorch and Keras to improve model training and transfer‑learning performance.

Keras · PyTorch · hyperparameter tuning
0 likes · 10 min read
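
For a flavor of parameter groups and differential learning rates, a minimal PyTorch sketch; the two-layer model and the learning rates are placeholders standing in for a pretrained backbone and a freshly initialized head.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
    backbone, head = model[0], model[2]

    # Two parameter groups: the "pretrained" backbone trains more slowly
    # than the new head (differential learning rates).
    optimizer = torch.optim.Adam([
        {"params": backbone.parameters(), "lr": 1e-4},
        {"params": head.parameters(), "lr": 1e-3},
    ])

    # Multiply every group's learning rate by 0.1 every 5 epochs.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)

    for epoch in range(15):
        # ... forward pass, loss.backward(), optimizer.step(), optimizer.zero_grad() ...
        scheduler.step()  # advance the schedule once per epoch
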
Code DAO
Dec 7, 2021 · Artificial Intelligence

How to Cluster Text with TF‑IDF, KMeans and PCA in Python

This article walks through a complete Python workflow that loads the 20 Newsgroups dataset, preprocesses the documents, vectorizes them with TF‑IDF, groups them using KMeans, reduces dimensions with PCA, and visualizes the resulting clusters, illustrating each step with code and plots.

KMeans · NLP · PCA
0 likes · 13 min read
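
The whole pipeline fits in a few lines of scikit-learn; this sketch restricts the data to three categories and 5,000 TF-IDF features to keep the demo small, which may differ from the article's settings.

    from sklearn.cluster import KMeans
    from sklearn.datasets import fetch_20newsgroups
    from sklearn.decomposition import PCA
    from sklearn.feature_extraction.text import TfidfVectorizer

    cats = ["sci.space", "rec.autos", "talk.politics.guns"]
    docs = fetch_20newsgroups(subset="train", categories=cats,
                              remove=("headers", "footers", "quotes")).data

    # Vectorize with TF-IDF, then cluster the sparse matrix with KMeans.
    X = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(docs)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # Project to 2-D with PCA for a scatter plot of the clusters.
    coords = PCA(n_components=2).fit_transform(X.toarray())
    print(coords.shape, labels[:10])
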
Code DAO
Dec 6, 2021 · Artificial Intelligence

Why So Many Optimizers? Core Algorithms Behind Neural Network Training

This article explains the fundamental gradient‑descent optimizers used in neural networks—SGD, Momentum, RMSProp, Adam and their variants—illustrates loss‑surface challenges such as local minima, saddle points and ravines, and shows how techniques like mini‑batching, momentum, adaptive learning rates and scheduling address these issues.

Adam · Deep Learning · Momentum
0 likes · 11 min read
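
The update rules themselves are only a few lines each; a NumPy sketch of the momentum and Adam steps the article compares, with the usual textbook hyperparameter defaults:

    import numpy as np

    def momentum_step(w, grad, v, lr=0.01, beta=0.9):
        """SGD with momentum: the velocity accumulates past gradients."""
        v = beta * v + grad
        return w - lr * v, v

    def adam_step(w, grad, m, s, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
        """Adam: momentum on the gradient plus a per-parameter adaptive scale."""
        m = b1 * m + (1 - b1) * grad       # first moment (running mean)
        s = b2 * s + (1 - b2) * grad**2    # second moment (running magnitude)
        m_hat = m / (1 - b1**t)            # bias correction for early steps
        s_hat = s / (1 - b2**t)
        return w - lr * m_hat / (np.sqrt(s_hat) + eps), m, s

    # One illustrative step from w = [1, 2] with a fixed gradient.
    w, v = np.array([1.0, 2.0]), np.zeros(2)
    w, v = momentum_step(w, np.array([0.1, -0.2]), v)
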
Code DAO
Dec 5, 2021 · Artificial Intelligence

Why DropBlock Outperforms Dropout as an Image Regularizer

This article demonstrates how to implement DropBlock in PyTorch, explains why Dropout fails on image data, details the gamma calculation and mask generation, and shows visual comparisons that illustrate the superiority of contiguous region dropping over random pixel dropout.

Computer Vision · Deep Learning · DropBlock
0 likes · 11 min read
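
A simplified PyTorch sketch of the mask generation the summary mentions: sample block centers with probability gamma, expand each center to a square with max-pooling, then rescale like inverted dropout. For brevity the mask is shared across channels and centers are sampled over the whole feature map, which differs slightly from the paper.

    import torch
    import torch.nn.functional as F

    def drop_block(x, block_size=5, drop_prob=0.1):  # block_size must be odd here
        n, _, h, w = x.shape
        # gamma rescales drop_prob so the expected dropped area still matches
        # drop_prob after each center grows into a block_size x block_size region.
        gamma = ((drop_prob / block_size ** 2) * (h * w)
                 / ((h - block_size + 1) * (w - block_size + 1)))
        centers = (torch.rand(n, 1, h, w, device=x.device) < gamma).float()
        mask = 1.0 - F.max_pool2d(centers, block_size, stride=1,
                                  padding=block_size // 2)  # 0 = drop, 1 = keep
        return x * mask * (mask.numel() / mask.sum())  # rescale kept activations

    out = drop_block(torch.randn(4, 16, 32, 32))  # training-time use only
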
Code DAO
Dec 5, 2021 · Artificial Intelligence

Why Neural Networks Need Batch Normalization: Principles and Mechanics

This article explains the principle behind Batch Normalization: why it is essential for training deep neural networks, how it standardizes activations, the role of the learnable scale and shift parameters, and the computation steps during training and inference. It also discusses where to place the layer within a model.

Batch Normalization · Deep Learning · gradient descent
0 likes · 9 min read
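
The training/inference split is the crux; a minimal sketch of the forward computation for 2-D inputs, using the biased batch variance throughout (PyTorch's own running-statistics update differs slightly).

    import torch

    def batch_norm(x, gamma, beta, run_mean, run_var, training,
                   momentum=0.1, eps=1e-5):
        if training:
            mean = x.mean(dim=0)                    # per-feature batch statistics
            var = x.var(dim=0, unbiased=False)
            run_mean.mul_(1 - momentum).add_(momentum * mean)  # track for inference
            run_var.mul_(1 - momentum).add_(momentum * var)
        else:
            mean, var = run_mean, run_var           # deterministic at inference
        x_hat = (x - mean) / torch.sqrt(var + eps)  # standardized activations
        return gamma * x_hat + beta                 # learnable scale and shift

    x = torch.randn(32, 8)
    out = batch_norm(x, torch.ones(8), torch.zeros(8),
                     torch.zeros(8), torch.ones(8), training=True)
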