Tag: Megatron

DataFunTalk
Dec 6, 2023 · Artificial Intelligence

Distributed Training Techniques and Quantitative Analysis for Large Language Models (GPT‑175B)

This article presents a comprehensive overview of state-of-the-art distributed training methods for large language models, using GPT-175B as a case study to analyze memory, communication, and compute overheads, and recommends practical optimizations such as tensor, pipeline, and sequence parallelism, the ZeRO-1 optimizer, and selective activation checkpointing.

GPU memory optimization · LLM · Megatron
22 min read
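For a rough sense of the memory numbers such an analysis deals with, here is a back-of-envelope sketch using the standard mixed-precision Adam accounting (roughly 16 bytes of state per parameter); the figures are illustrative, not taken from the article:

```python
# Back-of-envelope training-memory estimate for GPT-175B under
# mixed-precision Adam: fp16 weights (2 B) + fp16 gradients (2 B)
# + fp32 master weights, momentum, and variance (4 B each) = 16 B/param.
PARAMS = 175e9
BYTES_PER_PARAM = 2 + 2 + 4 + 4 + 4  # = 16

total_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"model + optimizer state: {total_gb:.0f} GB")           # ~2800 GB
print(f"at 80 GB per GPU, >= {total_gb / 80:.0f} GPUs just to hold it")
```

Numbers at this scale are why the sharding strategies the article recommends (tensor/pipeline parallelism, ZeRO-1) are necessary at all: no single accelerator can hold the training state.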
Alibaba Cloud Infrastructure
Sep 13, 2023 · Artificial Intelligence

Pai‑Megatron‑Patch: Design Principles, Key Features, and End‑to‑End Usage for Large Language Model Training

This article introduces Pai-Megatron-Patch, Alibaba Cloud's open-source training toolkit, explains its non-intrusive patch architecture, enumerates the supported models and features such as weight conversion, Flash-Attention 2.0, and FP8 training with Transformer Engine, and walks through detailed command-line examples for model conversion, pre-training, supervised fine-tuning, inference, and RLHF pipelines.

Deep Learning · FP8 · LLM
19 min read
DataFunSummit
May 25, 2023 · Artificial Intelligence

Intel Announces Aurora genAI: A Trillion-Parameter Generative AI Model Powered by the Aurora Supercomputer

Intel revealed its Aurora genAI project, a generative AI model with up to one trillion parameters that will run on the Aurora supercomputer. The project builds on NVIDIA's Megatron and Microsoft's DeepSpeed frameworks, targets over 2 exaflops of performance, and aims at scientific as well as broader AI applications.

Aurora · Generative AI · HPC
9 min read
DataFunSummit
Jan 5, 2023 · Artificial Intelligence

GPU Acceleration Techniques for Large AI Models: Parallelism, Fusion, and Simplification

These notes explain how GPUs address the massive data volumes, serial dependencies, and high computational complexity of modern AI through three acceleration strategies: parallelism, operator fusion, and simplification. The discussion is illustrated with Megatron-LM, MoE models, and practical compression techniques such as quantization, distillation, and pruning.

AI · GPU · Megatron
16 min read
DataFunTalk
Jan 4, 2023 · Artificial Intelligence

GPU Acceleration Techniques for Large AI Models: Parallelism, Fusion, and Simplification

This article explains how GPUs address the massive data volumes, serial dependencies, and high computational complexity of modern AI through three acceleration strategies: parallelism, operator fusion, and simplification, detailing methods such as model, pipeline, and tensor parallelism, the Megatron framework, MoE models, and various model compression techniques.

AI · GPU · Megatron
17 min read
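As a minimal illustration of the tensor-parallel idea recurring in both write-ups above, the NumPy sketch below splits one linear layer's weight matrix column-wise across two hypothetical devices; a real framework such as Megatron-LM shards across GPUs and gathers results with collective communication, which this toy version only mimics:

```python
import numpy as np

# Toy tensor (intra-layer) parallelism: shard a weight matrix
# column-wise, compute partial outputs independently, then gather.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))        # a batch of activations
W = rng.standard_normal((8, 6))        # the full weight matrix

W0, W1 = np.split(W, 2, axis=1)        # one column shard per "device"
y0, y1 = x @ W0, x @ W1                # each device multiplies its shard
y = np.concatenate([y0, y1], axis=1)   # stand-in for an all-gather

assert np.allclose(y, x @ W)           # matches the unsharded layer
```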