Tag

Large Model Training

DataFunSummit
Feb 17, 2025 · Artificial Intelligence

NorthStar Large‑Model Training Framework: Architecture, APIs, Pipeline and Multi‑GPU Strategies

The article introduces the NorthStar large‑model training framework developed by DeWu, covering the challenges that motivated it, the pipeline architecture, rich API support, multi‑GPU training modes, multi‑level embedding storage, and hardware selection considerations, and closing with a brief Q&A on data versus model parallelism.

AI Framework · Embedding Storage · Large Model Training
9 min read
DataFunSummit
Feb 4, 2025 · Artificial Intelligence

Training Optimization for Large-Scale Multimodal Models in Content Safety

This article examines the challenges of content safety, outlines the limitations of current task‑specific multimodal models, and proposes large‑model‑inspired training optimizations, including diversified data construction, automated annotation, parameter fine‑tuning, and multi‑task evaluation, to improve the efficiency, accuracy, and scalability of multimodal AI systems.

AI optimization · Large Model Training · content safety
26 min read
DataFunSummit
Jan 6, 2025 · Artificial Intelligence

Efficient Large‑Model Training with LLaMA‑Factory: Overview, Techniques, and Applications

This article explains how to train large language models efficiently with LLaMA‑Factory, covering low‑resource training challenges; memory‑saving optimizations for parameters, gradients, and activations; framework features; quick‑start guidance; performance tuning; real‑world case studies; and a detailed Q&A.

AI · DeepSpeed · LLaMA-Factory
10 min read
Baidu Geek Talk
Jul 10, 2024 · Artificial Intelligence

Baidu HPN Network: Solving Hash Collision for 95% Physical Network Bandwidth Efficiency in Large Model Training

Baidu's HPN network eliminates hash‑collision bottlenecks in large‑model training by combining ToR‑affinity scheduling with Dynamic Load Balancing on Baidu's self‑developed switches, raising physical network bandwidth efficiency to about 95%, improving training throughput by roughly 10%, and adding a further 1.5% training‑speed gain through the BCCL collective communication library.

Baidu Cloud · Collective Communication · DLB Dynamic Load Balancing
12 min read
DataFunTalk
Jan 29, 2024 · Artificial Intelligence

PAI‑ChatLearn: A Flexible Large‑Scale RLHF Training Framework for Massive Models

The article introduces PAI‑ChatLearn, a flexible and high‑performance framework developed by Alibaba Cloud's PAI team that supports full‑pipeline RLHF training for large models, explains the evolution of parallel training strategies, details the framework’s architecture and configuration, and showcases performance results and practical usage examples.

AI Framework · Large Model Training · PAI-ChatLearn
17 min read
Architects' Tech Alliance
Sep 11, 2023 · Artificial Intelligence

Open Acceleration Specification AI Server Design Guide (2023): Architecture, OAM Modules, UBB Board, and System Design

The 2023 Open Acceleration Specification AI Server Design Guide details the hardware architecture, OAM module and UBB board specifications, cooling, management, fault diagnosis, and software platform needed to build high‑performance, scalable AI compute clusters for large‑model training.

AI acceleration · Hardware Architecture · Large Model Training
10 min read
Tencent Cloud Developer
Apr 14, 2023 · Artificial Intelligence

Tencent Cloud's Next-Generation HCC High-Performance Computing Cluster for Large Model Training

Tencent Cloud's new HCC high‑performance computing cluster triples the performance of the previous generation: 3.2 Tbps of inter‑server network bandwidth and Xingxinghai servers with NVIDIA H800 GPUs deliver up to 1,979 TFLOPS, while its Xingmai 3.2 Tbps RDMA Ethernet network, TB‑level storage via COS + GooseFS, and multi‑form access (bare metal, cloud servers, containers, functions) enable efficient large‑model training.

AI computing · GPU Cluster · High Performance Computing
9 min read