Tag: GPU optimization

Articles collected around this technical thread.

DataFunSummit
Jan 21, 2025 · Artificial Intelligence

NVIDIA NeMo Full Stack: End‑to‑End Large Language Model Training, Alignment, and RLHF

This article presents NVIDIA's NeMo technology stack for end‑to‑end large language model (LLM) training, covering the full software pipeline, model alignment with reinforcement learning from human feedback (RLHF), performance optimizations such as model parallelism, FP8, TensorRT‑LLM inference, dynamic load balancing, and future research directions.

GPU optimization · LLM · NeMo
24 min read
DataFunSummit
Oct 5, 2024 · Artificial Intelligence

Optimizing TorchRec for Large‑Scale Recommendation Systems on PyTorch

This article details the performance‑focused optimizations applied to TorchRec, PyTorch's large‑scale recommendation system library, including CUDA graph capture, multithreaded kernel launches, pinned memory copies, and input‑distribution refinements that together achieve a 2.25× speedup on MLPerf DLRM‑DCNv2 across 16 DGX H100 nodes.
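
The CUDA-graph technique mentioned above can be illustrated in plain Python: a fixed sequence of operations is recorded once, then replayed without per-call dispatch overhead. This is a minimal conceptual sketch with illustrative names, not TorchRec's actual API.

```python
# Sketch of the CUDA-graph idea: capture a fixed op sequence once,
# then replay it with no per-op dispatch logic on each call.
class OpGraph:
    def __init__(self):
        self.ops = []  # captured (fn, args) pairs

    def capture(self, fn, *args):
        self.ops.append((fn, args))

    def replay(self, x):
        # Replay the recorded sequence on a new input.
        for fn, args in self.ops:
            x = fn(x, *args)
        return x

# "Kernels": cheap stand-ins for GPU ops.
def scale(x, k): return [v * k for v in x]
def shift(x, b): return [v + b for v in x]

graph = OpGraph()
graph.capture(scale, 2)
graph.capture(shift, 1)

out = graph.replay([1, 2, 3])  # replayed without re-recording
```

On a real GPU the payoff is that replaying a captured graph skips the per-kernel launch overhead that dominates when kernels are small.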

CUDA Graph · GPU optimization · PyTorch
11 min read
JD Retail Technology
Aug 30, 2024 · Artificial Intelligence

GPU Optimization Practices for Training and Inference in JD Advertising Recommendation Systems

The article details JD Advertising's technical challenges and solutions for large‑scale sparse recommendation models, describing GPU‑focused storage, compute and I/O optimizations for both training and low‑latency inference, including distributed pipelines, heterogeneous deployment, batch aggregation, multi‑stream execution, and compiler extensions.

GPU optimization · Inference · Recommendation systems
13 min read
Baidu Geek Talk
Aug 26, 2024 · Artificial Intelligence

RLHF Performance Optimization: PPO Algorithm Acceleration Techniques

The article presents three RLHF‑PPO acceleration techniques—TRT‑LLM‑based text generation speedups, selective activation recomputation with sequence parallelism for dynamic memory reduction, and overlapping pipeline stages for system‑level parallelism—demonstrating a 350% throughput boost on a 10B model using 16 A100 GPUs.
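
The selective-recomputation idea in the summary can be sketched in plain Python: checkpoint activations only every k-th layer in the forward pass, and recompute intermediate ones from the nearest checkpoint when needed. Function names and the toy layers are illustrative, not the article's code.

```python
# Sketch of selective activation recomputation: store only every
# checkpoint_every-th activation; recompute the rest on demand.
def forward(x, layers, checkpoint_every=2):
    saved = {0: x}  # layer index -> input activation at that layer
    for i, f in enumerate(layers):
        x = f(x)
        if (i + 1) % checkpoint_every == 0:
            saved[i + 1] = x  # checkpointed activation
    return x, saved

def activation_at(i, layers, saved):
    # Recompute the input to layer i from the nearest earlier checkpoint.
    j = max(k for k in saved if k <= i)
    x = saved[j]
    for f in layers[j:i]:
        x = f(x)
    return x

layers = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3, lambda v: v * v]
out, saved = forward(5, layers)
# Only checkpoints {0, 2, 4} stay in memory; layer 3's input is recomputed.
x3 = activation_at(3, layers, saved)
```

The trade is extra forward compute for a large cut in peak activation memory, which is exactly what makes room for longer sequences during PPO generation.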

GPU optimization · Large Language Models · PPO optimization
16 min read
Baidu Tech Salon
Aug 20, 2024 · Artificial Intelligence

PaddlePaddle Neural Network Compiler (CINN): Architecture, Optimization Techniques, and Performance

The PaddlePaddle Neural Network Compiler (CINN) combines a PIR‑based frontend and a hardware‑specific backend to apply graph‑level optimizations, operator fusion, schedule transformations and automatic tuning, delivering up to 4× faster kernels and 30‑60% overall speed‑ups for deep‑learning and scientific workloads.
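
The operator-fusion step the summary describes can be shown conceptually: instead of materializing an intermediate buffer after every elementwise op, a compiler composes them and emits a single loop. A minimal sketch with hypothetical helper names, not CINN's actual IR:

```python
# Sketch of elementwise operator fusion: compose the ops, then make
# a single pass over the data with no intermediate buffers.
def fuse(*ops):
    def fused(v):
        for op in ops:
            v = op(v)
        return v
    return fused

def run_fused(data, *ops):
    f = fuse(*ops)
    return [f(v) for v in data]  # one loop instead of len(ops) loops

out = run_fused([1.0, 2.0, 3.0],
                lambda v: v * 2,       # scale
                lambda v: v + 1,       # bias
                lambda v: max(v, 0.0)) # relu
```

On a GPU, the equivalent fusion turns several memory-bound kernels into one, which is where the kernel-level speedups quoted above come from.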

CINN · GPU optimization · Neural Network Compiler
19 min read
DataFunSummit
Aug 8, 2024 · Artificial Intelligence

GPU Throughput and Low‑Latency Optimization Practices in JD Advertising

This article presents JD Advertising's technical practices for improving GPU throughput and reducing latency in large‑scale recommendation scenarios, covering system challenges, storage and compute optimizations for training, low‑latency inference techniques, and compiler extensions to handle massive sparse models.

AI · GPU optimization · Recommendation systems
13 min read
JD Tech
Mar 18, 2024 · Artificial Intelligence

High‑Performance Inference Architecture: Distributed Graph Heterogeneous Computing Framework and GPU Multi‑Stream Optimization

The article describes how JD’s advertising team tackled the high‑concurrency, low‑latency challenges of online recommendation inference by designing a distributed graph heterogeneous computing framework, optimizing GPU kernel launches with TensorBatch, deep‑learning compiler techniques, and a multi‑stream GPU architecture, achieving significant throughput and latency improvements.
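
The TensorBatch idea mentioned above—merging many small per-request operations into one batched call to amortize launch overhead—can be sketched in plain Python. The function name and counter are illustrative, not JD's implementation.

```python
# Sketch of batched kernel launches: one call over the concatenated
# batch instead of one launch per request.
launches = 0

def batched_rowsum(batch):
    global launches
    launches += 1  # a single "kernel launch" for the whole batch
    return [sum(row) for row in batch]

# Four requests served by one launch rather than four.
requests = [[1, 2], [3, 4], [5, 6], [7, 8]]
results = batched_rowsum(requests)
```

The win is purely in fixed per-launch cost: with thousands of tiny ops per request, launch overhead can dominate actual compute, so aggregation directly raises throughput.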

AI inference · GPU optimization · deep learning compiler
14 min read
DataFunTalk
Dec 1, 2023 · Artificial Intelligence

GPU‑Driven Model Service and Optimization Practices in Xiaohongshu's Search Scenario

This article details Xiaohongshu's end‑to‑end GPU‑centric transformation for search‑related machine‑learning models, covering model characteristics, training and inference frameworks, system‑level GPU and CPU optimizations, multi‑card and compilation techniques, and future directions for scaling large sparse and dense models.

GPU optimization · Inference · Xiaohongshu
16 min read
Baidu Tech Salon
Nov 10, 2023 · Artificial Intelligence

Baidu Search Deep Learning Model Architecture and Optimization Practices

Baidu's Search Architecture team details how its deep‑learning models have evolved to deliver direct answer results via semantic embeddings, describes a massive online inference pipeline that rewrites queries, ranks relevance, and classifies types, and outlines optimization techniques—including data I/O, CPU/GPU balancing, pruning, quantization, and distillation—to achieve high‑throughput, low‑latency search.

Baidu · GPU optimization · Inference System
13 min read
Alimama Tech
Sep 12, 2023 · Artificial Intelligence

Megatron-LLaMA: High-Performance Large Language Model Training Framework

Megatron-LLaMA is an open‑source high‑performance training framework for LLaMA models, offering tensor, pipeline, and sequence parallelism, an overlapped optimizer, and near‑linear scalability, achieving up to 176% speedup on 32 GPUs and robust performance even with limited network bandwidth.

DeepSpeed · GPU optimization · Llama
10 min read
Xiaohongshu Tech REDtech
May 15, 2023 · Artificial Intelligence

GPU-Accelerated Inference Optimization for Large-Scale Machine Learning at Xiaohongshu

Xiaohongshu transformed its recommendation, advertising, and search inference pipeline by migrating to GPU‑centric hardware, deploying a custom TensorFlow‑Core Lambda service, and applying system‑level, virtualization, and compute‑level optimizations—including NUMA binding, kernel fusion, dynamic scaling, and FP16 quantization—achieving roughly 30× compute capacity growth, over 10% user‑metric gains, and more than 50% cluster‑resource savings.
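
The FP16 quantization step named above can be demonstrated with the standard library alone: round-tripping a value through IEEE 754 half precision (the `struct` format `'e'`) shows the precision loss traded for halved memory and higher throughput. This sketch shows the numeric effect only, not Xiaohongshu's deployment path.

```python
import struct

# Sketch of FP16 quantization: round-trip a Python float through
# IEEE 754 half precision and observe the rounding.
def to_fp16(x):
    return struct.unpack('e', struct.pack('e', x))[0]

vals = [0.1, 1.0, 3.14159]
halved = [to_fp16(v) for v in vals]
# 1.0 is exactly representable; 0.1 and pi are rounded to ~3 decimal digits.
```

In practice the serving framework stores weights and activations in this format and relies on tensor-core FP16 math; the accuracy cost is usually negligible for ranking models, which is why the article pairs it with the other system-level optimizations.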

GPU optimization · Large Models · Machine Learning Inference
20 min read
Alimama Tech
Oct 26, 2022 · Artificial Intelligence

GPU Utilization Analysis and Optimization for Alibaba's Intelligent Creative Video Service

The article analyzes why Alimama's intelligent creative video service suffers low GPU utilization—due to Python GIL blocking, lack of kernel fusion, and serialized CUDA streams—and details service‑level changes (separate CPU/GPU processes, shared‑memory queues, priority scheduling) and operator‑level kernel‑fusion techniques (channels‑last layouts, custom pooling, TensorRT conversion) that raise utilization from ~30% to nearly 100% and boost throughput by 75%.
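
The service-level decoupling described above—CPU preprocessing and GPU inference in separate workers joined by a queue—can be sketched with a bounded queue and two threads. The stage bodies are trivial stand-ins, not Alimama's service code.

```python
import queue
import threading

# Sketch of CPU/GPU stage decoupling: a bounded queue lets the CPU
# producer and the "GPU" consumer run concurrently instead of serially.
q = queue.Queue(maxsize=4)
results = []

def cpu_preprocess(frames):
    for f in frames:
        q.put(f * 2)   # stand-in for decode + preprocess
    q.put(None)        # sentinel: stream finished

def gpu_infer():
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item + 1)  # stand-in for a GPU kernel

t1 = threading.Thread(target=cpu_preprocess, args=(range(5),))
t2 = threading.Thread(target=gpu_infer)
t1.start(); t2.start()
t1.join(); t2.join()
```

The article's production version goes further—separate processes with shared-memory queues to escape the GIL entirely—but the pipelining principle is the same: neither side waits for the other to finish a whole frame.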

GPU optimization · Python · TensorRT
20 min read
Alimama Tech
May 11, 2022 · Artificial Intelligence

PICASSO: An Industrial-Scale Sparse Training Engine for Wide-and-Deep Recommender Systems

PICASSO, Alibaba’s GPU‑centric sparse training engine for wide‑and‑deep recommender systems, merges identical embedding tables, interleaves data and kernel operations, and caches hot embeddings on GPU, eliminating the parameter server and delivering up to tenfold speedups over TensorFlow‑PS while maintaining model quality.
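
The hot-embedding caching mentioned above can be sketched as an LRU cache in front of a slow store (GPU memory in front of host/parameter storage in PICASSO). Class and variable names here are illustrative.

```python
from collections import OrderedDict

# Sketch of a hot-embedding cache: frequently hit embedding rows stay
# in a small fast store, with LRU eviction back to the slow store.
class HotEmbeddingCache:
    def __init__(self, capacity, slow_store):
        self.capacity = capacity
        self.fast = OrderedDict()  # id -> row, ordered by recency
        self.slow = slow_store
        self.hits = self.misses = 0

    def lookup(self, key):
        if key in self.fast:
            self.hits += 1
            self.fast.move_to_end(key)       # mark as most recently used
        else:
            self.misses += 1
            self.fast[key] = self.slow[key]  # fetch from slow store
            if len(self.fast) > self.capacity:
                self.fast.popitem(last=False)  # evict least recently used
        return self.fast[key]

table = {i: [float(i)] * 4 for i in range(100)}  # stand-in slow store
cache = HotEmbeddingCache(capacity=2, slow_store=table)
for k in [7, 7, 7, 3, 7, 9]:  # skewed, power-law-like access pattern
    cache.lookup(k)
```

Because recommendation id distributions are heavily skewed, even a small GPU-resident cache absorbs most lookups, which is what lets PICASSO drop the parameter server.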

Alibaba · GPU optimization · Sparse Training
14 min read
Tencent Architect
Aug 4, 2021 · Artificial Intelligence

How We Accelerated Feature Hashing for Ad Ranking on GPUs

This article explains how Tencent's Light platform reduced the massive overhead of feature hashing in ad‑ranking by moving integer‑to‑string conversion and hash computation to the GPU, introducing custom contiguous string tensors, and achieving up to 12× speed‑up on V100 GPUs.
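
The operation the article moves onto the GPU—stringify a feature value, hash it, bucket it into a fixed id space—is easy to show as CPU reference logic. The hash choice (MD5 here) and bucket count are illustrative; the article's platform uses its own GPU hash kernels.

```python
import hashlib

# Sketch of feature hashing for ad ranking: value -> string -> hash
# -> bucket id in a fixed-size embedding space.
def hash_feature(value, num_buckets=2**20):
    s = str(value).encode()  # the int-to-string step the article offloads
    h = int.from_bytes(hashlib.md5(s).digest()[:8], 'little')
    return h % num_buckets

ids = [hash_feature(v) for v in [42, 1337, 42]]
```

Done per feature per sample, this path is pure overhead on the CPU; the article's contribution is running the conversion and hash as batched GPU kernels over contiguous string tensors.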

GPU optimization · Performance Tuning · TensorFlow
14 min read
Kuaishou Large Model
Jul 30, 2021 · Fundamentals

How QuanTaichi Cuts GPU Memory Needs for High‑Fidelity Physics Simulations

QuanTaichi introduces a new language abstraction and compiler system that quantizes simulation data, dramatically reducing memory and bandwidth usage so that high‑precision physical effects—once requiring multiple GPUs—can now run on a single GPU, even on mobile devices.

Compiler · GPU optimization · Taichi
12 min read
DataFunTalk
Mar 25, 2021 · Artificial Intelligence

Optimizing MNN Mobile Neural Network Inference on GPU with OpenCL: Memory Objects, Work‑Group Tuning, and Auto‑Tuning

This article explains how the MNN deep‑learning framework leverages OpenCL to achieve high‑performance inference on mobile, PC and embedded GPUs by diversifying memory objects, aligning data, using local‑memory reductions, selecting optimal work‑group sizes, applying pre‑inference auto‑tuning, caching compiled programs, and providing practical GPU‑friendly model design guidelines.

GPU optimization · MNN · OpenCL
20 min read
360 Smart Cloud
Mar 4, 2021 · Artificial Intelligence

Optimizing BERT Online Service Deployment at 360 Search

This article describes the challenges of deploying a large BERT model as an online service for 360 Search and details engineering optimizations—including framework selection, model quantization, knowledge distillation, stream scheduling, caching, and dynamic sequence handling—that dramatically improve latency, throughput, and resource utilization.
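
The dynamic sequence handling mentioned above typically means bucketing requests by length and padding only to the bucket boundary instead of a global maximum. A minimal sketch; the bucket boundaries and pad id are illustrative, not 360 Search's configuration.

```python
# Sketch of length bucketing: pad each request to the smallest bucket
# that fits it, not to the global maximum sequence length.
BUCKETS = [16, 32, 64, 128]

def bucket_for(length):
    for b in BUCKETS:
        if length <= b:
            return b
    return BUCKETS[-1]  # over-long requests are capped at the last bucket

def pad(tokens, pad_id=0):
    b = bucket_for(len(tokens))
    return tokens + [pad_id] * (b - len(tokens))

padded = pad(list(range(20)))  # a length-20 request pads to 32, not 128
```

Since BERT attention cost grows quadratically with padded length, cutting the average padded length this way directly lowers latency and raises GPU throughput.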

BERT · FP16 quantization · GPU optimization
12 min read
Sohu Tech Products
Dec 24, 2020 · Mobile Development

Reducing Frame Rate in iOS Animations to Lower GPU Usage

The article explains why lowering the frame rate of iOS animations can trade a slight loss in visual smoothness for significant GPU load reduction, describes the Core Animation rendering pipeline, compares different frame‑rate reduction techniques, and presents test results showing the impact on CPU, GPU, and overall app performance.

CADisplayLink · Core Animation · GPU optimization
11 min read
iQIYI Technical Product Team
Jul 3, 2020 · Artificial Intelligence

Optimizing Video Inference Services for High GPU Utilization in AI Applications

By moving decoding, color conversion, preprocessing, inference, and re‑encoding entirely onto the GPU and enabling batch processing with flexible Python scripts, iQIYI's video‑image enhancement service achieved a tenfold throughput increase, over 90% GPU utilization, and dramatically lower resource use, accelerating AI video inference deployment.

AI deployment · DeepStream · FFmpeg
14 min read