Tag

Model Efficiency

4 posts collected under this tag.

Data Thinking Notes
Mar 9, 2025 · Artificial Intelligence

How DeepSeek R1 Uses Large‑Scale Reinforcement Learning to Rival OpenAI o1

DeepSeek R1, an open‑source large language model, combines rule‑based, large‑scale reinforcement learning with mixed supervised fine‑tuning data to achieve deep reasoning comparable to OpenAI o1. The model illustrates China's rapid AI progress, the importance of training efficiency, and the democratizing effect of open AI research.

DeepSeek · Model Efficiency · Open‑Source AI
11 min read
DataFunSummit
Mar 22, 2024 · Artificial Intelligence

Multi‑Layer Efficiency Challenges and Emerging Paradigms for Large Language Models

The article discusses how large AI models are converging on a unified architecture that reduces the coupling between tasks and algorithms, outlines the multi‑layer efficiency challenges involved, from model sparsity and quantization to software and infrastructure optimization, and highlights the NVIDIA GTC 2024 and China AI Day events, including registration details.

AI Infrastructure · China AI Day · Model Efficiency
12 min read
DataFunSummit
Mar 14, 2024 · Artificial Intelligence

Multi‑Level Efficiency Challenges and Emerging Paradigms for Large AI Models

The article examines how large AI models are moving toward a unified, low‑knowledge‑density paradigm that raises computational‑efficiency challenges across the model, algorithm, framework, and infrastructure layers, and highlights NVIDIA GTC 2024 China AI Day sessions showcasing practical solutions and upcoming training opportunities.

AI Infrastructure · AI Conferences · Model Efficiency
10 min read
Alimama Tech
Dec 21, 2022 · Artificial Intelligence

Adaptive Parameter Generation Network for Click-Through Rate Prediction

Adaptive Parameter Generation Network (APG) dynamically creates sample‑specific model parameters for click‑through‑rate prediction using low‑rank factorization, parameter sharing, and over‑parameterization, achieving up to 0.2% AUC improvement, 3% CTR lift, and up to 96.6% storage reduction with faster inference.

CTR Prediction · Model Efficiency · Adaptive Parameter Generation
14 min read
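The low‑rank factorization idea in the APG summary can be sketched in a few lines of NumPy. This is a minimal illustration, not Alimama's actual implementation: all dimensions, the condition vector `z`, and the `generate_params` helper are assumptions chosen to show why sample‑specific weights built from shared low‑rank factors need far less storage than a full per‑sample weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: input size d_in, output size d_out, rank k << min(d_in, d_out)
d_in, d_out, k = 64, 32, 4

# Shared low-rank factors, learned once and reused across all samples
U = rng.normal(scale=0.1, size=(d_out, k))
V = rng.normal(scale=0.1, size=(k, d_in))

d_z = 8  # size of the per-sample condition vector (e.g., a user/context embedding)
W_gen = rng.normal(scale=0.1, size=(k * k, d_z))  # tiny parameter-generator network

def generate_params(z):
    # Map the sample's condition vector z to a small k x k core matrix,
    # then expand it through the shared factors: W(z) = U @ core @ V
    core = (W_gen @ z).reshape(k, k)
    return U @ core @ V

x = rng.normal(size=d_in)   # one sample's input features
z = rng.normal(size=d_z)    # that sample's condition vector

W_sample = generate_params(z)  # sample-specific weights, shape (d_out, d_in)
h = W_sample @ x               # output of the adaptive layer

# Storage: a full per-sample matrix vs. shared factors plus the small generator
full_params = d_out * d_in
apg_params = d_out * k + k * d_in + k * k * d_z
print(W_sample.shape, full_params, apg_params)
```

Only the shared factors and the small generator are stored; the dense sample‑specific matrix exists transiently at inference time, which is how this family of methods trades a large static parameter table for on‑the‑fly generation.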