Tag: RLHF

Articles collected under this tag.

DataFunSummit
Jun 10, 2025 · Artificial Intelligence

How Quwan’s Kaitian Model Tackles Emotional AI for Social Apps – Architecture, Training Tricks, and Safety

Quwan Technology presents its Kaitian social large model, designed for personalized, emotionally rich, multimodal AI interactions, detailing its scene‑specific goals, CPT+SFT+RLHF training pipeline, data desensitization, LoRA fine‑tuning, evaluation methods, pruning, latency trade‑offs, safety mechanisms, and future feedback loops.

AI safety · LoRA · RLHF
0 likes · 13 min read
DaTaobao Tech
Jun 4, 2025 · Artificial Intelligence

Understanding Large Language Model Architecture, Parameters, Memory, Storage, and Fine‑Tuning Techniques

This article provides a comprehensive overview of large language models (LLMs), covering their transformer architecture, parameter counts, GPU memory and storage requirements, and detailed fine‑tuning methods such as prompt engineering, data construction, LoRA, PEFT, RLHF, and DPO, along with practical deployment and inference acceleration strategies.

DPO · Fine-tuning · LLM
0 likes · 17 min read
JD Tech
Apr 30, 2025 · Artificial Intelligence

TimeHF: A Billion‑Scale Time Series Forecasting Model Guided by Human Feedback

The JD Supply Chain algorithm team introduces TimeHF, a billion‑parameter time‑series large model that leverages RLHF to boost demand‑forecast accuracy by over 10%, detailing dataset construction, the PCTLM architecture, a custom RLHF framework (TPO), and extensive SOTA experimental results.

Big Data · Large Language Models · RLHF
0 likes · 10 min read
JD Tech Talk
Apr 11, 2025 · Artificial Intelligence

A Billion-Scale Pure Time Series Large Model: PCTLM with SFT and TPO for Forecasting

This article presents a pioneering billion‑parameter pure time‑series large model (PCTLM) trained on a 1.5‑billion‑sample dataset, introduces a novel RLHF framework (TPO) for time‑series forecasting, and demonstrates state‑of‑the‑art performance across multiple public benchmarks, surpassing existing models such as GPT4TS.

Big Data · PCTLM · RLHF
0 likes · 11 min read
DataFunSummit
Mar 30, 2025 · Artificial Intelligence

RLHF Techniques and Challenges in Large Language Models and Multimodal Applications

This article reviews reinforcement learning, RLHF, and related alignment techniques for large language models and multimodal systems, covering fundamentals, recent advances such as InstructGPT, Constitutional AI, RLAIF, Super Alignment, GPT‑4o, video LLMs, and experimental evaluations of proposed methods.

Large Language Models · Preference Learning · RLHF
0 likes · 26 min read
DataFunTalk
Mar 24, 2025 · Artificial Intelligence

DeepSeek R1: Open‑Source Reasoning Model and Multi‑Stage Training Insights

The interview explores DeepSeek R1's open‑source weights, its multi‑stage training pipeline—including pre‑training, supervised fine‑tuning, and RLHF—alongside innovations such as self‑consistency, chain‑of‑thought prompting, distillation, MoE architectures, and cost considerations, highlighting its impact on the future of large language models.

AI training · Chain-of-Thought · DeepSeek
0 likes · 20 min read
Code Mala Tang
Mar 1, 2025 · Artificial Intelligence

Why Do Large Language Models Hallucinate and How Can We Fix It?

This article explains why large language models produce plausible‑looking but false information, traces the problem to the supervised fine‑tuning stage, and outlines mitigation techniques such as knowledge interrogation, RLHF, and tool‑augmented search to reduce hallucinations.

LLM · RLHF · Web Search
0 likes · 12 min read
Tencent Technical Engineering
Feb 24, 2025 · Artificial Intelligence

Understanding GRPO: Group Relative Policy Optimization in Reinforcement Learning and Large Language Models

The article reviews reinforcement-learning fundamentals and the progression from policy-gradient to PPO, then introduces Group Relative Policy Optimization (GRPO)—a critic-free method that normalizes rewards across multiple sampled outputs to compute group-relative advantages—and shows how DeepSeek-R1 leverages GRPO with rule-based rewards to achieve strong reasoning performance.

GRPO · PPO · RLHF
0 likes · 16 min read
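The group-relative advantage computation summarized above can be sketched in a few lines. This is a minimal illustration of the normalization step GRPO uses in place of a learned critic, not DeepSeek's implementation: sample several outputs per prompt, score each with a reward function, and standardize the rewards within the group.

```python
def group_relative_advantages(rewards):
    """Normalize a group of per-sample rewards into group-relative advantages.

    GRPO scores several sampled outputs for the same prompt, then uses each
    sample's z-score within the group as its advantage, avoiding a critic.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5 or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

# Example: four sampled completions for one prompt with rule-based rewards.
# The best-rewarded sample gets a positive advantage, the worst a negative one,
# and the advantages sum to zero within the group.
advantages = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```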
DevOps
Feb 23, 2025 · Artificial Intelligence

Understanding Reinforcement Learning, RLHF, PPO and GRPO for AI Applications

This article explains how DeepSeek‑R1‑Zero uses group‑relative policy optimization (GRPO) to enhance inference without labeled data, introduces reinforcement learning with human feedback (RLHF) and its components, and compares the PPO and GRPO algorithms, highlighting their suitable engineering scenarios and practical implications for AI applications.

AI model training · GRPO · PPO
0 likes · 15 min read
Top Architect
Feb 9, 2025 · Artificial Intelligence

DeepSeek‑R1: Training Pipeline, Reinforcement‑Learning Techniques, and Experimental Results

The article reviews DeepSeek‑R1’s training methodology—including cold‑start data collection, multi‑stage RL fine‑tuning, SFT data generation, and model distillation—highlights its performance comparable to OpenAI‑o1‑1217, and discusses key contributions, reward design, successful experiments, and failed attempts.

AI research · DeepSeek · LLM
0 likes · 12 min read
Architect
Feb 6, 2025 · Artificial Intelligence

DeepSeek‑R1: Reinforcement‑Learning‑Driven Long‑Chain Reasoning for Large Language Models

The article reviews DeepSeek‑R1, detailing its reinforcement‑learning‑based training pipeline that uses minimal supervised data, cold‑start fine‑tuning, multi‑stage RL, rejection‑sampling SFT, and distillation to achieve reasoning performance comparable to OpenAI‑o1‑1217, while also discussing successful contributions and failed experiments.

AI research · DeepSeek-R1 · LLM reasoning
0 likes · 11 min read
DataFunSummit
Jan 21, 2025 · Artificial Intelligence

NVIDIA NeMo Full Stack: End‑to‑End Large Language Model Training, Alignment, and RLHF

This article presents NVIDIA's NeMo technology stack for end‑to‑end large language model (LLM) training, covering the full software pipeline, model alignment with reinforcement learning from human feedback (RLHF), performance optimizations such as model parallelism, FP8, TensorRT‑LLM inference, dynamic load balancing, and future research directions.

GPU optimization · LLM · NeMo
0 likes · 24 min read
Xiaohongshu Tech REDtech
Jan 2, 2025 · Artificial Intelligence

Xiaohongshu's Self-developed RLHF System for Multimodal Large Language Models: Design, Optimization, and Performance

Xiaohongshu’s team unveiled a self‑developed RLHF system that trains multimodal large language models using heterogeneous and homogeneous network architectures, extensive PPO optimizations, and Medusa speculative sampling, achieving over 50% throughput gains, reduced hardware needs, and 5‑20% performance improvements on zero‑shot benchmarks.

Medusa · PPO · PRM
0 likes · 21 min read
php中文网 Courses
Dec 13, 2024 · Artificial Intelligence

OpenAI Day 2: Launch of Reinforcement Learning from Human Feedback (RLHF) Model for Enhanced AI Capabilities

OpenAI announced on the second day of its twelve‑day event that it has integrated Reinforcement Learning from Human Feedback (RLHF) into its o1 series models, demonstrating significant reasoning improvements, showcasing legal and medical use cases, and promising a public release early next year.

AI Model Fine-tuning · OpenAI · RLHF
0 likes · 5 min read
DataFunTalk
Sep 23, 2024 · Artificial Intelligence

Comprehensive Guide to Selecting, Adapting, and Deploying Large Language Models for Enterprise Applications

This article provides an in‑depth, step‑by‑step guide on how enterprises can choose between open‑source and closed‑source large language models, adapt them through incremental pre‑training, instruction fine‑tuning, and reinforcement learning, and finally deploy them across front‑office, middle‑office, and back‑office scenarios to drive digital transformation.

Large Language Models · RLHF · Retrieval-Augmented Generation
0 likes · 28 min read
Baidu Geek Talk
Aug 26, 2024 · Artificial Intelligence

RLHF Performance Optimization: PPO Algorithm Acceleration Techniques

The article presents three RLHF‑PPO acceleration techniques—TRT‑LLM‑based text generation speedups, selective activation recomputation with sequence parallelism for dynamic memory reduction, and overlapping pipeline stages for system‑level parallelism—demonstrating a 350% throughput boost on a 10B model using 16 A100 GPUs.

GPU optimization · Large Language Models · PPO optimization
0 likes · 16 min read
Tencent Advertising Technology
Aug 15, 2024 · Artificial Intelligence

Enhancing Reinforcement Learning with Label-Sensitive Reward for Natural Language Understanding

This paper introduces RLLR, a label‑sensitive reward reinforcement learning method that improves natural language understanding tasks by aligning training objectives with label accuracy, and demonstrates its effectiveness across eight public NLU datasets and real‑world advertising feature evaluation, outperforming standard RLHF and SFT baselines.

Natural Language Understanding · RLHF · advertising
0 likes · 14 min read
DataFunSummit
Aug 8, 2024 · Artificial Intelligence

Exploring Training and Alignment Techniques for Financial Large Models

The announcement details a DataFun Summit 2024 session where Du Xiaoman AI researcher Huo Liangyu will present on the challenges, development, and alignment methods of the Xuan Yuan financial large language model, highlighting RLHF techniques, data collection, and real‑world deployment insights for the finance sector.

AI · Finance · Financial AI
0 likes · 6 min read
Kuaishou Tech
Jul 18, 2024 · Artificial Intelligence

Multidimensional Preference Model (MPS) for Text-to-Image Generation: Dataset, Architecture, and Experimental Analysis

This article introduces the Multidimensional Preference Model (MPS), the first multi‑dimensional scoring system for evaluating text‑to‑image generation, built on the newly released MHP dataset with extensive human annotations across aesthetic, semantic alignment, detail quality, and overall preference dimensions, and demonstrates its superior performance through comprehensive experiments and RLHF integration.

AI evaluation · MHP dataset · MPS
0 likes · 10 min read
Rare Earth Juejin Tech Community
May 1, 2024 · Artificial Intelligence

Hyper‑SD: Trajectory‑Segmented Consistency Model for Accelerating Diffusion Image Generation

Hyper‑SD introduces a trajectory‑segmented consistency distillation framework that combines trajectory‑preserving and trajectory‑reconstruction strategies, integrates human‑feedback learning and score distillation, and achieves state‑of‑the‑art low‑step image generation performance on both SD1.5 and SDXL models.

AI acceleration · Model Distillation · RLHF
0 likes · 10 min read