Tag: AUC


DataFunTalk
May 20, 2021 · Artificial Intelligence

Fundamentals and Nuances of CTR (Click‑Through Rate) Modeling

This article explains the theoretical foundations of CTR modeling: why click‑through rates are intrinsically unpredictable at the micro level, the simplifying assumptions that make binary classification feasible, and how AUC‑based evaluation, contradictory samples, theoretical AUC upper bounds, and calibration affect model performance.

AUC · Machine Learning · advertising
18 min read
DataFunTalk
May 17, 2021 · Artificial Intelligence

Comprehensive Overview of Machine Learning Model Evaluation Metrics

This article provides a comprehensive summary of machine learning model evaluation metrics, covering accuracy, precision, recall, F1, RMSE, ROC/AUC, the KS test, and scorecards, with explanations, formulas, code examples, and practical considerations for model performance assessment.
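
The metrics this article covers follow directly from the confusion-matrix counts. As a minimal sketch (with made-up labels and predictions, not data from the article), accuracy, precision, recall, and F1 can be computed from their standard formulas without any library:

```python
def classification_metrics(y_true, y_pred):
    """Return (accuracy, precision, recall, f1) for binary labels 0/1."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # false negatives
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # true negatives
    accuracy = (tp + tn) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Toy example: 8 samples, 3 correct positives, 1 false alarm, 1 miss.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
print(f"accuracy={acc} precision={prec} recall={rec} f1={f1}")
```

Note that F1 is the harmonic mean of precision and recall, so a model cannot inflate it by maximizing only one of the two.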

AUC · KS · Machine Learning
19 min read
Alimama Tech
May 13, 2021 · Artificial Intelligence

Fundamentals and Misconceptions of CTR (Click-Through Rate) Modeling

CTR modeling predicts click probabilities despite inherent microscopic randomness, treating each impression as an i.i.d. Bernoulli event and framing the task as binary classification; because data are noisy and imbalanced, evaluation relies on AUC rather than accuracy, with theoretical upper bounds set by feature quality, and calibration is needed to align predicted values with observed frequencies.
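
The i.i.d. Bernoulli framing can be illustrated with a short simulation (an illustrative sketch with an assumed true click probability, not code from the article): any single impression's outcome is unpredictable, but the empirical click rate over many impressions converges to the underlying probability, which is what makes CTR learnable despite microscopic randomness:

```python
import random

random.seed(42)
p_true = 0.05        # assumed "true" CTR of one ad slot (made-up value)
n = 200_000          # number of simulated independent impressions

# Each impression is an independent Bernoulli(p_true) trial.
clicks = sum(random.random() < p_true for _ in range(n))
ctr_observed = clicks / n
print(f"observed CTR = {ctr_observed:.4f} (true = {p_true})")
```

By the law of large numbers the gap shrinks like 1/sqrt(n); a well-calibrated model's predicted probabilities should match these observed frequencies in the same way.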

AUC · Machine Learning · binary classification
20 min read
DataFunTalk
Apr 24, 2020 · Artificial Intelligence

Common Pitfalls in Recommendation Systems: Metrics, Exploration‑Exploitation, and Offline‑Online Discrepancies

The article surveys typical challenges in recommendation systems, including ambiguous evaluation metrics, the trade‑off between precise algorithms and user experience, the exploration‑exploitation dilemma, and why offline AUC improvements often lead to online CTR/CPM drops due to data leakage, feature inconsistency, and distribution shifts.

AUC · CTR · data leakage
14 min read
DataFunTalk
Oct 12, 2019 · Fundamentals

Understanding AUC: Interpretation, Properties, and Practical Considerations in Ranking Systems

This article provides a comprehensive overview of the AUC metric used in ranking tasks, discussing its various interpretations, key properties such as score‑independence and sampling robustness, its relationship to business metrics, common pitfalls, and advanced variations like group AUC.
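
The probabilistic interpretation the article discusses can be made concrete: AUC equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, with ties counted as one half. A minimal sketch (toy labels and scores invented here) computes it by direct pair enumeration and also demonstrates the score-independence property, since any order-preserving rescaling leaves the value unchanged:

```python
def auc_by_pairs(labels, scores):
    """AUC as the win rate of positive scores over negative scores."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.7, 0.6, 0.2, 0.4, 0.4]
auc = auc_by_pairs(labels, scores)
# Multiplying every score by 10 preserves the ranking, so AUC is unchanged.
auc_scaled = auc_by_pairs(labels, [s * 10 for s in scores])
print(auc, auc_scaled)
```

This O(pos x neg) enumeration is only practical for small samples; production systems compute the same quantity from sorted ranks, but the pairwise definition is the one that connects AUC to ranking quality.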

AUC · Machine Learning · Ranking
8 min read
Baidu Waimai Technology Team
Aug 3, 2017 · Artificial Intelligence

Model Testing and Evaluation Metrics for Strategy Projects in the AI Era

This article explains the challenges of testing machine‑learning models for strategy projects, outlines the overall testing workflow, describes key offline evaluation metrics such as AUC alongside online A/B testing, and summarizes best‑practice procedures for assessing model performance, user experience, and effect differences.

AB testing · AI · AUC
8 min read