Tag: SHAP

Articles collected under this tag.

DataFunSummit
Sep 3, 2024 · Artificial Intelligence

Metric Attribution on Internet Platforms: Concepts, Methods, and Tool Applications

This article explains metric attribution for internet platforms, covering its definition, a three‑step analytical framework, deterministic and probabilistic methods such as metric decomposition, machine‑learning models with SHAP values, case studies, and a practical tool that guides users through attribution analysis.

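As a taste of the decomposition step mentioned above, here is a minimal sketch of attributing a period-over-period metric change to multiplicative factors. The GMV = users × conversion × average order value factorization, the numbers, and the log-mean (LMDI) weighting are illustrative assumptions, not the article's exact method.

```python
import math

# Hypothetical two-period snapshot with GMV = users * conversion * aov.
# All factor names and values are made up for illustration.
base = {"users": 1_000_000, "conversion": 0.050, "aov": 80.0}
curr = {"users": 1_050_000, "conversion": 0.046, "aov": 85.0}

def gmv(f):
    return f["users"] * f["conversion"] * f["aov"]

delta = gmv(curr) - gmv(base)

# Log-mean (LMDI) weighting: the per-factor contributions sum exactly
# to the total change in the metric.
log_mean = delta / (math.log(gmv(curr)) - math.log(gmv(base)))
for name in base:
    contrib = log_mean * math.log(curr[name] / base[name])
    print(f"{name:>10}: {contrib:+,.0f} of {delta:+,.0f} total GMV change")
```

The log-mean weight is what makes the factor contributions add up exactly to the observed change, which is why this style of decomposition is popular for "which factor moved the metric" questions.
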
Internet Platforms · Metric Attribution · SHAP
15 min read

DataFunTalk
Jul 13, 2024 · Artificial Intelligence

Metric Attribution in Internet Platforms: Concepts, Methods, and Case Studies

This article explains metric attribution for internet platforms, covering its definition, a three‑step framework, and basic deterministic and probabilistic methods, including metric decomposition and machine‑learning techniques with SHAP, illustrated with two detailed case studies and a brief overview of supporting tools.

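Complementing the sketch above, the simplest deterministic method is additive drill-down across dimension segments; the segment names and values here are hypothetical.

```python
# When a metric is a plain sum over segments (e.g., DAU by channel),
# each segment's contribution to the total change is just its own delta.
# Segment names and numbers are hypothetical.
base = {"organic": 52_000, "paid": 31_000, "referral": 17_000}
curr = {"organic": 50_500, "paid": 36_000, "referral": 16_500}

total_delta = sum(curr.values()) - sum(base.values())
for seg in base:
    d = curr[seg] - base[seg]
    print(f"{seg:>9}: {d:+6d}  ({d / total_delta:+.0%} of the total change)")
```
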
Internet Platforms · Metric Attribution · SHAP
15 min read

Model Perspective
Mar 4, 2023 · Artificial Intelligence

How Shapley Values Reveal Fair Profit Splits and Explain Machine Learning Models

This article introduces the Shapley value and its fairness axioms, demonstrates its use in a profit‑allocation problem and in interpreting machine‑learning models with SHAP, and provides complete Python implementations for both cases.

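To give a feel for the computation behind such a profit split, here is a minimal exact Shapley-value calculation for a three-player game; the coalition worths are hypothetical, not the article's example.

```python
from itertools import combinations
from math import factorial

# Hypothetical coalition worths for a three-party profit split.
players = ("A", "B", "C")
worth = {
    frozenset(): 0,
    frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
    frozenset("AB"): 60, frozenset("AC"): 70, frozenset("BC"): 90,
    frozenset("ABC"): 120,
}

def shapley(player):
    n = len(players)
    others = [p for p in players if p != player]
    value = 0.0
    for r in range(n):
        for coalition in combinations(others, r):
            s = frozenset(coalition)
            # Weight |S|! (n - |S| - 1)! / n! is the probability that the
            # player joins right after coalition S in a random ordering.
            w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            value += w * (worth[s | {player}] - worth[s])
    return value

for p in players:
    print(p, round(shapley(p), 2))
# By the efficiency axiom the three values sum to the worth of the
# grand coalition, 120.
```
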
Python · SHAP · Shapley value
12 min read

Model Perspective
Oct 31, 2022 · Artificial Intelligence

Understanding SHAP: How Shapley Values Explain Black‑Box Models

This article explains the SHAP (SHapley Additive exPlanations) method, its theoretical foundations in game theory, the computation of Shapley values, algorithmic approximations such as TreeSHAP and DeepSHAP, practical code examples, and the strengths and limitations of using SHAP for model interpretability.

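As a flavor of the TreeSHAP usage the summary refers to, here is a minimal sketch with the shap library; the dataset and model settings are stand-ins rather than the article's own example.

```python
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Fit any tree ensemble; TreeSHAP computes exact Shapley values for it
# in polynomial time instead of the exponential naive enumeration.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100, max_depth=4).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer(X.iloc[:200])   # Explanation: per-row, per-feature values

# Additivity: base value + sum of a row's SHAP values equals its prediction.
shap.plots.waterfall(shap_values[0])    # one prediction, feature by feature
shap.plots.beeswarm(shap_values)        # global summary across the sample
```
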
SHAP · Shapley Values · explainable AI
11 min read

Model Perspective
Oct 27, 2022 · Artificial Intelligence

Unlocking Black‑Box Models: A Practical Guide to PDP, ICE, and Post‑Hoc Interpretation

This article explains why post‑hoc interpretation methods such as PDP, ALE, LIME, and SHAP are essential for extracting insights from complex machine‑learning models, outlines their mathematical foundations, discusses their limitations, and provides a complete Python example using XGBoost on a housing‑price dataset.

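Here is a minimal sketch of the PDP and ICE plots discussed in the article, using scikit-learn's inspection module; the dataset and feature names are stand-ins for the housing example.

```python
import matplotlib.pyplot as plt
import xgboost
from sklearn.datasets import fetch_california_housing
from sklearn.inspection import PartialDependenceDisplay

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=200).fit(X, y)

# kind="both" overlays individual ICE curves on the averaged PDP curve,
# exposing heterogeneous effects that the PDP average alone would hide.
PartialDependenceDisplay.from_estimator(
    model, X, features=["MedInc", "AveRooms"], kind="both", subsample=100
)
plt.show()
```
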
ICE · LIME · SHAP
14 min read

DataFunTalk
Sep 17, 2021 · Artificial Intelligence

Interpretable Machine Learning: Methods, Tools, and Financial Applications

This article introduces the importance of model interpretability and reviews common explanation techniques, including model‑specific and model‑agnostic methods, global and local analyses, partial dependence plots, ICE, ALE, and tools such as LIME and SHAP. It then demonstrates their practical use in anti‑fraud and device‑classification scenarios within a financial‑technology context.

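For the local-explanation side, here is a minimal LIME sketch in the spirit of the anti-fraud use case; the public dataset and classifier are generic stand-ins for a risk model.

```python
import lime.lime_tabular
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# A generic tabular classifier standing in for a fraud/risk model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# LIME fits a sparse linear surrogate around one prediction and reports
# the features that drive it locally.
explainer = lime.lime_tabular.LimeTabularExplainer(
    X_tr.values,
    feature_names=list(X.columns),
    class_names=["malignant", "benign"],
    mode="classification",
)
exp = explainer.explain_instance(X_te.values[0], model.predict_proba, num_features=5)
print(exp.as_list())   # [(feature condition, local weight), ...]
```
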
LIME · SHAP · financial risk modeling
14 min read

DataFunTalk
Mar 22, 2021 · Artificial Intelligence

Model Interpretability for Insurance Claim Fraud Detection: Methods, Practice, and Outlook

This article presents a comprehensive overview of model interpretability techniques—global and local methods such as feature importance, LIME, and SHAP—and demonstrates their practical application in insurance claim fraud detection, highlighting challenges, implementation steps, and future research directions.

AI · Insurance · LIME
13 min read

DataFunTalk
Jan 3, 2020 · Artificial Intelligence

Survey of Machine Learning Model Interpretability Techniques

This article provides a comprehensive survey of model interpretability in machine learning, covering its importance, evaluation criteria, and a wide range of techniques such as permutation importance, partial dependence plots, ICE, LIME, SHAP, RETAIN, and LRP, along with practical code examples and visualizations.

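Of the techniques this survey lists, permutation importance is the quickest to try; below is a minimal sketch with scikit-learn, using a stand-in dataset and model.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time on held-out data and measure the score
# drop; a large drop means the model relied on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]:<25} {result.importances_mean[i]:.4f} "
          f"+/- {result.importances_std[i]:.4f}")
```
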
ICE · LIME · PDP
39 min read

Didi Tech
Oct 8, 2019 · Artificial Intelligence

Didi and Ant Financial Co‑Develop SQLFlow to Bring AI Capabilities to Data Analysts

Partnering with Ant Financial, Didi extended the open-source SQLFlow platform, which translates SQL statements into end-to-end AI workflows, adding deep-learning, XGBoost, clustering, and SHAP explanation capabilities along with Hive support. The goal is a "SQL garden" marketplace where analysts deploy ready-made AI models through simple SQL, accelerating enterprise AI adoption.

AI · SHAP · XGBoost
9 min read

AntTech
Sep 27, 2019 · Artificial Intelligence

Didi and Ant Financial Co‑Develop SQLFlow to Bring AI Capabilities to Data Analysts

This article describes how Didi's data science team partnered with Ant Financial to co‑build the open‑source SQLFlow platform, which lets analysts launch AI models via simple SQL, and details the models contributed, the technical extensions made, and the broader vision of a universal AI ecosystem.

AI · SHAP · XGBoost
8 min read