Topic: model interpretability

Collection size: 3 articles
DataFunTalk
Sep 17, 2021 · Artificial Intelligence

Interpretable Machine Learning: Methods, Tools, and Financial Applications

This article motivates the need for model interpretability, reviews common explanation techniques (model‑specific vs. model‑agnostic methods, global vs. local analyses, partial dependence plots, ICE, and ALE) along with tools such as LIME and SHAP, and demonstrates their practical use in anti‑fraud and device‑classification scenarios in a financial‑technology context.

LIME · SHAP · financial risk modeling
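The SHAP tool mentioned in this article is built on Shapley values from cooperative game theory: each feature's contribution is its average marginal effect over all feature subsets. A minimal sketch of that computation for a tiny three-feature game, using only the standard library; the additive `value` payoff function is an illustrative stand-in for masking or retraining a real model, not the article's code:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values for a set function over n_features players.

    value_fn maps a frozenset of feature indices to the model's payoff
    (e.g. the prediction obtained using only those features).
    """
    n_fact = factorial(n_features)
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                s = frozenset(subset)
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n_features - size - 1) / n_fact
                phi[i] += weight * (value_fn(s | {i}) - value_fn(s))
    return phi

# Hypothetical additive "model": the payoff is the sum of included feature
# effects, so each Shapley value should recover that feature's own effect.
effects = {0: 2.0, 1: -1.0, 2: 0.5}
value = lambda s: sum(effects[j] for j in s)
print(shapley_values(value, 3))  # ≈ [2.0, -1.0, 0.5]
```

The exact enumeration is exponential in the number of features; libraries like SHAP make it practical by sampling subsets or exploiting model structure (e.g. TreeSHAP).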
DataFunTalk
Mar 22, 2021 · Artificial Intelligence

Model Interpretability for Insurance Claim Fraud Detection: Methods, Practice, and Outlook

This article presents a comprehensive overview of model interpretability techniques, covering global and local methods such as feature importance, LIME, and SHAP, and demonstrates their practical application in insurance claim fraud detection, highlighting challenges, implementation steps, and future research directions.

AI · Insurance · LIME
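The local method LIME, which this article applies to fraud detection, explains one prediction by perturbing the instance, weighting the perturbed samples by proximity, and fitting a simple linear surrogate; its coefficients are the local explanation. A minimal NumPy sketch of that idea; the perturbation scale, kernel width, and toy black box `f` are illustrative assumptions, not the article's settings:

```python
import numpy as np

def lime_local_surrogate(predict_fn, x, n_samples=2000, width=0.75, seed=0):
    """LIME-style sketch: perturb x, weight samples by a proximity kernel,
    and fit a weighted linear surrogate via least squares."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=1.0, size=(n_samples, x.size))  # perturbations
    y = predict_fn(X)
    d2 = ((X - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / width**2)                   # closer samples weigh more
    A = np.hstack([X, np.ones((n_samples, 1))])  # add an intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]                             # per-feature local weights

# Hypothetical black box: nonlinear overall, but near x = (1, 0) its
# gradient is (2*x0, 3) = (2, 3), which the surrogate should recover.
f = lambda X: X[:, 0] ** 2 + 3 * X[:, 1]
x = np.array([1.0, 0.0])
print(lime_local_surrogate(f, x))  # ≈ [2.0, 3.0]
```

The real LIME library adds details omitted here, such as interpretable binary representations for text and images and feature selection for sparse explanations.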
DataFunTalk
Jan 3, 2020 · Artificial Intelligence

Survey of Machine Learning Model Interpretability Techniques

This article provides a comprehensive survey of model interpretability in machine learning, covering its importance, evaluation criteria, and a wide range of techniques such as permutation importance, partial dependence plots, ICE, LIME, SHAP, RETAIN, and LRP, along with practical code examples and visualizations.

ICE · LIME · PDP
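Two of the surveyed techniques, permutation importance and partial dependence, are simple enough to sketch directly: shuffle one feature and measure how much the score degrades, or clamp one feature to a grid of values and average the predictions. The toy model and data below are illustrative stand-ins, not the survey's examples:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup: the "fitted" model uses only feature 0,
# so shuffling feature 1 should not hurt the score at all.
X = rng.normal(size=(500, 2))
y = 2.0 * X[:, 0]
model = lambda X: 2.0 * X[:, 0]

def permutation_importance(model, X, y, col, n_repeats=20):
    """Mean increase in MSE after randomly shuffling one column."""
    base = np.mean((model(X) - y) ** 2)
    increases = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, col] = rng.permutation(Xp[:, col])
        increases.append(np.mean((model(Xp) - y) ** 2) - base)
    return float(np.mean(increases))

def partial_dependence(model, X, col, grid):
    """Average prediction with feature `col` clamped to each grid value."""
    out = []
    for v in grid:
        Xg = X.copy()
        Xg[:, col] = v
        out.append(model(Xg).mean())
    return np.array(out)

print(permutation_importance(model, X, y, 0))          # large: feature 0 matters
print(permutation_importance(model, X, y, 1))          # 0.0: feature 1 is unused
print(partial_dependence(model, X, 0, [-1.0, 0.0, 1.0]))  # ≈ [-2, 0, 2]
```

ICE curves, also covered in the survey, are the per-instance analogue of the `partial_dependence` loop: keep `model(Xg)` for each row instead of averaging.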