Tag

model interpretability


Cognitive Technology Team
Apr 30, 2025 · Artificial Intelligence

AI Claims of Human-Level Intelligence Unveiled: Reliance on Massive Rules Over True Reasoning

The article critiques AI giants’ claims of nearing human-level intelligence, citing research that shows current models rely on massive rule memorization rather than genuine reasoning. That reliance produces brittleness in navigation, mathematics, and adaptability, and the article argues that understanding these limitations is essential for future progress.

AI limitations · Artificial Intelligence · large language models
8 min read
DevOps
Mar 31, 2025 · Artificial Intelligence

Claude Team Unveils "Circuit Tracing" to Reveal Large Language Model Reasoning

The Claude research team introduced a novel "circuit tracing" technique that builds substitute models and attribution graphs to expose the internal reasoning steps of large language models, uncovering capabilities such as multilingual understanding, long‑term planning, multi‑step inference, and hidden mathematical computation strategies.

Artificial Intelligence · Attribution Graphs · Circuit Tracing
9 min read
JD Retail Technology
Nov 6, 2024 · Artificial Intelligence

Explainability Practices in JD Retail Recommendation System

This article describes the definition, architecture, and practical applications of explainability in JD's retail recommendation system, covering ranking, model, and traffic explainability, system challenges, data infrastructure, and specific techniques such as SHAP and Integrated Gradients for interpreting model decisions.

AI · Ranking · explainability
17 min read
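Integrated Gradients, one of the techniques the JD article covers, attributes a prediction by integrating the model's gradients along a straight path from a baseline to the input. A minimal pure-Python sketch, assuming a toy differentiable model and finite-difference gradients (illustrative only, not JD's implementation):

```python
def model(x):
    # Toy "model": a simple nonlinear score over two features.
    return x[0] ** 2 + 3 * x[1]

def numeric_grad(f, x, eps=1e-6):
    # Central finite-difference gradient, one component per feature.
    grads = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        grads.append((f(xp) - f(xm)) / (2 * eps))
    return grads

def integrated_gradients(f, x, baseline, steps=100):
    # attribution_i ≈ (x_i - b_i) * average of ∂f/∂x_i along the path.
    attrs = [0.0] * len(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps  # midpoint rule over the path
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        for i, g in enumerate(numeric_grad(f, point)):
            attrs[i] += g
    return [(xi - b) * a / steps for xi, b, a in zip(x, baseline, attrs)]

attrs = integrated_gradients(model, x=[2.0, 1.0], baseline=[0.0, 0.0])
# Completeness property: the attributions sum to f(x) - f(baseline).
```

The completeness check is a useful sanity test: if the attributions do not sum to the prediction difference, the path approximation needs more steps.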
Tencent Advertising Technology
Dec 20, 2022 · Artificial Intelligence

Modeling Advertising Attractiveness: Data Analysis, Pairwise Learning, and DeepFM Optimization

This article presents a comprehensive study on estimating video ad attractiveness by analyzing 3‑second completion rates, proposing pairwise MLP and DeepFM models, introducing hierarchical sampling and multimodal features, and demonstrating practical deployment improvements in material recommendation and ad ranking.

advertising · attractiveness · deepFM
16 min read
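The pairwise formulation described above can be sketched independently of DeepFM: a scorer is trained so that, within each pair, the ad with the higher 3‑second completion rate out-scores the other. A hypothetical pure-Python sketch with a linear scorer and a logistic pairwise loss (feature values and hyperparameters are illustrative assumptions):

```python
import math

def train_pairwise(pairs, n_features, lr=0.1, epochs=100):
    # w scores an ad; fit it so the preferred ad of each pair scores higher.
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in pairs:
            margin = sum(wi * (b - c) for wi, b, c in zip(w, better, worse))
            # Gradient of log(1 + exp(-margin)) with respect to the margin.
            g = -1.0 / (1.0 + math.exp(margin))
            for i in range(n_features):
                w[i] -= lr * g * (better[i] - worse[i])
    return w

pairs = [([1.0, 0.0], [0.0, 1.0])]  # first ad had the higher completion rate
w = train_pairwise(pairs, n_features=2)
score = lambda ad: sum(wi * xi for wi, xi in zip(w, ad))
```

Only the score *difference* within a pair enters the loss, which is what makes pairwise learning robust to per-context shifts in absolute completion rates.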
Model Perspective
Oct 30, 2022 · Artificial Intelligence

How ALE Plots Overcome Partial Dependence Limitations in ML

The Accumulated Local Effect (ALE) plot, introduced by Daniel W. Apley in 2016, addresses the correlation issue inherent in Partial Dependence Plots, offering unbiased, faster, and more accurate feature impact visualizations for machine‑learning models, especially in domains like financial risk control.

ALE · feature importance · machine learning
9 min read
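The core idea of ALE can be sketched in a few lines: partition the feature's range into bins, average the local prediction change across the instances in each bin, then accumulate and center the effects. A hypothetical pure-Python sketch (quantile binning and function names are illustrative assumptions):

```python
def ale_1d(f, X, feature, n_bins=5):
    vals = sorted(row[feature] for row in X)
    # Quantile-based bin edges over the feature's observed range.
    edges = [vals[int(i * (len(vals) - 1) / n_bins)] for i in range(n_bins + 1)]
    effects = []
    for b in range(n_bins):
        lo, hi = edges[b], edges[b + 1]
        in_bin = [row for row in X if lo <= row[feature] <= hi]
        if not in_bin:
            effects.append(0.0)
            continue
        # Local effect: average prediction change when the feature moves
        # lo -> hi, holding each instance's other features fixed.
        diffs = []
        for row in in_bin:
            hi_row, lo_row = list(row), list(row)
            hi_row[feature], lo_row[feature] = hi, lo
            diffs.append(f(hi_row) - f(lo_row))
        effects.append(sum(diffs) / len(diffs))
    # Accumulate local effects, then center so the mean effect is zero.
    ale = [0.0]
    for e in effects:
        ale.append(ale[-1] + e)
    mean = sum(ale) / len(ale)
    return edges, [a - mean for a in ale]

toy_model = lambda row: 2.0 * row[0]
X = [[i / 10.0, 0.0] for i in range(11)]
edges, ale = ale_1d(toy_model, X, feature=0)
```

Because each difference is computed only on instances that actually fall in the bin, ALE avoids evaluating the model on unrealistic feature combinations, which is exactly the failure mode of PDP under correlated features.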
Model Perspective
Oct 27, 2022 · Artificial Intelligence

Unlocking Black‑Box Models: A Practical Guide to PDP, ICE, and Post‑Hoc Interpretation

This article explains why post‑hoc interpretation methods such as PDP, ALE, LIME, and SHAP are essential for extracting insights from complex machine‑learning models, demonstrates their mathematical foundations, discusses limitations, and provides a complete Python example using XGBoost on a housing‑price dataset.

ICE · LIME · SHAP
14 min read
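The mechanics of ICE and PDP are simple enough to sketch from scratch; this pure-Python toy stands in for the article's XGBoost housing-price example:

```python
def ice_curves(f, X, feature, grid):
    # One curve per instance: vary `feature` over the grid, hold the rest fixed.
    curves = []
    for row in X:
        curve = []
        for g in grid:
            modified = list(row)
            modified[feature] = g
            curve.append(f(modified))
        curves.append(curve)
    return curves

def partial_dependence(f, X, feature, grid):
    # The PDP is the pointwise average of the ICE curves.
    curves = ice_curves(f, X, feature, grid)
    return [sum(c[j] for c in curves) / len(curves) for j in range(len(grid))]

toy_model = lambda row: row[0] + 2 * row[1]
X = [[0.0, 0.0], [0.0, 1.0]]
grid = [0.0, 1.0, 2.0]
pdp = partial_dependence(toy_model, X, feature=0, grid=grid)
```

Plotting the individual ICE curves alongside their PDP average reveals heterogeneity that the average alone hides, one of the article's main arguments for using both.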
Model Perspective
Oct 21, 2022 · Artificial Intelligence

How Explainable Boosting Machines (EBM) Combine Accuracy and Interpretability

Explainable Boosting Machines (EBM) integrate boosting trees into generalized additive models, using the FAST algorithm to efficiently detect high‑impact pairwise interactions, delivering near‑state‑of‑the‑art accuracy while preserving strong global and local interpretability, as demonstrated on breast‑cancer data.

FAST algorithm · explainable boosting machine · generalized additive model
10 min read
DataFunTalk
Oct 15, 2022 · Artificial Intelligence

AutoDL: Automated and Interpretable Deep Learning – Research Highlights from Baidu Big Data Lab

This article reviews Baidu Big Data Lab's recent advances in automated deep learning (AutoDL), covering its research breakthroughs, integration with PaddlePaddle/PaddleHub, industrial deployments, transfer learning innovations, and future directions for AI automation and interpretability.

AI Automation · AutoDL · Neural Architecture Search
19 min read
Model Perspective
Oct 9, 2022 · Artificial Intelligence

Why Model Interpretability Matters: Tackling the Black‑Box Problem in AI

This article explains the challenges of black‑box machine‑learning models, illustrates real‑world banking examples, and introduces explainable AI techniques such as intrinsic vs. post‑hoc and local vs. global explanations to improve trust, safety, and fairness.

AI ethics · black-box models · explainable AI
13 min read
Baidu Geek Talk
Mar 28, 2022 · Artificial Intelligence

Robust Input Visualization Methods for Vision Transformers

The paper proposes a robust Grad‑CAM‑inspired visualization for Vision Transformers that combines attention weights and gradients to generate class‑specific saliency maps, demonstrates superior alignment with discriminative regions across ViT, Swin and Volo models, and shows a 76% false‑positive reduction in Baidu’s porn‑content risk control system.

Grad-CAM · Input Visualization · Self‑Attention
11 min read
DataFunTalk
Sep 17, 2021 · Artificial Intelligence

Interpretable Machine Learning: Methods, Tools, and Financial Applications

This article introduces the importance of model interpretability, reviews common explanation techniques such as model‑specific and model‑agnostic methods, global and local analyses, partial dependence plots, ICE, ALE, and tools like LIME and SHAP, and demonstrates their practical use in anti‑fraud and device‑classification scenarios within a financial‑technology context.

LIME · SHAP · financial risk modeling
14 min read
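SHAP, one of the tools this article demonstrates, is grounded in Shapley values: a feature's attribution is its average marginal contribution over all feature orderings. For a handful of features this can be computed exactly; a hypothetical sketch in which "absent" features are replaced by a baseline value (real SHAP implementations approximate this far more efficiently):

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    # Average each feature's marginal contribution over all orderings,
    # with features not yet "switched on" held at the baseline.
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]
            now = f(current)
            phi[i] += now - prev
            prev = now
    return [p / len(perms) for p in phi]

toy_model = lambda z: z[0] * z[1]
phi = shapley_values(toy_model, x=[2.0, 3.0], baseline=[0.0, 0.0])
# Efficiency property: phi sums to f(x) - f(baseline).
```

The factorial number of orderings is why exact Shapley values are intractable beyond a few features, and why KernelSHAP- and TreeSHAP-style approximations exist.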
Didi Tech
Apr 16, 2021 · Artificial Intelligence

Governance Algorithms for O2O Ride-Hailing Platforms: Challenges, Framework, and Model Exploration

The paper presents Didi’s three‑layer governance‑algorithm framework for O2O ride‑hailing, addressing high business complexity, limited labeled data, interpretability, and multimodal features through small‑sample, transfer, and multi‑task learning, achieving notable gains in dispute resolution, NPS and CPO while highlighting remaining data and robustness challenges.

Ride-hailing · feature engineering · governance algorithms
15 min read
DataFunTalk
Mar 22, 2021 · Artificial Intelligence

Model Interpretability for Insurance Claim Fraud Detection: Methods, Practice, and Outlook

This article presents a comprehensive overview of model interpretability techniques—global and local methods such as feature importance, LIME, and SHAP—and demonstrates their practical application in insurance claim fraud detection, highlighting challenges, implementation steps, and future research directions.

AI · Insurance · LIME
13 min read
DataFunTalk
Dec 8, 2020 · Artificial Intelligence

Financial Big Data Risk Control Models: Techniques, Applications, and COVID‑19 Challenges

This article presents a comprehensive overview of financial big‑data risk control models at Du Xiaoman, covering traditional scoring cards, AI‑driven time‑series and text processing, graph‑based networks, model interpretability, probability calibration, stability analysis, and the specific challenges introduced by the COVID‑19 pandemic.

Artificial Intelligence · Big Data · credit scoring
14 min read
DataFunTalk
Feb 13, 2020 · Artificial Intelligence

Deep Learning Techniques and Challenges in Autonomous Driving

This article reviews the rapid development of deep learning and its pivotal role in autonomous driving, outlines end‑to‑end perception‑to‑control pipelines, discusses the strengths and limitations of deep models, and proposes practical strategies such as task decomposition, multi‑method fusion, and sensor integration to improve safety and interpretability.

Computer Vision · autonomous driving · deep learning
8 min read
DataFunTalk
Jan 3, 2020 · Artificial Intelligence

Survey of Machine Learning Model Interpretability Techniques

This article provides a comprehensive survey of model interpretability in machine learning, covering its importance, evaluation criteria, and a wide range of techniques such as permutation importance, partial dependence plots, ICE, LIME, SHAP, RETAIN, and LRP, along with practical code examples and visualizations.

ICE · LIME · PDP
39 min read
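Permutation importance, the first technique the survey covers, measures how much a model's score drops when one feature's column is shuffled, breaking its relationship with the target. A minimal self-contained sketch with a toy classifier and data (both illustrative assumptions):

```python
import random

def permutation_importance(f, X, y, feature, n_repeats=10, seed=0):
    # Importance = average drop in accuracy after shuffling one feature column.
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(1 for row, label in zip(rows, y) if f(row) == label) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        shuffled = [list(row) for row in X]
        for row, v in zip(shuffled, col):
            row[feature] = v
        drops.append(base - accuracy(shuffled))
    return sum(drops) / n_repeats

clf = lambda row: row[0]  # toy "model" that only reads feature 0
X = [[0, 1], [1, 0], [0, 0], [1, 1]]
y = [0, 1, 0, 1]
imp0 = permutation_importance(clf, X, y, feature=0)
imp1 = permutation_importance(clf, X, y, feature=1)
# Feature 1 is ignored by the model, so its importance is exactly zero.
```

The same scheme works with any scoring function (AUC, RMSE) and any fitted model, which is why the survey lists it as a fully model-agnostic method.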
DataFunTalk
Jul 5, 2019 · Artificial Intelligence

Lead Quality Prediction for Real Estate: Data, Modeling, and Interpretability

This article presents a case study on building and deploying a lead‑quality classification model for a high‑value, low‑frequency real‑estate platform, covering business context, data challenges, sampling strategies, feature engineering, model selection, tuning, evaluation metrics, interpretability analysis, and observed performance improvements.

Real Estate · classification · feature engineering
14 min read
DataFunTalk
Aug 12, 2018 · Artificial Intelligence

Interpretability of Deep Learning and Low‑Frequency Event Learning in Financial Applications

The article reviews the limitations of mainstream deep‑learning models in finance, proposes hybrid tree‑based and Wide&Deep architectures combined with attention, sensitivity and variance analysis to improve interpretability and low‑frequency event detection, and validates the approach with a large‑scale insurance recommendation case study.

Finance · attention mechanism · deep learning
17 min read