Machine Heart
Apr 16, 2026 · Artificial Intelligence

CPL++: A Self‑Aware, Self‑Correcting Framework for Weakly Supervised Visual Grounding

The CPL++ framework equips weakly supervised visual grounding models with confidence‑aware pseudo‑label learning, self‑supervised association correction, and dynamic validation. By detecting and amending erroneous region‑query links during training, the model achieves absolute performance gains of 1–6% across five benchmark datasets.

Visual Grounding · computer vision · confidence-aware
Baobao Algorithm Notes
Jul 26, 2022 · Artificial Intelligence

Boost Model Accuracy with 6 Proven Training Tricks

This article compiles six practical machine‑learning tricks—adversarial training (FGM), EMA/SWA weight averaging, R‑Drop consistency regularization, test‑time augmentation, pseudo‑labeling, and missing‑value imputation—explaining the principle behind each, providing ready‑to‑use code snippets, and discussing their benefits and trade‑offs for stable, faster model training.
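Of the tricks listed, EMA (exponential moving average of model weights) is easy to illustrate in a few lines. Below is a minimal, framework‑free sketch in which parameters are plain `name -> float` dicts rather than tensors; the class name and interface are illustrative, not taken from the article, and in practice the same update is applied to framework tensors after each optimizer step:

```python
class EMA:
    """Exponential moving average of model parameters (plain-Python sketch).

    The shadow copy holds the smoothed weights, which are typically used
    for evaluation instead of the raw, noisier training weights.
    """

    def __init__(self, params, decay=0.999):
        self.decay = decay
        self.shadow = dict(params)  # smoothed copy, initialized from the model

    def update(self, params):
        # After each optimizer step:
        #   shadow = decay * shadow + (1 - decay) * current
        for name, value in params.items():
            self.shadow[name] = (
                self.decay * self.shadow[name] + (1 - self.decay) * value
            )
```

A higher decay (e.g. 0.999) makes the average slower to move but smoother; the smoothed weights usually evaluate slightly better than the final raw weights.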

AI · EMA · R-Drop
Baobao Algorithm Notes
Mar 15, 2022 · Artificial Intelligence

Boost Model Performance with Only 5 Lines of Pseudo‑Label Code

This article explains how semi‑supervised pseudo‑label learning can noticeably improve model accuracy: a five‑line snippet trains a model on the labeled data, generates pseudo‑labels for the unlabeled data, and retrains a second model on the combined set, while a properly held‑out validation set guards against data leakage.
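The recipe described above can be sketched roughly as follows. The function name and the scikit‑learn‑style `fit`/`predict` model interface are assumptions for illustration, not the article's actual snippet:

```python
def pseudo_label_train(model_a, model_b, X_train, y_train, X_unlabeled):
    """Semi-supervised pseudo-labeling in a few lines.

    Any model objects with scikit-learn-style fit/predict methods work here.
    """
    model_a.fit(X_train, y_train)              # 1. train on the labeled data
    pseudo = model_a.predict(X_unlabeled)      # 2. pseudo-label the unlabeled data
    X_all = list(X_train) + list(X_unlabeled)  # 3. merge real and pseudo-labeled sets
    y_all = list(y_train) + list(pseudo)
    model_b.fit(X_all, y_all)                  # 4. retrain a fresh second model
    return model_b
```

Note the leakage caveat: evaluate `model_b` only on a held‑out validation set that was never part of `X_train` or `X_unlabeled`, otherwise the pseudo‑labels can leak validation information into training.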

AI · data labeling · pseudo-labeling
Baobao Algorithm Notes
Mar 25, 2018 · Artificial Intelligence

How to Crush the Kaggle Toxic Comment Challenge: Data Prep, Models, and Ensemble Secrets

This article breaks down the Kaggle toxic comment classification competition, detailing thorough data cleaning, advanced word‑vector techniques, pseudo‑labeling, BPE tokenization, diverse neural models and ensemble strategies, and shares practical insights and pitfalls from the author's nine‑month competition journey.

BPE · Kaggle · NLP