
Robust AI: Ant Group’s Self‑Supervised Feature‑Compatible Model Wins NeurIPS ISC2021 Image Representation Competition

Ant Group’s TitanShield team won the image representation track at NeurIPS ISC2021 with a self-supervised, feature-compatible pre-training model that sharply cuts labeling effort, speeds up training, and reduces image adversarial risk by 80%, underscoring AI robustness as a critical challenge for content-security applications.

AntTech

On December 10, the NeurIPS–Facebook AI joint Image Similarity Matching Competition (ISC2021) concluded with 1,635 participating teams, making it one of the most influential contests at the conference. Ant Group’s TitanShield team (titanshield2) won the image representation track by a margin of 10 percentage points. Its entry, an independently developed pre-training model built on a “feature-compatible self-supervised learning framework,” targets two pain points of content-security scenarios: fast-changing sensitive content and delayed risk-model updates. The approach reduces image adversarial risk by 80% and substantially improves AI robustness.

Robustness: AI’s First Major Test – As AI advances, safety and trustworthiness are becoming the bottleneck for its next thirty years of development. Robustness, that is, resistance to attacks and stability under perturbation, is AI’s first major test. In image recognition, errors can cause autonomous-driving accidents; in copyright protection, simple image transformations can evade anti-piracy models; and in content security, hiding illicit material inside seemingly benign images is a common tactic of bad actors. As Ant Group senior technical expert Bo Shan emphasized, “If attacks cannot be resisted and recognition results are untrustworthy, AI models lose their purpose and become new risk exposures.”

Trusted AI: The Anchor for Content Security – Rapidly evolving sensitive information and insufficient training samples are the core pain points of content-security risk control. New illicit content, such as emerging celebrity scandals or newly trending copyrighted images, appears faster than models can be retrained, while scenarios like child sexual abuse material suffer from scarce labeled data, making effective AI-driven risk control difficult. Moreover, as industry collaboration deepens, any weak link in the ecosystem can become a foothold for malicious actors, and the sensitivity of training data hampers joint risk-control efforts.

The winning solution’s “feature-compatible self-supervised learning framework” mitigates these issues in three ways. First, it pre-trains on public datasets, allowing the model to anticipate similar risks in advance. Second, by replacing manually labeled samples with self-supervised learning, it reduces labeling requirements by 70% and cuts training time from a week to three days, accelerating downstream convergence. Third, the “feature-compatible” design lets feature information be shared across business scenarios or partner companies, enabling collaborative risk defense.
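Ant Group has not published its implementation, but the two ingredients the framework names are well-known ideas. As a rough illustration only, the sketch below shows (a) a SimCLR-style self-supervised contrastive loss, where two augmented views of each image should embed close together, and (b) a hypothetical “feature-compatible” regularizer that keeps a new encoder’s embeddings aligned with a frozen older model’s, so features already shared with other scenarios or partners stay comparable. All function names, the NumPy formulation, and the way the two losses would be combined are assumptions, not the competition entry.

```python
import numpy as np


def l2_normalize(x, eps=1e-8):
    """Project embeddings onto the unit sphere (cosine similarity space)."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)


def nt_xent_loss(z_a, z_b, temperature=0.5):
    """SimCLR-style contrastive loss (illustrative, not Ant's code).

    z_a, z_b: (N, d) embeddings of two augmented views of the same
    N images. Each embedding's positive is its counterpart view;
    the other 2N-2 embeddings in the batch act as negatives.
    """
    z = l2_normalize(np.concatenate([z_a, z_b], axis=0))      # (2N, d)
    logits = z @ z.T / temperature                            # (2N, 2N)
    np.fill_diagonal(logits, -np.inf)                         # drop self-pairs
    n = z_a.shape[0]
    # Row i's positive sits at i+N (first half) or i-N (second half).
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Numerically stable cross-entropy over each row.
    row_max = logits.max(axis=1)
    log_denom = np.log(np.exp(logits - row_max[:, None]).sum(axis=1)) + row_max
    return float(-(logits[np.arange(2 * n), targets] - log_denom).mean())


def compatibility_loss(z_new, z_old):
    """Hypothetical feature-compatibility term: penalize cosine drift
    between the new encoder's embeddings and a frozen old model's
    embeddings of the same images, so indexes and partner systems
    built on the old features keep working. 0 = identical directions."""
    z_new, z_old = l2_normalize(z_new), l2_normalize(z_old)
    return float(1.0 - (z_new * z_old).sum(axis=1).mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    view_a = rng.normal(size=(8, 16))                 # stand-in embeddings
    view_b = view_a + 0.1 * rng.normal(size=(8, 16))  # a second "augmented" view
    print("contrastive loss:", nt_xent_loss(view_a, view_b))
    print("compat loss (same encoder):", compatibility_loss(view_a, view_a))
```

In a real training loop the total objective would presumably weight the two terms, e.g. `nt_xent_loss(...) + λ * compatibility_loss(...)`, trading off representation quality against backward compatibility; the weighting is again an assumption.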

According to Ant Group, the model and its associated technologies have been fully deployed in Alipay’s content-security engine, achieving an overall 80% reduction in image adversarial risk.

Tags: self-supervised learning, NeurIPS, image representation, content security, Ant Group, AI robustness
Written by AntTech

Technology is the core driver of Ant's future creation.