
Interview with DYG Team Member Wang He: Team Story and Model Fusion Strategies for the Competition

In this interview, DYG team member Wang He introduces his teammates, explains why they formed the team, and shares detailed model‑fusion techniques—including input variation, diverse model architectures, and training‑target differences—to boost competition scores during the final stage.

Tencent Advertising Technology

With only a week left before the competition’s final round, the DYG team continues to dominate the leaderboard, prompting the organizers to invite team member Wang He to share the team’s story and scoring strategies.

Wang He explains that the team name “DYG” comes from the initials of its three members: DaBai (D), who works in NLP and frequently ranks in the top five of domestic data contests; Yu (Y); and GuoDa (G), a PhD student from Sun Yat‑sen University who won the 2019 Tencent competition and has published first‑author papers at NeurIPS, AAAI, ACL, and EMNLP.

The team came together because Wang He and GuoDa had known each other since the 2018 Tencent competition and had built a strong working relationship, while DaBai's rapid rise to the top of the leaderboard prompted them to invite him, resulting in a well‑coordinated trio.

For the final two weeks, their scoring approach centered on model fusion. To generate diverse predictions, they vary three things:

- Input samples: adjusting sequence length and order, and applying data augmentation.
- Models and architectures: LSTM, GRU, CNN, LSTM + Attention, Transformer, LightGBM, and XGBoost, as well as internal settings such as activation functions and dropout rates.
- Training targets: for example, ten‑class age prediction, binary gender classification, or a combined multi‑class task over both.

After producing three distinct result sets this way, they fuse them by voting, weighted averaging, or stacking, noting that stacking yielded an improvement of roughly two to three ten‑thousandths (0.0002–0.0003) over simple weighted averaging.
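The two fusion methods the team compares can be sketched in a few lines. This is a minimal illustration, not the team's actual code: the function names (`weighted_average`, `stack`) and the choice of logistic regression as the stacking meta‑model are assumptions for the example; in practice the meta‑model should be trained on out‑of‑fold predictions to avoid leakage.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_average(prob_list, weights):
    """Fuse per-model class-probability matrices with a weighted average.

    prob_list: list of (n_samples, n_classes) arrays, one per base model.
    weights: relative weight for each model; normalized to sum to 1.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * p for w, p in zip(weights, prob_list))

def stack(train_probs, y_train, test_probs):
    """Level-2 stacking: fit a meta-model on the base models' predicted
    probabilities for the training set, then fuse the test predictions.

    train_probs / test_probs: lists of (n_samples, n_classes) arrays.
    y_train: true labels for the training rows.
    """
    # Each model's class probabilities become meta-features, side by side.
    X_train = np.hstack(train_probs)
    X_test = np.hstack(test_probs)
    meta = LogisticRegression(max_iter=1000)  # assumed meta-model choice
    meta.fit(X_train, y_train)
    return meta.predict_proba(X_test)
```

Voting is the degenerate case: take `argmax` of each model's probabilities and keep the majority class per sample. Stacking's edge over fixed weights comes from letting the meta‑model learn, per class, how much to trust each base model.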

Wang He concludes with an encouraging message: the most important thing in a competition is not the score but the learning and continuous self‑improvement, wishing all participants efficient preparation and peak performance.

The article also invites readers to join the official competition QQ group, submit resumes on the official website, and read the original article for more updates.

machine learning · AI · Model Fusion · NLP · competition · team interview
Tencent Advertising Technology
Written by

Tencent Advertising Technology

Official hub of Tencent Advertising Technology, sharing the team's latest cutting-edge achievements and advertising technology applications.
