
Reflections on Working as an Algorithm Engineer at Meituan and the Rise of Contrastive Learning

The author shares personal experiences as a Meituan algorithm engineer, emphasizing the critical role of labeled data, surveying the rise of contrastive (self‑supervised) learning across computer vision, NLP, and recommendation systems, and offering practical advice for algorithm engineers who want to stay competitive.

DataFunTalk

Hello everyone, I am Duibai. Although life is short and papers are long, we must keep striving because there are always people working harder, especially in the algorithm field.

Instead of discussing cutting‑edge papers today, I want to talk about my experience at Meituan, where I have worked on NLP, recommendation, and dynamic pricing for over half a year, rotating roughly every three months.

I realized that labeled data is crucial for algorithm engineers, yet most business data lacks labels and manual annotation is unrealistic without deep domain knowledge. Even when labels exist, they are often scarce or erroneous, making it hard to achieve supervised‑learning performance under strict OKRs.

Contrastive learning, a form of self‑supervised learning, has become a hot topic in both academia and industry. At ICLR 2020, deep‑learning pioneers Bengio, LeCun, and Hinton identified self‑supervised learning as the future of AI, prompting me to focus on this direction.

The rise of contrastive learning brings significant benefits:

In computer vision, it enables self‑supervised pre‑training to learn image priors without large labeled datasets.

In NLP, larger unlabeled corpora and more complex models improve downstream task performance.

In recommendation, it addresses data sparsity, long‑tail items, cross‑domain view aggregation, and enhances model robustness.
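To make the idea behind these benefits concrete, here is a minimal NumPy sketch of the NT‑Xent (normalized temperature‑scaled cross‑entropy) objective that SimCLR‑style contrastive methods optimize: two augmented "views" of the same item form a positive pair, and every other item in the batch serves as a negative. The function name and toy data are illustrative, not from the article.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over a batch of paired embeddings.

    z1, z2: (N, D) arrays; row i of z1 and z2 are two augmented
    views of the same item (the positive pair); all other rows
    in the batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize
    sim = z @ z.T / temperature                       # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = z1.shape[0]
    # index of each row's positive partner: i <-> i + n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy: -log softmax(sim)[i, pos[i]]
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
loss_random = nt_xent_loss(z1, rng.normal(size=(8, 16)))
loss_aligned = nt_xent_loss(z1, z1 + 0.01 * rng.normal(size=(8, 16)))
```

Because the aligned pairs are nearly identical views of the same vectors, `loss_aligned` comes out much lower than `loss_random`; minimizing this loss is what pulls representations of the same item together without any labels.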

Leveraging contrastive learning helped me meet my OKR and even write nine articles on the topic, covering recent advances and practical implementations.

My dream is to publish a book on self‑supervised learning after writing 50 articles.

Beyond content creation, I have connected with many experts, received encouragement, and even helped job‑seeking newcomers, which motivates me to keep updating original material.

Key takeaways for algorithm engineers:

Stay familiar with both popular and cutting‑edge algorithms.

Master big‑data tools like Hadoop and Spark, and be prepared to build distributed prediction pipelines yourself (TensorFlow and PyTorch support distributed training out of the box, but large‑scale distributed inference is usually something you have to engineer).

Be proficient with HiveSQL.

Often you must handle both algorithm design and engineering implementation yourself.
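"Handling the engineering yourself" often reduces to batch‑scoring a trained model over partitioned data, the pattern behind Spark's `mapPartitions`: load the model once per partition, then score rows in mini‑batches. A pure‑Python sketch of that pattern, with a hypothetical `DotModel` standing in for a real trained model:

```python
from typing import Iterable, Iterator, List

class DotModel:
    """Hypothetical stand-in: any object with a batched predict() fits here."""
    def __init__(self, weights: List[float]):
        self.w = weights

    def predict(self, rows: List[List[float]]) -> List[float]:
        return [sum(wi * xi for wi, xi in zip(self.w, row)) for row in rows]

def score_partition(rows: Iterable[List[float]],
                    model: DotModel,
                    batch_size: int = 2) -> Iterator[float]:
    """Score one data partition in mini-batches.

    In a real pipeline this function would run on each executor, e.g.
    rdd.mapPartitions(lambda it: score_partition(it, model)), so the
    model is loaded once per partition rather than once per row.
    """
    batch: List[List[float]] = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield from model.predict(batch)
            batch = []
    if batch:                       # flush the final partial batch
        yield from model.predict(batch)

model = DotModel([0.5, -1.0])
partition = [[2.0, 1.0], [4.0, 0.0], [0.0, 3.0]]
scores = list(score_partition(partition, model))
# scores == [0.0, 2.0, -3.0]
```

The same generator works unchanged whether the partition is a local list or an iterator Spark hands to an executor, which is why this shape of code travels well from notebook prototype to production pipeline.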

If your models cannot be deployed to generate business value, your work may be considered wasted and could jeopardize your position.

Therefore, continuous learning, studying a bit more each day, can make the difference between a model that ships and one that fails.

I write this article to share my feelings about working on algorithms at Meituan and to encourage fellow engineers.

Wishing everyone great model performance, I am Duibai, and we will keep pushing forward together!

Tags: contrastive learning, AI research, self-supervised learning, algorithm engineering, Meituan
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
