
Sequential Recommendation Algorithms: Overview and Techniques

This article surveys sequential recommendation methods, covering standard models such as pooling, RNN, CNN, attention, and Transformer, as well as long- and short-term, multi-interest, and multi-behavior approaches, plus recent advances such as contrastive learning, and highlights their impact on recommendation performance.

DataFunSummit

Author: Zhu Yongchun | Affiliation: University of Chinese Academy of Sciences | Research interests: cross-domain recommendation, multi-task learning

In real‑world recommendation systems, user embeddings learned from all data capture preferences but often miss sequential behavior; sequential recommendation explicitly models these behaviors to improve performance. This article introduces several categories of sequential recommendation algorithms.

1. Standard Sequential Recommendation

Standard sequential recommendation extracts user representations from single‑behavior sequences using methods such as Pooling, RNN, CNN, Memory Network, Attention, and Transformer.

1.1 Pooling

Item embeddings from user interactions are averaged to form a sequence feature, as used in Google's recommendation model [1]; this simple yet effective technique is widely adopted in industry.
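As a minimal sketch (plain NumPy; the toy vectors are illustrative, not from [1]), mean pooling reduces a user's interaction sequence to a single feature vector:

```python
import numpy as np

def pooled_user_embedding(item_embeddings):
    """Average a user's item embeddings into a single sequence feature."""
    return np.mean(item_embeddings, axis=0)

# toy sequence: 3 interactions, embedding dimension 4
seq = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0]])
user_vec = pooled_user_embedding(seq)  # elementwise mean over the 3 rows
```

The result is order-invariant, which is exactly the limitation the sequence models below address.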

1.2 RNN‑based

RNNs are powerful for sequence modeling across domains. GRU4Rec [2] incorporates RNNs into session‑based recommendation, treating interactions within a session as a sequence.
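A minimal NumPy sketch of the idea (not GRU4Rec's actual implementation; the random weights stand in for learned parameters): fold a GRU cell over the item embeddings and use the final hidden state as the user representation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_user_state(seq, Wz, Uz, Wr, Ur, Wh, Uh):
    """Run a single GRU cell over a sequence of item embeddings;
    the final hidden state acts as the user representation."""
    h = np.zeros(Uz.shape[0])
    for x in seq:
        z = sigmoid(Wz @ x + Uz @ h)             # update gate
        r = sigmoid(Wr @ x + Ur @ h)             # reset gate
        h_cand = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
        h = (1 - z) * h + z * h_cand
    return h

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
Wz, Wr, Wh = [0.1 * rng.normal(size=(d_h, d_in)) for _ in range(3)]
Uz, Ur, Uh = [0.1 * rng.normal(size=(d_h, d_h)) for _ in range(3)]
user_state = gru_user_state(rng.normal(size=(5, d_in)), Wz, Uz, Wr, Ur, Wh, Uh)
```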

1.3 CNN‑based

TextCNN brings convolutional networks to sequence modeling; Caser [3] applies CNNs to sequential recommendation, addressing the limitation of Markov chain models that can only capture point‑level patterns.
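A simplified sketch of one "horizontal" convolution in the spirit of Caser (NumPy; the filter values are illustrative, and the real model learns many filters of several heights plus "vertical" filters):

```python
import numpy as np

def horizontal_conv_feature(seq, filt):
    """Slide a height-k filter over k consecutive item embeddings and
    max-pool over positions, capturing a multi-item (union-level) pattern
    rather than the point-level patterns of Markov chain models."""
    k, _ = filt.shape
    vals = [float(np.sum(seq[t:t + k] * filt)) for t in range(len(seq) - k + 1)]
    return max(vals)

seq = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 items, dim 2
filt = np.ones((2, 2))                                # window of 2 items
feature = horizontal_conv_feature(seq, filt)          # max over the 2 windows
```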

1.4 Attention‑based

Attention mechanisms weight historical interactions by their importance to the current prediction; SASRec [4] applies self-attention to sequential recommendation.

Alibaba’s Deep Interest Network (DIN) [5] also leverages attention for ad recommendation and is widely used in industry.
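The core idea can be sketched as target-aware attention (a simplified NumPy version: DIN scores relevance with a small MLP, replaced here by a dot product for brevity):

```python
import numpy as np

def target_attention(history, target):
    """Weight each historical item by its relevance to the candidate
    item, then aggregate into a candidate-specific interest vector."""
    scores = history @ target
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()          # softmax over history positions
    return weights @ history

history = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
interest = target_attention(history, target=np.array([1.0, 0.0]))
```

Because the weights depend on the candidate item, the same history yields a different user vector for each ad being scored.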

1.5 Memory‑based

Memory networks store long‑term interactions to avoid forgetting; RUM [6] introduces a user memory module for this purpose.
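A toy sketch of the mechanism (FIFO write plus soft-attention read; RUM's actual read/write operations are learned, and all names here are illustrative):

```python
import numpy as np

class UserMemory:
    """A fixed number of slots holding recent item embeddings;
    reads attend over slots with a softmax, writes evict FIFO."""
    def __init__(self, slots, dim):
        self.mem = np.zeros((slots, dim))

    def write(self, item_vec):
        self.mem = np.vstack([self.mem[1:], item_vec])  # drop the oldest slot

    def read(self, query):
        scores = self.mem @ query
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ self.mem

m = UserMemory(slots=3, dim=2)
m.write(np.array([1.0, 1.0]))
r = m.read(np.array([1.0, 1.0]))  # read is dominated by the written vector
```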

1.6 Transformer‑based

Transformers have revolutionized NLP; BERT4Rec [7] adapts BERT's bidirectional Transformer and masked-item (Cloze) training to recommendation.
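The building block shared by SASRec and BERT4Rec is scaled dot-product self-attention over the item sequence; a single-head NumPy sketch (random weights stand in for learned projections, and positional encodings and masking are omitted):

```python
import numpy as np

def self_attention(seq, Wq, Wk, Wv):
    """One scaled dot-product self-attention layer: every position
    attends to every other position in the sequence."""
    Q, K, V = seq @ Wq, seq @ Wk, seq @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)   # row-wise softmax
    return w @ V

rng = np.random.default_rng(0)
T, d = 5, 8
out = self_attention(rng.normal(size=(T, d)),
                     *[rng.normal(size=(d, d)) for _ in range(3)])
```

SASRec stacks causally masked layers of this block; BERT4Rec removes the mask and instead predicts randomly masked items.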

2. Long‑Short Term Sequential Recommendation

Users exhibit both long-term and short-term interests; SHAN [8] splits the sequence into long-term and short-term parts and models them with a hierarchical attention network.
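A hedged sketch of the two-level split (simplified from SHAN [8]: the query vector here stands in for SHAN's learned user embedding, and the nonlinear projection layers are omitted):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def long_short_user_vec(long_seq, short_seq, query):
    """First attend over long-term history to get a stable preference
    vector, then attend over that vector together with recent items."""
    u_long = softmax(long_seq @ query) @ long_seq
    candidates = np.vstack([u_long[None, :], short_seq])
    return softmax(candidates @ query) @ candidates

rng = np.random.default_rng(1)
u = long_short_user_vec(rng.normal(size=(20, 4)),  # long-term history
                        rng.normal(size=(3, 4)),   # recent session
                        rng.normal(size=4))
```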

3. Multi‑Interest Sequential Recommendation

Since users often have multiple interests, methods encode sequences into several interest vectors [9].
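One common realization is a self-attentive variant in the spirit of ComiRec [9] (simplified; the query matrix W is an illustrative learned parameter): K interest queries each attend over the sequence.

```python
import numpy as np

def multi_interest(seq, W):
    """Each of the K rows of W attends over the sequence, yielding
    K interest vectors instead of a single pooled user vector."""
    scores = W @ seq.T                                    # (K, T)
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = e / e.sum(axis=1, keepdims=True)                  # per-interest softmax
    return w @ seq                                        # (K, d)

rng = np.random.default_rng(2)
interests = multi_interest(rng.normal(size=(10, 4)),  # 10 interactions
                           rng.normal(size=(3, 4)))   # 3 interest queries
```

At serving time, each interest vector can retrieve its own candidate set before a final ranking stage merges them.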

4. Multi‑Behavior Sequential Recommendation

Users generate various behavior types (click, share, purchase, etc.); modeling multi‑behavior sequences captures richer preferences [10].
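A minimal way to inject behavior types before any sequence model (an illustrative sketch, not a specific published architecture): add a learned behavior-type embedding to each item embedding.

```python
import numpy as np

# hypothetical behavior vocabulary: 0=click, 1=share, 2=purchase
def fuse_behaviors(item_embs, behavior_ids, behavior_table):
    """Add a behavior-type embedding to each item embedding, so the
    downstream sequence model sees not just what the user touched
    but how they interacted with it."""
    return item_embs + behavior_table[behavior_ids]

items = np.ones((3, 2))                                  # 3 item embeddings
table = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # per-behavior offsets
fused = fuse_behaviors(items, np.array([0, 1, 2]), table)
```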

5. Other Sequential Recommendation Approaches

Contrastive learning has been applied to sequential recommendation tasks [11].
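A hedged sketch of the usual recipe in contrastive sequential recommenders: augment one sequence into two views, then an InfoNCE loss pulls the views together and pushes negatives apart (crop ratio and temperature below are illustrative defaults, not values from [11]):

```python
import numpy as np

def crop(seq, rng, ratio=0.6):
    """Augmentation: keep a random contiguous sub-sequence."""
    L = max(1, int(len(seq) * ratio))
    s = rng.integers(0, len(seq) - L + 1)
    return seq[s:s + L]

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE: make the anchor closer (in cosine similarity) to its
    augmented positive view than to negative samples."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)]
                      + [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(3)
view = crop(np.arange(10), rng)               # 6-item sub-sequence
loss = info_nce(np.array([1.0, 0.0]),         # encoded anchor view
                np.array([1.0, 0.0]),         # encoded positive view
                [np.array([0.0, 1.0])])       # one negative
```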

Related tasks include next‑basket recommendation [12].

6. Summary

Explicitly modeling users' historical interactions significantly improves recommendation performance; effective sequence modeling modules should consider long‑short term dynamics, multi‑behavior signals, and multi‑interest representations, while simple pooling can serve as a quick baseline when sequence features are less impactful.

7. References

[1] Deep Neural Networks for YouTube Recommendations. RecSys 2016.

[2] Session‑based Recommendations with Recurrent Neural Networks. ICLR 2016.

[3] Personalized Top‑N Sequential Recommendation via Convolutional Sequence Embedding. WSDM 2018.

[4] Self‑Attentive Sequential Recommendation. ICDM 2018.

[5] Deep Interest Network for Click‑Through Rate Prediction. KDD 2018.

[6] Sequential Recommendation with User Memory Networks. WSDM 2018.

[7] BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer. CIKM 2019.

[8] Sequential Recommender System based on Hierarchical Attention Networks. IJCAI 2018.

[9] Controllable Multi‑Interest Framework for Recommendation. KDD 2020.

[10] Incorporating User Micro‑behaviors and Item Knowledge into Multi‑task Learning for Session‑based Recommendation. SIGIR 2021.

[11] Disentangled Self‑Supervision in Sequential Recommenders. KDD 2020.

[12] Factorizing Personalized Markov Chains for Next‑Basket Recommendation. WWW 2010.

Tags: machine learning, Transformer, Attention, RNN, sequential recommendation