Breaking the Recommendation Filter Bubble: Alibaba 1688’s Inference‑Driven AI

Alibaba’s 1688 platform uses inference‑based large language models to improve recommendation discovery and counter the filter bubble. The system analyzes long‑term buyer behavior, compresses extensive activity streams, generates nuanced demand queries, and integrates multimodal data and market‑trend agents to deliver more diverse, explainable product suggestions for B‑type buyers.

DataFunSummit

Background and Discovery Definition

In e‑commerce recommendation, the filter‑bubble effect causes users to see increasingly homogeneous items; the "discovery" problem is how to break out of it. 1688, a B‑type (business‑buyer) wholesale platform, tackles this by using inference‑type large models to infer user‑level discovery categories and business opportunities, helping buyers find new, trending, and potential products.

Overall Architecture

The system is divided into three core layers:

Infrastructure layer : aggregates long‑term (up to one year) user behavior such as clicks, collections, add‑to‑cart, inquiries, and searches to capture stable business preferences of B‑type users.

Reasoning layer : infers explicit user‑demand trend queries from the aggregated behavior, serving as the key module that transforms raw signals into recommendation‑ready queries.

Application layer : performs item retrieval via vector search and applies downstream ranking and control mechanisms to ensure relevance and precision.
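The three layers above can be sketched as a simple pipeline. All function and field names below are illustrative stand‑ins, not 1688's actual implementation; in production the reasoning layer is an LLM and the application layer is a vector index with ranking.

```python
# Minimal sketch of the three-layer architecture described above.

def infrastructure_layer(events):
    """Aggregate long-term behavior (clicks, collections, add-to-cart,
    inquiries, searches) into per-(category, action) counts."""
    prefs = {}
    for e in events:
        key = (e["category"], e["action"])
        prefs[key] = prefs.get(key, 0) + 1
    return prefs

def reasoning_layer(prefs):
    """Turn aggregated signals into explicit demand queries (here simply
    the most active categories; in production an LLM infers these)."""
    totals = {}
    for (cat, _action), n in prefs.items():
        totals[cat] = totals.get(cat, 0) + n
    ranked = sorted(totals, key=totals.get, reverse=True)
    return [f"trending {cat}" for cat in ranked[:2]]

def application_layer(queries, index):
    """Retrieve items per query (stand-in for vector search + ranking)."""
    return {q: index.get(q, []) for q in queries}

events = [
    {"category": "dress", "action": "click"},
    {"category": "dress", "action": "add_to_cart"},
    {"category": "bag", "action": "search"},
]
index = {"trending dress": ["item_1", "item_2"], "trending bag": ["item_9"]}
result = application_layer(reasoning_layer(infrastructure_layer(events)), index)
```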

Challenges and Solutions

High model‑selection cost: rapid iteration of open‑source LLMs (e.g., DeepSeek, Qwen) requires extensive evaluation.

Expensive inference cost: personalized demand inference demands multiple model passes, leading to long latency.

Discovery evaluation difficulty: subjective quality of inferred queries and sparse online feedback make offline metrics like category width or PVR insufficient.

Resource constraints: long user behavior sequences produce large token inputs, stressing inference resources.

Long‑Term Behavior Compression Agent

To mitigate token‑length explosion, a compression agent aggregates eight weeks of user search queries and multimodally generated short product titles into five‑tuples <week, category, amount, quantity, decision_factor>. The agent dynamically merges weekly behavior, reducing token count while preserving key business signals.
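A rough sketch of the compression step, assuming a simple merging rule: raw behavior records are grouped by week and category, amounts and quantities are summed, and the dominant decision factor is kept. The merging logic and field names are our simplification of the agent's behavior, which the article says an LLM performs dynamically.

```python
# Illustrative compression of weekly behavior into the five-tuple
# <week, category, amount, quantity, decision_factor>.

from collections import defaultdict

def compress_weekly(behaviors):
    """behaviors: list of dicts with week, category, gmv, qty, factor."""
    merged = defaultdict(lambda: {"amount": 0.0, "quantity": 0,
                                  "factors": defaultdict(int)})
    for b in behaviors:
        slot = merged[(b["week"], b["category"])]
        slot["amount"] += b["gmv"]
        slot["quantity"] += b["qty"]
        slot["factors"][b["factor"]] += 1
    rows = []
    for (week, cat), slot in sorted(merged.items()):
        # Keep only the dominant decision factor per (week, category).
        top_factor = max(slot["factors"], key=slot["factors"].get)
        rows.append((week, cat, slot["amount"], slot["quantity"], top_factor))
    return rows

behaviors = [
    {"week": "W1", "category": "dress", "gmv": 120.0, "qty": 3, "factor": "floral"},
    {"week": "W1", "category": "dress", "gmv": 80.0, "qty": 2, "factor": "floral"},
    {"week": "W1", "category": "bag", "gmv": 50.0, "qty": 1, "factor": "canvas"},
]
tuples = compress_weekly(behaviors)
```

Five raw records per category collapse into one tuple each, which is how the token count shrinks while the business signal survives.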

Multimodal Short Title Generation

Traditional product titles contain SEO noise and lack visual information. 1688 built a multimodal short‑title generator based on Qwen‑VL that extracts style, material, and other visual cues from product images, producing concise titles that downstream text models can consume.
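A hedged sketch of how a short‑title request to a vision‑language model such as Qwen‑VL might be structured. The message schema and the `generate` callable are illustrative stand‑ins, not Qwen‑VL's real API; we stub the model here with a fake function.

```python
# Hypothetical request construction for a multimodal short-title generator.

def build_short_title_request(image_url, raw_title):
    prompt = (
        "Extract style, material, and other visual cues from the image, "
        "then rewrite the product title as a concise short title "
        "without SEO keywords.\n"
        f"Original title: {raw_title}"
    )
    # Message structure is an assumption, modeled on common VL chat APIs.
    return [{"role": "user",
             "content": [{"type": "image", "image": image_url},
                         {"type": "text", "text": prompt}]}]

def short_title(generate, image_url, raw_title):
    """`generate` is any callable mapping messages -> model text."""
    return generate(build_short_title_request(image_url, raw_title)).strip()

# Demonstration with a fake model:
fake_generate = lambda messages: " Floral French Midi Dress "
title = short_title(fake_generate, "https://example.com/p.jpg",
                    "2025 HOT SALE new women summer dress floral french midi")
```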

Prompt Engineering for the Compression Agent

The agent is framed as a data‑summary expert. Its input includes the time window, search terms, click items (duration_title), add‑to‑cart items (title), and purchases (GMV_quantity_title). The prompt defines category and decision‑factor vocabularies (each factor at most five characters, excluding the category word) and guides the model to output structured rows such as:

#250504_card_22_65_3_MagicChildSea:22|FlameBag:7|FiveYuanBag:5
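A small parser for the structured row format shown above. The field interpretation (week/date, then category, then numeric stats, then `factor:count` pairs separated by `|`) is our reading of the single example, not a documented spec.

```python
# Parse a structured output row like
# "#250504_card_22_65_3_MagicChildSea:22|FlameBag:7|FiveYuanBag:5".

def parse_row(row):
    body = row.lstrip("#")
    parts = body.split("_")
    week, category = parts[0], parts[1]
    stats = [int(p) for p in parts[2:-1]]      # numeric fields in the middle
    factors = {}
    for pair in parts[-1].split("|"):          # trailing factor:count pairs
        name, count = pair.split(":")
        factors[name] = int(count)
    return {"week": week, "category": category,
            "stats": stats, "factors": factors}

parsed = parse_row("#250504_card_22_65_3_MagicChildSea:22|FlameBag:7|FiveYuanBag:5")
```

Structured rows like this are what make the LLM's output machine‑consumable by the downstream portrait agent.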

User Business Portrait Agent

After behavior compression, a second LLM builds a business portrait for B‑type users, covering main category analysis (e.g., women’s wear 70%), vertical B‑buyer analysis, drop‑shipper type, core intent extraction, and a multi‑dimensional buyer profile (objective identity, target audience, operating strategy, procurement focus).
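The portrait's dimensions can be captured in a simple data structure. The field names below paraphrase the dimensions the article lists; the example values are made up.

```python
# Illustrative container for the multi-dimensional B-type buyer portrait.

from dataclasses import dataclass

@dataclass
class BuyerPortrait:
    main_categories: dict       # e.g. {"women's wear": 0.7}
    buyer_type: str             # e.g. drop-shipper, vertical B-buyer
    core_intents: list          # extracted core procurement intents
    objective_identity: str
    target_audience: str
    operating_strategy: str
    procurement_focus: str

portrait = BuyerPortrait(
    main_categories={"women's wear": 0.7, "accessories": 0.3},
    buyer_type="drop-shipper",
    core_intents=["restock fast-moving dresses"],
    objective_identity="small online retailer",
    target_audience="young urban women",
    operating_strategy="trend-following, small batches",
    procurement_focus="low MOQ, fast shipping",
)
```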

Category‑Factor‑Value (CFV) Preference Analysis

CFV captures the fine‑grained decision factors a user cares about within a category. It distinguishes long‑term, recent, and potential preferences, e.g., for "dress" the long‑term preference might be "French:9|Floral:8|Black:7|Waist:6" while recent preference could be "Elegant:9|Blue:8|Waist:7|Luxury:6".
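Using the "dress" example from the text, CFV preferences can be represented as a nested category → window → factor‑score map. The `factor:score|...` string format mirrors the article's examples; the intersection computed at the end is one simple way to surface factors that persist across windows (our addition, not a described mechanism).

```python
# CFV (Category-Factor-Value) preference representation.

def parse_cfv(s):
    """Parse 'French:9|Floral:8' into a dict of factor scores."""
    return {k: int(v) for k, v in (pair.split(":") for pair in s.split("|"))}

cfv = {
    "dress": {
        "long_term": parse_cfv("French:9|Floral:8|Black:7|Waist:6"),
        "recent":    parse_cfv("Elegant:9|Blue:8|Waist:7|Luxury:6"),
    }
}

# Factors that persist from long-term into recent preferences:
stable = set(cfv["dress"]["long_term"]) & set(cfv["dress"]["recent"])
```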

Trend Content Mining and Query Generation

Market‑trend agents retrieve heterogeneous signals from platforms like Xiaohongshu, Google, and Taobao. After filtering for relevance, the system extracts trend selling points, rewrites them into long‑form queries (e.g., "2025 children’s dress trend, new styles, innovative colors"), and clusters them using LLM embeddings to produce normalized trend queries.
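The final normalization step can be sketched as embedding‑similarity clustering. The toy bag‑of‑words "embedding" below stands in for the real LLM embeddings, and the greedy threshold rule is an illustrative clustering choice, not the article's method.

```python
# Normalize rewritten trend queries by similarity clustering.

import math

def embed(text):
    """Toy embedding: bag-of-words counts (real system: LLM embeddings)."""
    vec = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_queries(queries, threshold=0.5):
    """Greedy clustering: a query joins the first cluster whose
    representative is similar enough, else starts a new cluster."""
    clusters = []  # list of (representative_vec, member_queries)
    for q in queries:
        v = embed(q)
        for rep, members in clusters:
            if cosine(v, rep) >= threshold:
                members.append(q)
                break
        else:
            clusters.append((v, [q]))
    return [members for _, members in clusters]

queries = [
    "2025 children's dress trend new styles",
    "2025 children's dress trend innovative colors",
    "men's canvas tote bag",
]
groups = cluster_queries(queries)
```

Near‑duplicate rewrites of the same trend collapse into one cluster, from which a single normalized trend query can be chosen.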

Solution Pipeline

Build a comprehensive seed‑query library covering leaf categories, hot search terms, and growth keywords.

Perform category‑trend analysis to identify key attributes (IP, style, color, design).

Integrate multi‑source RAG information (Xiaohongshu, Worthbuy, Google, Taobao).

Summarize trend selling points and rewrite them into e‑commerce‑friendly queries.

Normalize queries via embedding similarity clustering.

Nearline Design

To meet real‑time response constraints, 1688 adds a nearline pipeline on top of the offline flow. Triggers include full‑domain behavior (search, purchase) and a real‑time Blink data window (e.g., 5 product detail views). Functional modules consist of:

User offline‑online data collection (long‑term preferences, global clicks/favorites/add‑to‑cart/orders, RAG‑derived trend knowledge).

Two‑stage personalized recall: U2Q (User‑to‑Query generation) → Q2Vec (Query embedding) → Q2I (Query‑to‑Item) with both non‑personalized high‑relevance and dynamic weighting for personalization.

Unified coarse‑ and fine‑ranking scoring to ensure recommendation quality.
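The two‑stage recall chain above can be sketched end to end. The character‑histogram "embedding" and dot‑product search are toy stand‑ins for the LLM query generator and the vector index; only the U2Q → Q2Vec → Q2I shape matches the article.

```python
# Minimal sketch of the U2Q -> Q2Vec -> Q2I recall chain.

def u2q(user_profile):
    """User-to-Query: generate demand queries from the profile
    (an LLM in the real system)."""
    return [f"trending {cat}" for cat in user_profile["top_categories"]]

def q2vec(query):
    """Query embedding: here a trivial character histogram."""
    vec = [0] * 26
    for ch in query.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1
    return vec

def q2i(qvec, item_index, k=2):
    """Query-to-Item: top-k items by dot product over the index."""
    scored = [(sum(a * b for a, b in zip(qvec, ivec)), item)
              for item, ivec in item_index.items()]
    scored.sort(reverse=True)
    return [item for _, item in scored[:k]]

user = {"top_categories": ["dress"]}
item_index = {"floral_dress": q2vec("floral dress"),
              "canvas_bag": q2vec("canvas bag")}
items = [q2i(q2vec(q), item_index) for q in u2q(user)]
```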

Optimizations keep the average end‑to‑end latency within 7–10 seconds.

Future Outlook

Upcoming work focuses on:

Refining the definition of discovery, moving from coarse category metrics to generative semantic ID clusters.

Enriching B‑type user demand inference with image search and inquiry data.

Strengthening multimodal reasoning to incorporate visual cues, especially for fashion items.

Deploying agentic RAG that autonomously selects external knowledge sources (Xiaohongshu, Google, Taobao) based on long‑term business needs.

Generating queries directly from multimodal embeddings to improve recall diversity and precision.

Through these advances, 1688 aims to continuously improve recommendation discovery and business‑opportunity detection for wholesale buyers.

Tags: AI, Recommendation Systems, User Modeling, Inference, E‑commerce
Written by

DataFunSummit

Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.
