From Concept to Deployment: The Evolution of 1688’s AI Purchasing Assistant “Yuanbao”
This article chronicles the development of 1688's AI buyer assistant "Yuanbao": why an e‑commerce AI assistant is needed, its functional design, the constraints that shaped the MVP, the shift to a data‑driven 2.0 version, future prospects, and a closing Q&A, offering practical insights for rolling out AI products on mature B2B e‑commerce platforms.
Guide: This article shares the experience of 1688's AI buyer assistant "Yuanbao" from its initial concept through its version iterations, offering a reference for applying AI in mature e‑commerce products.
Today's presentation covers six points:
Why build an AI assistant for e‑commerce buyers?
What kind of AI procurement assistant should be built?
MVP version: coping with model capability constraints.
Version 2.0: shifting to a data‑driven agent design.
Future outlook.
Q&A session.
Why build an AI assistant for e‑commerce buyers?
1688 is a B2B platform where most buyers are enterprises, resellers, or offline stores. The AI assistant aims to help these B‑class buyers improve procurement efficiency.
AI has long been used in e‑commerce for marketing, recommendation, and customer service, but most buyer‑facing apps still lack AI‑enabled scenarios despite the potential of large language models (LLMs) to assist.
What new possibilities does the assistant bring?
Analysis of the buyer journey reveals three major pain points: long procurement chains, high information density across pages, and the need for precise, objective decision‑making by B‑class buyers.
The chain from demand generation to order completion is lengthy.
Information is scattered across many pages and layers, so buyers need to know the platform well to find what they need.
B‑class buyers demand high accuracy and often compare many items to find the best fit.
These issues lead to low procurement efficiency and high time cost.
Large models excel at understanding unstructured data and supporting decision‑making. They can capture buyer intent via voice or text, and also ingest non‑structured supply‑side data such as shop descriptions and user reviews, enabling a more proactive, assistant‑driven procurement flow.
The goal of the 1688 AI procurement assistant is to assist buyers at each node of the procurement chain, improving information acquisition and processing to boost efficiency.
What kind of AI procurement assistant should be built?
1. Functional entry points
We first analyzed user flows (home search, product detail, cart) and identified the main pain point: difficulty finding desired products and comparing details.
From a technical perspective, we matched these needs with LLM capabilities such as summarization, classification, and content polishing, selecting feasible yet valuable features.
We decided to focus on the cart and product‑detail pages, providing answers to product questions, summarizing buyer reviews, and comparing multiple items.
2. Interaction approach
Three common interaction modes for mature products are:
Single‑point embedding: low intrusion but users may not realize an AI assistant is present.
Independent chat page: easy to launch and decoupled from existing features, but it can interrupt the user flow and requires user education.
Agent‑style commands (e.g., slash commands): balances low intrusion with flexibility but may need higher engineering effort.
Considering the exploratory nature of the project, we chose the independent chat page for its low cost and minimal impact on existing UX.
The assistant architecture is agent‑centric: a top‑level intent‑recognition agent routes queries to specialized agents (product comparison, review summarization, etc.). Underlying capabilities include LLMs, multimodal models, Retrieval‑Augmented Generation (RAG), and internal data tools. Data quality and database construction are critical for effective agent performance.
When a user enters a query, the intent router selects the appropriate agent, which then generates a response using tool and knowledge‑base support.
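The routing flow described above can be sketched roughly as follows. All names here (`classify_intent`, `compare_products`, and so on) are illustrative placeholders rather than 1688's actual code, and in practice the top‑level intent recognizer is itself an LLM‑backed agent, not keyword matching:

```python
# Hypothetical sketch of the agent-centric routing described in the text.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class AgentResponse:
    agent: str   # which specialized agent handled the query
    answer: str  # the generated answer


def compare_products(query: str) -> AgentResponse:
    # In the real system this agent would call tools and the knowledge
    # base to fetch structured product metrics before answering.
    return AgentResponse("comparison", f"Comparison result for: {query}")


def summarize_reviews(query: str) -> AgentResponse:
    # In the real system this agent would retrieve reviews via RAG and
    # summarize them with an LLM.
    return AgentResponse("review_summary", f"Review summary for: {query}")


AGENTS: Dict[str, Callable[[str], AgentResponse]] = {
    "compare": compare_products,
    "reviews": summarize_reviews,
}


def classify_intent(query: str) -> str:
    # Placeholder for the top-level intent-recognition agent; the real
    # one is an LLM call, not substring checks.
    if "vs" in query or "compare" in query:
        return "compare"
    return "reviews"


def route(query: str) -> AgentResponse:
    """Top-level entry point: recognize intent, then dispatch."""
    return AGENTS[classify_intent(query)](query)
```

The point of the structure is that adding a new specialized agent only means registering one more entry in the dispatch table.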
MVP version (model‑constraint handling)
During the initial MVP, we used Qwen 7B/14B models owing to limited model availability at the time. Their general abilities (summarization, polishing) were sufficient, so we used the small models directly without fine‑tuning.
However, the small model exhibited issues such as repetitive content, logical errors, and unstable output formats, especially in the product‑comparison agent.
We tried prompt engineering but saw limited gains. Instead, we externalized core decision logic into code, feeding the results to the model for final summarization and polishing.
For example, the comparison agent now receives structured metrics (return rate, shipping speed, seller rating) generated by code, which the model then turns into a natural‑language conclusion.
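A minimal sketch of this division of labor follows; the metric names and scoring weights are invented for illustration (the talk does not describe the actual scoring logic). Code makes the decision deterministically, and only the verbalization is left to the model:

```python
# Hypothetical sketch: decision logic lives in code; the LLM only turns
# the pre-computed result into natural language.

def pick_winner(products):
    """Deterministically score products: lower return rate, faster
    shipping, and higher seller rating are better. Weights are
    illustrative, not 1688's actual values."""
    def score(p):
        return (-p["return_rate"] * 2.0
                - p["ship_days"] * 0.5
                + p["seller_rating"] * 1.0)
    return max(products, key=score)


def build_summary_prompt(products, winner):
    """Assemble the structured metrics plus the code-made decision into
    a prompt; the model's job is only summarization and polishing."""
    lines = [
        f'{p["name"]}: return rate {p["return_rate"]:.0%}, '
        f'ships in {p["ship_days"]}d, rating {p["seller_rating"]}'
        for p in products
    ]
    return ("Turn these pre-computed comparison metrics into a short, "
            "natural-language purchasing recommendation.\n"
            + "\n".join(lines)
            + f'\nRecommended choice (decided by code): {winner["name"]}')
```

Because the model never makes the comparison itself, the small model's logical errors and unstable output formats no longer affect the conclusion.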
The 1.0 MVP showed that while the model could produce correct answers, users found the output obvious and rigid, limiting its impact on procurement decisions.
Version 2.0: Data‑driven AI product design
With the release of Qwen 1.5 72B, we switched to the larger model without changing prompts or agents, which dramatically improved baseline capabilities.
Data quality then became the main bottleneck. We focused on four aspects:
Data richness – identifying needed data sources and ensuring they can be scaled.
Data quality – coverage, completeness, accuracy, and timeliness (e.g., price consistency across pages).
Data usage – deciding when to feed data via tools versus RAG, and performing filtering or post‑processing to present clean natural‑language inputs.
Data validation – offline testing of agents to pinpoint and resolve quality issues.
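As a rough illustration of the quality and validation axes above, a minimal offline check might look like the following; the field names (`price_detail_page`, `price_list_page`) are hypothetical, chosen to mirror the price-consistency example:

```python
# Hypothetical offline data-quality checks along the axes described above.

def completeness(records, field):
    """Share of records where the field is present and non-empty
    (the 'coverage/completeness' axis)."""
    return sum(1 for r in records if r.get(field) not in (None, "")) / len(records)


def price_consistent(record, tolerance=0.0):
    """Accuracy/timeliness check from the text's example: the price shown
    on the detail page must agree with the price on the list page."""
    return abs(record["price_detail_page"] - record["price_list_page"]) <= tolerance


def data_quality_report(records, required_fields):
    """Summarize quality metrics before the data is handed to agents."""
    return {
        "completeness": {f: completeness(records, f) for f in required_fields},
        "price_consistency": sum(price_consistent(r) for r in records) / len(records),
    }
```

Running reports like this over the agents' input data offline is one way to pinpoint quality issues before launch, as the validation step describes.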
The product design workflow for AI agents adds AI‑specific checkpoints (ideal effect definition, feasibility verification, data validation) to the traditional product process.
After successful data validation, development proceeds.
Post‑launch cases (four agents)
1. Product comparison – richer data and model‑driven decision making raised user satisfaction from ~35% to ~80%.
2. Product suggestion – a knowledge base with ~20 product metrics and user‑profile information enables personalized recommendations.
3. Detail consultation – by extracting structured text from product images via OCR and feeding it to the model, the assistant can, for example, recommend sizes based on the buyer's height and weight, improving satisfaction by 20 percentage points.
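A toy sketch of that OCR-to-recommendation step follows; the size-chart line format assumed here (e.g. "M 165-170cm 55-65kg") is invented for illustration, and real OCR output would need far more robust parsing:

```python
# Hypothetical sketch: parse OCR'd size-chart text into structured rows,
# then pick a size from the buyer's height and weight.
import re

def parse_size_chart(ocr_text):
    """Expects lines like 'M 165-170cm 55-65kg' (format is an assumption)."""
    rows = []
    for line in ocr_text.splitlines():
        m = re.match(r"(\w+)\s+(\d+)-(\d+)cm\s+(\d+)-(\d+)kg", line.strip())
        if m:
            size, h_lo, h_hi, w_lo, w_hi = m.groups()
            rows.append({"size": size,
                         "height": (int(h_lo), int(h_hi)),
                         "weight": (int(w_lo), int(w_hi))})
    return rows


def recommend_size(rows, height_cm, weight_kg):
    """Return the first size whose height and weight ranges both match."""
    for r in rows:
        if (r["height"][0] <= height_cm <= r["height"][1]
                and r["weight"][0] <= weight_kg <= r["weight"][1]):
            return r["size"]
    return None  # no match; fall back to asking the model or the buyer
```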
4. B‑buyer market‑trend agent – identifies hot‑selling items, parses user intent, and returns category, trend, and price analysis, reducing the effort of browsing list pages.
Future outlook
When model capabilities are solid, data quality becomes the decisive factor for product success. Interaction via chat is not the final form; the ultimate goal is an end‑to‑end solution where users describe needs and the system delivers the optimal product instantly.
A “deep‑search” feature currently in limited beta will first match the most relevant products, then provide structured, explainable reasons, shortening the buyer’s journey and catering to long‑tail B‑buyer demands. (Available at aizhao.1688.com, PC only.)
Finally, we invite everyone to download the 1688 app and try the AI assistant “Yuanbao”.
Q&A
Q1: Why is the “deep‑search” feature more suitable for B‑class buyers than ordinary consumers?
A1: B‑class buyers have precise business goals and are willing to invest effort for better matches, making the feature more valuable for them.
Q2: How do you handle the model’s lack of industry knowledge?
A2: We use RAG to inject domain data, explain internal data before feeding it to the model, and incorporate external knowledge bases such as Quark Engine.
Thank you for listening.
DataFunSummit
Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.