LLM‑Powered Intent Understanding, RAG QA, and Knowledge Base Maintenance for Recycling

This article details how Zhuanzhuan leverages large language models to enhance on‑site device inspection through a three‑stage pipeline—intent understanding, retrieval‑augmented generation QA, and automated knowledge‑base upkeep—highlighting technical innovations, workflow integration, and the resulting operational benefits.

Zhuanzhuan Tech

Introduction

As a company focused on the circular economy, Zhuanzhuan promotes green consumption and has upgraded its services with on‑site collection for used electronics and integrated retail, consignment, and recycling stores, aiming to improve user trust and convenience.

In the on‑site collection workflow, the quality‑inspection step is critical; engineers must quickly and accurately assess diverse products, a task that is hard to standardize and scale. To assist engineers, Zhuanzhuan built an LLM‑based intelligent QA system that acts as a real‑time AI expert.

This article examines three core application scenarios—intent understanding, RAG QA, and knowledge‑base maintenance—detailing the technical innovations and practical solutions employed.

Scenario 1: Intent Understanding

Intent understanding bridges user queries and system knowledge, determining the success of the QA system.

1.1 Core Goal

Engineers often pose vague or informal questions. The goal is to guide them to refine their queries, lowering the cognitive barrier and capturing true needs.

1.2 Dual‑Module Collaboration

The solution forms a closed loop of “intent recognition → intent rewriting”. The recognition module performs coarse‑grained classification and semantic parsing, while the rewriting module decides whether to directly standardize the query or ask follow‑up questions to fill missing details.

Intent Recognition Sub‑module

The main challenges are extracting the intent direction (e.g., SKU identification, repair intent, functional support) and key entities (components, fault symptoms). The model is built in two stages: first, a generic LLM with carefully crafted prompts; second, domain‑adapted pre‑training and instruction fine‑tuning using data collected from the first stage.
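The stage-one approach can be sketched as a prompt-plus-parser pair: the LLM is asked to emit the intent direction and key entities as JSON, and a defensive parser normalizes the reply. This is a minimal illustration, not Zhuanzhuan's actual prompt or taxonomy; the intent labels and entity schema below are assumptions.

```python
import json

# Illustrative stage-one intent recognition: prompt a generic LLM for a JSON
# classification. The label set and schema are hypothetical examples.
INTENT_PROMPT = """You are a device-inspection assistant. Classify the query.
Return JSON: {{"intent": one of ["sku_identification", "repair", "functional_support"],
"component": string or null, "symptom": string or null}}

Query: {query}
JSON:"""

def build_intent_prompt(query: str) -> str:
    """Fill the classification prompt with the engineer's raw query."""
    return INTENT_PROMPT.format(query=query)

def parse_intent(llm_output: str) -> dict:
    """Parse the LLM's JSON reply; fall back to 'unknown' on malformed output."""
    try:
        result = json.loads(llm_output)
        return {
            "intent": result.get("intent", "unknown"),
            "component": result.get("component"),
            "symptom": result.get("symptom"),
        }
    except json.JSONDecodeError:
        return {"intent": "unknown", "component": None, "symptom": None}
```

Logging the (query, parsed intent) pairs from this stage yields exactly the kind of data the second stage reuses for domain-adapted pre-training and instruction fine-tuning.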

Intent recognition model iteration

Intent Rewriting Sub‑module

The rewriting module must decide, based on recognition results and business rules, whether to answer directly or to ask clarifying questions, focusing on missing “component” and “symptom” entities.

During online inference, the system queries a “component‑symptom” knowledge graph built from the QA knowledge base; if guidance is needed, it generates a precise follow‑up.
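The rewrite decision can be sketched as a lookup against a component-to-symptom mapping standing in for the knowledge graph: if both entities are present the query is standardized and answered; if only the component is known, the graph supplies selectable symptom options for a follow-up. The graph contents and action names below are illustrative assumptions.

```python
# Toy stand-in for the "component-symptom" knowledge graph built from the QA base.
COMPONENT_SYMPTOMS = {
    "battery": ["drains fast", "won't charge", "swollen"],
    "screen": ["flicker", "dead pixels", "cracked"],
}

def decide_rewrite(intent: dict) -> dict:
    """Decide whether to answer directly or ask a clarifying follow-up."""
    component, symptom = intent.get("component"), intent.get("symptom")
    if component and symptom:
        # Both entities present: standardize the query and answer directly.
        return {"action": "answer", "query": f"{component}: {symptom}"}
    if component and component in COMPONENT_SYMPTOMS:
        # Symptom missing: offer selectable options drawn from the graph
        # instead of asking an open-ended question.
        options = COMPONENT_SYMPTOMS[component]
        return {"action": "clarify",
                "question": f"Which {component} symptom? Options: {', '.join(options)}"}
    # Neither entity usable: ask which component is affected first.
    return {"action": "clarify",
            "question": "Which component is affected? "
                        f"Options: {', '.join(COMPONENT_SYMPTOMS)}"}
```

Turning the open question into a short option list is what lowers the engineer's cognitive barrier while still producing structured entities for downstream RAG.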

Intent rewriting decision mechanism

1.3 Core Value

The approach combines LLM semantic power with a structured knowledge graph, delivering three benefits: turning open questions into selectable options for smoother interaction, providing structured data for downstream RAG QA, and encapsulating Zhuanzhuan’s domain expertise as a competitive advantage.

Scenario 2: RAG QA

RAG QA is the user‑facing component that delivers value; users care only about the answer quality.

2.1 Core Goal

After intent understanding, the system must provide accurate, timely, and authoritative answers to specific queries.

2.2 Three‑Stage Refined Pipeline (Retrieval → Re‑ranking → Generation)

The initial RAG implementation suffered from noisy retrieval results and hallucinations in generation. The improved pipeline first retrieves all relevant knowledge, then re‑ranks to filter out noise, and finally generates answers using a clean context and strict prompts, reducing hallucinations.
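The three stages can be sketched as a recall-oriented retriever, a precision-oriented re-ranker, and a strict generation prompt. The scoring functions below are toy keyword-overlap stand-ins; a real deployment would use a vector store for retrieval and a cross-encoder for re-ranking.

```python
def retrieve(query: str, kb: list[str], k: int = 5) -> list[str]:
    """Stage 1: recall-oriented retrieval -- keep anything overlapping the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in kb]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def rerank(query: str, docs: list[str], threshold: float = 0.3) -> list[str]:
    """Stage 2: precision-oriented filtering -- drop weakly related candidates."""
    terms = set(query.lower().split())
    kept = []
    for doc in docs:
        overlap = len(terms & set(doc.lower().split())) / max(len(terms), 1)
        if overlap >= threshold:
            kept.append(doc)
    return kept

def build_generation_prompt(query: str, docs: list[str]) -> str:
    """Stage 3: strict prompt -- answer only from the cleaned context."""
    context = "\n".join(f"- {d}" for d in docs)
    return ("Answer ONLY from the context below. If the context is insufficient, "
            f"say you don't know.\nContext:\n{context}\nQuestion: {query}\nAnswer:")
```

Separating recall from precision is the key move: retrieval is allowed to over-fetch, and the re-ranker, not the generator, is responsible for discarding noise before the LLM ever sees it.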

RAG QA pipeline optimization

2.3 Core Value

The solution fuses LLM language capabilities with Zhuanzhuan’s specialized knowledge base, ensuring answers are both natural and technically accurate, thereby creating a key technical moat.

Scenario 3: Knowledge‑Base Maintenance

A high‑quality, large‑scale knowledge base is essential for stable RAG performance.

3.1 Core Goal

As business complexity grows and new inspection standards emerge, manual KB upkeep can’t keep pace; an intelligent maintenance pipeline is needed.

3.2 Human‑Machine Collaborative Knowledge Factory

The workflow consists of four steps:

QA Mining: extract QA pairs from real conversations using LLM semantic extraction.

Standard Question Mining: cluster similar user questions, select representative “standard questions”, and map others as “similar questions”.

Answer Generation: feed clustered QA data into LLM prompts to produce a single, consistent official answer for each standard question.

Quality Inspection & Ingestion: human reviewers verify question‑answer mappings and compliance before storing them in the KB.
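The standard-question mining step can be sketched as a greedy clustering pass: similar user questions are grouped, the cluster seed serves as the provisional standard question, and the rest are mapped as similar questions. Jaccard token overlap below is a stdlib-only stand-in for the embedding-based similarity a production system would use; the threshold is an assumption.

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two questions (toy stand-in for embeddings)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def cluster_questions(questions: list[str], threshold: float = 0.5) -> list[dict]:
    """Greedily group similar questions; the seed becomes the standard question."""
    clusters: list[dict] = []
    for q in questions:
        for c in clusters:
            if jaccard(q, c["standard"]) >= threshold:
                c["similar"].append(q)  # map as a "similar question"
                break
        else:
            # No close cluster: this question seeds a new one and becomes
            # its provisional standard question.
            clusters.append({"standard": q, "similar": []})
    return clusters
```

Cluster size also gives the prioritization signal mentioned below: the largest clusters correspond to the highest-frequency issues and are sent to answer generation and human review first.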

Human‑machine collaborative knowledge factory

3.3 Core Value

The LLM‑driven factory offers practicality, efficiency, and deployability: it aligns KB content with real user needs, automates extraction and generation while keeping humans in the loop for quality, and prioritizes high‑frequency issues through clustering.

Conclusion

In Zhuanzhuan’s intelligent QA scenario, the three LLM applications are tightly linked: intent understanding acts as the brain, RAG QA as the heart, and knowledge‑base maintenance as the blood‑forming system, together forming an evolving, increasingly intelligent solution.

By combining domain fine‑tuning, RAG architecture, knowledge graphs, and human‑machine collaboration, the system achieves continuous improvement and superior performance.


AI · LLM · RAG · Knowledge Base · Intent Understanding
Written by

Zhuanzhuan Tech

A platform for Zhuanzhuan R&D and industry peers to learn and exchange technology, regularly sharing frontline experience and cutting‑edge topics. We welcome practical discussions and sharing; contact waterystone with any questions.
