Deep Learning Based Automatic QA Tool – qa_match Open‑Source Project Overview
This article introduces 58.com's open‑source qa_match tool, covering its deep‑learning‑based question‑answer matching architecture, support for hierarchical knowledge bases, the lightweight pre‑trained model SPTM, and practical applications, and recaps the accompanying live‑stream presentation and Q&A session.
qa_match is an open‑source question‑answer matching tool developed by 58.com. It uses deep learning to support QA over both single‑level and two‑level (hierarchical) knowledge bases, combining domain classification, intent matching, and a lightweight pre‑trained language model (SPTM) that improves downstream tasks.
The project repository is available at https://github.com/wuba/qa_match, and detailed articles introduce its architecture and recent updates.
The presentation covered six main topics: background of automatic QA, an LSTM‑based domain classification model, a DSSM‑based intent recognition model, fusion of classification and matching models, the lightweight SPTM pre‑training model, and application examples.
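The core of the DSSM‑style intent recognition step is ranking standard questions by vector similarity: an encoder (an LSTM in qa_match's case) maps the user query and each candidate standard question into a shared vector space, and cosine similarity picks the best match. The sketch below illustrates only the scoring step with toy vectors standing in for encoder outputs; the function names are illustrative and not part of qa_match's API.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(query_vec, candidate_vecs):
    """Score a query vector against each candidate standard-question
    vector; return candidate indices sorted best-first."""
    scores = [cosine_sim(query_vec, c) for c in candidate_vecs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# Toy vectors standing in for encoder outputs (illustrative only).
query = np.array([1.0, 0.0, 1.0])
candidates = [np.array([1.0, 0.1, 0.9]),   # close to the query
              np.array([0.0, 1.0, 0.0])]   # nearly orthogonal
best = rank_candidates(query, candidates)[0]
```

In the fused pipeline described in the talk, this matching score would be combined with the LSTM domain classifier's output to decide between a direct answer, a clarifying list, or a rejection.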
Speaker He Rui, a senior algorithm engineer at 58.com AI Lab, shared insights on model training, data preparation, and deployment, noting that offline DSSM training uses a default positive‑to‑negative sample ratio of 1:200 and that the knowledge base in the demo contains about 90k entries.
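The 1:200 ratio means that for every positive (query, standard question) pair, 200 non‑matching standard questions are sampled from the knowledge base as negatives. A minimal sketch of that pair construction, under the assumption that negatives are drawn uniformly at random (the helper name and sampling scheme are illustrative, not qa_match's actual code):

```python
import random

def build_training_pairs(positives, knowledge_base, neg_ratio=200, seed=0):
    """For each (query, standard_question) positive pair, sample
    `neg_ratio` non-matching standard questions as negatives.
    Emits (query, candidate, label) triples with label 1/0."""
    rng = random.Random(seed)
    pairs = []
    for query, pos in positives:
        pairs.append((query, pos, 1))
        negatives = [q for q in knowledge_base if q != pos]
        for neg in rng.choices(negatives, k=neg_ratio):
            pairs.append((query, neg, 0))
    return pairs

# Toy knowledge base; the demo's real one holds ~90k entries.
kb = [f"std_q_{i}" for i in range(1000)]
pairs = build_training_pairs([("how to refund", "std_q_7")], kb)
```

With one positive and the default ratio, each query contributes 201 training pairs, which is why the ratio is a key knob when training-set size or class imbalance becomes a concern.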
The Q&A session addressed practical concerns: how to configure negative samples, the scale of knowledge‑base generation, the number of standard questions, the effect of using Transformers for pre‑training, why the NSP task was removed, how to balance result‑rate metrics, and hardware specifications (training on a P40 GPU with a Xeon E5‑2620 v4 CPU; inference on CPU).
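Dropping NSP leaves the masked‑language‑model objective as the sole pre‑training task: a fraction of input tokens is replaced with a mask symbol and the model must predict the originals. The sketch below shows only the masking step in plain Python; the token list, mask rate, and function name are illustrative assumptions, not SPTM's implementation (which, like BERT, applies further replacement rules on top of plain masking).

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Randomly replace ~mask_prob of tokens with [MASK]; the masked-LM
    objective trains the model to predict the originals. Positions that
    were not masked carry a None label and are not scored."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(MASK)
            labels.append(tok)      # prediction target
        else:
            masked.append(tok)
            labels.append(None)     # not scored
    return masked, labels

tokens = ["how", "do", "i", "post", "a", "listing", "on", "58"]
masked, labels = mask_tokens(tokens, mask_prob=0.5, seed=1)
```

Because each training example is a single sequence with per‑token targets, removing NSP also removes the need for sentence‑pair inputs, which simplifies data preparation for short QA queries.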
The recap concludes with thanks to the audience and provides a video review of the session.
58 Tech
Official tech channel of 58, a platform for tech innovation, sharing, and communication.