Design and Implementation of the Internal Intelligent QA Chatbot “Jarvis”
This article covers the motivation and micro-service architecture behind Jarvis, the company's internal intelligent QA chatbot; its V1.0 browser-based NLP prototype; the V2.0 AI-enhanced version built on BM25 and BERT; integration with ChatUI and a DingTalk bot; command-based automation; and future plans.
With the rise of conversational AI, the company needed an internal intelligent QA bot to reduce repetitive manual support work. "Jarvis" (Just A Rather Very Intelligent System) was created to centralize FAQs, automate ONCALL tracking, and provide a closed‑loop for problem data.
Architecture Design
The system follows a micro‑service design, making it easy to extend and integrate with existing company capabilities. Its core advantages are lightweight deployment, proximity to users, and simple API‑driven interaction.
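Because every client talks to the backend over a plain HTTP API, integrating a new client reduces to a single request. A minimal sketch of that interaction (the /ask endpoint, the q parameter, and the host are illustrative placeholders, not the real internal API):

```javascript
// Build the QA request URL; kept as a pure function so it is easy to test.
function buildAskUrl(base, question) {
  return `${base}/ask?q=${encodeURIComponent(question)}`;
}

// Ask Jarvis a question and return the parsed JSON answer.
// Assumes a JSON-over-HTTP endpoint; the host name is a placeholder.
async function askJarvis(question) {
  const res = await fetch(buildAskUrl('https://api.server.com', question));
  if (!res.ok) throw new Error(`Jarvis API error: ${res.status}`);
  return res.json();
}
```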
QA Answering Capability
The first version (V1.0) uses nlp.js (the @nlpjs packages, also published in bundled form as node-nlp) for quick exploration, because the library is JavaScript and therefore familiar to front-end developers. The implementation steps are:
Jarvis V1.0 Version
Step 1: Project Setup
Create the following project structure:
├── buildable.js
├── dist
│ └── bundle.js
├── index.html
└── package.json

In buildable.js, add the core NLP imports and expose them on window:
const core = require('@nlpjs/core');
const nlp = require('@nlpjs/nlp');
const langenmin = require('@nlpjs/lang-en-min');
const requestrn = require('@nlpjs/request-rn');
window.nlpjs = { ...core, ...nlp, ...langenmin, ...requestrn };

Update package.json with the required dependencies and a build script:
{
"name": "nlpjs-web",
"version": "1.0.0",
"scripts": { "build": "browserify ./buildable.js | terser --compress --mangle > ./dist/bundle.js" },
"devDependencies": {
"@nlpjs/core": "^4.14.0",
"@nlpjs/lang-en-min": "^4.14.0",
"@nlpjs/nlp": "^4.15.0",
"@nlpjs/request-rn": "^4.14.3",
"browserify": "^17.0.0",
"terser": "^5.3.8"
}
}

Reference the bundled script in index.html and add a simple chat UI:
<html>
<head>
<title>NLP in a browser</title>
<script src="./dist/bundle.js"></script>
<script>
const {containerBootstrap, Nlp, LangEn, fs} = window.nlpjs;
const setupNLP = async corpus => {
const container = containerBootstrap();
container.register('fs', fs);
container.use(Nlp);
container.use(LangEn);
const nlp = container.get('nlp');
nlp.settings.autoSave = false;
await nlp.addCorpus(corpus);
await nlp.train();
return nlp;
};
const onChatSubmit = nlp => async event => {
event.preventDefault();
const chat = document.getElementById('chat');
const chatInput = document.getElementById('chatInput');
chat.innerHTML += `<p>you: ${chatInput.value}</p>`;
const response = await nlp.process('en', chatInput.value);
chat.innerHTML += `<p>chatbot: ${response.answer}</p>`;
chatInput.value = '';
};
(async () => {
const nlp = await setupNLP('https://raw.githubusercontent.com/jesus-seijas-sp/nlpjs-examples/master/01.quickstart/02.filecorpus/corpus-en.json');
const chatForm = document.getElementById('chatbotForm');
chatForm.addEventListener('submit', onChatSubmit(nlp));
})();
</script>
</head>
<body>
<h1>NLP in a browser</h1>
<div id="chat"></div>
<form id="chatbotForm">
<input type="text" id="chatInput"/>
<input type="submit" value="send"/>
</form>
</body>
</html>

Run npm run build to generate dist/bundle.js, then open index.html in a browser to interact with the chatbot.
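The corpus URL passed to setupNLP above points at a sample file in nlp.js's corpus JSON format. To serve Jarvis's own FAQs, it is enough to host a corpus file of the same shape; a minimal example (the intent name and texts here are illustrative, not from the real corpus):

```json
{
  "name": "Jarvis FAQ",
  "locale": "en-US",
  "data": [
    {
      "intent": "oncall.who",
      "utterances": ["who is on call", "who is on duty today"],
      "answers": ["Today's on-call engineer is posted in the ONCALL channel."]
    }
  ]
}
```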
Jarvis V2.0 Version
To overcome V1.0 limitations, the second version leverages company AI resources: BM25 for fast matching, BERT for semantic parsing, and a RESTful API for backend communication. This enables web, mobile, and plugin clients.
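The fast-matching stage can be illustrated with a toy BM25 ranker. This is a from-scratch sketch for intuition, not the production retrieval service; k1 and b are set to their common defaults.

```javascript
// Toy BM25: score each document against a query.
// Higher score = better lexical match. Whitespace tokenization only.
function bm25Scores(query, docs, k1 = 1.5, b = 0.75) {
  const tokenized = docs.map(d => d.toLowerCase().split(/\s+/));
  const avgLen = tokenized.reduce((s, t) => s + t.length, 0) / tokenized.length;
  const terms = query.toLowerCase().split(/\s+/);
  return tokenized.map(tokens => {
    let score = 0;
    for (const term of terms) {
      const tf = tokens.filter(t => t === term).length; // term frequency in doc
      const df = tokenized.filter(t => t.includes(term)).length; // docs containing term
      if (tf === 0 || df === 0) continue;
      const idf = Math.log(1 + (docs.length - df + 0.5) / (df + 0.5));
      score += idf * (tf * (k1 + 1)) /
        (tf + k1 * (1 - b + b * tokens.length / avgLen));
    }
    return score;
  });
}
```

BM25 handles exact keyword overlap cheaply; BERT then covers the semantic cases (paraphrases, synonyms) that lexical scoring misses, which is why the two are combined.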
Web UI is built with ChatUI. The following files illustrate the setup:
Using ChatUI
index.html (simplified):
<!DOCTYPE html>
<html lang="zh-CN">
<head>
<meta name="renderer" content="webkit"/>
<meta name="force-rendering" content="webkit"/>
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"/>
<meta charset="UTF-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=0, minimum-scale=1.0, maximum-scale=1.0, viewport-fit=cover"/>
<title>Jarvis</title>
<link rel="stylesheet" href="//g.alicdn.com/chatui/sdk-v2/0.2.4/sdk.css">
</head>
<body>
<div id="root"></div>
<script src="//g.alicdn.com/chatui/sdk-v2/0.2.4/sdk.js"></script>
<script src="//g.alicdn.com/chatui/extensions/0.0.7/isv-parser.js"></script>
<script src="/setup.js"></script>
<script src="//g.alicdn.com/chatui/icons/0.3.0/index.js" async></script>
</body>
</html>

setup.js creates a ChatSDK instance and defines request handling:
var bot = new ChatSDK({
config: {
navbar: { title: '智能助理' }, // "Intelligent Assistant"
robot: { avatar: '//gw.alicdn.com/tfs/TB1U7FBiAT2gK0jSZPcXXcKkpXa-108-108.jpg' },
messages: [{ type: 'text', content: { text: '智能助理为您服务,请问有什么可以帮您?' } }] // "The assistant is at your service. How can I help?"
},
requests: {
send: function(msg) {
if (msg.type === 'text') {
return { url: '//api.server.com/ask', data: { q: msg.content.text } };
}
}
},
handlers: {
parseResponse: function(res, requestType) {
if (requestType === 'send' && res.Messages) {
return isvParser({ data: res });
}
return res;
}
}
});
bot.run();

Opening the page displays a fully functional chatbot powered by the AI backend.
DingTalk Bot Integration
The bot is also exposed as a DingTalk enterprise robot. After creating an app in the DingTalk developer console, the bot receives messages via HTTPS and can reply with text, markdown, or card types.
Example request header JSON:
{
"Content-Type": "application/json; charset=utf-8",
"timestamp": "1577262236757",
"sign": "xxxxxxxxxx"
}

Sending a text message using ding-bot-sdk:
const Bot = require('ding-bot-sdk');
const bot = new Bot({ access_token: 'xxx', secret: 'xxx' });
bot.send({
"msgtype": "text",
"text": { "content": "我就是我, @150XXXXXXXX是不一样的烟火" },
"at": { "atMobiles": ["150XXXXXXXX"], "isAtAll": false }
});

Automation Capability
The second major part of Jarvis is automation. Commands (e.g., ONCALL, 值班 "on-call duty") trigger scripts, while a threshold-based weak-matching mode falls back to command suggestions when the QA confidence score is low.
Command mode uses short keywords; the system extracts parameters after a delimiter (e.g., --).
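Parameter extraction can be sketched as a small pure function (the -- delimiter matches the article; the function name and return shape are illustrative):

```javascript
// Split a command message into the keyword and its argument string.
// Everything before the delimiter selects the handler; everything after
// it is passed to that handler as a parameter.
function parseCommand(input, delimiter = '--') {
  const idx = input.indexOf(delimiter);
  if (idx === -1) return { command: input.trim(), args: '' };
  return {
    command: input.slice(0, idx).trim(),
    args: input.slice(idx + delimiter.length).trim(),
  };
}
```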
Threshold mode example (pseudo‑code):
const THRESHOLD = 0.25;
const questionStr = '今天谁值班'; // "Who is on call today?"
const instructionMap = [
  { instruction: '值班', handler: () => console.log('获取当前值班人员') }, // fetch current on-call staff
  { instruction: 'oncall', handler: () => console.log('触发ONCALL相关') } // trigger ONCALL actions
];
const { score, qaAns } = await getQA(questionStr);
if (score > THRESHOLD) {
  return qaAns; // strong match: answer directly from the QA model
}
// Weak match: fall back to any command whose keyword appears in the question.
const matched = instructionMap.filter(({ instruction }) => questionStr.includes(instruction));
if (matched.length > 0) {
  return matched[0].handler();
}

System orchestration combines multiple internal services (on-call system, ICS, voice system, ticket system) to provide a closed loop for incident tracking and QA model improvement.
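The closed loop can be sketched as one orchestration step. The service interfaces below (oncall, tickets, qa) and their method names are hypothetical stand-ins for the internal systems, not their real APIs:

```javascript
// Hypothetical orchestration sketch: an unanswered question becomes a
// ticket assigned to the current on-call engineer, and is tracked so the
// eventual resolution can be folded back into the QA corpus.
async function escalate(question, services) {
  const engineer = await services.oncall.currentEngineer();
  const ticket = await services.tickets.create({ question, assignee: engineer });
  await services.qa.trackForCorpus(question, ticket.id); // close the data loop
  return ticket;
}
```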
Promotion and Adoption
To drive adoption, the team first clarifies the product’s value (problem solved, solution approach, user scenarios) and then promotes it among R&D peers. Early adopters provide feedback, which is collected via surveys or telemetry for continuous improvement.
Summary and Future Plans
Jarvis currently offers QA answering, command‑driven automation, and integration with DingTalk and web clients. Future work includes contextual dialogue, richer semantic QA, broader system integrations, and a generalized orchestration framework to further enhance developer productivity and happiness.
Sohu Tech Products
A knowledge-sharing platform for Sohu's technology products. As a leading Chinese internet brand offering media, video, search, and gaming services to over 700 million users, Sohu continuously drives technological innovation and practice. We'll share practical insights and tech news here.