Design and Implementation of the Internal Intelligent QA Chatbot “Jarvis”
This article describes the end‑to‑end design, architecture, code implementation, and deployment steps for an internal intelligent QA chatbot named “Jarvis”, covering its V1.0 browser‑based prototype, V2.0 AI‑enhanced version, DingTalk integration, automation features, and future roadmap.
Introduction
With the rise of conversational AI, the team built an internal intelligent QA chatbot called “Jarvis” to automate repetitive FAQ handling, close the ONCALL loop, and provide a unified interface for developers, testers, and business users.
Architecture Design
“Jarvis” follows a micro‑service architecture that makes it easy to extend and integrate with existing company capabilities. The system is divided into a lightweight front‑end client and a back‑end AI service hosted by the AI team.
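The split between the lightweight client and the AI back-end boils down to a single REST call from the client. A minimal sketch of that contract is below; the endpoint URL, method, and payload shape are illustrative assumptions, not the real internal API:

```javascript
// Hypothetical client-side request builder for the AI service.
// The thin client only serializes the question; all NLP runs server-side.
function buildAskRequest(question) {
  return {
    url: 'https://ai.internal.example.com/ask', // assumed endpoint, not the real one
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ q: question })
  };
}

// A caller would hand the result to fetch():
// const req = buildAskRequest('who is on call today');
// fetch(req.url, req).then(res => res.json()).then(({ answer }) => render(answer));
```

Keeping the client this thin is what makes it easy to reuse the same back-end from web, mobile, and IM-plugin front ends.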
QA Answering Capability – V1.0
The first version uses the nlp.js library (node-nlp) for quick prototyping, since it can be bundled to run entirely in the browser and is approachable for front-end developers. The implementation steps are:
Step 1: Project Setup
Create the following file structure:
├── buildable.js
├── dist
│ └── bundle.js
├── index.html
└── package.json

Write the core code in buildable.js:
const core = require('@nlpjs/core');
const nlp = require('@nlpjs/nlp');
const langenmin = require('@nlpjs/lang-en-min');
const requestrn = require('@nlpjs/request-rn');
window.nlpjs = { ...core, ...nlp, ...langenmin, ...requestrn };

Add the following dependencies to package.json:
{
"name": "nlpjs-web",
"version": "1.0.0",
"scripts": {
"build": "browserify ./buildable.js | terser --compress --mangle > ./dist/bundle.js"
},
"devDependencies": {
"@nlpjs/core": "^4.14.0",
"@nlpjs/lang-en-min": "^4.14.0",
"@nlpjs/nlp": "^4.15.0",
"@nlpjs/request-rn": "^4.14.3",
"browserify": "^17.0.0",
"terser": "^5.3.8"
}
}

Reference the bundled script in index.html and add a simple chat UI:
<html>
<head>
<title>NLP in a browser</title>
<script src="./dist/bundle.js"></script>
<script>
const { containerBootstrap, Nlp, LangEn, fs } = window.nlpjs;
const setupNLP = async corpus => {
const container = containerBootstrap();
container.register('fs', fs);
container.use(Nlp);
container.use(LangEn);
const nlp = container.get('nlp');
nlp.settings.autoSave = false;
await nlp.addCorpus(corpus);
await nlp.train();
return nlp;
};
const onChatSubmit = nlp => async event => {
event.preventDefault();
const chat = document.getElementById('chat');
const chatInput = document.getElementById('chatInput');
chat.innerHTML += `<p>you: ${chatInput.value}</p>`;
const response = await nlp.process('en', chatInput.value);
chat.innerHTML += `<p>chatbot: ${response.answer}</p>`;
chatInput.value = '';
};
(async () => {
const nlp = await setupNLP('https://raw.githubusercontent.com/jesus-seijas-sp/nlpjs-examples/master/01.quickstart/02.filecorpus/corpus-en.json');
const chatForm = document.getElementById('chatbotForm');
chatForm.addEventListener('submit', onChatSubmit(nlp));
})();
</script>
</head>
<body>
<h1>NLP in a browser</h1>
<div id="chat"></div>
<form id="chatbotForm">
<input type="text" id="chatInput"/>
<input type="submit" value="send"/>
</form>
</body>
</html>

Run npm run build to generate dist/bundle.js, then open index.html in a browser to interact with the prototype.
V2.0 – AI‑Enhanced Version
The second version moves the heavy‑weight model training to the AI team, using BM25 for fast retrieval and BERT for semantic parsing. The front‑end now consumes a RESTful API, enabling web, mobile, and plugin clients.
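For intuition, the BM25 stage can be sketched in a few lines. This is a generic BM25 scorer with the usual k1/b defaults, not the AI team's actual implementation; the real service's tokenizer, corpus, and parameters are internal:

```javascript
// Generic BM25: score one tokenized document against query terms,
// given the whole corpus (an array of tokenized documents).
function bm25Score(queryTerms, doc, docs, k1 = 1.5, b = 0.75) {
  const N = docs.length;
  const avgdl = docs.reduce((sum, d) => sum + d.length, 0) / N; // average doc length
  let score = 0;
  for (const term of queryTerms) {
    const df = docs.filter(d => d.includes(term)).length; // document frequency
    if (df === 0) continue;
    const idf = Math.log(1 + (N - df + 0.5) / (df + 0.5));
    const tf = doc.filter(t => t === term).length;        // term frequency in this doc
    score += idf * (tf * (k1 + 1)) /
             (tf + k1 * (1 - b + b * (doc.length / avgdl)));
  }
  return score;
}
```

In the V2.0 pipeline a scorer like this prunes the candidate set cheaply, and only the top-ranked FAQ entries are passed to the heavier BERT model for semantic matching.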
ChatUI Integration
Use the open‑source ChatUI SDK to build a richer UI. The minimal files are:
<!DOCTYPE html>
<html lang="zh-CN">
<head>
<meta name="renderer" content="webkit"/>
<meta name="force-rendering" content="webkit"/>
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"/>
<meta charset="UTF-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=0, minimum-scale=1.0, maximum-scale=1.0, viewport-fit=cover"/>
<title>Jarvis</title>
<link rel="stylesheet" href="//g.alicdn.com/chatui/sdk-v2/0.2.4/sdk.css">
</head>
<body>
<div id="root"></div>
<script src="//g.alicdn.com/chatui/sdk-v2/0.2.4/sdk.js"></script>
<script src="//g.alicdn.com/chatui/extensions/0.0.7/isv-parser.js"></script>
<script src="/setup.js"></script>
<script src="//g.alicdn.com/chatui/icons/0.3.0/index.js" async></script>
</body>
</html>

And the corresponding setup.js:
var bot = new ChatSDK({
config: {
navbar: { title: 'Smart Assistant' },
robot: { avatar: '//gw.alicdn.com/tfs/TB1U7FBiAT2gK0jSZPcXXcKkpXa-108-108.jpg' },
messages: [{ type: 'text', content: { text: 'Smart Assistant at your service. How can I help you?' } }]
},
requests: {
send: function (msg) {
if (msg.type === 'text') {
return { url: '//api.server.com/ask', data: { q: msg.content.text } };
}
}
},
handlers: {
parseResponse: function (res, requestType) {
if (requestType === 'send' && res.Messages) {
// Parse ISV message data
return isvParser({ data: res });
}
return res;
}
}
});
bot.run();

DingTalk Bot Integration
The team also built a DingTalk robot to push notifications. The webhook request format is:
{
"Content-Type": "application/json; charset=utf-8",
"timestamp": "1577262236757",
"sign": "xxxxxxxxxx"
}

Sending a text message via the ding-bot-sdk:
const Bot = require('ding-bot-sdk');
const bot = new Bot({
access_token: 'xxx', // Webhook address token (required)
secret: 'xxx' // Signature secret (required)
});
bot.send({
"msgtype": "text",
"text": { "content": "I am me, @150XXXXXXXX a different kind of firework" },
"at": { "atMobiles": ["150XXXXXXXX"], "isAtAll": false }
});

Automation Capability
The chatbot can also trigger scripts, either via command mode (e.g., ONCALL, 值班/on-duty) or via a threshold-based weak-match mode: if the QA service's confidence falls below a threshold, the bot falls back to instruction matching. Example threshold logic:
const THRESHOLD = 0.25;
const questionStr = '今天谁值班'; // "Who is on duty today?"
const instructionMap = [
  { instruction: '值班', handler: () => console.log('Fetch current on-duty staff') },
  { instruction: 'oncall', handler: () => console.log('Trigger ONCALL flow') }
];
// If the QA service is confident enough, return its answer directly.
const { score, qaAns } = await getQA(questionStr);
if (score > THRESHOLD) {
  return qaAns;
}
// Weak match: fall back to the first instruction contained in the question.
const matched = instructionMap.find(({ instruction }) => questionStr.includes(instruction));
if (matched) {
  return matched.handler();
}

Multiple system logics (ONCALL, ICS, voice, ticket) can be orchestrated to form a closed-loop solution for incident tracking and knowledge-base enrichment.
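One way to read that closed loop is as a fallback chain: answer from the knowledge base if confident, otherwise try an instruction, otherwise open a ticket so the missing answer gets added to the KB. The sketch below assumes hypothetical getQA, runInstruction, and createTicket services standing in for the internal ONCALL/ICS/ticket integrations:

```javascript
// Hypothetical closed-loop dispatcher: KB answer → instruction → ticket.
async function answer(question, { getQA, runInstruction, createTicket }) {
  const THRESHOLD = 0.25;
  const { score, qaAns } = await getQA(question);
  if (score > THRESHOLD) return qaAns;            // confident KB answer
  const handled = await runInstruction(question); // weak match: try command handlers
  if (handled !== undefined) return handled;
  // Nothing matched: open a ticket so the answer can later enrich the KB.
  return createTicket(question);
}
```

The ticket branch is what closes the loop: every unanswered question becomes tracked work, and its resolution becomes a new KB entry.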
Promotion and Adoption
The article outlines how to promote the bot within the organization, gather user feedback, and iterate based on usage data.
Summary and Future Plans
“Jarvis” currently provides QA answering and automation. Future work includes context‑aware dialogue, deeper semantic understanding, broader system integrations, and analytics to continuously improve developer happiness.
About the ZCY Technology Team
ZCY Technology Team (Zero), based in Hangzhou, is a growth-oriented team passionate about technology and craftsmanship. With around 500 members, we are building comprehensive engineering, project management, and talent development systems. We are committed to innovation and creating a cloud service ecosystem for government and enterprise procurement. We look forward to your joining us.