
Design and Implementation of a City‑Governance AI Assistant Using Rasa

This article reviews the evolution of voice assistants, compares commercial and open‑source solutions, explains why Rasa was chosen for a city‑governance AI assistant, and details its architecture, data processing, NLU/Core workflow, code implementation, and practical demonstration.

Zhengtong Technical Team

1. Background

Since Apple introduced Siri in 2011, major companies have launched voice assistants and smart‑speaker products such as Microsoft Cortana, Amazon Echo, and Google Home, making intelligent assistants an integral part of daily life.

In the field of city governance, the increasing precision of event handling demands smarter information‑management systems. The Zhiyun AI Assistant was created to meet this need.

2. AI Assistant Solutions

Commercial options include Tencent TBP, Baidu UNIT, and Alibaba Xiaomi, each with distinct advantages (e.g., free beta, pre‑trained models, large data resources) and drawbacks (e.g., incomplete packaging, usage‑based pricing, limited Chinese support).

Open‑source alternatives include Uber Plato, Microsoft Malmo, and Rasa. Plato offers a no‑code training experience but lacks Chinese support; Malmo provides a complete Minecraft‑based AI research platform but also lacks Chinese support; Rasa boasts a large community, optional Chinese tokenizers, and easy integration with existing systems, though it requires manual knowledge‑base construction.

Solution Selection

City‑governance scenarios require precise intent recognition and action prediction, which Rasa Core’s policy and action mechanisms handle well. Consequently, the Zhiyun AI Assistant adopts the Rasa solution.

3. Rasa Overall Design

3.1 Terminology

intent : the user’s goal (e.g., "change channel", "query weather").

slot : key information needed to fulfill an intent (e.g., location, time).

entity : a concrete value extracted from user input that can fill a slot (e.g., a name extracted from a greeting).

Example: the sentence "Hello, I am Rasa" is parsed as {"intent": "/greet", "slots": {"name": "Rasa"}} .
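In Rasa, intents and entities like these are declared in NLU training data. The fragment below is an illustrative sketch of that format, reusing the intent and entity names from the example above; the exact file layout varies by Rasa version:

```yaml
# nlu training data (illustrative fragment)
nlu:
  - intent: greet
    examples: |
      - Hello, I am [Rasa](name)
      - Hi, my name is [Rasa](name)
```

Entity annotations use the `[value](entity_name)` syntax, so the trained model learns to extract `name` from similar greetings.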

3.2 Architecture

The architecture consists of four main components:

Interpreter : extracts intents and entities from user input.

Tracker : records the dialogue state for each user.

Policy : decides the next action based on the current state.

Action : executes the chosen response or operation.

These components work together to enable multi‑turn conversations.

3.3 Process Flow

(Flow diagram omitted.) In one dialogue turn, user input passes through the components above in order: the Interpreter parses it into an intent and entities, the Tracker updates the dialogue state, the Policy selects the next action from that state, and the Action produces the response.
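The interplay of the four components can be sketched as a single dialogue turn. This is an illustrative toy, not Rasa source code; all class names and the rule-based parsing are hypothetical stand-ins:

```python
# Toy sketch of one dialogue turn: Interpreter -> Tracker -> Policy -> Action.
# All classes here are illustrative stand-ins, not real Rasa classes.

class Interpreter:
    """Extracts an intent and entities from raw user text."""
    def parse(self, text):
        if "案件号" in text:  # "case number"
            return {"intent": "rec_query", "entities": {"rec_number": "2020xxxx"}}
        return {"intent": "greet", "entities": {}}

class Tracker:
    """Records the dialogue state (slots, latest intent) for one user."""
    def __init__(self):
        self.slots = {}
        self.latest_intent = None

    def update(self, parsed):
        self.latest_intent = parsed["intent"]
        self.slots.update(parsed["entities"])

class Policy:
    """Decides the next action name from the current tracker state."""
    def next_action(self, tracker):
        if tracker.latest_intent == "rec_query":
            return "action_query_case"
        return "utter_greet"

def handle_turn(text, interpreter, tracker, policy):
    # Interpreter parses, Tracker records, Policy selects the Action to run.
    tracker.update(interpreter.parse(text))
    return policy.next_action(tracker)
```

For example, `handle_turn("帮我查询案件号2020xxxx案件", Interpreter(), Tracker(), Policy())` selects `action_query_case`, while a greeting falls through to `utter_greet`.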

4. Specific Implementation

4.1 Data Processing

The knowledge base, built from domain, encyclopedia, scenario, language, and common‑sense data, provides the foundation for intent recognition. After constructing the knowledge base, data undergoes cleaning, tokenization, part‑of‑speech tagging, and stop‑word removal before being formatted for Rasa training.
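The preprocessing steps above can be sketched as a small pipeline. This is a minimal illustration in plain Python: the stop-word list is a made-up sample, and the whitespace tokenizer is a placeholder for a real Chinese segmenter such as jieba:

```python
import re

# Illustrative stop-word list; a real deployment would load a full lexicon.
STOP_WORDS = {"我", "的", "了", "吗"}

def clean(text):
    """Cleaning step: strip punctuation and collapse whitespace."""
    return re.sub(r"[^\w\u4e00-\u9fff]+", " ", text).strip()

def tokenize(text):
    """Tokenization placeholder: splits on whitespace. In practice a
    Chinese segmenter (e.g., jieba) would produce the word boundaries."""
    return text.split()

def preprocess(text):
    """Full pipeline: clean -> tokenize -> stop-word removal."""
    return [t for t in tokenize(clean(text)) if t not in STOP_WORDS]
```

Running `preprocess("帮 我 查询 案件号 2020xxxx")` drops the stop word "我" and keeps the content tokens, which are then formatted as Rasa training examples.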

4.2 Handling User Input

Rasa NLU classifies intents and extracts entities. For example, the user query "Help me find case number 2020xxxx" is tokenized as "帮/我/查询/案件号/2020xxxx/案件". The NLU output is:

{
    "intent": {"name": "rec_query", "confidence": 0.93433877362682923},
    "intent_ranking": [
        {"confidence": 0.93433877362682923, "name": "rec_query"},
        {"confidence": 0.08161531595656784, "name": "hello"}
        // other results ...
    ]
}
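A consumer of this NLU output typically accepts the top-ranked intent only when its confidence clears a threshold, falling back otherwise. A minimal sketch, in which the threshold value and the fallback intent name are illustrative choices:

```python
def select_intent(nlu_result, threshold=0.6, fallback="fallback"):
    """Pick the highest-confidence intent from an NLU result dict,
    or return a fallback intent if nothing clears the threshold."""
    ranking = nlu_result.get("intent_ranking", [])
    if not ranking:
        return fallback
    top = max(ranking, key=lambda r: r["confidence"])
    return top["name"] if top["confidence"] >= threshold else fallback

result = {
    "intent": {"name": "rec_query", "confidence": 0.934},
    "intent_ranking": [
        {"confidence": 0.934, "name": "rec_query"},
        {"confidence": 0.082, "name": "hello"},
    ],
}
print(select_intent(result))  # prints "rec_query"
```

With the output shown above, `rec_query` wins at 0.93 confidence; a low-confidence result would instead route to the fallback, keeping the assistant from acting on an uncertain guess.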

After intent detection, the entity extractor retrieves the case number, which is then used by Rasa Core to update the dialogue state and select an appropriate action.

4.3 Action Definition (Code Example)

from typing import Any, Dict, List, Text

from rasa_sdk import Tracker
from rasa_sdk.executor import CollectingDispatcher
from rasa_sdk.forms import FormAction

class RecAction(FormAction):
    # Name of the form action, referenced from the domain file
    def name(self) -> Text:
        return 'rec_number_form'

    # Slots the form must fill before submit() runs
    @staticmethod
    def required_slots(tracker: Tracker) -> List[Text]:
        return ['rec_number']

    # Called once all required slots are filled
    def submit(self, dispatcher: CollectingDispatcher, tracker: Tracker, domain: Dict[Text, Any]) -> List[Dict]:
        rec_number = tracker.get_slot('rec_number')
        result = fetch_rec_info(rec_number)  # business-side case lookup
        # "The case information you requested is {}"
        dispatcher.utter_message(text="你要查询的案件信息为{}".format(result))
        return []

The dialogue policy then triggers this form action, prompting the user for the required slot and finally returning the case information.
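For the policy to trigger the form, it must also be registered in the domain configuration. The fragment below is a hedged sketch of that wiring for a Rasa 1.x-style form action; the slot type and the prompt text are illustrative:

```yaml
# domain.yml (illustrative fragment)
forms:
  - rec_number_form

slots:
  rec_number:
    type: text

responses:
  utter_ask_rec_number:
    - text: "请输入要查询的案件号"  # "Please enter the case number to query"
```

The `utter_ask_rec_number` response is what the form uses to prompt for the missing `rec_number` slot before `submit()` runs.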

4.4 Demonstration Scenario

A concrete demo shows the assistant handling a case‑query request end to end. (Screenshots of the interaction flow are omitted here.)

5. Summary

The Zhiyun AI Assistant, built on Rasa, has already been deployed for case queries, case registration, configuration changes, and weather inquiries. Future plans include adding voice and image recognition capabilities and leveraging the company's extensive knowledge base to create more specialized, intelligent interaction solutions.

Tags: Natural Language Processing, AI Assistant, City Governance, Dialogue Management, Rasa
Written by Zhengtong Technical Team

How do 700+ nationwide projects deliver quality service? What inspiring stories lie behind dozens of product lines? Where is the efficient solution for tens of thousands of customer needs each year? This is Zhengtong Digital's technical practice sharing—a bridge connecting engineers and customers!