Building a Scalable AI Agent Smart Task Framework for Offline & Event‑Driven Use

As LLM applications mature, developers have realized that agents must go beyond passive Q&A to support asynchronous, long-running, and subscribable tasks. This article details the design, architecture, and engineering challenges of the "Xiao Gao Teacher AI Agent" smart-task system, from event-driven logic to fault-tolerant deployment.

Amap Tech

Background

LLM‑driven agents need to go beyond a passive question‑answer mode and support asynchronous, long‑running, subscribable tasks. Major vendors (OpenAI, xAI, Manus, Baidu) have converged on a model that converts user prompts into scheduled or event‑driven workflows that run offline and push results (e.g., summaries, reminders) to users.

Task Classification

Periodic subscription: high-frequency notifications that improve user stickiness (e.g., daily weather).

Monitoring subscription: event-driven alerts that reach users at decision moments (e.g., oil-price drop).
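The two task classes can be modeled with a small data structure. This is a minimal sketch under the article's classification; the type names and fields are illustrative, not the production schema.

```python
from dataclasses import dataclass
from enum import Enum

class TaskKind(Enum):
    PERIODIC = "periodic"      # cron-style schedule, e.g. daily weather at 8 am
    MONITORING = "monitoring"  # event-driven, e.g. oil-price drop alert

@dataclass
class SubscriptionTask:
    task_id: str
    kind: TaskKind
    trigger: str   # cron expression for PERIODIC, event name for MONITORING
    payload: dict

# Examples of each class:
weather = SubscriptionTask("t-1", TaskKind.PERIODIC, "0 8 * * *", {"city": "Beijing"})
oil_alert = SubscriptionTask("t-2", TaskKind.MONITORING, "oil_price_drop", {"threshold": 7.5})
```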

Engineering Challenges

Full-link closed loop: intent creation → lifecycle management → asynchronous execution → result notification, with breakpoint-retry for long tasks.

High-concurrency stability: sustain millions of simultaneous subscriptions without throttling.

System extensibility: add new task types, tools, and data sources rapidly.

Four‑Layer Architecture

Interaction Layer: entry point where the main Agent uses Chain-of-Thought (CoT) reasoning to parse natural-language intents and generate task parameters.

Management Layer: TaskManager provides CRUD, scheduling policies (cron or event), and persistent state, decoupling definition from execution.

Execution Layer: Task Agent consumes messages, calls external tools (weather, oil price) via the MCP gateway, and produces notification payloads.

Infrastructure Layer: message queues (Kafka/RocketMQ) for traffic smoothing, Redis for state caching, and observability components for end-to-end monitoring.
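The Management Layer's core idea, persisting task definitions separately from execution, can be sketched in a few lines. This is a hypothetical illustration using SQLite as a stand-in for the production store; the `TaskManager` class and its methods are assumptions, not the article's actual API.

```python
import json
import sqlite3

class TaskManager:
    """Persists task definitions and state, decoupled from execution (sketch)."""
    def __init__(self, db=":memory:"):
        self.conn = sqlite3.connect(db)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS tasks "
            "(id TEXT PRIMARY KEY, schedule TEXT, state TEXT, spec TEXT)"
        )

    def create(self, task_id, schedule, spec):
        # New tasks start as drafts; activation is a separate step
        self.conn.execute("INSERT INTO tasks VALUES (?,?,?,?)",
                          (task_id, schedule, "draft", json.dumps(spec)))

    def activate(self, task_id):
        self.conn.execute("UPDATE tasks SET state='active' WHERE id=?", (task_id,))

    def get(self, task_id):
        row = self.conn.execute(
            "SELECT id, schedule, state, spec FROM tasks WHERE id=?", (task_id,)
        ).fetchone()
        return {"id": row[0], "schedule": row[1], "state": row[2],
                "spec": json.loads(row[3])}

tm = TaskManager()
tm.create("weather-8am", "0 8 * * *", {"tool": "weather", "city": "Beijing"})
tm.activate("weather-8am")
```

Because the definition lives in durable storage rather than in the executor's memory, the Execution Layer can be scaled or restarted without losing subscriptions.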

"Twin" Deployment

Separate online and offline concerns by deploying two agents:

Main Agent (online twin): runs in a low-latency front-end cluster, handling real-time user queries.

Task Agent (offline twin): runs in an isolated compute cluster, scales horizontally, and processes queued tasks without blocking the front-end.

Core Workflow

1. User sends a request; the Main Agent parses intent with CoT, extracts slots, and validates parameters.

2. TaskManager persists the draft, shows a confirmation card, and upon activation creates a task instance.

3. The instance is pushed to the execution queue.

4. The Task Agent consumes the message, invokes the MCP gateway for real-time data, generates the final content, and calls the push service.

5. The result is written back to TaskManager, completing the closed loop.
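The hand-off between the two agents through the queue can be sketched as follows. All function bodies here are stand-ins (the real intent parsing is an LLM call and the real data fetch goes through the MCP gateway); the names are illustrative.

```python
import queue

task_queue = queue.Queue()  # stand-in for Kafka/RocketMQ

def main_agent_parse(user_text):
    # Stand-in for CoT intent parsing: extract slots from the request
    return {"intent": "subscribe_weather", "city": "Beijing", "time": "08:00"}

def task_manager_activate(draft):
    # Persist the draft and create an executable instance
    return {**draft, "instance_id": "inst-1", "state": "active"}

def task_agent_consume():
    instance = task_queue.get()
    data = {"temp_c": 21}  # stand-in for an MCP gateway call
    return {"instance": instance,
            "notification": f"{instance['city']}: {data['temp_c']} C"}

draft = main_agent_parse("remind me of Beijing weather every morning")
task_queue.put(task_manager_activate(draft))   # Main Agent's work ends here
result = task_agent_consume()                  # Task Agent picks up asynchronously
```

The Main Agent returns to the user as soon as the instance is enqueued; everything after the `put` happens off the request path.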

Trigger Layer

Dynamic priority sorting: high-value tasks (e.g., oil-price alerts) receive higher execution priority.

Parameter injection: at consumption time, templates are filled with personalized data and source identifiers.

Execution state recording: start time, retry count, and metadata are logged for fault analysis.

Asynchronous Message Queue

Decouples task generation from execution, allowing traffic spikes (e.g., millions of 8 am weather pushes) to be buffered and processed safely.
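The traffic-smoothing effect can be shown with a bounded buffer drained in fixed-size batches: a burst is absorbed immediately, while downstream consumers work at a steady rate. This is a toy model of what Kafka/RocketMQ provide; the class and its parameters are illustrative.

```python
from collections import deque

class SmoothingBuffer:
    """Bounded buffer: absorb a burst, drain a fixed batch per consumer tick."""
    def __init__(self, max_size, batch_size):
        self.buf = deque()
        self.max_size = max_size
        self.batch_size = batch_size

    def publish(self, msg):
        if len(self.buf) >= self.max_size:
            return False  # back-pressure: reject or route to a dead-letter queue
        self.buf.append(msg)
        return True

    def drain_tick(self):
        # Consumer processes at most batch_size messages per tick
        n = min(self.batch_size, len(self.buf))
        return [self.buf.popleft() for _ in range(n)]

mq = SmoothingBuffer(max_size=1000, batch_size=100)
for i in range(500):           # burst: e.g. 8 am weather pushes arriving at once
    mq.publish({"task": i})
first_batch = mq.drain_tick()  # downstream sees a steady 100 per tick
```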

Multi‑Level Retry Strategy (Exponential Back‑off)

Tiered response:

Transient errors: up to 3 immediate retries.

Temporary errors: exponential back-off (10 s, 20 s, 40 s).

Permanent errors: mark as failed, no retry.

Idempotency: combine task ID with execution instance ID as a unique key to avoid duplicate pushes.
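The tiered retry policy and the idempotency key can be sketched together. This is a simplified model, assuming the three error classes are distinguishable as exception types and using an in-memory set where production would use Redis; the names are illustrative.

```python
import time

class TransientError(Exception): pass   # retry immediately
class TemporaryError(Exception): pass   # retry with back-off
class PermanentError(Exception): pass   # do not retry

_pushed = set()  # idempotency keys already delivered (Redis in production)

def push_once(task_id, instance_id, send):
    key = f"{task_id}:{instance_id}"    # task ID + execution instance ID
    if key in _pushed:
        return "duplicate-skipped"
    send()
    _pushed.add(key)
    return "sent"

def execute_with_retry(fn, sleep=time.sleep, max_attempts=3):
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            continue                     # immediate retry
        except TemporaryError:
            sleep(10 * 2 ** attempt)     # 10 s, 20 s, 40 s back-off
        except PermanentError:
            return "failed"              # mark failed, no retry
    return "failed"

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError()
    return "ok"

status = execute_with_retry(flaky)
```

The idempotency check runs before the push, so a retried execution that already delivered its notification becomes a no-op instead of a duplicate.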

Cache Hierarchy

Local + Redis: ultra-fast local cache for hot data, Redis for cross-instance state sharing.

Fine-grained TTL: cache lifetimes align with task periods (e.g., 24 h for weather).

Cache pre-heat: predict high-peak tasks and warm up data, reducing first-run latency to < 50 ms.
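A two-level lookup with per-key TTL can be sketched as below. A plain dict stands in for Redis; the class and method names are illustrative, not the production cache API.

```python
import time

class TwoLevelCache:
    """Local dict in front of a shared store (Redis stand-in), per-key TTL."""
    def __init__(self, shared, clock=time.time):
        self.local = {}       # key -> (value, expires_at); per-process hot data
        self.shared = shared  # in production: a Redis client shared by instances
        self.clock = clock

    def get(self, key):
        hit = self.local.get(key)
        if hit and hit[1] > self.clock():
            return hit[0]                 # L1 hit: no network round-trip
        hit = self.shared.get(key)
        if hit and hit[1] > self.clock():
            self.local[key] = hit         # promote to L1 for later reads
            return hit[0]
        return None

    def set(self, key, value, ttl):
        entry = (value, self.clock() + ttl)
        self.local[key] = entry
        self.shared[key] = entry

shared_store = {}  # stand-in for Redis
cache = TwoLevelCache(shared_store)
# TTL aligned with the task period: daily weather keeps a 24 h lifetime
cache.set("weather:beijing", {"temp_c": 21}, ttl=24 * 3600)
```

Pre-heating is then just calling `set` ahead of the predicted peak, so the first 8 am execution hits L1 instead of the upstream tool.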

MCP Protocol (Tool Plug‑in)

Unified interface & dynamic registration: external tools are abstracted as standard MCP endpoints, enabling hot-plug without service restart.

Cross-platform adaptation: same semantic context works for internal APIs or third-party services.
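The hot-plug idea, tools registered and removed behind one uniform call interface at runtime, can be sketched with a simple registry. This is not the actual MCP protocol or SDK, just an illustration of the dynamic-registration pattern; all names are assumptions.

```python
class ToolRegistry:
    """Uniform call interface with runtime (hot-plug) registration."""
    def __init__(self):
        self.tools = {}

    def register(self, name, handler):
        self.tools[name] = handler       # can happen while the service is running

    def unregister(self, name):
        self.tools.pop(name, None)       # remove without a restart

    def call(self, name, **params):
        if name not in self.tools:
            raise KeyError(f"tool not registered: {name}")
        return self.tools[name](**params)

registry = ToolRegistry()
registry.register("weather", lambda city: {"city": city, "temp_c": 21})
result = registry.call("weather", city="Beijing")

# A new data source is hot-plugged later, no redeploy needed:
registry.register("oil_price", lambda region: {"region": region, "price": 7.4})
```

The Task Agent only ever sees `registry.call(...)`, so whether a tool is an internal API or a third-party service is invisible to task execution logic.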

Circuit Breaker & Rate Limiting

Smart rate limiting: per-second task quota adapts to current load, prioritizing high-value tasks.

Circuit breaking & degradation: when a tool's failure rate exceeds a threshold, the system short-circuits and returns cached or default data.
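The break-and-degrade behavior can be sketched with a counter-based breaker. This is a minimal version (consecutive failures rather than a failure-rate window, and no half-open recovery state); the class and threshold are illustrative.

```python
class CircuitBreaker:
    """Open after `threshold` consecutive failures; serve a fallback while open."""
    def __init__(self, threshold, fallback):
        self.threshold = threshold
        self.fallback = fallback   # degraded response: cached or default data
        self.failures = 0

    @property
    def is_open(self):
        return self.failures >= self.threshold

    def call(self, fn, *args):
        if self.is_open:
            return self.fallback   # short-circuit: don't touch the failing tool
        try:
            result = fn(*args)
            self.failures = 0      # success resets the counter
            return result
        except Exception:
            self.failures += 1
            return self.fallback

breaker = CircuitBreaker(threshold=3, fallback={"temp_c": "n/a"})

def broken_tool():
    raise TimeoutError("upstream down")

for _ in range(3):
    breaker.call(broken_tool)      # three failures trip the breaker
```

Once open, calls return the fallback without invoking the tool at all, which protects both the tool and the Task Agent's throughput during an outage.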

Observability

Metrics, logs, and traces form a three‑pillar stack that provides real‑time alerts and post‑mortem analysis.

Future Directions

AI‑driven task planning : support conditional branches, loops, and automatic priority adjustment based on user feedback.

Real‑time intelligent operations : lower event‑detection latency, AI‑based anomaly detection, and auto‑scaling for zero‑human‑intervention ops.

Conclusion

The architecture blends LLM reasoning flexibility with engineering guarantees, delivering a general‑purpose AI‑Agent task framework that scales to millions of concurrent users while providing a proactive personal‑assistant experience.

Figures referenced in the original article (not reproduced here): task classification diagram, overall workflow diagram, intent parsing flowchart, message queue architecture, monitoring dashboard.
Written by Amap Tech, the official Amap technology account showcasing Amap's technical innovations.