How Multi‑Agent AI Powers Zero‑Barrier Big Data Analysis in Tomoro’s Lumos

This article explores Tomoro's Lumos data‑intelligent agent, detailing its multi‑agent architecture, technical design, practical implementations, and performance optimizations that enable seamless, zero‑threshold big‑data self‑service analysis powered by AI.

Tencent Technical Engineering
AI integration with data analysis has long been a focus, and Tomoro combines a big‑data engine, spreadsheet UI, and AI into a unified tool. Lumos, the data‑intelligent agent in Tomoro, uses a multi‑agent architecture to tackle complex professional scenarios, with various technical optimizations to enhance user experience.

0. Content Summary (Condensed Long Article)

0.1 Product and Technical Vision

1) Product Positioning: Big data + spreadsheet + AI, working together to complete comprehensive frontline data analysis tasks.

2) Product Thinking: Combine familiar spreadsheet interaction with an AI‑driven analysis workbench to achieve zero‑threshold big‑data self‑service analysis.

3) Product Capabilities: Simple spreadsheet operations + unlimited data ingestion + AI‑driven workflow from analysis to reporting.

4) Technical Approach: Consider the technical elements across three dimensions: the analysis environment, the Data Agent, and analysis scenarios; given current model limitations, human verification of Agent results remains essential.

5) Practical‑Driven Technical Design: Prioritize AI‑enhanced scenarios, design the Agent roles based on those scenarios, and drive environment product development to meet AI application needs throughout the workflow.

0.2 Lumos Implementation Practice

1) Why choose a multi‑agent design? Complex analysis tasks often require multiple steps; a multi‑agent system offers flexibility and professionalism without overloading a single Agent.

2) Solving consistency among agents: Implement shared work memory so agents can observe each other's goals, plans, results, and states, with sequential task execution to ensure downstream tasks utilize upstream results.

3) Achieving extremely fast query & compute tool responses: Use a layered computation framework and continuous scenario‑specific optimization to accelerate queries and calculations.

4) AI Coding vs. tool invocation: Prefer tool calls; fall back to AI Coding when tools have limitations, with monitoring to drive tool iteration.

5) Co‑building expert agents via MCP: Integrate business knowledge, tools, and algorithms into the system to improve adaptability.

6) Enhancing user question effectiveness: Provide table‑based question recommendation and multi‑turn recommendation to help users ask effective questions, using clarification to avoid vague queries.

7) Measuring Agent analysis capability: Build continuous product and Agent capability evaluation mechanisms, using benchmarks across data domains to guide improvements.

8) Engineering structure optimization for Data Agent stability: Apply rational application‑layer division, public‑layer integration, and model‑layer design principles for orderly and focused development.

0.3 Future Plans

Focus on optimizing Lumos Agent performance, advancing business knowledge and tool integration, and deepening scenario validation to continuously enhance capabilities.

1. Tomoro Product and Technical Vision

1.1 Tomoro Product Positioning – AI‑Driven Spreadsheet Big‑Data Analysis Tool

Tomoro aims to combine AI with big‑data to enable frontline users to solve data analysis problems across all domains, achieving true data democratization and efficiency.

1) Core BI analysis engine capabilities:

DataTalk’s query pipeline, delivering second‑level responses on billion‑row data

DataTalk’s charting and business analysis functions

DataTalk’s business data models and complex analysis paths

2) UI redesign to familiar spreadsheet form

Low learning curve, visual data exploration

Pre‑mounted data assets and rich data source connections

Extensible functions and plugins for billion‑row, second‑level response

3) AI Analyst – perception and accompaniment of full analysis flow

AI decomposes complex tasks, plans analysis paths, provides insights

AI handles SQL + Python coding for complex problems

AI absorbs business‑specific knowledge for better context understanding

1.2 Why Tomoro – Goal: Zero‑Threshold Big‑Data Self‑Service Analysis

Non‑technical users face two main barriers: BI data comprehension difficulty and high tool usage thresholds. Tomoro addresses these by integrating big‑data, spreadsheets, and AI to lower barriers and improve analysis efficiency.

1.3 Tomoro Design Framework

1.3.1 Simple, familiar data ingestion and basic analysis

Support three data source categories: traditional data warehouses (MySQL, StarRocks, ClickHouse, Hive), platform data (DataTalk reports, Tencent Docs, questionnaires), and multimodal data (screenshots, PDFs).

Provide Excel‑like analysis with features such as enumerated value filtering, group aggregation, AI‑generated functions, and vlookup‑style multi‑table association.
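The vlookup‑style multi‑table association maps naturally onto a relational left join. A minimal pandas sketch, with illustrative table and column names (not from Tomoro):

```python
import pandas as pd

# Fact table: one row per order (illustrative data).
orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "sku": ["A", "B", "A"],
    "amount": [120.0, 80.0, 95.0],
})

# Dimension table: SKU attributes, analogous to the VLOOKUP range.
products = pd.DataFrame({
    "sku": ["A", "B"],
    "category": ["electronics", "apparel"],
})

# A left join keeps every order row, like VLOOKUP with exact match.
enriched = orders.merge(products, on="sku", how="left")
```

Group aggregation and enumerated‑value filtering reduce to `groupby` and boolean masks on the same frame, which is why a spreadsheet UI can front a relational engine.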

1.3.2 Professional analysis & visualization generation

Include cross‑pivot tables, advanced visualizations comparable to Tableau, and report/dashboard generation, all with AI assistance.

1.3.3 AI‑driven end‑to‑end simplification

AI learns user habits, recommends next steps, and records workflows for automatic execution, enhancing efficiency and trust.

1.4 Technical Design Framework

1.4.1 Key technical pillars: Environment + Agent + Analysis Scenario

1) Tomoro Environment: Interactive GUI + OpenAPI for Agents, analysis engine, metadata, and security controls.

2) Lumos Data Agent: Planning ability, tool/function call & AI coding, multi‑turn dialogue with context injection, and deliverable generation.

3) User Analysis Scenarios: Cover goal decomposition, preprocessing, exploration, advanced analysis, and result presentation.

1.4.2 Practical‑Driven Technical Solution for Big‑Data & AI Fusion

Design steps include AI scenario analysis & design, Lumos design based on AI needs, and building the analysis environment.

STEP 1: AI Scenario Analysis & Design

Analysis guidance: intelligent menu recommendation, analysis guide, one‑click dashboard generation

Page operation efficiency: dialogue‑generated commands, visual output, report generation

Data preparation: intelligent text extraction, function recommendation

STEP 2: Design Lumos based on AI scenarios

Agent core abilities: table understanding, question recommendation, chart generation, insight extraction, planning, summarization

Contextual data source & memory capabilities

Output abilities: structured stepwise returns, streaming results
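The structured stepwise returns and streaming results above can be sketched as a generator that yields typed events as the agent progresses, so the UI can render partial output live. The event shape here is an illustrative assumption, not Lumos's actual protocol:

```python
from typing import Iterator


def run_analysis(question: str) -> Iterator[dict]:
    """Yield structured events so the UI can render progress live."""
    # First event: the plan, so the UI can show pending steps.
    yield {"type": "plan", "steps": ["query", "chart", "summary"]}
    # Intermediate step results arrive as soon as each step finishes.
    yield {"type": "step_result", "step": "query", "rows": 42}
    yield {"type": "step_result", "step": "chart", "spec": "bar"}
    # The final narrative summary streams token by token.
    for token in f"Summary for: {question}".split():
        yield {"type": "summary_token", "token": token}
    yield {"type": "done"}


events = list(run_analysis("gmv trend"))
```

A real agent would yield these events from an LLM call loop; the point is that every return is typed and incremental rather than one opaque final blob.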

STEP 3: Build the analysis environment

Provide an interactive environment that is simple for humans and agents, with compute engine, toolset, metadata, and security.

2. Lumos Practical Implementation in Tomoro

2.1 Role analysis in user analysis scenarios

Example: an e‑commerce operations analyst with deep business understanding but limited SQL skills, who needs to quickly identify issues and formulate strategies.

The system provides data from a DataTalk (DT) report: 350 million rows across 18 fields.

Lumos agents include a Master Agent for alignment and planning, and Executor Agents for specialized analysis tasks.

2.2 Why Lumos is multi‑agent

Complex problems decompose into multiple tasks; a multi‑agent design distributes workload and maintains professionalism.
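The Master/Executor split described in 2.1 and 2.2 can be sketched as a planner that decomposes the user goal and dispatches subtasks to specialist agents. Class and method names here are hypothetical, not Lumos internals:

```python
from dataclasses import dataclass


@dataclass
class Task:
    goal: str
    result: str = ""


class ExecutorAgent:
    """Specialist that handles one task type, e.g. querying or charting."""
    def __init__(self, skill: str):
        self.skill = skill

    def run(self, task: Task) -> Task:
        # A real executor would call an LLM plus tools; stubbed here.
        task.result = f"[{self.skill}] done: {task.goal}"
        return task


class MasterAgent:
    """Aligns on the user goal, plans subtasks, dispatches to executors."""
    def __init__(self, executors: dict):
        self.executors = executors

    def plan(self, user_goal: str) -> list:
        # Trivial planner: one query task, then one chart task.
        return [("query", Task(f"query data for: {user_goal}")),
                ("chart", Task(f"visualize: {user_goal}"))]

    def solve(self, user_goal: str) -> list:
        return [self.executors[skill].run(task)
                for skill, task in self.plan(user_goal)]


master = MasterAgent({"query": ExecutorAgent("query"),
                      "chart": ExecutorAgent("chart")})
results = master.solve("weekly GMV drop")
```

The design point is that each executor stays narrow and professional while the master owns decomposition, so no single agent carries the whole prompt and tool load.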

2.3 Solving consistency among multiple agents

Introduce shared work memory and sequential execution to ensure downstream agents leverage upstream results, acknowledging trade‑offs in parallel efficiency.
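The shared work memory can be sketched as a blackboard that every agent reads before acting and writes after finishing; running the plan sequentially then guarantees downstream agents see upstream results. Names and structure are illustrative assumptions:

```python
from dataclasses import dataclass, field


@dataclass
class WorkMemory:
    """Blackboard shared by all agents: goal, plan, results, states."""
    goal: str
    plan: list = field(default_factory=list)
    results: dict = field(default_factory=dict)
    states: dict = field(default_factory=dict)


def run_sequentially(memory: WorkMemory, agents: dict) -> WorkMemory:
    # Agents execute in plan order, so each can read upstream results.
    for step in memory.plan:
        memory.states[step] = "running"
        memory.results[step] = agents[step](memory)
        memory.states[step] = "done"
    return memory


mem = WorkMemory(goal="find cause of GMV drop", plan=["query", "attribute"])
agents = {
    "query": lambda m: "GMV fell 12% in region X",
    # The downstream agent reads the upstream result from shared memory.
    "attribute": lambda m: f"attribution based on: {m.results['query']}",
}
run_sequentially(mem, agents)
```

This is the trade‑off the section acknowledges: serializing on the shared memory gives consistency at the cost of the parallelism a fully independent agent pool would allow.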

2.4 Achieving extremely fast query & compute response

Implement a hierarchical computation framework with pre‑computation, in‑memory databases, hosted high‑performance engines, and Python acceleration (Pandas, Modin, Ray) to speed up billion‑row queries.
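One way to sketch the hierarchical dispatch: route data that fits comfortably in memory to plain pandas, and hand larger workloads to a distributed backend such as Modin over Ray. The row threshold and fallback wiring are illustrative assumptions, not Tomoro's actual routing:

```python
import pandas as pd

IN_MEMORY_ROW_LIMIT = 10_000_000  # illustrative cutoff, not a Tomoro value


def run_aggregation(df: pd.DataFrame, by: str, col: str):
    """Group-aggregate on the cheapest engine that fits the data."""
    if len(df) <= IN_MEMORY_ROW_LIMIT:
        # Small enough: plain in-memory pandas.
        return df.groupby(by, as_index=False)[col].sum()
    try:
        # Larger data: Modin exposes the same pandas API but
        # parallelizes it over Ray workers (pip install "modin[ray]").
        import modin.pandas as mpd
        return mpd.DataFrame(df).groupby(by, as_index=False)[col].sum()
    except ImportError:
        # No distributed backend available: degrade gracefully.
        return df.groupby(by, as_index=False)[col].sum()


small = pd.DataFrame({"region": ["n", "s", "n"], "gmv": [1.0, 2.0, 3.0]})
out = run_aggregation(small, "region", "gmv")
```

Pre‑computation and hosted high‑performance engines would sit as further layers above this dispatch, answering repeated queries before any per‑request computation runs.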

2.5 AI Coding vs. tool invocation selection

Prefer tool calls for stability; use AI Coding to fill tool limitations, creating a hybrid approach that balances effectiveness and flexibility.
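The tool‑first policy can be sketched as a dispatcher that tries a registered tool and falls back to generated code only when no tool covers the request, logging each miss to drive tool iteration. The registry and the canned "generated" snippet are hypothetical stand‑ins for real tools and LLM output:

```python
TOOLS = {
    # Stable, pre-built tools are preferred for reliability.
    "sum": lambda values: sum(values),
    "mean": lambda values: sum(values) / len(values),
}
tool_misses = []  # monitored to prioritize which tools to build next


def answer(op: str, values: list) -> float:
    if op in TOOLS:
        return TOOLS[op](values)
    # No tool covers this op: fall back to AI Coding. Here the
    # "generated" snippet is canned; a real agent would ask an LLM.
    tool_misses.append(op)
    generated = {"median": "sorted(values)[len(values) // 2]"}[op]
    return eval(generated,
                {"__builtins__": {}},
                {"sorted": sorted, "len": len, "values": values})


mean_result = answer("mean", [1.0, 2.0, 3.0])
```

A production system would sandbox the generated code far more strictly than this restricted `eval`; the sketch only shows the routing and the miss log.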

2.6 Co‑building expert agents via MCP for complex business analysis

Integrate business‑specific workflows and knowledge into expert agents to handle tasks like metric deviation attribution and experiment analysis.

2.7 Enhancing user question effectiveness

Provide table‑based question recommendation, multi‑turn suggestions, and clarification capabilities to avoid vague queries.
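A minimal sketch of the clarification gate: before planning, check whether the question pins down a metric and a time range against the table's known fields, and ask a follow‑up if not. The heuristics and field names are illustrative, not Lumos's actual logic:

```python
import re

TABLE_METRICS = {"gmv", "orders", "refund_rate"}  # illustrative schema


def clarify_or_accept(question: str) -> str:
    """Return 'ok', or a clarification prompt for a vague question."""
    q = question.lower()
    has_metric = any(m in q for m in TABLE_METRICS)
    has_time = bool(
        re.search(r"\b(today|yesterday|week|month|q[1-4]|\d{4})\b", q))
    if not has_metric:
        return "Which metric do you mean: gmv, orders, or refund_rate?"
    if not has_time:
        return "For which time range?"
    return "ok"


verdict = clarify_or_accept("gmv trend last month")  # → "ok"
```

In practice an LLM, not a regex, would judge vagueness, but the contract is the same: block planning on under‑specified questions and return a targeted follow‑up instead.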

2.8 Effective Agent capability evaluation

Establish dual evaluation systems for product and Agent capabilities, continuously expanding benchmarks across data domains to guide improvements.

2.9 Engineering safeguards for Data Agent stability

Adopt layered architecture and shared infrastructure to ensure flexible, reliable, and orderly Agent development.

3. Future Plans & Collaboration

Invest in business knowledge, tool integration, and scenario validation to further enhance Lumos, seeking broader collaboration to continuously improve professional analysis agents.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Big Data, AI, Data Analysis, Multi-Agent Systems, Data Agents, Low-Code Analytics
Written by Tencent Technical Engineering

Official account of Tencent Technology. A platform for publishing and analyzing Tencent's technological innovations and cutting-edge developments.