How Alibaba’s New Distributed Agent Framework Solves 2C AI Challenges
Alibaba introduces the ali‑langengine‑dflow framework, a hybrid distributed‑agent architecture that moves core intelligence to the cloud while keeping execution reachable on heterogeneous client devices, addressing data‑isolation, latency and security issues of existing cloud‑VM and local‑agent solutions for 2C internet services.
Introduction
Alibaba has launched a distributed, Manus‑Agent‑style framework called ali‑langengine‑dflow to overcome the limitations of current agent architectures in 2C internet scenarios. The framework combines a distributed server side with a heterogeneous client side, keeping core AI logic in the cloud while allowing execution on the user's device.
Problem Statement
Existing cloud‑VM based agents suffer from data‑isolation caused by virtual machine boundaries, and local agents struggle with response speed and security. Both architectures are essentially single‑machine solutions that cannot meet the performance and privacy requirements of modern business applications.
Proposed Architecture
The ideal design places the core intelligence in the cloud and provides a client‑side execution layer. This hybrid model pairs a distributed service on the server with a heterogeneous 2C client, enabling seamless data flow across platforms without sacrificing security or latency.
Implementation Details
The solution is built on the ali‑langengine ecosystem and the DFlow library, which offers a Java‑based monadic API for asynchronous distributed workflows. DFlow stores each step in a global in‑memory map, and a message adapter can trigger any step on any machine. The framework supports both local and distributed execution, integrates with DashScope LLMs, and provides a set of agents such as DFlowConversationAgentExecutor, DFlowToolCallAgent, DFlowPlanningFlow, and DFlowDeepCrawlerAgent.
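The "global in‑memory map of steps" idea can be illustrated with a plain‑Java analogue. This is a conceptual sketch, not the real DFlow API: the StepRegistry class and its method names are invented here to show how registering each step under a stable id lets a message adapter on any machine look a step up and resume the workflow there.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Conceptual analogue of DFlow's global step map (names invented for
// illustration): each step is registered under a stable id, so a message
// adapter on any machine can look the step up and trigger it with a payload.
class StepRegistry {
    private static final Map<String, Function<String, String>> STEPS =
            new ConcurrentHashMap<>();

    // Register a step under an id, like .id("a1") does in a DFlow chain.
    static void register(String id, Function<String, String> step) {
        STEPS.put(id, step);
    }

    // Simulates a message adapter delivering a payload to a named step.
    static String trigger(String id, String payload) {
        return STEPS.get(id).apply(payload);
    }
}
```

Because the map is keyed by id rather than by object reference, the machine that registered a step and the machine that triggers it need not be the same process, which is the property the distributed runtime relies on.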
Key components include:
DFlowConversationAgentExecutor: rewrites the original SKConversationAgentExecutor to use DFlow.
DFlowToolCallAgent: supports multiple concurrent tool calls and integrates with the underlying LLM.
DFlowPlanningFlow: handles planning, initialization, and finalization of agent workflows.
AgentTool: wraps a DFlow agent as a tool, preserving context isolation.
JedisBasedPullingJobDFlowTool: provides a client‑pull model for asynchronous job execution.
All agents are implemented in Java, allowing developers to use familiar monadic constructs (map, flatMap) while benefiting from distributed execution.
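The client‑pull model behind JedisBasedPullingJobDFlowTool can be sketched with an in‑process queue standing in for Redis. The class and method names below are hypothetical; the real tool uses Jedis and the DFlow callback machinery, but the shape of the exchange is the same: the server enqueues a job, the client pulls it, executes locally, and reports the result back.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of a client-pull job channel (hypothetical names). A BlockingQueue
// stands in for the Redis list the real Jedis-based tool would use: the
// server never calls into the client; the client polls for work instead.
class PullingJobChannel {
    private final BlockingQueue<String> jobs = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> results = new LinkedBlockingQueue<>();

    // Server side: publish a job for some client to pick up.
    void submitJob(String job) { jobs.add(job); }

    // Client side: block until a job is available.
    String pullJob() {
        try { return jobs.take(); }
        catch (InterruptedException e) { throw new IllegalStateException(e); }
    }

    // Client side: push the execution result back.
    void reportResult(String result) { results.add(result); }

    // Server side: block until the client has reported.
    String awaitResult() {
        try { return results.take(); }
        catch (InterruptedException e) { throw new IllegalStateException(e); }
    }
}
```

The pull direction matters for 2C devices: clients behind NATs and firewalls can reach the server, but not the reverse, so polling is the only universally reachable channel.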
Example Code
DFlow.globalInitForTest(); // Local initialization without the DFlow starter
DashScopeLLM llm = new DashScopeLLM();
llm.setModel("qwen-plus-latest");
llm.setToken("sk-x");
ConversationBufferMemory memory = new ConversationBufferMemory();
DFlowToolCallAgent agent = new DFlowToolCallAgent("manus", memory, getBaseTools(), llm);
agent.setSystemPrompt("You are a dressing assistant; help the user with outfit recommendations.");
agent.setNextStepPrompt("You can use GoogleSearch for information, dapai for outfit suggestions, and chuan to actually try the outfit. "
        + "Terminate when the task is complete.");
agent.setFunctionChoose(agent.genQwen25Function(llm));
DFlowPlanningFlow planningFlow = new DFlowPlanningFlow("manus", agent);
planningFlow.setFinalizePlanFunction(planningFlow.genQwen25FinalizeFunction(llm));
planningFlow.setInitPlanFunction(planningFlow.genQwen25InitFunction(llm));
DFlow.directRun(DFlow.InitParam.of("How should I dress for spring?"), y ->
DFlow.just(y)
.flatMap(x -> { System.out.println("====" + x); return planningFlow.execute(x.getParam()); })
.id("a1")
.flatMap(x -> { System.out.println("====" + x); return planningFlow.execute("Casual indoor"); })
.id("a2")
.map(x -> { System.out.println("====" + x); return x; })
.id("a3"));
System.in.read();
Integration with Ali‑langsmith
Beans are initialized with @PostConstruct to set up memory, tools, and agents. The BrowserCrawlerTool wraps a DFlowDeepCrawlerAgent as a tool, enabling web crawling via a client‑pull model. Agents interact with the ali‑langsmith SDK for prompt management, function selection, and result reporting.
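The @PostConstruct wiring pattern can be sketched as follows. The field types here are placeholder strings and lists invented for illustration; the real beans wire ConversationBufferMemory, BrowserCrawlerTool, and DFlowToolCallAgent instances. In a Spring container the init() method would carry the @PostConstruct annotation and run automatically after dependency injection; here it is called explicitly.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the bean-initialization pattern (placeholder types): assemble
// memory, tools, and the agent in one lifecycle callback so every dependent
// bean sees a fully wired agent. In Spring, init() would be @PostConstruct.
class AgentConfig {
    private String memory;
    private List<String> tools;
    private String agent;

    void init() {
        memory = "conversation-buffer";
        tools = new ArrayList<>(List.of("BrowserCrawlerTool", "GoogleSearch"));
        agent = "DFlowToolCallAgent(memory=" + memory + ", tools=" + tools.size() + ")";
    }

    String getAgent() { return agent; }
}
```

Doing the assembly in a single lifecycle callback, rather than in scattered constructors, keeps the ordering explicit: memory and tools exist before the agent that references them.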
Future Directions
Planned improvements include more robust handling of distributed updates, better support for closures across machines, and a richer starter package. The framework will continue to evolve to support flexible edge‑cloud collaboration, advanced tool orchestration, and tighter integration with emerging LLM capabilities.
Conclusion
The ali‑langengine‑dflow framework demonstrates how a hybrid distributed architecture can provide deterministic, low‑latency AI agent execution while preserving the flexibility needed for rapid product iteration. By combining Java monadic workflows with cloud‑native AI services, it offers a scalable foundation for future AI‑driven applications.