How a Constraint-Aware Multi-Agent System Won the IJCAI Travel Planning Challenge

Leveraging a proprietary “large model + optimization” approach, Alibaba’s Ant Group and East China Normal University built a constraint-aware multi-agent framework that secured first place in the Original OS track and second in the DSL track of the IJCAI-2025 Autonomous Travel Planning Competition.

AntTech

Introduction

Ant Group’s AI technology team, working with Professors Qian Hong and Li Bingdong of East China Normal University, competed in the Autonomous Travel Planning Challenge at the 34th International Joint Conference on Artificial Intelligence (IJCAI‑2025), winning first place in the Original OS Track and second place in the DSL Track with a proprietary “large model + optimization” solution.

Background

The IJCAI‑2025 challenge aims to advance large‑model‑driven agents that can generate feasible, personalized itineraries from user requests such as “I want to travel from Shanghai to Beijing for three days, visit the Forbidden City, with a budget of 5,000 CNY.” Existing large models often produce plans with temporal or spatial conflicts, outdated information, or shallow personalization because they treat travel planning as pure text generation rather than a multi‑constraint dynamic resource scheduling problem.

Solution

For the DSL track, we built a constraint‑aware multi‑agent framework for travel planning that incorporates a retrieval‑augmented fine‑tuned LLM to extract user constraints and intents more accurately. For the OS track, we extended this framework with a powerful DSL generation capability that automatically produces verifiable domain language for downstream validation. The framework consists of three core modules: Environment, Thinking, and Action.
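The three-module flow can be sketched as a simple pipeline: the Environment module turns the raw request into verifiable constraints, the Thinking module infers preferences consistent with them, and the Action module produces a plan that must pass every constraint. This is a minimal illustrative sketch; the class and function names here are assumptions, not the team’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class TravelRequest:
    text: str

@dataclass
class PlanContext:
    constraints: list = field(default_factory=list)  # verifiable predicates
    preferences: dict = field(default_factory=dict)  # inferred soft preferences

def environment(req: TravelRequest, ctx: PlanContext) -> PlanContext:
    # Stand-in for the fine-tuned, retrieval-augmented LLM: parse the
    # request into a checkable constraint (here, a hard budget cap).
    if "budget" in req.text:
        ctx.constraints.append(lambda plan: plan["cost"] <= 5000)
    return ctx

def thinking(ctx: PlanContext) -> PlanContext:
    # Infer an implicit preference consistent with the explicit constraints.
    ctx.preferences["minimize_travel_time"] = True
    return ctx

def action(ctx: PlanContext) -> dict:
    # Specialized agents would fill in transport, POIs, dining, hotels;
    # here a toy plan is simply validated against every constraint.
    plan = {"cost": 4800, "days": 3}
    assert all(check(plan) for check in ctx.constraints)
    return plan

req = TravelRequest("3 days in Beijing, budget 5000 CNY")
plan = action(thinking(environment(req, PlanContext())))
```

The key design point the sketch preserves is that constraints flow forward as executable checks rather than free text, so the final plan can be mechanically validated.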

Environment Module

The environment module serves as the interface between user requests and the large language model. By applying large‑scale supervised fine‑tuning (SFT) and retrieval‑enhancement, it parses ambiguous expressions and dialect variations, aligning them with task requirements and generating verifiable constraints expressed in a Python‑based domain language for later validation.
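To make “verifiable constraints expressed in a Python-based domain language” concrete, here is one plausible shape such a DSL could take: each constraint is a small predicate over a candidate itinerary, and validation is just evaluating all of them. The predicate names and plan schema below are hypothetical.

```python
# Hypothetical constraint combinators for a Python-based travel DSL.
def budget_at_most(limit):
    return lambda plan: sum(item["cost"] for item in plan) <= limit

def must_visit(poi):
    return lambda plan: any(item["poi"] == poi for item in plan)

def duration_days(n):
    return lambda plan: len({item["day"] for item in plan}) == n

# "Three days in Beijing, visit the Forbidden City, budget 5,000 CNY"
constraints = [budget_at_most(5000), must_visit("Forbidden City"), duration_days(3)]

plan = [
    {"day": 1, "poi": "Forbidden City", "cost": 60},
    {"day": 2, "poi": "Summer Palace", "cost": 30},
    {"day": 3, "poi": "Temple of Heaven", "cost": 35},
]

# Validation is mechanical: collect every constraint the plan violates.
violations = [check for check in constraints if not check(plan)]
```

Because the constraints are executable, downstream modules can report exactly which requirement a candidate itinerary breaks instead of re-prompting an LLM to judge it.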

Thinking Module

The thinking module uses the fine‑tuned LLM to analyze the input, systematically infer task‑specific constraints, and capture implicit needs and personalized preferences, ensuring they remain consistent with the problem context.

Action Module

The action module coordinates multiple specialized agents: inter‑city transportation, attraction recommendation, route planning, dining and hotel recommendation, itinerary integration, and a master controller for correction and reflection. Each agent incorporates the inferred constraints to produce a coherent final itinerary.
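The correction-and-reflection role of the master controller can be illustrated with a toy loop: agents propose components of the plan, the controller checks the budget constraint, and on failure it applies a correction and re-validates. The agent functions, costs, and correction rule here are invented for illustration only.

```python
# Illustrative controller loop; not the competition implementation.
def transport_agent(plan):
    plan["transport"] = {"mode": "train", "cost": 550}
    return plan

def hotel_agent(plan):
    plan["hotel"] = {"name": "example-hotel", "cost_per_night": 400}
    return plan

def total_cost(plan):
    return plan["transport"]["cost"] + plan["hotel"]["cost_per_night"] * plan["nights"]

def controller(budget, nights, max_rounds=3):
    plan = {"nights": nights}
    for agent in (transport_agent, hotel_agent):   # agents propose components
        plan = agent(plan)
    for _ in range(max_rounds):
        if total_cost(plan) <= budget:             # validation step
            return plan
        # Reflection: apply the cheapest correction, e.g. downgrade the hotel.
        plan["hotel"]["cost_per_night"] *= 0.8
    raise ValueError("no feasible plan within budget")

# 550 + 400*3 = 1750 exceeds 1700, so one correction round fires.
plan = controller(budget=1700, nights=3)
```

The point of the loop is that infeasibility triggers a targeted repair rather than a full re-plan, which keeps correction cheap.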

Transportation agents balance time, budget, and mode preferences; attraction agents align user interests with geographic logic; route planning agents solve a heuristic optimization problem that selects POIs and schedules daily trips while minimizing total travel time; dining and hotel agents suggest options within budget; the integration agent stitches all POIs together; and the controller validates the solution against the Python‑based domain language.
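As a stand-in for the route-planning agent’s heuristic optimization, a nearest-neighbor ordering of the selected POIs shows the flavor of minimizing total travel time (using straight-line distance as a proxy). The coordinates below are rough illustrative values, not sourced data.

```python
import math

# Approximate coordinates for a few Beijing POIs (illustrative only).
pois = {
    "Forbidden City": (39.916, 116.397),
    "Temple of Heaven": (39.882, 116.406),
    "Summer Palace": (39.999, 116.275),
    "Lama Temple": (39.947, 116.411),
}

def dist(a, b):
    # Straight-line distance in degrees, a crude proxy for travel time.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor_route(start, names):
    # Greedy heuristic: always visit the closest unvisited POI next.
    route, here, left = [start], pois[start], set(names) - {start}
    while left:
        nxt = min(left, key=lambda n: dist(here, pois[n]))
        route.append(nxt)
        here = pois[nxt]
        left.remove(nxt)
    return route

route = nearest_neighbor_route("Forbidden City", pois)
```

A production planner would use real travel-time estimates and stronger search (e.g. local improvement over the greedy tour), but the objective, visiting all selected POIs while keeping total travel short, is the same.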

Future Outlook

We plan to broaden the framework’s applicability to more complex constraint scenarios, continuously improve module performance, and enhance response speed and plan quality for Ant Group’s travel services. The Ant AI Travel Assistant is already live, and the team will contribute to high‑quality travel data sets, expert model training, and industry standards to advance AI‑driven travel applications.

large language models, multi-agent systems, AI Optimization, IJCAI, Travel Planning