
Exploring Game AI Agents: Review, LLM‑Driven Exploration, and Future Directions

This article reviews the evolution of game AI agents, examines how large language models (LLMs) can drive new AI behaviors in games, and discusses practical case studies across genres such as Werewolf‑style, war‑SLG, and MOBA games, concluding with challenges and future research directions.

DataFunSummit

Speaker Zhang Shize from NetEase introduces his work on game-scene data mining, AI bots, and AIGC applications, organizing the talk into three parts: a review of game AI agents, LLM-driven exploration of game AI, and a summary with an outlook.

1. Game AI Agent Value

Game AI agents already serve many scenarios: warm-up matches, low-skill matches, filling out games with too few participants, and taking over for disconnected players. Early implementations relied on rule-based systems, which later evolved into behavior trees, policy networks, and reinforcement or imitation learning. With the advent of ChatGPT-style language models, agents can now be driven by natural-language reasoning.
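To make the rule-based-to-behavior-tree progression concrete, here is a minimal behavior-tree sketch. The node combinators and the combat-bot leaf behaviors (`enemy_visible`, `attack`, `patrol`) are hypothetical illustrations, not anything from the talk.

```python
# Minimal behavior tree: a selector tries children until one succeeds;
# a sequence requires every child to succeed in order.

def selector(*children):
    def run(state):
        return any(child(state) for child in children)
    return run

def sequence(*children):
    def run(state):
        return all(child(state) for child in children)
    return run

# Hypothetical leaf behaviors for a simple combat bot.
def enemy_visible(state):
    return state.get("enemy_visible", False)

def attack(state):
    state["action"] = "attack"
    return True

def patrol(state):
    state["action"] = "patrol"
    return True

# Attack if an enemy is visible; otherwise fall back to patrolling.
bot = selector(sequence(enemy_visible, attack), patrol)

state = {"enemy_visible": True}
bot(state)
assert state["action"] == "attack"
```

The appeal of this structure is that designers can rearrange readable nodes instead of editing a monolithic rule table, which is why behavior trees displaced flat rule systems before learned policies did.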

2. LLM-Driven Game AI Exploration

Three case studies are presented:

Werewolf‑like text‑heavy games: LLMs generate logical, context‑aware dialogue, handling game‑specific terminology.

War‑SLG games: LLMs assist low‑frequency, high‑decision‑weight tasks such as strategic city building and alliance management.

MOBA games: LLM‑generated chat influences in‑game decisions, bridging social communication and tactical actions.

For each case, the workflow involves constructing prompts, maintaining a memory stream for context, and running self-play loops in which the language model interacts with the game environment, collects (state, action, next state, reward) trajectories, and iteratively refines its policy.
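The self-play loop above can be sketched as follows. This is an illustrative toy, not the speaker's implementation: `ToyEnv` is a stand-in environment, and `llm_policy` is a placeholder for the real "prompt + memory stream → action" call to a language model.

```python
import random

class ToyEnv:
    """Tiny stand-in environment: reach position 3 to win."""
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):                # action is +1 or -1
        self.pos = max(0, self.pos + action)
        done = self.pos >= 3
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

def llm_policy(state, memory):
    # Placeholder for an LLM call conditioned on the memory stream;
    # here just a coin biased toward advancing.
    return 1 if random.random() < 0.9 else -1

def self_play_episode(env, memory):
    trajectory = []                        # (state, action, next_state, reward)
    state = env.reset()
    done = False
    while not done:
        action = llm_policy(state, memory)
        next_state, reward, done = env.step(action)
        trajectory.append((state, action, next_state, reward))
        memory.append(f"at {state} took {action}")   # context for later prompts
        state = next_state
    return trajectory

random.seed(0)
traj = self_play_episode(ToyEnv(), memory=[])
assert traj[-1][3] == 1.0   # terminal transition carries the win reward
```

In the real setting the collected trajectories would feed back into prompt refinement or fine-tuning, closing the iterative policy-improvement loop the talk describes.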

3. Summary and Outlook

The three cases illustrate that LLMs excel at macro-level decision making and dialogue generation, but still struggle with real-time low-level control, game-specific jargon, and converting chat commands into actionable game APIs. Future work should focus on better modeling of historical context, terminology learning, and robust command-to-action pipelines.
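A command-to-action pipeline of the kind flagged as future work could, in its simplest form, parse chat into a structured intent and dispatch it to a game API. The command vocabulary and the `api_*` handlers below are hypothetical; a production system would need far more robust intent parsing than a regular expression.

```python
import re

# Hypothetical game API endpoints.
def api_move(target):
    return f"MOVE->{target}"

def api_attack(target):
    return f"ATTACK->{target}"

HANDLERS = {"move": api_move, "attack": api_attack}

# Match a known verb, optionally followed by "to <target>" or "<target>".
PATTERN = re.compile(r"\b(move|attack)\b(?:\s+(?:to\s+)?(\w+))?")

def chat_to_action(message):
    m = PATTERN.search(message.lower())
    if not m:
        return None                       # no actionable command in the chat
    verb, target = m.group(1), m.group(2) or "default"
    return HANDLERS[verb](target)

assert chat_to_action("please move to mid") == "MOVE->mid"
assert chat_to_action("gl hf") is None
```

Returning `None` for unrecognized chat matters: most messages are social rather than tactical, and a safe pipeline must refuse to act on them rather than guess.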

Tags: AI Agents, LLM, game development, language model, game AI, self-play
Written by

DataFunSummit

Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.
