Game AI SDK: Overview, Architecture, and Usage
Tencent’s open‑source Game AI SDK provides a general automation testing platform for a wide range of game genres as well as mobile and PC applications. It integrates environment simulation, configurable tools, image‑recognition modules, and deep‑learning algorithms (DQN and imitation learning, IM) into a unified, user‑friendly workflow for training and executing AI agents.
Game AI SDK is the first open‑source project developed by Tencent TuringLab, aimed at solving the generality problem of automation testing tools. Originally built for game AI automated testing services, it can now be applied to mobile apps, PC games, and other software for specialized automation testing. The AI algorithms are trained on large‑scale data, providing good generalization across games of the same type.
The SDK is publicly available on GitHub: https://github.com/Tencent/GameAISDK. The TuringLab team also published the book "AI Automated Testing: Technical Principles, Platform Construction, and Engineering Practice", which details the development and application experience of their deep‑learning‑based AI testing framework.
Supported game categories include shooting (e.g., CrossFire), MMO (e.g., Xianxia, Dragon Nest), puzzle (e.g., Happy Eliminate), battle‑royale (e.g., Peace Elite), racing (e.g., QQ Speed), MOBA (e.g., Honor of Kings), fighting (e.g., Soul Warrior), action (e.g., Contra), card (e.g., Saint Seiya), board games (e.g., Texas Hold'em), endless runner (e.g., Crazy Run), sports (e.g., NBA), and flight shooting (e.g., Airplane Battle). The SDK can also be used for mobile apps and Windows applications.
Technical Architecture
The platform consists of four main parts:
AI SDK Platform – core functions integrated into a single platform.
Tools – a set of SDK tools for AI‑related configuration; users can also develop custom tools.
Environment Simulation (EM) – generates mobile game environments to accelerate training of networks such as DQN.
AI Template Library – pre‑built templates categorized by game type, extensible with user‑defined templates.
Figure 1: Overall System Modules
Core Platform Modules
Automation System – handles data acquisition (screenshots or API data) and forwards AI actions to the device.
UI Module – recognizes and processes game UI elements.
Image Recognition Module – performs all image‑based recognition tasks and passes results to the AI algorithm module.
AI Algorithm Module – receives recognition results or API data, runs AI networks (e.g., DQN, IM), and outputs possible actions.
The data flow is: game screen data (or API data) → automation system → image recognition module → AI algorithm module → action execution on the device.
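The loop above can be sketched as a minimal pipeline. All function and class names below are illustrative stubs, not the SDK's actual API; the real platform wires these modules together through its own messaging layer:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """A captured game screenshot (stubbed as just an identifier)."""
    frame_id: int

def capture_frame(frame_id: int) -> Frame:
    # Automation system: grab a screenshot from the device (stubbed).
    return Frame(frame_id)

def recognize(frame: Frame) -> dict:
    # Image recognition module: extract game state from the frame (stubbed).
    return {"frame": frame.frame_id, "button_visible": True}

def decide(state: dict) -> str:
    # AI algorithm module: map the recognized state to an action.
    return "tap_button" if state["button_visible"] else "wait"

def execute(action: str) -> str:
    # Automation system: send the chosen action back to the device (stubbed).
    return f"executed:{action}"

# One iteration: screen -> recognition -> AI decision -> device action.
result = execute(decide(recognize(capture_frame(1))))
print(result)  # executed:tap_button
```

In the real SDK each stage runs as a separate module and the loop repeats continuously while the agent plays.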
AI Workflow
Step 1: Users provide data via screenshots or API.
Step 2: Game AI SDK reads the task and AI algorithm configurations.
Step 3: Training is performed either in the simulated environment or online.
Step 4 (optional): Game data is stored for future training or analysis.
Figure 3: AI Process Flowchart
Image Recognition Task Flow
Users may need to label samples (e.g., for YOLO) using SDKTool or an external tool such as LabelImg. After labeling, tasks are configured in SDKTool, and then either training or direct recognition is performed.
Figure 4: Image Recognition Task Flowchart
Image Recognition Module Details
Built on TensorFlow and OpenCV, the module provides common algorithms such as YOLO, template matching, pixel detection, and feature‑point matching. It also offers game‑specific recognizers for numbers, buttons, health bars, etc., and supports multithreading for performance.
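To illustrate the pixel-detection style of recognizer (e.g., the health-bar recognizer), the sketch below estimates a bar's fill ratio by counting pixels close to its fill color. This is a NumPy-only stand-in with made-up names; the SDK's real recognizers are built on OpenCV and TensorFlow:

```python
import numpy as np

def health_ratio(bar: np.ndarray, color: tuple, tol: int = 30) -> float:
    """Estimate health-bar fill as the fraction of pixels near `color`.

    bar:   H x W x 3 uint8 image strip cropped to the health bar.
    color: expected RGB of the filled portion, e.g. red (200, 30, 30).
    tol:   per-channel tolerance for counting a pixel as "filled".
    """
    diff = np.abs(bar.astype(int) - np.array(color))
    mask = (diff <= tol).all(axis=-1)  # pixels matching the fill color
    return float(mask.mean())

# Synthetic 4x10 bar: left 70% "filled" red, right 30% black background.
bar = np.zeros((4, 10, 3), dtype=np.uint8)
bar[:, :7] = (200, 30, 30)
print(round(health_ratio(bar, (200, 30, 30)), 2))  # 0.7
```

Template matching and feature-point matching follow the same pattern at a higher level: crop a region of interest, compare it against reference data, and emit a structured result for the AI module.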
Figure 5: Image Recognition Module
AI Algorithm Module
Inputs come from image‑recognition results or directly from APIs. The module is built on TensorFlow, delivers actions to the device via ADB, and includes two built‑in algorithms:
DQN – requires no labeled data and trains online; it needs longer training time but generalizes better.
IM (imitation learning) – requires sample collection (via SDKTool), trains quickly, and suits specific game scenes.
Users can also extend the platform with custom algorithms via provided interfaces. A behavior‑tree (BeTree) component allows defining custom AI behavior rules.
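A behavior-tree rule of the kind BeTree supports can be sketched in a few lines. This is a pure-Python illustration with invented node names; BeTree's actual node API differs:

```python
# Minimal behavior tree: a Selector tries children until one succeeds,
# a Sequence requires all of its children to succeed in order.

class Condition:
    def __init__(self, pred):
        self.pred = pred
    def tick(self, state):
        return "SUCCESS" if self.pred(state) else "FAILURE"

class Action:
    def __init__(self, name):
        self.name = name
    def tick(self, state):
        state["log"].append(self.name)  # record the executed action
        return "SUCCESS"

class Sequence:
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) != "SUCCESS":
                return "FAILURE"
        return "SUCCESS"

class Selector:
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == "SUCCESS":
                return "SUCCESS"
        return "FAILURE"

# Rule: if HP is low, retreat; otherwise attack.
tree = Selector(
    Sequence(Condition(lambda s: s["hp"] < 30), Action("retreat")),
    Action("attack"),
)

state = {"hp": 20, "log": []}
tree.tick(state)
print(state["log"])  # ['retreat']
```

Such rule trees complement the learned policies: DQN or IM handles moment-to-moment control, while hand-written rules cover scripted scenes.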
Figure 6: AI Algorithm Module
Usage Guide
1. Environment Installation – Choose local installation for development or use the provided Docker image. Detailed steps are in the source documentation.
2. SDKTool UI Configuration – No scripting required; users connect a device, perform minimal sample collection and labeling, and the tool generates configuration data.
Figure 7: UI Recognition Configuration
3. Recognition Task Configuration – Define which image data to extract and feed into the AI module.
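Conceptually, a recognition task produced by SDKTool is a structured description like the following. The field names here are illustrative only, not the SDK's exact schema:

```json
{
  "taskID": 1,
  "type": "fix object",
  "description": "detect the attack button",
  "ROI": {"x": 1050, "y": 520, "w": 180, "h": 120},
  "templates": ["data/attack_button.png"],
  "threshold": 0.8
}
```

The image recognition module reads such entries, runs the corresponding algorithm on each frame, and forwards the results to the AI module.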
Figure 8: Recognition Task Configuration
4. AI Algorithm Configuration – Parameters for DQN and IM can be set without writing code.
Figure 9: DQN Parameter Configuration
5. Action Definition – Define game actions in SDKTool; the trained AI agent will output these actions based on visual input.
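Defined actions ultimately become device inputs; over ADB, a tap is delivered with the standard `adb shell input tap x y` command. The helper below (a hypothetical wrapper, not SDK code) only builds the command without executing it:

```python
from typing import Optional

def adb_tap_cmd(x: int, y: int, serial: Optional[str] = None) -> list:
    """Build the adb command that taps screen coordinate (x, y).

    serial selects a target device when several are connected.
    """
    cmd = ["adb"]
    if serial:
        cmd += ["-s", serial]  # target a specific device
    cmd += ["shell", "input", "tap", str(x), str(y)]
    return cmd

print(adb_tap_cmd(540, 1200))
# ['adb', 'shell', 'input', 'tap', '540', '1200']
```

The returned list can be passed to `subprocess.run(...)` to drive a connected device or emulator.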
Figure 10: AI Action Definition
6. Training and Execution – After configuration, launch SDKTool to start training and run the AI on a connected device. Progress is displayed in the UI.
Figure 11: IM Training Progress
Conclusion
The goal of Game AI SDK is to provide a generic automation platform that lets users focus on testing business logic. By leveraging deep learning and image‑recognition algorithms, the SDK extracts key data from game screens and feeds it to AI models. Larger training datasets improve model generalization, and the platform continues to evolve with advances in AI and computer‑vision technologies.
Reference links:
https://github.com/Tencent/GameAISDK
http://aitest.qq.com
Tencent Cloud Developer