Build a Conversational 24‑Point Game with Baidu AppBuilder’s AI Agent
This guide walks through the complete workflow of creating an AI‑native 24‑point game using Baidu Cloud's AppBuilder, covering the three‑step methodology, Agent architecture, component design, custom workflow implementation, and practical tips for optimal model selection.
The article explains how to build an AI‑native application—a conversational 24‑point game—using Baidu Cloud's Qianfan AppBuilder. AI‑native apps differ from traditional apps by using natural language interaction (text, voice, vision) to drive tasks; this example focuses on text‑based dialogue.
Three‑Step Methodology
Creative description: a one‑sentence summary of the idea.
Creative decomposition: split the idea into thinking modules and component work.
Creative implementation: use natural‑language prompts to describe the thinking model and realize the components.
Case Overview
The 24‑point game assistant should be able to generate a random set of four numbers (1‑13), verify a user‑provided expression that evaluates to 24, and offer hints when requested.
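The two deterministic pieces of the game can be sketched in plain Python. This is an illustrative stand‑in for the components built later in the article, not AppBuilder code; the function names `generate_puzzle` and `verify_answer` are assumptions.

```python
import ast
import operator
import random

def generate_puzzle(start=1, end=13):
    """Draw four random integers for a round of the 24-point game."""
    return [random.randint(start, end) for _ in range(4)]

# Only the four arithmetic operators the game allows.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node):
    """Safely evaluate an arithmetic AST limited to +, -, *, / and numbers."""
    if isinstance(node, ast.Expression):
        return _eval(node.body)
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    raise ValueError("only +, -, *, / and numbers are allowed")

def verify_answer(expression):
    """Return True if the expression evaluates to 24 (within float tolerance)."""
    try:
        value = _eval(ast.parse(expression, mode="eval"))
    except (ValueError, SyntaxError, ZeroDivisionError):
        return False
    return abs(value - 24) < 1e-6
```

Parsing with `ast` instead of calling `eval` directly keeps the checker from executing arbitrary user input.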
Why an Agent Is Needed
Because large language models only predict the next token, prompt engineering or fine‑tuning alone cannot reliably implement "generate puzzle", "verify answer", and "suggest solution". An Agent—combining a thinking model with tool components—provides a robust solution.
Agent Architecture
An Agent consists of a thinking model (often a standard LLM) that decides which tool to invoke and a set of tool components that perform concrete actions. The thinking model handles planning, reasoning, and decision‑making, while components execute tasks such as number generation, expression evaluation, or hint provision.
Task Decomposition for the 24‑Point Game
Define the thinking module’s functional boundaries: specify the protocol for invoking components, including trigger conditions, component descriptions, and input parameter design.
Implement the components according to the definitions above.
AppBuilder Framework
AppBuilder uses an interactive dialogue as the entry point. The framework follows the ReAct algorithm, which interleaves reasoning ("think") and acting ("act") steps, offering good interpretability despite not being the most efficient.
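The think/act interleaving can be sketched as a simple loop. This is a minimal illustration of the ReAct pattern under assumed interfaces (a `model` callable returning either a tool call or a final answer), not the framework's actual implementation.

```python
def react_loop(model, tools, query, max_steps=5):
    """Alternate reasoning ("think") and tool invocation ("act")
    until the model emits a final answer or the step limit is hit."""
    history = [f"Question: {query}"]
    for _ in range(max_steps):
        step = model("\n".join(history))        # think: plan the next move
        if step["type"] == "final_answer":
            return step["content"]
        tool = tools[step["tool"]]              # act: invoke the chosen tool
        observation = tool(**step["arguments"])
        # Feed the observation back so the next "think" step can use it.
        history.append(f"Action: {step['tool']}({step['arguments']})")
        history.append(f"Observation: {observation}")
    return "Stopped after reaching the step limit."
```

The explicit Action/Observation trace is what gives ReAct its interpretability: every tool call and its result is visible in the history.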
Component Execution Details
Long‑term memory: persisted variables, knowledge‑base retrievals.
Short‑term memory: multi‑turn dialogue history, system time, etc.
Component description: official components have built‑in descriptions; custom components require user‑defined descriptions.
Component execution results: multi‑call tasks need to pass prior results back to the thinking model.
System prompts: hidden prompts that guide the model.
User role prompts: developer‑defined instructions that influence model behavior.
Current query and uploaded file info.
Defining the Thinking Module
Key elements include role instructions, component descriptions, input‑parameter design, output design, and model selection.
Role Instructions (example)
# Role task
As a 24‑point game assistant, your job is to randomly generate four numbers between 1 and 13, let the player use +, -, *, / and parentheses to reach 24, verify the player's answer, and provide hints when needed.
# Tool capabilities
Generate puzzle: invoked when the user asks for a new game.
Provide solving hints: invoked when the user requests assistance.
Validate answer: invoked when the user submits an expression; the tool evaluates whether it equals 24.
Component Design
Each component is described as a function/API with explicit input parameters:
Puzzle generation component: parameters {"name": "start", "type": "string", "desc": "minimum value"} and {"name": "end", "type": "string", "desc": "maximum value"}.
Answer verification component: parameter {"name": "expression", "type": "string", "desc": "user's arithmetic expression"}.
Solution suggestion component: parameters for the four numbers, e.g., {"name": "number1", "type": "string", "desc": "first number"}, etc.
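Collected together, the three descriptions form the kind of structured tool listing the thinking model consumes. The manifest below is a hypothetical assembly of the schemas in this section; the component names and the `"name"`/`"type"`/`"desc"` fields mirror the parameter designs above.

```python
# Hypothetical tool manifest; trigger descriptions follow the
# "Tool capabilities" section of the role instructions.
TOOLS = [
    {
        "name": "generate_puzzle",
        "description": "Invoked when the user asks for a new game.",
        "parameters": [
            {"name": "start", "type": "string", "desc": "minimum value"},
            {"name": "end", "type": "string", "desc": "maximum value"},
        ],
    },
    {
        "name": "verify_answer",
        "description": "Invoked when the user submits an expression.",
        "parameters": [
            {"name": "expression", "type": "string",
             "desc": "user's arithmetic expression"},
        ],
    },
    {
        "name": "suggest_solution",
        "description": "Invoked when the user requests assistance.",
        "parameters": [
            {"name": f"number{i}", "type": "string",
             "desc": f"number {i} of the puzzle"}
            for i in range(1, 5)
        ],
    },
]
```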
Implementation of Custom Components
AppBuilder does not provide these three components out‑of‑the‑box; they must be built as custom workflow nodes. The typical pattern is to connect a code node to a start node (providing input variables) and an end node (outputting the result). Images in the original article illustrate the node connections.
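A plausible body for the verification code node might look like the following. This assumes the node receives the start node's variables as a dict and returns a dict consumed by the end node; the exact signature is AppBuilder‑specific and the `main`/`params`/`result` names are illustrative.

```python
def main(params):
    """Code-node sketch: check whether the player's expression equals 24."""
    expression = params["expression"]
    # Whitelist characters (and reject "**") before evaluating, so eval()
    # cannot run arbitrary code passed in by the player.
    if not set(expression) <= set("0123456789+-*/() ") or "**" in expression:
        return {"result": "invalid expression"}
    try:
        value = eval(expression)
    except (SyntaxError, ZeroDivisionError):
        return {"result": "invalid expression"}
    return {"result": "correct" if abs(value - 24) < 1e-6 else "incorrect"}
```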
Model Selection
The recommended thinking model is the latest Ernie‑Speed AppBuilder‑specific version (released 2024‑05‑25), which is 3‑4× faster than Ernie‑4.0 and 2‑3× faster than Ernie‑3.5. For the QA model, Ernie‑3.5‑8K is suggested because it generalizes better on custom components.
Practical Tips
Official component descriptions are pre‑designed and tuned, so the thinking model already understands them well.
Custom component descriptions and input schemas must be clear and model‑friendly.
When choosing a thinking model, balance cost and capability; Ernie‑Speed is optimal for official components, while Ernie‑3.5/4.0 handle custom components better.
Before building the full Agent, create a small test suite covering all components to validate behavior.
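The last tip can be as lightweight as a smoke‑test list that calls each component once. The sketch below is illustrative: the case table reuses the parameter schemas from this article, and the component functions passed in are stand‑ins for the real workflow nodes.

```python
# One representative call per component of the 24-point game.
CASES = [
    ("generate_puzzle", {"start": "1", "end": "13"}),
    ("verify_answer", {"expression": "(1+2+3)*4"}),
    ("suggest_solution", {"number1": "3", "number2": "3",
                          "number3": "4", "number4": "4"}),
]

def run_smoke_tests(components):
    """Call every component once and return (name, error) pairs for failures."""
    failures = []
    for name, args in CASES:
        try:
            components[name](**args)
        except Exception as exc:
            failures.append((name, repr(exc)))
    return failures
```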
Conclusion
AppBuilder dramatically lowers the barrier for developers to create AI‑native applications. By clearly defining the creative idea, decomposing it into thinking modules and components, and selecting appropriate models, developers can quickly prototype functional AI agents such as the conversational 24‑point game.