Build AI Apps with Natural Language Using VibeCoding: A Hands‑On Guide
This article shows how to build a range of AI applications entirely through natural-language prompts in VibeCoding: a children's picture-book generator, a guessing-game app, and an enterprise website with knowledge-base Q&A. It covers setup, step-by-step construction, code-free publishing, integration of various AI services, deployment configuration, and practical tips for reliable development.
Overview
VibeCoding lets developers create AI-enhanced web applications using only natural-language prompts. In expert mode, a built-in "senior full-stack engineer" agent iteratively generates the full codebase, UI layout, and configuration; the user never writes a source file by hand.
Representative Scenarios
1. Children’s Picture‑Book Generator
Through roughly ten dialogue rounds, the expert agent produces a complete Next.js application that serves an audio-enabled picture book. Once the conversation ends, the app can be deployed with a single click of the "Publish" button.
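The article does not reproduce the generated code, but the audio narration boils down to one proxy call. A minimal client-side sketch, assuming a '/api/ai' proxy route, a 'text-to-speech' action name, and an audio-bytes response (all assumptions, not the agent's actual output):

// Hypothetical client-side call: request narration audio for one page of the book.
// The '/api/ai' route, the action name, and the response format are assumptions.
async function narratePage(pageText: string): Promise<HTMLAudioElement> {
  const res = await fetch('/api/ai', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ action: 'text-to-speech', data: { text: pageText } }),
  });
  if (!res.ok) throw new Error(`TTS request failed: ${res.status}`);
  // Assume the proxy returns raw audio bytes; wrap them in a playable element.
  const blob = await res.blob();
  return new Audio(URL.createObjectURL(blob));
}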
2. Children’s Guess‑Drawing Game
In four dialogue rounds, the same agent creates an interactive game in which children draw and the AI guesses what the drawing shows. No database is required.
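The core of the game is a single visual-understanding call. A hedged sketch, assuming the same '/api/ai' proxy, an OpenAI-compatible multimodal message format, and an illustrative model name:

// Hypothetical sketch: send the child's drawing to a visual-understanding action
// and ask the model to guess what it depicts. Action name and payload shape are assumptions.
async function guessDrawing(canvas: HTMLCanvasElement): Promise<string> {
  const imageUrl = canvas.toDataURL('image/png'); // base64 data URL of the drawing
  const res = await fetch('/api/ai', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      action: 'visual-understanding',
      data: {
        model: 'qwen-vl-plus', // a DashScope vision model; the exact choice is illustrative
        messages: [{
          role: 'user',
          content: [
            { type: 'image_url', image_url: { url: imageUrl } },
            { type: 'text', text: 'Guess in one word what this drawing shows.' },
          ],
        }],
      },
    }),
  });
  const result = await res.json();
  return result.choices?.[0]?.message?.content ?? '';
}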
3. Enterprise Website + Knowledge Q&A
The workflow is split into two steps.
Step 1 – Build Knowledge‑Base Service
1. Create a dataset.
2. Create a data source.
3. Upload knowledge documents.
4. Create a knowledge-base agent.
5. Link the dataset to the agent.
6. Copy the generated API endpoint (the sketch below shows a call against it).
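With the endpoint copied, the website can query it directly. A minimal sketch, assuming a simple question-in/answer-out JSON contract (the real request and response shapes depend on how the knowledge-base agent is configured):

// Hypothetical helper for querying the copied knowledge-base endpoint.
// KB_ENDPOINT and the request/response fields are assumptions for illustration.
const KB_ENDPOINT = process.env.KB_ENDPOINT ?? '';

async function askKnowledgeBase(question: string): Promise<string> {
  const res = await fetch(KB_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: question }),
  });
  if (!res.ok) throw new Error(`Knowledge-base call failed: ${res.status}`);
  const { answer } = await res.json(); // assumed response field
  return answer;
}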
Step 2 – Integrate with Expert Mode
Using the same "senior full-stack engineer" agent, the developer adds calls to the knowledge-base API and to other AI capabilities (text generation, image generation, visual understanding, audio understanding, text-to-speech) via the IntegrationExamples block. A representative text-generation call:
const response = await fetch('/api/ai', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    action: 'text-generation',
    data: {
      model: 'qwen-plus',
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: 'Who are you?' }
      ]
    }
  })
});

The snippet shows the basic text-generation invocation; the same action/data pattern extends to streaming responses and to the image-generation and audio-understanding APIs.
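On the client, a streamed reply can be consumed incrementally instead of waiting for the full JSON body. A hedged sketch, assuming the proxy forwards the upstream stream unchanged and that a stream flag in data enables it:

// Hypothetical sketch: read a streamed text-generation reply chunk by chunk on the client.
// The 'stream: true' flag and pass-through behavior of /api/ai are assumptions.
async function streamStory(onChunk: (text: string) => void): Promise<void> {
  const res = await fetch('/api/ai', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      action: 'text-generation',
      data: { model: 'qwen-plus', stream: true, messages: [{ role: 'user', content: 'Tell me a story.' }] },
    }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true })); // e.g. append each chunk to the chat UI
  }
}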
System Prompt Integration
A full system-prompt template defines the expert's role, its collaborators (a database expert and a project manager), and a library of integration examples for DashScope services (text, image, visual, audio, speech). This prompt guides the AI to emit correct code and configuration files.
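The full template is not reproduced in this article; its rough shape, with illustrative wording only, looks like this:

// Illustrative shape of the expert-mode system prompt; the real template is far longer
// and its exact wording is not shown in the article.
const SYSTEM_PROMPT = `
You are a senior full-stack engineer. You collaborate with a database expert
and a project manager. Generate complete Next.js code and configuration files.
When the user needs AI capabilities, follow the IntegrationExamples below:
<IntegrationExamples>
  ...text generation, image generation, visual understanding,
  audio understanding, and text-to-speech call patterns...
</IntegrationExamples>
`;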
Development‑Stage Integration
VibeCoding injects an AI API server into the Next.js backend (app/api/ai/route.ts). The file defines TypeScript types for the different actions, maps actions to DashScope endpoints, authenticates with the DASHSCOPE_API_KEY environment variable, supports both synchronous and asynchronous calls, and forwards streaming responses directly to the client.
import { NextRequest, NextResponse } from 'next/server';

// MessageContent (multimodal message parts) is defined elsewhere in the generated file.
type Message = { role: string; content: string | MessageContent[] };

// Some endpoints (e.g. async task polling) are functions of a task ID, hence the union type.
const API_ENDPOINTS: Record<string, string | ((taskId: string) => string)> = {
  'text-generation': 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions',
  'image-generation': 'https://dashscope.aliyuncs.com/api/v1/services/aigc/text2image/image-synthesis',
  // ... other endpoints
};

export async function POST(req: NextRequest) {
  const { action, data } = await req.json();
  const apiKey = process.env.DASHSCOPE_API_KEY;
  if (!apiKey) return NextResponse.json({ error: 'API key missing' }, { status: 500 });
  const endpoint = API_ENDPOINTS[action];
  const url = typeof endpoint === 'function' ? endpoint(data.taskId) : endpoint;
  const response = await fetch(url, {
    method: 'POST',
    headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  });
  const result = await response.json();
  return NextResponse.json(result, { status: response.status });
}
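The handler above covers the plain JSON case. For the streaming responses mentioned earlier, the route would pass the upstream body through untouched; a sketch of such a branch (assumed, not the verbatim generated code):

// Hypothetical helper: forward an upstream streaming response to the client untouched,
// so the browser receives tokens as DashScope emits them.
function forwardStream(upstream: Response): Response {
  return new Response(upstream.body, {
    status: upstream.status,
    headers: { 'Content-Type': upstream.headers.get('Content-Type') ?? 'text/event-stream' },
  });
}
// Inside POST, before parsing JSON:  if (data.stream) return forwardStream(response);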
Production-Stage Integration
For production, the AI service is packaged as a Function Compute deployment described by an s.yaml configuration file. The YAML defines resources, runtime, environment variables (including the API key), and HTTP trigger settings. The file can be generated programmatically and deployed with the Function Compute CLI.
import { promises as fs } from 'fs';

// projectId and sYamlPath are assumed to be defined by the surrounding generator code.
// Note: \${vars.region} is escaped so it survives as a Serverless Devs variable in the
// emitted YAML, while ${process.env...} and ${projectId} are interpolated at generation time.
const sYamlContent = `edition: 3.0.0
name: mayama_ai_generated_app
vars:
  region: '${process.env.REGION}'
  functionName: 'mayama_${projectId}'
resources:
  mayama_nextjs_build:
    component: fc3
    actions:
      pre-deploy:
        - run: npm run build
    props:
      region: '\${vars.region}'
      runtime: custom.debian10
      memorySize: 3072
      environmentVariables:
        DASHSCOPE_API_KEY: ${process.env.DASHSCOPE_API_KEY || ''}
      triggers:
        - triggerType: http
          triggerConfig:
            methods: [GET, POST, PUT, DELETE]
`;
await fs.writeFile(sYamlPath, sYamlContent, 'utf-8');
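Once the file is written, deployment is a single CLI step. Assuming the Serverless Devs tooling that consumes s.yaml, the generator can shell out to it:

// Hypothetical deploy step: invoke the Serverless Devs CLI from the generator.
// The -y flag (skip confirmation prompts) is an assumption about the desired behavior.
import { execSync } from 'child_process';
execSync('s deploy -y', { stdio: 'inherit' });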
Practical Tips
Project Unique ID: Keep the same project ID across sessions so the agent updates the existing project incrementally instead of creating a new one.
Concrete API Calls: Provide explicit JSON schemas or layout instructions when needed to guide the AI (see the schema sketch after this list).
Error Handling: Paste error messages back to the AI so it can adjust the generated code.
Loop Prevention: If the AI gets stuck, ask it to change the approach or rephrase the request.
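As an example of the second tip, a schema hint pasted into the prompt can pin down the exact response shape the UI expects (the field names below are hypothetical, not from the article):

// Illustrative instruction to include in the prompt so generated code matches the UI.
const responseSchemaHint = `
Return JSON matching exactly:
{
  "title": string,        // page title, max 40 characters
  "pages": [{ "text": string, "imagePrompt": string }]
}
`;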
Future Outlook
Planned enhancements include native login via domestic platforms (Alipay, WeChat), support for mini‑programs and mobile apps, and more sophisticated agentic workflows that combine multimodal AI capabilities.