
A Practical Guide to Building Large Language Model Applications: Prompt Engineering, Retrieval‑Augmented Generation, Function Calling and AI Agents

The guide teaches non‑AI developers how to build practical LLM‑powered applications by mastering prompt engineering, function calling, retrieval‑augmented generation, and AI agents, and introduces the Model Context Protocol for seamless tool integration, offering a clear learning path to leverage large language models without deep theory.

Tencent Cloud Developer

In recent years, large language models (LLMs) have surged ahead of other technologies, profoundly influencing programming and prompting many developers to worry about being replaced. Rather than fearing the change, the article encourages developers to embrace it actively.

The guide is aimed at developers without an AI background and explains how to build LLM‑powered applications without deep mathematical or theoretical knowledge. It outlines a clear learning path:

Understand the role of prompts (zero‑shot and few‑shot) and how to craft them so the model returns data in a strict format (e.g., JSON).
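A minimal sketch of this idea in Go, assuming a hypothetical sentiment task: a few‑shot prompt shows the model the exact JSON shape to return, so the reply can be parsed mechanically into a struct. The schema and example inputs are illustrative, not from the article.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Sentiment is the strict schema we ask the model to follow.
type Sentiment struct {
	Label string  `json:"label"`
	Score float64 `json:"score"`
}

// buildPrompt uses few-shot examples to teach the model the exact
// JSON shape, so the reply never needs free-text parsing.
func buildPrompt(input string) string {
	return `Classify the sentiment. Reply with JSON only, no prose.
Example: "great product" -> {"label":"positive","score":0.95}
Example: "broke in a day" -> {"label":"negative","score":0.90}
Input: "` + input + `" ->`
}

// parseReply turns a well-formed model reply into typed data.
func parseReply(reply string) (Sentiment, error) {
	var s Sentiment
	err := json.Unmarshal([]byte(reply), &s)
	return s, err
}

func main() {
	fmt.Println(buildPrompt("works as advertised"))
	s, err := parseReply(`{"label":"positive","score":0.97}`)
	fmt.Println(s.Label, s.Score, err)
}
```

If the model drifts from the format, the `json.Unmarshal` error gives the application a clean signal to retry or re‑prompt.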

Learn the typical multi‑turn interaction pattern where the LLM can request external information, the application performs a search or calls a tool, and the result is fed back to the model.

    func AddScore(uid string, score int) {
        // first interaction
        user := userService.GetUserInfo(uid)
        // application logic
        newScore := user.Score + score
        // second interaction
        userService.UpdateScore(uid, newScore)
    }

Function calling is introduced as a way for the model to invoke external services (search, weather, database queries, etc.) by describing tools in OpenAPI‑like definitions. Example tool registration code:

    let builder = llm::Builder::new();
    let app = builder
        .tool("get_weather", weather_handler)
        .tool("search_online", search_handler)
        .build();
    app.exec(user_input);
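To make the registration above usable, each tool also needs a machine‑readable description the model can reason over. A hypothetical OpenAPI‑style definition for the `get_weather` tool might look like this (the field layout follows common function‑calling conventions; the exact schema depends on the LLM provider):

```json
{
  "name": "get_weather",
  "description": "Get the current weather for a city",
  "parameters": {
    "type": "object",
    "properties": {
      "city": { "type": "string", "description": "City name" }
    },
    "required": ["city"]
  }
}
```

The model never executes anything itself: it emits the tool name and arguments, the application runs the handler, and the result is fed back as the next message.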

The article then dives into Retrieval‑Augmented Generation (RAG). It explains the pipeline: chunk documents, embed each chunk into high‑dimensional vectors, store them in a vector database, and at query time retrieve the most semantically similar chunks to augment the prompt. It discusses embedding models (open‑source, self‑trained, LLM‑provided) and the importance of chunk size and context preservation.
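The retrieve step of that pipeline can be sketched in Go. This is a toy illustration, not a vector database: the hand‑made 3‑dimensional vectors stand in for real embeddings, and `topK` does the brute‑force cosine‑similarity search a vector store would do with an index.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// chunk pairs a piece of text with its embedding vector. In a real
// pipeline the vectors come from an embedding model; these tiny
// hand-made vectors are purely for illustration.
type chunk struct {
	text string
	vec  []float64
}

// cosine computes the cosine similarity between two vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// topK returns the k most similar chunks -- the "retrieve" step whose
// results are pasted into the prompt to "augment" the generation.
func topK(query []float64, store []chunk, k int) []chunk {
	sorted := append([]chunk(nil), store...)
	sort.Slice(sorted, func(i, j int) bool {
		return cosine(query, sorted[i].vec) > cosine(query, sorted[j].vec)
	})
	if k > len(sorted) {
		k = len(sorted)
	}
	return sorted[:k]
}

func main() {
	store := []chunk{
		{"refund policy", []float64{0.9, 0.1, 0.0}},
		{"shipping times", []float64{0.1, 0.9, 0.0}},
		{"return address", []float64{0.8, 0.2, 0.1}},
	}
	// Pretend this is the embedding of "how do I get my money back?"
	query := []float64{1, 0, 0}
	for _, c := range topK(query, store, 2) {
		fmt.Println(c.text)
	}
}
```

In production the brute‑force scan is replaced by an approximate nearest‑neighbor index, but the contract is the same: vectors in, most semantically similar chunks out.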

For code‑related use cases (e.g., Copilot), the same RAG principles apply but with additional challenges: code chunking must respect syntax and semantics, and code‑specific embedding models are needed to capture logical relationships.
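A naive sketch of syntax‑respecting chunking, assuming Go source as the input: start a new chunk at every top‑level `func` declaration so no function is split across chunks. Real code chunkers parse the AST; this line‑based version only illustrates the principle.

```go
package main

import (
	"fmt"
	"strings"
)

// chunkGoSource starts a new chunk at each top-level `func` line so a
// function body is never cut in half -- a crude stand-in for true
// AST-based chunking.
func chunkGoSource(src string) []string {
	var chunks []string
	var cur []string
	for _, line := range strings.Split(src, "\n") {
		if strings.HasPrefix(line, "func ") && len(cur) > 0 {
			chunks = append(chunks, strings.Join(cur, "\n"))
			cur = nil
		}
		cur = append(cur, line)
	}
	if len(cur) > 0 {
		chunks = append(chunks, strings.Join(cur, "\n"))
	}
	return chunks
}

func main() {
	src := "package main\n\nfunc a() {}\n\nfunc b() {}"
	fmt.Println(len(chunkGoSource(src)))
}
```

The same boundary‑preserving idea extends to splitting prose on headings or paragraphs instead of at a fixed byte count.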

Beyond knowledge‑base Q&A, the guide presents AI Agents that can perform real actions via tools. It gives a concrete example of a “daily horoscope” agent that combines user info lookup, fortune‑telling API, task retrieval from TAPD, and matchmaking from an internal BBS. The workflow is illustrated with a diagram and pseudo‑code showing how the agent orchestrates multiple tool calls.
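The orchestration can be sketched as a dispatch loop. Every name below (`get_user_info`, `get_fortune`, the report wording) is a hypothetical stand‑in for the article's horoscope‑agent tools; the point is only the shape of the loop: the model requests tools, the agent runs them, and the accumulated results become the final answer.

```go
package main

import "fmt"

// toolCall is what the model returns when it wants external data;
// the agent dispatches it and feeds the result back.
type toolCall struct {
	name string
	arg  string
}

// tools maps tool names to handlers. These are hypothetical stand-ins
// for the horoscope agent's real services (user lookup, fortune API...).
var tools = map[string]func(string) string{
	"get_user_info": func(uid string) string { return "user " + uid + ", born 1990" },
	"get_fortune":   func(sign string) string { return "lucky day for " + sign },
}

// runAgent simulates the orchestration loop: each requested tool is
// executed and its result accumulated into the final report.
func runAgent(calls []toolCall) string {
	context := ""
	for _, c := range calls {
		handler, ok := tools[c.name]
		if !ok {
			continue // unknown tool requested; skipped in this sketch
		}
		context += handler(c.arg) + "; "
	}
	return "Daily report: " + context
}

func main() {
	fmt.Println(runAgent([]toolCall{
		{"get_user_info", "42"},
		{"get_fortune", "leo"},
	}))
}
```

In a real agent the list of calls is not fixed up front: the model decides the next tool after seeing each result, so the loop runs until the model emits a final answer instead of another tool call.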

Finally, the article introduces the Model Context Protocol (MCP), an open protocol that standardizes communication between an LLM client (the agent) and MCP servers (tools). MCP supports both network RPC and local stdio transports, enabling seamless integration of external services, command runners, file readers, etc. Example MCP server configuration:

[{"type":"http/sse","addr":"a.com/x"}, {"type":"local","command":"/usr/local/bin/foo -iv"}]

The conclusion summarizes three practical directions for developers: building infra/frameworks, improving RAG pipelines (chunking, embedding, vector search), and creating MCP‑servers to expose real‑world capabilities to LLMs.

LLM · prompt engineering · RAG · vector database · embedding · AI Agent · function calling
Written by

Tencent Cloud Developer

Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.
