AI Large Model Application Practice

Focused on deep research and development of large-model applications. Author of "RAG Application Development and Optimization Based on Large Models" and "MCP Principles Unveiled and Development Guide". Primarily B2B, with B2C as a supplement.

131 articles · 0 likes · 29 views · 0 comments
Recent Articles

Mar 10, 2025 · Backend Development

How to Build a Multi‑User Agent Backend with Docker Isolation

This guide walks through constructing a multi‑user, cloud‑hosted Agent‑as‑a‑Service platform that uses Docker containers for isolation, covering the system architecture, the required Docker image, the container management API, and tool implementations such as code execution and web browsing, with complete Python code examples for testing and deployment.

Docker · Multi-user · ReAct
0 likes · 12 min read
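The isolation model described above can be sketched as a per-user container registry: each user is routed to a dedicated sandbox, and sessions never share one. The `ContainerManager` class and its method names below are illustrative, not the article's actual API; a real implementation would call the Docker API where noted.

```python
# Minimal sketch of per-user container routing. Names are hypothetical;
# a real system would create/destroy actual containers via the Docker API.
import uuid

class ContainerManager:
    """Maps each user to a dedicated (simulated) container ID."""

    def __init__(self):
        self._containers = {}  # user_id -> container_id

    def get_or_create(self, user_id: str) -> str:
        # Reuse the user's container if it exists; otherwise allocate a
        # fresh one (a real system would run a Docker container here).
        if user_id not in self._containers:
            self._containers[user_id] = f"agent-{user_id}-{uuid.uuid4().hex[:8]}"
        return self._containers[user_id]

    def release(self, user_id: str) -> None:
        # Tear down the user's container when the session ends.
        self._containers.pop(user_id, None)

mgr = ContainerManager()
c1 = mgr.get_or_create("alice")
assert mgr.get_or_create("alice") == c1   # same user, same container
assert mgr.get_or_create("bob") != c1     # different users are isolated
```

The key design point is that tool calls (code execution, browsing) are dispatched into the caller's own container, so one user's agent can never read another user's files or processes.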
Mar 3, 2025 · Artificial Intelligence

Can DeepSeek‑R1 Unlock True “Deep Thinking” for Enterprise RAG?

This article examines how swapping in DeepSeek‑R1 enhances Retrieval‑Augmented Generation with deeper reasoning; outlines its benefits and pitfalls, including slower inference, higher compute costs, and hallucinations; presents a simple hallucination test; and proposes an Agentic RAG research assistant that balances accuracy and creativity.

AI reasoning · DeepSeek · LLM
0 likes · 10 min read
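The "simple hallucination test" mentioned in the abstract can be approximated with a grounding heuristic: flag an answer when too few of its content words appear in the retrieved context. This toy overlap check is an assumption for illustration, not the article's exact test.

```python
# Toy grounding check: score how much of an answer's vocabulary is
# supported by the retrieved context. Low scores suggest hallucination.
def grounding_score(answer: str, context: str) -> float:
    stop = {"the", "a", "an", "is", "are", "was", "of", "to", "and", "in", "as"}
    answer_words = {w.lower().strip(".,") for w in answer.split()} - stop
    context_words = {w.lower().strip(".,") for w in context.split()} - stop
    if not answer_words:
        return 0.0
    return len(answer_words & context_words) / len(answer_words)

context = "DeepSeek-R1 was released in January 2025 as a reasoning model."
grounded = "DeepSeek-R1 is a reasoning model released in January 2025."
ungrounded = "DeepSeek-R1 won the 2023 Turing Award for chess."
assert grounding_score(grounded, context) > grounding_score(ungrounded, context)
```

In practice an LLM-as-judge or entailment model replaces the word-overlap heuristic, but the shape of the test, comparing the answer against its retrieved evidence, stays the same.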
Feb 28, 2025 · Artificial Intelligence

How Self-Attention Powers LLMs: A Step‑by‑Step Deep Dive

This article explains the self‑attention mechanism behind large language models, detailing why static word importance fails, how queries, keys, and values are generated, how attention scores are computed, scaled, softmaxed, and used to produce context‑aware word vectors, while noting computational costs.

AI · LLM · Self-attention
0 likes · 9 min read
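The score → scale → softmax → weighted-sum pipeline in the abstract can be shown end to end on a toy sequence. Pure Python is used for clarity; real models derive Q, K, and V from learned projection matrices and run on tensor libraries.

```python
# Toy scaled dot-product self-attention over a 3-token sequence,
# following the steps from the summary.
import math

def softmax(xs):
    m = max(xs)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(Q, K, V):
    d_k = len(K[0])
    out = []
    for q in Q:
        # 1. Attention scores: dot product of this query with every key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in K]
        # 2. Scale by sqrt(d_k), then 3. softmax into attention weights.
        weights = softmax([s / math.sqrt(d_k) for s in scores])
        # 4. Context-aware vector: weighted sum of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = K = V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = self_attention(Q, K, V)
assert len(ctx) == 3 and len(ctx[0]) == 2
```

The quadratic cost the article notes is visible here: every query attends over every key, so the score computation grows with the square of the sequence length.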
Feb 24, 2025 · Artificial Intelligence

How Web Agents Combine LLMs and Browser Automation to Perform Real‑World Tasks

This article explains what Web Agents are, their ReAct‑style reasoning loop, key implementation technologies such as observation parsing, multimodal models, and browser control tools like Selenium and Playwright, and demonstrates building a DeepSeek‑powered Web Agent with the Browser‑use framework, including code samples and performance insights.

DeepSeek · LLM · Playwright
0 likes · 11 min read
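The ReAct-style loop at the heart of a Web Agent reduces to: observe the page, ask the model for the next action, execute it in the browser, repeat until the task is done. The model and browser below are stubs for illustration; a real agent would plug in an LLM call and Playwright or Selenium.

```python
# Skeleton of a Web Agent's ReAct loop with stubbed model and browser.
def stub_model(observation: str) -> str:
    # Pretend reasoning: keep clicking until a confirmation page appears.
    return "finish" if "confirmation" in observation else "click submit"

def stub_browser(action: str) -> str:
    # Pretend execution: clicking submit navigates to a confirmation page.
    return "confirmation page" if action == "click submit" else "form page"

def react_loop(model, browser, max_steps=5):
    observation = "form page"
    trace = []
    for _ in range(max_steps):
        action = model(observation)        # Thought -> Action
        trace.append((observation, action))
        if action == "finish":
            break
        observation = browser(action)      # Action -> new Observation
    return trace

trace = react_loop(stub_model, stub_browser)
assert trace[-1][1] == "finish"
```

Frameworks like Browser-use wrap exactly this loop, handling observation parsing (DOM or screenshots for multimodal models) and translating actions into browser-control calls.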
Feb 17, 2025 · Artificial Intelligence

Mastering Structured Output for DeepSeek‑R1 with LangChain, LangGraph, and ReAct Agents

DeepSeek‑R1 excels at deep reasoning but lacks native structured output. This guide explains why structured output matters, outlines common API‑level techniques, and provides three practical solutions: an auxiliary model with a LangChain chain, a LangGraph workflow, and a ReAct agent, each complete with code snippets and JSON‑mode tips.

DeepSeek · LLM · LangChain
0 likes · 12 min read
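The core of every solution in the abstract is the same post-processing step: let R1 reason freely, then pull a validated JSON object out of its raw text. Here is that step sketched in plain Python; the article's versions wrap it in LangChain, LangGraph, or a ReAct agent rather than a bare regex.

```python
# Extract a JSON object from raw R1 output. Illustrative sketch only.
import json
import re

def extract_json(raw: str) -> dict:
    # R1 often wraps its reasoning in <think>...</think>; strip it first
    # so stray braces in the reasoning don't confuse the extraction.
    cleaned = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL)
    match = re.search(r"\{.*\}", cleaned, flags=re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

raw = '<think>The user wants a name and score.</think>\n{"name": "RAG", "score": 8}'
assert extract_json(raw) == {"name": "RAG", "score": 8}
```

The auxiliary-model variant replaces the regex with a second, structured-output-capable model that reformats R1's free text, trading an extra API call for schema guarantees.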
Jan 20, 2025 · Artificial Intelligence

How Embeddings Transform Simple Character Codes into Powerful Vectors for LLMs

This article explains how embeddings convert basic character indices into high‑dimensional vectors, describes their training via gradient descent, introduces the embedding matrix, and shows how these vectors enable modern language models to capture semantic relationships and be reused across tasks.

Embeddings · LLM · machine learning
0 likes · 8 min read
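The embedding matrix described above is mechanically simple: a token's integer index selects one row, turning a flat character code into a dense vector. In this sketch random values stand in for learned weights; in a real model every row is a trainable parameter updated by gradient descent.

```python
# Embedding lookup: index -> row of the embedding matrix.
import random

random.seed(0)
vocab = ["h", "i", "!", "<pad>"]
dim = 4  # toy embedding dimension; real models use hundreds to thousands

# One learnable row per vocabulary entry (random here, trained in practice).
embedding_matrix = [[random.uniform(-1, 1) for _ in range(dim)]
                    for _ in vocab]

def embed(token: str):
    # Lookup, not computation: the token's index selects its vector.
    return embedding_matrix[vocab.index(token)]

assert len(embed("h")) == dim
assert embed("h") is embedding_matrix[0]
```

Because the lookup is just row selection, the same trained matrix can be reused across tasks, which is what makes pretrained embeddings transferable.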