Data Party THU
Sep 3, 2025 · Artificial Intelligence

Unlocking Large Model Secrets: Transformers, MoE, Fine‑Tuning, RAG & KV Caching

This article provides a comprehensive technical overview of today’s large‑model ecosystem, covering the Transformer architecture, Mixture‑of‑Experts extensions, five fine‑tuning methods, the evolution from traditional RAG to agentic RAG, classic agent design patterns, diverse text‑chunking strategies, and the KV‑cache optimization that accelerates inference.

Agentic AI · Fine‑tuning · KV cache
JD Tech
Jun 19, 2024 · Artificial Intelligence

Advances in Large AI Models: Prompt Engineering, RAG, Agents, Fine‑Tuning, Vector Databases and Knowledge Graphs

This article surveys the rapid expansion of large AI models, covering prompt engineering, structured prompts, retrieval‑augmented generation, AI agents, fine‑tuning strategies, vector database technology, knowledge graphs, function calling, and their collective role in moving toward artificial general intelligence.

AI Agent · Fine‑tuning
DataFunSummit
Aug 14, 2023 · Artificial Intelligence

State of GPT: A Programmer’s Guide to Large Language Model Fundamentals, Training, and Applications

This article gives programmers a comprehensive overview of large language models, including their evolution, core concepts, data pipelines, model architectures, training techniques such as 3D parallelism, supervised fine‑tuning, RLHF, and open‑source training recipes, along with the emerging application ecosystem, current challenges, and future directions.

Fine‑tuning · LLM applications · RLHF
JD Tech
Aug 4, 2023 · Artificial Intelligence

Deploying and Evaluating the Vicuna Open‑Source Large Language Model on a Single Machine

This article offers a step‑by‑step guide to deploying the Vicuna open‑source LLM on a single server, covering model preparation, environment setup, dependency installation, GPU and CUDA configuration, inference commands, performance evaluation, and a fine‑tuning attempt, while sharing practical observations and results.

Fine‑tuning · GPU · Inference
Rare Earth Juejin Tech Community
Jul 12, 2023 · Artificial Intelligence

Comprehensive Guide to Vision Transformer (ViT): Architecture, Patch Tokenization, Embedding, Fine‑tuning, and Performance

This article provides an in‑depth overview of the Vision Transformer (ViT), covering its Transformer‑based architecture, patch‑to‑token conversion, token and position embeddings, fine‑tuning strategies such as 2‑D interpolation of position embeddings, experimental results compared with CNNs, and the model's broader significance for multimodal AI research.

Computer Vision · Fine‑tuning · Patch Embedding