Wu Shixiong's Large Model Academy

We share large-model know-how on an ongoing basis, helping you master core skills (LLM, RAG, fine-tuning, deployment) from zero to job offer. Tailored for career-switchers, autumn-recruitment candidates, and anyone seeking a stable large-model position.

107 articles · 0 likes · 33 views · 0 comments
Recent Articles

Oct 24, 2025 · Artificial Intelligence

Can Large Language Models Truly Plan? Unpacking Agent Frameworks

This article explains why most LLM‑based agents only perform pseudo‑planning through prompts or hard‑coded loops, outlines when to rely on prompt‑driven versus program‑driven planning, compares popular frameworks such as ReAct, MRKL, BabyAGI, and AutoGPT, and clarifies what true autonomous planning would require.

Agent · Artificial Intelligence · AutoGPT
0 likes · 12 min read
Oct 23, 2025 · Artificial Intelligence

Why the Transformer Core Structure Is the Key to AI Interview Success

This article explains the fundamental purpose, architecture, and variants of the Transformer model—including Encoder‑Decoder, Encoder‑only, and Decoder‑only designs—while detailing how attention mechanisms work and why modern large‑language models favor the Decoder‑only approach, providing a concise framework for answering interview questions.

AI Interview · Encoder-Decoder · Large Language Model
0 likes · 10 min read
Oct 22, 2025 · Artificial Intelligence

Mastering LLM Training: A Step‑by‑Step Blueprint from Data to Alignment

This guide walks through the complete end‑to‑end process of training a large language model from scratch, covering data collection, cleaning, tokenization, pre‑training objectives and engineering, post‑training alignment methods, scaling laws, over‑fitting mitigation, and gradient‑stability techniques.

LLM · alignment · gradient stability
0 likes · 9 min read
Sep 28, 2025 · Artificial Intelligence

Can AI Automate the Entire Research Cycle? From Paper Reading to Code Reproduction

The author builds an AI‑driven end‑to‑end assistant that transforms a research paper into a structured reading note, generates reproducible code, runs experiments, summarizes results, and creates a report, demonstrating how large language models like Kimi K2 can streamline the entire paper‑to‑implementation workflow.

AI workflow · Claude Code · Kimi
0 likes · 9 min read
Sep 26, 2025 · Artificial Intelligence

Crack Large-Model Interviews: Master Positional Encoding, Residuals, LayerNorm & FFN

Preparing for a large-model interview? This guide reveals why interviewers probe seemingly minor components (positional encoding, residual connections, layer normalization, and feed-forward networks), explains each technique's purpose and variants, shows how to answer confidently, and adds practical tips and a learning roadmap to boost your chances.

Artificial Intelligence · FFN · Interview Tips
0 likes · 8 min read
Sep 25, 2025 · Artificial Intelligence

Master Self-Attention & Multi-Head Attention for Large Model Interviews

This guide breaks down the core logic, computation steps, formulas, and common interview questions about Self‑Attention and Multi‑Head Attention in Transformers, offering concrete explanations, dimensional examples, and practical answering techniques to help candidates ace large‑model algorithm interviews.

Interview Tips · Multi-Head Attention · Self-Attention
0 likes · 8 min read
Sep 19, 2025 · Artificial Intelligence

Master Parameter-Efficient Fine‑Tuning: LoRA & QLoRA Explained for Interviews

This article explains why full fine‑tuning of large models is impractical, introduces parameter‑efficient fine‑tuning (PEFT) with LoRA and QLoRA, provides mathematical foundations, implementation code, resource‑usage analysis, interview question templates, and practical deployment tips for real‑world AI projects.

LoRA · QLoRA · low-rank adaptation
0 likes · 24 min read
Sep 18, 2025 · Artificial Intelligence

How to Diagnose and Optimize RAG Systems When 30% of Answers Miss the Mark

This guide explains why RAG systems often produce off‑topic answers, outlines how to measure hit rate along with retrieval, reranking, and generation metrics, and provides step‑by‑step evaluation pipelines, code examples, real‑world case studies, and interview‑ready templates for diagnosing and optimizing each stage of the pipeline.

AI · Pipeline · RAG
0 likes · 18 min read