Wu Shixiong's Large Model Academy
Mar 19, 2026 · Artificial Intelligence

Making LLM Answers Trustworthy: Citation Attribution and Hallucination Detection

This article explains why simple prompt‑based citation is insufficient for Retrieval‑Augmented Generation, introduces a sentence‑level attribution pipeline that combines semantic similarity with NLI verification, and presents practical hallucination detection with structured JSON output to ensure answer reliability.
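The sentence‑level attribution idea the abstract describes can be sketched in a few lines: split the answer into sentences, score each against the retrieved passages, attach a citation when support is strong enough, and emit the result as structured JSON. This is a minimal illustration, not the article's implementation — a toy Jaccard word overlap stands in for the embedding‑similarity and NLI steps, and the function names and threshold are placeholders.

```python
import json
import re

def jaccard(a: str, b: str) -> float:
    """Toy lexical similarity; a real pipeline would use sentence
    embeddings plus an NLI entailment model instead."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def attribute(answer: str, passages: list[str], threshold: float = 0.3) -> str:
    """Sentence-level attribution: link each answer sentence to its
    best-matching passage, or flag it as potentially hallucinated."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]
    result = []
    for sent in sentences:
        scores = [jaccard(sent, p) for p in passages]
        best = max(range(len(passages)), key=lambda i: scores[i])
        supported = scores[best] >= threshold
        result.append({
            "sentence": sent,
            "citation": best if supported else None,  # index of supporting passage
            "score": round(scores[best], 3),
            "supported": supported,
        })
    return json.dumps(result, ensure_ascii=False, indent=2)
```

Sentences whose best score falls below the threshold get `"citation": null`, which is exactly the signal a downstream hallucination check would act on.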

Hallucination Detection · LLM reliability · NLI
10 min read
Tencent Cloud Developer
Oct 15, 2025 · Artificial Intelligence

Why LLMs Are Unreliable: The pⁿ Dilemma and Building Trustworthy AI‑Human Collaboration

The article explains that large language models are fundamentally probabilistic predictors, causing their success rate to drop exponentially with task complexity (the pⁿ dilemma), and proposes a systematic, human‑centered approach—using deterministic tools, narrowing prompt scope, and delivering incremental results—to create reliable AI‑human collaborative systems.
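The pⁿ dilemma is simple arithmetic: if each step of a task succeeds independently with probability p, an n‑step task succeeds end to end with probability pⁿ, which decays exponentially in n. A quick sketch (the per‑step probability 0.95 is an illustrative value, not a figure from the article):

```python
# If each step succeeds independently with probability p, an n-step task
# succeeds end-to-end with probability p**n (the "p^n dilemma").
def end_to_end_success(p: float, n: int) -> float:
    return p ** n

# Even a 95%-per-step model degrades quickly as tasks grow longer:
for n in (1, 10, 20, 50):
    print(n, round(end_to_end_success(0.95, n), 3))
```

At 95% per step, a 20‑step task already succeeds only about a third of the time, which motivates the article's remedies: deterministic tools, narrower prompts, and incremental delivery that shrink n.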

AI-human collaboration · LLM reliability · pⁿ dilemma
66 min read