AI Tech Publishing
Apr 20, 2026 · Artificial Intelligence

How Claude Code Achieves 92% Prompt Cache Hit Rate and Cuts Costs by 81% – A Deep Dive

This article explains the mechanics of prompt caching for large language models, breaks down static versus dynamic context, details KV-cache operation and its pricing, and shows how a 30-minute Claude Code programming session reached a 92% cache hit rate that reduced inference costs by 81%, concluding with three production-grade design rules.

AI agents · Anthropic API · Claude Code
0 likes · 13 min read
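The headline numbers in this summary can be checked with back-of-envelope arithmetic. As a hedged sketch (the 0.1x cache-read and 1.25x cache-write multipliers are assumed pricing ratios relative to the base input price, and misses are assumed to be cache writes):

```python
# Blended input-token cost under prompt caching, relative to the uncached price.
# Assumptions: cache reads cost ~0.1x base input, cache writes ~1.25x base input.
def blended_input_cost(hit_rate, read_mult=0.1, write_mult=1.25):
    """Cost per input token relative to the uncached price."""
    return hit_rate * read_mult + (1 - hit_rate) * write_mult

ratio = blended_input_cost(0.92)   # 0.92 * 0.1 + 0.08 * 1.25 = 0.192
savings = 1 - ratio                # ~0.81, i.e. roughly the 81% quoted above
```

Under these assumptions, a 92% hit rate yields a blended cost of about 19% of the uncached price, which lines up with the 81% reduction the article reports.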
AI Engineering
Mar 11, 2026 · Artificial Intelligence

Run Claude Code Locally with Qwen 3.5 to Skip Anthropic API Costs

This guide shows how to replace Anthropic's API by running a local Qwen 3.5 model with llama.cpp and pointing Claude Code at it via ANTHROPIC_BASE_URL; it covers hardware checks, build steps, model download, server launch, speed-fix tips, and usage instructions for secure, cost-free development.

Anthropic API · Claude Code · GPU acceleration
0 likes · 8 min read
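The redirection the guide describes can be sketched roughly as follows (the model file, port, and key value are illustrative assumptions; `llama-server` and the `ANTHROPIC_BASE_URL` override are the pieces the guide names):

```shell
# Serve a local GGUF model with llama.cpp's server (path and port are examples)
llama-server -m ./qwen3.5-instruct.gguf --port 8080 &

# Redirect Claude Code to the local endpoint instead of Anthropic's API
export ANTHROPIC_BASE_URL="http://localhost:8080"
export ANTHROPIC_API_KEY="local-dummy-key"   # local server ignores the key

# Start Claude Code as usual; requests now go to the local server
claude
```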
Old Zhang's AI Learning
Mar 7, 2026 · Artificial Intelligence

vLLM 0.17.0 Release: Full Qwen 3.5 Support and Anthropic API Compatibility

The vLLM 0.17.0 release brings FlashAttention 4 integration, a mature Model Runner V2, complete Qwen 3.5 series support, a one‑click performance‑mode flag, Anthropic API compatibility, advanced weight‑offloading, broader hardware support beyond NVIDIA, ASR model integration, and detailed upgrade and installation guidance.

ASR · Anthropic API · FlashAttention
0 likes · 12 min read
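A minimal serving sketch for the release described above (the model id is an illustrative placeholder; check the release notes for the exact supported names and flags):

```shell
# Install the release and serve a Qwen model (model id is a placeholder)
pip install -U "vllm>=0.17.0"
vllm serve Qwen/Qwen3.5-Instruct --port 8000

# The server exposes an OpenAI-compatible endpoint
curl http://localhost:8000/v1/models
```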
Old Zhang's AI Learning
Mar 4, 2026 · Artificial Intelligence

Unlock the Full Power of LM Studio for Local LLM Deployment

This article explores LM Studio’s evolution into a complete local AI development platform, detailing version 0.4’s architectural overhaul, headless daemon, parallel request handling, stateful REST API, UI refresh, and a suite of hidden developer features such as OpenAI- and Anthropic-compatible APIs, CLI tools, native SDKs, and the LM Link remote-model solution.

Anthropic API · CLI · LM Link
0 likes · 12 min read
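The headless-daemon workflow mentioned in the summary can be sketched with the `lms` CLI (the port is LM Studio's default; exact flags may differ by version, so treat this as an assumption and consult `lms --help`):

```shell
# Launch LM Studio's headless server daemon
lms server start

# List locally downloaded models
lms ls

# Query the OpenAI-compatible endpoint (1234 is the default port)
curl http://localhost:1234/v1/models
```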