PaperAgent
Jan 4, 2026 · Artificial Intelligence

How Sophia’s System 3 Turns LLM Agents into Persistent Learners

This article presents Sophia, a persistent agent framework that adds a "System 3" meta‑cognitive layer to LLM‑based agents, enabling identity continuity, self‑scheduled learning, real‑time self‑checks, and autonomous task generation. It validates these benefits through a 24‑hour continuous‑run experiment.

AI agents · Autonomous Agents · LLM
7 min read
Alibaba Cloud Developer
Aug 20, 2025 · Artificial Intelligence

Why AI Programming Needs Compiler Theory: From Prompt to Context Engineering

This article explores how formal language theory and compiler concepts provide a solid theoretical foundation for modern AI engineering practices such as Prompt Engineering, Context Engineering, and Anthropic's Think Tool, highlighting the trade‑offs between expressiveness and reliability and proposing a path toward more verifiable AI systems.

AI programming · Compiler Theory · Formal Language
15 min read
Baobao Algorithm Notes
Mar 21, 2025 · Artificial Intelligence

Unlocking LLM Reasoning: A Deep Dive into Post‑Training Techniques

This article surveys large language model post‑training: fine‑tuning methods (full, parameter‑efficient, LoRA variants, prompt tuning), domain‑adaptive tuning, reward modeling for reinforcement learning, process vs. outcome rewards, inference‑time enhancement strategies, dynamic compute allocation, and verifier‑augmented reasoning. It closes with open challenges and emerging research directions such as meta‑cognition, physical reasoning, and swarm intelligence.

LLM · meta-cognition · post-training
21 min read