190 Must-Read AI Agent Papers + 321 Google Implementation Cases – Free Resource Pack

The article provides a free compiled resource containing 190 essential AI Agent papers—from fundamentals to cutting‑edge topics—along with 321 Google‑released implementation cases and 500 open‑source agent applications, all with source code to help beginners and researchers quickly understand the field and reproduce results.

PaperAgent

Tree of Thoughts: Deliberate Problem Solving with Large Language Models

The paper proposes a reasoning framework that casts large‑language‑model inference as search over a tree of intermediate "thoughts". By exploring multiple reasoning paths with self‑evaluation, lookahead, and backtracking, it surpasses Chain‑of‑Thought (CoT) prompting and markedly raises GPT‑4's success rate on tasks such as the Game of 24, creative writing, and mini crossword puzzles.
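The search procedure described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: `propose` and `evaluate` are hypothetical stand‑ins for LLM calls, replaced here by hand‑written rules on a digit‑sum task.

```python
from typing import List

def propose(state: str) -> List[str]:
    # Candidate next "thoughts": extend the partial solution by one digit.
    return [state + d for d in "123456789"]

def evaluate(state: str, target: int) -> float:
    # Heuristic value ("lookahead"): how close the digit sum is to the target;
    # overshooting is a dead end and gets -inf.
    s = sum(int(c) for c in state)
    return -abs(target - s) if s <= target else float("-inf")

def tree_of_thoughts(target: int, breadth: int = 3, depth: int = 4) -> str:
    frontier = [""]  # root: empty partial solution
    for _ in range(depth):
        candidates = [c for s in frontier for c in propose(s)]
        # Self-evaluation + pruning: keep the `breadth` most promising
        # states; dropping a branch here is the backtracking step.
        candidates.sort(key=lambda s: evaluate(s, target), reverse=True)
        frontier = candidates[:breadth]
        for s in frontier:
            if sum(int(c) for c in s) == target:
                return s
    return frontier[0]
```

In the paper the proposer and evaluator are both the LLM itself, prompted to generate and score candidate thoughts; the breadth‑first structure and pruning are the same.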

Learning to Detect Objects from Multi‑Agent LiDAR Scans without Manual Labels

This work introduces DOtA, an unsupervised method that learns 3‑D object detection from multi‑agent LiDAR scans without any human annotations. It exploits the pose and shape information shared among collaborating agents to generate initial pseudo‑labels, uses multi‑scale bounding‑box encoding to separate high‑quality from low‑quality pseudo‑labels, and trains with a contrastive objective over the pseudo‑labels to steer the model toward correct features. DOtA significantly outperforms existing unsupervised 3‑D detection methods on the V2V4Real and OPV2V collaborative perception datasets and is more robust to localization noise.
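The cross‑agent corroboration idea behind the initial pseudo‑labels can be illustrated with a deliberately simplified 2‑D sketch (an assumption for illustration, not the paper's pipeline): each agent's detection centers are projected into a shared frame via its known pose, and only detections confirmed by at least one other agent are kept as pseudo‑labels.

```python
from math import cos, sin, hypot

def to_world(point, pose):
    # Rotate a 2-D detection center by the agent's yaw, then translate
    # by its position; pose = (tx, ty, yaw).
    x, y = point
    tx, ty, yaw = pose
    return (x * cos(yaw) - y * sin(yaw) + tx,
            x * sin(yaw) + y * cos(yaw) + ty)

def cross_agent_pseudo_labels(detections, poses, radius=1.0):
    # Keep only detection centers corroborated by another agent within
    # `radius`; uncorroborated detections are treated as noise.
    world = [[to_world(p, pose) for p in dets]
             for dets, pose in zip(detections, poses)]
    kept = []
    for i, pts in enumerate(world):
        others = [q for j, w in enumerate(world) if j != i for q in w]
        for p in pts:
            if any(hypot(p[0] - q[0], p[1] - q[1]) < radius for q in others):
                kept.append(p)
    return kept
```

The actual method operates on full 3‑D boxes and adds the multi‑scale encoding and contrastive stages on top of this agreement signal.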

AGENTGYM‑RL: An Open‑Source Framework to Train LLM Agents for Long‑Horizon Decision Making via Multi‑Turn RL

The authors present AgentGym‑RL, an open‑source framework for multi‑turn, long‑horizon reinforcement learning of large‑language‑model agents. It features a modular, decoupled architecture compatible with mainstream RL algorithms and introduces ScalingInter‑RL, a staged training strategy that first stabilizes short‑horizon interactions and then gradually extends the interaction length to deepen exploration without destabilizing training. Evaluated on 27 tasks across scenarios, the framework yields substantial performance gains, matching or surpassing commercial models such as OpenAI o3 and Gemini 2.5 Pro. All code and datasets are publicly released.
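The staged‑horizon idea can be sketched as a simple curriculum over the interaction‑length cap. This is a hedged sketch of the scheduling concept only; the function names, stage counts, and the `rl_update` callback are illustrative assumptions, not the framework's API.

```python
def scaling_inter_schedule(total_updates, start_horizon=4,
                           max_horizon=32, stages=4):
    # Yield (num_updates, max_turns) pairs: train first on short
    # interactions, then progressively allow longer horizons.
    per_stage = total_updates // stages
    horizon = start_horizon
    for _ in range(stages):
        yield per_stage, min(horizon, max_horizon)
        horizon *= 2

def train(agent, env, rl_update, total_updates=1000):
    # `rl_update(agent, env, max_turns=...)` stands in for any multi-turn
    # RL step (PPO, GRPO, ...); the schedule only controls the horizon cap.
    for num_updates, max_turns in scaling_inter_schedule(total_updates):
        for _ in range(num_updates):
            rl_update(agent, env, max_turns=max_turns)
```

The point of the staging is that early short‑horizon stages give dense, low‑variance learning signal before the agent is exposed to the full long‑horizon credit‑assignment problem.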

PlugMem: A Task‑Agnostic Plugin Memory Module for LLM Agents

The paper proposes SSD, a unified speculative‑decoding framework that turns the traditional sequential draft‑then‑verify process into parallel execution, eliminating the drafting overhead of a separate small model. Combined with the SAGUARO optimization algorithm, it delivers roughly a two‑fold speedup over mainstream inference engines while preserving generation quality.
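For background, the classic sequential draft‑then‑verify loop that SSD reworks can be sketched with toy deterministic models. `draft_next` and `target_next` are hand‑written stand‑ins for a small and a large model (an assumption for illustration); greedy speculative decoding of this kind provably reproduces the target model's output.

```python
def draft_next(ctx):
    # Cheap "draft model": toy deterministic rule (last token + 1, mod 10).
    return (ctx[-1] + 1) % 10

def target_next(ctx):
    # Expensive "target model": same rule, except it emits 0 after a 7.
    return 0 if ctx[-1] == 7 else (ctx[-1] + 1) % 10

def speculative_decode(prompt, steps=8, k=4):
    # Draft k tokens cheaply, verify them against the target model, keep
    # the longest agreeing prefix plus the target's own token at the
    # first disagreement. In real systems the k verifications happen in
    # a single batched forward pass.
    out = list(prompt)
    limit = len(prompt) + steps
    while len(out) < limit:
        ctx = list(out)
        draft = []
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        for t in draft:
            if len(out) >= limit:
                break
            expected = target_next(out)
            if t == expected:
                out.append(t)         # draft token accepted
            else:
                out.append(expected)  # rejected: take target's token
                break
    return out
```

The speedup comes from accepting several draft tokens per expensive verification pass; SSD's contribution, per the summary, is removing the sequential dependency between drafting and verifying.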

AI Agent Systems: Architectures, Applications, and Evaluation

This survey systematically reviews AI agent system architectures—including reasoning, planning, memory, and tool‑calling components—along with orchestration patterns such as single‑agent vs. multi‑agent and centralized vs. distributed coordination. It discusses deployment scenarios and analyzes key design trade‑offs like latency versus accuracy and autonomy versus controllability. The paper also summarizes evaluation methodologies and benchmark practices, highlighting open challenges such as tool‑action verification, scalable memory management, and decision interpretability.
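The single‑agent decomposition the survey describes (reasoning, memory, tool calling) reduces to a small control loop. The sketch below is a minimal illustration under stated assumptions: `llm_decide` is a hypothetical stand‑in for a model call, replaced by a hard‑coded rule, and the only tool is a toy calculator.

```python
def llm_decide(goal, memory):
    # Reasoning step: choose the next action from the goal and
    # accumulated observations (here, a trivial hard-coded policy).
    if "result" in memory:
        return ("finish", memory["result"])
    return ("call_tool", goal)

def calculator_tool(expr):
    # Toy tool: evaluate a basic arithmetic expression, no builtins.
    return eval(expr, {"__builtins__": {}})

def run_agent(goal, max_steps=5):
    memory = {}  # the survey flags scalable memory management as an open problem
    for _ in range(max_steps):
        action, arg = llm_decide(goal, memory)
        if action == "finish":
            return arg
        memory["result"] = calculator_tool(arg)  # tool-calling step
    return memory.get("result")
```

Even this toy loop exhibits the trade‑offs the survey discusses: each tool call adds latency, and verifying the tool's action before trusting its result is exactly the open challenge of tool‑action verification.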

In addition to the paper summaries, the package lists 500 open‑source AI Agent applications, providing direct links to their repositories to help readers experiment with real implementations.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

LLM, open source, AI Agent, Memory, Reinforcement Learning, Multi-Agent, Research Papers
Written by PaperAgent

Daily updates, analyzing cutting-edge AI research papers