Final Lesson: Build a Fully Working RSS News Brief Agent
In this final lesson of a nine‑day Agent engineering series, the author integrates the full Agent Loop, tools, MCP, skills, RAG, context handling, multi‑turn dialogue, and multi‑agent coordination to create a runnable RSS news‑briefing Agent that fetches feeds in parallel, filters content with LLMs, summarizes articles, and outputs a markdown report.
In the previous lesson the multi‑Agent framework was extended so that a Coordinator can schedule tasks, automatically decompose complex tasks, and merge results while sub‑Agents communicate only through the Coordinator.
Why RSS
"We should bring back RSS - it's open, pervasive, hackable." – Andrej Karpathy
RSS is considered open (no login, no algorithmic recommendation), high‑quality (author‑published content), and controllable (no filter bubbles). The author builds a daily AI‑related news brief using RSS feeds.
Overall Architecture
Four specialized agents are built on the MiniManus framework:
Fetcher : parallel RSS fetching.
Filter : LLM‑based relevance judgment.
Summarizer : generates article summaries.
Reporter : assembles a markdown brief.
Workflow:
Coordinator receives a task and uses an LLM to decide whether decomposition is required.
If required, the LLM splits the task into sub‑tasks.
The LLM selects the most suitable agent for each sub‑task.
Fetcher retrieves multiple RSS sources concurrently.
Filter lets an LLM judge relevance.
Coordinator merges sub‑task results into the final brief.
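The dispatch flow above can be sketched as follows. This is a minimal illustration, not the MiniManus implementation: `llm_decompose` and `llm_pick_agent` stand in for the real LLM calls that decide on decomposition and agent selection.

```python
# Minimal sketch of the Coordinator dispatch flow described above.
# llm_decompose / llm_pick_agent are stand-ins for real LLM calls.

def llm_decompose(task: str) -> list[str]:
    # A real implementation would ask the LLM whether the task needs
    # splitting and return its sub-tasks; here we return a fixed plan.
    return [f"fetch raw data for: {task}",
            f"filter relevant items for: {task}",
            f"summarize items for: {task}",
            f"assemble report for: {task}"]

def llm_pick_agent(subtask: str) -> str:
    # A real implementation would let the LLM choose the best agent;
    # here we match keywords in the sub-task description.
    for keyword, name in [("fetch", "Fetcher"), ("filter", "Filter"),
                          ("summarize", "Summarizer"), ("report", "Reporter")]:
        if keyword in subtask:
            return name
    return "Fetcher"

class Coordinator:
    def __init__(self):
        self.agents = {}  # name -> handler callable

    def register(self, name, handler):
        self.agents[name] = handler

    def dispatch(self, task: str) -> str:
        results = []
        for subtask in llm_decompose(task):
            name = llm_pick_agent(subtask)
            results.append(self.agents[name](subtask))
        # Merge sub-task results into the final brief
        return "\n".join(results)
```

The key property is that sub-agents never talk to each other directly; every result flows back through `dispatch`, which does the merging.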
Tool System
rss_fetch : fetch the RSS article list (parallel).
rss_filter : filter relevant articles (LLM judgment).
rss_summarize : generate a summary (LLM call).
rss_report : generate the markdown report.
search : fallback search tool.
terminate : end the task.
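The interface these tools share can be sketched roughly as follows. This is a minimal illustration consistent with the `BaseTool`/`execute` signature shown later in the article; the registry shape and `TerminateTool` body are assumptions.

```python
from abc import ABC, abstractmethod

# Minimal sketch of the tool pattern: each tool exposes a name and an
# execute() returning a (success, output) tuple the Agent can act on.

class BaseTool(ABC):
    @property
    @abstractmethod
    def name(self) -> str: ...

    @abstractmethod
    def execute(self, **kwargs) -> tuple[bool, str]: ...

class TerminateTool(BaseTool):
    @property
    def name(self) -> str:
        return "terminate"

    def execute(self, reason: str = "", **kwargs) -> tuple[bool, str]:
        # Signals the Agent Loop to stop
        return True, f"task terminated: {reason}"

# A registry maps tool names to instances so the Agent can look them up
TOOL_REGISTRY = {t.name: t for t in [TerminateTool()]}
```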
Technical Choices
RSS parsing with feedparser + ThreadPoolExecutor (5‑thread pool) for parallel fetching.
Filtering delegated to an LLM agent instead of hard‑coded keywords.
Summarization performed by the Multi‑Agent system from Lesson 08.
Markdown output for easy copy‑paste.
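The fetch-side choices above combine into a simple pattern: submit one job per feed to a small thread pool and collect results as they complete. A minimal sketch, with `fetch_feed` stubbed out so it runs without network access; the real tool would call `feedparser.parse(url)` inside it.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_feed(url: str) -> list[dict]:
    # Stub for illustration. A real implementation would do roughly:
    #   parsed = feedparser.parse(url)
    #   return [{"title": e.title, "link": e.link} for e in parsed.entries]
    return [{"title": f"article from {url}", "link": url}]

def fetch_all(urls: list[str], max_workers: int = 5) -> list[dict]:
    # One worker per in-flight feed; slow feeds don't block fast ones
    items = []
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = {executor.submit(fetch_feed, u): u for u in urls}
        for future in as_completed(futures):
            items.extend(future.result())  # a failed fetch would raise here
    return items
```

Because `as_completed` yields futures in finish order, the article list is unordered across feeds; the Filter and Reporter stages impose ordering later.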
Core Code Implementation
RSS Parser (parallel fetching)
from dataclasses import dataclass
from typing import Optional
from concurrent.futures import ThreadPoolExecutor, as_completed
import xml.etree.ElementTree as ET

@dataclass
class RSSItem:
    title: str
    link: str
    published: Optional[str] = None
    summary: Optional[str] = None
    source: Optional[str] = None

class RSSParser:
    def __init__(self, opml_path: str, max_feeds: int = 10):
        self.opml_path = opml_path
        self.max_feeds = max_feeds
        self.feeds = []

    def load_feeds(self) -> list[dict]:
        # Parse the OPML file and collect every outline that has an xmlUrl
        root = ET.parse(self.opml_path).getroot()
        for outline in root.iter("outline"):
            if outline.get("xmlUrl"):
                self.feeds.append({"title": ..., "url": outline.get("xmlUrl")})
        return self.feeds[:self.max_feeds]

    def fetch_all(self) -> list[RSSItem]:
        # Fetch all feeds in parallel with a 5-thread pool;
        # fetch_feed (not shown) parses a single feed with feedparser
        all_items = []
        with ThreadPoolExecutor(max_workers=5) as executor:
            future_to_feed = {executor.submit(self.fetch_feed, f["url"]): f
                              for f in self.feeds}
            for future in as_completed(future_to_feed):
                all_items.extend(future.result())
        return all_items

RSS Tool Wrapper
class RSSFilterTool(BaseTool):
    @property
    def name(self) -> str:
        return "rss_filter"

    def execute(self, articles: str = "", **kwargs) -> tuple[bool, str]:
        # Pass the articles straight through; the Agent uses the LLM to judge relevance
        return True, articles

Four Specialized Agents
def create_multi_agent_system(cfg) -> Coordinator:
    coordinator = Coordinator()
    for name, specialty in [("Fetcher", "RSS fetching"),
                            ("Filter", "content filtering"),
                            ("Summarizer", "summary generation"),
                            ("Reporter", "brief generation")]:
        spec = AgentSpec(name=name, specialty=specialty,
                         description=f"Specializes in {specialty}")
        agent = AgentCore(spec, cfg, all_tools, coordinator)
        coordinator.register(agent)
    return coordinator

Main Entry
def main():
    # args come from an argparse parser defined earlier (omitted here)
    args = parser.parse_args()
    cfg = load_config_from_env()
    coordinator = create_multi_agent_system(cfg)
    result = coordinator.dispatch(args.task)
    print(result)

Running Effect
$ uv run python 09_rss_news/main.py
=====================================================================
RSS News Agent started
=====================================================================
Registered Agents: ['Fetcher (RSS fetching)', 'Filter (content filtering)', 'Summarizer (summary generation)', 'Reporter (brief generation)']
[Coordinator] Received main task: generate today's AI news brief
[Coordinator] Task decomposed into 7 sub-tasks
[Coordinator] Executing sub-task 1/7: search and collect today's raw AI news data...
[Fetcher] Starting task...
[RSS] Loading source: simonwillison.net
[RSS] Loading source: jeffgeerling.com
... (loading multiple sources in parallel)
[RSS] Jeff Geerling - 10 articles
[RSS] Simon Willison's Weblog - 10 articles
[Fetcher] Task complete
[Coordinator] Sub-task 1 complete
[Coordinator] Executing sub-task 2/7: filter high-value news and remove duplicates...
[Filter] Starting task...
[Filter] Task complete
[Coordinator] Executing sub-task 3/7: extract the key points of each news item...
[Summarizer] Starting task...
[Summarizer] Task complete
...

Core Technical Highlights
Parallel fetching : ThreadPoolExecutor fetches RSS sources concurrently, greatly improving speed.
LLM task decomposition : Coordinator uses an LLM to decide if a task needs splitting and performs the split.
Agent‑based filtering : An LLM‑driven agent judges article relevance instead of hard‑coded keywords.
Cooperative division of labor : Each agent performs a single responsibility coordinated by the Coordinator.
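The agent-based filtering highlight boils down to prompting the LLM with numbered titles and parsing back the indices it deems relevant. A minimal sketch, with `ask_llm` as a stand-in for the real model call and the prompt wording as an assumption:

```python
import re

def build_filter_prompt(titles: list[str], topic: str) -> str:
    # Ask the LLM to answer with indices only, which is easy to parse
    numbered = "\n".join(f"{i}. {t}" for i, t in enumerate(titles))
    return (f"Which of these articles are about {topic}? "
            f"Reply with the matching numbers, comma-separated.\n{numbered}")

def parse_indices(reply: str, n: int) -> list[int]:
    # Tolerant parse: pull out integers, drop any out-of-range values
    return [int(m) for m in re.findall(r"\d+", reply) if int(m) < n]

def filter_articles(titles: list[str], topic: str, ask_llm) -> list[str]:
    reply = ask_llm(build_filter_prompt(titles, topic))
    return [titles[i] for i in parse_indices(reply, len(titles))]
```

Compared with hard-coded keywords, this trades one extra LLM call per batch for judgment that survives paraphrased or novel headlines.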
Sample Output
# AI Agent Daily Brief
**Date**: February 16, 2026
---
### 1. Andrej Karpathy: The Future of AI Agents
**Source**: simonwillison.net
Karpathy discusses trends in AI Agents, arguing that autonomous agents will be the next big thing.
[Original article](https://simonwillison.net/...)
---
### 2. Building AI Agents with MCP
**Source**: overreacted.io
An introduction to building AI Agents with the MCP protocol to standardize tool calling.
[Original article](https://overreacted.io/...)
---
## Subscription Notes
- RSS feeds come from HN's most popular blogs of 2025
- AI/Agent-related content is fetched and filtered automatically every day
- Generated automatically by the MiniManus Agent
---
_Generated automatically by an AI Agent_

Advanced Extensions
Daily Automatic Execution
# crontab -e
0 8 * * * cd /path/to/exercise && uv run python 09_rss_news/main.py

Push to WeChat / Feishu
def notify_wechat(report: str):
    """Push the brief to WeChat"""
    # Send via an Enterprise WeChat webhook
    pass

def notify_feishu(report: str):
    """Push the brief to Feishu"""
    # Send via a Feishu webhook
    pass

Extend RSS Sources
Modify tools/registry.py to point OPML_PATH to a custom OPML file.
OPML_PATH = str(
    Path(__file__).resolve().parent.parent.parent
    / ".."
    / "news"
    / "your-custom-feeds.opml"
)

Repository
Source code: https://github.com/HUANGLIWEN/mini-manus