Why a Single For Loop Powers BU’s Open‑Source Agent Framework

The Browser Use (BU) team has open‑sourced bu‑agent‑sdk, a minimal LLM agent framework that treats the agent as a simple for loop and adds an explicit done tool, context compression, ephemeral messages, and a unified LLM interface, enabling flexible, low‑overhead AI applications.

AI Engineering

Why the framework matters

Most agent frameworks fail because the action space is incomplete. bu‑agent‑sdk flips this by minimizing abstraction and giving the model maximum freedom.

Done Tool Pattern

Traditional frameworks stop when no tool is called, causing agents to quit prematurely. bu‑agent‑sdk requires an explicit done tool, illustrated by the following code:

from bu_agent_sdk import Agent, TaskComplete, tool  # top-level import path assumed

@tool("Signal completion")
async def done(message: str) -> str:
    raise TaskComplete(message)

agent = Agent(
    llm=llm,
    tools=[..., done],
    require_done_tool=True,  # autonomous mode
)

This simple pattern solves the core problem of reliably detecting task completion.
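Under the hood, the agent loop can simply catch the exception raised by done. The following is a minimal sketch of that mechanism, assuming hypothetical run_llm and execute_tool helpers; it is not the SDK's actual internals:

```python
class TaskComplete(Exception):
    """Raised by the done tool to end the loop with a final message."""

def run_agent(run_llm, execute_tool, max_steps=20):
    """One for loop: ask the model for a tool call, run it, repeat."""
    messages = []
    for _ in range(max_steps):              # the single for loop
        tool_call = run_llm(messages)       # model must pick a tool
        try:
            result = execute_tool(tool_call)
        except TaskComplete as completion:
            return str(completion)          # explicit, unambiguous exit
        messages.append(result)
    raise RuntimeError("max steps reached without calling done()")
```

Because exit happens only through the exception, "no tool called" is never silently treated as success.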

Ephemeral Messages

Large outputs such as browser state or screenshots quickly fill the context. The SDK keeps only the most recent N large outputs, e.g.:

@tool("Get browser state", ephemeral=3)  # keep last 3
async def get_state() -> str:
    return massive_dom_and_screenshot

This allows long‑running tasks without context overflow.
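A sketch of how such trimming might work: older outputs of the ephemeral tool are replaced with a short placeholder while the newest N survive verbatim. Names here are illustrative, not the SDK's internals:

```python
def trim_ephemeral(history, tool_name, keep=3):
    """history: list of (tool, output) pairs; returns a trimmed copy.

    Outputs of `tool_name` older than the newest `keep` are replaced
    with a placeholder so the context stops growing without bound.
    """
    indices = [i for i, (t, _) in enumerate(history) if t == tool_name]
    stale = set(indices[:-keep])  # everything but the newest `keep`
    return [
        (t, "[elided: stale output]" if i in stale else out)
        for i, (t, out) in enumerate(history)
    ]
```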

Automatic Context Compression

When token usage approaches the model’s context limit, the SDK automatically summarizes early dialogue:

from bu_agent_sdk.agent import CompactionConfig

agent = Agent(
    llm=llm,
    tools=tools,
    compaction=CompactionConfig(threshold_ratio=0.80),
)

The 80% threshold leaves enough headroom for the model to keep acting before the window fills.
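The trigger logic can be sketched as follows; count_tokens, summarize, and the context_limit default are illustrative assumptions, not SDK internals:

```python
def maybe_compact(messages, count_tokens, summarize,
                  context_limit=200_000, threshold_ratio=0.80):
    """Summarize old turns once usage crosses threshold_ratio of the window."""
    used = sum(count_tokens(m) for m in messages)
    if used < threshold_ratio * context_limit or len(messages) <= 4:
        return messages                        # plenty of headroom
    head, tail = messages[:-4], messages[-4:]  # keep recent turns verbatim
    return [summarize(head)] + tail
```

Keeping the most recent turns verbatim matters: the model needs exact tool results for the step it is currently taking, while older turns only need to survive as a summary.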

Unified LLM Interface

The SDK supports three major providers with roughly 300 lines of code each:

from bu_agent_sdk.llm import ChatAnthropic, ChatOpenAI, ChatGoogle

agent = Agent(llm=ChatAnthropic(model="claude-sonnet-4-20250514"), tools=tools)
agent = Agent(llm=ChatOpenAI(model="gpt-4o"), tools=tools)
agent = Agent(llm=ChatGoogle(model="gemini-2.0-flash"), tools=tools)

This makes swapping models straightforward for comparison.

Practical Use

Matt Shumer showed that changing three environment variables lets the SDK run with any model supported by OpenRouter. Community member Robert Lukoszko verified that Gemini Flash works, demonstrating the framework’s flexibility.

Dependency Injection

The SDK’s injection system mirrors FastAPI’s design, providing type‑safe, testable components:

from typing import Annotated
from bu_agent_sdk import Depends

def get_db():
    return Database()  # Database stands in for your application's own data layer

@tool("Query users")
async def get_user(id: int, db: Annotated[Database, Depends(get_db)]) -> str:
    return await db.find(id)

It also emits detailed events (ToolCallEvent, ToolResultEvent, FinalResponseEvent) for UI or logging.

Claude‑style Sandbox in <100 lines

A sandbox context restricts file operations to a root directory, preventing out‑of‑bounds access:

from dataclasses import dataclass
from pathlib import Path

@dataclass
class SandboxContext:
    """All file ops stay inside root_dir"""
    root_dir: Path
    working_dir: Path

    def resolve_path(self, path: str) -> Path:
        resolved = (self.working_dir / path).resolve()
        resolved.relative_to(self.root_dir)  # raises ValueError if out of bounds
        return resolved
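A usage sketch of the guard: in-sandbox paths resolve normally, while traversal attempts raise. The context class is repeated here so the snippet runs standalone, and the /srv/agent root is hypothetical:

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class SandboxContext:
    """All file ops stay inside root_dir (repeated for a standalone snippet)."""
    root_dir: Path
    working_dir: Path

    def resolve_path(self, path: str) -> Path:
        resolved = (self.working_dir / path).resolve()
        resolved.relative_to(self.root_dir)  # raises ValueError if out of bounds
        return resolved

root = Path("/srv/agent")  # hypothetical sandbox root
ctx = SandboxContext(root_dir=root, working_dir=root / "work")

ctx.resolve_path("notes.txt")  # fine: stays inside /srv/agent
try:
    ctx.resolve_path("../../etc/passwd")  # escapes the root
except ValueError:
    print("blocked")
```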

The toolset includes bash, file I/O, search, todo, and done, forming a complete programming environment.

Takeaway

The project shows that when LLMs are strong, the most effective framework is the simplest: a full action space, a single for‑loop, an explicit exit mechanism, and context management. “The less you build, the more it works.”

Project URL: https://github.com/browser-use/agent-sdk

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Python · LLM · open source · agent framework · context compression · done tool · ephemeral messages
Written by

AI Engineering

Focused on cutting‑edge product and technology information and practical experience sharing in the AI field (large models, MLOps/LLMOps, AI application development, AI infrastructure).
