AgentRun CLI v0.1.0 Open‑Source: Run Your Hosted Agent with a Single Command
This article introduces the open-source AgentRun CLI v0.1.0: how a single command launches a hosted Agent, the CLI's core features, its declarative YAML API and six unified command groups, and how the new Python SDK integrates for full lifecycle management of Agentic AI workloads.
Quick Start
Step 1 – Install the CLI
curl -fsSL https://raw.githubusercontent.com/Serverless-Devs/agentrun-cli/main/scripts/install.sh | sh
Windows users can run irm .../install.ps1 | iex. The installer fetches the latest release from GitHub, verifies the SHA-256 checksum, and places the binary in $HOME/.local/bin. Verify the installation with ar --version.
Step 2 – Configure credentials
ar config set access_key_id LTAI5t...
ar config set access_key_secret ***
ar config set account_id 1234567890
ar config set region cn-hangzhou
Use --profile to manage multiple environments (e.g., --profile staging). Resolution order: command-line arguments > profile > environment variables.
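The documented precedence (command-line arguments > profile > environment variables) can be sketched as a simple lookup chain. This is an illustration only, not the CLI's actual implementation; in particular, the AGENTRUN_* environment-variable naming is an assumption:

```python
import os

def resolve_credential(key, cli_args, profile_config):
    """Resolve one credential following the documented precedence:
    command-line arguments > profile > environment variables."""
    if cli_args.get(key) is not None:          # 1. an explicit flag always wins
        return cli_args[key]
    if profile_config.get(key) is not None:    # 2. the active --profile's config
        return profile_config[key]
    # 3. environment variable as the last fallback
    # (the AGENTRUN_* naming here is hypothetical)
    return os.environ.get(f"AGENTRUN_{key.upper()}")
```

For example, with `--profile staging` loaded, a `region` set on the command line would still override the profile's value, while a bare `ar` invocation would fall through to the environment.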
Step 3 – Run an Agent
$ ar super-agent run --prompt "You are a Python expert"
Creating super agent: super-agent-tmp-20260424140912 ...
Ready. Type your message (/help for commands).
> Write a quicksort
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    mid = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + mid + quicksort(right)
> /exit
The CLI reads local credentials, selects a ModelService (default or via --model-service), creates the super-agent on the server, opens an SSE stream for REPL interaction, and records the conversation ID in ~/.agentrun/super-agent-state.json for later reuse.
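The state-file handoff described above can be sketched as follows. The exact schema of ~/.agentrun/super-agent-state.json is not documented here, so the two-field layout below is an assumption for illustration:

```python
import json
from pathlib import Path

# Location the article names for persisted REPL state
STATE_FILE = Path.home() / ".agentrun" / "super-agent-state.json"

def save_state(agent_name, conversation_id, path=STATE_FILE):
    """Persist the last conversation so a later session can resume it.
    The {"agent": ..., "conversation_id": ...} layout is hypothetical."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps({"agent": agent_name,
                                "conversation_id": conversation_id}))

def load_state(path=STATE_FILE):
    """Return the saved state, or None on a first run."""
    if not path.exists():
        return None
    return json.loads(path.read_text())
```

A tool that wants to continue the last REPL conversation would call load_state() and pass the stored conversation ID back on the next invocation.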
Core v0.1.0 Capabilities
ar super-agent run: create a hosted Agent and enter a REPL.
Kubernetes‑style YAML: ar sa apply -f superagent.yaml enables idempotent deployment.
Multi‑profile configuration with --profile for isolated environments.
Rich output formats – json (default), table, yaml, quiet – friendly to pipelines.
Cross‑platform binary built with PyInstaller (Linux, macOS, Windows, x86_64 & arm64).
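The pipeline-friendly output modes listed above can be illustrated with a minimal renderer. This is a simplified sketch, not the CLI's actual formatting code:

```python
import json

def render(resources, output="json"):
    """Render a list of {"name": ..., "status": ...} dicts in one of the
    CLI's output modes. `quiet` prints bare names, one per line, which is
    what lets shell pipelines capture IDs, e.g.
    SB=$(ar sandbox create ... --output quiet)."""
    if output == "quiet":
        return "\n".join(r["name"] for r in resources)
    if output == "json":
        return json.dumps(resources, indent=2)
    if output == "table":
        header = f"{'NAME':<20}{'STATUS':<10}"
        rows = [f"{r['name']:<20}{r['status']:<10}" for r in resources]
        return "\n".join([header] + rows)
    raise ValueError(f"unknown output format: {output}")
```

The design choice matters for scripting: json is machine-parseable by default, while quiet makes command substitution trivial without any jq-style post-processing.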
Declarative API – Managing Agents with YAML
Typical superagent.yaml:
apiVersion: agentrun/v1
kind: SuperAgent
metadata:
  name: my-helper
  description: "My super Agent"
spec:
  prompt: "You are my super Agent assistant, help me with any task"
  tools:
    - mcp-time-sa
  skills:
    - skill-wechat-article-search
  sandboxes: []
  workspaces: []
  subAgents:
    - agent-research-assistant
Common commands:
# Create or update (idempotent)
ar sa apply -f superagent.yaml
# Dry‑run validation
ar sa apply -f superagent.yaml --dry-run
# Render locally without contacting the server (useful for CI)
ar sa render -f superagent.yaml
# Interactive REPL
ar sa chat my-helper
# One‑off invocation with text‑only output
ar sa invoke my-helper -m "Explain closures" --text-only | tee answer.txt
The apply command decides between create and update based on metadata.name, ensuring convergence in CI pipelines. The render command validates the YAML schema without server interaction.
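The create-or-update decision that makes apply idempotent can be sketched as below. The create/update callbacks stand in for the real API calls, which are not shown here:

```python
def apply(manifest, existing_names, create, update):
    """Idempotent apply in the style of `ar sa apply -f`: look up
    metadata.name and converge by either creating or updating.
    `create` and `update` are placeholder callbacks for the server API."""
    name = manifest["metadata"]["name"]
    if name in existing_names:
        update(name, manifest["spec"])
        return "updated"
    create(name, manifest["spec"])
    return "created"
```

Because the outcome depends only on whether metadata.name already exists, running the same manifest twice in CI is safe: the first run creates, every later run updates toward the same desired state.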
Six Unified Resource Groups
config : access‑credential and multi‑profile environment management.
model : register and manage ModelService (DashScope, DeepSeek, OpenAI, private deployments).
sandbox : sandbox management – file system, process, runtime, template, browser automation.
tool : MCP and FunctionCall native tool management.
skill : lifecycle of platform skill packages and local scan/load/exec.
super‑agent : run, CRUD, declarative deployment, and conversation management for Super Agents.
All groups follow the ar <group> <action> pattern, supporting standard verbs such as list, get, create, update, and delete. Example high‑frequency commands:
# Register a model service
ar model create --name svc-tongyi --provider dashscope --api-key $DASHSCOPE_API_KEY
# Create a Python code‑interpreter sandbox and execute a snippet
SB=$(ar sandbox create --template py-default --type CodeInterpreter --output quiet)
ar sandbox exec "$SB" --code "import pandas as pd; print(pd.__version__)"
# Register an external MCP tool
ar tool create --name mcp-time-sa --type mcp --endpoint https://time-mcp.example.com
# Upload a local skill package
ar skill upload ./skills/wechat-article-search
# Inspect a specific conversation
ar sa conv get my-helper conv-9f8e7d6c-xxx
SDK Integration – SuperAgentClient
The Python SDK provides SuperAgentClient for defining the full Agent lifecycle in code. The example (quick_start_super_agent.py) demonstrates asynchronous creation, two-stage streaming invocation, conversation reuse, and CRUD operations:
import asyncio
from agentrun.super_agent import SuperAgentClient

async def main():
    client = SuperAgentClient()
    # 1. Create a super agent
    agent = await client.create_async(
        name="my-helper",
        description="A super agent created from SDK",
        prompt="You are a helpful assistant.",
        tools=["mcp-time-sa"],
        skills=[],
        sandboxes=[],
        agents=[],
    )
    print(f"Created: {agent.name}")
    # 2. First invocation – returns conversation_id immediately
    stream = await agent.invoke_async(
        messages=[{"role": "user", "content": "What time is it now?"}]
    )
    saved_conv_id = stream.conversation_id
    async for event in stream:
        print(f"  [{event.event}] {event.data}")
    # 3. Continue the same conversation
    stream2 = await agent.invoke_async(
        messages=[{"role": "user", "content": "And what can you do?"}],
        conversation_id=saved_conv_id,
    )
    async for event in stream2:
        print(f"  [{event.event}] {event.data}")
    # 4. List and delete conversations
    async for conv in await agent.list_conversations_async():
        print(f"- {conv.conversation_id} title={conv.title!r}")
    await agent.delete_conversation_async(saved_conv_id)
    # 5. Update and delete the agent resource
    await client.update_async("my-helper", prompt="You are a concise assistant.")
    await client.delete_async("my-helper")

asyncio.run(main())
The SDK separates long-running streaming calls (asynchronous) from short-lived CRUD operations (available in both sync and async forms). invoke_async returns the conversation_id immediately, allowing callers to persist it before processing the SSE stream, which simplifies concurrent usage and keeps client objects stateless.
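The conversation_id-first contract can be illustrated with a toy async stream. MockStream is hypothetical and only mimics the documented behavior (the ID is readable before any events are consumed); the real SDK stream object differs:

```python
import asyncio

class MockStream:
    """Toy stand-in for the SDK's invocation stream: conversation_id is
    available before any SSE events are consumed, so callers can persist
    it first and only then iterate."""
    def __init__(self, conversation_id, events):
        self.conversation_id = conversation_id
        self._events = events

    def __aiter__(self):
        self._it = iter(self._events)
        return self

    async def __anext__(self):
        try:
            return next(self._it)
        except StopIteration:
            raise StopAsyncIteration

async def demo():
    stream = MockStream("conv-123", ["chunk-1", "chunk-2"])
    saved = stream.conversation_id      # persist BEFORE consuming the stream
    chunks = [e async for e in stream]  # then drain the SSE events
    return saved, chunks
```

Persisting the ID up front means a crash mid-stream still leaves the caller able to resume or inspect the conversation later.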
Collaboration Model Between CLI and SDK
Development & debugging : use ar sa run to iterate on prompts and tools, then lock the configuration into a YAML file.
Release & deployment : run ar sa apply -f superagent.yaml in CI to provision the entire Agent stack.
Production usage : backend services invoke SuperAgentClient, store the returned conversation_id in business databases.
Online troubleshooting : ops staff query conversations with ar sa conv list my-helper and ar sa conv get … for detailed traceability.
Future Roadmap
v0.1.0 is the first generally available release. Planned enhancements include expanding the official Skills marketplace, enriching the subAgents primitive for higher-order multi-Agent orchestration, and building an integrated evaluation pipeline covering offline regression tests, online tracing, and overall quality metrics.
Getting Started
curl -fsSL https://raw.githubusercontent.com/Serverless-Devs/agentrun-cli/main/scripts/install.sh | sh && ar super-agent run
For issues or suggestions, open an Issue on GitHub.
Open‑source repositories
AgentRun CLI: https://github.com/Serverless-Devs/agentrun-cli
AgentRun Python SDK: https://github.com/Serverless-Devs/agentrun-sdk-python
Alibaba Cloud Native
We publish cloud-native tech news, curate in-depth content, host regular events and live streams, and share Alibaba product and user case studies. Join us to explore and share the cloud-native insights you need.