How I Built a Telegram AI Coding Bot (FakeClawBot) Using OpenCode

This article walks through building a Telegram bot that leverages OpenCode's Server API to provide full AI coding assistance, covering setup, multi‑model integration, core architecture, common pitfalls, and extensible features, all in under 900 lines of Python.

OpenCode server mode

Running opencode serve --port 4096 starts an HTTP API server that exposes all Claude Code‑like functions (session creation, message exchange, command execution, file management) as REST endpoints.
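
Once the server is up, any HTTP client can drive it directly. A minimal sketch using httpx (the /session endpoints are the ones this bot relies on; the exact response shape, an "id" field here, is an assumption):

# Minimal sketch of driving the OpenCode server directly with httpx.
# The /session endpoints are the ones used later in this article; the
# response field name ("id") is an assumption.
import asyncio
import httpx

async def main():
    async with httpx.AsyncClient(base_url="http://127.0.0.1:4096", timeout=120) as client:
        resp = await client.post("/session")          # create a new session
        resp.raise_for_status()
        session_id = resp.json().get("id")            # assumed field name

        # Send a prompt and print the raw reply payload
        body = {"parts": [{"type": "text", "text": "List the files in this project"}]}
        reply = await client.post(f"/session/{session_id}/message", json=body)
        print(reply.json())

asyncio.run(main())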

Architecture of FakeClawBot

The bot consists of two Python modules: bot.py (~660 lines) holds the Telegram bot logic, and opencode_client.py (~220 lines) wraps the OpenCode HTTP API.

Telegram User → Telegram Bot API → bot.py → OpenCode Server (:4096) → Model Provider (Claude / Gemini / Kimi / GLM …)

Step 1 – Prepare the environment

Install OpenCode (Homebrew or npm):

# macOS
brew install opencode-ai/tap/opencode

# npm
npm install -g opencode-ai

Obtain a Telegram Bot Token from @BotFather.

Clone the project and install Python dependencies:

git clone https://github.com/tjxj/fakeclawbot.git
cd fakeclawbot
pip install -r requirements.txt

Copy .env.example to .env and set TELEGRAM_BOT_TOKEN and OPENCODE_SERVER_URL (e.g., http://127.0.0.1:4096).
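
A minimal sketch of how these values might be read at startup (assuming the python-dotenv package; the actual loading code in bot.py may differ):

# Hypothetical sketch: read the .env configuration with python-dotenv.
# bot.py in the repository may load these values differently.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory

TELEGRAM_BOT_TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]
OPENCODE_SERVER_URL = os.getenv("OPENCODE_SERVER_URL", "http://127.0.0.1:4096")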

Step 2 – Configure multi‑model access

OpenCode’s custom provider mechanism allows any OpenAI‑compatible API to be added. Example configuration for SiliconFlow (placed in ~/.config/opencode/opencode.json or a local opencode.json):

{
  "provider": {
    "siliconflow": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "SiliconFlow",
      "options": {
        "apiKey": "sk-YOUR_API_KEY",
        "baseURL": "https://api.siliconflow.cn/v1"
      },
      "models": {
        "Pro/moonshotai/Kimi-K2.5": {"name": "Pro/moonshotai/Kimi-K2.5"},
        "Pro/zai-org/GLM-5": {"name": "Pro/zai-org/GLM-5"}
      }
    }
  }
}

Set the npm field to @ai-sdk/openai-compatible and provide baseURL and apiKey. Similar blocks can be added for Quotio, Ollama, and OpenCode Zen, giving access to dozens of models.
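
For instance, a local Ollama instance can be wired in the same way. A hedged sketch (the model name is a placeholder, and Ollama's OpenAI-compatible endpoint is assumed to live at http://localhost:11434/v1):

{
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "apiKey": "ollama",
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen2.5-coder:7b": {"name": "qwen2.5-coder:7b"}
      }
    }
  }
}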

Step 3 – Launch the bot

# Start OpenCode server
opencode serve --port 4096

# In another terminal, start the Telegram bot
python3 bot.py

When the console prints “🤖 Bot 已启动!” (“Bot started!”), the bot is ready to receive Telegram messages.

Bot commands

/start – begin a conversation.
/new – create a new programming session.
/sessions – list all sessions.
/switch <id> – switch to a specific session.
/model <name> – select a model (25+ models supported).
/init – initialize a project (generates AGENTS.md).
/undo – revert the last action.
/status – check server health.

Typical workflow: /new → /model 25 (select Kimi‑K2.5) → send a request such as “Help me analyze src/main.py”. The AI replies within seconds.
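
Under the hood, that request amounts to one call through the client wrapper shown later in this article; an illustrative sketch (the session id and model string are the ones produced by /new and /model):

# Illustrative only: what the workflow above boils down to through the
# OpenCodeClient wrapper described in "Core code insight" below.
async def analyze_main(client, session_id):
    return await client.send_message(
        session_id,                                     # created by /new
        "Help me analyze src/main.py",
        model="siliconflow/Pro/moonshotai/Kimi-K2.5",   # chosen via /model
    )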

Common pitfalls

Model identifiers with multiple slashes: SiliconFlow model IDs like Pro/moonshotai/Kimi-K2.5 produce three slashes when combined with the provider name (siliconflow/Pro/moonshotai/Kimi-K2.5). The original code used model.split("/"), which raised “too many values to unpack”. Fix by splitting only on the first slash:

# ❌ old
provider_id, model_id = model.split("/")

# ✅ new
provider_id, model_id = model.split("/", 1)

Telegram Bot singleton listener: Only one process can poll getUpdates or receive webhook events for a given token; running two listeners will kick each other off. Sending messages only (no polling) is safe and allows other services to reuse the same token for notifications.
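
Sending a notification with the same token, without ever calling getUpdates, is a single HTTPS request to the Bot API; a minimal sketch (the chat_id is a placeholder):

# Minimal sketch: send-only reuse of the bot token. This never polls
# getUpdates, so it cannot conflict with the bot.py listener.
import os
import requests

token = os.environ["TELEGRAM_BOT_TOKEN"]
requests.post(
    f"https://api.telegram.org/bot{token}/sendMessage",
    json={"chat_id": 123456789, "text": "Nightly RSS summary is ready."},  # placeholder chat_id
    timeout=10,
)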

Core code insight

The OpenCodeClient class in opencode_client.py encapsulates all HTTP calls. Example method for sending a message:

class OpenCodeClient:
    async def send_message(self, session_id, content, model=None):
        """Send a message and get the AI reply"""
        body = {"parts": [{"type": "text", "text": content}]}
        if model:
            provider_id, model_id = model.split("/", 1)
            body["model"] = {"providerID": provider_id, "modelID": model_id}
        return await self._request("POST", f"/session/{session_id}/message", json=body)

OpenCode API endpoints used by the bot:

POST /session – create a session.
POST /session/{id}/message – send a message and wait for a reply.
POST /session/{id}/command – execute slash commands (e.g., /init).
POST /session/{id}/abort – abort a running task.
POST /session/{id}/revert – undo the last operation.
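
The remaining wrapper methods follow the same pattern; a hedged sketch (method names and request bodies are assumptions built on the _request helper from the snippet above, and may differ from the real opencode_client.py):

# Hedged sketch: the other endpoints mapped onto the same wrapper.
# Method names and the /command body shape are assumptions.
class OpenCodeClient:
    async def create_session(self):
        return await self._request("POST", "/session")

    async def run_command(self, session_id, command):
        # e.g. command="/init" to generate AGENTS.md; body shape assumed
        return await self._request("POST", f"/session/{session_id}/command",
                                   json={"command": command})

    async def abort(self, session_id):
        return await self._request("POST", f"/session/{session_id}/abort")

    async def revert(self, session_id):
        return await self._request("POST", f"/session/{session_id}/revert")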

Model selection is expressed in the request body as:

{
  "model": {
    "providerID": "siliconflow",
    "modelID": "Pro/moonshotai/Kimi-K2.5"
  }
}

Advanced uses

Multi‑user isolation via user_id ensures each Telegram user has an independent session and model choice.
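
A hedged sketch of what that per-user state can look like (the actual data structures in bot.py may differ):

# Hedged sketch of per-user isolation; bot.py may structure this differently.
from dataclasses import dataclass

@dataclass
class UserState:
    session_id: str | None = None   # current OpenCode session for this user
    model: str | None = None        # e.g. "siliconflow/Pro/moonshotai/Kimi-K2.5"

user_states: dict[int, UserState] = {}   # keyed by Telegram user_id

def get_state(user_id: int) -> UserState:
    return user_states.setdefault(user_id, UserState())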

Scheduled tasks (e.g., RSS daily summaries) can reuse the same Bot token for outbound messages without conflict.

The opencode_client.py wrapper can be reused to build bots for Discord, Slack, WeChat, etc.

Enabling OpenCode Server authentication and exposing it through a tunnel allows small teams to share the same AI coding assistant.

Pros and limitations

Open‑source, under 900 lines of Python, straightforward deployment.

Supports 25+ models, including Chinese providers, via the custom provider mechanism.

Provides mobile‑first access to AI‑assisted coding.

Requires a continuously running local OpenCode server.

Telegram message length limit (4096 characters) may require chunking of long replies.
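
A simple way to stay under that limit is to split replies before sending; a minimal sketch (the real bot may chunk more carefully, e.g. on line or code-fence boundaries):

# Minimal sketch: split a long reply into Telegram-sized chunks.
TELEGRAM_LIMIT = 4096

def chunk_reply(text: str, limit: int = TELEGRAM_LIMIT) -> list[str]:
    return [text[i:i + limit] for i in range(0, len(text), limit)]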

File upload to the bot is not yet implemented.

Repository: https://github.com/tjxj/fakeclawbot.git

OpenCode documentation: https://opencode.ai/docs/zh-cn/server/

Tags: Python, Automation, open-source, Large Language Model, AI Coding Assistant, Telegram bot, OpenCode, Server API
Written by Old Zhang's AI Learning – an AI practitioner specializing in large-model evaluation and on-premise deployment, agents, AI programming, Vibe Coding, general AI, and broader tech trends, with daily original technical articles.
