Step-by-Step Local Deployment Guide for Coze Studio: Launch Your Low-Code AI Agent Development

This article provides a comprehensive, hands‑on tutorial for installing Ollama, Docker, and the open‑source Coze Studio on a local machine, configuring various LLM services such as Qwen 3, DeepSeek‑V3, and OpenRouter, and running the platform via Docker Compose to create and test AI agents.

Coze Studio is an open‑source low‑code AI Agent development platform built with a Golang backend and a React + TypeScript frontend, using a microservice and DDD architecture for high performance and extensibility.

1. Install Ollama (private LLM runtime)

Ollama is a lightweight framework for running large language models locally. Download the client for macOS, Linux, or Windows from https://ollama.com/ and follow the installer prompts.
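Once installed, Ollama also runs a local HTTP server, by default on port 11434. A quick sanity check:

ollama --version
curl http://localhost:11434
# the curl call should answer "Ollama is running" if the server is up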

Pull a model (e.g., Qwen 3) with one of the following commands:

ollama run qwen3:8b
# or
ollama run qwen3:14b
# or
ollama run qwen3:32b

Memory requirements:

7B ≈ 8‑16 GB RAM
14B ≈ 16‑32 GB RAM
32B ≈ 32‑64 GB RAM
70B+ ≈ 64 GB+ RAM
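To check what a model actually consumes on your machine, Ollama ships two handy commands:

ollama list   # installed models and their sizes on disk
ollama ps     # models currently loaded and the memory they occupy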

2. Install Docker

Docker provides containerised deployment. Download Docker Desktop from https://www.docker.com/ and install the graphical client for your OS.
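Before moving on, verify that both the engine and the Compose plugin are available and that the daemon is running:

docker --version
docker compose version
docker info   # errors out if the Docker daemon is not running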

3. Deploy Coze Studio locally

3.1 Environment requirements

Minimum 2 CPU cores, 4 GB RAM.

Docker and Docker Compose installed; Docker daemon running.

3.2 Get the source code

git clone https://github.com/coze-dev/coze-studio.git

If Git is unavailable, download the zip archive from the project page.
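For example, GitHub serves branch archives directly; a sketch assuming the default branch is main:

curl -L -o coze-studio.zip https://github.com/coze-dev/coze-studio/archive/refs/heads/main.zip
unzip coze-studio.zip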

3.3 Configure model templates

Copy a template from backend/conf/model/template to backend/conf/model. Example: rename model_template_ollama.yaml to model_ollama_qwen3_8b.yaml and set model: qwen3:8b. Ensure the id field is a non‑zero unique integer.
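In shell terms, the copy step looks roughly like this (run from the repository root; the exact field layout follows the template file itself):

cp backend/conf/model/template/model_template_ollama.yaml backend/conf/model/model_ollama_qwen3_8b.yaml
# then edit the copied file: set id to a unique non-zero integer
# and point the model field at the local tag, e.g. model: qwen3:8b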

For an online service, copy model_template_deepseek.yaml to model_openrouter_ds_v3.yaml and set the DeepSeek‑V3 API key obtained from OpenRouter (https://openrouter.ai/deepseek/deepseek-chat-v3-0324:free/api).
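The same pattern applies to the online service (field names follow the DeepSeek template):

cp backend/conf/model/template/model_template_deepseek.yaml backend/conf/model/model_openrouter_ds_v3.yaml
# paste the OpenRouter API key into the template's API-key field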

Use ollama list to verify locally installed models.

3.4 Start the services

cd docker
cp .env.example .env
docker compose --profile '*' up -d

The first run pulls and builds images; wait until the coze-server container shows a green (running) status in Docker Desktop.
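Startup progress can also be watched from the terminal (run from the docker directory):

docker compose ps   # wait until coze-server reports a running / healthy state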

To apply configuration changes, restart the server:

docker compose --profile '*' restart coze-server

3.5 Use Coze Studio

Open a browser at http://localhost:8888/, register with an email and password, then create an agent via the “Create” button. The model list shows the configured LLMs; test them in the chat pane.

If errors occur, inspect the coze-server logs.
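The logs can be tailed directly through Compose, again from the docker directory:

docker compose logs --tail 100 coze-server   # last 100 lines
docker compose logs -f coze-server           # follow live output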

Project repository: https://github.com/coze-dev/coze-studio