How to Build a High‑Performance Local Text‑to‑Image Service with Flux and Cursor IDE

Learn step by step how to set up a stable, efficient local text‑to‑image generation service using the Flux model series on Alibaba Cloud’s Bailian platform: integrate it with Cursor IDE via MCP, configure the environment, manage API keys, and run the service with sample code and results.

Eric Tech Circle

Introduction

With the rapid growth of AI image generation, developers increasingly need a reliable, high‑performance text‑to‑image service they can drive from their own machines. This guide explains how to combine the Flux model series from Black Forest Labs with Alibaba Cloud’s Bailian platform and Cursor IDE to build a production‑ready image generation service that runs locally while delegating model inference to Bailian’s API.

Flux Model Series Overview

The Flux family, created by the core team behind Stable Diffusion, represents the state‑of‑the‑art in text‑to‑image generation. Three versions are offered for different scenarios:

Flux.1 Pro

Flagship model with top‑tier image quality and speed.

Closed‑source; accessed via API only.

Designed for enterprise and commercial projects.

Delivers the best visual fidelity and style diversity.

Flux.1 Dev

Developer‑friendly base model distilled from the Pro version.

Improved prompt adherence.

Near‑professional visual detail.

Released under a non‑commercial license suitable for community research.

Flux.1 Schnell

Speed‑optimized version, roughly ten times faster than Dev.

Lightweight design with minimal resource consumption.

Ideal for edge devices and local deployment.

Apache 2.0 license, allowing personal and commercial use.

Comparison with Other Popular Models

Compared with Stable Diffusion 3 and Midjourney, the Flux models deliver higher image quality, more natural color rendering, and better performance efficiency. Flux.1 Schnell is also commercially friendly under Apache 2.0, whereas Stable Diffusion 3 demands more compute and Midjourney is a closed‑source cloud service.

Implementation Process

1. Environment Preparation

Operating System: macOS, Linux, or Windows (examples use macOS).

Development Tool: Cursor IDE 0.47.9 or later.

Programming Language: Python 3.8 or newer (3.13 recommended).

Package Manager: uv (recommended) or pip.
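Before going further, it can help to confirm the toolchain is in place. A quick sanity check, assuming python3 is on your PATH (uv may not be installed yet at this point):

```shell
# Verify the interpreter meets the 3.8+ requirement (3.13 recommended)
python3 -c 'import sys; assert sys.version_info >= (3, 8), sys.version'
echo "Python version OK"

# Report uv if it is already installed; harmless if it is not
command -v uv >/dev/null && uv --version || echo "uv not installed yet"
```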

2. Environment Setup and Dependency Installation

Install uv for fast package management:

curl -LsSf https://astral.sh/uv/install.sh | sh

Then create and initialize the project:

# Create and enter project directory
mkdir flux-image-service && cd flux-image-service

# Initialise Python project
uv init
uv venv
source .venv/bin/activate

# Install core dependencies
uv add "mcp[cli]" dashscope python-dotenv requests

3. Implement MCP Server (server.py)

Create server.py with the following content. The script loads environment variables, creates a FastMCP server, defines a generate_image tool that calls the Flux model via DashScope, saves generated images, and returns status information.

from http import HTTPStatus
from urllib.parse import urlparse, unquote
from pathlib import Path, PurePosixPath
import os
import json

import requests
from dashscope import ImageSynthesis
from dotenv import load_dotenv
from mcp.server.fastmcp import FastMCP

load_dotenv()

mcp = FastMCP("flux-schnell-server")
OUTPUT_DIR = Path("output")
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
CURRENT_DIR = Path.cwd().resolve()
print(f"MCP server working directory: {CURRENT_DIR}")

@mcp.tool()
def generate_image(prompt: str) -> dict:
    """Generate an image from a text prompt using flux‑schnell or flux‑dev model.
    Args:
        prompt (str): Text prompt for image generation.
    Returns:
        dict: Contains status, image paths, and error messages if any.
    """
    model = "flux-dev"  # switch to "flux-schnell" for faster inference
    api_key = os.getenv("DASH_API_KEY")
    if not api_key:
        error_msg = {"status": "error", "message": "DASH_API_KEY not found in config or environment variables"}
        print(f"Error: {json.dumps(error_msg, ensure_ascii=False)}")
        return error_msg
    try:
        print(f"Generating image for prompt: {prompt}")
        rsp = ImageSynthesis.call(model=model, prompt=prompt, size="1024*1024", api_key=api_key)
        print(f"API response code: {rsp.status_code}")
        if rsp.status_code == HTTPStatus.OK:
            image_paths = []
            for result in rsp.output.results:
                file_name = PurePosixPath(unquote(urlparse(result.url).path)).parts[-1]
                relative_path = f"output/{file_name}"
                output_path = OUTPUT_DIR / file_name
                print(f"Saving image to: {relative_path}")
                with open(output_path, "wb") as f:
                    f.write(requests.get(result.url, timeout=60).content)
                absolute_path = str((CURRENT_DIR / relative_path).resolve())
                image_paths.append(absolute_path)
            result = {"status": "success", "image_paths": image_paths}
            print(f"Image generation succeeded: {json.dumps(result, ensure_ascii=False)}")
            return result
        else:
            error_msg = {"status": "error", "status_code": rsp.status_code, "code": rsp.code, "message": rsp.message}
            print(f"Generation failed: {json.dumps(error_msg, ensure_ascii=False)}")
            return error_msg
    except Exception as e:
        error_msg = {"status": "error", "message": f"Exception occurred: {str(e)}"}
        print(f"Exception: {error_msg}")
        return error_msg

if __name__ == "__main__":
    mcp.run()
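The filename handling in generate_image is worth a closer look: the result URL's path is percent-decoded and its final segment taken as the local filename. The same logic in isolation (the example URL below is made up for illustration):

```python
from urllib.parse import urlparse, unquote
from pathlib import PurePosixPath

def filename_from_url(url: str) -> str:
    """Return the final path segment of a URL, with percent-escapes decoded."""
    return PurePosixPath(unquote(urlparse(url).path)).parts[-1]

# Hypothetical result URL, for illustration only
print(filename_from_url("https://dashscope-result.example/abc/img%201.png"))  # img 1.png
```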

4. API Key Configuration

Security Note: API keys are sensitive; never commit them to a repository. Store them in environment variables or a local .env file.

Register on the Alibaba Cloud Bailian platform, complete real‑name verification, locate the API‑KEY option, and create a new key. Then create a .env file at the project root containing:

DASH_API_KEY=your_api_key_here
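load_dotenv reads this file for you at startup. For clarity, a minimal sketch of what such a loader does (the real python-dotenv supports more syntax, e.g. quoting and export prefixes; this is not its implementation):

```python
import os
from pathlib import Path

def load_env_file(path: str = ".env") -> None:
    """Minimal .env reader: parses KEY=value lines, skipping blanks and
    comments. Variables already set in the environment are not overwritten."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())
```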

5. Cursor Integration Configuration

Create .cursor/mcp.json to tell Cursor how to launch the MCP server. Replace placeholder paths with the absolute locations of your uv executable and project directory.

{
  "mcpServers": {
    "flux-schnell-server": {
      "command": "/full/path/to/uv",
      "args": ["--directory", "/full/path/to/project", "run", "server.py"]
    }
  }
}
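A quick way to catch typos before pointing Cursor at the file is to parse it and check for the expected keys. A small sketch of such a check (the helper name is ours, not part of Cursor):

```python
import json
from pathlib import Path

def check_mcp_config(path: str = ".cursor/mcp.json") -> dict:
    """Parse the Cursor MCP config and return the flux server entry,
    raising if the expected structure is missing."""
    config = json.loads(Path(path).read_text())
    server = config["mcpServers"]["flux-schnell-server"]
    if "command" not in server or "args" not in server:
        raise ValueError("incomplete server entry: need 'command' and 'args'")
    return server
```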

After saving, Cursor should show a green indicator next to the server entry, confirming successful MCP integration.

Running the Service

Activate the virtual environment, ensure DASH_API_KEY is set, and start the server via Cursor or by executing uv run server.py (or python server.py inside the activated environment). The service listens for MCP calls, generates images with the selected Flux variant through the DashScope API, stores them under output/, and returns absolute file paths.

Result Demonstration

Sample outputs generated with Flux demonstrate fine‑grained detail, natural color transitions, and a balance between realism and artistic style across portrait, landscape, cityscape, and stylized scenes.

[Example images: two portraits, a scenic landscape, a cityscape, and an artistic scene]

Free Quota Information

Free Quota: Bailian provides 1,000 free calls per Flux model every six months, counted separately for each variant, which is sufficient for personal development and learning.

Conclusion

This guide gives developers a complete workflow for deploying a high‑quality text‑to‑image service using the latest Flux models, Alibaba Cloud’s Bailian platform, and Cursor IDE’s MCP integration, enabling rapid experimentation directly from the editor.

Tags: cloud computing, Python, MCP, text-to-image, Flux, Cursor IDE, AI model deployment
Written by Eric Tech Circle

Backend team lead & architect with 10+ years of experience, full‑stack engineer, sharing insights and solo development practice.
