Bridging Go and Python with pyproc: Ultra‑Low‑Latency Interprocess Calls

This article introduces pyproc, a library that lets Go applications invoke Python functions over Unix Domain Sockets with a median latency of about 45 µs. It explains why mixing the Go and Python ecosystems is hard, describes pyproc's architecture and performance benefits, outlines suitable use cases, and walks through a step-by-step quick-start guide with full code examples.

BirdNest Tech Talk

pyproc: An Elegant Solution

The author discovered pyproc while searching GitHub-starred projects and decided to document it because it promises to become a core tool in high-performance networking and AI-infrastructure projects that need to combine Go with Python libraries such as LangChain, PyTorch, or vLLM.

Why Go‑Python Integration Is Hard

Go services often face tasks that depend on the Python ecosystem, including:

Machine-learning models: calling models trained in PyTorch or TensorFlow.

Data-science libraries: pandas, numpy, and similar tools.

Legacy Python code: large, hard-to-refactor codebases.

Python-only libraries: functionality that exists only in Python.

Traditional approaches—CGO bindings, gRPC‑based microservices, or shell‑command wrappers—introduce high latency, deployment complexity, and maintenance overhead.

Core Advantages of pyproc

High performance, low latency: communication occurs over Unix Domain Sockets (UDS) with no network stack overhead; the reported 50th-percentile latency is 45 µs.

True parallel processing: a pool of independent Python worker processes bypasses the Global Interpreter Lock (GIL), enabling genuine parallel computation.

Process isolation: crashes in a Python worker do not affect the main Go program.

Simple deployment: only a single Go binary and the required Python scripts are needed—no separate microservice infrastructure.

Concise API: pool.Call(ctx, "function_name", input, &output) invokes a Python function as if it were a local call.

How pyproc Works

The Go application creates a Python worker pool.

When a Python function is needed, the Go side sends the function name and arguments through a Unix Domain Socket to an idle Python worker.

The Python worker executes the function and returns the result over the same socket.

All communication stays inside the same host or Kubernetes pod, ensuring efficiency and stability.

Who Should Use pyproc

Teams that must embed existing Python machine‑learning models into Go services.

Developers who want to leverage pandas, numpy, or other data‑processing libraries from Go.

Groups migrating from Python microservices to Go while retaining some Python logic.

Quick‑Start Guide

Three steps get you running:

Install the Go and Python packages:

go get github.com/YuminosukeSato/pyproc@latest
pip install pyproc-worker

Create a Python worker (e.g., worker.py) and expose functions with the @expose decorator:

# worker.py
from pyproc_worker import expose, run_worker

@expose
def predict(req):
    """A simple example that doubles the input value"""
    return {"result": req["value"] * 2}

if __name__ == "__main__":
    run_worker()

Call from Go using the pool API:

package main

import (
    "context"
    "fmt"
    "log"
    "github.com/YuminosukeSato/pyproc/pkg/pyproc"
)

func main() {
    // Create a pool with 4 Python workers
    pool, err := pyproc.NewPool(pyproc.PoolOptions{
        Config: pyproc.PoolConfig{Workers: 4, MaxInFlight: 10},
        WorkerConfig: pyproc.WorkerConfig{
            SocketPath:   "/tmp/pyproc.sock",
            PythonExec:   "python3",
            WorkerScript: "worker.py",
        },
    }, nil)
    if err != nil { log.Fatal(err) }
    ctx := context.Background()
    if err := pool.Start(ctx); err != nil { log.Fatal(err) }
    defer pool.Shutdown(ctx)

    input := map[string]interface{}{ "value": 42 }
    var output map[string]interface{}
    if err := pool.Call(ctx, "predict", input, &output); err != nil { log.Fatal(err) }
    fmt.Printf("Result: %v\n", output["result"]) // prints: Result: 84
}

Run the program with go run main.go. The example demonstrates how a Go service can offload a simple computation to Python with minimal latency overhead, while preserving Go's concurrency model and deployment simplicity.

Tags: Python, Go, High Performance, AI Infrastructure, Unix Domain Socket, Interprocess Communication
Written by

BirdNest Tech Talk

Author of the rpcx microservice framework, original book author, and chair of Baidu's Go CMC committee.
