Backend Development

Implementing a Simple Load Balancer in Go

This article walks through building a basic round‑robin load balancer in Go, covering the underlying principles, data structures, reverse‑proxy integration, atomic indexing, concurrency handling, health‑check mechanisms, and how to extend the implementation for production use.

360 Tech Engineering

Load balancing is a critical component of web architecture, distributing requests across multiple backend services to improve scalability and availability. To deepen understanding beyond tools like Nginx, this guide demonstrates a lightweight Go implementation using a round‑robin strategy.

Working principle: The balancer selects a backend according to a chosen algorithm; here we start with the simplest round-robin method, which cycles through backends so each receives an equal share of requests.
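At its core, round robin is just a counter that wraps around the backend list. A minimal sketch (the addresses below are made up for illustration):

```go
package main

import "fmt"

func main() {
	// Illustrative backend addresses; real ones would come from configuration.
	backends := []string{"10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"}
	idx := 0
	for req := 1; req <= 6; req++ {
		fmt.Printf("request %d -> %s\n", req, backends[idx])
		idx = (idx + 1) % len(backends) // wrap back to the first backend
	}
}
```

Each backend is selected exactly twice over six requests; the modulo is what produces the cycling.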

Data structures:

type Backend struct {
    URL          *url.URL
    Alive        bool
    mux          sync.RWMutex
    ReverseProxy *httputil.ReverseProxy
}

type ServerPool struct {
    backends []*Backend
    current  uint64
}

Each Backend stores its URL, health status, a mutex for safe concurrent access, and a ReverseProxy instance. ServerPool holds a slice of backends and a counter used for atomic selection.
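As a usage sketch, a hypothetical AddBackend helper (not part of the article's code) can populate the pool from raw URL strings, optimistically marking each backend alive until the first health check runs:

```go
package main

import (
	"fmt"
	"net/http/httputil"
	"net/url"
	"sync"
)

// Same structs as above.
type Backend struct {
	URL          *url.URL
	Alive        bool
	mux          sync.RWMutex
	ReverseProxy *httputil.ReverseProxy
}

type ServerPool struct {
	backends []*Backend
	current  uint64
}

// AddBackend parses a raw URL and appends a ready-to-use Backend,
// wiring up its reverse proxy in the same step.
func (s *ServerPool) AddBackend(rawURL string) error {
	u, err := url.Parse(rawURL)
	if err != nil {
		return err
	}
	s.backends = append(s.backends, &Backend{
		URL:          u,
		Alive:        true, // assume alive until a health check says otherwise
		ReverseProxy: httputil.NewSingleHostReverseProxy(u),
	})
	return nil
}

func main() {
	pool := &ServerPool{}
	for _, raw := range []string{"http://localhost:8081", "http://localhost:8082"} {
		if err := pool.AddBackend(raw); err != nil {
			panic(err)
		}
	}
	fmt.Println(len(pool.backends)) // prints 2
}
```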

ReverseProxy usage: Go's standard library provides httputil.NewSingleHostReverseProxy, which forwards incoming requests to a target URL and proxies the response back to the client.

u, err := url.Parse("http://localhost:8080")
if err != nil {
    log.Fatal(err)
}
rp := httputil.NewSingleHostReverseProxy(u)
http.Handle("/", http.HandlerFunc(rp.ServeHTTP))

Selection process: The pool atomically increments a shared counter and takes it modulo the number of backends to compute the next index; unhealthy backends are then skipped when fetching a live peer.

func (s *ServerPool) NextIndex() int {
    return int(atomic.AddUint64(&s.current, 1) % uint64(len(s.backends)))
}

Fetching a live backend:

func (s *ServerPool) GetNextPeer() *Backend {
    next := s.NextIndex()
    l := len(s.backends) + next
    for i := next; i < l; i++ {
        idx := i % len(s.backends)
        if s.backends[idx].IsAlive() {
            if i != next {
                atomic.StoreUint64(&s.current, uint64(idx))
            }
            return s.backends[idx]
        }
    }
    return nil
}

Concurrency handling: The Alive flag may be read and written by multiple goroutines, so an RWMutex protects it.

func (b *Backend) SetAlive(alive bool) {
    b.mux.Lock()
    b.Alive = alive
    b.mux.Unlock()
}

func (b *Backend) IsAlive() (alive bool) {
    b.mux.RLock()
    alive = b.Alive
    b.mux.RUnlock()
    return
}

Request handling:

func lb(w http.ResponseWriter, r *http.Request) {
    peer := serverPool.GetNextPeer()
    if peer != nil {
        peer.ReverseProxy.ServeHTTP(w, r)
        return
    }
    http.Error(w, "Service not available", http.StatusServiceUnavailable)
}

The handler is registered with the HTTP server:

server := http.Server{Addr: fmt.Sprintf(":%d", port), Handler: http.HandlerFunc(lb)}

Health checks: An active health-check routine periodically opens a TCP connection to each backend, updates its Alive status, and logs the result.

func isBackendAlive(u *url.URL) bool {
    timeout := 2 * time.Second
    conn, err := net.DialTimeout("tcp", u.Host, timeout)
    if err != nil {
        log.Println("Site unreachable, error:", err)
        return false
    }
    _ = conn.Close()
    return true
}

func (s *ServerPool) HealthCheck() {
    for _, b := range s.backends {
        alive := isBackendAlive(b.URL)
        b.SetAlive(alive)
        status := "up"
        if !alive {
            status = "down"
        }
        log.Printf("%s [%s]\n", b.URL, status)
    }
}

func healthCheck() {
    t := time.NewTicker(20 * time.Second)
    defer t.Stop()
    for range t.C {
        log.Println("Starting health check...")
        serverPool.HealthCheck()
        log.Println("Health check completed")
    }
}

go healthCheck()

Additional logic handles retry attempts, marks failing backends as down after three unsuccessful tries, and uses context values to track attempts and retries.

Conclusion: The tutorial provides a functional, extensible Go load balancer and suggests further enhancements such as weighted round-robin, least-connection algorithms, heap-based live-node tracking, statistics collection, and configuration file support.

Tags: Backend Development, Concurrency, Go, Reverse Proxy, Load Balancer, Health Check
Written by

360 Tech Engineering

Official tech channel of 360, building the most professional technology aggregation platform for the brand.
