
Design and Implementation of a Go WorkQueue Library: Architecture, Interface, and Performance Analysis

This article introduces the Go WorkQueue project: its motivation, overall architecture, the layered design of the Queue, Simple Queue, Delaying Queue, Priority Queue, and RateLimiting Queue, a comparison of underlying data structures (DoubleLinkedList versus slice), performance benchmarks, and memory‑fragmentation mitigation using sync.Pool.

Rare Earth Juejin Tech Community

Author Lee, a veteran with 17 years in IT, explains the motivation for creating an independent WorkQueue library: the client-go upgrade to version 1.28 rendered the previous WorkQueue implementation obsolete.

The project, hosted at https://github.com/shengyanli1982/workqueue, is a clean reimplementation of Kubernetes client-go's workqueue package, aiming to be lightweight and dependency‑free.

Architecture Design

The code mirrors the layered structure of the original client-go workqueue, with a base Queue providing Add, Get, Len, Stop, etc., and specialized queues (Delaying Queue, Priority Queue) derived from it. A RateLimiting Queue extends the Delaying Queue to control processing rates.

A Simple Queue behaves like the base Queue but without deduplication, offering higher performance.
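The layered design can be sketched with Go struct embedding, which is how client-go's workqueue also composes its queue types. The type and method names below are illustrative stand-ins, not the library's actual API:

```go
package main

import "fmt"

// Queue is a simplified stand-in for the base queue, which in the real
// library provides Add, Get, Len, Stop, and related methods.
type Queue struct {
	items []any
}

func (q *Queue) Add(item any) { q.items = append(q.items, item) }
func (q *Queue) Len() int     { return len(q.items) }

// DelayingQueue embeds the base Queue, inheriting its methods and adding
// delayed insertion on top.
type DelayingQueue struct {
	Queue
}

// AddAfter would schedule the item onto the base queue once the delay
// elapses; here it forwards immediately to keep the sketch short.
func (dq *DelayingQueue) AddAfter(item any, delayMillis int) { dq.Add(item) }

// RateLimitingQueue extends the DelayingQueue, where rate-limiting logic
// would decide when items become available.
type RateLimitingQueue struct {
	DelayingQueue
}

func main() {
	rlq := &RateLimitingQueue{}
	rlq.Add("task-1")          // base Queue method, promoted via embedding
	rlq.AddAfter("task-2", 10) // DelayingQueue method
	fmt.Println(rlq.Len())     // prints 2
}
```

Embedding gives each specialized queue the full surface of the layer beneath it while letting it override only the behavior it changes.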

Interface Design

The library defines an Interface with methods Add, Len, Get, GetWithBlock, Done, Stop, and IsClosed, and a Callback interface for OnAdd, OnGet, and OnDone events.

type Interface interface {
  Add(element any) error
  Len() int
  Get() (element any, err error)
  GetWithBlock() (element any, err error)
  Done(element any)
  Stop()
  IsClosed() bool
}

type Callback interface {
  OnAdd(any)
  OnGet(any)
  OnDone(any)
}

Performance Comparison

Benchmarks comparing client-go v1.26.12 and WorkQueue v0.1.3 (Go 1.20.11) show that WorkQueue is slightly slower due to its use of a DoubleLinkedList instead of a slice, but optimizations have narrowed the gap.

$ go test -benchmem -run=^$ -bench=^Benchmark* github.com/shengyanli1982/workqueue/benchmark
BenchmarkClientgoAdd-8          2363436   466.3 ns/op   171 B/op   1 allocs/op
BenchmarkWorkqueueAdd-8         1600765   670.0 ns/op    95 B/op   2 allocs/op
...

The extra cost comes mainly from per-node allocation in the DoubleLinkedList (two allocations per Add versus one for client-go's slice), even though WorkQueue actually touches less memory per operation (95 B versus 171 B). After tuning, the gap is minimal.

Data Structure Analysis

The author compares DoubleLinkedList and slice implementations for FIFO queues. Both offer O(1) push and pop on average, but a slice pays an occasional O(n) cost when its backing array must grow (amortized O(1) append), while a linked list allocates a node per element, fragmenting memory and adding GC pressure.

Tables summarise the average and worst‑case complexities for each data structure.
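The trade-off is easy to see with the same FIFO operations written over a slice and over the standard library's container/list doubly linked list (a stand-in here for the project's own DoubleLinkedList):

```go
package main

import (
	"container/list"
	"fmt"
)

// sliceDemo pushes and pops through a slice-backed FIFO.
func sliceDemo() int {
	q := make([]int, 0)
	for i := 0; i < 3; i++ {
		q = append(q, i) // amortized O(1); worst-case O(n) when the array grows
	}
	head := q[0]
	q = q[1:] // O(1) pop, but the popped slot is not reclaimed until reallocation
	_ = q
	return head
}

// listDemo does the same through a doubly linked list.
func listDemo() int {
	q := list.New()
	for i := 0; i < 3; i++ {
		q.PushBack(i) // O(1), but one heap allocation per node
	}
	front := q.Front()
	val := front.Value.(int)
	q.Remove(front) // O(1); the node becomes garbage for the GC
	return val
}

func main() {
	fmt.Println(sliceDemo(), listDemo()) // prints 0 0
}
```

The list never pays a resize cost, which keeps worst-case latency flat, but every node allocation is a small heap object; this is exactly the fragmentation the next section addresses with sync.Pool.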

Optimization with sync.Pool

To mitigate memory fragmentation from the linked‑list nodes, the library pools ListNode objects using Go's sync.Pool. Nodes are fetched from the pool on Add and returned on Get, keeping the pool size stable and reducing GC pressure.

type ListNodePool struct {
  bp sync.Pool // pool of *Node objects
}

func NewListNodePool() *ListNodePool {
  return &ListNodePool{
    bp: sync.Pool{New: func() any { return NewNode(nil) }},
  }
}

// Get fetches a recycled node, allocating a fresh one only when the pool is empty.
func (p *ListNodePool) Get() *Node {
  return p.bp.Get().(*Node)
}

// Put resets the node and returns it to the pool for reuse.
func (p *ListNodePool) Put(b *Node) {
  if b != nil {
    b.Reset()
    p.bp.Put(b)
  }
}

The main queue implementation uses this pool to allocate and recycle nodes efficiently.
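To make the snippet above runnable on its own, here is a self-contained version with a minimal Node type. Node, NewNode, and Reset are simplified stand-ins for the library's list node, kept just large enough to show the Get/Put cycle:

```go
package main

import (
	"fmt"
	"sync"
)

// Node is a simplified stand-in for the library's linked-list node.
type Node struct {
	data any
	prev *Node
	next *Node
}

func NewNode(data any) *Node { return &Node{data: data} }

// Reset clears the node so a pooled node carries no stale references,
// letting the GC reclaim whatever the node previously pointed at.
func (n *Node) Reset() { n.data, n.prev, n.next = nil, nil, nil }

type ListNodePool struct {
	bp sync.Pool
}

func NewListNodePool() *ListNodePool {
	return &ListNodePool{bp: sync.Pool{New: func() any { return NewNode(nil) }}}
}

func (p *ListNodePool) Get() *Node { return p.bp.Get().(*Node) }

func (p *ListNodePool) Put(b *Node) {
	if b != nil {
		b.Reset() // drop references before returning to the pool
		p.bp.Put(b)
	}
}

func main() {
	pool := NewListNodePool()
	n := pool.Get() // on Add: fetch a node instead of allocating one
	n.data = "task"
	fmt.Println(n.data) // prints task
	pool.Put(n) // on Get: return the node for reuse
}
```

Because Put resets every node before pooling it, a recycled node is indistinguishable from a freshly allocated one, which is what keeps reuse safe under concurrency.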

Conclusion

The first article in the WorkQueue series provides a comprehensive overview of the project's goals, architectural choices, interface definitions, performance characteristics, and memory‑management strategies, setting the stage for deeper dives into individual queue modules in future posts.

Tags: performance, Go, Data Structures, sync.Pool, Queue, WorkQueue
Written by

Rare Earth Juejin Tech Community

Juejin, a tech community that helps developers grow.
