How Kafka Powers Scalable E‑commerce Order Processing with Go
This article walks through the challenges of a fast‑growing e‑commerce platform during peak sales, explains why Apache Kafka is the ideal asynchronous messaging backbone, and provides a complete Go implementation—including producers, consumers, best‑practice patterns, and real‑world use cases—to achieve high throughput, fault tolerance, and seamless scalability.
Why use Apache Kafka for high‑traffic back‑ends
Kafka provides a durable, horizontally scalable log that decouples producers and consumers. It can absorb traffic spikes (e.g., Black Friday sales) without overloading databases or email services.
Core Kafka concepts
Topic: a named stream of records (e.g., order-created).
Partition: each topic is split into ordered logs; partitions enable parallelism.
Broker: a Kafka server that stores partitions.
Producer: a client that appends records to a topic.
Consumer: a client that reads records from one or more partitions.
Offset: the position of a consumer within a partition.
Data flow overview
Producer → Topic → [Partition‑0, Partition‑1, …] (stored on Brokers)
Consumer reads from a Partition → updates its Offset
Using the same key (e.g., userID) guarantees that all related messages land in the same partition, preserving order.
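To see why a consistent key preserves ordering, here is a minimal sketch of the idea behind Sarama's default hash partitioner: hash the key, take it modulo the partition count. (`pickPartition` is a hypothetical helper for illustration, not a Sarama API; Sarama's real partitioner also uses FNV-1a but with additional handling.)

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pickPartition sketches the hash-partitioner idea: the same key
// always hashes to the same partition number.
func pickPartition(key string, numPartitions int32) int32 {
	h := fnv.New32a()
	h.Write([]byte(key))
	p := int32(h.Sum32()) % numPartitions
	if p < 0 {
		p = -p
	}
	return p
}

func main() {
	// The same key always yields the same partition…
	fmt.Println(pickPartition("user-123", 3) == pickPartition("user-123", 3)) // true
	// …while different keys may land on different partitions.
	fmt.Println(pickPartition("user-123", 3), pickPartition("user-456", 3))
}
```

Because every event for `user-123` lands in the same partition, a single consumer of that partition sees those events in the order they were produced.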
Go implementation with Sarama
Step 1 – Project setup
# Create project directory
mkdir kafka-golang-demo && cd kafka-golang-demo
# Initialise a Go module
go mod init kafka-demo
# Add the Sarama client library
go get github.com/Shopify/sarama
Step 2 – Producer
package main
import (
"fmt"
"log"
"time"
"github.com/Shopify/sarama"
)
func main() {
cfg := sarama.NewConfig()
cfg.Producer.Return.Successes = true // wait for ack
cfg.Producer.RequiredAcks = sarama.WaitForAll // replicate to all ISR
cfg.Producer.Retry.Max = 5 // retry on transient errors
brokers := []string{"localhost:9092"}
producer, err := sarama.NewSyncProducer(brokers, cfg)
if err != nil {
log.Fatalf("Failed to create producer: %v", err)
}
defer producer.Close()
topic := "user-events"
msg := &sarama.ProducerMessage{
Topic: topic,
// Key guarantees ordering per user
Key: sarama.StringEncoder("user-123"),
Value: sarama.StringEncoder("User John registered at " + time.Now().String()),
}
partition, offset, err := producer.SendMessage(msg)
if err != nil {
log.Printf("Send failed: %v", err)
return
}
fmt.Printf("Message stored in partition %d, offset %d\n", partition, offset)
}
Step 3 – Consumer
package main
import (
"fmt"
"log"
"github.com/Shopify/sarama"
)
func main() {
cfg := sarama.NewConfig()
cfg.Consumer.Return.Errors = true
brokers := []string{"localhost:9092"}
consumer, err := sarama.NewConsumer(brokers, cfg)
if err != nil {
log.Fatalf("Failed to create consumer: %v", err)
}
defer consumer.Close()
topic := "user-events"
// Consume from partition 0 starting at the newest offset
pc, err := consumer.ConsumePartition(topic, 0, sarama.OffsetNewest)
if err != nil {
log.Fatalf("Failed to start partition consumer: %v", err)
}
defer pc.Close()
fmt.Println("Listening for messages…")
for {
select {
case msg := <-pc.Messages():
fmt.Printf("Received: %s\n", string(msg.Value))
case err := <-pc.Errors():
fmt.Printf("Consumer error: %v\n", err)
}
}
}
Production‑grade best practices
1️⃣ Error handling & retries
func sendMessageWithRetry(p sarama.SyncProducer, m *sarama.ProducerMessage) error {
const maxRetries = 3
for i := 0; i < maxRetries; i++ {
_, _, err := p.SendMessage(m)
if err == nil {
return nil // success
}
fmt.Printf("Retry %d/%d: %v\n", i+1, maxRetries, err)
time.Sleep(time.Second * time.Duration(i+1))
}
return fmt.Errorf("failed after %d retries", maxRetries)
}
2️⃣ Connection‑pool management
func createProducerPool(brokers []string, size int) ([]sarama.SyncProducer, error) {
var pool []sarama.SyncProducer
for i := 0; i < size; i++ {
cfg := sarama.NewConfig()
cfg.Producer.Return.Successes = true
p, err := sarama.NewSyncProducer(brokers, cfg)
if err != nil {
return nil, err
}
pool = append(pool, p)
}
return pool, nil
}
3️⃣ Graceful shutdown
// gracefulShutdown blocks until SIGINT or SIGTERM, then closes the
// Kafka clients (requires the "os", "os/signal", and "syscall" imports).
func gracefulShutdown(consumer sarama.Consumer, producer sarama.SyncProducer) {
sig := make(chan os.Signal, 1)
signal.Notify(sig, os.Interrupt, syscall.SIGTERM)
<-sig // wait for termination signal
fmt.Println("Shutting down gracefully…")
consumer.Close()
producer.Close()
os.Exit(0)
}
Real‑world scenarios
Scenario 1 – Decoupled order processing
// Order service stores the order then emits an event
func (s *OrderService) CreateOrder(o Order) error {
if err := s.repo.Save(o); err != nil {
return err
}
evt := OrderCreatedEvent{OrderID: o.ID, UserID: o.UserID, Amount: o.TotalAmount, Time: time.Now()}
return s.kafkaProducer.Send("order-created", evt)
}
// Payment service consumes the event independently
func (s *PaymentService) ProcessPayment(evt OrderCreatedEvent) error {
p := Payment{OrderID: evt.OrderID, Amount: evt.Amount, Status: "pending"}
return s.repo.Save(p)
}
Because the services communicate only through Kafka, a failure in the payment service does not block order creation.
Scenario 2 – User‑activity tracking
func (s *UserService) TrackActivity(userID, action string) error {
act := UserActivity{UserID: userID, Action: action, Timestamp: time.Now(), IP: getClientIP()}
return s.kafkaProducer.Send("user-activities", act)
}
Collected events can feed real‑time analytics, recommendation engines, or security monitoring pipelines.
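As a small illustration of the analytics side, a downstream consumer of the user-activities topic might aggregate events in memory; the trimmed UserActivity struct and countByAction helper here are assumptions for the sketch, not part of the services above:

```go
package main

import "fmt"

// UserActivity mirrors the tracked event (IP and timestamp omitted).
type UserActivity struct {
	UserID, Action string
}

// countByAction is the kind of aggregation an analytics consumer
// might run over events read from the "user-activities" topic.
func countByAction(events []UserActivity) map[string]int {
	counts := make(map[string]int)
	for _, e := range events {
		counts[e.Action]++
	}
	return counts
}

func main() {
	events := []UserActivity{
		{UserID: "u1", Action: "view"},
		{UserID: "u2", Action: "view"},
		{UserID: "u1", Action: "purchase"},
	}
	fmt.Println(countByAction(events)) // map[purchase:1 view:2]
}
```

In production this aggregation would more likely live in a stream processor or a windowed job, but the consumer-side shape is the same.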
Troubleshooting tips
Connection failure
func checkKafkaConnection(brokers []string) error {
cfg := sarama.NewConfig()
cfg.Net.DialTimeout = 5 * time.Second
client, err := sarama.NewClient(brokers, cfg)
if err != nil {
return fmt.Errorf("cannot connect to Kafka: %w", err)
}
defer client.Close()
return nil
}
Message ordering
msg := &sarama.ProducerMessage{
Topic: "user-events",
Key: sarama.StringEncoder(userID), // same key → same partition
Value: sarama.StringEncoder(payload),
}
Kafka guarantees order only within a partition; using a consistent key ensures that all events for a given entity are processed sequentially.
(Figure: Kafka architecture diagram)
Summary
Kafka’s durable log, partitioned storage, and high‑throughput networking make it ideal for building resilient, scalable back‑ends. With the Sarama Go client you can quickly write producers and consumers, apply retry logic, pool connections, and shut down cleanly. These patterns enable real‑time order processing, activity tracking, and many other event‑driven use cases while keeping services loosely coupled.