
Service Registry Showdown: Zookeeper, Eureka, Nacos, Consul & ETCD

This article examines five popular service registries: Zookeeper, Eureka, Nacos, Consul, and ETCD. It explains their core concepts, architecture, CAP trade-offs, health-check mechanisms, and multi-data-center support, and offers guidance on selecting the most suitable registry for different technology stacks and availability requirements.

This article explains five common service registries, compares their processes and principles, and is helpful for interviews or technology selection.

Before writing this article, the author's hands-on expertise was limited to ETCD; the material on Zookeeper, Eureka, Nacos, and Consul was gathered over two weeks of research.

Basic Concepts of Service Registry

What Is a Service Registry?

A service registry has three main roles:

Service Provider (RPC Server): registers itself with the registry at startup and sends periodic heartbeats.

Service Consumer (RPC Client): subscribes to services at startup, caches the list of service nodes locally, and connects to the chosen server.

Registry: stores the registration information of RPC servers and synchronizes changes to clients.

Clients select a server from the cached list using a load‑balancing algorithm.
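This client-side selection step can be sketched in a few lines. The snippet below is a minimal illustration, not any registry's actual client code; the node addresses and class name are hypothetical.

```python
import itertools
import random

class CachedServiceList:
    """Consumer-side cache of provider nodes with pluggable selection."""

    def __init__(self, nodes):
        self.nodes = list(nodes)            # snapshot pulled from the registry
        self._rr = itertools.cycle(self.nodes)

    def pick_round_robin(self):
        # Cycle through the cached snapshot in order.
        return next(self._rr)

    def pick_random(self):
        # Uniform random choice; real clients also weigh health and latency.
        return random.choice(self.nodes)

cache = CachedServiceList(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
print(cache.pick_round_robin())  # 10.0.0.1:8080
print(cache.pick_round_robin())  # 10.0.0.2:8080
```

Because selection happens against the local cache, a call can still succeed even if the registry itself is briefly unreachable; the cache is simply refreshed on the next sync.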

Functions a Registry Must Implement

This section is optional for readers already familiar with the basics.

CAP Theory

Consistency : all nodes see the same data at the same time.

Availability : every request receives a response, success or failure.

Partition Tolerance : the system continues to operate despite network partitions.

Because network partitions are unavoidable in any distributed system, partition tolerance is effectively mandatory; in practice the real trade-off is between consistency and availability, since only two of the three properties can be guaranteed at once.

Distributed System Protocols

Common consensus algorithms include Paxos, Raft, and ZAB.

Paxos is a message-passing consensus algorithm that requires a majority of replicas to be online.

Raft, designed for easier understanding and implementation, also requires a majority of nodes and is used by etcd and Kubernetes.

ZAB (ZooKeeper Atomic Broadcast) is ZooKeeper's purpose-built protocol; similar in spirit to Paxos, it adds crash recovery and totally ordered broadcast, giving ZooKeeper strong consistency.
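The majority requirement shared by these protocols is simple arithmetic: a quorum is floor(n/2) + 1, so an n-node cluster survives floor((n-1)/2) failures. A quick sketch makes the usual 3-or-5-node sizing advice obvious.

```python
def quorum(n: int) -> int:
    """Smallest majority in an n-node cluster."""
    return n // 2 + 1

def fault_tolerance(n: int) -> int:
    """Failures an n-node cluster survives while keeping a quorum."""
    return n - quorum(n)          # equals (n - 1) // 2

for n in (1, 3, 4, 5):
    print(f"{n} nodes: quorum={quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")
```

Note that a 4-node cluster tolerates only 1 failure, the same as 3 nodes, which is why even-sized consensus clusters add cost without adding safety.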

Common Service Registries

The five widely used registries are Zookeeper, Eureka, Nacos, Consul, and ETCD.

Zookeeper

Although not officially marketed as a registry, Zookeeper is often used as one in Dubbo environments.

How Zookeeper Implements a Registry

For detailed principles see the referenced article.

Zookeeper stores service information as znodes, e.g., /HelloWorldService/1.0.0/100.19.20.01:16888. Clients set watches on these paths; when a node changes they receive a lightweight change event and then pull the updated data themselves.
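This notify-then-pull pattern, with ZooKeeper's one-shot watches, can be modeled with a toy in-memory tree. This is a sketch of the semantics only, not real ZooKeeper client code (use Curator or kazoo for that); paths and addresses are hypothetical.

```python
class MiniRegistry:
    """Toy ZooKeeper-style tree: watchers get a data-free event, then re-read."""

    def __init__(self):
        self.nodes = {}        # path -> data
        self.watchers = {}     # parent path -> one-shot callbacks

    def watch_children(self, prefix, callback):
        self.watchers.setdefault(prefix, []).append(callback)

    def create(self, path, data=b""):
        self.nodes[path] = data
        self._notify(path)

    def delete(self, path):
        self.nodes.pop(path, None)
        self._notify(path)

    def get_children(self, prefix):
        return sorted(p for p in self.nodes if p.startswith(prefix + "/"))

    def _notify(self, path):
        parent = path.rsplit("/", 1)[0]
        # Fire and clear: like ZooKeeper, watches are one-shot and
        # must be re-registered after each event.
        for cb in self.watchers.pop(parent, []):
            cb(parent)

reg = MiniRegistry()
seen = []

def on_change(parent):
    # The event carries no payload; pull the fresh child list ourselves.
    seen.append(reg.get_children(parent))
    reg.watch_children(parent, on_change)   # re-arm the one-shot watch

reg.watch_children("/HelloWorldService/1.0.0", on_change)
reg.create("/HelloWorldService/1.0.0/10.0.0.1:16888")
reg.create("/HelloWorldService/1.0.0/10.0.0.2:16888")
print(seen[-1])   # both registered instance paths
```

In real ZooKeeper the instance znodes are ephemeral, so a provider crash closes its session and the node disappears, triggering the same watch event.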

Zookeeper follows a CP model (strong consistency) which can lead to long leader‑election times (30‑120 s) and temporary unavailability, making it less suitable when high availability is critical.

Eureka

Architecture

Eureka adopts an AP model: as long as any instance is alive, the service remains available, though data may be stale.

Decentralized architecture: peer-to-peer replication between servers, with no master node.

Automatic request switching: clients fail over to other Eureka servers when one becomes unreachable.

Self-protection mode: when too many heartbeats are missed at once (suggesting a network problem rather than mass instance failure), Eureka stops deregistering instances.

Workflow

Eureka Server starts and waits for service registrations.

Clients register themselves.

Clients send heartbeats every 30 s.

If the server receives no heartbeat from an instance for 90 s, that instance is deregistered.

During network issues, self‑protection prevents mass deregistration.

Clients cache the registry locally and refresh as needed.
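The 30 s renew / 90 s evict lease cycle above can be simulated with an explicit clock. The intervals come from the text; everything else (class and instance names) is a hypothetical sketch, not Eureka's implementation.

```python
HEARTBEAT_INTERVAL = 30   # seconds: client-side renewal cadence
LEASE_DURATION = 90       # seconds without a heartbeat before eviction

class LeaseRegistry:
    """Toy lease table: instances stay registered while heartbeats arrive."""

    def __init__(self):
        self.last_renewal = {}   # instance id -> timestamp of last heartbeat

    def register(self, instance, now):
        self.last_renewal[instance] = now

    def renew(self, instance, now):
        if instance in self.last_renewal:
            self.last_renewal[instance] = now

    def evict_expired(self, now):
        expired = [i for i, t in self.last_renewal.items()
                   if now - t > LEASE_DURATION]
        for i in expired:
            del self.last_renewal[i]
        return expired

reg = LeaseRegistry()
reg.register("order-service:8080", now=0)
reg.renew("order-service:8080", now=30)      # regular heartbeat
print(reg.evict_expired(now=60))             # [] -> lease still valid
print(reg.evict_expired(now=125))            # evicted: 125 - 30 > 90
```

Real Eureka adds self-protection on top of this: when the cluster-wide renewal rate drops below a configured threshold (85% by default), the eviction step is skipped so a network glitch does not wipe the registry.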

Eureka prioritizes availability over strong consistency, making it suitable for multi‑datacenter deployments where uptime is paramount.

Nacos

Content excerpted from the Nacos official documentation.

Nacos provides service discovery, health monitoring, dynamic configuration, and DNS‑based routing, supporting both CP and AP modes.

Main Features

Service discovery & health checks: supports DNS- and RPC-based discovery, with rich health-check options.

Dynamic configuration: centralized, externalized, and hot-reloadable configuration management.

Dynamic DNS: weight-based routing and DNS-based service discovery.
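Weight-based routing means heavier instances receive proportionally more traffic. The snippet below illustrates the general weighted-random idea, not Nacos's actual selection code; the addresses and weights are made up.

```python
import random

def pick_weighted(instances):
    """instances: list of (address, weight) pairs; weight >= 0."""
    total = sum(w for _, w in instances)
    point = random.uniform(0, total)
    for addr, w in instances:
        point -= w
        if point <= 0:
            return addr
    return instances[-1][0]   # guard against float rounding at the edge

instances = [("10.0.0.1:8848", 70), ("10.0.0.2:8848", 20), ("10.0.0.3:8848", 10)]
counts = {a: 0 for a, _ in instances}
for _ in range(10_000):
    counts[pick_weighted(instances)] += 1
print(counts)   # roughly 7000 / 2000 / 1000
```

Setting an instance's weight to 0 drains traffic from it without deregistering it, which is handy for rolling upgrades.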

Nacos can act as both a Spring Cloud service registry and configuration center.

Consul

Consul is an open‑source tool from HashiCorp offering service discovery, health checks, KV store, multi‑datacenter support, and TLS‑enabled communication.

Call Flow

Producer registers with Consul.

Consul performs periodic health checks.

Consumer queries Consul for service address before calling the producer.

Key Characteristics

CP model using Raft for strong consistency.

Built‑in health checks, KV store, and multi‑datacenter capabilities.

Supports both HTTP and DNS interfaces.

ETCD

ETCD is a Go‑based distributed key‑value store that provides strong consistency via the Raft algorithm.

Features

Simple HTTP+JSON API.

Easy deployment and cross‑platform support.

Strong consistency, high availability, and fast write performance.

Optional SSL authentication and watch mechanism.
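ETCD's watch mechanism is built on revisions: every write increments a global revision, and a watch started from revision N replays everything after N, so a reconnecting client misses nothing. The toy model below shows only that semantic; real etcd uses MVCC storage and gRPC watch streams, and the key names here are hypothetical.

```python
class MiniKV:
    """Toy etcd-style store: each write gets a global revision, and a
    watch can replay history from any past revision."""

    def __init__(self):
        self.revision = 0
        self.data = {}      # key -> (value, mod_revision)
        self.events = []    # ordered (revision, key, value) history

    def put(self, key, value):
        self.revision += 1
        self.data[key] = (value, self.revision)
        self.events.append((self.revision, key, value))
        return self.revision

    def get(self, key):
        return self.data.get(key, (None, 0))

    def watch_from(self, start_revision):
        """Replay every event with revision > start_revision."""
        return [e for e in self.events if e[0] > start_revision]

kv = MiniKV()
rev = kv.put("/services/user/10.0.0.1:9090", "alive")
kv.put("/services/user/10.0.0.2:9090", "alive")
kv.put("/services/user/10.0.0.1:9090", "down")
print(kv.watch_from(rev))   # the two events after the first write
```

This revision-based replay is what lets a consumer resume watching after a disconnect without re-fetching the whole keyspace.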

Architecture

ETCD consists of an HTTP server, a Store module, the Raft consensus component, and a Write‑Ahead Log (WAL) for persistence.

Registry Comparison & Selection

Comparison

Health checks: Eureka relies on client heartbeats and explicitly configured health endpoints; Zookeeper and ETCD infer liveness from lost connections and expired sessions or leases; Consul offers the richest built-in checks (HTTP, TCP, TTL, and script-based).

Multi-datacenter support: Consul and Nacos support it natively; the others require extra work.

KV store: all except Eureka provide a KV service, which enables dynamic configuration.

CAP trade-offs: Eureka (AP) and Nacos (switchable between AP and CP) favor availability; Zookeeper, ETCD, and Consul (CP) favor consistency.

Watch support: Zookeeper and ETCD push change events through watches; Consul uses long-polling (blocking) queries; Eureka clients refresh their local cache by periodic polling.

Cluster monitoring: Zookeeper and Nacos expose detailed metrics; the others ship with only basic defaults.

Spring Cloud integration: all five have corresponding starters.

Selection Guidance

Prefer AP models (Eureka, Nacos) when availability outweighs consistency.

Match the registry language to your tech stack (Go‑based: ETCD, Consul; Java‑based: Zookeeper, Eureka, Nacos).

All options provide high‑availability clustering; choose based on operational familiarity.

Consider community activity and ecosystem support.

Tags: distributed systems, microservices, CAP theorem, service discovery, service registry
Written by

macrozheng

Dedicated to Java tech sharing and dissecting top open-source projects. Topics include Spring Boot, Spring Cloud, Docker, Kubernetes and more. Author’s GitHub project “mall” has 50K+ stars.
