
Understanding Eureka Service Registry: Server and Client Architecture and Mechanisms

This article explains the architecture and core mechanisms of Eureka, a service registry used in Spring Cloud micro‑service environments, covering server components such as Lease and caching, client registration and discovery processes, self‑preservation mode, consistency trade‑offs, and comparisons with other discovery solutions.

Yang Money Pot Technology Team

Eureka is a service discovery center originally open‑sourced by Netflix and later integrated into Spring Cloud. In a micro‑service architecture, multiple stateless service instances are dynamically created and removed, so a centralized registry is required for clients to obtain up‑to‑date instance information.

Architecture: Eureka consists of a Eureka Server and a Eureka Client. The server maintains service instance metadata and status and is typically deployed as multiple instances for high availability. The client is embedded in each business service to register, send heartbeats, and fetch the service list.

Server core components:

Resources: exposes the RESTful endpoints for registration, heartbeat renewal, and service list retrieval.

Controller: provides the web UI for viewing registered services and instance status.

PeerAwareInstanceRegistry: stores all registered Lease objects in a double-map structure, keyed first by application name and then by instance id:

private final ConcurrentHashMap<String, Map<String, Lease<InstanceInfo>>> registry;
PeerEurekaNodes: holds information about all Eureka Server nodes in the cluster for replication.

HttpReplicationClient: sends the HTTP requests to the other servers during replication.

Each Lease records the registration time and the last renewal time, and it expires 90 seconds (by default) after the last renewal. An EvictionTask runs every 60 seconds (configurable) to remove expired leases:

eureka.server.evictionIntervalTimerInMs=60000
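A minimal sketch of the bookkeeping behind lease expiry (the class and method names here are simplified stand-ins, not Eureka's exact Lease implementation):

// Simplified lease model: a heartbeat renews the lease, and the eviction task
// removes any lease that has not been renewed within its duration (90s by default).
class SimpleLease<T> {
    private final T holder;                     // the registered instance
    private final long durationMs;              // lease duration, e.g. 90_000
    private volatile long lastUpdateTimestamp;  // refreshed on every heartbeat

    SimpleLease(T holder, long durationMs) {
        this.holder = holder;
        this.durationMs = durationMs;
        this.lastUpdateTimestamp = System.currentTimeMillis();
    }

    // Called when a heartbeat (renewal) arrives for this instance.
    void renew() {
        lastUpdateTimestamp = System.currentTimeMillis();
    }

    // The EvictionTask removes leases for which this returns true.
    boolean isExpired() {
        return System.currentTimeMillis() > lastUpdateTimestamp + durationMs;
    }
}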

Incremental changes are kept in a recentlyChangedQueue (default retention 3 minutes) and returned to clients that request delta updates:

private ConcurrentLinkedQueue<RecentlyChangedItem> recentlyChangedQueue;
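The retention behavior can be sketched roughly as follows (illustrative names; Eureka's actual implementation wraps the changed InstanceInfo and schedules its own cleanup task internally):

import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative delta queue: each change is appended with a timestamp and pruned
// once it is older than the retention window (3 minutes by default).
class RecentChangeQueue {
    private static final long RETENTION_MS = 3 * 60 * 1000;

    // Hypothetical pair of a changed instance id and the time the change was recorded.
    record ChangedItem(String instanceId, long recordedAt) {}

    private final ConcurrentLinkedQueue<ChangedItem> queue = new ConcurrentLinkedQueue<>();

    void recordChange(String instanceId) {
        queue.add(new ChangedItem(instanceId, System.currentTimeMillis()));
    }

    // Eureka schedules a comparable cleanup periodically; delta responses to clients
    // are built from whatever is still in the queue.
    void evictStaleEntries() {
        long cutoff = System.currentTimeMillis() - RETENTION_MS;
        ChangedItem head;
        while ((head = queue.peek()) != null && head.recordedAt() < cutoff) {
            queue.poll();
        }
    }
}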

Response caching uses a two-layer design: a ReadWriteCache, populated from the registry on a cache miss, and a ReadOnlyCache, which serves most read traffic and is refreshed from the ReadWriteCache periodically (every 30 seconds by default):

eureka.server.responseCacheUpdateIntervalMs=30000
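The read path of the two-layer cache can be sketched as follows (a simplified approximation; Eureka's ResponseCacheImpl additionally serializes and compresses the payloads it caches):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative two-layer response cache: reads hit the read-only map first; on a miss
// they fall through to the read-write map, which loads the payload from the registry.
// A scheduled task copies read-write entries into the read-only map (every 30s by default).
class TwoLevelResponseCache {
    private final Map<String, String> readOnlyCache = new ConcurrentHashMap<>();
    private final Map<String, String> readWriteCache = new ConcurrentHashMap<>();
    private final Function<String, String> registryLoader; // builds a payload on a miss

    TwoLevelResponseCache(Function<String, String> registryLoader) {
        this.registryLoader = registryLoader;
    }

    String get(String key) {
        String value = readOnlyCache.get(key);
        if (value != null) {
            return value;
        }
        // Miss in the read-only layer: consult the read-write layer, loading if needed.
        return readWriteCache.computeIfAbsent(key, registryLoader);
    }

    // Scheduled at eureka.server.responseCacheUpdateIntervalMs.
    void refreshReadOnlyCache() {
        readOnlyCache.putAll(readWriteCache);
    }
}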

Self-preservation mode stops evicting expired leases when the server receives fewer heartbeats per minute than a configurable threshold (by default 85% of the expected number), on the assumption that a network problem, rather than a mass instance failure, is the likely cause. The mode can be disabled via configuration:

eureka.server.enableSelfPreservation=false

Eureka clusters are AP systems: they favor availability over strong consistency, and a single surviving node can keep serving registrations and queries.
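The threshold arithmetic can be sketched as follows (an approximation assuming the default 30-second renewal interval, i.e. two heartbeats per instance per minute):

// Rough sketch of the self-preservation check: if renewals received in the last
// minute drop below 85% of what the registered instances should be sending, the
// server stops evicting expired leases.
class SelfPreservationCheck {
    private static final double RENEWAL_PERCENT_THRESHOLD = 0.85;    // default threshold
    private static final int HEARTBEATS_PER_INSTANCE_PER_MINUTE = 2; // 30s renewal interval

    static boolean shouldEnterSelfPreservation(int registeredInstances, int renewalsLastMinute) {
        int expectedRenewals = registeredInstances * HEARTBEATS_PER_INSTANCE_PER_MINUTE;
        int threshold = (int) (expectedRenewals * RENEWAL_PERCENT_THRESHOLD);
        return renewalsLastMinute < threshold;
    }

    public static void main(String[] args) {
        // Example: 10 instances should send about 20 renewals per minute; the threshold is 17.
        System.out.println(shouldEnterSelfPreservation(10, 15)); // true  -> stop evictions
        System.out.println(shouldEnterSelfPreservation(10, 19)); // false -> evict normally
    }
}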

Client responsibilities include registration, periodic heartbeats, deregistration on shutdown, and fetching the service list. Configuration can disable registration or fetching when they are not needed:

eureka.client.registerWithEureka=false
eureka.client.fetchRegistry=false
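For context, a minimal client configuration might look like the following (the server addresses are placeholders; the lease-related values shown are the usual defaults rather than values taken from this article):

# Where to register and fetch the registry from (placeholder addresses)
eureka.client.serviceUrl.defaultZone=http://eureka-peer1:8761/eureka/,http://eureka-peer2:8761/eureka/
# Send a heartbeat every 30 seconds (default)
eureka.instance.leaseRenewalIntervalInSeconds=30
# Expire the lease 90 seconds after the last renewal (default)
eureka.instance.leaseExpirationDurationInSeconds=90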

Clients obtain server addresses through a ClusterResolver (by default a ConfigClusterResolver wrapped by a ZoneAffinityClusterResolver) and create two EurekaHttpClient instances, one for queries and one for registration, each decorated with metrics collection, redirect handling, retries, and session management:

public interface ClusterResolver<T extends EurekaEndpoint> {
    // Returns the Eureka Server endpoints the client can talk to.
    List<T> getClusterEndpoints();
}

The retryable client maintains a quarantine set of servers whose requests have failed; when the quarantined servers exceed the configured fraction of the total (two-thirds by default), the set is cleared so that all servers become candidates again:

eureka.client.transport.retryableClientQuarantineRefreshPercentage=0.66
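A sketch of the quarantine behavior (illustrative names; the real logic lives inside RetryableEurekaHttpClient):

import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative quarantine bookkeeping: servers that fail a request are quarantined
// and skipped on retries; once too large a share of the cluster is quarantined,
// the set is cleared so every server becomes a candidate again.
class QuarantineSet {
    private static final double REFRESH_PERCENTAGE = 0.66; // default threshold

    private final Set<String> quarantined = new HashSet<>();

    void markFailed(String serverAddress) {
        quarantined.add(serverAddress);
    }

    // Returns the servers that may be used for the next request.
    List<String> candidates(List<String> allServers) {
        if (quarantined.size() >= allServers.size() * REFRESH_PERCENTAGE) {
            quarantined.clear();
        }
        return allServers.stream().filter(s -> !quarantined.contains(s)).toList();
    }
}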

The sessioned client periodically recreates the underlying HTTP client so that load is rebalanced across server nodes; the default session length is 20 minutes, randomized by up to half the duration in either direction:

eureka.client.transport.sessionedClientReconnectIntervalSeconds=1200
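The randomization keeps clients from all reconnecting at the same moment; a sketch of the session-length calculation described above (illustrative names):

import java.util.concurrent.ThreadLocalRandom;

// Illustrative session-length randomization: the effective session is the configured
// duration plus or minus up to half of it, so a 20-minute setting yields roughly 10 to 30 minutes.
class SessionDuration {
    static long randomizedSessionMs(long configuredSessionMs) {
        double jitter = ThreadLocalRandom.current().nextDouble() - 0.5; // in [-0.5, 0.5)
        return configuredSessionMs + (long) (configuredSessionMs * jitter);
    }

    public static void main(String[] args) {
        long twentyMinutes = 20 * 60 * 1000L;
        System.out.println(randomizedSessionMs(twentyMinutes)); // somewhere between ~10 and ~30 minutes
    }
}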

Service list updates are performed incrementally when possible. Clients process three delta types (ADDED, MODIFIED, DELETED) by adding, updating, or removing instances in the local cache; applying a delta is idempotent, so receiving the same change twice is harmless. If the server's delta queue has expired or an inconsistency is detected via hashcode comparison, a full fetch is triggered.
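Applying a delta to the local cache can be sketched as follows (simplified types; the real client works on Applications/InstanceInfo objects and compares an apps hashcode sent by the server):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative delta application: each change adds, replaces, or removes one entry in
// the local cache, so applying the same delta twice leaves the cache unchanged.
class LocalRegistryCache {
    enum ActionType { ADDED, MODIFIED, DELETED }

    // Hypothetical delta carrying the action, the instance id, and its serialized info.
    record Delta(ActionType action, String instanceId, String instanceInfo) {}

    private final Map<String, String> instances = new ConcurrentHashMap<>();

    void apply(Delta delta) {
        switch (delta.action()) {
            case ADDED, MODIFIED -> instances.put(delta.instanceId(), delta.instanceInfo());
            case DELETED -> instances.remove(delta.instanceId());
        }
    }

    // Stand-in for Eureka's hashcode comparison: a mismatch triggers a full fetch.
    boolean matchesServerHash(String serverHash) {
        return String.valueOf(instances.hashCode()).equals(serverHash);
    }
}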

Other discovery solutions offer different trade-offs: ZooKeeper and etcd are CP, Consul is CP, and Nacos can be configured as either CP or AP. Eureka's AP nature offers higher availability at the cost of eventual consistency, which is acceptable for service discovery scenarios.

Conclusion: Eureka's server and client design, cache strategies, self-preservation mode, and incremental updates provide a highly available service registry suitable for cloud-native micro-service architectures, trading strong consistency for resilience during network partitions or node failures.
