Understanding the Underlying Mechanism of Nacos Service Registration
This article explains how Nacos registers services by detailing the client‑side request assembly, random node selection in a cluster, routing forwarding, and server‑side handling, while providing code snippets, diagrams, and practical tips for debugging the registration flow.
Hello everyone, I am Wukong.
Preface
In the previous article we explained how to use Nacos as a registration and configuration center.
Now we will discuss the underlying principles of Nacos's registration service.
Nacos, as a registration center, receives registration requests from client service instances and stores the registration information for management.
What steps does a registration request go through?
Knowledge Points Preview
The overall flow breaks down into the following points:
Cluster Environment: What is the topology when Nacos runs in a cluster?
Assemble Request: How the client assembles the registration request and calls Nacos remotely.
Random Node: How the client randomly selects a Nacos node for load balancing.
Routing Forward: How a Nacos node forwards a request that does not belong to it.
Process Request: How the designated node parses the instance information and stores it in a custom memory structure.
Final Consistency: How Nacos uses its self‑developed Distro protocol with delayed asynchronous tasks to synchronize registration data across the cluster.
Asynchronous Retry: How the client retries with another node if registration fails, ensuring high availability.
These points will be explained with diagrams and source‑code analysis. If any source code is unclear, refer to the diagrams and then compare with the code.
Tip: The Nacos version used in this article is 2.0.4.
1. Origin: Initiating Registration
1.1 Small Tips for Reading Source Code
Adding the annotation @EnableDiscoveryClient enables automatic service registration to Nacos.
Where exactly does the registration happen and what does the registration data look like?
A useful trick is to start with the example module in the Nacos source code; its App class demonstrates a minimal registration call.
You can also issue a curl command directly:
```shell
curl -X POST 'http://127.0.0.1:8848/nacos/v1/ns/instance?serviceName=nacos.naming.serviceName&ip=20.18.7.11&port=8080'
```

Question: when we add @EnableDiscoveryClient, how does the service get registered automatically?
1.2 Flowchart of Initiating Registration
The code flow is: assemble the instance information, build the registration request, then send it through the RPC client. Sections 1.3 to 1.5 walk through each step.
1.3 Assembling Instance Information
The core code assembles the instance information and stores it in a variable.
Debugging shows that the assembled instance carries the service's IP, port, weight, health and enabled flags, an ephemeral flag, the cluster name, and a metadata map.
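The debug screenshot from the original post is not reproduced here, but the shape of the assembled instance can be sketched with a minimal stand-in class. The field names follow Nacos's Instance POJO; the concrete values are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for com.alibaba.nacos.api.naming.pojo.Instance,
// showing the fields the client fills in before registration.
public class InstanceSketch {
    public String ip = "192.168.10.197";   // illustrative address
    public int port = 8080;
    public double weight = 1.0;            // load-balancing weight, defaults to 1
    public boolean healthy = true;
    public boolean enabled = true;
    public boolean ephemeral = true;       // ephemeral instances take the Distro (AP) path
    public String clusterName = "DEFAULT";
    public Map<String, String> metadata = new HashMap<>();

    public static void main(String[] args) {
        InstanceSketch instance = new InstanceSketch();
        System.out.println(instance.ip + ":" + instance.port
                + " ephemeral=" + instance.ephemeral);
    }
}
```

The ephemeral flag matters later: ephemeral instances are the ones synchronized across the cluster by the Distro protocol described in this series.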
1.4 Assembling the Registration Request
The core method doRegisterService() builds the request, which contains the previously assembled instance together with the namespace, serviceName, and groupName.
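As an illustration of what doRegisterService() packs together, here is a simplified stand-in (the class and method names below are hypothetical, not Nacos source). One detail worth knowing: Nacos keys services internally by group plus name, joined with "@@":

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative request envelope: in Nacos 2.x the client wraps the instance
// plus its naming coordinates into a request object before the RPC call.
public class RegisterRequestSketch {
    // Nacos stores services under "groupName@@serviceName"
    public static String groupedServiceName(String groupName, String serviceName) {
        return groupName + "@@" + serviceName;
    }

    public static Map<String, Object> buildRequest(String namespace, String groupName,
                                                   String serviceName, Object instance) {
        Map<String, Object> request = new LinkedHashMap<>();
        request.put("namespace", namespace);   // tenant isolation
        request.put("groupName", groupName);   // defaults to DEFAULT_GROUP
        request.put("serviceName", serviceName);
        request.put("instance", instance);     // the instance assembled in 1.3
        return request;
    }

    public static void main(String[] args) {
        System.out.println(groupedServiceName("DEFAULT_GROUP", "nacos.naming.serviceName"));
    }
}
```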
1.5 Remote Call Invocation
Inside requestToServer() the RpcClient performs the remote call:
```java
response = this.currentConnection.request(request, timeoutMills);
```

This sends the request to one Nacos node; in a cluster, the client must first choose which node to connect to.
2. Cluster Environment: Distributed Preconditions
In a Nacos cluster, the client randomly selects a node to register.
2.1 Building a Nacos Cluster
For demonstration, a local cluster with three Nacos instances (same IP, different ports) was set up:
192.168.10.197:8848
192.168.10.197:8858
192.168.10.197:8868

Both Service A and Service B are configured with the same cluster address list:
```properties
spring.cloud.nacos.discovery.server-addr=192.168.10.197:8848,192.168.10.197:8858,192.168.10.197:8868
```

Question: does Service A register to all Nacos nodes or only one? If only one, which node?
Answer: it registers to only one node. Before the client sends the registration request, it randomly picks one address from the configured Nacos server list.
This design provides load balancing and high availability.
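This pick-randomly-then-advance-on-failure behavior can be sketched with a minimal class, assuming a plain list of address strings (ServerSelector is a hypothetical name; the real logic lives in the RpcClient and its server-list manager):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Random;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the client's node selection: seed the index randomly once,
// then step through the list round-robin when a connection attempt fails.
public class ServerSelector {
    private final List<String> servers;
    private final AtomicInteger currentIndex = new AtomicInteger();

    public ServerSelector(List<String> servers) {
        this.servers = servers;
        // random starting point in [0, servers.size()), as in nextRpcServer()
        currentIndex.set(new Random().nextInt(servers.size()));
    }

    // next candidate: increment and wrap, so a failed node is simply skipped
    public String next() {
        int index = Math.abs(currentIndex.incrementAndGet() % servers.size());
        return servers.get(index);
    }

    public static void main(String[] args) {
        ServerSelector selector = new ServerSelector(Arrays.asList(
                "192.168.10.197:8848", "192.168.10.197:8858", "192.168.10.197:8868"));
        // first attempt; on failure the caller simply asks for the next address
        System.out.println("try   " + selector.next());
        System.out.println("retry " + selector.next());
    }
}
```

The random seed spreads clients evenly across the cluster (load balancing), while the round-robin step gives each failed registration a different node to retry against (high availability).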
3. Random Node: Equal Opportunity
The client generates a random number and selects a Nacos node from the server list:
The relevant code:
```java
// get a Nacos node: use the recommended server if present, otherwise pick one
serverInfo = recommendServer.get() == null ? nextRpcServer() : recommendServer.get();
// connect to the chosen Nacos node
connectToServer = connectToServer(serverInfo);
// keep the connection for subsequent requests
this.currentConnection = connectToServer;
```

The nextRpcServer() method picks a random address:
```java
// seed the index with a random int in [0, serverList.size())
currentIndex.set(new Random().nextInt(serverList.size()));
// increment the index and wrap around the list size
int index = currentIndex.incrementAndGet() % getServerList().size();
return getServerList().get(index);
```

4. Routing Forward: Not My Responsibility
4.1 Request and Forwarding Flow
A curl command is used to simulate a registration request to node 127.0.0.1:8848:
```shell
curl -X POST 'http://127.0.0.1:8848/nacos/v1/ns/instance?serviceName=nacos.naming.serviceName&ip=20.18.7.11&port=8080'
```

If the receiving node determines that the request does not belong to it, it forwards the request to the responsible node, determined by hashing the request's distro tag.
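The ownership decision can be reproduced in isolation. Here is a minimal sketch, assuming the distro tag is the service name and every node sees the same healthy server list (DistroRouting is a hypothetical class name; the hash mirrors the distroHash() logic shown in 4.2 below):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of Distro request routing: each tag hashes deterministically onto
// exactly one cluster member, so every node can compute the owner locally.
public class DistroRouting {
    // non-negative hash of the distro tag, as in Nacos's DistroMapper
    public static int distroHash(String responsibleTag) {
        return Math.abs(responsibleTag.hashCode() % Integer.MAX_VALUE);
    }

    // map the tag onto the (identically ordered) healthy server list
    public static String mapSrv(String distroTag, List<String> healthyServers) {
        int index = distroHash(distroTag) % healthyServers.size();
        return healthyServers.get(index);
    }

    public static void main(String[] args) {
        List<String> servers = Arrays.asList(
                "192.168.10.197:8848", "192.168.10.197:8858", "192.168.10.197:8868");
        // every node computes the same owner, so forwarding is unambiguous
        System.out.println("owner of nacos.naming.serviceName -> "
                + mapSrv("nacos.naming.serviceName", servers));
    }
}
```

Because the computation is deterministic and based only on the tag and the shared member list, any node can decide locally whether to handle a request or forward it, with no coordination round-trip.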
4.2 Routing Forward Source Code
The entry point is DistroFilter.java :
naming/src/main/java/com/alibaba/nacos/naming/web/DistroFilter.java

Inside doFilter() the target server is determined:
```java
// DistroFilter#doFilter(): find the node responsible for this distro tag
final String targetServer = distroMapper.mapSrv(distroTag);

// DistroMapper#mapSrv(): map the tag onto the healthy server list
int index = distroHash(responsibleTag) % servers.size();
return servers.get(index);

// distroHash(): non-negative hash of the tag (service name, or ip:port)
return Math.abs(responsibleTag.hashCode() % Integer.MAX_VALUE);
```

5. Processing Request: Final Step
On the server, v1 registration requests are handled by InstanceController; the v2 API uses InstanceControllerV2.
Example registration command:
```shell
curl -X POST 'http://127.0.0.1:8858/nacos/v1/ns/instance?serviceName=nacos.naming.serviceName&ip=20.18.7.11&port=8080'
```

The core server-side code stores the instance inside a synchronized block and then triggers three actions:
Store the instance in an in-memory ConcurrentHashMap.
Enqueue a task that pushes the updated instance list to subscribed clients via UDP.
Start a delayed (about 1 s) task that synchronizes the data to the other Nacos nodes using the Distro protocol.
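Those three actions can be sketched together in one hypothetical handler. RegistrySketch and syncToPeers below are illustrative names, not Nacos source; the real code lives in the v1 ServiceManager and the v2 client managers:

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative server-side handling: store the instance in memory,
// then hand off client push and peer sync to asynchronous tasks.
public class RegistrySketch {
    // serviceName -> instance list: the in-memory registry
    public static final ConcurrentHashMap<String, List<String>> REGISTRY = new ConcurrentHashMap<>();
    public static final ScheduledExecutorService SYNC_POOL = Executors.newSingleThreadScheduledExecutor();

    public static void register(String serviceName, String instance) {
        // 1. store in memory (the real code guards this update with a lock)
        REGISTRY.computeIfAbsent(serviceName, k -> new CopyOnWriteArrayList<>()).add(instance);
        // 2. a push of the new list to subscribed clients would be enqueued here (UDP in v1)
        // 3. delayed Distro sync to the other cluster members, about 1 s later
        SYNC_POOL.schedule(() -> syncToPeers(serviceName), 1, TimeUnit.SECONDS);
    }

    // placeholder: the real task serializes the data and sends it to peer nodes
    public static void syncToPeers(String serviceName) {
        System.out.println("sync " + serviceName + " to peers");
    }

    public static void main(String[] args) {
        register("nacos.naming.serviceName", "20.18.7.11:8080");
        System.out.println(REGISTRY);
        SYNC_POOL.shutdown();
    }
}
```

The key design point is that only the in-memory write happens on the request path; the client push and the cluster sync are both asynchronous, which is what makes registration fast and the cluster eventually consistent.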
6. Summary
This article demonstrated how a single registration request travels: the client randomly selects a Nacos node, the node may forward the request based on hashing, and the server stores the instance and synchronizes it across the cluster.
Future articles will dive deeper into the Distro consistency protocol and compare Nacos's storage/synchronization with Eureka.
Next preview: Nacos's consistency protocol Distro – unveiling the AP architecture.
Wukong Talks Architecture
Explaining distributed systems and architecture through stories. Author of the "JVM Performance Tuning in Practice" column, open-source author of "Spring Cloud in Practice PassJava", and independently developed a PMP practice quiz mini-program.