
Using Ctrip’s Apollo Distributed Configuration Center: Concepts, Setup, and Practical Examples

This article provides a comprehensive guide to Ctrip’s open‑source Apollo configuration center, covering its core concepts, features, architecture, deployment dimensions, client design, code examples for a SpringBoot project, testing procedures, and Kubernetes deployment with Docker.


Today we dive deep into Ctrip’s open‑source distributed configuration center Apollo, which offers functionality comparable to Nacos.

1. Basic Concepts

Apollo was created to meet the growing demand for real‑time, environment‑aware, and cluster‑aware configuration management, surpassing traditional file‑based or database approaches.

Background

As applications become more complex, the need for dynamic configuration—feature toggles, parameters, server addresses, gray releases, multi‑environment and multi‑cluster management, and robust permission/audit mechanisms—has increased, prompting the development of Apollo.

Overview

Apollo is an open‑source configuration management center from Ctrip’s framework team that centralizes configuration for different environments and clusters, pushes updates in real time, and provides strict permission and workflow controls.

Key Features

Simple deployment

Gray release support

Version management

Open API platform

Client configuration monitoring

Native Java and .NET clients

Hot‑update (real‑time push)

Permission management, release audit, operation audit

Unified management of environments and clusters

Fundamental Model

The basic workflow consists of three steps:

User modifies and publishes configuration in the Apollo console.

The configuration center notifies Apollo clients of the update.

Clients pull the latest configuration, update local cache, and notify the application.
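The three steps above can be sketched as a minimal, self-contained simulation in plain Java (no Apollo dependency; all class and method names here are illustrative, not Apollo's actual API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy config center: publish() stores a value and notifies clients; each client
// then pulls the latest value, updates its local cache, and notifies the app.
public class ConfigFlowDemo {
    interface ChangeListener { void onChange(String key, String newValue); }

    static class ConfigCenter {
        private final Map<String, String> store = new HashMap<>();
        private final List<ToyClient> clients = new ArrayList<>();

        void register(ToyClient c) { clients.add(c); }

        // Step 1: user publishes a change; step 2: center notifies clients.
        void publish(String key, String value) {
            store.put(key, value);
            for (ToyClient c : clients) c.notifyUpdate(key);
        }

        String fetch(String key) { return store.get(key); }
    }

    static class ToyClient {
        private final ConfigCenter center;
        private final Map<String, String> localCache = new HashMap<>();
        private final ChangeListener app;

        ToyClient(ConfigCenter center, ChangeListener app) {
            this.center = center;
            this.app = app;
            center.register(this);
        }

        // Step 3: on notification, pull the latest value, cache it, notify the app.
        void notifyUpdate(String key) {
            String latest = center.fetch(key);
            localCache.put(key, latest);
            app.onChange(key, latest);
        }

        String get(String key, String defaultValue) {
            return localCache.getOrDefault(key, defaultValue);
        }
    }

    public static void main(String[] args) {
        ConfigCenter center = new ConfigCenter();
        ToyClient client = new ToyClient(center,
                (k, v) -> System.out.println("app notified: " + k + "=" + v));
        center.publish("test", "123456");
        System.out.println(client.get("test", "default")); // 123456
    }
}
```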

Four Dimensions of Apollo

Apollo manages key‑value configurations across four dimensions:

application (the app identifier)

environment (DEV, FAT, UAT, PRO)

cluster (e.g., Beijing, Shanghai)

namespace (logical grouping such as database, RPC, etc.)

Application

Clients must know their app.id to fetch the correct configuration.

Environment

Typical environments include FAT, UAT, DEV, and PRO. The environment is selected via the env variable.

Cluster

Clusters group instances, often by data center, allowing the same key to have different values per cluster.

Namespace

Namespaces act like separate configuration files (e.g., application.yml) and come in three types: private (visible only to the owning application), public (shareable across applications), and linked (a namespace that inherits a public namespace and can override its values).

Local Cache

Apollo clients cache configuration locally to survive server outages. Default cache paths are /opt/data/{appId}/config-cache on Linux/macOS and C:\opt\data\{appId}\config-cache on Windows. Cached files follow the pattern {appId}+{cluster}+{namespace}.properties.
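The cache path and file-name pattern above can be reproduced with a small helper; this is an illustrative sketch for clarity, not an Apollo client API:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Builds the local cache file path described above:
// {cacheDir}/{appId}/config-cache/{appId}+{cluster}+{namespace}.properties
// The helper name is illustrative, not part of the Apollo client.
public class CachePathDemo {
    static Path cacheFile(String cacheDir, String appId, String cluster, String namespace) {
        String fileName = appId + "+" + cluster + "+" + namespace + ".properties";
        return Paths.get(cacheDir, appId, "config-cache", fileName);
    }

    public static void main(String[] args) {
        System.out.println(cacheFile("/opt/data", "apollo-test", "default", "application"));
        // /opt/data/apollo-test/config-cache/apollo-test+default+application.properties
    }
}
```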

Client Design

The client maintains a long‑living HTTP connection (long‑polling) to receive push updates instantly. If no update occurs within 60 seconds, the server returns 304, and the client re‑establishes the connection. A fallback polling interval (default 5 minutes) can be overridden with apollo.refreshInterval .
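The long-polling loop can be sketched with a stubbed server: a 304 means "nothing changed within the hold time", so the client simply re-issues the poll, while a 200 triggers a pull of the latest configuration. This is a self-contained simulation, not the Apollo client's real code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simulates the client's long-poll loop against a stub server.
public class LongPollDemo {
    static final int HTTP_NOT_MODIFIED = 304;
    static final int HTTP_OK = 200;

    interface Server { int longPoll(); } // blocks up to ~60s in real Apollo

    static int pollOnce(Server server, Runnable onChange) {
        int status = server.longPoll();
        if (status == HTTP_OK) {
            onChange.run();  // pull latest config and update the local cache
        }
        return status;       // 304: no change, just re-establish the connection
    }

    public static void main(String[] args) {
        AtomicInteger pulls = new AtomicInteger();
        AtomicInteger calls = new AtomicInteger();
        // Stub: the first two polls time out (304), the third reports a change (200).
        Server stub = () -> calls.incrementAndGet() < 3 ? HTTP_NOT_MODIFIED : HTTP_OK;

        for (int i = 0; i < 3; i++) {
            pollOnce(stub, pulls::incrementAndGet);
        }
        System.out.println("config pulls: " + pulls.get()); // config pulls: 1
    }
}
```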

Overall Design

Config Service handles configuration reads and pushes; Admin Service manages modifications and releases. Both are stateless, register with Eureka, and are discovered via a Meta Server. Clients obtain service lists from the Meta Server and perform load‑balanced calls.

Availability Considerations

| Scenario | Impact | Degradation | Reason |
| --- | --- | --- | --- |
| One Config Service down | No impact | — | Stateless; clients reconnect to another instance |
| All Config Services down | Clients cannot read latest config; Portal unaffected | Clients fall back to local cache on restart | — |
| One Admin Service down | No impact | — | Stateless; Portal reconnects to another instance |
| All Admin Services down | Portal cannot update config; clients unaffected | — | — |
| One Portal down | No impact | — | SLB redirects to a healthy instance |
| All Portals down | Portal unavailable; clients unaffected | — | — |
| One data center down | No impact | — | Multi-data-center deployment with synchronized data |

2. Creating an Apollo Project and Configuration

Log into the Apollo portal (default credentials: user apollo, password admin) and create a project with app.id=apollo-test and app.name=apollo-demo. Add a configuration key test=123456 and publish it.

3. Building a SpringBoot Client Project

1. Maven Dependency

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.8.RELEASE</version>
        <relativePath/>
    </parent>
    <groupId>club.mydlq</groupId>
    <artifactId>apollo-demo</artifactId>
    <version>0.0.1</version>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>com.ctrip.framework.apollo</groupId>
            <artifactId>apollo-client</artifactId>
            <version>1.4.0</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

2. Application.yml

# Application configuration
server:
  port: 8080
spring:
  application:
    name: apollo-demo

# Apollo configuration
app:
  id: apollo-test  # Application ID
apollo:
  cacheDir: /opt/data/               # Local cache directory
  cluster: default                    # Cluster to use
  meta: http://192.168.2.11:30002     # DEV environment config center address
  autoUpdateInjectedSpringProperties: true
  bootstrap:
    enabled: true
    namespaces: application
    eagerLoad:
      enabled: false

3. Test Controller

import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TestController {
    @Value("${test:defaultValue}")
    private String test;

    @GetMapping("/test")
    public String test(){
        return "value of test: " + test;
    }
}

4. Application Entry Point

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

5. JVM Startup Parameters

These settings can also be passed as JVM options instead of application.yml; for example, when running inside Kubernetes, add:

-Dapollo.configService=http://192.168.2.11:30002 -Denv=DEV
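Assuming the jar produced by the Maven build above (apollo-demo-0.0.1.jar; the exact file name depends on your build), the full launch command might look like:

```
java -Dapollo.configService=http://192.168.2.11:30002 \
     -Denv=DEV \
     -jar apollo-demo-0.0.1.jar
```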

4. Testing the Client

Start the application and request http://localhost:8080/test. The response should be value of test: 123456, confirming the value comes from Apollo.

Modify the value in Apollo to 666666 , republish, and the endpoint now returns the updated value without restarting.

Rollback the change in Apollo; the endpoint reverts to the previous value.

If the config service becomes unreachable, the client falls back to the local cache, still returning the last known value. Deleting the cache forces the client to use the default value defined in the code.
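The fallback order described here (remote config service, then local cache, then the default declared in code) can be sketched in a few lines of self-contained Java; the names are illustrative, not the Apollo client implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Resolution order when reading a key: remote config service first, then the
// local file cache, finally the default declared in code (as with @Value).
public class FallbackDemo {
    static String resolve(Optional<String> remote, Map<String, String> localCache,
                          String key, String codeDefault) {
        if (remote.isPresent()) return remote.get();                 // server reachable
        if (localCache.containsKey(key)) return localCache.get(key); // server down, cache intact
        return codeDefault;                                          // cache deleted too
    }

    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<>();
        cache.put("test", "123456");

        System.out.println(resolve(Optional.of("666666"), cache, "test", "defaultValue")); // 666666
        System.out.println(resolve(Optional.empty(), cache, "test", "defaultValue"));      // 123456
        cache.remove("test"); // simulate deleting the local cache file
        System.out.println(resolve(Optional.empty(), cache, "test", "defaultValue"));      // defaultValue
    }
}
```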

5. Exploring Cluster and Namespace

By creating different environments (PRO), clusters (beijing, shanghai), and namespaces (dev-1, dev-2), you can observe how Apollo selects configuration based on env , apollo.cluster , and apollo.bootstrap.namespaces settings.
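The cluster selection can be simulated in plain Java: a key published for the requested cluster wins, otherwise the value from the default cluster applies (this mirrors the behavior described above; the key format and names are illustrative only, not Apollo's storage model):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative lookup across the dimensions: try the requested cluster first,
// then fall back to the "default" cluster for the same env/namespace.
public class ClusterLookupDemo {
    // key format: env/cluster/namespace/key (illustrative)
    static final Map<String, String> published = new HashMap<>();

    static String get(String env, String cluster, String namespace, String key) {
        String specific = published.get(env + "/" + cluster + "/" + namespace + "/" + key);
        if (specific != null) return specific;
        return published.get(env + "/default/" + namespace + "/" + key);
    }

    public static void main(String[] args) {
        published.put("DEV/default/application/test", "123456");
        published.put("DEV/beijing/application/test", "888888");

        System.out.println(get("DEV", "beijing", "application", "test"));  // 888888
        System.out.println(get("DEV", "shanghai", "application", "test")); // 123456 (fallback)
    }
}
```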

6. Deploying the SpringBoot Application on Kubernetes

1. Build Docker Image

FROM openjdk:8u222-jre-slim
VOLUME /tmp
ADD target/*.jar app.jar
RUN sh -c 'touch /app.jar'
ENV JAVA_OPTS="-XX:MaxRAMPercentage=80.0 -Duser.timezone=Asia/Shanghai"
ENV APP_OPTS=""
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar $APP_OPTS"]

Build with:

docker build -t mydlqclub/springboot-apollo:0.0.1 .

2. Kubernetes Manifests

apiVersion: v1
kind: Service
metadata:
  name: springboot-apollo
spec:
  type: NodePort
  ports:
    - name: server
      nodePort: 31080
      port: 8080
      targetPort: 8080
    - name: management
      nodePort: 31081
      port: 8081
      targetPort: 8081
  selector:
    app: springboot-apollo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springboot-apollo
  labels:
    app: springboot-apollo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: springboot-apollo
  template:
    metadata:
      name: springboot-apollo
      labels:
        app: springboot-apollo
    spec:
      restartPolicy: Always
      containers:
        - name: springboot-apollo
          image: mydlqclub/springboot-apollo:0.0.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              name: server
          env:
            - name: JAVA_OPTS
              value: "-Denv=DEV"
            - name: APP_OPTS
              value: "
                     --app.id=apollo-test
                     --apollo.bootstrap.enabled=true
                     --apollo.bootstrap.eagerLoad.enabled=false
                     --apollo.cacheDir=/opt/data/
                     --apollo.cluster=default
                     --apollo.bootstrap.namespaces=application
                     --apollo.autoUpdateInjectedSpringProperties=true
                     --apollo.meta=http://service-apollo-config-server-dev.mydlqcloud:8080    
                     "
          resources:
            limits:
              memory: 1000Mi
              cpu: 1000m
            requests:
              memory: 500Mi
              cpu: 500m

Deploy with:

kubectl apply -f springboot-apollo.yaml -n mydlqcloud

Access the service via the NodePort (e.g., http://192.168.2.11:31080/test) and observe the value returned from Apollo.


Written by

Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
