Comprehensive Guide to Using Apollo Distributed Configuration Center
This article provides an in‑depth tutorial on Apollo, Ctrip's open‑source distributed configuration center, covering its core concepts, architecture, four‑dimensional configuration model, client design, deployment, and step‑by‑step instructions for creating projects, adding configurations, testing dynamic updates, and running the service in Kubernetes with Docker.
We begin with an overview of Apollo, explaining why traditional file‑based or database configuration approaches no longer meet the needs of modern microservices and introducing Apollo as a solution for real‑time, environment‑aware, cluster‑aware, and namespace‑aware configuration management.
Basic Concepts
Apollo allows users to modify and publish configurations via a central console, which then notifies clients to pull updates and apply them instantly.
Key Features
Simple deployment
Gray release support
Version management
Open API platform
Client configuration monitoring
Native Java and .Net clients
Hot‑update of configurations
Permission management, release audit, operation audit
Unified management across environments and clusters
Four‑Dimensional Model
Apollo organizes key‑value configurations by application, environment, cluster, and namespace, enabling fine‑grained control over which configuration applies to which service instance.
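The four‑dimensional lookup can be illustrated with a minimal sketch (not Apollo's actual implementation): configurations are scoped by appId, environment, cluster, and namespace, and a cluster that has no value of its own falls back to the default cluster.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of four-dimensional config resolution:
// appId -> environment -> cluster -> namespace, with fallback to
// the "default" cluster when no cluster-specific value exists.
public class FourDimensionLookup {
    // Key: "appId/env/cluster/namespace"; value: that namespace's properties.
    private final Map<String, Map<String, String>> store = new HashMap<>();

    public void publish(String appId, String env, String cluster,
                        String namespace, String key, String value) {
        store.computeIfAbsent(appId + "/" + env + "/" + cluster + "/" + namespace,
                k -> new HashMap<>()).put(key, value);
    }

    public String get(String appId, String env, String cluster,
                      String namespace, String key) {
        Map<String, String> scoped =
                store.get(appId + "/" + env + "/" + cluster + "/" + namespace);
        if (scoped != null && scoped.containsKey(key)) {
            return scoped.get(key);
        }
        // No cluster-specific value: fall back to the default cluster.
        Map<String, String> fallback =
                store.get(appId + "/" + env + "/default/" + namespace);
        return fallback == null ? null : fallback.get(key);
    }

    public static void main(String[] args) {
        FourDimensionLookup lookup = new FourDimensionLookup();
        lookup.publish("demo-app", "DEV", "default", "application", "test", "123456");
        lookup.publish("demo-app", "DEV", "shanghai", "application", "test", "654321");
        // The shanghai cluster sees its own value; other clusters fall back.
        System.out.println(lookup.get("demo-app", "DEV", "shanghai", "application", "test"));
        System.out.println(lookup.get("demo-app", "DEV", "beijing", "application", "test"));
    }
}
```

Here "demo-app" and the cluster names are placeholders; the fallback order mirrors the model described above, not Apollo's source code.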
Local Cache
The client caches configuration files locally (e.g., /opt/data/{appId}/config-cache on Linux/macOS or C:\opt\data\{appId}\config-cache on Windows) to ensure service continuity when the server is unavailable.
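A small sketch of how a client might derive that cache location per operating system; the paths follow the defaults quoted above, and the method name is illustrative rather than Apollo's own API:

```java
public class CachePathResolver {
    // Returns the default local cache directory for a given OS and appId,
    // matching the defaults quoted in the text (illustrative, not Apollo source).
    static String cacheDir(String osName, String appId) {
        boolean windows = osName.toLowerCase().startsWith("windows");
        String sep = windows ? "\\" : "/";
        String root = windows ? "C:\\opt\\data" : "/opt/data";
        return root + sep + appId + sep + "config-cache";
    }

    public static void main(String[] args) {
        System.out.println(cacheDir("Linux", "demo-app"));
        System.out.println(cacheDir("Windows 10", "demo-app"));
    }
}
```

Because this cache is written on every successful fetch, a restart while the server is down can still bootstrap the application from the last known configuration.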
Client Design and Long‑Polling
Clients maintain a long‑polling HTTP connection to the server; if a configuration change occurs within 60 seconds, the server pushes a notification, otherwise it returns 304 Not Modified. A fallback periodic pull runs every 5 minutes (configurable via apollo.refreshInterval).
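A compressed sketch of that update loop: the poll either receives a change notification or times out (the 304 case). Timings are shortened for demonstration, and the notification payload is a stand-in for Apollo's real protocol.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Simulates one round of the client's long poll: block until the server
// reports a change or the timeout elapses (60s in Apollo, 1s here).
public class LongPollSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> notifications = new LinkedBlockingQueue<>();
        // Simulate the server publishing a change shortly after the poll starts.
        new Thread(() -> {
            try {
                Thread.sleep(100);
                notifications.put("namespace 'application' changed");
            } catch (InterruptedException ignored) { }
        }).start();

        String change = notifications.poll(1, TimeUnit.SECONDS);
        if (change != null) {
            System.out.println("notified: " + change + " -> pull new config");
        } else {
            System.out.println("304 Not Modified -> poll again");
        }
        // In the real client, a scheduled task additionally pulls every
        // 5 minutes (apollo.refreshInterval) in case a notification is missed.
    }
}
```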
Overall Architecture
Config Service handles configuration reads and pushes, while Admin Service manages modifications. Both services are stateless, register with Eureka, and are discovered via a Meta Server. Load balancing and retry mechanisms are applied on the client side.
High Availability
The system tolerates individual service instance failures, with automatic failover to other instances; local cache ensures read‑only operation when all config services are down.
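The failover behavior can be sketched as follows: try each config service instance in turn, and only when every instance fails serve the values from the local cache. The interface and instance list here are illustrative, not Apollo's client API.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Sketch of client-side failover: first healthy config service instance wins;
// if all are down, fall back to the local cache in read-only degraded mode.
public class FailoverSketch {
    interface ConfigSource { Map<String, String> fetch(); }

    static Map<String, String> load(List<ConfigSource> instances,
                                    Map<String, String> localCache) {
        for (ConfigSource instance : instances) {
            try {
                return instance.fetch();   // first reachable instance wins
            } catch (RuntimeException e) {
                // instance unreachable: try the next one
            }
        }
        System.out.println("all config services down, serving local cache");
        return localCache;
    }

    public static void main(String[] args) {
        ConfigSource down = () -> { throw new RuntimeException("connection refused"); };
        List<ConfigSource> instances = Arrays.asList(down, down);
        Map<String, String> cache = Map.of("test", "123456");
        System.out.println(load(instances, cache).get("test"));
    }
}
```

This is why a total config service outage degrades reads rather than breaking them: new publishes are impossible, but every client keeps its last known values.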
Practical Walkthrough
Steps include logging into the Apollo portal, creating a project, adding a configuration key test with value 123456, publishing it, and building a SpringBoot demo that reads the value via @Value("${test:default}"). Maven dependencies and pom.xml snippets are provided.
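For reference, wiring the official Java client into the demo's pom.xml typically looks like the following (the version shown is only an example; use whatever release matches your server):

```xml
<dependency>
    <groupId>com.ctrip.framework.apollo</groupId>
    <artifactId>apollo-client</artifactId>
    <version>1.7.0</version>
</dependency>
```

With the client on the classpath, the @Value("${test:default}") injection falls back to the literal "default" whenever the key test cannot be resolved.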
Running and Testing
The demo can be started locally or in Kubernetes. JVM arguments such as -Dapollo.configService=... and -Denv=DEV select the configuration service and environment. Tests demonstrate dynamic updates, rollbacks, cache usage, and behavior when the configuration center is unreachable.
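A local launch might look like this; the jar name and config service address are placeholders for your own build and Apollo installation:

```shell
# Illustrative launch command: point the client at a config service
# directly and select the DEV environment.
java -Dapollo.configService=http://apollo-config-server:8080 \
     -Denv=DEV \
     -jar apollo-demo.jar
```

Changing the published value in the portal and re-reading it through the demo endpoint demonstrates the hot update; stopping the config service afterwards exercises the local cache path.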
Kubernetes Deployment
A Dockerfile builds a lightweight image, exposing JAVA_OPTS and APP_OPTS environment variables for JVM and Apollo settings. A Kubernetes manifest defines a Service (NodePort) and Deployment, injecting the necessary environment variables to connect to the Apollo server inside the cluster.
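A minimal manifest along those lines is sketched below; the names, image, and config service address are placeholders, and only the parts relevant to Apollo (the JAVA_OPTS/APP_OPTS environment variables and the NodePort) are shown:

```yaml
# Illustrative Deployment + NodePort Service for the demo (names are examples).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apollo-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apollo-demo
  template:
    metadata:
      labels:
        app: apollo-demo
    spec:
      containers:
        - name: apollo-demo
          image: registry.example.com/apollo-demo:latest
          env:
            - name: JAVA_OPTS
              value: "-Xms256m -Xmx256m"
            - name: APP_OPTS
              value: "-Dapollo.configService=http://apollo-config-server:8080 -Denv=DEV"
---
apiVersion: v1
kind: Service
metadata:
  name: apollo-demo
spec:
  type: NodePort
  selector:
    app: apollo-demo
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 31081
```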
Finally, accessing http://&lt;node-ip&gt;:31081/test returns the current configuration value, confirming successful integration of Apollo with SpringBoot, Docker, and Kubernetes.