
How a Container Cloud Platform Boosts 24/7 Continuous Delivery with Docker and Rancher

This article explains how a Docker‑based container cloud platform solves resource waste, isolation, and deployment challenges for startups by standardizing images, using Rancher for orchestration, implementing CI/CD pipelines, and managing logs and service registration to achieve seamless 24/7 continuous delivery.


Docker Image Standardization

Docker images are layered; the first layer is the operating system (e.g., CentOS or Alpine) with common components, the second layer contains middleware such as Nginx or Tomcat, and the third layer holds the packaged application code.

To keep images small and push faster, the team follows three practices:

When installing packages in the middleware layer, perform the download (via a package manager such as yum, or a source checkout), the installation, and the removal of leftover RPMs or source files within a single layer; files deleted in a later layer still occupy space in the layers beneath it.

Do not bundle the JDK inside the Java application image; instead, install the JDK on each host and mount the host's Java home into the container, avoiding a large base image.

Docker caches the OS and middleware layers; only the application layer is rebuilt when code changes, which speeds up image builds and pushes.
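The three-layer layout and same-layer cleanup can be sketched in a Dockerfile; the base image name, package, and paths here are illustrative, not the team's actual files:

```dockerfile
# Layer 1 is the shared OS base image (built separately, e.g. centos-base:7).
FROM centos-base:7

# Layer 2: middleware, with download, install, and cleanup in ONE RUN layer
# so the cached RPMs never persist into the image.
RUN yum install -y nginx \
    && yum clean all

# Layer 3: only the packaged application code, the sole layer rebuilt per release.
COPY app/ /usr/share/nginx/html/
```

Because the FROM and RUN layers rarely change, the build cache reuses them and only the final COPY layer is rebuilt and pushed.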

Container Orchestration Management

Orchestration tool selection:

The team chose Rancher for its graphical interface, easy deployment, AD/LDAP/GitHub integration, user‑group access control, and the ability to upgrade to Kubernetes or Swarm with professional support, lowering the entry barrier for container technology.

Rancher, combined with Docker Compose, enables unified scheduling of container instances across multiple hosts, and the scale setting in the rancher-compose.yml file allows dynamic scaling during traffic peaks and troughs.
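As a hedged illustration of both the scale setting and the host-mounted JDK described earlier (image name, JDK path, and scale value are assumptions):

```yaml
# docker-compose.yml -- service definition
web:
  image: registry.example.com/demo-app:1.0.0
  network_mode: host
  volumes:
    - /usr/java/jdk1.8.0:/usr/java/jdk1.8.0:ro  # JDK lives on the host, not in the image

# rancher-compose.yml -- Rancher scheduling metadata
web:
  scale: 4  # raise during peaks, lower during troughs
```

Raising scale tells Rancher to schedule additional instances across the registered hosts.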

Container network model selection:

Because the backend services built on Alibaba's HSF framework require real IP addresses for service discovery, the team uses the host network mode. During container startup a script assigns each container a unique port to avoid conflicts.
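The startup-time port assignment can be sketched as follows; the port range and scanning approach are assumptions, not the team's actual script:

```python
import socket

def find_free_port(start=20880, end=20980):
    """Scan a fixed range and return the first port not already bound on this host."""
    for port in range(start, end + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("0.0.0.0", port))
            except OSError:
                continue  # port already taken by another container
            return port
    raise RuntimeError("no free port in range")
```

In host network mode, each container binds the port chosen here and registers it for service discovery, so two containers on the same host never collide.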

Continuous Integration and Continuous Deployment

Continuous Integration

Code commits trigger CI pipelines that run unit tests, static analysis (Sonar) and security scans, notify developers, and deploy to an integration environment. Successful integration then launches automated tests.
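The article doesn't name the CI server; as one hedged sketch, the same trigger, test, analyze, deploy flow could be expressed in a GitLab CI configuration (stage names and commands are illustrative):

```yaml
stages: [test, analyze, deploy]

unit-tests:
  stage: test
  script: mvn test                 # fail fast; developers are notified on failure

sonar-scan:
  stage: analyze
  script: mvn sonar:sonar          # static analysis and security gates

deploy-integration:
  stage: deploy
  script: ./deploy.sh integration  # automated tests then run against this environment
```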

Continuous Deployment

The platform uses distributed builds with a master node managing multiple slave nodes, each representing a different environment. The deployment workflow is:

(1) Developers push code to GitLab.

(2) The pipeline pulls the source and configuration, then compiles.

(3) The built artifact is packaged into a new Docker image and pushed to the registry.

(4) A customized docker-compose.yml is generated for the target environment and deployed with rancher-compose to a pre-release environment that mirrors production.

(5) After pre-release testing, the image is promoted to the production environment and test results are reported.
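Step (4)'s per-environment customization can be sketched as simple template substitution; the template fields and registry name are illustrative assumptions, not the team's actual files:

```python
from string import Template

# Hypothetical template; a real file would carry the full service definition.
COMPOSE_TEMPLATE = Template("""\
version: '2'
services:
  ${app}:
    image: ${registry}/${app}:${tag}
    network_mode: host
    environment:
      DEPLOY_ENV: ${env}
""")

def render_compose(app, tag, env, registry="registry.example.com"):
    """Generate a docker-compose.yml body customized for one target environment."""
    return COMPOSE_TEMPLATE.substitute(app=app, tag=tag, env=env, registry=registry)
```

The same image tag is reused when promoting from pre-release to production, so only the environment fields change between renders.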

Container Runtime Management

Once containers run in production, two operational concerns remain: preserving application logs and automatically updating Nginx configuration when backend services change.

Log Management

Containers write to a writable layer that is discarded when the container is removed or rebuilt, and logs end up scattered across hosts. To avoid losing them, the team uses a centralized log service (Logstore): agents collect container logs and push them to a searchable store with full-text indexing.
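The article doesn't specify how the agents reach the log files; one common pattern, shown here purely as an assumption, is to mount a host directory over the application's log path so logs survive the container and a host-level agent can tail them:

```yaml
web:
  image: registry.example.com/demo-app:1.0.0
  volumes:
    - /var/log/demo-app:/app/logs   # logs land on the host, outside the writable layer
```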

Service Registration

Etcd serves as a highly available key-value store. Each service registers under a root key formatted as /{APP_NAME}_{ENVIRONMENT}, with child keys like IP-PORT. Registration includes a TTL so stale entries are automatically removed.
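The key layout and TTL semantics can be illustrated with a small in-memory stand-in for Etcd (the class and its expiry handling are illustrative; a real deployment would use an Etcd client with native TTLs):

```python
import time

class TtlRegistry:
    """In-memory stand-in illustrating the /{APP_NAME}_{ENVIRONMENT}/IP-PORT layout."""

    def __init__(self):
        self._entries = {}  # key -> (value, expiry timestamp)

    def register(self, app, env, ip, port, ttl=30):
        """Record a backend under the root key, expiring after ttl seconds."""
        key = f"/{app}_{env}/{ip}-{port}"
        self._entries[key] = (f"{ip}:{port}", time.monotonic() + ttl)
        return key

    def live_backends(self, app, env):
        """Return only entries whose TTL has not yet lapsed."""
        prefix = f"/{app}_{env}/"
        now = time.monotonic()
        return [value for key, (value, expiry) in self._entries.items()
                if key.startswith(prefix) and expiry > now]
```

A service that stops refreshing its key simply ages out of live_backends, which is what keeps stale instances out of the Nginx configuration.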

Service Discovery

Confd polls Etcd every five seconds, renders Go text/template files based on the latest key‑value data, writes the resulting Nginx configuration, and reloads Nginx to apply changes.

{{range getvs "/${APP_NAME}_${ENVIRONMENT}/*"}}
server {{.}};
{{end}}
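With two instances registered for an application, the rendered fragment would contain one server line per live backend; the addresses below are illustrative:

```
server 10.0.0.1:20880;
server 10.0.0.2:20880;
```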

Summary

The operations team built a Docker‑based container cloud platform that enables 24/7 one‑stop continuous delivery, improves resource utilization, isolates applications, automates CI/CD, and provides robust logging and service discovery, thereby significantly enhancing the company’s overall development and deployment efficiency.

Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
