Automate Service Containerization with GitLab CI & Kubernetes Namespaces
This article explains how to use GitLab CI and Kubernetes namespaces to automate building, testing, and deploying containerized services, ensuring environment isolation, resource control, traceability, and low operational overhead without introducing additional platforms.
In the previous two posts we discussed how services can gracefully adapt to containerized environments, but the prerequisite is that the services themselves are containerized, i.e., packaged into images. Manually building and pushing images is possible but costly and labor‑intensive, so we turn to GitLab CI to automate the process.
Problems We Face
Testers need an isolated test environment that is not disturbed by frequent code submissions, yet this isolation should not add to their workload.
The build pipelines for development and test environments must be consistent, allowing developers to trigger builds and deployments for multiple environments locally.
Environments should be isolated per development team, preventing cross‑team interference.
Only services that run correctly in the development environment should be handed over to testers.
Production deployments must use the version that has passed testing to avoid accidental roll‑outs.
A rough estimate of CPU and memory usage for services in production is required.
When issues arise in production, we need to trace them back to the specific release version and the corresponding code commit.
We prefer not to introduce additional platforms that increase operational complexity and maintenance cost.
Proposed Scenario
Developers complete their work, verify it in the development environment, and the service is then deployed automatically to the test environment. Testers run multiple test cycles and mark a service version as publishable. The same service may have several branches for different requirements, but only the test‑approved version is deployed to production, together with reference resource configurations.
During production, any issue can be traced back from the running service version to the code commit.
Key Conclusions
Use Kubernetes namespace strategies for team and environment isolation, and apply resource‑quota and scheduling policies to control resource consumption, reducing cost and maintenance effort.
Enable automatic build and deployment for multiple development and test environments triggered by code commits.
Match code branches with deployment environments and inject environment‑specific variables as needed.
Provide CPU and memory reference values for production services from monitoring data; consider the Kubernetes Vertical Pod Autoscaler (a sample manifest follows this list), though requests and limits may still need manual adjustment.
Record service image version numbers in production to facilitate issue traceability back to code.
Achieve the entire workflow without adding external platforms by leveraging GitLab's built‑in CI capabilities.
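For the VPA point above, a minimal sketch of a VerticalPodAutoscaler in recommendation-only mode; the workload name demo-service and the production namespace are illustrative, and the VPA components must be installed in the cluster separately:

```yaml
# Recommendation-only VPA: it reports suggested requests/limits without
# evicting pods, so the values can be reviewed and applied manually.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: demo-service-vpa
  namespace: production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-service        # hypothetical workload name
  updatePolicy:
    updateMode: "Off"         # only produce recommendations, never evict
```

`kubectl describe vpa demo-service-vpa` then shows the recommended requests, which can be fed back into the Deployment manifests.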
Why GitLab CI?
GitLab CI is lightweight, built into the GitLab platform, and naturally integrates with code management, making commit‑based traceability straightforward.
Configuration is done with clear YAML files, allowing the entire build, test, and deployment process to be managed as code, which simplifies maintenance and version control; a minimal skeleton follows this list.
It offers strong extensibility: if standard tasks are insufficient, custom pipelines can be added without modifying GitLab's core code.
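As an illustration of the pipeline-as-code point, a minimal .gitlab-ci.yml skeleton might look like the following; the stage, job, and branch names are assumptions, not a prescribed layout:

```yaml
# .gitlab-ci.yml — the whole pipeline lives in the repository and is
# versioned together with the code it builds.
stages:
  - build
  - deploy-dev
  - deploy-test

build-image:
  stage: build
  script:
    - echo "build and push the service image here"

deploy-dev:
  stage: deploy-dev
  script:
    - echo "deploy to the team's dev namespace here"
  rules:
    - if: '$CI_COMMIT_BRANCH == "dev"'

deploy-test:
  stage: deploy-test
  script:
    - echo "deploy to the team's test namespace here"
  rules:
    - if: '$CI_COMMIT_BRANCH == "test"'
```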
Overall Process
The end‑to‑end flow covers development and production stages. Ops staff create namespaces with resource limits for each team in the dev/test Kubernetes cluster. Deployments are triggered by developers' code submissions, with GitLab pipelines showing build and deployment results. After successful health checks, the code is merged into the test branch, deployed to the test namespace, and tested. Once approved, the service image version is marked as official and released to production. Any production issue can be traced back to the corresponding code commit via the image tag.
Core Points in Detail
Branch‑Environment Mapping: Namespace names are derived from GitLab groups and branch conventions (e.g., dev‑team1, test‑team1), allowing CI to create services with branch‑specific suffixes.
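A hedged sketch of how a CI job could derive the target namespace from GitLab's predefined variables; the team1 group name, the naming convention, and the k8s/ manifest directory are assumptions, and the runner is assumed to have kubectl and cluster credentials configured:

```yaml
deploy:
  stage: deploy-dev
  variables:
    # e.g. branch "dev" + top-level group "team1" -> namespace "dev-team1"
    DEPLOY_NAMESPACE: "${CI_COMMIT_REF_SLUG}-${CI_PROJECT_ROOT_NAMESPACE}"
  script:
    - echo "Deploying ${CI_PROJECT_NAME} to namespace ${DEPLOY_NAMESPACE}"
    - kubectl -n "${DEPLOY_NAMESPACE}" apply -f k8s/
```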
Resource Limits: Use Kubernetes ResourceQuota to cap namespace resources; monitor cluster and namespace usage to adjust the overall pool, optionally using Cluster Autoscaler in public clouds.
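A representative ResourceQuota for one team's dev namespace might look like this; the numbers are placeholders to be sized from monitoring data:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team1-quota
  namespace: dev-team1
spec:
  hard:
    requests.cpu: "8"        # total CPU the namespace may request
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"               # cap on the number of pods
```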
Traceability: During CI image build, the commit SHA is captured and embedded in the image tag, enabling later mapping from a running image back to the source code.
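One common way to embed the commit in the tag is a build job along the following lines; this assumes a Docker-capable runner and GitLab's built-in container registry, and kaniko or buildah would work equally well:

```yaml
build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # The short commit SHA becomes part of the image tag, so any running
    # image can be mapped back to the exact commit that produced it.
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

Reading the image tag off a running workload then identifies the commit directly with git.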
Namespace‑Centric Design: Namespaces provide a logical isolation layer, facilitating rapid reconstruction of services in a new cluster after a disaster and enabling multi‑cluster CI deployments.
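A namespace carrying team and environment labels is enough to anchor that isolation layer, and the same manifests can be replayed against a new cluster; a minimal example with illustrative label keys:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev-team1
  labels:
    team: team1          # which team owns the namespace
    environment: dev     # which environment it represents
```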
Implementation Steps:
CI: Define a .gitlab-ci.yml file and reference it across projects (a sketch of the cross-project include follows these steps).
Dev/Test Deployment: Use operators or direct Kubernetes APIs to deploy services, injecting version and environment variables.
Production Deployment: Leverage existing Kubernetes management platforms to promote approved images.
Monitoring: Collect metrics with Prometheus and logs with Loki (or EFK), storing logs in a custom database.
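For the cross-project reuse mentioned in the first step, each service repository can pull in a shared pipeline definition and only override what differs. A sketch, assuming a hypothetical ops/ci-templates project holding the template:

```yaml
# .gitlab-ci.yml in a service repository
include:
  - project: "ops/ci-templates"          # hypothetical shared repo
    ref: main
    file: "/templates/build-deploy.yml"  # hypothetical template path

variables:
  APP_NAME: demo-service                 # per-service overrides
  APP_PORT: "8080"
```

Centralizing the template means pipeline fixes roll out to every service without touching each repository.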
This week’s sharing ends here; feel free to join the group for further discussion.
Ops Development Stories
Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.