
Building a Container Cloud PaaS Platform: Architecture, Implementation, and Practice in an Insurance Group

This article describes the end‑to‑end process of selecting, designing, building, and deploying a container‑based PaaS platform for an insurance group, covering technology choices, architecture, deployment workflow, observability, monitoring, and the business impact of moving applications to a cloud‑native environment.

Architects' Tech Alliance

1. Construction Background

Full cloud migration is an industry-wide trend in insurance, where traditional virtualization offers limited resource utilization, flexibility, and elasticity. The group embraced cloud-native technology, with Kubernetes as the foundation for AI, GPU, edge-computing, and serverless workloads, selected a mainstream K8s platform, and introduced SRE practices to improve system availability, raise resource efficiency, and reduce total cost of ownership.

2. Construction Goals of Container Cloud PaaS

Kubernetes serves as the de facto standard for container orchestration, enabling the convergence of PaaS and IaaS. The platform aims to provide a highly available, performant, and scalable cloud-native environment, covering cloud management, CI/CD integration, an image registry, logging, monitoring, application orchestration, resource scheduling, and networking.

3. Architecture Design and Practice Experience

Technical Selection: After evaluating KubeSphere, Rancher, vanilla Kubernetes, and OpenShift, the team chose OpenShift for its enterprise-grade PaaS capabilities and shorter implementation cycle.

OpenShift Component Architecture: Core components include Master nodes, Worker nodes, the container registry, the routing layer, the service layer, the web console, and the CLI.

Technical Support: The platform offers multi-tenant isolation, elastic scaling, high-availability clusters, direct deployment on physical machines, and comprehensive DevOps capabilities (CI/CD, microservice governance, application management), all on a distributed, multi-tenant OpenShift architecture built on Kubernetes.
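The elastic scaling mentioned above is typically expressed declaratively as a HorizontalPodAutoscaler. A minimal sketch, assuming a hypothetical `claims-api` Deployment (the article does not disclose the group's actual manifests):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: claims-api              # illustrative workload name
  namespace: insurance-prod     # illustrative tenant namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: claims-api
  minReplicas: 2                # baseline capacity
  maxReplicas: 20               # headroom for traffic spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

With a manifest like this, the control plane adds or removes pod replicas automatically, which is what turns "hours of provisioning" into seconds.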

Technology Stack: Web console, GitLab CI, Kubernetes, etcd, and CRI-O, running on Red Hat CoreOS.

Logical Architecture: The platform manages containerized applications with a resource layer (hosts, network, storage) and an application layer (CI/CD pipelines, GitLab integration, LDAP, logging, monitoring, DevOps tools, cloud management, object storage, bastion host).

Cluster Deployment Process: Code is pulled from GitLab, built via Jenkins or GitLab CI pipelines, packaged into images, and deployed via Helm, manually, or through GitLab.
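The build-and-deploy flow above can be sketched as a `.gitlab-ci.yml`; job names, the chart path, and the registry variables are illustrative, not the group's actual pipeline:

```yaml
stages:
  - build
  - deploy

build-image:
  stage: build
  script:
    # Build the application image and push it to the internal registry
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

deploy-helm:
  stage: deploy
  script:
    # Roll out (or roll back to) a specific image tag via Helm
    - helm upgrade --install my-app ./chart --set image.tag=$CI_COMMIT_SHORT_SHA
  environment: production
```

Because every pipeline run is keyed to a commit SHA, re-running the deploy job against an earlier commit is what makes the fast GitLab CI rollbacks described later possible.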

Integration with Internal DevOps Platform: The DevOps platform supports traditional VM automation, while the container cloud PaaS focuses on rapid deployment of cloud-native applications via GitLab CI.

Log Collection: Logs are standardized and sent to an external ELK platform; an observability platform aggregates logs and metrics for AIOps support.
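The article does not name the log shipper; Filebeat running as a DaemonSet is one common choice for feeding container logs into an external ELK stack. A minimal sketch, with an assumed internal Logstash endpoint:

```yaml
# Filebeat config: tail container logs and ship them to the external ELK stack
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
processors:
  - add_kubernetes_metadata: {}   # enrich each event with pod/namespace labels
output.logstash:
  hosts: ["logstash.elk.internal:5044"]   # illustrative endpoint, not the group's
```

Enriching events with Kubernetes metadata at the source is what lets the downstream platform correlate logs by application, namespace, and node.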

Monitoring Solution: Uses Zabbix for node-level resources, Prometheus for pod metrics, and Grafana dashboards for visualization.
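On the Prometheus side, pod metrics are usually picked up through Kubernetes service discovery. A minimal scrape-config sketch, assuming pods opt in via the conventional `prometheus.io/scrape` annotation:

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod               # discover every pod via the Kubernetes API
    relabel_configs:
      # Keep only pods that are annotated prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

Grafana then reads these series from Prometheus, while Zabbix stays responsible for the underlying node resources.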

Infrastructure: Combines virtual machines and bare-metal servers, with bare metal chosen for performance and cost efficiency, leveraging Intel Xeon Scalable processors.

4. Application Scenarios and Practice

Business Application Migration: Technical debt is addressed by consolidating code, documentation, scripts, and CI configurations; Dockerfiles, Helm charts, and .gitlab-ci.yml files are created to containerize and deploy applications on OpenShift.
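The Helm charts created during migration are driven by per-application values files. A minimal `values.yaml` sketch, with an invented service name and internal registry address for illustration:

```yaml
# values.yaml — parameters the chart templates consume (all names illustrative)
replicaCount: 3
image:
  repository: registry.internal/insurance/policy-service
  tag: "1.0.0"
resources:
  requests:          # guaranteed share used by the scheduler
    cpu: 500m
    memory: 512Mi
  limits:            # hard ceiling enforced at runtime
    cpu: "1"
    memory: 1Gi
```

Keeping these settings out of the templates means the same chart deploys to test and production with only a values override.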

Observability Platform: Built on Thanos, Loki, and MinIO; provides centralized log storage, unified analysis, and multi-dimensional correlation. The architecture includes an external load balancer (Nginx), OAuth2 Proxy for authentication, and multiple Prometheus instances aggregated by Thanos, with logs visualized in Grafana via Loki.
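Thanos hands long-term metric blocks to object storage through an objstore config; since MinIO speaks the S3 API, a sketch along these lines fits the stack described (bucket name and endpoint are assumptions):

```yaml
# Thanos object-storage config: metric blocks are uploaded to MinIO via S3 API
type: S3
config:
  bucket: thanos-metrics                  # illustrative bucket name
  endpoint: minio.observability.svc:9000  # in-cluster MinIO service
  access_key: <redacted>
  secret_key: <redacted>
  insecure: true                          # plain HTTP inside the cluster
```

The same sidecar-plus-object-store pattern is what lets Thanos present many Prometheus instances as one queryable, deduplicated view in Grafana.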

5. Business Impact

Core applications now run on the container cloud, achieving rapid scaling during traffic spikes (resource provisioning reduced from hours to seconds) and enabling fast rollbacks via GitLab CI, improving continuity and availability. Bare‑metal deployment enhances ROI and leverages Intel Xeon and Optane technologies for higher performance.

6. Challenges and Outlook

The main challenge is shifting technical staff mindset toward cloud‑native practices. Ongoing efforts focus on training, selecting suitable applications for migration, and gradually introducing micro‑service architectures to further transform application and infrastructure layers.

Tags: cloud-native, Kubernetes, DevOps, PaaS, container cloud, OpenShift
Written by Architects' Tech Alliance

Sharing project experience and insights into cutting-edge architectures, with a focus on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, and industry practices and solutions.