How to Deploy Dedicated Kubernetes Clusters for SaaS & ToB on Public Cloud
This article explains the architecture, deployment process, and custom node configuration of dedicated Kubernetes clusters for SaaS and ToB scenarios on public cloud, highlighting differences from internal setups, networking challenges, and resource‑optimization solutions.
1. New Feature: ToB & SaaS Dedicated Cluster Launch
Overall SaaS Process Diagram
The SaaS workflow mirrors the internal process: the master components are hosted on a separate cluster. Instead of using an internal Rancher‑built base cluster, a dedicated cluster is used as the base to reduce resource duplication and operational costs across regions.
The key difference for public-cloud ToB dedicated clusters is the choice of base cluster:
Internal: Rancher builds a cluster as the base.
ToB & SaaS: An external dedicated cluster serves as the base.
2. Master Architecture Differences
The internal dedicated‑cluster architecture includes:
The Konk controller creates the master components, certificates, and related resources.
Network uses the company's BGP and VPC setup.
Master containers communicate via 127.0.0.1.
When deploying SaaS or ToB, several issues arise with private VPC networking and communication with internal services:
Kubernetes Webhook requires bidirectional communication between webhook pods and the apiserver pod.
Private network must access internal resources such as Harbor and base master VIP.
Probes fail: the internal environment runs on a non-isolated BGP/VPC network, so host-network probes can curl or telnet pod IPs directly; on public-cloud hosts the container network is isolated from the host, so those same probes cannot reach the pods.
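The probe path behind the third issue can be sketched in a few lines. The check below models what a host-network `telnet`-style probe does: open a TCP connection to a pod IP and succeed only if the connection is established. On the internal non-isolated network the pod IP is routable from the host, so this works; on an isolated public-cloud container network, the identical call fails.

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Minimal host-network liveness probe: succeed only if a TCP
    connection to the given pod IP/port can be established, which is
    essentially what a telnet- or curl-based probe checks."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Internal (non-isolated BGP/VPC) network: the pod IP is routable from
# the host, so tcp_probe(pod_ip, port) returns True.
# Public cloud: the container network is isolated from the host, so the
# same call times out or is refused and returns False.
```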
Final Solution Architecture
The revised design implements the following changes:
Apiserver pods use a 172.x.x.x VPC private network for bidirectional communication with webhook pods and node agents.
Etcd runs in the internal common VPC (11.x.x.x); apiserver accesses etcd via IPVS SNAT using the host network.
A CNI‑agent intercepts probes to monitor apiserver container status.
3. Summary of SaaS & ToB Dedicated Cluster Deployment
By integrating the dedicated cluster with internal VPC‑CNI, polefs, arkit, and load balancing, the solution becomes a core selling point for public‑cloud and ToB scenarios.
2. Custom Node Configuration for Dedicated Clusters
In public‑cloud and internal environments, diverse user requirements lead to resource waste, especially for workloads that need managed machines. Traditional systemd‑based kubelet and kube‑proxy allow host‑level parameter tweaks, but containerized versions on RKE/Rancher lack such configurability.
1. Demand
Some sensitive services run on privately built DBA clusters with low CPU and memory utilization. While these workloads can be satisfied by mixed‑deployment solutions, they still suffer from significant resource waste.
2. Mixed‑Node Resource Profile
Different node specifications cause uneven container utilization; smaller nodes leave considerable resources idle. A dynamic node‑resource reservation mechanism is needed.
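One way to make the reservation scale with node size is a tiered formula, where larger nodes reserve a smaller fraction. The tiers below are illustrative assumptions (modeled on the approach public clouds commonly use), not the platform's actual values.

```python
def reserved_memory_mib(node_mem_mib: int) -> int:
    """Illustrative tiered memory reservation: small nodes keep a larger
    fraction for system components, large nodes waste less. The
    percentages are assumptions, not the platform's real numbers."""
    tiers = [
        (4 * 1024, 0.25),    # first 4 GiB: reserve 25%
        (4 * 1024, 0.20),    # next 4 GiB: reserve 20%
        (8 * 1024, 0.10),    # next 8 GiB: reserve 10%
        (112 * 1024, 0.06),  # next 112 GiB: reserve 6%
    ]
    reserved, remaining = 0.0, node_mem_mib
    for size, frac in tiers:
        portion = min(remaining, size)
        reserved += portion * frac
        remaining -= portion
        if remaining <= 0:
            break
    reserved += max(remaining, 0) * 0.02  # everything above 128 GiB: 2%
    return int(reserved)
```

A fixed reservation sized for the largest node would idle a large share of a small node; a tiered formula keeps the reserved fraction proportional to what system components actually need.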
3. Refactored Architecture Diagram
The new design introduces a CRD called NodeTemplate, allowing users to adjust parameters based on node size, thereby maximizing resource utilization.
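A NodeTemplate resource might look like the sketch below. The article does not publish the actual schema, so the apiVersion, field names, and values here are all assumptions; the point is the shape of the mechanism: a template selects a class of nodes and carries per-size kubelet settings that a controller renders into node configuration.

```python
# Hypothetical shape of a NodeTemplate custom resource (the schema is
# an assumption; only the CRD's name comes from the article).
node_template = {
    "apiVersion": "node.example.com/v1",
    "kind": "NodeTemplate",
    "metadata": {"name": "small-node"},
    "spec": {
        "selector": {"node-size": "small"},
        "kubeletArgs": {
            "kube-reserved": "cpu=200m,memory=512Mi",
            "system-reserved": "cpu=100m,memory=256Mi",
        },
    },
}

def render_kubelet_flags(template: dict) -> list[str]:
    """Turn a template's kubeletArgs into command-line flags that a
    controller could write into the matching nodes' kubelet config."""
    args = template["spec"]["kubeletArgs"]
    return [f"--{key}={value}" for key, value in sorted(args.items())]
```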
4. Cloud & Public‑Cloud Adaptation
The platform now supports custom node configuration in both the container cloud and public‑cloud. Users can modify resource reservations and specify disk partitions for different workloads (e.g., ES uses /aaa, MySQL uses /bbb) by adjusting the rootDir, enabling containers to store data on appropriate disks.
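The rootDir selection reduces to a small mapping. The /aaa and /bbb mounts are the article's own examples; the lookup function and default path below are assumptions added for illustration.

```python
# Workload-to-partition mapping; /aaa and /bbb are the article's example
# mounts, the selection logic itself is an illustrative assumption.
ROOT_DIRS = {
    "es": "/aaa",
    "mysql": "/bbb",
}

def kubelet_root_dir(workload: str, default: str = "/var/lib/kubelet") -> str:
    """Pick the kubelet rootDir for a node dedicated to a workload, so
    container data lands on the disk provisioned for that workload."""
    return ROOT_DIRS.get(workload, default)
```

Nodes without a dedicated partition fall back to kubelet's conventional default directory.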
Conclusion
The custom node reservation and mixed‑deployment solution increases usable resources for managed machines by about 10%, significantly reducing hardware waste and adding an advanced feature to dedicated clusters.
360 Zhihui Cloud Developer
360 Zhihui Cloud is an enterprise open service platform that aims to "aggregate data value and empower an intelligent future," leveraging 360's extensive product and technology resources to deliver platform services to customers.