
How SegmentFault Scaled Its Architecture with Kubernetes and KubeSphere

This article chronicles SegmentFault's evolution from a single‑instance VPS to a cloud‑native platform on KubeSphere, explaining why Kubernetes was chosen, how front‑end/back‑end separation reshaped the system, and the operational lessons learned for cost‑effective, automated deployments.

Qingyun Technology Community

SegmentFault Architecture Evolution

SegmentFault, a Chinese tech community, started in 2012 with a single‑instance VPS on Linode, later moved to self‑hosted servers, then to public cloud, and finally migrated its core services to KubeSphere in 2020 to adopt a cloud‑native architecture.

Why Choose Kubernetes?

The platform faced several challenges: a complex set of business lines maintained by a small engineering team, frequent configuration changes, no dedicated operations staff, and strict cost constraints. These pressures motivated the shift to Kubernetes.

Frontend‑Backend Separation

Before 2020 the site used traditional server‑side rendering with PHP. Growth forced a split into multiple services: a Node.js React server for server‑side rendering, a PHP API service for client‑side rendering, and an internal API using a proprietary protocol, all requiring load balancing.
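In a Kubernetes setup, this kind of traffic split is commonly expressed as an Ingress that routes API paths to the PHP service and everything else to the Node.js rendering service. This is only a sketch: the host, service names, paths, and ports are illustrative assumptions, not SegmentFault's actual configuration.

```yaml
# Hypothetical Ingress: /api/* goes to the PHP API,
# all other paths go to the Node.js SSR service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-routing
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: php-api
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: node-ssr
                port:
                  number: 3000
```

Because the ingress controller matches the longest prefix first, `/api` requests reach the PHP backend while everything else falls through to server-side rendering.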

What Kubernetes Brings

KubeSphere provides an out‑of‑the‑box, high‑availability Kubernetes cluster that can be provisioned with a few clicks, ideal for teams without dedicated ops. Deployment is managed as code: Dockerfiles, K8s manifests, and versioned in Git, enabling automated CI/CD pipelines.
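"Deployment as code" in practice means checking manifests like the following into Git alongside the Dockerfile, so every release is reviewable and reproducible. This is a minimal sketch with assumed names (`node-ssr`, the registry and image tag), not the project's real files.

```yaml
# Hypothetical Deployment manifest, versioned in Git and applied by CI.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-ssr
spec:
  replicas: 2                      # keep replica count in line with cluster capacity
  selector:
    matchLabels:
      app: node-ssr
  template:
    metadata:
      labels:
        app: node-ssr
    spec:
      containers:
        - name: node-ssr
          image: registry.example.com/node-ssr:1.0.0
          ports:
            - containerPort: 3000
          resources:
            requests:            # declared requests let the scheduler plan capacity
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
```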

Continuous integration with GitLab automates testing, image building, and deployment to the cluster, making releases fast and traceable.
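A GitLab pipeline of the kind described might look like the `.gitlab-ci.yml` sketch below. The stage layout, registry URL, and deploy command are assumptions for illustration, not the article's actual pipeline.

```yaml
# Hypothetical .gitlab-ci.yml: run tests, build an image, roll it out.
stages:
  - test
  - build
  - deploy

test:
  stage: test
  image: node:18
  script:
    - npm ci
    - npm test

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    # tag the image with the commit SHA so every release is traceable
    - docker build -t registry.example.com/node-ssr:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/node-ssr:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl set image deployment/node-ssr node-ssr=registry.example.com/node-ssr:$CI_COMMIT_SHORT_SHA
  only:
    - main
```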

Operational Lessons

Run log services (e.g., Elasticsearch) outside the cluster to avoid heavy resource consumption.

Deploy at least three master nodes (an odd number, so etcd can keep quorum) for high availability.

Avoid running critical databases or caches on Kubernetes unless you are a service provider; prefer managed cloud services.
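A common pattern for keeping a managed database outside the cluster while still addressing it by a stable in-cluster name is an `ExternalName` Service; the hostname below is an assumed placeholder.

```yaml
# Hypothetical Service: pods resolve "mysql" inside the cluster,
# and DNS CNAMEs it to a managed database running outside Kubernetes.
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: ExternalName
  externalName: db.example.cloudprovider.com
```

Swapping providers or failing over then only requires changing `externalName`, with no application config changes.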

Scale replica counts to match cluster size to prevent scheduling overload.

Containerization reduced the number of physical servers needed, allowing finer‑grained resource planning and higher efficiency.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

cloud native · architecture · Kubernetes · devops · KubeSphere
Written by

Qingyun Technology Community

Official account of the Qingyun Technology Community, focusing on tech innovation, supporting developers, and sharing knowledge. Born to Learn and Share!
