
How to Deploy Microservices: From Single‑Process to Serverless and Kubernetes

This article compares four microservice deployment models—single‑machine multi‑process, containerized, serverless, and Kubernetes‑based orchestration—detailing their architectures, advantages, drawbacks, and ideal use cases for modern cloud‑native applications.

Mike Chen's Internet Architecture

Overview

Microservices are the backbone of large‑scale architectures, and selecting the right deployment method is crucial for performance, scalability, and operational efficiency.

Single‑Machine Multi‑Process Deployment

All microservices run as independent processes on the same physical or virtual host.

Machine A:
├── User-Service (process)
├── Order-Service (process)
└── Payment-Service (process)

Advantages: Simple to set up, low resource overhead, suitable for development, testing, or small production environments with limited resources.

Disadvantages: Resource contention (CPU/memory), no fault isolation (one failing service can affect the rest), coarse scaling (you can only scale the whole machine), and a single point of failure.
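One common way to run each service as a supervised host process in this model is a systemd unit. The service name, paths, and resource limits below are illustrative placeholders, not taken from the article:

```ini
# /etc/systemd/system/user-service.service — hypothetical unit for one microservice
[Unit]
Description=User-Service (single-machine multi-process deployment)
After=network.target

[Service]
# Path, port, and user are placeholders; point ExecStart at your actual binary
ExecStart=/opt/user-service/bin/user-service --port 8081
Restart=on-failure
User=appuser
# Optional: soften resource contention with per-process limits
MemoryMax=512M
CPUQuota=50%

[Install]
WantedBy=multi-user.target
```

Enabling it with `systemctl enable --now user-service` (and repeating per service) gives you restart-on-failure, but still no real isolation between the processes.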

Containerized Deployment

Each microservice runs inside its own container (e.g., Docker), leveraging Linux namespaces and cgroups for process‑level isolation.
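As a sketch, a single service might be containerized with a Dockerfile like the following; the base image, port, and artifact name are illustrative assumptions:

```dockerfile
# Hypothetical Dockerfile for one microservice (e.g., Order-Service)
FROM eclipse-temurin:17-jre-alpine
WORKDIR /app
# Copy the pre-built application artifact into the image
COPY target/order-service.jar app.jar
EXPOSE 8080
# One process per container keeps the isolation boundary clean
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Built with `docker build -t order-service:1.0 .` and run with `docker run -p 8080:8080 order-service:1.0`, the same image moves unchanged from development to production.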

Advantages:

Fast startup (seconds): containers share the host kernel, eliminating OS boot time.

High utilization: many containers can be densely packed on a single machine, with an orchestrator such as Kubernetes dynamically allocating resources.

Environment consistency: "build once, run anywhere" across development, testing, and production.

Disadvantage: Requires a mature container orchestration platform (such as Kubernetes) and accompanying monitoring infrastructure.

Serverless Deployment

Microservices are broken into functions that are invoked on demand and run on a cloud provider’s serverless platform (e.g., AWS Lambda, Alibaba Cloud Function Compute), with billing only for execution time.
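A minimal AWS Lambda-style handler in Python looks like the sketch below; the function name, event shape, and business logic are illustrative assumptions, not a specific provider's required API beyond the standard `(event, context)` signature:

```python
import json

def lambda_handler(event, context):
    """Minimal serverless handler: one function per microservice operation.

    `event` carries the invocation payload (e.g., from an API gateway);
    `context` provides runtime metadata and is unused in this sketch.
    """
    order_id = (event or {}).get("order_id", "unknown")
    # Business logic would go here; the platform bills only for execution time.
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order_id, "status": "processed"}),
    }
```

The platform spins up an instance on each invocation (hence cold starts) and tears it down when idle, so no server is running between requests.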

Suitable scenarios: Event‑driven workloads, burst traffic, or short‑lived tasks, offering significant reductions in operational burden and cost.

Limitations: Cold‑start latency, execution time and resource caps, and potential vendor lock‑in.

Container Orchestration Deployment (Kubernetes)

Building on containerization, Kubernetes (or Docker Swarm) manages thousands of containers across clusters.

Core components: Deployment (replica control), Service (load balancing & service discovery), HPA (auto‑scaling), Ingress, ConfigMap/Secret, etc.
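Two of those core objects can be sketched in manifests like these; the names, image, and replica count are placeholders:

```yaml
# Hypothetical manifests: Deployment (replica control) + Service (discovery/LB)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3              # keep three pods running; recreated if one dies
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: registry.example.com/order-service:1.0   # placeholder image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service     # routes to matching pods: discovery + load balancing
  ports:
    - port: 80
      targetPort: 8080
```

Rolling updates then come for free: changing the image tag in the Deployment triggers a gradual, zero-downtime replacement of pods.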

Advantages:

Fully automated: self‑healing, rolling updates, canary (gray) releases, zero downtime.

Unified scheduling and automatic elasticity.

Integrated service discovery, load balancing, and monitoring.

Supports multi‑cluster and multi‑region deployments.
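The automatic elasticity mentioned above is typically expressed as a HorizontalPodAutoscaler; the target name and thresholds below are illustrative:

```yaml
# Hypothetical HPA: scale a Deployment between 3 and 10 replicas on CPU load
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-service   # placeholder Deployment name
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```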

Disadvantage: Steep learning curve due to the breadth of Kubernetes concepts, making it best suited for large‑scale production environments.

Tags: serverless, cloud-native, microservices, deployment, containerization