
Performance Degradation After Containerization: Analysis and Optimization Strategies

The article examines why applications experience slower performance after being containerized on Kubernetes, presenting benchmark comparisons, analyzing increased soft‑interrupt overhead due to the Calico ipip overlay, and proposing network optimizations such as ipvlan modes and Cilium to restore efficiency.


Background: As more companies adopt cloud-native architectures and move from monolithic VMs to containerized microservices orchestrated by Kubernetes, a noticeable performance regression is observed when applications run in containers.

Benchmark before containerization: Using the wrk tool against the application running on a VM, the average response time (RT) was 1.68 ms at a QPS of 716, with CPU usage near saturation.

Benchmark after containerization: The same workload against the containerized deployment yielded an average RT of 2.11 ms and a QPS of 554, again with the CPU fully utilized.

Performance comparison: Moving to containers increased average latency by roughly 26% (1.68 ms → 2.11 ms), while the VM sustained roughly 29% more QPS than the containers (716 vs. 554).
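These percentages follow directly from the reported benchmark figures; a quick sanity check of the arithmetic:

```python
# Benchmark figures reported above
rt_vm, rt_container = 1.68, 2.11    # average response time, ms
qps_vm, qps_container = 716, 554    # requests per second

# Latency increase after containerization, relative to the VM baseline
latency_increase = (rt_container / rt_vm - 1) * 100

# Throughput advantage of the VM, relative to the containerized figure
qps_gap = (qps_vm / qps_container - 1) * 100

print(f"latency increase: {latency_increase:.1f}%")   # 25.6%
print(f"VM throughput advantage: {qps_gap:.1f}%")     # 29.2%
```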

Root-cause analysis: The degradation is attributed to architectural differences introduced by the Calico IPIP overlay network. Container-to-host communication traverses a veth pair and the full Linux kernel network stack, generating a higher rate of soft interrupts (approximately 14% more) than in the VM scenario.

Relevant kernel code:

static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
{
    ...
    if (likely(veth_forward_skb(rcv, skb, rq, rcv_xdp)))
        ...
}

static int veth_forward_skb(struct net_device *dev, struct sk_buff *skb,
                           struct veth_rq *rq, bool xdp)
{
    return __dev_forward_skb(dev, skb) ?: xdp ?
           veth_xdp_rx(rq, skb) :
           netif_rx(skb); // enqueue to the per-CPU backlog and raise NET_RX_SOFTIRQ
}

/* Called with irq disabled */
static inline void ____napi_schedule(struct softnet_data *sd,
                                    struct napi_struct *napi)
{
    list_add_tail(&napi->poll_list, &sd->poll_list);
    __raise_softirq_irqoff(NET_RX_SOFTIRQ); // trigger soft‑interrupt
}

The call chain veth_xmit → veth_forward_skb → netif_rx → ____napi_schedule → __raise_softirq_irqoff shows that every packet sent through a veth interface ultimately raises a NET_RX soft interrupt on the receiving side, explaining the increased CPU overhead.
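The extra soft-interrupt load described above can be observed on a Linux host through /proc/softirqs, whose NET_RX row holds per-CPU counters. A minimal parsing sketch (the helper name and two-CPU sample are illustrative; the file format is the standard one):

```python
def parse_net_rx(softirqs_text: str) -> int:
    """Sum NET_RX softirq counts across all CPUs from /proc/softirqs content."""
    for line in softirqs_text.splitlines():
        if line.strip().startswith("NET_RX:"):
            return sum(int(n) for n in line.split()[1:])
    raise ValueError("NET_RX row not found")

# Illustrative two-CPU sample in the /proc/softirqs format
sample = """\
                    CPU0       CPU1
          HI:          5          3
      NET_TX:        120         98
      NET_RX:      40321      39874
"""
print(parse_net_rx(sample))  # 80195
```

On a live host, taking two snapshots of /proc/softirqs before and after a wrk run and diffing the NET_RX totals gives the soft-interrupt cost of the benchmark.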

Optimization strategies:

ipvlan L2 mode: Bypasses the overlay by attaching containers directly to the host's Ethernet interface, eliminating the extra soft-interrupt path.

ipvlan L3 mode: Uses the host as a router, allowing cross-subnet container communication with a shorter data path.

Cilium: An eBPF-based CNI that reduces iptables overhead and provides high-performance networking, outperforming Calico in both QPS and CPU usage.
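One intuition behind the eBPF advantage: iptables evaluates rules sequentially per packet, whereas eBPF maps give constant-time lookups. A toy illustration of that asymptotic difference (this is not Cilium's actual datapath; the rule list and addresses are made up):

```python
import timeit

# A made-up flat rule list, scanned sequentially as iptables would
rules = [("10.0.%d.%d" % (i // 256, i % 256), "ACCEPT") for i in range(10_000)]
rule_map = dict(rules)  # hash-based lookup, analogous to an eBPF map

target = "10.0.39.15"   # the last rule generated: worst case for a linear scan

def linear_scan():
    for ip, verdict in rules:
        if ip == target:
            return verdict

def map_lookup():
    return rule_map.get(target)

assert linear_scan() == map_lookup() == "ACCEPT"
scan_t = timeit.timeit(linear_scan, number=1000)
map_t = timeit.timeit(map_lookup, number=1000)
print(scan_t > map_t)  # True: the O(n) scan loses badly as the rule count grows
```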

Conclusion: While containerization brings agility and resource efficiency, it also adds network complexity that can degrade performance. By adopting underlay networking solutions such as ipvlan or high-performance CNI plugins like Cilium, teams can mitigate soft-interrupt overhead and restore application throughput.

Tags: performance, Kubernetes, network, containerization, soft interrupts, Cilium, ipvlan
Written by Selected Java Interview Questions, a professional Java tech channel sharing common knowledge to help developers fill gaps.
