
Understanding TLS Handshake Overhead and Bandwidth Impact in High‑Concurrency Services

The article analyzes why a high‑concurrency GET service quickly saturates a 100 Mbps uplink due to TLS handshake overhead, demonstrates bandwidth savings by switching to HTTP or using Keep‑Alive, and highlights practical considerations for secure connections.

DevOps Operations Practice

Origin: A high‑concurrency crawling service quickly filled a 100 Mbps uplink even though each request was a simple GET with a small payload. Because the service used a dedicated line, the bandwidth consumption was traced to the TLS handshake, which alone accounted for about 1.27 KB of a 1.68 KB request, resulting in an estimated 262.5 Mbps for 20,000 concurrent requests.
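The back-of-envelope numbers above can be checked with a few lines of Python. This is a sketch: it assumes decimal kilobytes and reads "20,000 concurrent requests" as 20,000 requests per second, so the result lands near, not exactly on, the article's 262.5 Mbps estimate (the small gap comes from rounding conventions).

```python
# Back-of-envelope bandwidth estimate from the article's figures.
# Assumptions: 1 KB = 1000 bytes, and "20,000 concurrent requests"
# is treated as 20,000 requests per second.
request_kb = 1.68    # total size of one HTTPS request, in KB
handshake_kb = 1.27  # TLS handshake share of that request, in KB
rps = 20_000         # requests per second

mbps = request_kb * 1000 * 8 * rps / 1e6
print(f"Estimated uplink usage: {mbps:.1f} Mbps")           # ~268.8 Mbps
print(f"Handshake share: {handshake_kb / request_kb:.0%}")  # ~76%
```

Either way the total is well past a 100 Mbps uplink, and roughly three quarters of it is handshake bytes rather than payload.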

What is the TLS Handshake? HTTPS is HTTP over TLS, and each new TCP connection normally performs a full TLS handshake before any application data flows. During this process the client and server exchange random numbers, the supported cipher suites and TLS version, the server's digital certificate (which carries its public key), and a pre‑master secret from which both sides derive the symmetric session keys. This exchange costs both bandwidth and CPU on every new connection.
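The client's opening message, the ClientHello, already advertises every cipher suite it supports, which is part of why the handshake is not small. Python's ssl module can show that offer locally, without touching the network; a sketch using the standard library's default client context:

```python
import ssl

# Inspect what a default Python client would offer in its ClientHello.
# No network I/O: this only reads the local context's configuration.
ctx = ssl.create_default_context()
ciphers = ctx.get_ciphers()

print(f"Minimum TLS version: {ctx.minimum_version.name}")
for c in ciphers[:5]:
    print(f"  {c['name']} ({c['protocol']})")
print(f"... {len(ciphers)} cipher suites offered in total")
```

The exact list varies with the Python and OpenSSL build, but it illustrates how much negotiation data travels before the first byte of HTTP.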

Solution 1 – Switch to HTTP: Changing the request protocol to plain HTTP eliminates the TLS handshake entirely. The request size drops from 1.68 KB to about 0.4 KB, saving roughly 75 % of the data transferred and noticeably reducing server load at the same concurrency. The trade‑off is that the traffic is no longer encrypted, so this only makes sense when the target serves the same content over HTTP and confidentiality is not required.
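The protocol switch itself is just a URL rewrite. The sketch below uses a hypothetical helper name, to_plain_http, and assumes the target actually serves identical content over plain HTTP:

```python
from urllib.parse import urlsplit, urlunsplit

def to_plain_http(url: str) -> str:
    """Rewrite an https:// URL to http://, leaving host, path and
    query untouched. Only use this for targets known to serve the
    same content over plain HTTP."""
    parts = urlsplit(url)
    if parts.scheme == "https":
        parts = parts._replace(scheme="http")
    return urlunsplit(parts)

print(to_plain_http("https://example.com/api/item?id=42"))
# -> http://example.com/api/item?id=42
```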

Solution 2 – Keep‑Alive for HTTPS: If HTTPS is mandatory, adding the header Connection: keep-alive enables multiple HTTPS requests to reuse the same TCP connection, avoiding a full TLS handshake for each request after the first one. The initial handshake is still required, but subsequent requests incur much lower overhead, making this approach suitable for high‑concurrency scenarios.
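Connection reuse is easy to demonstrate with the standard library alone. The sketch below starts a throwaway local HTTP server and sends three requests over a single TCP connection; with HTTPS the mechanics are identical, the reused connection simply also skips the repeated TLS handshakes after the first request:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps connections alive by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Three requests, ONE TCP connection: no reconnect between them.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
statuses = []
for _ in range(3):
    conn.request("GET", "/", headers={"Connection": "keep-alive"})
    resp = conn.getresponse()
    statuses.append(resp.status)
    resp.read()  # drain the body so the connection can be reused
conn.close()
server.shutdown()
print(statuses)  # [200, 200, 200]
```

Swapping HTTPConnection for HTTPSConnection (or a requests.Session, which pools connections automatically) gives the same reuse behavior over TLS.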

Things to watch out for: Keep‑Alive connections have idle‑timeout limits (Nginx's keepalive_timeout defaults to 75 seconds, Apache's KeepAliveTimeout to 5 seconds). If the crawling program rotates through many proxy IPs, every switch opens a new connection and pays a fresh TLS handshake, so the benefit of Keep‑Alive is limited; in that case plain HTTP remains the most effective way to reduce bandwidth.
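On the server side these timeouts are tunable. A minimal nginx sketch, with illustrative values rather than recommendations:

```nginx
http {
    # Default is 75s; how long an idle keep-alive connection stays open.
    keepalive_timeout  75s;
    # Max requests served over one connection before nginx closes it.
    keepalive_requests 1000;
}
```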

Author: 麦麦麦造

Link: https://juejin.cn/post/7409138396792881186

Source: 稀土掘金


Tags: security, Keep-Alive, TLS, HTTPS, network performance, bandwidth