
Does Upgrading Nginx → Upstream to HTTP/2 Really Boost Performance?

This article details a systematic performance test of Nginx 1.29.x’s new HTTP/2 upstream support, comparing HTTP/1.1, HTTP/2 with keep‑alive, and HTTP/1.0 baselines across various payload sizes, connection‑pool settings, and large‑header scenarios to determine when protocol upgrades yield real throughput or latency gains.


Background and Test Goals

NGINX 1.29.4 added native support for HTTP/2 upstream connections via the proxy_http_version 2 directive, enabling clear‑text h2c communication. The core question is whether upgrading the Nginx → upstream link from HTTP/1.1 to HTTP/2 improves performance when most clients still use HTTP/1.1.

Key constraint: current Nginx HTTP/2 upstream implements only HPACK header compression; multiplexing is not yet available.

Test Focus

Throughput and tail‑latency comparison of three upstream protocol modes.

Impact of keep‑alive connection pooling.

Effect of payload type (small, large, CPU‑intensive, large‑header) on protocol differences.

Overall Architecture

Client (wrk/h2load) ──► Nginx (three instances) ──► backends, all on the same WSL2 host (host network):

  :8090  HTTP/1.1 + keepalive ──► nginx-http1          ──► backend ×4 (:8080‑8083)
  :8091  HTTP/2 h2c           ──► nginx-http2          ──► backend ×4 (:8080‑8083)
  :8092  HTTP/1.0             ──► nginx-no-keepalive   ──► backend ×4 (:8080‑8083)

All containers run with network_mode: host to eliminate NAT overhead, and a 5 ms loopback delay simulates a data‑center network.
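
The article does not spell out how the loopback delay is injected; one common approach, assumed here, is Linux tc/netem on the lo interface (requires the sch_netem module inside the WSL2 kernel):

# Assumption: the 5 ms delay is added with tc/netem; netem delays each direction it sees,
# so check the resulting RTT with ping and adjust the value if needed.
sudo tc qdisc add dev lo root netem delay 5ms
tc qdisc show dev lo          # verify the qdisc is in place
ping -c 3 127.0.0.1           # confirm the effective round-trip time
sudo tc qdisc del dev lo root # remove the delay after testing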

Backend Service: Spring Boot 3 + JDK 25

Four Spring Boot instances expose several endpoints used for the tests.

application.yml (relevant parts)

server:
  port: ${SERVER_PORT:11080}
  http2:
    enabled: true   # enable h2c (Tomcat supports clear-text HTTP/2 natively)
  tomcat:
    threads:
      max: 400
      min-spare: 50
    max-connections: 8192
    connection-timeout: 5s
    keep-alive-timeout: 60s
spring:
  threads:
    virtual:
      enabled: true   # enable Project Loom virtual threads
server.http2.enabled: true – Spring Boot’s embedded Tomcat serves clear‑text HTTP/2 (h2c) without TLS. spring.threads.virtual.enabled: true – each request runs on a lightweight virtual thread.

Backend JVM Options

-server -Xms1G -Xmx1G -XX:+UseG1GC -XX:MaxGCPauseMillis=50
-XX:+UseStringDeduplication -XX:+OptimizeStringConcat
-XX:+DisableExplicitGC -Djava.net.preferIPv4Stack=true
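
The startup commands are not shown in the article; a minimal sketch under the stated setup (host networking, SERVER_PORT taken from the environment, the JVM flags above passed via JAVA_TOOL_OPTIONS) might look like this, with the image name backend-app purely hypothetical:

# Hypothetical image name; four instances on ports 8080-8083 as in the architecture diagram
JVM_FLAGS='-server -Xms1G -Xmx1G -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -XX:+UseStringDeduplication -XX:+OptimizeStringConcat -XX:+DisableExplicitGC -Djava.net.preferIPv4Stack=true'
for p in 8080 8081 8082 8083; do
  docker run -d --name backend-$p --network host \
    -e SERVER_PORT=$p \
    -e JAVA_TOOL_OPTIONS="$JVM_FLAGS" \
    backend-app:latest
done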

Test Endpoints

/ping          ~4 B   pure protocol overhead
/json          ~200 B typical REST API
/data?size=1K   1 KB
/data?size=10K  10 KB
/data?size=50K  50 KB (pivot point)
/data?size=100K 100 KB
/compute        CPU‑intensive (~150 B response)
/info           ~2 KB echo of request headers (HPACK test)
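
Before benchmarking, it is worth confirming that a backend really speaks clear‑text HTTP/2; a quick check with curl (prior‑knowledge h2c, bypassing Nginx) could look like this:

# Talk h2c directly to one backend instance, no Nginx in between
curl --http2-prior-knowledge -s -o /dev/null \
     -w 'protocol=%{http_version} status=%{response_code}\n' \
     http://127.0.0.1:8080/ping                                  # expect protocol=2

curl --http2-prior-knowledge -s 'http://127.0.0.1:8080/data?size=1K' | wc -c   # roughly 1 KB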

Nginx Configuration Details

Three Nginx instances listen on different ports, sharing a common upstream block. The configuration differences are shown per mode.

Common Template

http {
    keepalive_timeout   65;
    keepalive_requests  100000;
    gzip off;

    upstream backend {
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
        server 127.0.0.1:8083;
        # keepalive set per mode
    }

    server {
        # listen set per mode
        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_buffering on;
            proxy_buffer_size 16k;
            proxy_buffers 8 32k;
        }
    }
}

Mode 1 – HTTP/1.1 + keepalive (port 8090)

upstream backend {
    keepalive 256;
    keepalive_requests 100000;
    keepalive_timeout 60s;
}
server {
    listen 8090;
    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # clear Connection header
    }
}

Mode 2 – HTTP/2 h2c + keepalive (port 8091)

upstream backend {
    keepalive 256;
    keepalive_requests 100000;
    keepalive_timeout 60s;
}
server {
    listen 8091;
    location / {
        proxy_http_version 2;   # core: upstream uses HTTP/2 h2c
    }
}

Mode 3 – HTTP/1.0 (no keepalive, baseline) (port 8092)

upstream backend { }
server {
    listen 8092;
    location / {
        proxy_http_version 1.0;
    }
}

Note: when proxy_http_version 2 is used, the Connection header must not be set (HTTP/2 has no Connection header), but the keepalive pool in the upstream block still needs to be configured so upstream connections are reused.
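
The effect of the keepalive pool can be observed from outside Nginx; one rough check (assuming iproute2's ss is available) is to count established connections toward the backend ports while a benchmark runs:

# Count established Nginx -> backend connections during a run.
# With keepalive (ports 8090/8091) the count should stay near the pool size;
# without it (port 8092) sockets churn and pile up in TIME_WAIT instead.
watch -n1 "ss -tn state established \
  '( dport = :8080 or dport = :8081 or dport = :8082 or dport = :8083 )' | wc -l"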

Testing Tools and Scenarios

wrk (8 threads, 30 s per run) for HTTP/1.x and HTTP/2 + keepalive cases.

h2load for native HTTP/2 multiplexing tests (single backend, no Nginx).

wrk Lua script injects ~4 KB of large headers (JWT, session token, tracing IDs) to evaluate HPACK compression; example invocations are sketched below.
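
The exact commands are not given in the article; under the stated parameters they would look roughly like this (the Lua script name large_headers.lua and the h2load request count are hypothetical):

# Through Nginx: 8 threads, 30 s per run, connection count varied per scenario
wrk -t8 -c200 -d30s --latency http://127.0.0.1:8090/ping                 # HTTP/1.1 + keepalive upstream
wrk -t8 -c200 -d30s --latency http://127.0.0.1:8091/ping                 # HTTP/2 h2c upstream
wrk -t8 -c200 -d30s --latency 'http://127.0.0.1:8092/data?size=50K'      # HTTP/1.0 no-keepalive baseline

# Large-header scenario: ~4 KB of JWT/session/tracing headers injected by the Lua script
wrk -t8 -c50 -d30s --latency -s large_headers.lua http://127.0.0.1:8090/info

# Native multiplexing ceiling: h2load straight at one backend, 10 connections x 50 streams
h2load -n 500000 -c 10 -m 50 http://127.0.0.1:8080/ping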

Key Findings

Keepalive is decisive

Without keepalive, throughput drops 40‑63 % and latency spikes, largely because the simulated 5 ms loopback delay makes every new TCP handshake pay an extra round trip.

Throughput (ping endpoint, pure protocol overhead; percentages are H2 relative to H1)

c=10   H1+KA: 13,149 RPS   H2+KA: 11,295 RPS   -14 %
c=200  H1+KA: 46,315 RPS   H2+KA: 33,732 RPS   -27 %

HTTP/1.1 consistently outperforms HTTP/2 for tiny payloads because HPACK adds overhead without any multiplexing benefit.

Small‑to‑medium payloads (≈200 B to 10 KB)

/json (≈200 B)   H2 slightly ahead (+2 %)
/data 1 KB       H1 ahead (H2 −29 %)
/data 10 KB      H1 ahead (H2 −12 %)

Large payloads – the pivot (≥50 KB)

/data 50KB  H2 beats H1 by +2 % to +3 % (RPS) and reduces P99 latency up to 47 % at c=50.
/data 100KB H2 leads by +4 % to +11 % (RPS) and cuts P99 latency by 11‑52 % (up to 4.3× improvement).

Big‑header scenario (≈4 KB request headers)

Throughput is slightly lower for HTTP/2 (‑3 % to ‑7 %), but P99 latency improves about 27 % at concurrency 50, confirming that HPACK compression reduces tail latency.

Multiplexing potential (h2load)

10 connections × 50 streams = 500 concurrent requests → 143,306 RPS
≈12× the single‑stream HTTP/1.1 baseline.

Current Nginx upstream does not support multiplexing, so observed gains are limited to HPACK compression.

Decision Framework

Response size   Throughput advantage   P99 latency advantage
< 1 KB          HTTP/1.1 +++           HTTP/1.1 ++
1‑10 KB         HTTP/1.1 +             HTTP/1.1 + (near parity)
≈50 KB          HTTP/2 +               HTTP/2 ++ (large gain at c=50)
≥100 KB         HTTP/2 ++              HTTP/2 ++++ (up to 4.3× improvement)

For latency‑sensitive services, P99 matters more than average throughput; HTTP/2 provides decisive latency reductions for large payloads.

Conclusions

Upgrading Nginx → upstream to HTTP/2 yields benefits only for large responses or when large request headers are present.

Enabling keepalive in the upstream block is essential; without it performance collapses.

The biggest missing piece is upstream multiplexing; once Nginx adds it, upstream HTTP/2 could approach the gains demonstrated by h2load.

Tags: HTTP2, Nginx, Keepalive, upstream, h2c, h2load