
Why Do 502 Errors Show Up Only on POST? Uncovering an Nginx‑Ingress‑uWSGI Mismatch

After migrating an application to our PaaS platform, intermittent 502 errors appeared on both GET and POST requests. A detailed analysis of Nginx retry behavior, Ingress HTTP version settings, and a uWSGI incompatibility reveals the root cause and the configuration fix.


Specific Phenomenon

After migrating the application to our PaaS platform, intermittent 502 errors appeared; the error screenshots are shown below.

The errors are relatively rare compared to request volume, but they persist and affect callers, so the root cause must be investigated.

Why Only POST Requests Are Seen

You might suspect that only POST shows up because the ELK filter is set to POST, but GET requests produce 502s upstream as well. The difference is that Nginx retries failed GET requests automatically (via proxy_next_upstream), generating log entries like the following.

Because GET is considered idempotent, Nginx retries it on the next upstream when one returns 502, so the retried request usually succeeds and the caller never sees the error; POST, being non-idempotent, is not retried. For our investigation, confirming the cause of the 502s via POST is sufficient, because the cause is the same for both methods.
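In Nginx configuration terms, the retry behavior looks roughly like this (an illustrative sketch, not our exact config; the upstream name is hypothetical):

```nginx
location / {
    proxy_pass http://ingress_upstream;  # hypothetical upstream name

    # Retry the next upstream server on connection errors, timeouts,
    # and 502 responses. By default this applies only to idempotent
    # methods (GET, HEAD, ...); POST is retried only if the
    # "non_idempotent" flag is added explicitly.
    proxy_next_upstream error timeout http_502;
}
```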

Network Topology

When a request enters the cluster, the flow is:

<code>Client Request => Nginx => Ingress => uWSGI</code>

We keep Nginx in front for historical reasons, even though Ingress alone could route the traffic.

Statistical Investigation

Counting the error requests logged by Nginx and by Ingress shows identical numbers of 502s, which indicates the failures originate between Ingress and uWSGI rather than between Nginx and Ingress.
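The comparison can be sketched with a small shell helper (the log paths and field layout are assumptions; the snippet assumes the default "combined" access-log format, where the status code is the 9th whitespace-separated field):

```shell
# Count 502 responses in an access log (combined format assumed).
count_502() {
  awk '$9 == 502 { n++ } END { print n + 0 }' "$1"
}

# Tiny self-contained demo with synthetic log lines:
printf '%s\n' \
  '1.2.3.4 - - [01/Jan/2024:00:00:00 +0800] "POST /api HTTP/1.1" 502 0 "-" "-"' \
  '1.2.3.4 - - [01/Jan/2024:00:00:01 +0800] "GET /api HTTP/1.1" 200 5 "-" "-"' \
  > /tmp/nginx_access.log
printf '%s\n' \
  '10.0.0.1 - - [01/Jan/2024:00:00:00 +0800] "POST /api HTTP/1.1" 502 0 "-" "-"' \
  > /tmp/ingress_access.log

n=$(count_502 /tmp/nginx_access.log)
i=$(count_502 /tmp/ingress_access.log)
echo "nginx=$n ingress=$i"  # identical counts point at the Ingress -> uWSGI hop
```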

Packet Capture

After other statistical methods failed to reveal a pattern, we resorted to packet capture.
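A capture on the Ingress side can be taken with tcpdump (the interface and upstream port here are assumptions for illustration; substitute your own):

```shell
# capture traffic to/from the uWSGI upstream port into a pcap file
tcpdump -i any -nn -s0 port 8000 -w /tmp/uwsgi.pcap

# later, replay the capture and print payloads to inspect the HTTP exchange
tcpdump -r /tmp/uwsgi.pcap -nn -A
```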

Captured request log:

Capture result:

The TCP connection between Ingress and uWSGI is reused: Ingress speaks HTTP/1.1 and sends a second request on the same connection, but uWSGI does not support HTTP/1.1 keep-alive and closes the connection after the first response. The second request therefore fails, and Ingress returns 502 to the client. Failed GETs are retried transparently, while POSTs are not, which explains why only POST 502s show up in the statistics.

Ingress Configuration Study

Ingress-nginx defaults to HTTP/1.1 for upstream connections, but our uWSGI listens on an http-socket, which behaves like HTTP/1.0 and closes the connection after each response. This protocol mismatch is what produces the intermittent 502 errors.
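In uWSGI terms, the difference comes down to which socket option the app server is started with (a sketch; the listen address is a placeholder):

```ini
[uwsgi]
; http-socket speaks plain HTTP without keep-alive (HTTP/1.0 semantics),
; so a reused HTTP/1.1 connection from Ingress fails on the second request.
http-socket = 0.0.0.0:8000

; Alternative fix (uWSGI >= 2.0.16): serve HTTP/1.1 with keep-alive,
; matching ingress-nginx's default proxy-http-version of 1.1.
; http11-socket = 0.0.0.0:8000
```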

Solution: explicitly set the HTTP version used by Ingress.

<code>{% if keepalive_enable is sameas true %}
nginx.ingress.kubernetes.io/proxy-http-version: "1.1"
{% else %}
nginx.ingress.kubernetes.io/proxy-http-version: "1.0"
{% endif %}</code>
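Rendered for a service without keep-alive, the annotation ends up on the Ingress resource like this (all names and hosts are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                      # placeholder name
  annotations:
    # match uWSGI's http-socket, which only speaks HTTP/1.0
    nginx.ingress.kubernetes.io/proxy-http-version: "1.0"
spec:
  rules:
    - host: my-app.example.com      # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 8000
```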

By aligning the protocol versions, the 502 errors disappear.

Summary

A single packet capture resolved an issue that statistics alone could not expose. For end-to-end fault isolation, first compare the 502 counts seen by Nginx and by Ingress: if they match, the fault lies on the Ingress-to-uWSGI hop; if they differ, look between Nginx and Ingress. This approach quickly pinpoints the faulty link in a multi-hop request chain.

Kubernetes, troubleshooting, Nginx, Ingress, 502 error, uWSGI
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
