Why POST Requests Get 502 After PaaS Migration – Nginx, Ingress & uWSGI Explained
After moving an application to a PaaS platform, intermittent 502 errors appear mainly on POST requests; the article analyzes Nginx retry behavior, Ingress‑uwsgi protocol mismatches, packet‑capture findings, and provides a configuration fix to resolve the issue.
Specific Phenomenon
After migrating the application to our PaaS platform, intermittent 502 errors appear; the error screenshots are shown below.
The errors are rare compared to request volume but persist, affecting callers and requiring investigation.
Why Only POST Requests Are Seen
Although the ELK query filters on POST only, GET requests can also return 502; Nginx retries failed GET requests, generating log entries like the one below.
The retry comes from Nginx's default <code>proxy_next_upstream</code> behavior (see nginx.org). Because GET is idempotent, Nginx retries it against the next upstream when one returns 502, while non-idempotent POST is not retried. For this investigation, examining POST alone is sufficient, since the cause is the same for both.
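A sketch of the retry-related directives (the upstream name is illustrative, and including <code>http_502</code> in the retry conditions is an assumption implied by the observed GET retries; the stock default is <code>proxy_next_upstream error timeout</code>):

```nginx
location / {
    proxy_pass http://app_upstream;   # upstream name is illustrative

    # Retry the next server on connect errors, timeouts and (as the
    # observed log behavior implies here) 502 responses. Non-idempotent
    # methods such as POST are never retried unless the
    # "non_idempotent" parameter is added to this directive.
    proxy_next_upstream error timeout http_502;
}
```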
Network Topology
Request flow into the cluster:
<code>user request => Nginx => Ingress => uwsgi</code>
Ingress is retained for historical reasons even though Nginx is also present.
Statistical Investigation
The 502 counts in the Nginx and Ingress logs match, meaning Nginx is merely relaying errors generated further downstream; the problem therefore lies between Ingress and uwsgi:
Ingress <=> uwsgi Packet Capture
After other methods failed, a packet capture was performed.
The capture shows TCP connection reuse: Ingress speaks HTTP/1.1 and sends a second HTTP request on the same TCP connection, but uwsgi does not support HTTP/1.1 keep-alive and has already closed the connection, so the second request fails and Ingress returns 502. GET requests are retried and eventually succeed; POST requests are not, which explains why only POST 502s appear in the statistics.
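The failure mode seen in the capture can be reproduced with a small socket sketch (ports, paths, and payloads are made up for illustration): a server that, like uwsgi's http-socket, closes the TCP connection after each response, and a client that reuses the connection for a second request, as an HTTP/1.1 proxy would.

```python
import socket
import threading
import time

def one_shot_server(port):
    """Mimic uwsgi's http-socket: serve one request per TCP
    connection, then close it (no HTTP/1.1 keep-alive)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        conn.recv(65536)  # read (and discard) the request
        conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
        conn.close()      # the connection is NOT kept alive

threading.Thread(target=one_shot_server, args=(18081,), daemon=True).start()
time.sleep(0.2)  # give the server time to start listening

cli = socket.create_connection(("127.0.0.1", 18081))
cli.sendall(b"POST /a HTTP/1.1\r\nHost: x\r\nContent-Length: 0\r\n\r\n")
first = cli.recv(65536)   # first request on the connection succeeds

# Reuse the same connection, as an HTTP/1.1 proxy would:
try:
    cli.sendall(b"POST /b HTTP/1.1\r\nHost: x\r\nContent-Length: 0\r\n\r\n")
    second = cli.recv(65536)  # server already closed: empty read or reset
except ConnectionError:
    second = b""

print(first.split(b"\r\n")[0])  # the one successful response line
print(second)                   # b'' -> the reused connection is dead
```

From the proxy's point of view this dead second request is exactly what turns into a 502, and only a retry (which Ingress performs for GET but not POST) hides it from the caller.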
Ingress Configuration Learning
Ingress defaults to HTTP/1.1 toward its upstreams, but our uwsgi listens with <code>http-socket</code> rather than <code>http11-socket</code>, so it speaks HTTP without keep-alive. This mismatch produces the unexpected 502 errors.
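In uwsgi configuration terms the difference looks roughly like this (the file layout and listen address are illustrative):

```ini
[uwsgi]
; http-socket speaks plain HTTP with no keep-alive: the connection is
; closed after each response, so a proxy that reuses it will fail.
http-socket = 0.0.0.0:8000

; http11-socket (available in newer uWSGI releases) enables HTTP/1.1
; keep-alive and would tolerate connection reuse:
; http11-socket = 0.0.0.0:8000
```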
Solution: force the HTTP version in Ingress annotations, for example:
<code>nginx.ingress.kubernetes.io/proxy-http-version: "1.0"</code>
Setting "1.0" disables connection reuse toward the upstream, matching uwsgi's <code>http-socket</code>; "1.1" is appropriate only when the upstream supports keep-alive (for example, uwsgi's <code>http11-socket</code>).
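In full manifest form the fix might look like this (the resource names, host, and port are all placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    # Match uwsgi's http-socket, which has no keep-alive support:
    nginx.ingress.kubernetes.io/proxy-http-version: "1.0"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-svc
            port:
              number: 8000
```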
Conclusion
The packet capture quickly identified the root cause. When troubleshooting a chain that includes both Nginx and Ingress, compare the error counts at each hop: if they match, the errors originate at or below Ingress, so focus on the Ingress-to-upstream configuration rather than on Nginx. This approach speeds up fault isolation across multiple hops.
Efficient Ops
This public account is maintained by Xiaotianguo and friends and regularly publishes widely read original technical articles. We focus on operations transformation and will accompany you throughout your operations career, growing together.