
What Nginx Can Do: Reverse Proxy, Load Balancing, HTTP Server, and More

This article explains the capabilities of Nginx without third‑party modules, covering reverse proxy, load balancing strategies, static HTTP serving, dynamic/static separation, and forward proxy, and provides concrete configuration examples for each feature.


This article focuses on what Nginx can handle without loading third‑party modules. It is not exhaustive, since a great many modules exist; it simply reflects the author's own experience.

Reverse Proxy

Reverse proxy is one of the most common uses of Nginx. It receives external requests and forwards them to internal servers, returning the response to the client. A simple implementation is shown below:

server {
    listen       80;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host:$server_port;
    }
}

After saving the configuration and starting Nginx, requests to http://localhost will be proxied to http://localhost:8080.
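In practice the backend usually also needs the original client address, which is otherwise lost behind the proxy. A common extension of the location block above (the X-Real-IP and X-Forwarded-For header names are conventions the backend must be configured to read):

```nginx
location / {
    proxy_pass http://localhost:8080;
    proxy_set_header Host $host:$server_port;
    # pass the real client address to the backend
    proxy_set_header X-Real-IP $remote_addr;
    # append each proxy hop to the forwarding chain
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```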

Load Balancing

Nginx can distribute traffic among multiple backend servers. It ships with three built‑in strategies (round robin, weighted, ip_hash) and supports two popular third‑party ones (fair, url_hash).

RR (round robin, the default)

upstream test {
    server localhost:8080;
    server localhost:8081;
}
server {
    listen       81;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        proxy_pass http://test;
        proxy_set_header Host $host:$server_port;
    }
}

The core upstream definition is:

upstream test {
    server localhost:8080;
    server localhost:8081;
}

Even if one backend (e.g., port 8081) is unavailable, Nginx automatically skips it, ensuring high availability.
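How quickly a dead backend is taken out of rotation can be tuned per server. A sketch using the built-in max_fails/fail_timeout parameters plus a spare backup server (the third port and the values shown are illustrative):

```nginx
upstream test {
    # take a server out of rotation for 30s after 3 failed attempts
    server localhost:8080 max_fails=3 fail_timeout=30s;
    server localhost:8081 max_fails=3 fail_timeout=30s;
    # only receives traffic when all primary servers are down
    server localhost:8082 backup;
}
```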

Weight

Assigns a weight to each server; traffic is distributed proportionally.

upstream test {
    server localhost:8080 weight=9;
    server localhost:8081 weight=1;
}

In ten requests, roughly nine will go to 8080 and one to 8081.

ip_hash

Ensures a client IP always reaches the same backend, useful for session‑based applications.

upstream test {
    ip_hash;
    server localhost:8080;
    server localhost:8081;
}

fair (third‑party)

Distributes requests based on backend response time, preferring faster servers.

upstream backend {
    fair;
    server localhost:8080;
    server localhost:8081;
}

url_hash (third‑party)

Hashes the request URL so the same URL always goes to the same backend, useful for caching.

upstream backend {
    hash $request_uri;
    hash_method crc32;
    server localhost:8080;
    server localhost:8081;
}
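For newer deployments it is worth knowing that nginx 1.7.2 and later ship a built-in hash directive covering the same URL-affinity use case without any third-party module; a minimal sketch (the consistent flag enables ketama consistent hashing, which minimizes key remapping when servers are added or removed):

```nginx
upstream backend {
    # built-in since nginx 1.7.2; no hash_method directive needed
    hash $request_uri consistent;
    server localhost:8080;
    server localhost:8081;
}
```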

Note: fair and url_hash require third‑party modules, which are not covered here.

HTTP Server

Nginx can serve static files directly. The following configuration serves files from e:\wwwroot on port 80.

server {
    listen       80;
    server_name  localhost;
    client_max_body_size 1024M;
    location / {
        root   e:\wwwroot;
        index  index.html;
    }
}
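Static serving is usually paired with on-the-fly compression. A sketch adding the built-in gzip module to the server above (the size threshold and MIME type list are illustrative, not prescriptive):

```nginx
server {
    listen       80;
    server_name  localhost;
    # compress text-based responses before sending
    gzip on;
    gzip_min_length 1k;
    gzip_types text/css application/javascript application/json;

    location / {
        root   e:\wwwroot;
        index  index.html;
    }
}
```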

Static/Dynamic Separation

Static resources (HTML, images, CSS, JS) are served by Nginx, while dynamic requests (e.g., JSP) are proxied to a backend such as Tomcat.

upstream test {
    server localhost:8080;
    server localhost:8081;
}

server {
    listen       80;
    server_name  localhost;

    location / {
        root   e:\wwwroot;
        index  index.html;
    }
    # static files
    location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
        root    e:\wwwroot;
    }
    # dynamic files
    location ~ \.(jsp|do)$ {
        proxy_pass  http://test;
    }
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   e:\wwwroot;
    }
}
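Since Nginx now owns the static files, it can also set client-side cache headers on them. A sketch of the static location extended with the built-in expires directive (the 7-day lifetime is an arbitrary example value):

```nginx
# static files, cached by the browser for a week
location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
    root    e:\wwwroot;
    expires 7d;
}
```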

Forward Proxy

A forward proxy sits between the client and the origin server. The configuration below enables Nginx to act as a forward proxy.

resolver 114.114.114.114 8.8.8.8;
server {
    resolver_timeout 5s;
    listen 81;
    access_log  e:\wwwroot\proxy.access.log;
    error_log   e:\wwwroot\proxy.error.log;
    location / {
        proxy_pass http://$host$request_uri;
    }
}

After setting the DNS resolvers and listening port, browsers or proxy tools can use the server's IP and port as a forward proxy. Note that this configuration only handles plain HTTP; proxying HTTPS requires the CONNECT method, which stock Nginx does not support without a third-party patch.

Final Notes

Common commands for managing Nginx:

/etc/init.d/nginx start/restart   # start or restart the Nginx service
/etc/init.d/nginx stop            # stop the Nginx service
/etc/nginx/nginx.conf             # location of the main config file
nginx -t                          # test the configuration for syntax errors

Nginx supports hot reload; after modifying the configuration, run:

nginx -s reload

to apply changes without stopping the service.

Tags: load balancing, configuration, DevOps, HTTP server, reverse proxy
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
