
Comprehensive Guide to Nginx: Overview, Core Configuration, and Practical Deployment

This article provides a detailed introduction to Nginx, covering its architecture, core configuration directives, installation steps, reverse proxy and load‑balancing setups, caching, HTTPS, CORS handling, gzip compression, and practical examples to help developers and operations engineers configure and use Nginx effectively.

Top Architect

Introduction

As a front-end developer you may occasionally be asked to modify the Nginx configuration on a server. This guide aims to remove that obstacle so that Nginx becomes one more tool you are comfortable with.

Nginx Overview

Nginx is an open-source, high-performance, highly reliable web server and reverse proxy. It supports hot deployment, can run for months without a restart, and consumes little memory while handling tens of thousands of concurrent connections (a single machine is commonly cited as handling around 50,000).

Key Features

High concurrency and performance

Modular architecture for easy extension

Asynchronous, non‑blocking event‑driven model (similar to Node.js)

Hot deployment and graceful upgrades

Fully open‑source with a thriving ecosystem

Typical Use Cases

Static file serving

Reverse proxy (including caching and load balancing)

API services (e.g., OpenResty)

Installation on CentOS 7

Run the following command to install Nginx via yum:

yum install nginx -y

After installation, you can view the installed files with:

# Nginx configuration files
/etc/nginx/nginx.conf          # main configuration file
/etc/nginx/nginx.conf.default
# Executable binaries
/usr/sbin/nginx
/usr/bin/nginx-upgrade
# Library and module directories
/usr/lib64/nginx/modules
# Documentation
/usr/share/doc/nginx-1.16.1/README
# Static resources
/usr/share/nginx/html/index.html
# Log directory
/var/log/nginx

Core Configuration Sections

The main sections in nginx.conf are main, events, and http. The http block contains most of the server-level directives such as log_format, access_log, sendfile, and the server blocks that define virtual hosts.
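A minimal skeleton showing how these sections nest (paths and values are illustrative, not a recommended production setup):

```nginx
user  nginx;                         # main context
worker_processes  auto;

events {
    worker_connections  1024;        # per-worker connection limit
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer"';
    access_log  /var/log/nginx/access.log  main;
    sendfile    on;

    server {                         # one virtual host
        listen       80;
        server_name  localhost;
        location / {
            root   /usr/share/nginx/html;
            index  index.html;
        }
    }
}
```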

Important Directives

user: defines the user and group for worker processes.

worker_processes: number of worker processes (often set to auto).

worker_connections: maximum concurrent connections per worker (set in the events block).

keepalive_timeout: timeout for keep-alive connections.

server_name: virtual host name matching rules (exact, wildcard, regex).

location: URL matching with modifiers (=, ^~, ~, ~*).

proxy_pass: forwards requests to an upstream server.

upstream: defines a pool of backend servers for load balancing.
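A sketch of how the location modifiers differ (the URIs are made up for illustration). Nginx checks exact matches (=) first, then prefix matches (^~ stops the search), then regexes (~ case-sensitive, ~* case-insensitive) in order of appearance, and finally falls back to the longest matching plain prefix:

```nginx
server {
    listen 80;
    server_name example.com *.example.com;   # exact and wildcard names

    location = /health {               # exact match only: /health
        return 200 "ok";
    }
    location ^~ /static/ {             # prefix match; regexes below are skipped
        root /var/www;
    }
    location ~* \.(png|jpe?g|gif)$ {   # case-insensitive regex
        expires 7d;
    }
    location / {                       # catch-all prefix
        proxy_pass http://127.0.0.1:8080;
    }
}
```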

Reverse Proxy Configuration

Define an upstream block and use proxy_pass to forward traffic:

upstream backend {
    server 121.42.11.34:8080 weight=2 max_fails=3 fail_timeout=10s;
    keepalive 32;
}

server {
    listen 80;
    server_name proxy.example.com;
    location /proxy {
        proxy_pass http://backend;
        # Required for the upstream keepalive pool to be used
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        # Pass the original client address to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

This setup forwards requests for proxy.example.com/proxy to the backend pool. The proxy_set_header directives pass the original client IP along; without them the backend would only see Nginx's own address.

Load Balancing Strategies

Nginx supports several load‑balancing algorithms:

Round‑robin (default)

IP hash: binds a client IP to a specific backend, useful for session persistence.

Least connections: sends requests to the server with the fewest active connections.

Hash: custom hash key (e.g., $request_uri) for deterministic routing.

Example using least_conn :

upstream demo {
    least_conn;
    server 121.42.11.34:8020;
    server 121.42.11.34:8030;
    server 121.42.11.34:8040;
}

server {
    listen 80;
    server_name balance.example.com;
    location /balance/ {
        proxy_pass http://demo;
    }
}
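For session persistence, the same pool can use ip_hash instead; a sketch with the same example backends:

```nginx
upstream demo {
    ip_hash;                         # same client IP -> same backend
    server 121.42.11.34:8020;
    server 121.42.11.34:8030;
    server 121.42.11.34:8040 down;   # temporarily out of rotation
                                     # (ip_hash does not allow "backup" servers)
}
```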

Caching with proxy_cache

Configure a shared cache zone and enable caching for upstream responses:

proxy_cache_path /etc/nginx/cache_temp levels=2:2 keys_zone=cache_zone:30m max_size=2g inactive=60m use_temp_path=off;

upstream cache_server {
    server 121.42.11.34:1010;
    server 121.42.11.34:1020;
}

server {
    listen 80;
    server_name cache.example.com;
    location / {
        proxy_cache cache_zone;
        proxy_cache_valid 200 5m;
        proxy_cache_key $request_uri;
        add_header Nginx-Cache-Status $upstream_cache_status;
        proxy_pass http://cache_server;
    }
}

The response header Nginx-Cache-Status shows cache status (MISS, HIT, etc.).
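Sometimes specific requests need to skip the cache. A common sketch uses proxy_cache_bypass and proxy_no_cache; the nocache query parameter is an arbitrary convention chosen for this example:

```nginx
location / {
    proxy_cache        cache_zone;
    proxy_cache_key    $request_uri;
    proxy_cache_bypass $arg_nocache $http_authorization;  # fetch fresh from upstream
    proxy_no_cache     $arg_nocache $http_authorization;  # and do not store the response
    proxy_pass         http://cache_server;
}
```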

HTTPS Configuration

Enable SSL by specifying the certificate and key files:

server {
    listen 443 ssl http2;
    server_name lion.club;
    ssl_certificate /etc/nginx/https/lion.club_bundle.crt;
    ssl_certificate_key /etc/nginx/https/lion.club.key;
    ssl_protocols TLSv1.2 TLSv1.3;  # TLSv1 and TLSv1.1 are deprecated and should stay disabled
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
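Plain HTTP traffic is usually redirected to HTTPS with a second server block alongside the one above:

```nginx
server {
    listen 80;
    server_name lion.club;
    return 301 https://$host$request_uri;   # permanent redirect to HTTPS
}
```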

CORS Handling

To allow cross‑origin requests, add the appropriate response headers in the location block:

location /api/ {
    add_header Access-Control-Allow-Origin "*";
    add_header Access-Control-Allow-Methods "GET, POST, OPTIONS";
    add_header Access-Control-Allow-Headers "Authorization,Content-Type";
    if ($request_method = OPTIONS) {
        return 204;
    }
    proxy_pass http://backend;
}
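A wildcard origin is fine for public APIs, but browsers refuse it when credentials are involved. A common sketch for echoing back only whitelisted origins uses a map block (the domain names are placeholders); when $cors_origin evaluates to an empty string, add_header emits nothing:

```nginx
# In the http block: resolve the request's Origin header to an allowed value
map $http_origin $cors_origin {
    default                      "";
    "https://app.example.com"    $http_origin;
    "https://admin.example.com"  $http_origin;
}

server {
    listen 80;
    server_name api.example.com;

    location /api/ {
        add_header Access-Control-Allow-Origin      $cors_origin;
        add_header Access-Control-Allow-Methods     "GET, POST, OPTIONS";
        add_header Access-Control-Allow-Headers     "Authorization,Content-Type";
        add_header Access-Control-Allow-Credentials "true";
        if ($request_method = OPTIONS) {
            return 204;
        }
        proxy_pass http://backend;
    }
}
```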

Gzip Compression

Enable gzip to reduce bandwidth for text resources:

gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_vary on;
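Two more directives are often added to the block above; a sketch with illustrative values:

```nginx
gzip_min_length 1k;   # skip tiny responses where gzip overhead outweighs the savings
gzip_proxied any;     # also compress responses to requests arriving via a proxy
```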

Process Model and Reload Mechanism

Nginx runs a master process that manages multiple worker processes. When a configuration reload is triggered (e.g., nginx -s reload ), the master validates the new configuration, spawns new workers with the updated settings, and gracefully shuts down the old workers, ensuring zero‑downtime deployment.

Modular Architecture

Nginx’s functionality is split into a core and loadable modules, allowing developers to extend the server without affecting the core stability. Modules are independent, promoting low coupling and high cohesion.

Conclusion

After reading this guide you should have a solid understanding of Nginx’s architecture, core directives, and practical configurations such as reverse proxy, load balancing, caching, HTTPS, CORS, and gzip compression, enabling you to deploy and maintain robust web services.

Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
