Why Nginx Is the Ultimate High‑Performance Web Server and How to Configure It

Nginx is a high‑performance HTTP server and mail proxy that, with careful tuning, can approach a million QPS on a single machine thanks to its multi‑process architecture, efficient use of CPU cores, and fine‑grained configuration. This guide explains its design, core processes, configuration syntax, common directives, and practical usage scenarios for developers and ops engineers.

ELab Team

Introduction

Nginx is a high‑performance HTTP server that can also act as a mail proxy. It uses little memory, has strong concurrency, is very stable, and offers a rich module ecosystem and flexible configuration, making it indispensable in modern Internet systems.

Performance Ceiling

As a web server, Nginx is often considered the “performance ceiling”: with careful tuning, a single machine has been reported to approach a million QPS. This headroom is why major Chinese internet companies place Nginx at their gateways to handle essentially all online traffic.

Architecture Design

CPU frequency has plateaued for over a decade, so performance gains now come from increasing core counts. Single‑process, single‑threaded software gains nothing from the extra cores; to improve QPS, software must fully utilize multiple CPU cores.

To exploit multiple cores, Nginx adopts a multi‑process architecture. The master process reads the configuration, binds ports, and creates worker processes; the workers handle most of the logic, such as network requests, disk I/O, and communication with other services.

master process: reads configuration, binds ports, creates child processes;

worker process: handles network requests, I/O, etc.;

cache manager / cache loader: cache‑related logic.

When Nginx starts, it creates a master process, which spawns the worker processes and then mostly sleeps, consuming minimal resources. Under high concurrency each worker runs continuously and can occupy an entire CPU core; the common setting worker_processes auto creates one worker per core.
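A minimal sketch of the relevant directives (the affinity bitmasks below are illustrative, for a 4‑core machine):

```nginx
# One worker per CPU core; "auto" asks Nginx to detect the core count.
worker_processes  auto;

# Optionally pin each worker to a specific core to avoid cache-unfriendly
# migrations between cores (adjust the bitmasks to your hardware).
worker_cpu_affinity  0001 0010 0100 1000;
```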

Workers inherit the listening sockets from the master, allowing multiple workers to listen on the same port.

The master/worker model also enables features such as hot upgrades.

Maximizing CPU Utilization

Beyond architecture, Nginx can raise the scheduling priority of its worker processes so that the Linux scheduler grants them longer time slices, further increasing CPU utilization.
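This is controlled by the worker_priority directive, which sets the nice value of worker processes; the value below is illustrative:

```nginx
# Lower nice value = higher scheduling priority; the range is -20 to 20,
# as with the nice(1) command. -10 is an illustrative choice.
worker_priority  -10;
```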

To achieve high performance, you need both a good architecture and careful detail handling.

Getting Started

Nginx’s configuration consists of a main file and auxiliary files, usually located under the conf directory. Everything from a # character to the end of the line is a comment.

Each configuration item has a directive and parameters, terminated by a semicolon. Example:

# This line is a comment, the next line defines the error_page directive
error_page   500 502 503 504  /50x.html;

Configuration Directives

Directives are either simple (parameters are simple strings) or complex (contain a block delimited by {} that can include nested directives).

events {
    worker_connections   1024;
}

Directive Parameters

Parameters are separated by spaces or tabs and form one or more tokens; a parameter containing spaces or special characters can be enclosed in quotes.

Directive Contexts

Directives such as http, server, location, and mail define contexts that can be nested. Example:

# main context
user  nginx;
worker_processes  1;
error_log  logs/error.log  info;

events {
    worker_connections  1024;
}

http {
    server {
        listen          80;
        server_name     www.example.com;
        location / {
            index index.html;
        }
    }
}

mail {
    auth_http  127.0.0.1:80/auth.php;
    pop3_capabilities  "TOP"  "USER";
    imap_capabilities  "IMAP4rev1"  "UIDPLUS";

    server {
        listen 110;
        protocol   pop3;
        proxy      on;
    }
}

In the main context you configure items unrelated to specific business logic, such as error_log, worker_processes, user, and events (e.g., worker_connections).

Common Directives

include – imports other configuration files, useful for splitting complex configurations.

http {
    server {
        listen          80;
        server_name     www.example.com;
        location / {
            index index.html;
        }
        include /etc/nginx/conf.d/*.conf;
    }
}

server – defines a virtual server. Each server block can have its own listen and server_name directives.

server {
    listen 80;
    server_name a.com;

    location / {
        proxy_pass https://www.baidu.com;
    }
}
server {
    listen 80;
    server_name b.com;

    location / {
        proxy_pass https://www.google.com;
    }
}

listen – configures the IP and port a virtual server listens on. Examples:

# listen only on 127.0.0.1:8000
listen 127.0.0.1:8000;
# default port 80
listen 127.0.0.1;
# listen on all IPs, port 8000
listen 8000;
# default server on port 80
listen 80 default_server;

server_name – sets the domain names for a virtual server, supporting exact names, wildcards, and regular expressions.

server_name myserver.com www.myserver.com;
# wildcard
server_name myserver.* *.myserver.com;
# regex
server_name ~^(?<www>.+)\.example\.org$;

When multiple server_name values match a request, Nginx selects one in this priority order: exact name first, then the longest wildcard starting with an asterisk (e.g. *.myserver.com), then the longest wildcard ending with an asterisk (e.g. myserver.*), and finally the first matching regular expression in configuration order.
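For example, given the illustrative virtual servers sketched below, a request for mail.example.org is tested against them in the numbered order:

```nginx
server { server_name example.org;        }  # 1. exact name
server { server_name *.example.org;      }  # 2. longest wildcard starting with *
server { server_name mail.*;             }  # 3. longest wildcard ending with *
server { server_name ~^mail\..+\.org$;   }  # 4. first matching regex
```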

Location Matching

The location directive determines how requests are processed. Syntax:

location [ = | ~ | ~* | ^~ ] uri {
    ...
}

Five matching types exist: prefix (no modifier), exact (=), prefix with regex skip (^~), case‑sensitive regex (~), and case‑insensitive regex (~*). Nginx first checks exact (=) matches; otherwise it finds the longest matching prefix — if that prefix carries ^~, regex checking is skipped and the prefix location is used. Otherwise the regex locations are tried in configuration order and the first match wins; if no regex matches, the longest matching prefix is used.
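A sketch illustrating how the modifiers interact (the paths are illustrative):

```nginx
location = /            { }  # matches only "/" exactly
location ^~ /static/    { }  # prefix match; if longest, skips regex checks
location ~ \.php$       { }  # case-sensitive regex
location ~* \.(jpg|png)$ { } # case-insensitive regex
location /              { }  # catch-all prefix, used when nothing else matches
```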

Rewrite and Proxy

rewrite

The rewrite directive can appear in server, location, and if blocks to modify the request URI. Syntax:

rewrite regex replacement [last|break|redirect|permanent];

proxy_pass

The proxy_pass directive forwards requests to another server and can be used inside location blocks (including if blocks within a location). Syntax:

proxy_pass URL;
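A sketch showing the rewrite flags and proxy_pass in context (all URIs and the upstream address are illustrative):

```nginx
server {
    listen 80;

    # permanent: stop processing and reply with a 301 to the new URI
    rewrite ^/old-docs/(.*)$ /docs/$1 permanent;

    location /app {
        # break: stop rewriting and process the new URI within this location
        rewrite ^/app/(.*)$ /$1 break;
        proxy_pass http://127.0.0.1:3000;
    }
}
```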

Use Cases

Domain → Domain

Proxy requests for one domain to another domain.

server {
    listen 80;
    server_name www.baidu.com;
    location / {
        proxy_pass http://www.google.com;
    }
}

Domain → Local IP

Proxy a domain to a service on a local IP address.

server {
    listen 80;
    server_name www.baidu.com;
    location / {
        proxy_pass http://127.0.0.1:8001;
    }
}

Path → Domain

Route different paths to different domains.

server {
    listen 80;
    server_name www.baidu.com;

    location ^~ /to_google {
        proxy_http_version 1.1;
        # break: stop rewriting here, so Nginx does not re-match locations
        rewrite .* / break;
        proxy_pass http://google.com/;
    }
}
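A related pitfall is the trailing slash in proxy_pass: when the URL includes a URI part, the matched location prefix is replaced by it; without one, the original request URI is passed through unchanged. A sketch (upstream addresses are illustrative):

```nginx
location /api/ {
    # With a URI part ("/"): /api/users is proxied as /users
    proxy_pass http://127.0.0.1:8080/;
}

location /raw/ {
    # Without a URI part: /raw/users is proxied as /raw/users
    proxy_pass http://127.0.0.1:8080;
}
```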

API Cross‑Origin

Handle CORS for frontend development.

server {
    listen 80;

    location ^~ /api {
        proxy_pass http://example.com;

        # Browsers reject the "*" wildcard when credentials are allowed,
        # so list the methods explicitly.
        add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS";
        add_header Access-Control-Max-Age 3600;
        add_header Access-Control-Allow-Credentials true;
        add_header Access-Control-Allow-Origin $http_origin;

        if ($request_method = OPTIONS) {
            return 200;
        }
    }
}

Conclusion

Nginx’s strong architecture and meticulous detail handling give it exceptional performance.

Its directive‑based configuration is easy to learn compared to traditional programming languages.

Its proxying capabilities also make it a handy tool in day‑to‑day development.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.
