30 Must‑Know Nginx Interview Questions to Ace Your Linux Cloud Job
A comprehensive collection of 30 Nginx interview questions—from basic concepts and commands to advanced performance tuning and high‑availability—organized by difficulty level, each with detailed explanations, best‑practice answers, and ready‑to‑use configuration snippets for Linux cloud engineers.
To help candidates prepare for Linux cloud engineering interviews, this article compiles 30 frequently asked Nginx interview questions covering basic concepts, intermediate mechanisms, and advanced optimization.
Basic Difficulty Questions
1. What is Nginx?
Bad answer: Nginx is a web server for hosting sites.
Why it is bad: The answer is too simple and does not highlight core features.
Good answer: Nginx is a high‑performance, open‑source HTTP and reverse‑proxy server that can also act as a mail proxy and generic TCP/UDP proxy. It uses an event‑driven, asynchronous, non‑blocking architecture, consumes little memory, handles thousands of concurrent connections, and is commonly used for static content serving, load balancing, reverse proxying, and API gateways.
2. Main differences between Nginx and Apache?
Bad answer: Nginx is faster; Apache is older.
Why it is bad: Lacks concrete technical comparison.
Good answer:
Architecture model: Apache uses a multi‑process/thread model; each connection spawns a process/thread, consuming more resources under high load. Nginx uses an event‑driven, asynchronous model where a single process can handle many connections.
Resource consumption: Nginx typically uses less memory and CPU than Apache, especially under high concurrency.
Static content handling: Nginx serves static files faster; Apache has richer dynamic module support.
Configuration: Apache supports .htaccess distributed configuration; Nginx does not, but its centralized configuration is more performant.
Module system: Apache modules can be loaded at runtime; Nginx modules must be compiled in.
3. List major Nginx features or advantages.
Bad answer: Nginx is fast and easy to configure.
Why it is bad: Too simplistic.
Good answer:
High concurrency and performance: Event‑driven architecture supports tens of thousands of concurrent connections.
Low memory usage: Very little memory when serving static content.
Reverse proxy and load balancing: Built‑in powerful reverse‑proxy and multiple load‑balancing algorithms.
Hot deployment: Supports graceful upgrades and configuration reload without downtime.
High extensibility: Modular design allows custom module development.
High reliability: Can run 24/7 with excellent stability.
4. Common Nginx commands?
Bad answer: nginx -s reload, others I forget.
Why it is bad: Incomplete.
Good answer:
# Start Nginx
nginx
# Fast stop
nginx -s stop
# Graceful stop (finish current requests)
nginx -s quit
# Reload configuration
nginx -s reload
# Reopen log files
nginx -s reopen
# Test configuration syntax
nginx -t
# Specify configuration file
nginx -c /path/to/nginx.conf
5. Default path of Nginx configuration file?
Bad answer: Somewhere under /etc.
Why it is bad: Uncertain.
Good answer:
Main config: /etc/nginx/nginx.conf
Additional configs: /etc/nginx/conf.d/
Sites available: /etc/nginx/sites-available/
Sites enabled: /etc/nginx/sites-enabled/
Default document root: /usr/share/nginx/html or /var/www/html
Use nginx -t to confirm which file is actually loaded on a specific distribution.
6. Directive to set number of Nginx worker processes?
Bad answer: worker_processes, set to CPU cores.
Why it is bad: Lacks explanation.
Good answer: Use the worker_processes directive, usually placed in the main context of nginx.conf. Example:
# Auto‑detect CPU cores
worker_processes auto;
# Or specify explicitly
worker_processes 4;
Best practice is to match the number of CPU cores for CPU‑bound workloads, or adjust based on I/O characteristics.
7. Directive to configure virtual hosts?
Bad answer: server directive.
Why it is bad: Too brief.
Good answer: Virtual hosts are defined with server blocks. Each server block represents one virtual host:
server {
listen 80;
server_name example.com www.example.com;
root /var/www/example;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
Multiple server blocks can coexist; Nginx selects the appropriate one based on the Host header.
8. How to set Nginx to listen on a specific port?
Bad answer: Write listen 80 in a server block.
Why it is bad: Incomplete.
Good answer: Use the listen directive with optional parameters:
server {
# IPv4 and IPv6 on port 80
listen 80;
listen [::]:80;
# Specific IP and port
listen 192.168.1.100:8080;
# HTTPS
listen 443 ssl;
server_name example.com;
}
Additional parameters such as default_server and ssl can be added.
9. Explain forward proxy vs reverse proxy.
Bad answer: Forward proxy proxies the client; reverse proxy proxies the server.
Why it is bad: Overly simple.
Good answer:
Forward proxy: Client knows the proxy and sends requests through it. Typical uses: corporate internet control, bypassing geo‑restrictions, caching.
Reverse proxy: Server side receives client requests, hides backend servers. Uses: load balancing, static/dynamic separation, security, caching.
Key difference: forward proxy hides the client, reverse proxy hides the server.
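As a sketch of the distinction: reverse proxying is Nginx's native role (see the configs later in this article), but Nginx can also be coaxed into acting as a very basic HTTP forward proxy. Note this handles plain HTTP only; proxying HTTPS via CONNECT requires third‑party patches. The port and resolver below are arbitrary examples:

```nginx
# Minimal HTTP forward-proxy sketch (plain HTTP only; no CONNECT support)
server {
    listen 3128;          # arbitrary proxy port
    resolver 8.8.8.8;     # required to resolve whatever host the client requests

    location / {
        # Forward the request to the host the client originally asked for
        proxy_pass http://$host$request_uri;
    }
}
```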
10. Primary proxy role of Nginx?
Bad answer: Reverse proxy.
Why it is bad: Too brief.
Good answer: Nginx is primarily used as a reverse proxy, enabling load balancing, high availability, SSL termination, and security shielding for backend services.
Intermediate Difficulty Questions
11. Describe Nginx Master/Worker process model.
Bad answer: One master and many workers; workers handle requests.
Why it is bad: Oversimplified.
Good answer:
Master process: Runs as root, reads/validates configuration, starts and stops workers, performs graceful upgrades, reopens log files.
Worker processes: Run as non‑privileged users, handle actual client connections using an event‑driven, asynchronous I/O model, each capable of handling thousands of connections.
Workflow: Master binds privileged ports (80/443), workers inherit the listening sockets, compete for connections via an accept mutex, and process events with epoll/kqueue.
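A minimal main‑context sketch of this privilege split (the user name is an assumption; distributions vary):

```nginx
# Main context: the master runs as root to bind :80/:443,
# workers drop to this unprivileged user
user  nginx;
worker_processes auto;        # one worker per CPU core

events {
    worker_connections 4096;  # per-worker connection budget
}
```

On a live host, running ps -ef | grep nginx should show one master process owned by root and several worker processes owned by the configured user.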
12. How does Nginx process an HTTP request?
Bad answer: Receives request, returns response.
Why it is bad: No detail.
Good answer:
Parsing phase: Parse request line, headers, build request structure.
Rewrite phase: Apply rewrite rules, URL rewriting, redirects.
Access control phase: Execute access module checks, IP allow/deny.
Content generation phase: Match location, serve static files, proxy, FastCGI, etc.
Logging phase: Write access and error logs.
The pipeline ensures high performance and flexibility.
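A hedged illustration of how these phases map onto everyday directives (paths, networks, and the backend address are placeholders):

```nginx
server {
    # Rewrite phase: URL rewriting before the final location is settled
    rewrite ^/old/(.*)$ /new/$1 permanent;

    location /admin/ {
        # Access control phase: IP allow/deny checks
        allow 10.0.0.0/8;
        deny  all;

        # Content generation phase: hand the request to a backend
        proxy_pass http://127.0.0.1:8080;
    }

    # Logging phase: the access log is written after the response
    access_log /var/log/nginx/access.log;
}
```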
13. Matching priority order of the location directive?
Bad answer: Exact match first, then regex.
Why it is bad: Incomplete.
Good answer: From highest to lowest priority:
= exact match (e.g., location = /path).
^~ prefix match that stops further regex checks.
~ and ~* regex matches, evaluated in order of appearance.
Standard prefix match, longest match wins (e.g., location /prefix).
server {
location = / { # exact
# ...
}
location ^~ /static/ { # prefix, no regex
# ...
}
location ~ \.php$ { # regex
# ...
}
location / { # generic
# ...
}
}
14. Simple reverse proxy configuration?
Bad answer: Minimal location / { proxy_pass http://backend; }.
Why it is bad: Lacks necessary headers and timeout settings.
Good answer:
location /api/ {
proxy_pass http://backend_server;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 30s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
}
upstream backend_server {
server 192.168.1.10:8080;
server 192.168.1.11:8080;
}
15. Common Nginx load‑balancing algorithms?
Bad answer: Round‑robin and IP hash.
Why it is bad: Incomplete.
Good answer:
Round‑robin (default).
Weighted round‑robin.
IP hash (client IP affinity).
Least connections.
URL hash.
Fair (response‑time based, requires third‑party module).
upstream backend {
# weighted round‑robin
server backend1.example.com weight=3;
server backend2.example.com weight=2;
least_conn; # or ip_hash;
}
16. Configure a load‑balancing upstream group?
Bad answer: Simple upstream myapp { server 1.1.1.1; server 2.2.2.2; }.
Why it is bad: No health checks or advanced parameters.
Good answer:
upstream backend_cluster {
least_conn;
server 192.168.1.10:8080 weight=3 max_fails=2 fail_timeout=30s;
server 192.168.1.11:8080 weight=2 max_fails=2 fail_timeout=30s;
server 192.168.1.12:8080 weight=1 max_fails=2 fail_timeout=30s backup;
# sticky cookie example (requires module)
# sticky cookie srv_id expires=1h domain=.example.com path=/;
}
server {
location / {
proxy_pass http://backend_cluster;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
proxy_connect_timeout 2s;
# health check example (requires module)
# check interval=3000 rise=2 fall=5 timeout=1000 type=http;
}
17. Difference between root and alias?
Bad answer: Both set paths, similar.
Why it is bad: Misses key distinction.
Good answer:
# root example
location /static/ {
root /var/www/html; # /static/css/style.css → /var/www/html/static/css/style.css
}
# alias example
location /static/ {
alias /var/www/static/; # /static/css/style.css → /var/www/static/css/style.css
}
root appends the full request URI to the specified path, preserving the location prefix; alias replaces the location prefix with the given path, so when the location ends with a slash the alias path should too.
18. Configure Nginx for static‑dynamic separation?
Bad answer: Use one location for static, another for dynamic.
Why it is bad: No concrete config.
Good answer:
server {
listen 80;
server_name example.com;
# Static assets
location ~* \.(css|js|png|jpg|jpeg|gif|ico|svg|woff|ttf)$ {
root /var/www/static;
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
# Dynamic requests
location / {
proxy_pass http://backend_app;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
upstream backend_app {
server 192.168.1.20:8000;
server 192.168.1.21:8000;
}
19. WebSocket reverse‑proxy configuration?
Bad answer: Same as normal proxy.
Why it is bad: Ignores protocol specifics.
Good answer:
location /websocket/ {
proxy_pass http://websocket_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
upstream websocket_backend {
server 192.168.1.30:8080;
}
20. Enable Gzip compression?
Bad answer: Turn gzip on.
Why it is bad: Too simplistic.
Good answer:
http {
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml text/javascript application/javascript application/xml+rss application/json image/svg+xml;
gzip_disable "msie6";
gzip_static on; # optional pre‑compressed files
}
21. SPA with History mode returning 404 on refresh – fix?
Bad answer: Switch to hash routing.
Why it is bad: No Nginx solution.
Good answer: Use try_files to fallback to index.html:
server {
listen 80;
server_name spa.example.com;
root /var/www/spa;
index index.html;
location / {
try_files $uri $uri/ /index.html;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
}
22. Configure CORS to solve front‑end cross‑origin issues?
Bad answer: Set Access-Control-Allow-Origin: *.
Why it is bad: Insecure and incomplete.
Good answer:
server {
listen 80;
server_name api.example.com;
location / {
add_header Access-Control-Allow-Origin "https://www.example.com";
add_header Access-Control-Allow-Methods "GET, POST, OPTIONS, PUT, DELETE";
add_header Access-Control-Allow-Headers "Authorization, Content-Type, X-Requested-With";
add_header Access-Control-Allow-Credentials "true";
add_header Access-Control-Max-Age 3600;
if ($request_method = 'OPTIONS') { return 204; }
proxy_pass http://backend;
}
}
23. Cache strategy for front‑end static resources?
Bad answer: expires 1d.
Why it is bad: No differentiation.
Good answer:
server {
# Hashed assets – long cache
location ~* \.[a-f0-9]{8,}\.(css|js)$ {
expires 1y;
add_header Cache-Control "public, immutable, max-age=31536000";
access_log off;
}
# Regular static files – moderate cache
location ~* \.(css|js|png|jpg|jpeg|gif|ico|svg)$ {
expires 30d;
add_header Cache-Control "public";
access_log off;
}
# HTML – short cache
location ~* \.(html)$ {
expires 5m;
add_header Cache-Control "public, must-revalidate";
}
# API – no cache
location /api/ {
proxy_pass http://backend;
add_header Cache-Control "no-cache, no-store, must-revalidate";
add_header Pragma "no-cache";
expires 0;
}
}
24. Effect of trailing slash in proxy_pass?
Bad answer: Little difference.
Why it is bad: Misses important URL rewriting behavior.
Good answer: Without trailing slash, Nginx appends the original location URI to the upstream URL; with a trailing slash, the location prefix is stripped.
# No trailing slash – keep prefix
location /api/ {
proxy_pass http://backend;
# /api/users → http://backend/api/users
}
# With trailing slash – drop prefix
location /api/ {
proxy_pass http://backend/;
# /api/users → http://backend/users
}
Advanced Difficulty Questions
25. Optimize Nginx for higher concurrency and performance.
Bad answer: Increase worker_processes and worker_connections.
Why it is bad: Too superficial.
Good answer:
Process and connection tuning:
# CPU affinity
worker_processes auto;
worker_cpu_affinity auto;
# Max connections per worker
worker_connections 65536;
# Max open files per worker
worker_rlimit_nofile 65536;
Network tuning:
# Use epoll
use epoll;
# Accept multiple connections quickly
multi_accept on;
# Sendfile and TCP optimizations
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
keepalive_requests 1000;
System‑level tuning:
# Increase file descriptor limits
* soft nofile 65536
* hard nofile 65536
# Increase kernel network parameters
net.core.somaxconn = 65536
net.ipv4.tcp_max_syn_backlog = 65536
26. Purpose of the try_files directive and a common use case.
Bad answer: Tries to find a file.
Why it is bad: No detail.
Good answer: try_files checks a list of files/URIs in order and serves the first one that exists; if none match, it falls back to the last parameter, which can be a URI or an error code. Commonly used for SPA routing:
location / { try_files $uri $uri/ /index.html; }
Also used for graceful 404 handling or conditional upstream routing.
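For the graceful‑404 case, a small sketch (the placeholder image path is an assumption):

```nginx
# Serve the requested image, fall back to a generic placeholder,
# and only return 404 if even the placeholder is missing
location /images/ {
    try_files $uri /images/placeholder.png =404;
}
```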
27. Configure HTTPS and force HTTP to HTTPS redirection.
Bad answer: Install certificate and add 301 redirect.
Why it is bad: Lacks full config and security hardening.
Good answer:
# HTTP to HTTPS redirect
server {
listen 80;
server_name example.com www.example.com;
return 301 https://$host$request_uri;
}
# HTTPS server
server {
listen 443 ssl http2;
server_name example.com www.example.com;
# SSL certificates
ssl_certificate /etc/ssl/certs/example.com.crt;
ssl_certificate_key /etc/ssl/private/example.com.key;
# SSL security settings
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# HSTS
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
location / {
root /var/www/html;
index index.html;
}
}
28. Implement access restrictions (rate limiting, connection limits, etc.).
Bad answer: Use limit_req only.
Why it is bad: Incomplete.
Good answer:
Rate limiting:
# Define a shared zone
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
location /api/ {
limit_req zone=api burst=20 nodelay;
proxy_pass http://backend;
}
Concurrent connection limit:
# Define connection zone
limit_conn_zone $binary_remote_addr zone=perip:10m;
location /download/ {
limit_conn perip 5; # max 5 concurrent per IP
limit_rate 500k; # optional bandwidth limit
}
Geo‑IP based access control (requires geoip module):
geoip_country /usr/share/GeoIP/GeoIP.dat;
location / {
if ($geoip_country_code != CN) { return 403; }
}
Basic authentication:
location /admin/ {
auth_basic "Admin Area";
auth_basic_user_file /etc/nginx/.htpasswd;
}
29. Diagnose common Nginx error status codes.
Bad answer: Look at error logs.
Why it is bad: Too vague.
Good answer:
502 Bad Gateway: Verify backend service health, firewall rules, and adjust proxy buffers/timeouts.
proxy_connect_timeout 60s;
proxy_read_timeout 60s;
proxy_send_timeout 60s;
504 Gateway Timeout: Increase proxy timeouts and check backend performance.
413 Request Entity Too Large: Raise the upload limit, e.g. client_max_body_size 100m;
499 Client Closed Request: The client closed the connection before Nginx responded; investigate client-side timeouts or reduce server response time.
General troubleshooting steps:
Inspect the Nginx error log: tail -f /var/log/nginx/error.log
Test configuration syntax: nginx -t
Check system resources (CPU, memory, file descriptors).
Verify network connectivity to upstream services (telnet, curl).
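As a small, self‑contained illustration of the first step, the snippet below pulls the failing upstream address out of a typical 502 error‑log line, so you know exactly which backend to probe with curl or telnet. The log line itself is a fabricated example:

```shell
#!/bin/sh
# A fabricated error-log line of the shape Nginx emits on a 502
line='2024/01/01 12:00:00 [error] 123#0: *45 connect() failed (111: Connection refused) while connecting to upstream, client: 1.2.3.4, server: example.com, request: "GET /api/users HTTP/1.1", upstream: "http://192.168.1.10:8080/api/users"'

# Extract the quoted upstream URL from the log line
upstream=$(printf '%s\n' "$line" | grep -o 'upstream: "[^"]*"' | cut -d'"' -f2)
echo "$upstream"   # prints http://192.168.1.10:8080/api/users
```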
30. Implement high‑availability for Nginx.
Bad answer: Use Keepalived for active‑passive.
Why it is bad: Single solution, not cloud‑native.
Good answer: Several approaches:
Traditional Keepalived: VRRP with health checks.
# Keepalived example
vrrp_script chk_nginx {
    script "pidof nginx"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
        chk_nginx
    }
}
DNS round‑robin with health checks: Multiple A records; an external health‑check service removes failed nodes.
Cloud load balancers: AWS ALB/NLB, GCP Load Balancer, Alibaba Cloud SLB, Tencent Cloud CLB – provide automatic health checks and scaling.
Kubernetes Ingress: Deploy the Nginx Ingress Controller for HA inside a cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
Select the solution based on scale, tech stack, operational expertise, and budget.
Conclusion
Mastering these Nginx interview questions not only helps pass technical interviews but also equips you to configure, optimize, and troubleshoot Nginx effectively in real‑world deployments. Practice the configurations, experiment with performance tuning, and keep exploring advanced features. May every nginx -t you run return "syntax is ok"!
Open Source Linux
Focused on sharing Linux/Unix content, covering fundamentals, system development, network programming, automation/operations, cloud computing, and related professional knowledge.