High‑Performance Caching with OpenResty, Nginx, and Redis Using Lua
This article explains how to leverage OpenResty and Lua scripts to integrate Nginx with Redis for direct caching, compression, timed updates, request forwarding, and configurable URL management, thereby improving concurrency, reducing latency, and enhancing the resilience of backend web services.
1. OpenResty
OpenResty is a high‑performance web platform based on Nginx and Lua, bundling numerous Lua libraries, third‑party modules, and dependencies, which makes it easy to build ultra‑concurrent, highly extensible dynamic web applications, services, and gateways.
The access‑layer cache is implemented by developing Lua scripts on top of OpenResty.
2. Nginx + Redis
The typical architecture forwards HTTP requests through Nginx load‑balancing to Tomcat, which then reads data from Redis; this chain is serial, and a Tomcat failure or thread exhaustion blocks responses.
By using OpenResty’s lua‑resty‑redis module, Nginx can access Redis directly, avoiding Tomcat threads, reducing response time, and increasing system concurrency.
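A minimal sketch of this direct path, using the stock lua-resty-redis API inside a `content_by_lua_block`. The host, port, key prefix, and location name are illustrative, not from the article:

```nginx
# Serve a cached page straight from Redis, bypassing Tomcat.
location /cache {
    content_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(1000)  -- 1s connect/read timeout

        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            ngx.log(ngx.ERR, "redis connect failed: ", err)
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end

        local html, gerr = red:get("page:" .. ngx.var.uri)
        red:set_keepalive(10000, 100)  -- return connection to the keepalive pool

        if html and html ~= ngx.null then
            ngx.say(html)
        else
            ngx.exit(ngx.HTTP_NOT_FOUND)
        end
    }
}
```

Because this runs on Nginx's event loop via cosockets, no Tomcat thread is consumed and the request never leaves the access layer on a cache hit.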
3. Compression to Reduce Bandwidth
When data exceeds 1 KB, Nginx compresses it before storing it in Redis:
Improves Redis read speed
Reduces bandwidth consumption
Compression consumes CPU; data smaller than 1 KB is left uncompressed for higher TPS.
lua‑resty‑redis does not manage a connection pool automatically; connections must be explicitly returned to the cosocket keepalive pool (via set_keepalive) in your own Lua code, so a small wrapper is typically written. An example implementation can be found at http://wiki.jikexueyuan.com/project/openresty/redis/out_package.html .
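A hedged sketch of such a wrapper (module and function names are illustrative); it centralizes connect/release so every call site reuses pooled connections:

```lua
-- redis_pool.lua (hypothetical module name): wraps lua-resty-redis so that
-- callers always time out, connect, and release connections the same way.
local redis = require "resty.redis"

local _M = {}

function _M.connect()
    local red = redis:new()
    red:set_timeout(1000)  -- 1s timeout for connect/read/write
    local ok, err = red:connect("127.0.0.1", 6379)
    if not ok then
        return nil, err
    end
    return red
end

function _M.release(red)
    -- 10s idle timeout, at most 100 pooled connections per worker.
    local ok = red:set_keepalive(10000, 100)
    if not ok then
        red:close()  -- pool full or connection unusable: drop it
    end
end

return _M
```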
Redis values are stored as JSON, e.g., {"length": xxx, "content": "yyy"}, where content is the compressed page and length records the original size, so the reader knows whether decompression is needed.
Compression is performed with the lua‑zlib library.
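The envelope logic described above could look like the following sketch. The 1 KB threshold and field names come from the article; the base64 step is an added assumption (raw deflate output is binary and does not survive JSON encoding cleanly), and lua-zlib's stream-style deflate/inflate calls are the stock API:

```lua
-- Encode/decode the {length, content} envelope with lua-zlib and cjson.
local zlib  = require "zlib"
local cjson = require "cjson.safe"

local THRESHOLD = 1024  -- bytes; smaller pages are stored uncompressed

local function encode_value(html)
    local content = html
    if #html > THRESHOLD then
        -- base64 so the binary deflate output is safe inside a JSON string
        content = ngx.encode_base64(zlib.deflate()(html, "finish"))
    end
    return cjson.encode({ length = #html, content = content })
end

local function decode_value(raw)
    local v = cjson.decode(raw)
    if not v then
        return nil, "malformed cache entry"
    end
    if v.length > THRESHOLD then
        return zlib.inflate()(ngx.decode_base64(v.content))
    end
    return v.content
end
```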
4. Timed Updates
Nginx’s Lua timer periodically requests a Tomcat page URL, stores the returned HTML in Redis, and can continue serving cached data even if Tomcat is down. Cache TTL can be set long (e.g., 1 hour) to tolerate Tomcat failures, while the timer interval can be short (e.g., 1 minute) for rapid cache refresh.
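A sketch of the refresh loop using `ngx.timer.every` from an `init_worker` hook. The Tomcat URL, Redis key, and the third-party lua-resty-http client are assumptions; the 60 s interval and 1-hour TTL follow the example figures above (the article later restricts this task to a single worker):

```nginx
init_worker_by_lua_block {
    local function refresh(premature)
        if premature then return end  -- worker is shutting down
        local httpc = require("resty.http").new()
        local res, err = httpc:request_uri("http://tomcat.internal:8080/index", {})
        if res and res.status == 200 then
            local red = require("resty.redis"):new()
            red:set_timeout(1000)
            if red:connect("127.0.0.1", 6379) then
                red:set("page:/index", res.body)
                red:expire("page:/index", 3600)  -- long TTL rides out Tomcat outages
                red:set_keepalive(10000, 100)
            end
        end
        -- On failure, the stale Redis entry keeps serving until the next tick.
    end
    ngx.timer.every(60, refresh)
}
```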
5. Request Forwarding
When a browser requests a page:
Nginx first tries to fetch the HTML from Redis.
If Redis misses, Nginx retrieves the page from Tomcat and updates Redis.
The HTML is then returned to the browser.
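The three steps above can be sketched as a read-through handler; the internal `/tomcat` location, key scheme, and TTL are illustrative assumptions:

```nginx
location / {
    content_by_lua_block {
        local red = require("resty.redis"):new()
        red:set_timeout(1000)
        local key = "page:" .. ngx.var.uri

        -- Step 1: try Redis first.
        local connected = red:connect("127.0.0.1", 6379)
        local html
        if connected then
            html = red:get(key)
        end
        if html and html ~= ngx.null then
            red:set_keepalive(10000, 100)
            return ngx.say(html)
        end

        -- Step 2: miss (or Redis down) -- fetch from Tomcat via an
        -- internal proxy location and repopulate the cache.
        local res = ngx.location.capture("/tomcat" .. ngx.var.uri)
        if connected then
            if res.status == 200 then
                red:set(key, res.body)
                red:expire(key, 3600)
            end
            red:set_keepalive(10000, 100)
        end

        -- Step 3: return the HTML to the browser.
        ngx.status = res.status
        ngx.print(res.body)
    }
}
```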
6. Single‑Process Timed Update
All Nginx worker processes handle request forwarding, but only worker 0 runs the timed task that updates Redis. The worker ID is obtained with ngx.worker.id() .
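This guard is a one-line check in the `init_worker` hook; `refresh_cache` here is a hypothetical name standing in for the timed-update handler from the previous section:

```nginx
init_worker_by_lua_block {
    -- Only worker 0 schedules the refresh task, so Redis is written once
    -- per interval; every worker still serves forwarded requests.
    if ngx.worker.id() == 0 then
        ngx.timer.every(60, refresh_cache)
    end
}
```

Without this guard, every worker would fire its own timer and hit Tomcat and Redis N times per interval.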
7. Configurability
Through a management backend, cacheable URLs, TTL, and update intervals can be configured, e.g., modify?url=index&expire=3600000&intervaltime=300000&sign=xxxx . The sign is a signature generated with a secret key; Nginx verifies the signature using the same secret key before applying the configuration.
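A verification sketch for that endpoint. The parameter ordering and HMAC-SHA1-over-base64 scheme are assumptions (the article does not specify the algorithm); `ngx.hmac_sha1` and `ngx.encode_base64` are stock ngx_lua APIs, and the secret shown is a placeholder:

```lua
-- Reject configuration changes whose signature does not match.
local secret = "shared-secret"  -- hypothetical; load from a protected store

local args = ngx.req.get_uri_args()
local payload = table.concat({
    "url=" .. (args.url or ""),
    "expire=" .. (args.expire or ""),
    "intervaltime=" .. (args.intervaltime or ""),
}, "&")

local expected = ngx.encode_base64(ngx.hmac_sha1(secret, payload))
if args.sign ~= expected then
    return ngx.exit(ngx.HTTP_FORBIDDEN)
end
-- Signature valid: apply the new cache configuration here.
```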