
Implementing OpenResty‑Lua Caching, Compression, and Timed Updates with Nginx and Redis

This article explains how to use OpenResty and Lua to integrate Redis caching, compress responses, schedule periodic updates, and configure URL‑based cache rules directly within Nginx, improving concurrency and resilience of web services.

1. OpenResty

OpenResty is a high‑performance web platform built on Nginx and Lua, bundling many useful Lua libraries and third‑party modules to simplify development of highly concurrent, extensible dynamic web applications, services, and gateways.

2. Nginx + Redis

The typical architecture routes HTTP requests through Nginx to Tomcat, which then reads data from Redis. This chain is serial, so the whole service blocks whenever Tomcat fails or its thread pool is exhausted.

By using the lua‑resty‑redis module, Nginx can access Redis directly without consuming Tomcat threads, allowing the service to continue serving requests even if Tomcat is down, thus reducing latency and increasing concurrency.
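A minimal sketch of this direct path, assuming Redis runs locally on port 6379 and a hypothetical key name `page:index`; the named fallback location is also illustrative:

```nginx
location /index {
    content_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(1000)  -- 1s connect/read timeout

        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            ngx.log(ngx.ERR, "redis connect failed: ", err)
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end

        local html = red:get("page:index")
        red:set_keepalive(10000, 100)  -- return connection to the pool

        if not html or html == ngx.null then
            -- cache miss: fall through to Tomcat (see the forwarding section)
            return ngx.exec("@tomcat")
        end
        ngx.say(html)
    }
}
```

Because the Lua code runs inside Nginx workers on non-blocking cosockets, a Redis hit never touches Tomcat at all.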

3. Compression to Reduce Bandwidth

For payloads larger than 1 KB, Nginx compresses the data before storing it in Redis, which speeds up Redis reads and lowers bandwidth usage. Compression adds CPU overhead, so data smaller than 1 KB is left uncompressed for higher TPS.

OpenResty does not provide a built‑in Redis connection pool; you must implement one in Lua or use existing examples such as http://wiki.jikexueyuan.com/project/openresty/redis/out_package.html .
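One common reuse pattern is to lean on the cosocket keepalive mechanism of lua-resty-redis, which keeps idle connections in a per-worker pool. The module and function names below are illustrative, not a fixed API:

```lua
-- Hypothetical helper module wrapping connection acquire/release.
local redis = require "resty.redis"

local _M = {}

function _M.get_conn()
    local red = redis:new()
    red:set_timeout(1000)
    local ok, err = red:connect("127.0.0.1", 6379)
    if not ok then
        return nil, err
    end
    return red
end

function _M.release(red)
    -- keep up to 100 idle connections per worker, 10s max idle time
    local ok = red:set_keepalive(10000, 100)
    if not ok then
        red:close()
    end
end

return _M
```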

Store Redis values as JSON, e.g., {length:xxx,content:yyy} , where content is the compressed page and length records the original size to decide whether decompression is needed on read.

Compression can be performed with the lua‑zlib library.
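Putting the pieces together, a hedged sketch of the read/write paths using lua-zlib and the bundled cjson library; the key name, 1 KB threshold handling, and JSON envelope fields follow the scheme described above:

```lua
local zlib = require "zlib"
local cjson = require "cjson.safe"

local function store_page(red, key, html)
    local body = html
    if #html > 1024 then
        -- deflate in one shot; "finish" flushes the stream
        body = zlib.deflate()(html, "finish")
    end
    -- length records the ORIGINAL size, so readers know whether to inflate
    local value = cjson.encode({ length = #html, content = body })
    return red:set(key, value)
end

local function load_page(red, key)
    local raw = red:get(key)
    if not raw or raw == ngx.null then return nil end
    local obj = cjson.decode(raw)
    if not obj then return nil end
    if obj.length > 1024 then
        return zlib.inflate()(obj.content)
    end
    return obj.content
end
```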

4. Timed Updates

Configure Nginx Lua timers to periodically request a Tomcat page URL, compress the HTML, and save it to Redis. Cache TTL can be long (e.g., one hour) to tolerate Tomcat failures, while the update interval can be short (e.g., one minute) to keep the cache fresh.
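A sketch of such a timer in `init_worker_by_lua_block`, assuming a hypothetical `refresh_cache()` that fetches the page from Tomcat, compresses it, and writes it to Redis with a long TTL:

```nginx
init_worker_by_lua_block {
    local interval = 60  -- refresh every minute; TTL in Redis stays much longer

    local function refresh_cache(premature)
        if premature then return end  -- worker is shutting down
        -- fetch the page from Tomcat, compress it, SET with EX 3600 ...
    end

    local ok, err = ngx.timer.every(interval, refresh_cache)
    if not ok then
        ngx.log(ngx.ERR, "failed to create timer: ", err)
    end
}
```

The gap between a short refresh interval and a long TTL is what lets the cache keep serving stale-but-valid pages through a Tomcat outage.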

5. Request Forwarding

When a browser requests a page, Nginx first tries to fetch the HTML from Redis.

If the data is missing, Nginx retrieves the page from Tomcat, updates Redis, and returns the HTML to the browser.
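The read-through flow above can be sketched as follows; the upstream address, key scheme, and internal location are assumptions, and the compression step from section 3 is omitted for brevity:

```nginx
location / {
    content_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(1000)
        local ok = red:connect("127.0.0.1", 6379)

        local key = "page:" .. ngx.var.uri
        local html = ok and red:get(key)

        if not html or html == ngx.null then
            -- cache miss: fetch from Tomcat via an internal subrequest
            local res = ngx.location.capture("/tomcat" .. ngx.var.uri)
            if res.status ~= 200 then
                return ngx.exit(res.status)
            end
            html = res.body
            if ok then
                red:set(key, html)
                red:expire(key, 3600)
            end
        end

        if ok then red:set_keepalive(10000, 100) end
        ngx.say(html)
    }
}

location /tomcat/ {
    internal;
    proxy_pass http://127.0.0.1:8080/;
}
```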

6. Single‑Process Timed Update

All Nginx worker processes handle normal requests, but only worker 0 runs the periodic cache‑update task. The Lua script obtains the worker ID with ngx.worker.id() to ensure only one worker performs the update.
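A minimal sketch of the guard (`ngx.worker.id()` is 0-based and requires Nginx 1.9.1+); `refresh_cache` stands in for the update function described earlier:

```nginx
init_worker_by_lua_block {
    -- only the first worker schedules the periodic update
    if ngx.worker.id() == 0 then
        local ok, err = ngx.timer.every(60, refresh_cache)  -- hypothetical updater
        if not ok then
            ngx.log(ngx.ERR, "timer setup failed: ", err)
        end
    end
}
```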

7. Configurable Caching

Through a management backend you can configure cacheable URLs, their TTL, and update intervals, e.g., modify?url=index&expire=3600000&intervaltime=300000&sign=xxxx . The sign is a signature generated from the same parameters using a secret key; Nginx validates the signature before applying the configuration.
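One way to validate such a signature is with HMAC-SHA1, which OpenResty exposes as `ngx.hmac_sha1`. The secret value and the exact parameter concatenation order below are assumptions; they only need to match whatever the management backend uses:

```lua
local secret = "change-me"  -- shared with the management backend (assumed)
local args = ngx.req.get_uri_args()

-- sign over the same parameters the backend signed, in a fixed order
local msg = table.concat({ args.url, args.expire, args.intervaltime }, "&")
local expected = ngx.encode_base64(ngx.hmac_sha1(secret, msg))

if args.sign ~= expected then
    return ngx.exit(ngx.HTTP_FORBIDDEN)
end
-- signature ok: apply the new cache rule for args.url ...
```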

Tags: Backend Development · Redis · caching · nginx · Lua · compression · OpenResty
Written by Selected Java Interview Questions, a professional Java tech channel sharing common knowledge to help developers fill gaps.
