Practical Applications of OpenResty: Blacklist, Rate Limiting, AB Testing, and Service Quality Monitoring
This article demonstrates how OpenResty can be used in production to implement static and dynamic blacklists, request rate limiting, AB testing, and service quality monitoring by embedding Lua scripts into Nginx, with detailed configuration examples and code snippets.
What is OpenResty?
OpenResty is a high‑performance web platform built on Nginx and Lua, integrating many Lua libraries and third‑party modules to enable rapid development of dynamic, high‑concurrency web applications.
Blacklist
Three methods are presented for adding blacklists in OpenResty:
Static blacklist defined directly in a Lua file and referenced via access_by_lua_file.
Dynamic blacklist stored in Redis, queried on each request.
Dynamic blacklist cached in shared memory (ngx.shared.DICT) and periodically refreshed from Redis.
Example static blacklist configuration:
location /lua {
    default_type 'text/html';
    access_by_lua_file /path/to/access.lua;
    content_by_lua 'ngx.say("hello world")';
}

Lua script for static blacklist:
local blacklist = {
    ["10.10.76.111"] = true,
    ["10.10.76.112"] = true,
    ["10.10.76.113"] = true
}

local ip = ngx.var.remote_addr
if blacklist[ip] then
    return ngx.exit(ngx.HTTP_FORBIDDEN)
end

Dynamic blacklist (1) reads the list from Redis on each request:
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(100)
local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
    return
end

local cid = ngx.var.arg_cid
if not cid then
    return
end

local res, _ = red:get(cid)
red:set_keepalive(10000, 100)  -- return the connection to the pool
if res and res ~= ngx.null then
    return ngx.exit(ngx.HTTP_FORBIDDEN)
end

Dynamic blacklist (2) stores the list in shared memory and updates it periodically with a timer:
lua_shared_dict blacklist 1m;
init_worker_by_lua_file /path/to/init_redis_blacklist.lua;
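The timer loop itself is only summarized in the script below. A minimal sketch of how such a recurring timer could be armed in the init_worker phase (the 5‑second interval and the `timer_work` name are illustrative; `update_blacklist` is the function defined later in the same file):

```lua
-- Sketch: a self-rescheduling timer in each worker.
-- The 5-second interval is an assumption, not from the original article.
local delay = 5

local function timer_work(premature)
    if premature then
        return  -- worker is shutting down; do not reschedule
    end
    update_blacklist()  -- refresh ngx.shared.blacklist from Redis
    local ok, err = ngx.timer.at(delay, timer_work)
    if not ok then
        ngx.log(ngx.ERR, "failed to schedule blacklist refresh: ", err)
    end
end

-- First run shortly after the worker starts.
local ok, err = ngx.timer.at(0, timer_work)
if not ok then
    ngx.log(ngx.ERR, "failed to start blacklist refresh timer: ", err)
end
```

With this in place, the request path needs only a shared-dict lookup such as `ngx.shared.blacklist:get(ngx.var.remote_addr)` in the access phase, with no per-request Redis round trip.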
local redis = require "resty.redis"

local blacklist = ngx.shared.blacklist
local redis_host, redis_port = "127.0.0.1", 6379

-- timer_work function creates a recurring timer that calls update_blacklist()
function update_blacklist()
    local red = redis:new()
    red:set_timeout(100)
    local ok, err = red:connect(redis_host, redis_port)
    if not ok then ngx.log(ngx.ERR, "redis connection error: ", err); return end
    local new_blacklist, err = red:smembers("blacklist")
    if err then ngx.log(ngx.ERR, "Redis read error: ", err); return end
    red:set_keepalive(10000, 100)
    blacklist:flush_all()
    for _, ip in ipairs(new_blacklist) do blacklist:set(ip, true) end
end

Rate Limiting
Two Lua modules are used:
lua-resty-limit-traffic for per‑location rate limiting.
lua-resty-redis-ratelimit for distributed rate limiting across Nginx instances.
Configuration for lua-resty-limit-traffic:
http {
    lua_shared_dict location_limit_req_store 1m;
    server {
        listen 2019;
        location /limit/traffic {
            access_by_lua_file "/path/to/limit_traffic.lua";
            default_type 'text/html';
            content_by_lua_block { ngx.say('hello 2019') }
        }
    }
}

Lua script (simplified):
local limit_req = require "resty.limit.req"
local json = require "cjson"

local rate, burst = 1, 1
local lim, err = limit_req.new("location_limit_req_store", rate, burst)
if not lim then
    ngx.log(ngx.ERR, "init failed! err: ", err)
    return
end

local delay, err = lim:incoming("location_limit_key", true)
if not delay then
    if err == "rejected" then
        ngx.header.content_type = "application/json;charset=utf8"
        ngx.say(json.encode({message = "Too Fast"}))
        return ngx.exit(ngx.HTTP_OK)
    end
    return
end

if delay > 0 then
    ngx.sleep(delay)
end

Configuration for lua-resty-redis-ratelimit (cross‑machine limiting):
http {
    server {
        listen 2019;
        location /redis/ratelimit {
            access_by_lua_file "/path/to/redis_ratelimit.lua";
            default_type 'text/html';
            content_by_lua_block { ngx.say('hello 2019') }
        }
    }
}

Lua script (simplified):
local ratelimit = require "resty.redis.ratelimit"
local json = require "cjson"

local rate, burst, duration = "1r/s", 0, 1
local lim, err = ratelimit.new("user-rate", rate, burst, duration)
if not lim then
    ngx.log(ngx.ERR, "init failed! err: ", err)
    return
end

local redis_cfg = {host = "127.0.0.1", port = 6379, timeout = 0.02}
local delay, err = lim:incoming(ngx.var.arg_unique_id, redis_cfg)
if not delay then
    if err == "rejected" then
        ngx.header.content_type = "application/json;charset=utf8"
        ngx.say(json.encode({message = "Too Fast"}))
        return ngx.exit(ngx.HTTP_OK)
    end
    return
end

if delay >= 0.001 then
    ngx.sleep(delay)
end

AB Testing
AB testing uses set_by_lua_file to select an upstream based on a request parameter (e.g., cid).
http {
    upstream pool_1 { server 0.0.0.0:2020; }
    upstream pool_2 { server 0.0.0.0:2021; }
    server {
        listen 2019;
        location /select/upstream/according/cid {
            set_by_lua_file $selected_upstream "/path/to/select_upstream_by_cid.lua" "pool_1" "pool_2";
            if ($selected_upstream = "") { proxy_pass http://pool_1; }
            proxy_pass http://$selected_upstream;
        }
    }
}

Lua selector (simplified):
local first_upstream = ngx.arg[1]
local second_upstream = ngx.arg[2]

local cid = ngx.var.arg_cid
if not cid then return "" end

local id = tonumber(cid)
if not id then return "" end

if id % 2 == 0 then
    return first_upstream
else
    return second_upstream
end

Service Quality Monitoring
Metrics are collected in the log_by_lua_file phase and stored in a shared dictionary. The module records request count, error count, request time, upstream request count, and upstream response time.
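The collection side needs a shared dictionary plus a log-phase hook on the proxied location. A minimal nginx wiring might look like the following sketch (the dict size, file path, and upstream name are illustrative, not from the original):

```nginx
http {
    lua_shared_dict nginx_metric 10m;
    server {
        listen 2019;
        location / {
            proxy_pass http://pool_1;
            log_by_lua_file /path/to/record_metric.lua;
        }
    }
}
```

The log phase runs after the response is sent, so recording metrics there adds no latency to the request itself.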
local nginx_metric = require "metric"

local dict = ngx.shared.nginx_metric
local metric = nginx_metric:new(dict, "|", ngx.var.proxy_host, 3600 * 24)
metric:record()

Key functions (simplified):
function _M:request_count()
    local status = tonumber(ngx.var.status)
    if status < 400 then
        -- init value 0 so the first increment creates the key
        self.dict:incr(self:req_sign("request_count"), 1, 0, self.exptime)
    end
end

function _M:request_time()
    local rt = tonumber(ngx.var.request_time) or 0
    self.dict:incr(self:req_sign("request_time"), rt, 0, self.exptime)
end

function _M:err_count()
    local status = tonumber(ngx.var.status)
    if status >= 400 then
        self.dict:incr(self:req_sign("err_count"), 1, 0, self.exptime)
    end
end

function _M:upstream()
    local up_resp = ngx.var.upstream_response_time or ""
    if up_resp == "" then return end
    -- normalize "0.1, 0.2 : 0.3" into "0.1,0.2,0.3"
    up_resp = string.gsub(string.gsub(up_resp, ":", ","), " ", "")
    local times = {}
    for t in string.gmatch(up_resp, "([^,]+)") do
        table.insert(times, tonumber(t) or 0)
    end
    self.dict:incr(self:req_sign("upstream_count"), #times, 0, self.exptime)
    local total = 0
    for _, v in ipairs(times) do total = total + v end
    self.dict:incr(self:req_sign("upstream_response_time"), total, 0, self.exptime)
end

function _M:record()
    self:request_count()
    self:err_count()
    self:request_time()
    self:upstream()
end

Metrics can be retrieved via a dedicated location that outputs JSON:
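Such an endpoint could be wired as a plain content handler (the location and file path below are illustrative):

```nginx
server {
    listen 2019;
    location /nginx_metric {
        default_type 'application/json';
        content_by_lua_file /path/to/output_metric.lua;
    }
}
```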
local json = require "cjson"

local dict = ngx.shared.nginx_metric
local keys = dict:get_keys()
local res = {}
for _, k in ipairs(keys) do
    local value = dict:get(k)
    local s, e = string.find(k, "|", 1, true)
    if s then
        local up = string.sub(k, 1, s - 1)
        local metric = string.sub(k, e + 1)
        res[up] = res[up] or {}
        if metric == "err_count" then
            res[up][metric] = (res[up][metric] or 0) + value
        else
            res[up][metric] = value
        end
    end
end
ngx.say(json.encode(res))
ngx.exit(ngx.HTTP_OK)

The article concludes that OpenResty greatly simplifies the development of high‑performance web services such as blacklists, rate limiting, AB testing, and monitoring, avoiding the low‑level C development traditionally required to extend Nginx.
Sohu Tech Products
A knowledge-sharing platform for Sohu's technology products. As a leading Chinese internet brand with media, video, search, and gaming services and over 700 million users, Sohu continuously drives tech innovation and practice. We’ll share practical insights and tech news here.