
OpenResty‑Based Interface Authentication, Traffic Control, and Request Tracing in Production

The article shows how OpenResty’s Lua‑based extensions can implement lightweight, version‑controlled API authentication, dynamic traffic‑shaping via shared‑memory peer status, and selective request tracing with batch‑sent logs to Elasticsearch, enabling secure, highly available services and rich observability without sacrificing Nginx performance in production.


OpenResty, a high‑performance web platform built on Nginx and Lua, has been widely adopted by many Internet companies for its high concurrency and stability. By embedding Lua scripts into the Nginx layer, developers can quickly implement complex logic while preserving the performance of the underlying server.

This article describes the practical implementation of three essential tools—interface authentication, traffic control, and request tracing—using OpenResty in a production environment. The goal is to improve service security, enable rapid traffic management, and provide detailed request logs for operation and debugging.

1. OpenResty Overview

OpenResty integrates a large collection of Lua libraries, third‑party Nginx modules, and most dependencies, allowing developers to build highly concurrent, extensible dynamic web applications, gateways, and services. It also supports direct execution of web services on top of Nginx, enabling non‑blocking I/O for protocols such as HTTP, MySQL, PostgreSQL, Memcached, and Redis.
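This non-blocking access works through OpenResty's cosocket API. As a minimal illustration, here is a sketch of a content handler reading a key from Redis with the bundled lua-resty-redis library (the key name and a Redis instance on localhost are assumptions for the example):

```lua
-- content_by_lua_block sketch: non-blocking Redis access via cosockets
local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(1000)  -- 1s connect/send/read timeout

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "redis connect failed: ", err)
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

local val, err = red:get("some_key")  -- hypothetical key
if val == ngx.null then
    val = "not found"
end

-- return the connection to the cosocket pool instead of closing it
red:set_keepalive(10000, 100)
ngx.say(val)
```

The call yields the current request coroutine while waiting on the socket, so a worker keeps serving other requests during the round trip.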

2. Interface Authentication

Background: As online services become more complex, API endpoints are increasingly exposed to malicious scraping, traffic abuse, and attacks. Therefore, a lightweight, configurable authentication mechanism is required.

Goal: Provide a generic authentication flow that can be toggled per APP version, allowing quick enable/disable of authentication and flexible key management.

Implementation Principle: The authentication logic runs at the load‑balancing layer. It extracts version information, loads the corresponding versionkey from a blacklist store, constructs a signature string, and verifies it against the client‑provided signature.

Key Steps:

1. Retrieve the APP version and its corresponding versionkey.

2. Initialize Lua memory with authentication configuration, interface list, and key values.

3. Parse request parameters (GET or POST) and assemble a canonical string.

4. Append the versionkey and compute a digest using a custom algorithm.

5. Compare the computed digest with the client signature; on mismatch, return an error.

6. Log success or failure for monitoring.

Code Example – Version Key Retrieval:

-- Walk the ordered version list stored in the blacklist dict and return the
-- key of the highest configured version that is <= the requested version.
local function get_version_key(version)
    local version_number = blacklist:get(config.sig_version)       -- number of configured versions
    local first_version = blacklist:get(config.sig_first_version)  -- lowest configured version
    if version_number == nil or first_version == nil then
        return nil, "version config not initialized"
    end
    local first_version_key = blacklist:get(first_version)
    local lower_version_key = first_version_key
    for i = 2, version_number do
        local n = tostring(i)
        local k = config.sig_version_number_prefix .. n
        local v = blacklist:get(k)
        if v == nil then
            return nil, "not found " .. k
        end
        if version < v then
            -- requested version falls below this boundary: use the previous key
            return lower_version_key, nil
        end
        local version_key = blacklist:get(v)
        if version_key == nil then
            return nil, "not found " .. v
        end
        lower_version_key = version_key
    end
    return lower_version_key, nil
end

Code Example – Signature Check:

local function sig_check()
    local sig_prefix = ngx.var.sig_prefix
    if sig_prefix == nil or sig_prefix == "" then
        return
    end
    local switch_flag = config.sig_switch .. sig_prefix
    local init_version = blacklist:get(switch_flag)
    if init_version == nil then
        ngx.log(ngx.ERR, "switch_flag not set")
        return
    end
    -- client signature and APP version, passed in request headers
    -- (the exact header names are deployment-specific)
    local sig = ngx.var.http_sig
    local version = ngx.var.http_version
    local version_key, key_err = get_version_key(version)
    if version_key == nil then
        ngx.log(ngx.ERR, "get_version_key err: ", key_err)
        return
    end
    local sig_string, create_err
    if ngx.var.request_method == "GET" then
        sig_string, create_err = create_get_method_sig_string()
    else
        local content_type = ngx.var.content_type
        if content_type == "application/json" then
            sig_string, create_err = create_post_json_method_sig_string()
        else
            return
        end
    end
    if sig_string == nil then
        ngx.log(ngx.ERR, "create sig string err: ", create_err)
        return
    end
    sig_string = sig_string .. version_key
    local aa = resty_aa:new()
    if aa == nil then
        ngx.log(ngx.ERR, "resty_aa new err")
        return
    end
    local ok = aa:update(sig_string)
    if not ok then
        ngx.log(ngx.ERR, "resty_aa update err")
        return
    end
    local digest = aa:final()
    local right_sig = resty_str.to_hex(digest)
    if right_sig ~= sig then
        ngx.log(ngx.ERR, "signature failed, right_sig: ", right_sig,
                " client_sig: ", sig, " sig_string: ", sig_string)
    end
end
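The helper create_get_method_sig_string is referenced above but not shown. A plausible sketch, assuming the canonical string is built from URL-decoded GET parameters sorted by name (the exact format is defined by the client SDK and is an assumption here):

```lua
-- Sketch: assemble a deterministic "k1=v1&k2=v2" string from GET args
local function create_get_method_sig_string()
    local args, err = ngx.req.get_uri_args()
    if err == "truncated" then
        return nil, "too many request arguments"
    end
    -- collect and sort parameter names for a deterministic order
    local keys = {}
    for k, _ in pairs(args) do
        keys[#keys + 1] = k
    end
    table.sort(keys)
    local parts = {}
    for _, k in ipairs(keys) do
        local v = args[k]
        if type(v) == "table" then
            v = v[1]    -- duplicated params arrive as a table: keep the first
        elseif v == true then
            v = ""      -- bare "?flag" style arguments have no value
        end
        parts[#parts + 1] = k .. "=" .. v
    end
    return table.concat(parts, "&"), nil
end
```

ngx.req.get_uri_args already URL-decodes values, which is what keeps server-side and client-side canonical strings consistent.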

Notes: Use half‑open intervals for version ranges to avoid mismatches during version roll‑over. Ensure request parameters are deduplicated, decoded, and sorted consistently for both GET and POST requests. Keep the authentication switches dynamically configurable so authentication can be toggled without a reload.

3. Traffic Control

Background: In multi‑IDC or multi‑AZ deployments, it is often necessary to isolate a specific node or whole availability zone for maintenance, fault isolation, or rapid rollback.

Implementation Principle: A shared memory zone stores the status (up/down) of each backend node. Lua scripts manipulate this state via the set_peer_down API, allowing dynamic enable/disable of traffic to individual IPs or entire IDC segments.

Key Steps:

1. Manage node lists and down‑state flags through a web UI or configuration service.

2. Initialize shared memory (e.g., lua_shared_dict healthcheck 1m;) and load the down‑state data.

3. Use set_peer_down to mark a peer as down or up.

4. Ensure worker processes synchronize via the shared dictionary.

5. Monitor node status and set expiration alarms for manual down operations.

Code Example – Shared Memory Declaration:

lua_shared_dict healthcheck 1m;
lua_shared_dict logger_dict 10m;
lua_shared_dict logger_metric_dict 10m;

Code Example – Peer Down Function:

-- peer helpers from the ngx.upstream module bundled with OpenResty
local upstream = require "ngx.upstream"
local get_primary_peers = upstream.get_primary_peers
local get_backup_peers = upstream.get_backup_peers
local set_peer_down = upstream.set_peer_down

local function set_primary_backup_peer_down(name, addr, is_backup, down_value)
    local err_msg
    local ok = false
    local peers
    if is_backup then
        peers, err_msg = get_backup_peers(name)
    else
        peers, err_msg = get_primary_peers(name)
    end
    if not peers then
        err_msg = "failed to get servers in upstream " .. name .. " err: " .. err_msg
        return ok, err_msg
    end
    -- find the peer whose "name" field matches the target address
    for i, srv in ipairs(peers) do
        for k, v in pairs(srv) do
            if k == "name" and v == addr then
                local peer_id = i - 1  -- peer ids are zero-based
                ok, err_msg = set_peer_down(name, is_backup, peer_id, down_value)
                break
            end
        end
    end
    return ok, err_msg
end
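A function like this is typically driven from an access-restricted admin location. A hypothetical sketch (the location path, query parameters, and the assumption that set_primary_backup_peer_down is exported are all illustrative):

```lua
-- nginx.conf (admin server, access-restricted):
-- location = /peer/down {
--     content_by_lua_file "/opt/openresty/lualib/peer_ctrl.lua";
-- }

-- peer_ctrl.lua: mark a peer down or up, e.g.
--   GET /peer/down?upstream=backend&addr=10.0.0.1:8080&down=1
local args = ngx.req.get_uri_args()
local down = args.down == "1"
local ok, err = set_primary_backup_peer_down(args.upstream, args.addr, false, down)
if not ok then
    ngx.status = ngx.HTTP_INTERNAL_SERVER_ERROR
    ngx.say("failed: ", err or "unknown")
    return
end
ngx.say("ok")
```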

Notes: Take care of worker synchronization, monitor status changes, and set expiration for manual down actions to avoid forgetting to bring a node back up.
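Because set_peer_down only affects the calling worker, the synchronization mentioned above can be handled with a recurring timer in each worker that reads the desired state from the shared dict and applies it locally. A sketch with illustrative key names:

```lua
-- init_worker_by_lua_block sketch: each worker polls the shared dict and
-- applies the desired peer state locally via set_primary_backup_peer_down
local healthcheck = ngx.shared.healthcheck

local function sync_peers(premature)
    if premature then return end
    for _, key in ipairs(healthcheck:get_keys(0)) do
        -- keys are assumed to look like "down:<upstream>:<addr>"
        local upstream_name, addr = key:match("^down:([^:]+):(.+)$")
        if upstream_name then
            local down = healthcheck:get(key) == 1
            set_primary_backup_peer_down(upstream_name, addr, false, down)
        end
    end
end

local ok, err = ngx.timer.every(1, sync_peers)
if not ok then
    ngx.log(ngx.ERR, "failed to start peer sync timer: ", err)
end
```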

4. Request Tracing

Background: Logging every request can be costly; therefore a whitelist of CIDs (customer IDs) is maintained to filter which requests are recorded.

Implementation Principle: A global Lua filter captures the response body (limited to 50 KB), stores it in ngx.ctx, and writes the filtered logs to an Elasticsearch (ES) cluster in batches.

Key Steps:

1. Define a global log_by_lua_file and body_filter_by_lua_file in the Nginx configuration.

2. In the body filter, truncate the response to a configurable maximum size.

3. Batch logs using a shared dictionary; configure batch size, timeout, and retry policy.

4. Push batches to ES via lua‑resty‑elasticsearch or lua‑resty‑http .

5. Emit metrics for batch overflow, timeout, and retry counts.

Global Configuration Example:

http {
    log_by_lua_file "/opt/openresty/lualib/logger-plugin/logger.lua";

    server {
        set $resp_body "";
        body_filter_by_lua_file "/opt/openresty/lualib/logger-plugin/filter.lua";
    }
}

Body Filter Example (limit response body to 50 KB):

local max_size = 51200  -- 50 * 1024 bytes

-- Accumulate at most max_size bytes of the response body across chunks.
-- Truncating each chunk alone is not enough: the accumulated total could
-- still exceed the limit, so cap the buffer itself.
local buffered = ngx.ctx.resp_buffered or ""
local room = max_size - #buffered
if room > 0 and ngx.arg[1] then
    buffered = buffered .. string.sub(ngx.arg[1], 1, room)
    ngx.ctx.resp_buffered = buffered
end
if ngx.arg[2] then  -- eof: expose the captured body to the log phase
    ngx.var.resp_body = buffered
end

Batch Processor Configuration Example:

local _M = {}
_M.conf = {
    dict = ngx.shared.logger_dict,
    name = "ES",
    send_metric_exptime = 3600,
    elasticsearch_index = "****",
    endpoint_addr = "****",        -- ES cluster address
    response_body_max_size = 51200,
    retry_delay = 5,
    max_retry_count = 5,
    batch_max_size = 1000,
    batch_log_max_size = 52428800, -- 50M
    inactive_timeout = 10,
    buffer_duration = 10,
}
return _M
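The batch push itself (steps 3–4 above) can be sketched as follows, assuming lua-resty-http, the shared-dict list API as the queue, and the conf table shown above; the require path, queue key, and logged fields are illustrative:

```lua
-- logger.lua sketch: enqueue per-request entries, flush batches to ES
local cjson = require "cjson.safe"
local conf  = require("logger-plugin.conf").conf  -- assumed path to the conf module
local dict  = conf.dict

local function flush(premature)
    if premature then return end
    -- drain up to batch_max_size entries from the shared-dict list
    local lines = {}
    while #lines < conf.batch_max_size do
        local entry = dict:lpop("queue")
        if not entry then break end
        lines[#lines + 1] = entry
    end
    if #lines == 0 then return end
    -- ES bulk API: one action line plus one source line per document
    local body = {}
    for _, l in ipairs(lines) do
        body[#body + 1] = '{"index":{"_index":"' .. conf.elasticsearch_index .. '"}}'
        body[#body + 1] = l
    end
    local httpc = require("resty.http").new()
    local res, err = httpc:request_uri(conf.endpoint_addr .. "/_bulk", {
        method = "POST",
        body = table.concat(body, "\n") .. "\n",
        headers = { ["Content-Type"] = "application/x-ndjson" },
    })
    if not res then
        ngx.log(ngx.ERR, "ES bulk request failed: ", err)
    end
end

-- per request (log phase): enqueue one JSON line, then arm a flush timer
dict:rpush("queue", cjson.encode({
    uri = ngx.var.uri,
    status = ngx.var.status,
    resp_body = ngx.var.resp_body,
}))
ngx.timer.at(conf.inactive_timeout, flush)
```

A production version would also honor retry_delay, max_retry_count, and batch_log_max_size from the conf table, and emit the overflow and timeout metrics mentioned above.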

Notes: Keep the body size small to avoid performance impact, set reasonable batch limits and timeouts, and monitor overflow metrics.

5. Summary

This article demonstrates how Lua scripts running on OpenResty can provide fast, low‑overhead solutions for API authentication, dynamic traffic control, and request tracing in large‑scale production environments. By leveraging shared memory, dynamic configuration, and batch processing, operations teams can improve security, reduce downtime, and gain richer observability without sacrificing the performance of the load‑balancing layer.

Tags: traffic control, logging, Nginx, Lua, OpenResty, API authentication
Written by

Sohu Tech Products

A knowledge-sharing platform for Sohu's technology products. As a leading Chinese internet brand with media, video, search, and gaming services and over 700 million users, Sohu continuously drives tech innovation and practice. We’ll share practical insights and tech news here.
