
Build High‑Performance Web Services with OpenResty and Linux Socket I/O

This guide explains how to use Linux non‑blocking socket APIs (select, poll, epoll) and the OpenResty platform (Nginx + LuaJIT) to create scalable, high‑concurrency web services, covering architecture, caching strategies, and step‑by‑step installation and configuration.

Efficient Ops

Socket

Linux socket programming for handling massive numbers of concurrent connections relies on non-blocking I/O and multiplexing mechanisms such as select, poll, and epoll. Since epoll was introduced in Linux 2.6, high-performance servers such as Nginx have used it for I/O multiplexing and high concurrency.
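As an illustrative sketch (not OpenResty code), the readiness-notification model behind select/poll/epoll can be shown with Python's standard selectors module, whose DefaultSelector uses epoll on Linux; the socket pair and echo handler here are invented for the demo:

```python
import selectors
import socket

# Minimal sketch of readiness-based I/O multiplexing, the model behind
# select/poll/epoll. selectors.DefaultSelector picks epoll on Linux.
sel = selectors.DefaultSelector()

# A connected socket pair stands in for a server-side connection and a client.
server_side, client_side = socket.socketpair()
server_side.setblocking(False)   # never let the event loop block on recv

def echo(sock):
    data = sock.recv(1024)       # the socket is readable, so this won't block
    sock.sendall(data.upper())   # echo the payload back, transformed

sel.register(server_side, selectors.EVENT_READ, echo)

client_side.sendall(b"ping")
for key, _ in sel.select(timeout=1):   # wait for readiness events
    key.data(key.fileobj)              # dispatch the registered handler

reply = client_side.recv(1024)
print(reply)                           # b'PING'
```

The key point is that the server socket is registered once and the loop only touches it when the kernel reports it ready, which is exactly how one worker can juggle thousands of connections.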

For high-performance back ends, the decisive factors are not raw language speed but caching strategy and support for asynchronous, non-blocking I/O.

The cache hierarchy (memory over SSD over HDD, local over network, intra-process over inter-process) aims to serve as many hits as possible from the fastest tier, in-process memory, maximizing overall efficiency.
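To illustrate why in-process hits are the cheapest tier, here is a minimal Python sketch (the fetch_profile function and its latency are hypothetical) that keeps hot results in process memory with functools.lru_cache:

```python
import time
from functools import lru_cache

backend_calls = 0  # counts how often we fall through to the slow tier

# Hypothetical slow lookup standing in for a database or network fetch.
@lru_cache(maxsize=1024)
def fetch_profile(user_id):
    global backend_calls
    backend_calls += 1
    time.sleep(0.01)            # simulated I/O latency
    return {"id": user_id}

fetch_profile(7)                # miss: pays the backend latency
fetch_profile(7)                # hit: served from in-process memory
print(fetch_profile.cache_info().hits, backend_calls)
```

Only the first call reaches the "backend"; every later hit is a dictionary lookup inside the process, with no network or inter-process hop at all.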

Asynchronous non‑blocking I/O lets the server avoid waiting on slow I/O (database, network, disks) by using event‑driven notifications, freeing CPU cycles for serving other clients.

OpenResty

OpenResty is a high‑performance web platform built on Nginx and Lua, integrating a rich Lua library and third‑party modules. It enables rapid development of dynamic web applications that can handle 10K‑1000K concurrent connections on a single machine by leveraging Nginx's non‑blocking I/O model.

OpenResty combines Nginx and LuaJIT, fundamentally changing the development model for high‑performance services.

In Nginx's master-worker model, each worker process runs its own Lua VM. When a request is dispatched to a worker, a Lua coroutine is created to handle it; each coroutine has an isolated global environment. Coroutines resemble threads, but they are scheduled cooperatively rather than preemptively.
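As a language-neutral sketch of cooperative scheduling (Python generators stand in for Lua coroutines, and an invented round-robin loop stands in for the Nginx event loop; none of these names are OpenResty APIs):

```python
# Each "request" is a generator that yields whenever it would block,
# mirroring how a worker parks one coroutine per request during I/O.
def handle_request(request_id):
    yield f"request {request_id}: waiting on upstream"
    yield f"request {request_id}: done"

def run(coroutines):
    log = []
    while coroutines:
        current = coroutines.pop(0)
        try:
            log.append(next(current))     # resume until the next yield
            coroutines.append(current)    # re-queue; it yielded voluntarily
        except StopIteration:
            pass                          # request finished, drop it
    return log

log = run([handle_request(1), handle_request(2)])
for line in log:
    print(line)
```

Note how the two requests interleave without threads or locks: each coroutine gives up control only at its own yield points, which is what makes the model cheap at high concurrency.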

Benchmarks show OpenResty performance comparable to Nginx’s C modules, sometimes exceeding them.

OpenResty Architecture

Load Balancing

LVS and HAProxy forward traffic to the core Nginx instances, distributing load across them.

Single-Machine Closed Loop

All data a request needs is obtained on the local machine, minimizing network calls.

Distributed Closed Loop

Two main challenges arise: data inconsistency (for example, when master-slave replication is missing) and storage bottlenecks (disk or memory limits). Master-slave or distributed storage addresses consistency, and sharding data by business key relieves storage bottlenecks.
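Sharding by business key can be sketched as a stable hash from key to shard; the shard names and key format below are invented for illustration:

```python
import hashlib

# Each record is routed by hashing its business key (e.g. a user id)
# to one of N storage shards.
SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

def shard_for(key: str) -> str:
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The same key always routes to the same shard, so each node holds
# only the slice of data it owns.
print(shard_for("user:42"))
```

Because the mapping is deterministic, every node in the closed loop can compute the owner of a key locally, without a central lookup service.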

Gateway

The access gateway (entry layer) receives traffic and performs preprocessing before handing requests to backend services.

OpenResty Environment Setup

Prerequisites: install the perl, libpcre, and libssl libraries.

<code># Check required libraries
$ sudo ldconfig -v
# Install required libraries
$ sudo apt install libpcre3-dev libssl-dev perl make build-essential curl libreadline-dev libncurses5-dev</code>

Download and extract OpenResty, then compile and install:

<code>$ wget https://openresty.org/download/openresty-1.13.6.1.tar.gz
$ tar -zxvf openresty-1.13.6.1.tar.gz
$ mv openresty-1.13.6.1 openresty
$ cd openresty
$ ./configure
$ make && sudo make install
# Default installation path: /usr/local/openresty</code>

Start Nginx:

<code>$ sudo /usr/local/openresty/nginx/sbin/nginx
$ ps -ef | grep nginx
$ service nginx status</code>

If port 80 is already in use, identify the occupying process and stop it:

<code>$ sudo netstat -ntlp | grep 80
$ sudo killall -9 nginx</code>

Edit the Nginx configuration to listen on both IPv4 and IPv6:

<code>$ sudo vim /usr/local/openresty/nginx/conf/nginx.conf
listen 80;
listen [::]:80 ipv6only=on default_server;</code>

Test the server with curl or a browser (http://127.0.0.1).

Add OpenResty binaries to the PATH:

<code>$ vim ~/.bashrc
export PATH=$PATH:/usr/local/openresty/nginx/sbin
$ source ~/.bashrc</code>

OpenResty Quick Start

Create a working directory and custom configuration:

<code>$ mkdir -p ~/openresty/test/logs ~/openresty/test/conf
$ vim ~/openresty/test/conf/nginx.conf
worker_processes 1;
error_log logs/error.log;
events { worker_connections 10240; }
http {
    server {
        listen 8001;
        location / {
            default_type text/html;
            content_by_lua_block { ngx.say("hello world"); }
        }
    }
}
$ nginx -p ~/openresty/test
$ curl 127.0.0.1:8001</code>

Use Lua scripts via

content_by_lua_file

:

<code>$ vim ~/openresty/test/conf/nginx.conf
location /test {
    content_by_lua_file "lua/test.lua";
}
$ mkdir ~/openresty/test/lua && vim ~/openresty/test/lua/test.lua
local args = ngx.req.get_uri_args()
local salt = args.salt
if not salt then ngx.exit(ngx.HTTP_BAD_REQUEST) end
local md5str = ngx.md5(ngx.time()..salt)
ngx.say(md5str)
$ nginx -p ~/openresty/test -s reload
$ curl -i '127.0.0.1:8001/test?salt=lua'</code>

On Windows, view and terminate Nginx processes with:

<code>tasklist /fi "imagename eq nginx.exe"
taskkill /im nginx.exe /f</code>
Tags: backend development, Linux, Nginx, Lua, socket programming, OpenResty
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
