
Why Does the nf_conntrack Table Get Full? A Deep Dive and Tuning Guide for High‑Concurrency Environments

This article analyzes the nf_conntrack table‑full issue encountered during high‑concurrency OpenStack upgrades, explains how connection tracking works and how entries are stored in a kernel hash table, and provides practical kernel‑parameter and iptables adjustments that improve memory usage and lookup performance.

360 Zhihui Cloud Developer

Background

This article originates from the HULK virtualization team and examines an nf_conntrack table‑full problem that surfaced during an OpenStack upgrade on hosts handling high‑concurrency iptables traffic. The issue is common in environments that rely heavily on iptables.

Scenario Description

During an Ansible‑driven upgrade to the OpenStack M release (Mitaka), the kernel repeatedly logged:

Dec 14 15:00:05 w-openstack08 kernel: nf_conntrack: table full, dropping packet

These drops were caused by the nf_conntrack module reaching its maximum entry count, not by specific Ansible steps.
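To confirm this on a live host, compare the current entry count with the configured ceiling. A minimal sketch using the standard sysctl files (the 90% warning threshold is an illustrative choice, not from the original article):

count=$(cat /proc/sys/net/netfilter/nf_conntrack_count)
max=$(cat /proc/sys/net/netfilter/nf_conntrack_max)
echo "conntrack entries: $count / $max"
# Warn when usage crosses ~90% of the limit (illustrative threshold)
if [ $(( count * 100 / max )) -ge 90 ]; then
    echo "WARNING: nf_conntrack table nearly full" >&2
fi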

How nf_conntrack Works

nf_conntrack tracks connections on behalf of iptables rules that do stateful matching (e.g., --state RELATED,ESTABLISHED). Each tracked connection is exposed in /proc/net/nf_conntrack and stored internally in a kernel hash table.

# iptables -S | grep state
-A neutron-openvswi-i6db946b2-1 -m state --state RELATED,ESTABLISHED -j RETURN
-A neutron-openvswi-i6db946b2-1 -m state --state INVALID -j DROP

Typical entries look like:

ipv4 2 tcp 6 115 TIME_WAIT src=xx.xx.xx.xx dst=yy.yy.yy.yy sport=54585 dport=9000 [ASSURED] use=2

Most connections end up in TIME_WAIT, so even moderate traffic accumulates many entries: at 100 QPS of short‑lived web requests, each connection lingers for the 120 s default TIME_WAIT timeout, leaving roughly 12,000 concurrent entries at steady state (100 conn/s × 120 s).
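To see how much of the table TIME_WAIT occupies on a given host, count the entries directly (a quick check; /proc/net/nf_conntrack only exists while the module is loaded):

grep -c TIME_WAIT /proc/net/nf_conntrack   # entries stuck in TIME_WAIT
wc -l < /proc/net/nf_conntrack             # total tracked entries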

Hash‑Table Storage and Complexity

nf_conntrack stores entries in a hash table that resolves collisions by chaining (a linked list per bucket). Giving every entry its own bucket would make lookups O(1), but memory at that scale is prohibitive: roughly 3 GB for 12 M entries.

The defaults on the affected hosts were:

net.netfilter.nf_conntrack_max = 12262144
net.netfilter.nf_conntrack_buckets = 16384

With the table full, this yields an average chain of about 750 entries per bucket (12262144 / 16384 ≈ 748), so each lookup degenerates into a long list walk.
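The same arithmetic can be run on any host to gauge its worst‑case chain length (a minimal sketch; both sysctl keys are standard):

max=$(sysctl -n net.netfilter.nf_conntrack_max)
buckets=$(sysctl -n net.netfilter.nf_conntrack_buckets)
# Average entries per bucket if the table fills completely
echo "worst-case chain length: $(( max / buckets ))"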

Optimization Strategies

Upgrade OVS – Use newer Open vSwitch versions that can enforce security without iptables, eliminating the need for nf_conntrack.

Modify iptables rules – Add NOTRACK rules for traffic that does not require tracking, as sketched below.
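A hedged example of such a rule (port 9000 is purely illustrative; NOTRACK rules live in the raw table and should cover both directions, and newer iptables spells the same thing -j CT --notrack):

# Skip connection tracking for an illustrative service port
iptables -t raw -A PREROUTING -p tcp --dport 9000 -j NOTRACK
iptables -t raw -A OUTPUT -p tcp --sport 9000 -j NOTRACK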

Tune kernel parameters – Balance nf_conntrack_max and nf_conntrack_buckets against available RAM. The commonly cited netfilter sizing formula is:

CONNTRACK_MAX = RAM_SIZE (in bytes) / 16384 / (x / 32)

where x is the architecture word size in bits (32 or 64). For a 64 GB host, a good configuration is (see the sizing sketch after the values):

nf_conntrack_max = 4194304
nf_conntrack_buckets = 1048576
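A minimal sizing sketch based on that formula, assuming a 64‑bit host (note the formula as written yields 2097152 for 64 GB; the values above double it, which still fits easily in memory):

# Derive conntrack sizing from installed RAM (x/32 = 2 on a 64-bit host)
ram_bytes=$(awk '/MemTotal/ {print $2 * 1024}' /proc/meminfo)
conntrack_max=$(( ram_bytes / 16384 / 2 ))
buckets=$(( conntrack_max / 4 ))   # ratio of 4 entries per bucket, matching the values above
echo "nf_conntrack_max=$conntrack_max nf_conntrack_buckets=$buckets"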

Adjust timeout values – Reduce nf_conntrack_tcp_timeout_time_wait from the default 120 s to 60 s to roughly halve the steady‑state number of TIME_WAIT entries.

Recommended Configuration

nf_conntrack_max = 4194304
nf_conntrack_buckets = 1048576
nf_conntrack_tcp_timeout_time_wait = 60

This setup fits comfortably on a 64 GB server (at most ≈1.2 GB if the table fills) while providing near‑O(1) lookup performance for the 30 K‑200 K entries typically observed.
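To apply and persist these values, a hedged sketch (the sysctl keys are standard; on older kernels nf_conntrack_buckets is read‑only through sysctl, so the bucket count is set via the module's hashsize parameter instead):

sysctl -w net.netfilter.nf_conntrack_max=4194304
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_time_wait=60
# Bucket count: writable via sysctl on newer kernels, otherwise:
echo 1048576 > /sys/module/nf_conntrack/parameters/hashsize
# Persist across reboots (file name is illustrative)
cat >> /etc/sysctl.d/90-conntrack.conf <<'EOF'
net.netfilter.nf_conntrack_max = 4194304
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 60
EOF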

Tags: Linux kernel, hash table, iptables, connection tracking, nf_conntrack
Written by

360 Zhihui Cloud Developer

360 Zhihui Cloud is an enterprise open service platform that aims to "aggregate data value and empower an intelligent future," leveraging 360's extensive product and technology resources to deliver platform services to customers.
