If you have written a firewall rule on Linux in the last two decades, you have almost certainly typed iptables. It was the default packet filtering interface for so long that many administrators treat it as synonymous with "Linux firewall." But iptables is now legacy software. Its successor, nftables, has been available in the mainline kernel since January 2014 and is the default firewall framework on Debian 10+, RHEL 8+, Fedora 32+, and Ubuntu 20.04+. If you are still building rulesets on top of iptables, this guide is your reason to stop -- and your roadmap to move forward.
A Brief History: From Netfilter Workshop 2008 to Kernel 3.13
The nftables project traces its origins to the Netfilter Workshop 2008 in Paris, where Patrick McHardy of the Netfilter Core Team first presented the concept. The initial preview of both kernel and userspace code followed in March 2009. At the time, it was described as the most significant change to Linux firewalling since iptables itself was introduced with the Linux 2.4 kernel in 2001, and security researcher Fyodor Vaskovich (creator of Nmap) publicly expressed anticipation for its inclusion in the mainline kernel.
The project stalled in alpha, however, and its original website went offline in 2009. It was not until October 2012 that Pablo Neira Ayuso -- now the lead maintainer of Netfilter at the University of Seville -- revived the effort, proposing a compatibility layer for iptables and charting a path toward mainline inclusion. On October 16, 2013, Neira Ayuso submitted the nftables core pull request to the Linux kernel tree, and it was merged with the release of Linux kernel 3.13 on January 19, 2014.
nftables is a new packet classification framework based on lessons learnt from {ip,ip6,arp,eb}tables. It reuses the existing Netfilter building blocks: hooks, conntrack, NAT, logging and userspace queueing. -- Patrick McHardy and Pablo Neira Ayuso, Netdev 0.1 Conference, February 2015
That lineage matters. nftables is not a ground-up rewrite of the networking stack. It sits on top of the same Netfilter hooks, the same connection tracking subsystem, and the same NAT engine. What changed is the classification layer -- how rules are expressed, compiled, and evaluated inside the kernel.
Why iptables Needed Replacing
iptables served Linux well, but it carried architectural baggage that could not be fixed with incremental patches. According to the Debian Wiki and the Netfilter project documentation, the core problems include the following.
Code duplication across protocol families. iptables, ip6tables, arptables, and ebtables are four separate tools with four separate kernel codepaths handling IPv4, IPv6, ARP, and Ethernet bridging respectively. Every shared feature -- say, connection tracking integration -- had to be implemented and maintained independently in each tool. This created a significant code maintenance burden and inconsistent behavior across protocol families.
Linear rule evaluation. In iptables, packets traverse rules in a chain sequentially from top to bottom. For large rulesets with thousands of entries (common in container orchestration or multi-tenant hosting), this linear O(n) evaluation becomes a performance bottleneck. There was no native mechanism for set-based lookups or indexed matching.
Non-atomic rule updates. When you modify an iptables ruleset, the entire table is fetched from the kernel into userspace as a binary blob, modified, and then pushed back. During this window, the ruleset is inconsistent -- creating a potential race condition where packets can be evaluated against a partially applied ruleset. For dynamic environments where rules change frequently, this is a serious reliability concern.
Limited extensibility. Adding new match or target modules to iptables required writing kernel modules and corresponding userspace libraries, then integrating them into the iptables source tree. This tight coupling between kernel and userspace made development cycles slow and raised the barrier for new features.
iptables has not been fully removed from modern kernels. Instead, distributions ship iptables-nft, a compatibility shim that translates iptables commands into nftables kernel API calls via the nf_tables backend. Running readlink -f $(which iptables) on current Ubuntu systems reveals it points to xtables-nft-multi. This means your iptables commands are already hitting the nftables engine under the hood.
Architecture: The nf_tables Virtual Machine
The fundamental architectural change in nftables is the introduction of a bytecode virtual machine inside the kernel. Rather than hard-coding protocol-specific matching logic into kernel modules (as iptables did), nftables compiles rules in userspace into a sequence of low-level VM instructions. These instructions perform basic operations: load data from a packet header into a register, compare register values, look up a value in a set, render a verdict.
The Wikipedia article on nftables describes this design clearly: the VM operations are intentionally kept basic, and complex filtering logic is built by combining these primitives. This approach has several concrete advantages.
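You can observe this compilation step yourself: nft can print the netlink bytecode it generates for a rule. A minimal sketch, assuming a table and chain named `inet firewall inbound` already exist (the rule itself is illustrative):

```shell
# Print the generated VM instructions without applying the rule
# (-c checks only; --debug=netlink dumps the netlink/bytecode representation)
nft --debug=netlink -c add rule inet firewall inbound tcp dport 22 accept
```

The output shows register loads, comparisons, and the verdict -- the same primitive operations described above, regardless of protocol family.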
First, a single VM handles all protocol families. IPv4, IPv6, ARP, and bridge filtering all use the same instruction set and evaluation engine. The inet address family allows you to write a single ruleset that processes both IPv4 and IPv6 packets -- eliminating the need to maintain duplicate rules across iptables and ip6tables.
Second, new features can be added without kernel modifications. If a new filtering capability can be expressed as a combination of existing VM instructions, it requires only updates to the nft userspace tool and the libnftnl library -- no kernel patches needed. As Red Hat's RHEL 8 documentation explains, enhancements can be introduced either as new VM instructions (requiring a kernel module) or by creatively combining existing ones.
Third, rule updates are atomic. nftables uses Netlink transactions to apply rulesets. All rules in a transaction are either applied completely or not at all. There is no window where a partially applied ruleset is active. The nftables userspace API allows atomic replacement of one or more rules within a single Netlink message, which is critical for high-availability environments.
Core Concepts: Tables, Chains, Rules
If you are coming from iptables, the terminology sounds familiar -- tables, chains, rules -- but the semantics are different in important ways. In iptables, the kernel provides predefined tables (filter, nat, mangle, raw, security) with predefined chains (INPUT, FORWARD, OUTPUT, etc.). In nftables, nothing is predefined. You create your own tables with arbitrary names, define your own chains, and register them at whatever Netfilter hooks you need. This is a key design philosophy: no overhead from unused chains and tables.
Tables
A table in nftables is a namespace -- a container for chains, rules, sets, and maps. Each table is associated with an address family that determines which packets it processes. If no family is specified in a command, ip (IPv4) is assumed by default -- a common source of confusion when rules don't match IPv6 traffic as expected:
- ip -- IPv4 only (default if no family is specified)
- ip6 -- IPv6 only
- inet -- dual-stack, processes both IPv4 and IPv6 (also supports an ingress hook since kernel 5.10)
- arp -- ARP packets
- bridge -- Ethernet frames traversing bridge devices
- netdev -- early packet processing at the device driver level (ingress hook since kernel 4.2, egress hook since kernel 5.16)
Creating a table is straightforward:
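For example, the dual-stack table used throughout the examples below can be created with a single command:

```shell
# Create an inet (dual-stack) table named "firewall"
nft add table inet firewall
```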
Chains
Chains come in two types: base chains and regular chains. Base chains are registered at a Netfilter hook and have a type, a hook point, and a priority. Regular chains are unattached -- they only see packets if a base chain uses jump or goto to send traffic to them.
The three chain types are filter (generic packet filtering, supported on all families), nat (network address translation, only the first packet of each flow is evaluated), and route (allows influencing routing decisions by modifying packet marks or headers).
# Create a filter chain hooked into the input path
nft add chain inet firewall inbound \
    { type filter hook input priority 0 \; policy drop \; }

# Create a filter chain for forwarded traffic
nft add chain inet firewall forward \
    { type filter hook forward priority 0 \; policy drop \; }

# Create an output chain
nft add chain inet firewall outbound \
    { type filter hook output priority 0 \; policy accept \; }
The priority value determines ordering when multiple chains are registered at the same hook. Lower numbers execute first. Common priority values include -300 for raw/notrack processing, -200 for connection tracking, 0 for standard filtering, and 100 for source NAT. The policy sets the default verdict -- accept or drop -- for packets that reach the end of the chain without matching any rule.
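Regular chains are wired in with jump from a base chain. A sketch, assuming the inet firewall table from above (the ssh_checks chain name and rate limit are illustrative):

```shell
# A regular chain: no hook, no priority -- it only sees traffic sent to it
nft add chain inet firewall ssh_checks

# Dispatch SSH traffic from the base chain into the regular chain
nft add rule inet firewall inbound tcp dport 22 jump ssh_checks

# Rules in the regular chain, e.g. rate-limiting new SSH connections
nft add rule inet firewall ssh_checks ct state new limit rate 4/minute accept
```

Packets that reach the end of a regular chain without a verdict return to the calling base chain (with jump; goto does not return).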
Rules
Rules are the actual filtering logic. Unlike iptables, where each rule can only perform one action (the -j target), nftables allows multiple actions per rule. You can log, count, and accept a packet in a single rule statement. Conditions within a rule are logically ANDed -- all must match for the rule to fire.
# Allow established and related connections
nft add rule inet firewall inbound ct state established,related accept

# Allow loopback
nft add rule inet firewall inbound iifname lo accept

# Allow ICMP and ICMPv6
nft add rule inet firewall inbound ip protocol icmp accept
nft add rule inet firewall inbound ip6 nexthdr icmpv6 accept

# Allow SSH, HTTP, and HTTPS with logging and counters -- single rule
nft add rule inet firewall inbound tcp dport { 22, 80, 443 } \
    counter log prefix "ALLOWED: " accept
Notice the { 22, 80, 443 } syntax -- that is an anonymous set, defined inline. The kernel automatically selects an efficient data structure (hash table or red-black tree) for lookup. In iptables, you would need three separate rules or the external ipset utility to achieve the same result.
Also note that counters are optional in nftables. In iptables, every rule carries an implicit counter, which adds overhead even when you do not need the data. nftables gives you the choice, and even offers two counter types: a system-wide counter with locking and a per-CPU counter for high-throughput paths.
Beyond inline counters, nftables supports named stateful objects -- counters, quotas, and limits that are defined at the table level and referenced by name from one or more rules. Named counters let you aggregate hit counts across multiple rules (for example, counting all HTTP and HTTPS traffic into a single named counter), while named quotas can enforce bandwidth limits per service and trigger a drop verdict once a threshold is exceeded. These stateful objects can be inspected and reset independently with nft list counters, nft list quotas, and nft reset counters, without modifying the ruleset itself. This capability has no iptables equivalent.
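A sketch of that pattern, assuming the inet firewall table from earlier (the web_hits counter name is illustrative):

```shell
# Define a named counter at the table level
nft add counter inet firewall web_hits

# Reference it from multiple rules to aggregate hits
nft add rule inet firewall inbound tcp dport 80 counter name web_hits accept
nft add rule inet firewall inbound tcp dport 443 counter name web_hits accept

# Inspect it independently of the ruleset
nft list counter inet firewall web_hits
```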
Sets, Maps, and Verdict Maps
One of the most compelling advantages nftables has over iptables is its built-in generic set infrastructure. Sets, maps, and verdict maps allow you to replace long chains of individual rules with efficient data structure lookups. According to the nftables wiki, set elements are internally represented using hash tables and red-black trees, allowing O(1) or O(log n) lookups instead of iptables' linear O(n) rule traversal.
Named Sets
Named sets are persistent, mutable collections that can be referenced from multiple rules and updated dynamically without modifying the rules themselves:
# Create a named set for blocked IPs
# (flags interval allows CIDR ranges such as 198.51.100.0/24 as elements)
nft add set inet firewall blocklist { type ipv4_addr \; flags interval \; }

# Add elements to the set
nft add element inet firewall blocklist { 192.0.2.1, 198.51.100.0/24 }

# Reference the set in a rule
nft add rule inet firewall inbound ip saddr @blocklist drop

# Later, add more IPs without touching any rules
nft add element inet firewall blocklist { 203.0.113.50 }
Sets can contain IP addresses, port numbers, MAC addresses, interface names, or even concatenations of multiple types (e.g., IP address combined with port number). You can also add timeout and flag options to automatically expire elements -- useful for dynamic blocklists or rate-limiting scenarios.
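A sketch of an auto-expiring set, assuming the inet firewall table from earlier (the temp_block name and timeout values are illustrative):

```shell
# Set whose elements expire automatically after 10 minutes by default
nft add set inet firewall temp_block { type ipv4_addr \; flags timeout \; timeout 10m \; }

# Override the default timeout for an individual element
nft add element inet firewall temp_block { 203.0.113.99 timeout 5m }
```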
Maps and Verdict Maps
Maps extend sets by associating each key with a value, functioning like dictionaries. Verdict maps (vmaps) take this further by mapping keys directly to verdict statements like accept, drop, or jump. The Red Hat documentation for RHEL 8 through RHEL 10 covers verdict maps extensively as a core firewall management technique.
# Create a verdict map for source IP policy
nft add map inet firewall policy_map { type ipv4_addr : verdict \; }

# Populate the map
nft add element inet firewall policy_map \
    { 192.0.2.1 : accept, 192.0.2.2 : drop, 192.0.2.3 : accept }

# Apply it in a single rule
nft add rule inet firewall inbound ip saddr vmap @policy_map
Maps can also be used for NAT. A common pattern is port-based DNAT using a map to route incoming traffic to different backend servers:
# Port-based DNAT using an inline map
nft add rule ip nat prerouting dnat to tcp dport map \
    { 80 : 192.168.1.100, 8888 : 192.168.1.101 }
For large dynamic blocklists (think fail2ban or GeoIP filtering with thousands of entries), named sets with the flags interval option support CIDR ranges efficiently. The kernel chooses the optimal backend data structure automatically based on set size and element types.
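A sketch of an interval set, again assuming the inet firewall table (the geo_block name and ranges are illustrative):

```shell
# Interval set: stores CIDR ranges efficiently
nft add set inet firewall geo_block { type ipv4_addr \; flags interval \; }
nft add element inet firewall geo_block { 198.51.100.0/24, 203.0.113.0/25 }
nft add rule inet firewall inbound ip saddr @geo_block drop
```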
Writing Complete Rulesets
While individual nft commands are useful for testing, production rulesets should be defined in configuration files using the declarative syntax. This is the format you will see when running nft list ruleset, and it is also valid input for nft -f. Atomic loading means the entire file is applied as a single transaction -- if any rule fails validation, none are applied. You can also use nft -c -f /etc/nftables.conf to validate a configuration file against the current kernel without actually loading it -- an essential practice before deploying changes to production systems.
#!/usr/sbin/nft -f

# Flush existing rules for a clean slate
flush ruleset

table inet firewall {
    # Named set for trusted management IPs
    set trusted_mgmt {
        type ipv4_addr
        elements = { 10.0.0.100, 10.0.0.200 }
    }

    # Named set for allowed TCP services (public-facing)
    set tcp_allowed {
        type inet_service
        elements = { 80, 443 }
    }

    chain inbound {
        type filter hook input priority 0; policy drop;

        # Connection tracking
        ct state established,related accept
        ct state invalid drop

        # Loopback
        iifname lo accept

        # ICMP -- allow ping and IPv6 neighbor discovery
        icmp type echo-request accept
        icmpv6 type { echo-request, nd-neighbor-solicit, nd-router-advert } accept

        # Allowed public TCP services (HTTP, HTTPS)
        tcp dport @tcp_allowed accept

        # SSH restricted to trusted management IPs only
        tcp dport 22 ip saddr @trusted_mgmt counter accept

        # Log and count anything else before the implicit drop
        counter log prefix "nft-drop: "
    }

    chain forward {
        type filter hook forward priority 0; policy drop;

        # Only allow established traffic through
        ct state established,related accept
    }

    chain outbound {
        type filter hook output priority 0; policy accept;
    }
}
Enable the nftables service so this configuration loads at boot:
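On systemd-based distributions this is typically:

```shell
# Enable and start the nftables unit, which loads /etc/nftables.conf
sudo systemctl enable --now nftables
```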
By default on Debian-based systems, the nftables service loads rules from /etc/nftables.conf. You can modularize your configuration using include directives to pull in files from /etc/nftables.d/ or any path you prefer.
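A sketch of that modular layout inside /etc/nftables.conf (the directory path is illustrative):

```nft
#!/usr/sbin/nft -f
flush ruleset

# Pull in per-service rule fragments
include "/etc/nftables.d/*.nft"
```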
The flush ruleset directive clears all tables, chains, sets, and maps from the kernel -- including those managed by other applications like Docker, Kubernetes kube-proxy, or firewalld. If you are running containerized workloads, be careful with blanket flushes. Consider flushing only your own table with flush table inet firewall, or use destroy table (available since nftables 1.0.2 / kernel 5.16) which safely deletes a specific table and all its objects without returning an error if the table does not exist -- making it safer for use in startup scripts.
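The targeted alternatives look like this, assuming your rules live in the inet firewall table:

```shell
# Flush only your own table, leaving Docker/firewalld tables untouched
nft flush table inet firewall

# Or remove the table entirely; no error even if it does not exist
# (requires nftables >= 1.0.2)
nft destroy table inet firewall
```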
Flowtables: The Software Fastpath
For systems acting as routers or firewalls that forward large volumes of traffic, nftables offers flowtables -- a network acceleration feature that provides a fastpath bypass for established connections. Once a connection is tracked and classified by the normal forwarding path, subsequent packets in that flow can skip the entire Netfilter hook traversal and be forwarded directly from the ingress hook.
The Netfilter developers describe flowtables as a mechanism where the initial packets of a connection (the TCP handshake, for instance) pass through the full filtering and connection tracking pipeline. Once the flow is established and offloaded, all following packets bypass the classic forwarding path entirely, using a cached routing decision that stores the output device and next-hop address. The TTL/hop limit fields are still decremented correctly.
table inet filter {
    flowtable fastpath {
        hook ingress priority 0
        devices = { eth0, eth1 }
    }

    chain forward {
        type filter hook forward priority 0; policy accept;

        # Offload established TCP and UDP flows
        meta l4proto { tcp, udp } flow add @fastpath
        ct state established,related accept
    }
}
On Linux kernels with appropriate hardware support, flowtables can additionally leverage hardware offload for even greater throughput. For software-only environments, the acceleration is still substantial -- especially on routers handling tens of thousands of concurrent flows.
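Enabling hardware offload is a one-line change to the flowtable declaration -- a sketch, assuming a NIC driver that supports it:

```nft
flowtable fastpath {
    hook ingress priority 0
    devices = { eth0, eth1 }
    flags offload    # hardware offload; omit for the software-only fastpath
}
```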
Debugging and Tracing
Debugging firewall rules has historically been painful. In legacy iptables, the -j TRACE target logged to the kernel ring buffer, and correlating the output with specific rules was tedious. nftables introduces a significantly improved event-based tracing system available since nftables 0.6 and kernel 4.6.
The approach is elegant: you set the nftrace metadata flag on a packet, and then use nft monitor trace to watch the packet's journey through your ruleset in real time. Each trace event includes a unique ID so you can follow a specific packet across multiple chains and rules.
# Add a temporary trace chain (priority before your filter chains)
nft add table inet trace_debug
nft add chain inet trace_debug trace_chain \
    { type filter hook prerouting priority -301 \; }

# Enable tracing for TCP traffic to port 443 from a specific host
nft add rule inet trace_debug trace_chain \
    ip saddr 192.0.2.50 tcp dport 443 meta nftrace set 1

# Monitor trace events in real time
nft monitor trace
The trace output shows every chain the packet enters, every rule it is evaluated against, and the final verdict. When you are finished debugging, simply delete the trace table:
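Cleanup is a single command:

```shell
# Remove the temporary trace table and everything in it
nft delete table inet trace_debug
```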
For less surgical debugging, per-rule counters remain the simplest tool. Add counter to any rule and then inspect the hit counts with nft list ruleset. You can also add counters to existing rules without rewriting them, and reset counters with nft reset counters.
The nftables wiki recommends creating the trace table with the flags owner option. This makes it a temporary table that is automatically removed when the controlling nft process exits -- preventing leftover trace rules from accumulating in production rulesets. For pre-deployment validation, always run nft -c -f to check your ruleset syntax before loading it.
Migrating from iptables to nftables
The Netfilter project provides two main migration paths to move from legacy iptables to nftables. The right choice depends on your situation and timeline.
Path 1: The iptables-nft Compatibility Layer
The lowest-effort path is using iptables-nft, which accepts standard iptables syntax but translates it to the nftables kernel API behind the scenes. On current Debian, Ubuntu, and Fedora systems, this is already the default -- your iptables commands are going through the nf_tables backend. You can verify this by checking the version output:
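The version number itself will vary by distribution; the suffix is what matters:

```shell
$ iptables --version
iptables v1.8.7 (nf_tables)
```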
If the output shows (nf_tables) after the version number, you are already using the compatibility layer. Rules created this way are visible in nft list ruleset.
However, as noted in a Red Hat blog post on iptables-nft, this approach trades convenience for capability. The compatibility layer cannot take advantage of native nftables features like sets, maps, verdict maps, concatenations, or per-rule counter controls. Compatibility expressions are also larger than native equivalents, and evaluation may carry extra overhead due to the translation indirection. Upstream development priorities focus on native nftables, so new features and optimizations land there first.
Path 2: Full Native Migration
For a complete migration, the Netfilter project ships two translation tools:
- iptables-translate -- translates individual iptables commands to their nft equivalents
- iptables-restore-translate -- translates an entire saved ruleset (from iptables-save output) to nftables format
# Step 1: Export current iptables rules
$ sudo iptables-save > /tmp/iptables-backup.txt

# Step 2: Translate the entire ruleset
$ iptables-restore-translate -f /tmp/iptables-backup.txt > /tmp/nft-ruleset.nft

# Step 3: Review the translated output
$ cat /tmp/nft-ruleset.nft

# Step 4: Test-load the translated rules
$ sudo nft -f /tmp/nft-ruleset.nft

# Step 5: Verify
$ sudo nft list ruleset
The iptables-translate tool is not perfect. Complex rules involving advanced matches -- particularly those using string matching, u32, or certain custom targets -- may produce commented-out lines in the output that require manual conversion. Always review the translated output carefully before deploying it. The nftables wiki notes that some iptables matches (notably the policy match for IPsec and the rpfilter match) still lack native nft replacements.
After migration, refactor the translated ruleset to take advantage of native nftables features. The auto-translated rules will mirror the iptables structure (separate chains named INPUT, FORWARD, OUTPUT) rather than leveraging sets, maps, and the more flexible chain topology that nftables enables. The real payoff comes from restructuring your ruleset to use these native capabilities.
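As a small illustration of that refactoring (the ip filter table and INPUT chain names mirror what the translation tools typically emit):

```shell
# Literal translation: one rule per port, mirroring the iptables layout
nft add rule ip filter INPUT tcp dport 22 accept
nft add rule ip filter INPUT tcp dport 80 accept
nft add rule ip filter INPUT tcp dport 443 accept

# Native refactor: one rule with an anonymous set
nft add rule ip filter INPUT tcp dport { 22, 80, 443 } accept
```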
nftables in Container and Kubernetes Environments
Container orchestration platforms add complexity to firewall management. Kubernetes' kube-proxy component, for example, manages thousands of iptables rules for Service routing. As of Kubernetes 1.29, kube-proxy includes native nftables support as an alpha backend option (promoted to beta in 1.31 and expected GA in 1.33), and projects like Calico offer an iptablesBackend: NFT configuration for their Felix component.
If you are running Docker or Podman, be aware that these tools may inject their own nftables rules for container networking. Using flush ruleset in your configuration will destroy those rules. The recommended approach is to manage your own rules in a dedicated table and leave container runtime tables untouched. Many administrators pair nftables with firewalld (which uses nftables as its backend since firewalld 0.6.0) to provide a managed abstraction that integrates with container runtimes, libvirt, and NetworkManager.
Quick Reference: iptables vs nftables
For a concise comparison of the key differences:
- Tools -- iptables required four separate utilities (iptables, ip6tables, arptables, ebtables); nftables uses a single nft command for all protocol families
- Tables/Chains -- iptables shipped with predefined tables and chains; nftables starts empty and you build what you need
- Rule evaluation -- iptables uses linear traversal; nftables supports hash-based set lookups and tree-based matching
- Actions per rule -- iptables allows one target per rule (-j); nftables supports multiple actions (counter, log, accept) in a single rule
- Atomicity -- iptables replaces rulesets with a download-modify-upload cycle; nftables applies changes via atomic Netlink transactions
- Counters -- iptables applies implicit counters to every rule; nftables makes counters opt-in per rule
- Sets -- iptables relies on the external ipset utility; nftables has native set, map, and verdict map support built in
- Debugging -- iptables logs TRACE events to the kernel ring buffer; nftables provides an event-based trace monitor with per-packet tracking IDs
- Stateful objects -- iptables has no concept of named counters or quotas; nftables supports named counters, quotas, and limits that can be referenced across rules and managed independently
Wrapping Up
nftables is not a speculative future replacement -- it is the present reality of Linux firewalling. Every major distribution has adopted it as the default. The iptables compatibility layer exists as a bridge, not a destination. If you are still writing raw iptables rules or maintaining legacy rulesets, the migration path is well-documented and the tooling is mature.
Start with the fundamentals: understand tables, chains, and the address family model. Then move to sets and verdict maps -- they are the features that deliver the largest practical improvement over iptables. Use the declarative file-based syntax for production rulesets, enable the nftables systemd service, and leverage the built-in tracing when things go wrong.
The official nftables wiki maintained by the Netfilter project remains the authoritative reference. Red Hat's RHEL 8 Networking Guide and the Arch Wiki nftables page are excellent supplemental resources with practical examples. The Netfilter project's migration guide covers the translate tools in detail.
The best firewall is the one you understand well enough to debug at 3 AM. nftables gives you the tools -- tracing, counters, atomic updates, structured rulesets -- to build that confidence.