If you have ever run iptables -t nat -L and received the cryptic error table `nat' is incompatible, use 'nft' tool, you have walked straight into one of the more frustrating problems in modern Linux administration. The error is not telling you that iptables is broken. It is telling you that two different firewall management interfaces are fighting over the same kernel resources, and iptables lost.

This problem has been growing since the Linux ecosystem began its transition from the legacy iptables framework to nftables. The transition was supposed to be seamless. In many environments, it was not. This article explains the architecture behind these compatibility failures, walks through the specific failure modes you are likely to encounter, and provides tested procedures for diagnosing and resolving each one.

The Architecture Behind the Conflict

To understand why iptables-nft breaks, you need to understand what it is and what it is not. The Linux kernel has a single packet-filtering framework called Netfilter. Both iptables and nftables are user-space tools that configure Netfilter hooks, but they do so through different kernel APIs.

The legacy iptables tool (now called iptables-legacy) uses the older x_tables kernel API. Rules created by iptables-legacy live in kernel data structures that are entirely separate from nftables. The newer nft tool uses the nf_tables kernel API, which offers a more flexible and efficient rule representation.

iptables-nft occupies an awkward middle ground. It accepts the familiar iptables command syntax, but internally it translates those commands into nf_tables rules. The reason it exists is pragmatic: decades of automation scripts, firewall management tools, container runtimes, and documentation assume the iptables command-line interface. Rather than force every tool to rewrite its firewall logic overnight, the Netfilter project created iptables-nft as a translation shim so that existing scripts could keep working while the kernel moved to the nf_tables backend underneath. The problem is that this translation layer and the native nft tool both write to the same kernel subsystem. When they agree on how tables should look, everything works. When they disagree -- because iptables-nft can only represent a subset of what nf_tables supports -- one of them breaks.

Note

On Debian 11+, Ubuntu 20.04+, RHEL 8+, Fedora 35+, and Arch Linux, the default iptables binary is a symlink to iptables-nft. When you type iptables, you are not running legacy iptables -- you are running the nf_tables translation layer. Confirm with iptables --version and look for (nf_tables) in the output. The Debian Wiki states that building new firewalls on top of iptables is now discouraged and that nftables is its replacement.

Identifying Your Active Backend

The single most important diagnostic step is determining which iptables backend is active on your system. Every troubleshooting path branches from this answer.

terminal
# Check the iptables version and backend
$ iptables --version
iptables v1.8.10 (nf_tables)   # <-- nft backend
iptables v1.8.10 (legacy)      # <-- legacy x_tables backend

# On Debian/Ubuntu: check the alternatives system
$ update-alternatives --display iptables

# On RHEL/AlmaLinux/Rocky: check alternatives
$ alternatives --display iptables
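Across a fleet, this classification is easy to script. The backend_of helper below is hypothetical (not part of any distribution's tooling) and simply pattern-matches the version string shown above:

```shell
# Classify the iptables backend from a version string such as
# "iptables v1.8.10 (nf_tables)" or "iptables v1.8.10 (legacy)".
backend_of() {
  case "$1" in
    *'(nf_tables)'*) echo nft ;;
    *'(legacy)'*)    echo legacy ;;
    *)               echo unknown ;;
  esac
}

backend_of "iptables v1.8.10 (nf_tables)"   # prints: nft
backend_of "iptables v1.8.10 (legacy)"      # prints: legacy
```

In real use you would feed it the live output, backend_of "$(iptables --version)". Classifying the string rather than inspecting binary paths works the same whether the distribution wires iptables through plain symlinks or through the alternatives system.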

Next, determine whether any rules exist in the legacy subsystem that should not be there:

terminal
# View the full nf_tables ruleset (what nft and iptables-nft both use)
# nft list ruleset

# Check for orphaned legacy rules (should be empty on modern systems)
# iptables-legacy -L -n 2>/dev/null
# iptables-legacy -t nat -L -n 2>/dev/null

If both commands return active rules, you have a split-brain firewall. Traffic is being evaluated against two separate rulesets, and rules in one subsystem are completely invisible to the other. This is one of the hardest problems to debug because both iptables -L and nft list ruleset will happily show you rules -- just not the same rules. The reason this happens is architectural: the kernel's x_tables and nf_tables subsystems both register callbacks on the same Netfilter hooks, so both rulesets are evaluated for every packet, but each user-space tool can only query its own subsystem. There is no single command that shows you the merged view of what the kernel is doing.
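The check can be scripted: an empty chain listing contains only Chain headers and the column-header line, so any other non-blank line is a real rule. A sketch using a hypothetical has_rules helper:

```shell
# Return success (0) if an iptables listing contains actual rules,
# not just empty chain headers and column headers.
has_rules() {
  printf '%s\n' "$1" | grep -vE '^(Chain|target|$)' | grep -q .
}

# In real use: legacy_out=$(iptables-legacy -L -n 2>/dev/null)
empty_listing='Chain INPUT (policy ACCEPT)
target     prot opt source               destination'

if has_rules "$empty_listing"; then
  echo "legacy rules present: possible split-brain firewall"
else
  echo "legacy subsystem clean"
fi
# prints: legacy subsystem clean
```

Run the same check against the nat table listing as well; a split-brain condition only exists when both subsystems carry active rules.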

Caution

Running iptables-legacy and iptables-nft on the same host creates a situation where the kernel evaluates packets against both the x_tables and nf_tables rulesets. A packet that passes one set of rules can be silently dropped by the other. Always standardize on a single backend system-wide.

Addressing Conflicting Advice Online

You may encounter forum posts and wiki pages claiming that running both iptables-legacy and iptables-nft simultaneously works fine. The Arch Wiki's nftables page has a disputed-accuracy flag on exactly this point. In isolated cases with non-overlapping rulesets, dual-backend operation can appear to function correctly. The problem is not that the system crashes -- it is that you get silent policy divergence, where traffic is filtered by two independent rulesets that neither tool can display together. Both iptables -L and nft list ruleset will report plausible-looking output while packets are being evaluated against a combined state that no single diagnostic command can show you. This guide takes the position that dual-backend operation is unreliable for production use because the failure mode is invisible, and the debugging cost when something eventually breaks is severe.

Why This Class of Bug Is Uniquely Hard to Diagnose

What makes iptables-nft compatibility failures particularly treacherous is that every individual tool involved reports success. If you run iptables -A INPUT -p tcp --dport 443 -j ACCEPT using iptables-nft and get no error, the rule was written. If you then run nft add set ip nat blocklist { type ipv4_addr \; } and get no error, the set was created. Neither command failed. Both showed you exactly what you expected. And yet iptables -t nat -L now returns an incompatible table error and your firewall is broken.

This is structurally different from most Unix configuration bugs, where a misconfiguration produces an error at the point of the mistake. Here, the mistake is a combination of two correct operations applied to the same shared kernel object by two tools with incompatible assumptions about that object's shape. The error surfaces much later -- often not until a service restarts, a rule reload occurs, or someone runs a diagnostic command they have not run recently.

A second layer of difficulty is tool output asymmetry. iptables -L and nft list ruleset report on different namespaces. When you are troubleshooting a firewall problem and run iptables -L, you see one picture. When you run nft list ruleset, you see a different one. Neither is lying to you, but neither is giving you the complete picture either. A split-brain firewall can produce a situation where every individual diagnostic command returns plausible-looking output while packets are being evaluated against a ruleset that no single tool can show you in its entirety.

The third layer is timing. On a system where Docker, libvirt, and a custom nftables policy all manage firewall rules, the state of the ruleset after a reboot depends on the order in which services start. Docker's ExecStartPost may fire before your custom nftables service has loaded its configuration. An nftables reload that issues flush ruleset will wipe Docker's chains, which Docker does not automatically restore until its next network operation. Debugging from a snapshot of the current state cannot reveal which sequence of events produced it.

Debugging Heuristic

When a firewall problem appears intermittent or only reproduces after a restart, suspect service start-order rather than a rule logic error. Check systemctl list-dependencies --reverse nftables and systemctl list-dependencies --reverse docker to understand which services depend on which, and whether firewall initialization is guaranteed to complete before any service that modifies rules.

The Incompatible Table Error

The error message iptables v1.8.x (nf_tables): table `nat' is incompatible, use 'nft' tool is the single most reported iptables-nft compatibility failure. It appears when iptables-nft attempts to read a table in the nf_tables backend that was created or modified by the native nft tool using features iptables-nft cannot parse.

When iptables-nft loads a table, it performs sanity checks on every chain and rule. If it encounters native nftables expressions, unexpected chain types, set definitions, or other structures that have no iptables equivalent, it aborts with the incompatible error rather than risk misinterpreting the ruleset.

This commonly happens in three scenarios. First, when a service like Docker or mailcow creates nf_tables rules using the native nft tool while other services on the same host expect to manage the same table through iptables-nft. Second, when an administrator manually adds native nftables rules to a table that iptables-nft considers its own. Third, after a distribution upgrade that changes the default firewall backend without migrating existing rules.

reproducing the error
# This works fine -- iptables-nft creates a table it understands
# iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
...

# Now add a native nftables set to the nat table
# nft add set ip nat blocklist { type ipv4_addr \; }

# iptables-nft can no longer parse the table
# iptables -t nat -L -n
iptables v1.8.10 (nf_tables): table `nat' is incompatible, use 'nft' tool.

Resolving the Incompatible Table Error

The resolution depends on which tool you need to keep using. If you need iptables-nft to manage the table, flush the conflicting native nftables additions and let iptables-nft recreate its structures cleanly. If you can move to native nftables, stop using iptables-nft for that table entirely.

recovery procedure
# Option A: Restore iptables-nft ownership of the table
# Back up first
# nft list ruleset > /root/nft-backup.conf

# Flush the problematic table
# nft flush table ip nat
# nft delete table ip nat

# Let iptables-nft recreate it
# iptables -t nat -L -n

# Restart services that insert NAT rules
# systemctl restart docker

# Option B: Move entirely to native nft for this table
# Stop using iptables commands for nat rules
# Rewrite any iptables NAT rules in nft syntax
# nft list table ip nat

Warning

Running nft flush ruleset will wipe everything in nf_tables, including rules managed by Docker, libvirt, fail2ban, and any other service. The reason this is so dangerous is that none of these services monitor for external rule deletion -- Docker will not automatically recreate its chains until the next network operation, libvirt will not restore NAT rules until a virtual network is restarted, and fail2ban will lose its inet f2b-table entirely (meaning all active bans silently disappear with no log entry). Prefer flushing individual tables with nft flush table ip nat or nft delete table ip nat rather than the full ruleset. If you must flush the entire ruleset, plan to restart every service that manages firewall rules immediately afterward.

Docker and Container Runtime Conflicts

Docker is one of the largest sources of iptables-nft compatibility problems. Docker was built around iptables semantics and automatically inserts NAT and filter rules to manage container networking. On systems where iptables-nft is the default, Docker's iptables commands get translated to nf_tables rules. This generally works -- until something else on the host also manipulates the same nf_tables tables using native nft syntax.

The conflict becomes more severe in containerized environments where the container and the host may be running different iptables backends. A container image based on Alpine Linux, for example, defaults to iptables-legacy. Rules from the container get injected into the host's legacy x_tables subsystem, but the host kernel's active nf_tables FORWARD chain -- with its DROP policy -- ignores those legacy rules entirely. The rules appear to execute successfully inside the container, but they have no effect on traffic at the host level. This is an active real-world problem on Debian 13 (Trixie) and similar modern hosts, as confirmed by reports from both the wg-easy project and the mailcow project.

diagnosing Docker backend mismatches
# Check what backend the HOST uses
$ iptables --version
iptables v1.8.10 (nf_tables)

# Check what backend a CONTAINER uses
$ docker exec my-container iptables --version
iptables v1.8.10 (legacy)  # <-- mismatch!

# Look for the warning about legacy tables on the host
$ iptables -L FORWARD -nv
Warning: iptables-legacy tables present, use iptables-legacy to see them

"In nftables, an 'accept' rule is not final" -- it only terminates the current base chain. Other base chains at later priorities may still drop the packet. -- Docker documentation: Docker with nftables

Docker Engine v29, released in November 2025, introduced experimental native nftables support as an alternative to the iptables compatibility layer. We use the word "experimental" deliberately -- Docker's own release notes and documentation explicitly label this feature as experimental, noting that "configuration options, behavior and implementation may all change in future releases." Some community coverage describes it more casually as simply a new feature, but the experimental qualifier matters for production planning. To enable it, set "firewall-backend": "nftables" in /etc/docker/daemon.json and restart the daemon. When using the native nftables backend, Docker creates two tables -- ip docker-bridges and ip6 docker-bridges -- and manages them directly without translating through iptables-nft at all. The Docker documentation states that in a future release, nftables will become the default firewall backend and iptables support will be deprecated.
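The opt-in itself is a one-key change to the daemon configuration. A minimal /etc/docker/daemon.json sketch (if the file already contains other settings, add the key rather than replacing the file):

```json
{
  "firewall-backend": "nftables"
}
```

Restart the daemon afterward and confirm the switch with nft list tables, which should now include ip docker-bridges and ip6 docker-bridges. Because the feature is experimental, keep a copy of the previous daemon.json until container connectivity has been verified.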

Overlay Networks Still Use iptables

Docker's nftables migration is not yet complete. The Docker documentation explicitly states that firewall rules for overlay networks have not yet been migrated from iptables. This is the reason Docker Swarm mode cannot be enabled alongside the nftables backend -- Swarm relies on overlay networking. If you run multi-host overlay networks outside Swarm (for example, with manual docker network create --driver overlay), those firewall rules will still be created through the iptables path regardless of the firewall-backend setting. This means that even on a host configured for nftables, overlay network operations still depend on a functioning iptables-nft translation layer being available.

IP Forwarding Behavior Change

With its nftables backend, Docker will not enable IP forwarding on the host automatically. If forwarding is required but not already enabled, daemon startup or network creation will fail with an error. This is a change from iptables mode, where Docker would silently enable forwarding. After migrating to the nftables backend, verify that net.ipv4.ip_forward = 1 is set in /etc/sysctl.conf or via sysctl -w net.ipv4.ip_forward=1. On hosts with multiple network interfaces, also add nftables rules to block unwanted forwarding between non-Docker interfaces.
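To make the forwarding setting persistent, a sysctl drop-in is the conventional route (the filename here is arbitrary):

```
# /etc/sysctl.d/99-docker-forward.conf
net.ipv4.ip_forward = 1
```

Apply it without a reboot using sysctl --system, then confirm with sysctl net.ipv4.ip_forward before restarting the Docker daemon.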

Warning

The DOCKER-USER chain does not exist in Docker's native nftables implementation. If you have custom rules in DOCKER-USER, switching to the nftables backend without migrating those rules first will silently lose your custom policy. Additionally, the Docker documentation notes that when the daemon starts with nftables after having previously run with iptables, it will not remove the existing jump from the FORWARD chain to DOCKER-USER -- meaning stale DOCKER-USER rules may still fire until the host is rebooted. Always migrate DOCKER-USER rules to a separate nftables table before switching backends.

Pro Tip

In Docker's nftables mode, ACCEPT verdicts are not final. A packet accepted by Docker's base chain will still traverse your own base chains at later priorities, and any of those chains can drop it. This differs from iptables behavior, where an ACCEPT ended that table's processing of the packet on that hook. Always verify chain priorities with nft list chains to understand the actual evaluation order.

Overriding Docker's Drop Rule in nftables Mode

Because ACCEPT in Docker's base chains is non-final, a packet accepted by Docker can still be dropped by a higher-priority custom chain. According to the Docker documentation, the correct way to allow a packet that Docker would otherwise accept is to use a firewall mark: use --bridge-accept-fwmark=<value> when starting dockerd (or set it in /etc/docker/daemon.json as "bridge-accept-fwmark": <value>). Docker will then accept any packet carrying that mark. Add the mark in a chain that runs at a lower priority than Docker's chains -- for example, in a type filter hook forward priority filter - 1 chain -- so Docker sees the mark before evaluating its own rules.
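A sketch of that arrangement in nftables config-file syntax. The mark value, table name, and interface match below are all illustrative, not prescribed by Docker -- the fixed parts are the daemon.json key and the priority relationship:

```
# In /etc/docker/daemon.json (mark value illustrative):
#   "bridge-accept-fwmark": "0x29a"

# A base chain one step earlier than Docker's bridge chains:
table inet pre_docker {
  chain mark_allow {
    type filter hook forward priority filter - 1; policy accept;

    # Mark traffic Docker should always accept -- here, anything
    # arriving on a (hypothetical) management interface:
    iifname "mgmt0" meta mark set 0x29a
  }
}
```

Because this chain runs at priority filter - 1, the mark is already set by the time Docker's chains evaluate the packet.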

Chain Priority Collisions

In the legacy iptables model, chain evaluation order is essentially fixed. The built-in chains -- INPUT, FORWARD, OUTPUT, PREROUTING, POSTROUTING -- execute in a predetermined sequence, and user-defined chains are only reached via explicit jump or goto targets. There is no concept of parallel chains on the same hook running at different priorities.

nftables changes this entirely. Every base chain has a priority value (an integer), and multiple base chains can register on the same hook at different priorities. The kernel evaluates them in order from lowest (most negative) to highest. An accept verdict in one chain is not final -- the packet still traverses other chains at later priorities, and any of those chains can drop it. A drop verdict, however, is immediate and final. The reason nftables uses this design is to allow independent subsystems -- Docker, libvirt, a custom firewall, kube-proxy -- to each own their own table with their own chains, without needing to coordinate insertion points within a shared chain. In iptables, every tool had to fight over the same FORWARD chain, leading to ordering conflicts and fragile rule insertion logic. nftables' priority model solves that problem but introduces a new one: understanding that accept in one chain does not protect a packet from being dropped by another chain is essential, and getting this wrong is one of the leading causes of mysterious packet drops on modern Linux.

Why This Point Gets Debated Online

If you search for nftables accept behavior, you will find what appears to be contradictory documentation -- even within the official nftables wiki. The Quick Reference page states that accept will "stop the remaining rules evaluation," while the Configuring Chains page states that "an accept verdict isn't necessarily final." Both statements are technically correct, but they describe different scopes. The Quick Reference is describing behavior within a single chain: once a rule issues an accept, the remaining rules in that chain are skipped. The Configuring Chains page is describing behavior across base chains: the packet still traverses other base chains registered on the same hook at later priorities. The Debian nft(8) man page is the clearest authority on this, stating that "a packet is ultimately accepted if and only if no (matching) rule or base chain policy issues a drop verdict." This guide states it plainly because getting this wrong is the root cause of many Docker and libvirt firewall failures on modern Linux.

This creates a class of problems that is invisible from the iptables perspective. If Docker creates a FORWARD chain at priority -100 and your custom nftables table creates another FORWARD chain at priority 0, a packet accepted by Docker's chain will still be evaluated by your chain. If your chain has a DROP policy, the packet is dropped despite Docker's accept. From Docker's point of view, the packet was accepted. From the kernel's point of view, it was dropped by a completely separate chain that Docker knows nothing about.

inspecting chain priorities
# List all chains with their hooks and priorities
# nft list chains
table ip filter {
  chain INPUT {
    type filter hook input priority filter; policy accept;
  }
  chain FORWARD {
    type filter hook forward priority filter; policy drop;
  }
}
table inet my_firewall {
  chain forward {
    type filter hook forward priority 0; policy drop;
  }
}

# "priority filter" is an alias for priority 0
# Both chains above fire on forward at priority 0
# Chains sharing a priority all run; their relative order is undefined

To enforce rules before Docker's chains, create your chain at a lower priority value. To enforce rules after Docker's chains, use a higher priority value. Always check what priorities are already in use with nft list chains before adding new base chains.

creating a pre-Docker filter chain
# Create a table for your custom rules
# nft add table inet my_firewall

# Create a forward chain that runs BEFORE Docker (priority -200)
# nft add chain inet my_firewall forward_early \
    '{ type filter hook forward priority -200; policy accept; }'

# Add rules to this chain
# nft add rule inet my_firewall forward_early \
    tcp dport 3306 drop

libvirt and Virtual Machine Networking

libvirt manages virtual network bridges and relies on iptables to set up NAT and forwarding rules for guest VMs. On systems where nftables is the default, libvirt typically uses the iptables-nft backend. This works as long as nothing else modifies the nat or filter tables using native nft syntax.

The problem surfaces when another service -- such as a container runtime or a mail server with its own netfilter component -- uses native nft to modify the same tables. Once the nat table contains nftables-native structures that iptables-nft cannot parse, libvirt can no longer start virtual networks. The error looks like this:

libvirt network failure
$ sudo virsh net-start default
error: Failed to start network default
error: internal error: Failed to apply firewall rules
  /usr/sbin/iptables -w --table filter --list-rules:
  iptables v1.8.7 (nf_tables): table `filter' is incompatible, use 'nft' tool.

The fix follows the same pattern as the general incompatible table resolution: identify which tool modified the table with native nft expressions, either flush those modifications or migrate the service to native nftables, and restart libvirtd. Starting with Fedora 41 and targeting RHEL 10, libvirt's native nftables backend became the preferred default for virtual network management. When active, libvirt detects which backend tools are available and prefers nftables if both are installed. The setting can be overridden in /etc/libvirt/network.conf with firewall_backend = "iptables" or firewall_backend = "nftables". When the nftables backend is active, libvirt creates a table named libvirt_network instead of injecting rules through iptables, eliminating the compatibility conflict for virtual networks entirely.

Important Caveat

Libvirt's nftables backend change applies only to the virtual network driver. The nwfilter functionality -- which provides per-VM traffic filtering rules -- continues to use iptables/ebtables as of Fedora 41. According to the Fedora project change documentation, nwfilter will switch to nftables in a future release. Until then, hosts using nwfilter rules will still produce iptables entries in the nf_tables backend via the legacy compatibility layer. We call this out specifically because the headline of the Fedora change proposal -- "Libvirt Virtual Network NFTables" -- reads as though libvirt has completed a full migration to nftables. It has not. Only the virtual network NAT and forwarding rules have moved. The per-VM filtering layer (nwfilter) remains on the old stack, which means the incompatible table problem can still surface on any host that uses nwfilter rules alongside native nft commands.

For systems still on RHEL 9 or older libvirt releases, the iptables-nft backend remains the only option, which means avoiding native nftables modifications to the nat and filter tables.

Docker + libvirt nftables Interaction

When libvirt uses its nftables backend, its FORWARD rules are placed in a separate libvirt_network table from Docker's. Because nftables requires a packet to be allowed by all top-level tables, Docker's DROP rules will block libvirt guest traffic even though libvirt's own table allows it. This is the reverse of the iptables behavior, where libvirt's rules and Docker's rules shared the same table and libvirt's FORWARD rules would override Docker's DENY policy. The Fedora change documentation notes that libvirt is only compatible with firewalld when using the nftables backend -- other firewall management tools require workarounds. If you run both Docker and libvirt VMs on the same host, test guest network connectivity thoroughly after switching either service to a native nftables backend.

RHEL 9 Deprecation and the Path Forward

Red Hat deprecated the iptables-nft package in RHEL 9. When the iptables, ip6tables, ipset, ebtables, arptables, or nft_compat kernel modules load, they now log a warning stating that the driver is not recommended for new deployments and will likely be removed in the next major release. RHEL 10 and CentOS Stream 10 have followed through on this: the iptables kernel modules were moved out of the default kernel package and into kernel-modules-extra, meaning they are no longer available on a standard RHEL 10 installation. While an administrator can explicitly install kernel-modules-extra to restore the ip_tables module, Red Hat's intent is clear: iptables is not part of the default networking stack and should not be assumed present. Tools like Docker's rootless setup script that probe for ip_tables will fail on a stock RHEL 10 system unless the administrator has taken this extra step. The Kubernetes Enhancement Proposal for the nftables kube-proxy backend notes that Red Hat's deprecation applies to iptables-nft as well as iptables-legacy, meaning the compatibility shim is also on borrowed time on RHEL-family systems.

The KEP-3866 proposal notes that Red Hat declared iptables deprecated in RHEL 9, with the deprecation explicitly covering iptables-nft as well as iptables-legacy. -- Kubernetes Enhancement Proposal KEP-3866, sig-network/3866-nftables-proxy

This deprecation has practical consequences for system updates. On Rocky Linux and other RHEL derivatives, upgrading iptables-libs can fail if the system still has iptables-legacy installed, because the new library version is incompatible with the legacy package. The upgrade requires removing iptables-legacy and transitioning to iptables-nft before the library update can proceed.

RHEL/Rocky Linux upgrade fix
# Back up existing rules before any changes
# iptables-save > /root/iptables.rules.bak
# ip6tables-save > /root/ip6tables.rules.bak

# Proceed with the update, allowing conflicting packages to be replaced
# dnf update --allowerasing

# If iptables-legacy was not automatically removed
# dnf remove iptables-legacy

# Install iptables-nft utilities and service scripts
# dnf install iptables-nft-services

# Restore backed-up rules (may be saved as .rpmsave files)
# cd /etc/sysconfig
# mv iptables.rpmsave iptables
# mv ip6tables.rpmsave ip6tables

# Enable and start the service
# systemctl enable --now iptables
# systemctl enable --now ip6tables

Debian's Trixie release (Debian 13) explicitly states that the iptables/xtables framework has been replaced by nftables and that administrators should consider migrating. The iptables package is still available, but it ships with both the legacy and nft variants, and the alternatives system defaults to iptables-nft.

Kubernetes and CNI Plugin Interactions

Kubernetes has historically relied on iptables for kube-proxy service routing. On systems using the iptables-nft backend, kube-proxy's iptables commands are translated into nf_tables rules. This generally works, but the performance characteristics differ significantly. In large clusters with thousands of services, the linear rule-matching behavior of iptables (O(n) per packet) becomes a bottleneck. According to the Kubernetes project blog, in clusters with 5,000 services, nftables median latency is roughly equivalent to the best-case (p01) latency for iptables mode, a gap that widens further at 30,000 services.

"Kubernetes 1.33 introduced nftables as a fully supported kube-proxy mode" -- described as the modern replacement for iptables, offering a more efficient rule model on newer Linux kernels. -- Azure Kubernetes Service Engineering Blog, November 2025

kube-proxy's native nftables mode requires Linux kernel 5.13 or newer and nft version 1.0.0 or later. The Kubernetes documentation notes that using an older version of the nft binary risks interference between kube-proxy's nftables usage and other nftables rules on the system. The mode entered beta in Kubernetes 1.31 and reached general availability in Kubernetes 1.33, released April 23, 2025. Even at GA, iptables remains the default for compatibility reasons -- you must explicitly pass --proxy-mode nftables to opt in.

You will find community coverage of Kubernetes 1.33 that says nftables "replaces" iptables. This was corrected in the official sig-release discussion by a Kubernetes maintainer, who noted that nftables cannot be described as replacing iptables since it is not the default for either new or existing users. This guide uses the more precise language: nftables is GA and recommended, but iptables remains the default and is not being removed. As of Kubernetes 1.35, the Kubernetes documentation notes the nftables mode is still relatively new and may not be compatible with all network plugins.

A separate active issue affects very large clusters: a soft lockup bug in the nft binary has been identified in high-endpoint environments (5,000+ services with 250,000+ endpoints), where the nft process triggers kernel soft lockup warnings lasting hundreds of seconds during rule synchronization. In the reported cases, individual sync cycles took over two minutes on kernel 6.12 and up to thirteen minutes on kernel 6.1, with worst-case runs exceeding twenty minutes. During these sync cycles, service routing rules are stale, meaning new endpoints are not reachable and removed endpoints continue to receive traffic. This issue is specific to very high scale environments and does not affect smaller clusters, but it is worth evaluating before enabling nftables mode in any cluster that exceeds a few thousand services.
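On kubeadm-managed clusters, the opt-in usually goes through the kube-proxy ConfigMap rather than a raw command-line flag. The relevant KubeProxyConfiguration stanza is minimal:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "nftables"
```

After editing the ConfigMap, restart the kube-proxy pods (for example, kubectl -n kube-system rollout restart daemonset kube-proxy); kube-proxy does not pick up a mode change while running.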

CNI plugins have varying levels of nftables compatibility. Calico supports both backends via the iptablesBackend: NFT field in its FelixConfiguration. Cilium sidesteps the issue entirely by using eBPF for data-plane operations. Flannel still relies on iptables rules. A particularly insidious problem occurs when different components in the Kubernetes stack use different backends -- Istio's CNI plugin has been observed injecting iptables rules into the legacy chain even though all other rules are in the nft chain. The rules still execute because the kernel evaluates both subsystems, but debugging packet flow becomes significantly harder.
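For Calico specifically, the backend selection lives in the cluster-wide FelixConfiguration resource. A sketch of the relevant stanza, applied with calicoctl:

```yaml
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  iptablesBackend: NFT
```

Setting this explicitly rather than relying on autodetection removes one source of backend ambiguity when the host and the CNI disagree.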

Kubernetes 1.35 Update

Kubernetes 1.35, released December 17, 2025, deprecated the ipvs kube-proxy mode. The official documentation describes nftables as the recommended replacement for ipvs, with iptables as an alternative on older Linux systems that cannot run nftables. Amazon EKS documentation states that ipvs will be removed in Kubernetes 1.36, giving operators a concrete window to migrate. The reason ipvs is being deprecated is not a performance deficiency -- ipvs was historically faster than iptables -- but a maintenance one: the Kubernetes networking team lacked developers with deep ipvs expertise, and maintaining feature parity across three separate backends (iptables, ipvs, and nftables) had become unsustainable. This means all three kube-proxy modes are now converging toward nftables as the long-term standard. If you are currently running ipvs mode, begin planning a migration to nftables rather than falling back to iptables.

Systematic Debugging Workflow

When packets are being dropped or ports appear open but unreachable, resist the urge to start editing rules. First establish which tool owns the firewall and where packets are being evaluated. The following workflow works through both questions in order:

diagnostic workflow
# 1. Which iptables backend is active?
$ iptables --version

# 2. Is the nftables service running?
$ systemctl is-active nftables

# 3. Are firewalld or UFW running?
$ systemctl is-active firewalld
$ ufw status 2>&1 | head -n1

# 4. View the COMPLETE nf_tables ruleset
# nft list ruleset

# 5. Check for orphaned legacy rules
# iptables-legacy -L -n 2>/dev/null
# iptables-legacy -t nat -L -n 2>/dev/null

# 6. List chains with hooks and priorities
# nft list chains

# 7. Check packet counters (are rules actually matching?)
# nft list ruleset -a  # -a shows handle numbers

# 8. Verify which services are listening
$ ss -tulpn

# 9. Enable packet tracing for a specific flow (add then remove when done)
# nft add table ip temp-trace
# nft add chain ip temp-trace pre \
    '{ type filter hook prerouting priority raw - 1; }'
# nft add rule ip temp-trace pre tcp dport 443 meta nftrace set 1

# In another terminal, watch the trace
# nft monitor trace

# Clean up when done
# nft delete table ip temp-trace

After enabling tracing, nft monitor trace prints trace events for every packet that matches the tracing rule, showing each table, chain, and rule the packet traverses, the final verdict, and the interface names. Remember to delete the trace table when you are finished -- stale trace tables generate significant log volume on busy systems and add per-packet overhead.

Pro Tip

Counters are truth, but only for the rule you are matching. If your accept rule shows zero counters but tcpdump sees traffic, the traffic is either being handled in a different chain (check priorities), dropped earlier by a higher-priority chain, or never reaching that hook at all (e.g., traffic is forwarded rather than destined locally, or vice versa).

Migrating to Native nftables

The cleanest long-term solution to iptables-nft compatibility problems is to stop using the compatibility layer entirely. A full migration to native nftables eliminates the entire class of incompatible table errors, removes the potential for backend mismatches, and gives you access to nftables features like sets, maps, meters, and concatenated matches that have no iptables equivalent.
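To make that payoff concrete, here is a small illustrative native ruleset (the table, set, and element values are our own examples) using a named set with intervals and a concatenated match -- constructs with no one-to-one iptables equivalent:

```nft
table inet demo {
    set blocklist {
        type ipv4_addr
        flags interval
        elements = { 192.0.2.0/24, 198.51.100.7 }
    }
    chain input {
        type filter hook input priority filter; policy accept;
        # one set lookup replaces a long run of per-address iptables rules
        ip saddr @blocklist drop
        # concatenation: match (destination address, port) pairs in one rule
        ip daddr . tcp dport { 10.0.0.1 . 80, 10.0.0.2 . 443 } accept
    }
}
```

Load a file like this with nft -f and inspect the result with nft list table inet demo.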

The iptables-translate tool converts individual iptables rules into nft syntax. The iptables-restore-translate tool converts a complete iptables-save dump. Neither produces perfect output -- some extensions and targets have no direct translation -- but both provide a solid starting point.

The translation is imperfect because iptables and nftables have fundamentally different data models. iptables uses a fixed set of tables and chains with implicit semantics (the nat table always does NAT, the filter table always filters), while nftables lets you name tables and chains freely and assign their behavior through explicit type, hook, and priority declarations. The translation tools produce syntactically valid nft commands, but they cannot restructure the ruleset to take advantage of nftables features such as sets and verdict maps. Review the output for rules that were emitted as comments rather than translated -- these indicate match extensions with no nft equivalent that must be rewritten by hand.

translating existing rules
# Translate a single rule
$ iptables-translate -A INPUT -p tcp --dport 22 -j ACCEPT
nft add rule ip filter INPUT tcp dport 22 counter accept

# Translate an entire ruleset
# iptables-save | iptables-restore-translate > /root/nft-migrated.conf

# Review and edit the output, then load
# nft -f /root/nft-migrated.conf

# Persist on boot
# nft list ruleset > /etc/nftables.conf
# systemctl enable nftables

For services like fail2ban that traditionally manage iptables rules, configure them to use the nftables backend. Fail2ban 0.11.2 and later ship with nftables ban actions -- nftables-multiport and nftables-allports -- that manage an nftables set for banned IPs rather than inserting individual iptables rules. Set banaction = nftables-multiport in /etc/fail2ban/jail.local and fail2ban will create and manage a set named after the jail (e.g., f2b-sshd) within an inet f2b-table nftables table. Verify the result with nft list table inet f2b-table.
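The change itself is small. A sketch of the relevant /etc/fail2ban/jail.local fragment, assuming fail2ban 0.11.2 or later:

```ini
# /etc/fail2ban/jail.local -- switch ban actions to the nftables backend
[DEFAULT]
banaction = nftables-multiport
banaction_allports = nftables-allports
```

After restarting fail2ban, banned addresses should appear in the per-jail sets (e.g., f2b-sshd) inside the inet f2b-table table rather than as individual iptables rules.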

Wrapping Up

The iptables-nft compatibility layer was designed as a bridge between the old world and the new one. Like all bridges, it works best when traffic flows in one direction. The moment two tools start modifying the same nf_tables structures with different assumptions about what those structures should look like, the bridge collapses.

The core diagnostic discipline is straightforward: always verify which backend is active, always check for rules in both the nf_tables and legacy x_tables subsystems, and never assume that what iptables -L shows you is the complete picture. On modern Linux, nft list ruleset is the only command that gives you ground truth about what the kernel is actually doing with your packets.

The direction of the ecosystem has reached concrete milestones. RHEL 9 deprecated iptables-nft with kernel module warnings on every load; RHEL 10 moved the iptables kernel modules out of the default kernel package and into kernel-modules-extra, making iptables unavailable on stock installations. The Debian Wiki now states that building new firewalls on top of iptables is discouraged. Docker Engine v29 (released November 2025) ships an experimental native nftables backend and has documented a deprecation path for iptables support.

Kubernetes kube-proxy's nftables mode reached beta in 1.31 and graduated to GA in Kubernetes 1.33 (April 2025) -- though iptables remains the default for compatibility, and as of 1.35 the Kubernetes documentation characterizes the nftables mode as still relatively new. Kubernetes 1.35 also deprecated the ipvs kube-proxy mode, with removal planned for 1.36, consolidating the project's networking future around nftables. libvirt switched to nftables as the preferred default backend for virtual networks starting with Fedora 41 and RHEL 10, with rules now isolated in the libvirt_network table.

Every major system-level tool that once kept iptables alive has either migrated or committed to a migration date. The compatibility layer served its purpose, but the sooner your infrastructure standardizes on native nftables, the fewer ghost rules you will chase at 3 AM.


How to Diagnose and Fix iptables-nft Compatibility Problems

Step 1: Identify Your Active iptables Backend

Run iptables --version and check the text in parentheses. If it reads nf_tables, the system uses iptables-nft. If it reads legacy, the system uses iptables-legacy. On Debian or Ubuntu, also run update-alternatives --display iptables to see which binary is active. On RHEL-based distributions, check with alternatives --display iptables.
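If you need this check in a script, the backend name can be pulled out of the version string with plain shell parameter expansion. The version strings below are canned samples so the logic is self-contained; on a live system, substitute ver=$(iptables --version):

```shell
# Canned samples standing in for `iptables --version` output.
for ver in 'iptables v1.8.9 (nf_tables)' 'iptables v1.8.9 (legacy)'; do
  backend=${ver##*\(}      # strip everything up to the last "("
  backend=${backend%\)}    # strip the trailing ")"
  echo "$backend"
done
```

This prints nf_tables for the first sample and legacy for the second.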

Step 2: Detect Conflicting Rulesets from Both Backends

Run nft list ruleset to see everything in the nf_tables backend. Then run iptables-legacy -L -n to check for rules in the older x_tables backend. If both commands return active rules, you have a split-brain firewall. Traffic is being evaluated against two separate rulesets, and rules in one subsystem are invisible to the other.
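A scripted version of this split-brain check might look like the following sketch. The command outputs are stubbed with sample strings so the logic is visible; on a live host, replace the stubs with the real commands shown in the comments (iptables-legacy -S prints -P policy lines even on an empty ruleset, so those are filtered out):

```shell
# Stubs; on a real system use:
#   nft_out=$(nft list ruleset)
#   legacy_out=$(iptables-legacy -S 2>/dev/null | grep -v '^-P')
nft_out='table ip filter'
legacy_out='-A INPUT -p tcp --dport 22 -j ACCEPT'

verdict="single backend"
if [ -n "$nft_out" ] && [ -n "$legacy_out" ]; then
  verdict="split-brain: rules in both nf_tables and x_tables"
fi
echo "$verdict"
```

With the sample inputs above, both variables are non-empty, so the check reports a split-brain firewall.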

Step 3: Consolidate to a Single Backend and Flush Conflicts

Back up both rulesets using iptables-save and nft list ruleset. Flush the backend you are not keeping. If standardizing on iptables-nft, flush legacy rules with iptables-legacy -F across all tables. If standardizing on native nftables, flush the iptables-nft compatibility tables with nft flush ruleset and rewrite your policy using nft syntax. Confirm only one backend has active rules before restoring services.

Step 4: Verify Services and Containers Use the Correct Backend

Restart Docker, libvirt, fail2ban, and any other service that manages firewall rules. Check that each service creates its rules in the expected backend by running nft list ruleset and confirming the tables appear. For Docker Engine v29 and later, verify the firewall-backend value in /etc/docker/daemon.json. For Docker containers, verify the in-container iptables binary matches the host backend by running iptables --version inside the container. A mismatch means container rules are going into the wrong kernel subsystem.
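For reference, the relevant /etc/docker/daemon.json fragment on Docker Engine v29 and later looks like this (the nftables backend is experimental at v29, so treat it as opt-in):

```json
{
    "firewall-backend": "nftables"
}
```

After restarting the Docker daemon, confirm with nft list tables that the ip docker-bridges and ip6 docker-bridges tables appear.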

Frequently Asked Questions

What does the error table nat is incompatible, use nft tool mean?

This error means a table in the nf_tables kernel backend was created or modified by the native nft tool using features that iptables-nft cannot parse. When iptables-nft encounters expressions, chain configurations, or set types it does not understand, it refuses to read the table and prints the incompatible error. The fix is to manage that table exclusively with the nft command, or flush the conflicting rules with nft flush ruleset and let iptables-nft recreate its own tables cleanly.

Can iptables-legacy and iptables-nft run on the same system safely?

Running iptables-legacy and iptables-nft on the same host is technically possible but strongly discouraged. They write rules to different kernel subsystems -- iptables-legacy uses the older x_tables API while iptables-nft uses the nf_tables API. Rules in one subsystem are invisible to the other, which leads to silent policy divergence where traffic is evaluated against two separate rulesets. The safest approach is to standardize on a single backend system-wide and ensure every tool, container, and service uses the same one. Note: you will find conflicting advice on this point online. The Arch Wiki's nftables page has a disputed-accuracy flag on whether dual-backend operation is workable. In isolated cases with non-overlapping rulesets it can appear to function, but the failure mode is invisible -- both tools report success while packets are evaluated against a combined state that no single command can display.

How do I check which iptables backend my system is using?

Run iptables --version and look at the text in parentheses. If the output shows nf_tables, you are running iptables-nft, which translates iptables commands into nftables rules. If it shows legacy, you are running iptables-legacy, which uses the older x_tables kernel API. On Debian and Ubuntu, you can also run update-alternatives --display iptables to see which binary the iptables symlink points to and which alternatives are installed.

What happens to the DOCKER-USER chain when I switch Docker to the native nftables backend?

The DOCKER-USER chain does not exist in Docker's native nftables implementation. When Docker Engine v29 or later is configured with firewall-backend: nftables, Docker creates its own nftables tables (ip docker-bridges and ip6 docker-bridges) and manages them directly. Any custom rules you had in DOCKER-USER need to be rewritten as native nftables rules in a separate table, using base chain priority to control evaluation order relative to Docker's chains. Accept verdicts in nftables are not final -- a packet accepted by one base chain still traverses other base chains at later priorities, so DROP rules in your table take effect regardless of Docker's accept verdicts.
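As an illustration, a rule that previously lived in DOCKER-USER could be rewritten in a standalone table like this sketch (the table and chain names are our own choice, not Docker's):

```nft
# Our own table, evaluated independently of Docker's docker-bridges tables.
table ip admin_filter {
    chain forward {
        # Runs before the default filter priority (0). For drop rules the
        # exact priority matters less: a drop is final even if another
        # base chain at a different priority accepted the packet.
        type filter hook forward priority filter - 10; policy accept;
        ip saddr 203.0.113.0/24 counter drop comment "blocked regardless of Docker"
    }
}
```

The counter statement lets you verify with nft list ruleset that the rule is actually matching traffic.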

Does libvirt's switch to nftables on Fedora 41 affect nwfilter rules?

No. The libvirt nftables backend change introduced in Fedora 41 applies only to the virtual network driver, which manages NAT and forwarding rules for virtual bridges. The nwfilter functionality, which provides per-VM traffic filtering, continues to use iptables and ebtables. According to the Fedora change documentation, nwfilter will be migrated to nftables in a future release. Until then, any host using libvirt nwfilter rules will still produce iptables-nft entries alongside nftables rules, which means the incompatible table problem can still surface if native nft commands are used to modify the same tables.