For as long as Docker has existed on Linux, it has owned your firewall in a way that felt both unavoidable and vaguely hostile. Spin up a container, publish a port, and Docker quietly reaches into your netfilter stack and adds rules you never wrote. Those rules bypass ufw. They can survive an iptables -F. And for years, anyone who wanted to run nftables natively had to choose between Docker working and their firewall making sense.

That changed with Docker Engine 29.0.0, released on November 10, 2025 (blog announcement November 11). For the first time, Docker ships with an opt-in --firewall-backend=nftables flag that writes native nftables rules instead of routing through the iptables compatibility shim. It is marked experimental, overlay networks and Swarm are not yet supported, and Docker itself says behavior may change. The nftables backend remains experimental through the current latest release, 29.3.0 (March 5, 2026). But the architecture is real, the rules it creates are inspectable, and the long-term direction is clear: the Docker Engineering Blog stated explicitly that nftables will become the default firewall backend in a future release, and iptables support will be deprecated.

This article covers what nftables actually is at the kernel level, how Docker's network model has always worked, where the iptables-nft shim creates problems, and how to configure Docker 29 with the native nftables backend correctly -- including the DOCKER-USER migration that the official docs mention but do not fully explain.

What nftables Actually Is

nftables is not a drop-in replacement for iptables with a different syntax. It is a fundamentally different way of expressing packet filtering inside the Linux kernel. Pablo Neira Ayuso of the Netfilter Core Team submitted the nftables kernel patch in October 2013; it merged into the mainline kernel on January 19, 2014 with Linux 3.13. The Wikipedia article on nftables describes it accurately as adding a simple virtual machine to the kernel capable of executing filtering bytecode: when you write an nft rule, the userspace nft tool compiles it to bytecode, much as a compiler produces machine code, and the kernel runs that bytecode for every matching packet. The compiled ruleset is loaded via a Netlink socket (the AF_NETLINK family used for userspace-to-kernel communication), rather than by sending binary structures over setsockopt() as iptables did. Two consequences follow. Because rules live closer to the kernel's execution model, nftables can do atomic rule updates and native sets without per-rule syscall overhead. And because Netlink supports atomic batch operations, an entire ruleset can be installed or replaced in a single transaction, which is how nftables avoids the "rules flash visible mid-update" problem.

The official nftables wiki identifies several structural differences that matter directly for Docker administrators. Where iptables has five hard-coded tables (raw, filter, nat, mangle, security) that are always registered whether you use them or not, nftables has no pre-defined tables at all -- you create only what you need. Each base chain has a configurable hook priority, expressed as a signed integer, which determines when in the netfilter traversal path that chain fires. This is the mechanism Docker 29 uses to interleave its rules with operator-defined rules cleanly, without requiring a synthetic DOCKER-USER chain.

The official nftables wiki notes that unlike iptables, nftables registers no pre-defined tables or base chains at all -- only the structures you explicitly create are loaded into the kernel. The iptables model, by contrast, always registered its fixed set of tables whether they were used or not, and the wiki reports that unused base chains alone were observed to carry a measurable performance cost. This design difference is directly relevant to Docker administrators: when Docker 29 creates its own nftables tables, it owns exactly those tables and nothing else, with no ambient registration overhead from chains that serve no role in that deployment.

-- nftables wiki: Main differences with iptables

The other structural advantage is sets and maps. In iptables, matching against a large list of IP addresses means either a long chain of individual rules traversed linearly, or installing the external ipset kernel module. nftables builds set and map support natively into its core. A Red Hat benchmark published on their developer blog measured this directly: with small rule counts, nftables performs comparably to iptables; once you start using sets, nftables throughput stays constant regardless of element count while iptables degrades linearly.

On the iptables-nft Shim

Most modern distributions (Debian Bookworm, Ubuntu 22.04+, Fedora 32+) ship iptables-nft as the default iptables binary. When you run iptables -A FORWARD -j ACCEPT, it does not write legacy netfilter rules -- it translates the command into nftables bytecode and installs it in a compatibility table. You can verify this with iptables --version: output containing (nf_tables) means you are on the shim.

[Diagram: Netfilter hook traversal. A packet arriving on the NIC hits PREROUTING (DNAT, nat hook), then the routing decision. Forwarded traffic enters the FORWARD hook, where Docker's docker-bridges forward chains (Docker-owned) and any operator-owned custom-table forward chain both fire, then POSTROUTING (masquerade) and out the NIC. Traffic addressed to the host takes the INPUT path to a host process instead, which published-port traffic never traverses.]
Stop and think

If iptables has always been a compatibility shim on modern distros, why did Docker's rules disappear when you restarted the nftables service -- and why did restarting Docker fix it?

When systemctl restart nftables runs, it executes nft flush ruleset -- wiping every rule in the nftables kernel state, including the translated iptables-nft rules Docker had installed. Docker's daemon doesn't watch for that event. Only when Docker restarts does it re-inspect the netfilter state and reinstall its rules. With the native nftables backend, Docker manages its own tables, but the same flush-and-reload problem can still occur unless Docker's rules are preserved across service restarts -- which the native backend addresses structurally by giving Docker clear table ownership that nftables service lifecycle doesn't automatically blow away.

How Docker Has Always Used the Firewall

Docker's container networking relies on Linux network namespaces, virtual Ethernet (veth) pairs, and bridge devices. A network namespace gives a process its own isolated network stack -- its own interfaces, routing tables, and firewall rules, independent of the host's. Every Docker container gets one, which is why containers can have their own IP addresses and why traffic between them must be explicitly routed through the host kernel: it literally crosses a namespace boundary. A veth pair is a pair of connected virtual interfaces that behave like a pipe -- packets written to one end appear on the other. The eth0 your container sees is one end of a veth pair; the other end attaches to a bridge on the host, which is how the host kernel sees the same traffic. When you create a Docker network, Docker creates a bridge interface (e.g., docker0 for the default network) and attaches container veth interfaces to it. For containers to reach the outside world, the kernel needs IP forwarding enabled, and packets arriving from the bridge need to be masqueraded (source-NAT'd) to the host's outbound IP: the source address of outgoing packets is rewritten to the host's external interface address, so from the internet's perspective all container traffic originates from the host rather than from private addresses like 172.17.0.x, and conntrack maps return traffic back to the originating container.

Docker has handled this by writing directly into the kernel's netfilter stack. It creates several chains in the filter and nat iptables tables: DOCKER, DOCKER-ISOLATION-STAGE-1, DOCKER-ISOLATION-STAGE-2, and crucially, DOCKER-USER. The Docker packet filtering documentation is clear about the intent: Docker inserts a jump rule into the FORWARD chain that sends traffic through DOCKER-USER before any of Docker's own rules run.

DOCKER-USER was Docker's contract with the operator: put your custom restrictions here, and they will survive Docker daemon restarts because Docker will repopulate its own chains without touching yours. The problem was that this chain only existed at all because iptables has a fixed set of chains in a fixed set of tables. It was a workaround for a limitation, not a feature.

The Port Exposure Problem

Docker publishes ports by writing rules into the nat table's PREROUTING chain and the filter table's DOCKER chain. Because these rules live in the nat table, they execute before the INPUT chain that ufw and similar frontends manage. This is why a published Docker port is accessible externally even if ufw has a deny rule for it -- the DNAT happens before the packet reaches ufw's rules. This behavior exists with both iptables and nftables backends.

Stop and think

If Docker's DNAT happens at PREROUTING and ufw manages the INPUT chain, where would you need to write a rule to actually block external access to a published port?

The FORWARD hook -- not INPUT. Once DNAT rewrites the destination at PREROUTING, the packet is routed toward the container via the FORWARD path, never reaching INPUT at all. A rule blocking access to that port in your INPUT chain will never match it. The correct place is a custom chain at the FORWARD hook with a priority lower than Docker's forward chains, matching tcp dport <port> and filtering by source address or interface. This is exactly what the DOCKER-USER chain was for under iptables, and what a custom nftables forward chain at priority -1 accomplishes under the native backend.

What Happens When nftables Reloads

The central pain point that drove demand for native nftables support is the flush-and-reload lifecycle. When you run systemctl restart nftables, the service executes nft flush ruleset followed by reloading /etc/nftables.conf. Every rule in the nftables ruleset is wiped -- including the translated Docker rules that iptables-nft had installed. Docker's containers are still running, their veth pairs are still wired to the bridge, but the forwarding and masquerade rules are gone.
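If you manage host rules in /etc/nftables.conf yourself, you can reduce the blast radius of reloads by replacing only the tables you own instead of running a global flush. A common nftables idiom for this is shown below as a sketch -- the table name host-rules and its contents are example values:

```nft
#!/usr/sbin/nft -f
# Recreate only our own table. Declaring it first makes the delete
# safe even on the first run (the bare declaration is a no-op if the
# table already exists). Docker's tables are never touched.
table ip host-rules
delete table ip host-rules

table ip host-rules {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        tcp dport 22 accept
    }
}
```

Loading this file with nft -f applies the whole transaction atomically. Note that systemctl restart nftables still performs a global flush unless the distribution's service unit is changed to load a file structured this way.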

A 2024 Debian install documented by Natural Born Coder illustrates this directly: restarting the nftables service wiped Docker's rules from the kernel state, and affected containers lost network access until the Docker daemon was restarted. Restarting the Docker daemon re-installs its rules. This creates a sequencing dependency -- nftables must start before Docker -- that is solvable with systemd ordering but fragile in practice and invisible to operators who do not know to look for it.
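The ordering half of that fix can be pinned with a systemd drop-in -- a sketch, assuming the distribution ships docker.service and nftables.service under their standard names:

```ini
# /etc/systemd/system/docker.service.d/after-nftables.conf
[Unit]
# Start Docker only after nftables has loaded its ruleset, so Docker's
# rules are installed on top of it rather than wiped by it at boot.
After=nftables.service
Wants=nftables.service
```

Run systemctl daemon-reload after creating the file. This only fixes boot-time ordering; a manual systemctl restart nftables mid-session still flushes Docker's rules and still requires a Docker restart.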

Docker 29: The Native nftables Backend

The Docker Engineering Blog's v29 release post positioned the nftables addition as a significant milestone, framing it as opt-in support for writing native nftables rules in place of iptables-generated ones. The team acknowledged that while the rules are functionally equivalent between the two backends, operators who rely on the DOCKER-USER chain need to be aware of the differences before switching. The blog post signals the long-term direction without ambiguity:

"In a future release, nftables will become the default firewall backend and iptables support will be deprecated. In addition to adding planned Swarm support, there's scope for efficiency improvements."

-- Docker Engineering Blog, November 11, 2025

This is not a minor configuration option -- it is Docker committing to a migration path and asking the community to test it now. The efficiency improvements referenced include making more use of nftables sets for port matching, a capability the iptables model could not express natively.


The Docker nftables documentation describes what Docker 29 creates when the nftables backend is active. For bridge networks, Docker creates two tables: ip docker-bridges and ip6 docker-bridges in the host's network namespace. Additionally, nftables rules for DNS are created in the container's network namespace -- a detail the official documentation flags explicitly and that most migration guides overlook. Each table contains base chains that handle forwarding, masquerading, and per-container rules. Each bridge network gets its own chain within those tables. Importantly, Docker considers these tables its own property. The documentation is explicit: "Do not modify Docker's tables directly as the modifications are likely to be lost, Docker expects to have full ownership of its tables."

The Accept-Is-Non-Final Problem

This is the most commonly misunderstood behavior when migrating from iptables to nftables. In iptables, an ACCEPT verdict in the DOCKER-USER chain terminates further evaluation for that packet. In nftables, an accept verdict terminates processing for the current base chain only -- the packet then continues into subsequent base chains registered at the same hook. The nftables wiki documents this precisely: "An accept verdict is only guaranteed to be final in the case that there is no later chain bearing the same type of hook as the chain that the packet originally entered." A later base chain with a drop rule or drop policy can still discard the packet. This means a simple accept in your custom forward chain at priority -1 will not override Docker's own drop rules at priority 0. To unconditionally allow a packet past Docker's drop rules, you must use a firewall mark and the --bridge-accept-fwmark daemon option instead. Drops, by contrast, are final -- the nft man page states this directly: "A drop verdict (including an implicit one via the base chain's policy) immediately ends the evaluation of the whole ruleset. No further chains of any hook are consulted."
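A minimal illustration of the semantics -- an illustrative sketch, not Docker's actual chain layout:

```nft
# Two base chains registered at the same forward hook.
table ip demo-early {
    chain fwd {
        type filter hook forward priority -1;
        # "accept" ends evaluation of THIS base chain only...
        tcp dport 8080 accept
    }
}

table ip demo-late {
    chain fwd {
        # ...the same packet then enters this chain too, and the
        # drop policy discards it. A drop here IS final.
        type filter hook forward priority 0; policy drop;
    }
}
```

Swap the verdicts and the outcome inverts: a drop in demo-early would end evaluation of the entire ruleset, and demo-late would never see the packet.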

Docker 29 also introduces a related option: --bridge-accept-fwmark. This daemon flag accepts a numeric firewall mark value. Any packet carrying that mark will be accepted by Docker's bridge forward chains regardless of Docker's normal drop rules. The Docker 29 release notes describe this precisely: "Packets with this firewall mark will be accepted by bridge networks, overriding Docker's iptables or nftables drop rules" -- note that the flag works with both backends, not only the nftables one. The intent is to give operators a clean way to greenlight traffic from other tools -- a VPN daemon, a policy routing table -- without modifying Docker's owned chains directly. It is a narrow escape hatch, not a general override, but it closes a gap that previously required disabling Docker's firewall rules entirely and taking full manual control of forwarding.
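Putting the two pieces together might look like the following sketch. The mark value 1 and the WireGuard interface wg0 are arbitrary examples, and the daemon.json key is assumed to mirror the CLI flag name:

```
/etc/docker/daemon.json
{
    "firewall-backend": "nftables",
    "bridge-accept-fwmark": "1"
}
```

```nft
# Mark forwarded VPN traffic early (mangle = priority -150), before
# Docker's bridge forward chains evaluate it. A packet's meta mark
# persists for the rest of its traversal through later hooks.
table ip vpn-marking {
    chain pre {
        type filter hook prerouting priority mangle;
        iifname "wg0" meta mark set 1
    }
}
```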

Inter-Network Isolation with Docker Compose

Docker Compose creates a separate bridge network per project by default. With the iptables backend, isolation between those networks was enforced by the DOCKER-ISOLATION-STAGE-1 and DOCKER-ISOLATION-STAGE-2 chains. With the nftables backend, Docker maintains equivalent per-bridge isolation chains inside ip docker-bridges. The functional behavior is the same -- containers in separate Compose projects cannot reach each other unless explicitly connected. However, if you manually added custom rules into the iptables isolation chains, those chains do not exist in the nftables backend. Any such rules need to be moved into your own custom table with appropriate hook priorities.

How to Enable It

Enabling the nftables backend requires either a daemon flag or a configuration file entry. The configuration file approach is preferred for persistent deployments:

/etc/docker/daemon.json
{
  "firewall-backend": "nftables"
}

After saving the file, restart Docker:

bash
$ sudo systemctl restart docker
$ sudo nft list ruleset | grep docker-bridges

If Docker started successfully with the nftables backend, you will see entries for ip docker-bridges in the ruleset output. If the daemon reports an error about IP forwarding, see the section below on IP forwarding differences.

Swarm Mode Limitation

As of Docker 29, the nftables backend cannot be used when the Docker daemon is running in Swarm mode. Overlay network rules have not yet been migrated from iptables. Attempting to enable firewall-backend: nftables on a Swarm node will result in an error at daemon startup. This limitation is planned to be resolved in a future release.

IP Forwarding: A Critical Behavioral Difference

One of the less-obvious differences between the two backends is how Docker handles IP forwarding. The net.ipv4.ip_forward sysctl, when set to 1, tells the kernel to route packets between network interfaces -- to act as a router. Without it, packets arriving on one interface destined for another network are silently dropped at the routing decision stage. This is the knob Docker needs set to move packets from container veth interfaces to the host's external NIC. With the iptables backend, Docker has historically enabled IP forwarding on the host automatically: if net.ipv4.ip_forward was not set, Docker would set it, and would also configure a netfilter rule dropping forwarded packets unless explicitly accepted, to prevent the host from acting as an open router. This was a convenience that silently modified kernel parameters on your behalf. With nftables, you own that sysctl.

The Docker packet filtering documentation describes the changed behavior clearly. The official nftables docs state it directly: "With its nftables firewall backend enabled, Docker will not enable IP forwarding itself. It will report an error if forwarding is needed, but not already enabled." With the nftables backend, if a bridge network requires forwarding and it is not already enabled, Docker will report an error at daemon startup or when creating the network. You must enable it explicitly. One additional option the official docs describe: --ip-forward=false (or "ip-forward": false in daemon.json) disables Docker's forwarding check entirely, letting the daemon start even when forwarding is disabled. This is useful for diagnostics or specific VM configurations but should not be used on production hosts that need container networking to function -- port publishing and inter-network communication require forwarding to work.

bash -- enable IP forwarding persistently
# Enable immediately
$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo sysctl -w net.ipv6.conf.all.forwarding=1

# Make it persistent across reboots
$ echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.d/99-docker-nftables.conf
$ echo "net.ipv6.conf.all.forwarding = 1" | sudo tee -a /etc/sysctl.d/99-docker-nftables.conf
$ sudo sysctl -p /etc/sysctl.d/99-docker-nftables.conf

The documentation also raises a concern that is easy to miss: if Docker previously enabled IP forwarding using the iptables backend, and you stop Docker to migrate to nftables, forwarding may already be enabled in the running kernel. After a reboot, however, nothing re-enables it automatically -- so Docker will fail to start until you have persistence in place. Add the sysctl configuration before you reboot.

Additionally, because Docker no longer creates a default drop policy for forwarded traffic when using the nftables backend, you become responsible for ensuring that non-Docker interfaces do not forward traffic they should not. If your host has multiple network interfaces, add explicit nftables rules blocking unwanted inter-interface forwarding before enabling IP forwarding.
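A sketch of such a guard, assuming an uplink eth0 and a second interface eth1 that must not be routed (both interface names are example values):

```nft
table inet no-transit {
    chain forward {
        # Priority 10 runs after Docker's filter(0) chains. Because a
        # drop verdict is final, this blocks eth0<->eth1 transit even
        # if an earlier chain accepted the packet. Docker's bridge
        # traffic never matches this interface pair, so it is unaffected.
        type filter hook forward priority 10;
        iifname "eth1" oifname "eth0" drop
        iifname "eth0" oifname "eth1" drop
    }
}
```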

Stop and think

You're migrating a live production host. You stop Docker, edit daemon.json to add the nftables backend, and everything looks correct. You restart Docker successfully and all containers come up. Two weeks later the host reboots after a kernel update. What breaks, and why?

IP forwarding. You enabled it with sysctl -w net.ipv4.ip_forward=1 on the running kernel, but never wrote it to /etc/sysctl.d/. The iptables backend would have re-enabled forwarding on Docker startup. The nftables backend will not -- it expects the operator to own that sysctl. After the reboot, Docker starts, creates its nftables tables, but when a container tries to reach the internet, the kernel drops the forwarded packet silently because forwarding is disabled. The symptom is containers that appear running but have no outbound connectivity, with nothing in Docker logs pointing at the root cause.

IPv6 and the nftables Backend

The article so far has focused on IPv4 forwarding, but the nftables backend creates parallel rules for IPv6 in a separate ip6 docker-bridges table. If you have IPv6 enabled in your Docker daemon, both tables are populated. If you do not, only the IPv4 table is created.

This matters because IPv6 forwarding is a separate kernel sysctl from IPv4 forwarding. The code block above enables both with net.ipv6.conf.all.forwarding=1, but there is a subtlety: on hosts that use router advertisements (RA) to configure their own IPv6 default route, enabling forwarding can cause the kernel to stop accepting RAs on that interface. This is defined behavior in RFC 4861, not a Docker quirk. If your Docker host gets its upstream IPv6 connectivity via RA (common on VPS providers), enabling IPv6 forwarding without also setting net.ipv6.conf.eth0.accept_ra=2 will silently drop your host's IPv6 default route after the next RA expires.

IPv6 Forwarding and Router Advertisements

If your host uses SLAAC/RA for its own IPv6 connectivity, add net.ipv6.conf.<interface>.accept_ra = 2 to your sysctl.d configuration alongside the forwarding line. The value 2 tells the kernel to accept RAs even when forwarding is enabled. Replace <interface> with your actual upstream interface name (typically eth0 or ens3).
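Combined with the forwarding lines from earlier, the resulting sysctl.d file might look like this, with eth0 assumed as the RA-facing uplink:

```
# /etc/sysctl.d/99-docker-nftables.conf
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
# Keep accepting router advertisements even with forwarding enabled
net.ipv6.conf.eth0.accept_ra = 2
```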

There is a second IPv6-specific question: does the nftables backend correctly masquerade IPv6 traffic from containers? By default, Docker performs NAT for IPv4 (masquerade) but does not do NAT for IPv6 -- the expectation is that IPv6 addresses are globally routable and masquerade is unnecessary. If you have assigned a real routable IPv6 prefix to your containers using fixed-cidr-v6 in daemon.json, container traffic leaves the host with the container's own address and no masquerade is needed. If you are instead using a ULA (Unique Local Address, fd00::/8) subnet for containers, you will need to add explicit masquerade rules in a table you own, just as you would write your own IPv4 masquerade rule -- Docker will not add them for you.
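Such a masquerade rule might look like the following sketch, where the ULA prefix fdaa:1:2::/64 and the uplink eth0 are example values:

```nft
table ip6 custom-docker-nat {
    chain postrouting {
        # srcnat is the named priority (100) for NAT postrouting
        type nat hook postrouting priority srcnat;
        ip6 saddr fdaa:1:2::/64 oifname "eth0" masquerade
    }
}
```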

The practical takeaway: run sudo nft list table ip6 docker-bridges after enabling the backend and confirm the table exists and contains rules for your bridge networks. If you have IPv6 disabled in Docker, the table will be absent. If you see an empty or missing table when you expected IPv6 support, check both the daemon configuration and that ip6tables: true has not been explicitly overridden in daemon.json.

Migrating Away from DOCKER-USER

If your current setup uses rules in the iptables DOCKER-USER chain -- which is the recommended way to add custom restrictions with the iptables backend -- those rules need to be converted. In the native nftables backend, there is no DOCKER-USER chain. Docker's approach is different: nftables allows each table to define its own base chains with configurable hook priorities. Every base chain registered at a netfilter hook carries a signed integer priority, and lower numbers run first. Docker's forward chains use the standard filter priority (0), so a custom chain at priority -1 runs before Docker's and one at priority 1 runs after. This replaces the need for Docker to create a synthetic DOCKER-USER chain: you own your own tables and set your own priorities, with no contract with Docker required. You can therefore create a table with chains that execute before or after Docker's chains at the same hook point.

Docker uses well-known priority values for its base chains, documented in the nftables documentation. To insert rules that run before Docker's rules, create a table with a base chain at a lower priority value (lower number = higher priority in netfilter) at the same hook. To run after Docker's rules, use a higher value.

DOCKER-USER Jump Persistence: A Migration Gotcha

The official Docker documentation describes what happens when you restart the daemon with the nftables backend after having run with iptables: Docker removes its iptables chains and replaces them with nftables rules. Your DOCKER-USER chain and its rules will be removed. The real gotcha is different: what Docker does not automatically clean up on a live system is the iptables FORWARD chain default policy, which it may have previously set to DROP. An iptables FORWARD policy of DROP will silently drop packets that Docker's nftables rules would otherwise accept, because the packet traverses both the iptables and nftables chains. This stale DROP policy persists in the running kernel until a reboot, and it breaks port publishing and inter-network communication even when the nftables rules look correct. Removing it explicitly is required on a live migration: sudo iptables -P FORWARD ACCEPT && sudo ip6tables -P FORWARD ACCEPT.

Here is the same restriction -- limiting which source IPs can reach a published port -- expressed in both models side by side:

DOCKER-USER approach -- rules added to Docker's chain, survive restarts, disappear on nft flush
# Add rules directly into the DOCKER-USER chain
# Docker will preserve this chain across daemon restarts
$ sudo iptables -I DOCKER-USER -i eth0 \
    -p tcp --dport 8080 \
    ! -s 192.168.1.0/24 \
    -j DROP

# Problem: if `systemctl restart nftables` runs first,
# DOCKER-USER is gone. nft flush ruleset sees no distinction
# between Docker's rules and yours.
# Also: no set support -- 50 CIDRs means 50 separate -I rules
Custom table approach -- you own this table, Docker cannot touch it, sets scale to any size
# Your table -- completely separate from ip docker-bridges
table ip custom-docker-rules {

    set allowed-sources {
        type ipv4_addr; flags interval
        elements = { 192.168.1.0/24 }
    }

    # priority -1 = runs before Docker's filter(0) chains
    chain pre-docker {
        type filter hook forward priority -1;
        tcp dport 8080 ip saddr != @allowed-sources drop
    }
}

# Add a CIDR at runtime, no ruleset reload needed:
$ sudo nft add element ip custom-docker-rules allowed-sources { 10.0.0.0/8 }
/etc/nftables.conf -- custom Docker restrictions
# Custom table for rules that run before Docker's forward chains
table ip custom-docker-rules {

    # Docker's forward chain priority is filter (0)
    # Use -1 to execute before Docker's rules
    chain forward-before-docker {
        type filter hook forward priority -1;

        # Allow established and related connections first
        ct state established,related accept

        # Restrict port 8080 to an internal subnet only
        iifname "eth0" tcp dport 8080 ip saddr != 192.168.1.0/24 drop

        # Log any other eth0 traffic that reaches this chain
        # (runs for every remaining packet, not only drops)
        iifname "eth0" log prefix "docker-custom-drop: "
    }
}
Verifying Rule Order

After applying your nftables configuration, run sudo nft list ruleset and check the output carefully. Base chains at the same hook point are listed with their priority. Confirm your custom chain has a lower priority number than Docker's forward chains. You can also run sudo nft monitor trace combined with a test packet to trace the exact chain traversal order.
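nft monitor trace only reports packets that carry the nftrace flag, so you first need a rule that sets it. A sketch for tracing TCP port 8080, using a throwaway table:

```nft
# Temporary debug table: set the trace flag as early as possible
# (raw = priority -300, before DNAT rewrites the destination).
table ip trace-debug {
    chain pre {
        type filter hook prerouting priority -300;
        tcp dport 8080 meta nftrace set 1
    }
}
```

With that loaded, sudo nft monitor trace prints every chain the matching packet traverses, in order, including Docker's. Delete the table when finished: sudo nft delete table ip trace-debug.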

Using Sets to Replace Long DOCKER-USER Allow Lists

One of the more practical improvements the migration enables: if your DOCKER-USER chain contained a series of rules matching against individual IP addresses or CIDR ranges -- a common pattern for restricting which hosts can reach a management port -- those rules translated to a linearly traversed chain in iptables. In nftables, the same restriction collapses to a single set lookup. Define a named set of permitted source addresses in your custom table, and your forward chain rule references the set directly. The match performance is constant regardless of set size, and adding or removing permitted addresses is an atomic nft add element or nft delete element operation that does not require reloading the entire ruleset.

/etc/nftables.conf -- using a set for IP allow-listing
table ip custom-docker-rules {

    set allowed-mgmt-sources {
        type ipv4_addr
        flags interval
        elements = { 10.0.0.0/8, 192.168.1.0/24 }
    }

    chain forward-before-docker {
        type filter hook forward priority -1;

        # Block management port from anything not in the allow set
        tcp dport 9000 ip saddr != @allowed-mgmt-sources drop
    }
}

# Add an address at runtime without reloading
$ sudo nft add element ip custom-docker-rules allowed-mgmt-sources { 172.16.5.0/24 }

Handling Per-Container Rules Without Rewriting Every Chain

A challenge that comes up when managing many containers: DOCKER-USER rules often referenced specific published ports, which change as containers are started and stopped. In the nftables model, a cleaner approach is to use verdict maps -- a native nftables data structure that maps a key (such as a destination port) to a verdict (accept, drop, or jump to another chain). Instead of adding a new rule per container, you maintain one map and update its entries. The forward chain rule that references the map stays constant; only the map's content changes. This is structurally closer to how Docker itself manages per-bridge rules internally, and it makes per-container policy manageable at scale.
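A sketch of that pattern, with example ports:

```nft
table ip custom-docker-rules {
    # One rule, one map: destination port -> verdict.
    # Update the map entries as containers come and go.
    map port-policy {
        type inet_service : verdict
        elements = { 9000 : drop, 9090 : drop }
    }

    chain forward-before-docker {
        type filter hook forward priority -1;
        tcp dport vmap @port-policy
    }
}
```

Entries change atomically at runtime, e.g. sudo nft add element ip custom-docker-rules port-policy { 9443 : drop }. Keep the accept-is-non-final caveat in mind: a drop verdict returned by the map is final, but an accept only ends this base chain and Docker's own chains still run.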

The Compatibility Layer: What Still Runs Through iptables-nft

Even after enabling Docker's native nftables backend, iptables does not disappear from your system. Other software -- fail2ban, libvirt, and several VPN clients among them -- may still write rules through iptables-nft. On a host using iptables-nft as the default, those rules will be translated and appear in the nftables ruleset in a compatibility table alongside Docker's native tables.

The important thing to understand is the evaluation order. Both Docker's native nftables tables and the iptables-nft compatibility tables are chains attached to the same Netfilter hooks. Netfilter calls all chains registered at a given hook in priority order, regardless of which tool created them. This means you can have Docker's rules in ip docker-bridges and a fail2ban rule in the iptables-nft compatibility table both operating on forwarded packets -- they just need to not conflict.

The Arch Linux nftables wiki notes that nftables will warn you if legacy iptables objects are loaded alongside native rules, and recommends against using iptables-nft in the same environment as native nftables to prevent rule mixing. For production Docker hosts where you control all the software, this is achievable. For shared infrastructure where other services write iptables rules, you need to audit what those services create and verify it does not interfere with Docker's chains.

bash -- inspect the full ruleset including iptables-nft tables
# See everything registered with Netfilter, both native and shim-translated
$ sudo nft list ruleset

# Check whether legacy iptables rules also exist alongside nftables rules
$ sudo iptables-legacy -L -n 2>/dev/null

# Confirm your system's iptables binary points to the nft backend
$ iptables --version
# Expected: iptables v1.8.x (nf_tables)

Interaction with firewalld

On Fedora, RHEL, and CentOS systems, firewalld manages nftables (or iptables) as its backend. The Fedora project documented the Docker incompatibility with the nftables backend clearly: Docker historically side-stepped firewalld by injecting iptables rules ahead of firewalld's rules. With the nftables backend active in firewalld, those injected iptables rules were evaluated correctly by the iptables subsystem, but firewalld's own nftables rules then ran separately and could drop the same packet.

Docker 29's native nftables backend resolves this for firewalld environments. According to the Docker Engine v29 blog post, when the nftables backend is active, Docker still sets up firewalld zones and policies for its bridge devices, but creates nftables rules directly rather than using firewalld's deprecated direct rules interface. This means Docker's rules and firewalld's rules now coexist in the same framework, governed by the same priority system.

The practical consequence is that on a firewalld system with Docker 29 in nftables mode, you no longer need to add Docker interfaces to firewalld's trusted zone to prevent firewalld from dropping Docker traffic -- Docker creates its own nftables table and handles its own forwarding policy within it.
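You can confirm this arrangement from the command line on a firewalld host. The docker zone name below is what current Docker releases register; verify against your own output:

```shell
# Confirm firewalld is using its nftables backend
$ sudo grep '^FirewallBackend' /etc/firewalld/firewalld.conf
FirewallBackend=nftables

# Docker registers its bridge interfaces in a dedicated zone
$ sudo firewall-cmd --get-active-zones
$ sudo firewall-cmd --info-zone=docker
```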

What About Podman?

Concept connection
The Podman/Docker coexistence problem is a direct consequence of the shared-namespace nature of Netfilter. Both runtimes register hooks at the same kernel subsystem with no awareness of each other. The iptables backend's global FORWARD DROP policy doesn't know that traffic it's blocking was legitimately authorized by Podman's nftables rules -- they operate in completely different layers of the same stack.

Any practical discussion of Docker's nftables migration in 2026 has to acknowledge that Podman already did this -- and did it earlier. Podman's networking layer, Netavark, switched to nftables as its default firewall backend well before Docker 29 shipped. If you are running rootful Podman containers alongside Docker on the same host, you now have two container runtimes both writing nftables rules, each in their own table. That arrangement works -- the priority system handles it -- but there is a known conflict to be aware of.

The Fedora project documented the core problem directly: when Docker runs with its iptables backend and Podman uses nftables via Netavark, Docker's iptables rules include a catch-all forward drop that can block Podman container traffic from reaching external networks. The result is that Podman containers lose outbound connectivity even though their own nftables rules are correct. The documented workaround is to add an iptables rule permitting the Podman traffic, or to switch Docker to the nftables backend so both runtimes operate in the same framework without the iptables-injected forward drop interfering.

Switching Docker to the nftables backend effectively resolves this class of conflict. Both Docker and Podman will write native nftables rules into their own separate tables, and Netfilter will evaluate them in priority order without one runtime's legacy iptables rules unexpectedly blocking the other's traffic. If you are running a mixed Docker and Podman environment, the nftables backend migration is worth prioritizing for this reason alone -- it removes an entire category of inter-runtime networking bugs.
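To check which firewall driver Netavark is using on your host, inspect containers.conf. The firewall_driver key under [network] is the option as documented in containers.conf(5) -- verify the exact spelling against your distribution's man page:

```shell
# Netavark's driver selection lives in containers.conf ([network] section)
$ grep -A2 '^\[network\]' /etc/containers/containers.conf
[network]
firewall_driver = "nftables"

# Recent Podman versions report the active network backend
$ podman info --format '{{.Host.NetworkBackendInfo.Backend}}'
netavark
```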

Rootless Containers and nftables

Rootless Podman containers do not write rules to the host's nftables at all -- they use a user-space network stack (slirp4netns or pasta) that operates entirely within the user's network namespace. This means rootless Podman has no interaction with Docker's nftables tables. The coexistence concern above applies only to rootful Podman containers.

Security Considerations for the Transition

The shift to native nftables changes the threat model for Docker host firewall management in one important way: Docker's rules are now genuinely isolated in their own named tables rather than being injected into shared chains. This makes it easier to audit exactly what Docker owns versus what your custom rules own, and harder to accidentally write a rule in the wrong place.

However, the documented limitation about published ports bypassing host firewall INPUT chains remains with both backends. A port published with -p 8080:8080 is accessible from the network even if your nftables input chain blocks port 8080. The DNAT in Docker's nat table fires at the prerouting hook before the packet reaches your input chain. Restricting access to published ports still requires rules in the forward hook -- either in a custom table with appropriate priority, or by setting "iptables": false in daemon.json and managing all Docker forwarding rules manually (not recommended for most deployments).
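Because drops are final at any priority, a custom forward-hook chain that runs before Docker's chains (priority 0) can reliably restrict who reaches a published port. A sketch, assuming a hypothetical table name, the default docker0 bridge, and an internal 10.0.0.0/8 network:

```nft
table ip host-policy {
    chain fwd {
        # Priority -10 runs before Docker's forward chains at priority 0
        type filter hook forward priority -10; policy accept;
        # Only the internal network may reach the container behind -p 8080:8080
        oifname "docker0" tcp dport 8080 ip saddr != 10.0.0.0/8 drop
    }
}
```

Note that by the time the forward hook runs, Docker's prerouting DNAT has already rewritten the destination, so match on the container-side port if your host and container ports differ.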

Do Not Disable Docker's Firewall Rules Unless You Know What You Are Doing

Setting "iptables": false or "ip6tables": false in daemon.json instructs Docker not to create any firewall rules at all. The Docker documentation notes this will likely break container networking. If you take this path, you must write all forwarding, masquerade, and DNS rules yourself. This is only appropriate for advanced operators who want full manual control of the firewall.

Using nftables Packet Tracing for Debugging

For administrators coming from an iptables-based setup, the nftables tracing feature is worth learning. Running sudo nft monitor trace while sending a test packet to a published container port lets you watch exactly which chains evaluate the packet, in what order, and what verdict each rule returns. The closest iptables equivalent was the raw-table TRACE target, whose output had to be dug out of the kernel log; nft monitor trace streams the packet's walk through the ruleset directly to your terminal, which makes it genuinely useful for debugging why a packet is or is not reaching a container.
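A sketch of the workflow, using a hypothetical trace-debug table and the published port 8080; meta nftrace is standard nftables:

```shell
# Flag packets to the published port for tracing, early in prerouting
$ sudo nft add table ip trace-debug
$ sudo nft add chain ip trace-debug pre '{ type filter hook prerouting priority -350; }'
$ sudo nft add rule ip trace-debug pre tcp dport 8080 meta nftrace set 1

# In a second terminal, watch every chain the packet traverses
$ sudo nft monitor trace

# Generate a test packet, then delete the table when done
$ curl -s http://localhost:8080/ >/dev/null
$ sudo nft delete table ip trace-debug
```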

Using --bridge-accept-fwmark Instead of Disabling Rules

A solution that does not appear in most migration discussions -- and that the official Docker documentation describes as the correct replacement for DOCKER-USER ACCEPT rules: rather than trying to override Docker's drop rules with a simple accept in a custom chain, use the --bridge-accept-fwmark flag. It exists because of the accept-is-non-final behavior: an accept in your custom chain at priority -1 does not prevent Docker's forward chain at priority 0 from later dropping the packet, so the only reliable way to force a packet past Docker's own drop rules is to mark it. Docker 29 introduced --bridge-accept-fwmark=<value> (optionally with a mask: --bridge-accept-fwmark=0x1/0x3) for exactly this case: set a mark in a custom chain at priority -1 or lower, and Docker's forward chain will accept packets carrying that mark regardless of its own drop rules. Drops, by contrast, are final -- a packet dropped at any priority is discarded immediately with no further evaluation.

Set the firewall mark value in your routing policy or VPN configuration, pass that mark value to Docker, and Docker's forward chains will accept packets carrying that mark unconditionally -- without you touching Docker's owned tables at all. This preserves the clean ownership boundary while solving a real operational problem that previously forced operators toward the risky "iptables": false path.
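A sketch of the pattern, assuming a WireGuard interface wg0 and mark value 0x1 (both illustrative):

```shell
# Mark forwarded VPN traffic in a chain that runs before Docker's (priority -1)
$ sudo nft add table ip pre-docker
$ sudo nft add chain ip pre-docker fwd '{ type filter hook forward priority -1; }'
$ sudo nft add rule ip pre-docker fwd iifname "wg0" meta mark set 0x1

# Tell the daemon to honor the mark (flag form shown for clarity; on a
# systemd host, put the equivalent option in daemon.json or a unit drop-in)
$ sudo dockerd --firewall-backend=nftables --bridge-accept-fwmark=0x1
```

Setting a mark does not terminate evaluation, so the packet continues into Docker's forward chain, which then accepts it on sight of the mark.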

Restricting Container-to-Host Traffic in the nftables Model

A less commonly addressed scenario: restricting what containers can reach on the Docker host itself. With the iptables backend, this was done via the INPUT chain, but published port DNAT bypasses INPUT for inbound traffic. With nftables, a more targeted approach is to write rules in a custom table at the input hook that match specifically on the Docker bridge interface name (iifname "docker0" or the specific bridge). This lets you allow selected ports from containers while blocking others without interfering with external traffic rules. The separation of Docker's tables from your custom table means these host-access rules survive Docker daemon restarts cleanly.
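A sketch, assuming the default docker0 bridge and a hypothetical table name. Within a single chain, accept does terminate that chain's evaluation, so the allow-then-drop ordering below behaves as expected:

```nft
table ip host-input {
    chain in {
        type filter hook input priority 0; policy accept;
        # Containers may query the host's DNS resolver...
        iifname "docker0" udp dport 53 accept
        iifname "docker0" tcp dport 53 accept
        # ...but nothing else on the host
        iifname "docker0" drop
    }
}
```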

Auditing What Docker Actually Owns

With the nftables backend active, sudo nft list table ip docker-bridges gives you a complete, readable picture of every rule Docker has installed -- and nothing else. Compare this to the iptables model, where iptables -L -n -v returned all rules from all tools in one undifferentiated list. The new model makes security auditing structurally simpler: your rules live in your tables, Docker's rules live in Docker's tables, and nft list ruleset shows both with clear ownership labels. This is a meaningful improvement for any environment subject to configuration audits or compliance reviews.

A Practical Migration Checklist

If you are running Docker on a host where you control the full stack and want to move to the native nftables backend, the steps below reflect the current Docker 29 documentation and known migration requirements:

  1. Verify Docker version. The nftables backend requires Docker Engine 29.0.0 or later. Check with docker version.
  2. Audit your DOCKER-USER chain. Run sudo iptables -L DOCKER-USER -n and document any custom rules. Pay particular attention to any ACCEPT rules -- in nftables these must be converted to firewall-mark patterns using --bridge-accept-fwmark, not simple accept rules, because nftables accept verdicts are non-final across base chains. Drop rules translate directly to nftables drop rules in a custom forward chain at priority -1.
  3. Enable IP forwarding persistently. Add both IPv4 and IPv6 forwarding to /etc/sysctl.d/ before switching backends and before rebooting.
  4. Write blocking rules for non-Docker interfaces. If your host has multiple network interfaces, add nftables forward chain rules that drop traffic between non-Docker interfaces before enabling global forwarding.
  5. Stop and disable legacy iptables services if present. On systems where an iptables systemd service exists, run sudo systemctl disable --now iptables ip6tables to prevent it from interfering with your nftables configuration at boot.
  6. Clear any iptables FORWARD DROP policy left by Docker. When Docker ran with the iptables backend, it set the FORWARD chain policy to DROP. If you stop Docker before rebooting, that DROP policy persists in the running kernel. Running iptables -P FORWARD ACCEPT and ip6tables -P FORWARD ACCEPT before switching backends prevents that stale policy from silently blocking traffic after Docker restarts with nftables. The policy reverts automatically on reboot, but if you are migrating on a live system without a reboot, clearing it explicitly is required.
  7. Add the firewall-backend setting to daemon.json. Write "firewall-backend": "nftables" to /etc/docker/daemon.json.
  8. Restart Docker. Run sudo systemctl restart docker. Docker removes its iptables chains, including DOCKER-USER, and creates nftables rules in their place. The FORWARD DROP policy is the only thing that will not clean itself up automatically on a live system -- you must clear it explicitly as described in the previous step.
  9. Verify with nft list ruleset. Confirm the ip docker-bridges table is present, and confirm no stale DOCKER iptables chains remain: sudo iptables -L DOCKER -n 2>/dev/null || echo "DOCKER chain gone -- correct".
  10. Test container networking. Start a container, publish a port, and verify both external access and container-to-host connectivity work as expected.
bash -- quick post-migration verification
# Confirm nftables backend is active -- should show docker-bridges tables
$ sudo nft list ruleset | grep -E "table ip (docker|custom)"

# Confirm no legacy iptables Docker chains exist
$ sudo iptables -L DOCKER -n 2>/dev/null || echo "DOCKER chain not found -- correct"

# Confirm IP forwarding is live
$ sysctl net.ipv4.ip_forward
# Expected: net.ipv4.ip_forward = 1

# Start a test container and check connectivity
$ docker run --rm -it --name test-net alpine ping -c 3 1.1.1.1
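The configuration files referenced in steps 3 and 7 of the checklist look like this (the sysctl.d file name is arbitrary):

```shell
# Step 3: persist IP forwarding across reboots
$ cat /etc/sysctl.d/99-docker-forward.conf
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
$ sudo sysctl --system   # apply without rebooting

# Step 7: select the nftables backend
$ cat /etc/docker/daemon.json
{
  "firewall-backend": "nftables"
}
```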

Rolling Back to the iptables Backend

The migration checklist covers going forward. The question the documentation is quieter about is: what does going backward look like if the nftables backend causes problems in your environment?

Rolling back is straightforward in principle. Remove "firewall-backend": "nftables" from /etc/docker/daemon.json (or change it back to "iptables"), then restart the Docker daemon. Docker will delete its nftables rules and recreate the iptables chains it previously used. Any custom rules you wrote in your own nftables table with forward hook priorities will remain -- they will not be removed by Docker -- but they will now be running alongside Docker's iptables-nft rules rather than its native nftables tables. Review them for conflicts.
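In command form (the daemon.json edit is shown manually; use your configuration management where applicable):

```shell
# Revert the backend selection, then restart the daemon
$ sudo $EDITOR /etc/docker/daemon.json   # remove "firewall-backend" or set it to "iptables"
$ sudo systemctl restart docker

# Docker's iptables chains should reappear, and its native tables should be gone
$ sudo iptables -L DOCKER-USER -n
$ sudo nft list ruleset | grep 'docker-bridges' || echo "native tables removed -- correct"
```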

Two things to verify after rolling back. First, confirm IP forwarding is still set correctly -- in iptables mode, Docker will re-enable forwarding on startup if it was not already active, but the sysctl.d persistence you added during migration is harmless to leave in place. Second, if you cleared the iptables FORWARD DROP policy during migration, Docker will re-add it when it restarts with the iptables backend. That is expected behavior, not a problem.

Test First on Non-Production

The safest migration path is to test the nftables backend on an equivalent staging host before touching production. Capture sudo nft list ruleset output on both hosts after migration and compare. Any rules present on one but missing on the other indicate a configuration difference worth investigating before production cutover.

Where This Is Going

The thread through this article
Every concept covered here traces back to a single root: Netfilter is a shared kernel subsystem, and for 20+ years every tool that needed packet filtering had to negotiate access to it without a clean ownership model. iptables's fixed tables, Docker's DOCKER-USER workaround, the flush-and-reload sequencing problem, the Podman conflict -- these are all symptoms of that root condition. nftables's named tables with configurable priorities are the kernel's answer to the ownership problem. Docker 29 is the moment container networking finally aligns with that answer.

Docker 29's nftables backend is explicitly described as experimental, but the roadmap is not. The Docker engineering team's v29 announcement stated that nftables will become the default firewall backend in a future release and iptables support will be deprecated. The remaining work before that happens is Swarm and overlay network support, efficiency improvements using nftables sets for port matching, and gathering production feedback from the community.

For Linux administrators running Docker in single-host deployments today, the native nftables backend is testable and stable enough to evaluate. It eliminates the rule-flush sequencing problem, gives you a clean separation between Docker's firewall ownership and your own, and aligns with where every major Linux distribution is heading with its default firewall tooling. The iptables-nft shim has been a reasonable bridge, but it was always a translation layer sitting between Docker and the kernel -- and translation layers carry costs and failure modes that go away when you remove them.

The migration is not zero-effort. If you have DOCKER-USER rules, they need to be rewritten. If you have not thought carefully about IP forwarding on a multi-interface host, you need to now. But those are clarifying exercises, not obstacles -- they force you to understand what your firewall is actually doing at the packet level, which is knowledge worth having regardless of which backend is running.

Sources