You spin up a Docker Compose stack and suddenly your VPN disconnects. Or a container that worked yesterday can no longer resolve DNS. Or two services fight over port 8080 and one silently fails. These are container networking conflicts -- and they are among the hardest infrastructure problems to diagnose because the symptoms rarely point to the cause.

Container runtimes like Docker and Podman create virtual networks, manipulate routing tables, inject firewall rules, and generate DNS configurations -- all automatically, all silently. When those decisions overlap with your host network, corporate LAN, VPN tunnels, or other containers, the result is unpredictable connectivity loss that can take hours to trace. This guide covers the major categories of container networking conflict on Linux: subnet overlaps, MTU mismatches, port binding collisions, DNS resolution failures, firewall rule interference, IPv6 dual-stack gaps, and multi-runtime coexistence. It also covers direct network namespace inspection — the diagnostic technique that bypasses Docker entirely and reads kernel state directly.


Subnet Overlaps: The Silent Infrastructure Killer

Subnet overlaps are the single most disruptive category of container networking conflict, and they are the hardest to notice before damage is done. The problem is straightforward: Docker picks IP address ranges for its virtual networks without checking whether those ranges are already in use on your infrastructure.

By default, Docker draws container subnets from a series of pre-defined pools. The built-in configuration allocates individual /16 blocks from 172.17.0.0/16 through 172.31.0.0/16, then /20 blocks from 192.168.0.0/16. The default bridge network (docker0) claims 172.17.0.0/16 as the first pool entry. Every additional user-defined network carves out the next available block — 172.18.0.0/16, then 172.19.0.0/16, and so on. Once those 14 allocations are exhausted, Docker moves to the 192.168.0.0/16 range in /20 blocks (16 possible subnets there). The full vanilla default is equivalent to seven entries in default-address-pools: six spanning the 172.x range and a final one for 192.168.0.0/16 with size 20. This behavior is documented in the official dockerd configuration reference and the networking overview.
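The allocation sequence is easy to reproduce with Python's ipaddress module. This is a sketch of the documented defaults, not Docker's actual allocator:

```python
import ipaddress

# /16 blocks 172.17.0.0/16 .. 172.31.0.0/16, then /20 blocks carved
# out of 192.168.0.0/16 -- the order Docker hands out subnets.
pools = [ipaddress.ip_network(f"172.{n}.0.0/16") for n in range(17, 32)]
pools += list(ipaddress.ip_network("192.168.0.0/16").subnets(new_prefix=20))

print(len(pools))   # 31 candidate subnets in total (15 + 16)
print(pools[0])     # 172.17.0.0/16 -- claimed by docker0
print(pools[1])     # 172.18.0.0/16 -- first user-defined network
print(pools[15])    # 192.168.0.0/20 -- first block after the 172.x range
```

Any subnet in this list that also appears in your routing table is a collision waiting to happen.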

Why It Happens

Many corporate networks, cloud VPCs, and VPN tunnels use addresses within the 172.16.0.0/12 or 10.0.0.0/8 ranges. When Docker creates a bridge network with a subnet that overlaps one of these, the Linux routing table gains a second, more-specific route for the same address space. Traffic destined for your internal servers or VPN endpoints gets captured by the Docker bridge instead of reaching its intended gateway. The result: machines on your LAN become unreachable, VPN tunnels drop, and internal services vanish -- but only from the host running Docker.

Diagnosing the Conflict

Start by comparing Docker's allocated subnets against your host's routing table:

subnet audit
# List every Docker network and its subnet
$ docker network inspect $(docker network ls -q) \
    --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'

# Display the host routing table
$ ip route

# Filter for private ranges that commonly conflict
$ ip route show | grep -E "172\.|10\.|192\.168"

# Show all host interfaces and their addresses
$ ip addr show

If any Docker subnet overlaps with a route in your host's routing table, you have confirmed the conflict. Pay particular attention to VPN interfaces (tun0, wg0) and cloud-provider virtual interfaces.

When a process on your host sends a packet to, say, 10.8.0.1 (a VPN gateway), the kernel consults its routing table using the longest-prefix match algorithm. It selects the most specific route — the one with the highest prefix length — that covers the destination. If Docker has created a bridge with subnet 10.8.0.0/24, and your VPN client has installed a route for 10.8.0.0/16, the Docker route wins because /24 is longer (more specific) than /16. The packet goes to the Docker bridge instead of the VPN tunnel.
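The longest-prefix match can be modeled in a few lines of Python. This is an illustration of the kernel's route selection logic, not an actual routing-table lookup:

```python
import ipaddress

routes = [
    ipaddress.ip_network("10.8.0.0/16"),  # installed by the VPN client
    ipaddress.ip_network("10.8.0.0/24"),  # installed by Docker for a bridge
]
dst = ipaddress.ip_address("10.8.0.1")

# Among all routes covering the destination, the longest prefix wins.
matches = [r for r in routes if dst in r]
winner = max(matches, key=lambda r: r.prefixlen)
print(winner)  # 10.8.0.0/24 -- the Docker bridge captures the packet
```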

What makes this particularly deceptive is that the VPN tunnel is still up. Its route still exists in the table. The kernel simply finds a more specific match first. Running ip route get 10.8.0.1 will show exactly which interface and gateway the kernel selected for that specific destination, which immediately exposes whether Docker is intercepting traffic.

The same mechanism is why the fix works: by assigning Docker a subnet range that no existing route covers — such as 10.200.0.0/24 on a network that only uses 10.0.0.0/8 with shorter prefixes elsewhere — Docker's routes never win because they never compete. The address space belongs to Docker exclusively in the routing table.
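Before picking a replacement range, you can check any candidate or existing CIDR against Docker's default pools programmatically. A minimal sketch using the ipaddress module:

```python
import ipaddress

# Docker's default allocation ranges, as described above.
DEFAULT_POOLS = [ipaddress.ip_network(f"172.{n}.0.0/16") for n in range(17, 32)]
DEFAULT_POOLS.append(ipaddress.ip_network("192.168.0.0/16"))

def conflicts(cidr: str):
    """Return the default Docker pools that overlap a given VPN/LAN range."""
    net = ipaddress.ip_network(cidr, strict=False)
    return [p for p in DEFAULT_POOLS if p.overlaps(net)]

print(conflicts("172.20.5.0/24"))  # [IPv4Network('172.20.0.0/16')] -- collision
print(conflicts("10.200.0.0/24"))  # [] -- safe, Docker never allocates here
```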


Fixing It: Custom Address Pools

The permanent fix is to tell Docker exactly which address ranges to use, choosing ranges that do not collide with anything on your network. Edit /etc/docker/daemon.json:

/etc/docker/daemon.json
{
  "bip": "10.200.0.1/24",
  "default-address-pools": [
    { "base": "10.201.0.0/16", "size": 24 },
    { "base": "10.202.0.0/16", "size": 24 }
  ]
}

The bip field sets the IP and subnet for the default docker0 bridge. The default-address-pools array defines where Docker carves out subnets for all user-defined networks. Setting size to 24 means each new network gets a /24 block (254 usable addresses) instead of the default allocations (which hand out entire /16 or /20 blocks per network depending on the pool), dramatically reducing address space consumption and the chance of overlap. Always validate daemon.json before restarting Docker -- a JSON syntax error silently prevents the daemon from starting. Use python3 -m json.tool /etc/docker/daemon.json or jq . /etc/docker/daemon.json to verify syntax first.

Warning

Changing daemon.json does not retroactively update existing networks. Stop all containers, prune unused networks with docker network prune -f, restart Docker with systemctl restart docker, and then bring your stacks back up so they recreate their networks using the new pool.


For environments using Docker Compose, you can also define explicit subnets per project to eliminate any ambiguity:

docker-compose.yml
networks:
  app-net:
    driver: bridge
    ipam:
      config:
        - subnet: 10.201.1.0/24
          gateway: 10.201.1.1
Note

Podman's Netavark-based default network (named podman) claims 10.88.0.0/16. New user-defined networks pick subnets sequentially starting from 10.89.0.0/24 up through 10.255.255.0/24, as documented in the Podman network reference. Check current assignments with podman network inspect podman. To override the defaults, edit /etc/containers/containers.conf under the [network] section: use default_subnet to change the default network range and default_subnet_pools to control allocation for new networks. The old CNI-based interface names (cni-podman0) only appear on systems that have not yet migrated to Netavark.


MTU Mismatches: The Subnet Conflict Nobody Sees Coming

MTU mismatches sit adjacent to subnet overlaps but are missed far more often. Docker hardcodes an MTU of 1500 bytes on every bridge interface it creates, regardless of the MTU your host's physical interface is actually using. On cloud providers this frequently causes silent packet loss. Google Cloud VPCs default to MTU 1460. AWS VPCs default to 1500, but VPN connections into GCP drop to 1460. OpenStack tenant networks frequently use 1450 to accommodate VXLAN headers. Any VPN tunnel subtracts further.

The failure mode is particularly deceptive: small packets work fine, but large TCP transfers, TLS handshakes with large certificate chains, and HTTP responses with big bodies silently stall or truncate. This is because TCP sets the Don't Fragment (DF) bit, so oversized packets are dropped rather than fragmented. If the ICMP "Fragmentation Needed" response is blocked by a firewall (extremely common in cloud environments), the sender never learns the path MTU and keeps retransmitting at the wrong size. The symptom looks identical to an intermittent firewall block or a DNS failure.

Diagnose it from the host before touching any container configuration:

MTU mismatch diagnosis
# Check the host interface MTU vs docker0 MTU
$ ip link show eth0 | grep mtu
# Then check what Docker created
$ ip link show docker0 | grep mtu
# If docker0 MTU > eth0 MTU -- you have the problem

# Test path MTU from inside a running container
$ docker exec mycontainer ping -M do -s 1472 -c 3 8.8.8.8
# 1472 bytes payload + 28 bytes IP/ICMP headers = 1500 total
# If this fails but -s 1400 succeeds, MTU mismatch is confirmed

# Check fragmentation failure counter inside the container
$ docker exec mycontainer cat /proc/net/snmp | grep -A1 "^Ip:"
# Look for FragFails -- if this counter is increasing, packets are being dropped

The fix is to set the mtu key in /etc/docker/daemon.json to match or fall below the host interface MTU, then recreate affected networks. For cloud-hosted Docker you should generally use 1450 as a safe default, which leaves headroom for VXLAN or GRE tunnel overhead if present. You can also set MTU per-network in Compose files:

daemon.json — global MTU
{
  "mtu": 1450
}
docker-compose.yml — per-network MTU
networks:
  app-net:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: "1450"
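The arithmetic behind these numbers is simple to sketch. The overhead values below are typical per-encapsulation figures, not authoritative constants; the exact cost depends on the specific tunnel configuration:

```python
# Typical encapsulation overheads in bytes (illustrative values).
OVERHEAD = {"vxlan": 50, "gre": 24, "wireguard": 60}

def inner_mtu(physical_mtu: int, *tunnels: str) -> int:
    """Largest MTU a container interface can safely use under these tunnels."""
    return physical_mtu - sum(OVERHEAD[t] for t in tunnels)

def max_icmp_payload(mtu: int) -> int:
    """The ping -s value that exactly fills the MTU: 20 bytes IPv4 header
    plus 8 bytes ICMP header are consumed before the payload."""
    return mtu - 28

print(inner_mtu(1500, "vxlan"))   # 1450 -- why 1450 is the common safe default
print(max_icmp_payload(1500))     # 1472 -- the -s value used in the test above
print(inner_mtu(1460, "wireguard"))  # 1400 -- GCP VPC plus a WireGuard tunnel
```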

Port Binding Collisions

Port conflicts are the most immediately visible category of container networking problem. When you map a container port to a host port that is already occupied, Docker refuses to start the container and throws an error. The error messages are usually clear -- Bind for 0.0.0.0:8080 failed: port is already allocated or address already in use -- but the underlying cause is not always obvious.

Common Causes

The port might be held by a host-level service (Nginx, Apache, PostgreSQL, a development server), by another running container, or by a stopped container whose port reservation has not been released. The process-to-port identification guide covers the full range of tools for this. On some systems, Docker's internal proxy (docker-proxy) continues holding the port even after a container crashes or is forcefully killed. Stale Compose stacks and orphaned project networks also contribute, especially in development environments where stacks are frequently started and stopped.

Finding the Culprit

port diagnosis
# Find what process owns a specific port
$ sudo ss -tlnp | grep :8080

# Alternative using lsof
$ sudo lsof -i :8080

# List all Docker containers and their port mappings
$ docker ps -a --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# Find containers bound to a specific port
$ docker ps --filter "publish=8080"

# Check for zombie or exited containers holding ports
$ docker ps -a | grep -E "Exited|Dead"

If ss shows docker-proxy as the owner, the port is held by another container. If it shows a system service like nginx or postgres, you need to either stop that service or remap your container to a different host port.

The Hidden Cost of docker-proxy

There is a largely undocumented performance trap embedded in Docker's default port publishing behavior. Every time you publish a port, Docker spawns a docker-proxy process in userspace to handle traffic forwarding for that binding. On high-connection-rate services such as databases, this userspace relay copies each packet through a separate process before it reaches the container, instead of letting the kernel forward it directly. Measured benchmarks with PostgreSQL show this drops external connection throughput to roughly 40% of what hairpin NAT delivers. You can verify whether docker-proxy is running for a given port with ps aux | grep docker-proxy.

The alternative is hairpin NAT, which handles port forwarding entirely in kernel space via iptables DNAT rules. You can switch to it by adding "userland-proxy": false to /etc/docker/daemon.json. There is an important side effect to know before doing this: when the userland proxy is disabled, Docker sets the kernel parameter net.ipv4.route_localnet=1 on the host bridge interface. This allows the kernel to route traffic addressed to 127.0.0.x loopback addresses back out through the bridge rather than dropping it. Without the userland proxy, connecting from the Docker host itself to a container using localhost:PORT only works via the IPv4 loopback address, not the IPv6 equivalent [::1], because route_localnet has no IPv6 counterpart. Verify the current state with sysctl net.ipv4.conf.docker0.route_localnet — a return value of 1 confirms hairpin NAT is active.
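The IPv4/IPv6 loopback caveat is easy to demonstrate without Docker at all: 127.0.0.1 and [::1] are separate address families, and a listener on one is invisible on the other. A self-contained sketch:

```python
import socket
import threading

# A service listening only on the IPv4 loopback -- analogous to what
# route_localnet-based hairpin NAT provides for host-to-container traffic.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=lambda: srv.accept(), daemon=True).start()

def reachable(host: str) -> bool:
    try:
        socket.create_connection((host, port), timeout=2).close()
        return True
    except OSError:
        return False

print(reachable("127.0.0.1"))  # True  -- the IPv4 loopback path works
print(reachable("::1"))        # False -- nothing listens on the IPv6 loopback
```

With the userland proxy disabled, host-to-container connections behave like the second case when addressed via [::1]: route_localnet only exists for IPv4, so the IPv6 loopback path has no equivalent.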

Pro Tip

Add "userland-proxy": false to /etc/docker/daemon.json on any host where containers serve high-frequency connections. The CIS Docker Benchmark also recommends this as a hardening measure. Check first that your application does not connect from the host to the container using an IPv6 loopback address, as hairpin NAT does not support that path.

Resolution Strategies

The simplest fix is to change the host port in your -p flag or Compose file. Instead of -p 8080:80, use -p 8081:80. For development environments, use environment variables for port configuration so team members can avoid collisions on shared machines:

docker-compose.yml
services:
  web:
    image: nginx:alpine
    ports:
      - "${WEB_PORT:-8080}:80"
  api:
    image: myapp/api:latest
    ports:
      - "${API_PORT:-3000}:3000"

For container-to-container communication, skip port publishing entirely. Services on the same Docker network can reach each other by service name without exposing ports to the host. Only the public-facing entry point (a reverse proxy, for example) needs a published port.

Pro Tip

If Docker's port bindings persist after a crash and the port remains occupied even though no container is running, restart the Docker daemon with sudo systemctl restart docker. This clears phantom port reservations from Docker's internal state.

Binding to a specific IP instead of all interfaces can also prevent unexpected conflicts. Use 127.0.0.1:8080:80 to bind only to localhost, ensuring the port is not exposed externally and reducing the chance of collisions with services bound to a different interface.

DNS Resolution Failures

DNS problems inside containers are common, confusing, and almost always caused by the interaction between the container runtime and the host's DNS configuration. If your containers report failed to query external DNS server, that article covers the specific error in depth. The canonical symptom is that your host resolves hostnames just fine, but containers get Temporary failure in name resolution on every lookup.

The systemd-resolved Problem

On modern Linux distributions -- Ubuntu, Fedora, Arch, and others -- DNS is handled by systemd-resolved, which runs a stub listener on 127.0.0.53. The host's /etc/resolv.conf points to this address. Docker reads the host's resolv.conf and copies its nameserver entries into each container's /etc/resolv.conf. The problem is that 127.0.0.53 only exists on the host's loopback interface. Inside a container's isolated network namespace, that address does not exist, so every DNS query fails.

A network namespace is a kernel construct that gives a process its own isolated copy of the entire network stack — interfaces, routing tables, socket tables, and loopback interface. When Docker creates a container, it creates a new network namespace and assigns it a virtual Ethernet pair (veth). One end of the pair lives in the container's namespace as eth0; the other end attaches to the Docker bridge in the host namespace.

Crucially, each namespace has its own loopback interface (lo), and 127.0.0.0/8 is only meaningful within that namespace. When systemd-resolved binds its stub listener to 127.0.0.53, it binds inside the host's network namespace. From inside a container's namespace, 127.0.0.53 refers to the container's own loopback — and nothing is listening there. The stub listener on the host is unreachable from that namespace regardless of any routing configuration.

This is why the fix requires either pointing containers at a nameserver that is actually reachable from their namespace (a real upstream server reached through normal routing, or Docker's embedded DNS, which listens at 127.0.0.11 inside each container's own namespace on user-defined networks), or changing the host's resolv.conf to reference real upstream server addresses rather than the loopback stub.
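The check automates cleanly: any nameserver entry that is a loopback address will be unreachable from a container's namespace. A minimal sketch of such an audit, run here against an example resolv.conf string rather than the live file:

```python
import ipaddress

def loopback_nameservers(resolv_conf: str) -> list:
    """Flag nameserver entries that point at loopback addresses --
    these are unreachable from inside a container's network namespace."""
    flagged = []
    for line in resolv_conf.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == "nameserver":
            if ipaddress.ip_address(parts[1]).is_loopback:
                flagged.append(parts[1])
    return flagged

# Typical host file on a systemd-resolved distribution:
host_conf = "nameserver 127.0.0.53\noptions edns0 trust-ad\n"
print(loopback_nameservers(host_conf))  # ['127.0.0.53'] -- containers will fail
```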

diagnosing DNS failure
# Check what the host resolv.conf says
$ cat /etc/resolv.conf
# If you see "nameserver 127.0.0.53" -- this is the problem

# Find the real upstream DNS servers
$ resolvectl status | grep "DNS Servers"

# Or read the resolved file with actual upstream entries
$ cat /run/systemd/resolve/resolv.conf

# Test DNS from inside a container
$ docker run --rm alpine nslookup google.com

# Test with explicit DNS to confirm it is a config issue
$ docker run --rm --dns 8.8.8.8 alpine nslookup google.com

If the last command succeeds but the one before it fails, you have confirmed a DNS configuration conflict.

Fixing Docker DNS

The cleanest fix is to configure Docker to use upstream DNS servers directly. Edit /etc/docker/daemon.json:

/etc/docker/daemon.json
{
  "dns": ["8.8.8.8", "8.8.4.4"],
  "dns-search": []
}

Restart Docker after editing. Replace the Google DNS addresses with your organization's internal DNS servers if containers need to resolve private hostnames. For environments behind a corporate VPN, you will need the VPN's DNS servers here as well, since public resolvers cannot look up internal domains.

An alternative approach is to change which resolv.conf the host presents. Instead of the stub file, point the symlink to the version that contains real upstream servers:

$ sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

This preserves systemd-resolved for the host while giving Docker (and all containers) access to the actual DNS server addresses.

Caution

Disabling the systemd-resolved stub listener entirely (DNSStubListener=no in /etc/systemd/resolved.conf) fixes Docker DNS, but can break host-level services that depend on the stub address. Use this only if you understand the full impact on your host's DNS resolution chain.

Default Bridge vs. User-Defined Networks

There is a critical distinction that catches many administrators. Containers on Docker's default bridge network (bridge) do not get Docker's embedded DNS server. They rely entirely on the host's DNS configuration, which is where the systemd-resolved conflict bites. Containers on user-defined bridge networks get Docker's internal DNS server at 127.0.0.11, which handles both container-to-container name resolution and upstream forwarding.

The fix in many cases is simply to stop using the default bridge. Create a custom network and attach your containers to it:

user-defined network
# Create a custom bridge network
$ docker network create app-net

# Run containers on the custom network
$ docker run --network app-net --name api myapp:latest
$ docker run --network app-net --name db postgres:16

# The api container can now resolve "db" by name
$ docker exec api nslookup db

Docker Compose creates a user-defined network for each project by default, which is one reason Compose stacks tend to have fewer DNS issues than standalone docker run commands.

The ndots Value, Search Domain Injection, and the Three-Nameserver Ceiling

Three DNS behaviors interact to produce failures that look completely unrelated to each other. Understanding all three is the difference between solving the problem in minutes and spending an afternoon on it.

The ndots trap. Docker's embedded DNS server sets options ndots:0 in every container's /etc/resolv.conf on user-defined networks. The ndots value controls how many dots a hostname needs before the resolver treats it as a fully-qualified domain name and skips the search domain list. With ndots:0, a lookup for myservice is sent directly to the upstream resolver without appending any search domains first. This is the correct default for container-to-container resolution, but it conflicts with corporate environments where internal hostnames are reached via short names and a search domain. If your host's /etc/resolv.conf contains an ndots value that Docker inherits and then appends its own, you can end up with two conflicting ndots declarations in the container's /etc/resolv.conf, which resolvers handle inconsistently. The fix is to set dns-opts explicitly in daemon.json:

/etc/docker/daemon.json — controlling ndots and search
{
  "dns": ["10.0.0.53", "8.8.8.8"],
  "dns-search": ["corp.example.com"],
  "dns-opts": ["ndots:1"]
}
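What ndots actually changes is the order in which candidate FQDNs are tried. The following is a simplified model of glibc's search-list behavior, not an implementation of any real resolver:

```python
def query_order(name: str, search: list, ndots: int) -> list:
    """Order in which a glibc-style resolver tries candidate FQDNs:
    names with >= ndots dots go upstream as-is first, otherwise the
    search domains are appended and tried first."""
    absolute = name.rstrip(".")
    expanded = [f"{absolute}.{d}" for d in search]
    if name.count(".") >= ndots:
        return [absolute] + expanded
    return expanded + [absolute]

# ndots:0 -- the bare name goes straight upstream, as Docker defaults to
print(query_order("myservice", ["corp.example.com"], 0))
# ['myservice', 'myservice.corp.example.com']

# ndots:1 -- the short name is expanded through the search list first
print(query_order("myservice", ["corp.example.com"], 1))
# ['myservice.corp.example.com', 'myservice']
```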

The three-nameserver ceiling. glibc's resolver silently ignores any nameserver entries beyond the third in /etc/resolv.conf. This is a hard limit compiled into glibc (MAXNS in resolv.h), not a kernel setting, and there is no workaround except trimming the list to three entries. Docker copies the host's nameserver list into containers. If your host has four or more nameservers configured -- common in environments with split-DNS setups or redundant resolvers -- your containers will never see the fourth entry, and if that entry is the one that resolves your internal service domain, lookups fail silently. Check the nameserver count inside a running container with docker exec <name> cat /etc/resolv.conf and count the nameserver lines. If your working resolver is buried below position three, reorder the host's nameserver list or consolidate resolvers.
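The truncation is mechanical and easy to model. A sketch of the effective list glibc ends up with, using a hypothetical four-resolver configuration:

```python
MAXNS = 3  # glibc's compiled-in resolver limit

def effective_nameservers(resolv_conf: str) -> list:
    """Nameservers glibc will actually consult: everything past MAXNS
    is silently ignored."""
    servers = [line.split()[1] for line in resolv_conf.splitlines()
               if line.startswith("nameserver")]
    return servers[:MAXNS]

conf = ("nameserver 10.0.0.53\nnameserver 10.0.1.53\n"
        "nameserver 8.8.8.8\nnameserver 10.9.9.53\n")
print(effective_nameservers(conf))
# ['10.0.0.53', '10.0.1.53', '8.8.8.8'] -- the fourth entry never gets queried
```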

Search domain inheritance and container name collisions. If your host's /etc/resolv.conf contains a search directive and that search domain contains a hostname that matches a container name, Docker's DNS can resolve the wrong thing. For example, if your search domain is corp.example.com and you have both a container named web and a host web.corp.example.com in DNS, the container lookup can resolve to the external host instead of the container, producing subtle failures that only reproduce on certain DNS configurations. The "dns-search": [] directive in daemon.json strips all inherited search domains from containers, preventing this class of collision.


Firewall Rule Interference

Both Docker and Podman inject firewall rules into the host to enable container networking. Docker manipulates iptables directly, inserting NAT rules for port mapping and FORWARD rules for inter-container traffic. Podman (via its Netavark backend) does the same, and as of Fedora 41, defaults to nftables for rule management. When these injected rules conflict with your existing firewall configuration, containers lose network connectivity in ways that are extremely hard to diagnose.

Docker and iptables

Docker inserts a DOCKER chain into the filter table, reached via a jump from the FORWARD chain, and a second DOCKER chain into the nat table. If you run a host firewall that sets the default FORWARD policy to DROP (as many hardened configurations do), Docker's rules need to be evaluated before the drop policy takes effect. If the ordering is wrong -- for example, if your firewall flushes and rebuilds its rules after Docker has already inserted its chains -- container traffic gets dropped silently.

inspecting Docker firewall rules
# Check Docker's FORWARD chain rules
$ sudo iptables -L FORWARD -n -v

# Check Docker's NAT rules
$ sudo iptables -L -n -t nat | grep -A5 DOCKER

# Verify that ip_forward is enabled (required for containers)
$ sysctl net.ipv4.ip_forward
# Must be 1 -- if 0, container networking is completely broken

# Enable if disabled
$ sudo sysctl -w net.ipv4.ip_forward=1
Warning

UFW on Ubuntu has an important and counterintuitive interaction with Docker. Rather than Docker's traffic being blocked by UFW, the opposite is true: Docker bypasses UFW entirely by inserting PREROUTING and FORWARD rules into iptables before UFW evaluates them. As documented by Docker, traffic to published container ports is diverted in the nat table before reaching UFW's INPUT or FORWARD chains. This means published ports are reachable from the internet even when UFW appears to deny them. The recommended fix is to use the DOCKER-USER chain -- an empty chain Docker leaves for administrators -- by adding rules to /etc/ufw/after.rules. Binding containers to 127.0.0.1 (e.g. 127.0.0.1:8080:80) also prevents external exposure without touching firewall rules. See the Docker packet filtering documentation for the authoritative explanation.

Linux netfilter organizes rules into five tables: raw, mangle, nat, filter, and security. At each processing hook, chains registered by these tables are evaluated in that fixed priority order. UFW operates primarily in the filter table — specifically in the INPUT, OUTPUT, and FORWARD chains. Docker injects rules into the nat table's PREROUTING chain. Because the nat PREROUTING hook runs before any filter chain, Docker's DNAT rules redirect incoming traffic to containers before UFW ever sees the packet.

When an external request arrives for port 8080, the kernel processes it through PREROUTING in the nat table first. Docker's rule says: rewrite the destination to the container's IP and port. The packet's destination changes. By the time the packet reaches UFW's INPUT or FORWARD chain in the filter table, it looks like local traffic going to the container, not external traffic going to the host. UFW's rule that blocks port 8080 from the internet never matches it.

The DOCKER-USER chain exists in the filter table and is evaluated after Docker's DNAT but before the containers accept the connection — specifically in the FORWARD chain. Rules placed there can inspect the rewritten packet and drop or accept it, giving administrators a hook that Docker intentionally preserves across daemon restarts.

"Docker and ufw use firewall rules in ways that make them incompatible."

Docker Documentation — Packet Filtering and Firewalls

Docker Engine 28 and the Unpublished Port Exposure That Existed for Years

Until Docker Engine 28.0 (released February 2025), any host on the same Layer 2 network segment as your Docker host could reach unpublished container ports directly if it added a route to the container's RFC 1918 subnet. This was not a firewall misconfiguration — it was the default behavior of every Docker installation using iptables. An attacker on the same LAN could run ip route add 172.17.0.0/16 via YOUR_DOCKER_HOST_IP and then connect directly to any port on any container, whether published or not, without any credentials beyond LAN access.

Docker Engine 28.0 fixed this by inserting DROP rules into the iptables raw table's PREROUTING chain for traffic destined for container addresses. Unlike rules in the filter table, raw table rules are evaluated before connection tracking, which prevents the FORWARD chain's default policy from being the only guard. If you are running Docker older than 28.0.0 on a machine that is on a shared LAN or corporate network, every container on that host has exposed all its ports to the local network segment since Docker was installed.

Caution

If you upgraded to Docker Engine 28.0 and some service broke without explanation, check whether it was relying on direct LAN-to-container routing without a published port. The upgrade silently closed that path. Add explicit -p port mappings, or set "ip-forward-no-drop": true in daemon.json to restore the old behavior while you audit which services depended on it. Also note a separate vulnerability introduced in 28.2.0 and fixed in 28.3.3: CVE-2025-54388 (GHSA-x4rx-4gw3-53p4) causes ports published to 127.0.0.1 to become accessible from other machines on the LAN after a firewalld reload. Docker should recreate its raw table DROP rules automatically after a firewalld reload, but versions 28.2.0–28.3.2 fail to do so. If you are on any 28.2.x or 28.3.0–28.3.2 release and use firewalld, update to 28.3.3 or later, or work around it by restarting the Docker daemon after each firewalld reload. Versions older than 28.2.0 are not affected by this specific CVE.

Docker Engine 29 and the Experimental nftables Backend

Docker Engine 29.0 introduced an experimental native nftables firewall backend, selectable via "firewall-backend": "nftables" in daemon.json or the --firewall-backend=nftables flag on dockerd. This is a significant architectural shift: unlike the default iptables backend (which injects rules into the legacy iptables subsystem even on systems running nftables), the native nftables backend creates rules directly in nftables tables named ip docker-bridges and ip6 docker-bridges. This eliminates the iptables-legacy compatibility shim entirely on distributions like Fedora 41+ where nftables is the system firewall.

There are critical limitations to know before enabling it. The nftables backend is experimental and its rule structure may change between releases -- Docker explicitly warns against modifying its nftables tables directly, as it claims full ownership of them. More importantly, overlay networks (used by Docker Swarm) have not yet been migrated from iptables. Enabling the nftables backend while running in Swarm mode is not supported. IP forwarding is also not automatically enabled by Docker when running the nftables backend -- you must enable it manually via sysctl net.ipv4.ip_forward=1 and make it permanent in /etc/sysctl.conf. With the nftables backend, Docker reports an error at bridge network creation time (or at daemon startup for the default bridge) if IP forwarding is disabled, rather than silently enabling it as the iptables backend does.

If you have previously used the DOCKER-USER iptables chain to insert custom rules before Docker's forwarding policy, that chain does not exist in the nftables backend. The replacement mechanism is to create a separate nftables table with base chains of the same type and hook point as Docker's chains. Base chain priority controls evaluation order: use a lower priority value than Docker's chains to run your rules first. Docker's nftables documentation lists its priority values for each base chain. This is a meaningful operational change — any existing rules in DOCKER-USER must be manually migrated before switching backends.

If you are on Fedora 41+ running Podman with its nftables backend and now also running Docker 29 with "firewall-backend": "nftables", both runtimes write into the nftables framework -- but into separate tables, so direct collision is reduced. However, if you have any custom nftables rulesets that flush all tables on reload, both Docker and Podman will lose their rules simultaneously. Structure your custom ruleset to preserve the ip docker-bridges, ip6 docker-bridges, and Netavark tables when rebuilding.

"The new network stack is faster, more reliable, and has better support."

Red Hat Blog — Podman 4.0's New Network Stack

Podman's networking stack has undergone significant changes. Older versions used CNI (Container Network Interface) with iptables. Current versions use Netavark, which supports both iptables and nftables as firewall backends. Starting with Fedora 41, nftables is the default.

This transition introduces a specific conflict: if you have running containers when the system switches from iptables to nftables (for example, during a distribution upgrade), the old iptables rules are not cleaned up. You end up with two sets of firewall rules from two different backends, and the behavior is unpredictable. A reboot resolves this, since both sets of rules are runtime-generated and do not persist across boots.

Podman firewall diagnostics
# Check which network backend Podman is using (netavark or cni)
$ podman info --format '{{ .Host.NetworkBackend }}'

# List nftables rules created by Netavark (table names are lowercase)
$ sudo nft list ruleset | grep -i -A10 netavark

# Check for leftover iptables rules (should be empty on nftables systems)
$ sudo iptables -L -n | grep -i netavark

# If using firewalld, ensure Podman interfaces are trusted
$ sudo firewall-cmd --zone=trusted --add-interface=podman0 --permanent
$ sudo firewall-cmd --reload

If you manage your own nftables ruleset and run rootful Podman, be aware that Netavark inserts its own chains into the nat and filter tables. If your ruleset flushes all rules on reload, Netavark's chains get destroyed and containers lose connectivity. The recommended approach is to structure your nftables ruleset so that it preserves Netavark-managed chains, or to restart containers after any firewall reload.
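One way to structure that, sketched as an /etc/nftables.conf-style file: flush only the tables you own instead of the whole ruleset (my_filter is a placeholder name; the rules shown are a generic example):

```nft
#!/usr/sbin/nft -f
# Declare-then-flush makes this file idempotent while leaving Docker's and
# Netavark's tables intact. A bare 'flush ruleset' here would destroy theirs.
table inet my_filter
flush table inet my_filter
table inet my_filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif lo accept
        tcp dport 22 accept
    }
}
```

With this pattern, reloading your firewall never touches container networking, and no container restarts are needed afterwards.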

Note

Running rootful Docker and rootful Podman simultaneously on the same host causes a specific documented conflict: Docker adds an iptables rule that blocks all forwarding traffic, which kills external connectivity for Podman containers. This is noted explicitly in the Fedora Netavark nftables change documentation. The workaround is either to add an explicit iptables rule allowing Podman's traffic before Docker's DROP rule, or to revert Podman's Netavark backend to iptables mode via containers.conf. The cleanest solution remains running Podman rootless, which uses userspace networking and never touches host-level iptables or nftables at all.
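Both workarounds can be sketched as follows; the podman0 interface name and rule positions assume a default rootful Podman setup:

```shell
# Option A: allow Podman's bridge traffic ahead of Docker's FORWARD DROP
sudo iptables -I FORWARD -i podman0 -j ACCEPT
sudo iptables -I FORWARD -o podman0 -j ACCEPT

# Option B: revert Netavark to the iptables driver in
# /etc/containers/containers.conf:
#   [network]
#   firewall_driver = "iptables"
```

Option A's rules are runtime-only and must be reapplied after reboot (or persisted via your distribution's iptables-persistence mechanism); Option B survives reboots but keeps Podman on the legacy backend.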

Rootless Containers and Firewall Limitations

Rootless Podman cannot create real network bridges or manipulate iptables/nftables rules. Instead, it uses userspace networking tools -- slirp4netns or pasta (from the passt project) -- to provide connectivity. These tools run entirely in user space and do not touch the host firewall at all. The tradeoff is reduced performance and more limited networking capabilities, but the benefit is zero firewall conflicts. If firewall interference is a persistent problem in your environment, rootless containers sidestep the issue entirely.

Multi-Runtime Coexistence

Before addressing how to run the two runtimes together, it helps to have a precise map of where their networking architectures diverge. The differences are not cosmetic — they change which conflicts are possible and which mitigations apply.

Comparison by property -- Docker (rootful) vs Podman (rootful) vs Podman (rootless):

Network backend -- Docker rootful: libnetwork (iptables). Podman rootful: Netavark (iptables or nftables). Podman rootless: pasta / slirp4netns (userspace).
Default subnet -- Docker rootful: 172.17.0.0/16 (docker0). Podman rootful: 10.88.0.0/16 (podman0). Podman rootless: userspace NAT, no bridge.
Touches host iptables/nftables -- Docker rootful: yes, always. Podman rootful: yes. Podman rootless: no.
Embedded DNS server -- Docker rootful: 127.0.0.11 (user-defined networks only). Podman rootful and rootless: aardvark-dns on all non-default networks (the default podman network uses an older resolver path).
Firewall conflicts with UFW -- Docker rootful: yes (bypasses UFW via PREROUTING NAT rules). Podman rootful: yes (via Netavark chains). Podman rootless: no.
Coexists with rootful Docker -- Podman rootful: conflicts (Docker's FORWARD DROP breaks Podman NAT). Podman rootless: safe (no shared kernel firewall state).
Config file for address pools -- Docker rootful: /etc/docker/daemon.json. Podman rootful: /etc/containers/containers.conf. Podman rootless: ~/.config/containers/containers.conf.
nftables support -- Docker rootful: experimental as of Engine 29.0 ("firewall-backend": "nftables"); not yet supported in Swarm mode. Podman rootful: default on Fedora 41+, selectable elsewhere via the containers.conf firewall_driver. Podman rootless: not applicable (no kernel rules).

Running Docker and Podman side by side is increasingly common, especially on developer workstations and CI systems. However, combining two container runtimes that each manage their own network infrastructure creates a minefield of potential conflicts.

Each runtime maintains its own set of bridge interfaces, subnet allocations, and firewall rules. Docker's docker0 bridge and Podman's podman0 bridge may claim overlapping subnets if neither is configured with explicit address pools. Their respective firewall chains can interfere with each other, especially if both runtimes are running rootful and both inject rules into the same iptables/nftables tables.

The safest approach is to configure non-overlapping address pools for each runtime and run Podman rootless whenever possible. This ensures Podman uses userspace networking and never touches the host firewall, leaving Docker's iptables chains undisturbed.
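That pool separation can be made concrete; the two fragments below use example ranges (10.201.x for Docker, 10.202.x for Podman) that you should replace with ranges unused in your environment:

```
# /etc/docker/daemon.json -- Docker carves all new networks from this pool
{
  "default-address-pools": [
    { "base": "10.201.0.0/16", "size": 24 }
  ]
}

# /etc/containers/containers.conf -- rootful Podman uses a disjoint range
[network]
default_subnet = "10.202.0.0/24"
default_subnet_pools = [
  {"base" = "10.202.0.0/16", "size" = 24},
]
```

Restart both daemons after editing, and verify with docker network inspect and podman network inspect that newly created networks land in the expected ranges.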

The Multi-Network Default Gateway Trap

This is one of the most reliably confusing Docker networking behaviors, and it affects single-runtime setups as much as multi-runtime ones: when you connect a running container to a second network using docker network connect, Docker can silently replace the container's default gateway with the gateway of the newly connected network. The container's outbound internet traffic then routes through the second network instead of the first. Long-running connections (Redis, WebSockets, database connections) that were established before the second network was attached do not notice immediately -- they stay open but stop receiving data, producing connections that appear live but are effectively dead.

The mechanism is documented but not prominent: when a container is connected to multiple networks, its external connectivity is provided via the first non-internal network in lexical (alphabetical) order. Connecting to a network whose name sorts earlier than the current default-gateway network will silently re-route all outbound traffic. This is a common failure mode in blue-green deployment patterns where a container is first started on a backend network and then attached to a frontend network mid-flight.

Warning

If you use docker network connect on a live container and afterwards find that its external requests stall or long-lived connections drop, the default gateway has likely shifted. Check with docker exec <container> ip route and look at which interface the default via line uses. To control which network provides the default gateway, use the --gw-priority flag when connecting (supported since Docker Engine 28.0): docker network connect --gw-priority 1 frontend mycontainer. Higher priority wins. The default priority is 0. For Compose, structure your network names so that the network you want to provide the default gateway sorts first alphabetically, since Docker uses the first non-internal network in lexical order.
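Because the fallback rule is plain lexical ordering of network names, you can sanity-check which network would win without touching a live stack; a trivial sketch with example names:

```shell
# Docker picks the first non-internal network in lexical order as the
# default-gateway network (absent an explicit gw-priority). Simulate:
printf '%s\n' frontend backend app_backend | sort | head -n1
# -> app_backend would provide the default gateway
```

Run the same sort over your Compose network names before deploying and you know in advance where outbound traffic will route.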

IPv6 Dual-Stack Conflicts

IPv6 container networking is a consistent source of subtle failures because Docker handles IPv4 and IPv6 very differently by default, and the asymmetry catches engineers off guard. By default, Docker creates bridge networks with IPv4 only. IPv6 is opt-in via "ipv6": true in daemon.json. The ip6tables option is enabled by default once you opt into IPv6 — Docker documentation explicitly states it "is enabled by-default, but can be disabled." The confusion arises from the interaction between that setting and the host kernel's own IPv6 state, which is where real conflicts occur.

The first conflict class: on hosts where the kernel has net.ipv6.conf.all.disable_ipv6=1 set (a common hardening practice on servers that do not need IPv6), Docker silently falls back to IPv4-only even when "ipv6": true is in daemon.json. It does not log an error. Your containers appear to start normally, but any service binding to :: (all interfaces, dual-stack) receives connections on IPv4 only, and services that explicitly bind to an IPv6 address fail with a cryptic bind: cannot assign requested address. Additionally, on older kernels or stripped-down systems, Docker 27+ requires the ip6_tables kernel module to be loaded before creating IPv6 networks with the legacy iptables backend; if it is missing, Docker fails to start with an error about not being able to initialize the filter table.

daemon.json — minimal correct IPv6 configuration
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:dead:beef:1::/64"
}

Note that daemon.json does not permit comments, and since ip6tables defaults to true once IPv6 is enabled, you only need to set it explicitly to disable it.

The fixed-cidr-v6 key assigns an IPv6 subnet to the default bridge. The value must be /64 or smaller — Docker does not accept larger prefixes like /48 or /56 for fixed-cidr-v6 (those larger blocks belong in default-address-pools for dynamic allocation across multiple networks). Use a ULA (Unique Local Address) prefix from the fd00::/8 range — the IPv6 equivalent of RFC 1918 private space, guaranteed not to conflict with globally routable addresses. Do not use link-local prefixes (fe80::/10); Docker does not support them for bridge addressing. For user-defined networks, set the subnet at creation time or in your Compose file:

docker-compose.yml — explicit dual-stack network
networks:
  app-net:
    enable_ipv6: true
    ipam:
      config:
        - subnet: 10.201.1.0/24
        - subnet: fd00:dead:beef:1::/64
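If you would rather not invent a ULA prefix by hand, RFC 4193 intends the 40-bit global ID to be chosen randomly; a simplified shell sketch (it collapses the subnet ID to zero):

```shell
# Generate a random fdxx:xxxx:xxxx::/64 ULA prefix (simplified RFC 4193:
# random 40-bit global ID, subnet ID left at zero)
hex=$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n')
printf 'fd%s:%s:%s::/64\n' \
    "$(echo "$hex" | cut -c1-2)" \
    "$(echo "$hex" | cut -c3-6)" \
    "$(echo "$hex" | cut -c7-10)"
```

Record the generated prefix in your address registry the same way you record IPv4 pools, so it stays unique across hosts.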

The second conflict class is specific to the legacy iptables backend: when Docker 27+ tries to create IPv6 networks on a host that has the ip6_tables kernel module missing or the ip6tables daemon option explicitly set to false, Docker fails at startup with an error initializing the network controller. This surfaces on hardened distributions that strip optional kernel modules and on Docker-in-Docker environments not based on recent official images. The fix is either to load ip6_tables via modprobe ip6_tables (and add it to /etc/modules-load.d/ for persistence), or — if you genuinely do not need IPv6 — to set "ip6tables": false explicitly in daemon.json. Do not set "ipv6": false alone; on Docker 25.0.0–25.0.2 there was a bug where enable_ipv6: false in Compose was insufficient to suppress IPv6 address allocation, fixed in 25.0.3.

IPv6 diagnostic commands
# Is IPv6 disabled globally on this host?
$ sysctl net.ipv6.conf.all.disable_ipv6
# 1 = disabled kernel-wide; containers cannot use IPv6 regardless of daemon.json

# Is IPv6 disabled specifically on the docker0 bridge?
$ cat /proc/sys/net/ipv6/conf/docker0/disable_ipv6

# Verify Docker's ip6tables DOCKER chain exists (present = ip6tables active)
$ sudo ip6tables -L DOCKER -n 2>/dev/null || echo "no ip6tables DOCKER chain -- ip6tables may be disabled"

# Check whether ip6_tables kernel module is loaded
$ lsmod | grep ip6_tables
# If empty and you need IPv6: sudo modprobe ip6_tables

# Verify a container actually received an IPv6 address
$ docker inspect mycontainer \
    --format '{{range .NetworkSettings.Networks}}{{.GlobalIPv6Address}}{{end}}'

# Test IPv6 connectivity from inside the container
$ docker exec mycontainer ping6 -c 3 2606:4700:4700::1111

A third issue specific to Docker Compose v2: on newer Docker Engine versions with "ipv6": true in daemon.json, Compose creates IPv6-capable networks by default. A Compose stack started on a machine with IPv6 enabled creates networks that cannot be cleanly recreated on a machine where IPv6 is disabled — Compose may error on network creation or silently produce IPv4-only networks with mismatched configuration. The fix is to explicitly declare IPv6 settings in every Compose network definition so the configuration is portable and machine-independent.

Inspecting Container Network Namespaces Directly

Almost every guide on container networking stops at docker inspect and docker exec. There is a lower level that is far more revealing: the Linux network namespace itself. Every container has its own network namespace, and you can inspect it directly with standard Linux tools without going through Docker at all. This matters when a container is crashing before you can exec into it, when you suspect Docker's reported state does not match kernel reality, or when you need packet-level traffic captures on a container interface without modifying the container image.

Docker does not expose container network namespace file descriptors via /var/run/netns/ by default, because it manages them internally through /proc. The technique is to find the container's PID and create a symlink that ip netns can use:

direct network namespace inspection
# Get the PID of a running container
$ CPID=$(docker inspect --format '{{.State.Pid}}' mycontainer)

# Symlink the container's netns into /var/run/netns so ip netns can see it
$ sudo mkdir -p /var/run/netns
$ sudo ln -sfT /proc/$CPID/ns/net /var/run/netns/mycontainer

# Inspect the container's routing table and listening ports directly
$ sudo ip netns exec mycontainer ip addr show
$ sudo ip netns exec mycontainer ip route show
$ sudo ip netns exec mycontainer ss -tlnp

# Capture traffic on the container's eth0 from the host -- no exec needed
$ sudo nsenter -t $CPID -n tcpdump -i eth0 -n port 80

# Find the host-side veth peer of the container's eth0
$ IFLINK=$(docker exec mycontainer cat /sys/class/net/eth0/iflink)
$ ip link show | grep "^$IFLINK:"
# That veth interface on the host bridges to the container -- capture here for host-side view

# Clean up the symlink when done
$ sudo rm /var/run/netns/mycontainer

The nsenter approach is especially valuable when a container does not have tcpdump, ss, or ip installed in its image — you run the host's copy of those tools inside the container's network namespace. This technique also works on containers that have crashed but left a process in the process table with a live namespace file descriptor, as long as the PID has not been reaped. For rootless Podman containers, the same approach applies but you must run the nsenter command as the user who owns the container — not as root — because the namespace is owned by that user's UID map.

Capturing on the host-side veth peer (identified by the iflink technique above) gives you a view of all traffic between the container and the bridge, including ARP traffic, which is invisible from inside the container's namespace. This is the correct interface to monitor when debugging ARP resolution failures, duplicate address detection issues, or VLAN mismatches on macvlan networks.

Pro Tip

On systems running containerd directly (not Docker), container network namespaces are accessible the same way via ctr task ls to get the PID, then /proc/<PID>/ns/net. The same nsenter and ip netns exec workflow applies. For Podman rootless, list containers with podman inspect --format '{{.State.Pid}}' mycontainer and use nsenter as the container's owner user.

Prevention: A Systematic Approach

Container networking conflicts are entirely preventable with upfront planning. The following practices eliminate the major categories of conflict before they occur.

First, document your address space. Maintain a registry of which IP ranges are used by your LAN, VPN, cloud VPCs, and container runtimes. Choose non-overlapping ranges for Docker and Podman in daemon.json and containers.conf respectively. Use /24 blocks in your default-address-pools instead of the default large allocations to conserve address space and reduce overlap risk.
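That registry check can be automated; a minimal sketch using python3's standard ipaddress module, where the ranges are placeholders for your own records:

```shell
# Compare a planned container pool against known infrastructure ranges
python3 - <<'EOF'
import ipaddress
pool = ipaddress.ip_network("10.201.0.0/16")   # planned Docker pool
existing = ["10.0.0.0/8", "192.168.1.0/24"]    # LAN / VPN / VPC ranges
for net in existing:
    if pool.overlaps(ipaddress.ip_network(net)):
        print(f"CONFLICT: {pool} overlaps {net}")
EOF
```

With the example values this prints `CONFLICT: 10.201.0.0/16 overlaps 10.0.0.0/8`; an empty output means the planned pool is safe against everything in the registry.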

Second, define explicit subnets in every Compose file. Do not rely on Docker's automatic allocation. Specify ipam.config.subnet for each network in your docker-compose.yml so that the address is predictable and documented.

Third, use user-defined networks exclusively. Never run production or development containers on the default bridge. User-defined bridge networks provide DNS resolution, better isolation, and more predictable behavior.

Fourth, standardize port allocation. Maintain a port registry for your team or organization. Use environment variables for host port mappings. For container-to-container traffic, use Docker networks and service names instead of published ports.
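A pre-flight check against the port registry is a one-liner; this sketch uses ss, which lists listeners without root (the port number is an example):

```shell
# Check whether a planned host port is already claimed by any TCP listener
port=8080
if ss -Htln "sport = :${port}" 2>/dev/null | grep -q .; then
  echo "port ${port} is in use"
else
  echo "port ${port} is free"
fi
```

Running this before docker compose up catches collisions at a point where they are trivial to fix, instead of after a service has silently failed to bind.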

Fifth, set your MTU explicitly. Do not rely on Docker inheriting the correct MTU from the host interface — it does not. Check your host interface MTU with ip link show and set a matching or lower value in daemon.json using the mtu key before any containers run on cloud or VPN-connected hosts.
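Reading the host MTU that your container bridges must not exceed can be scripted; a sketch assuming iproute2 is installed:

```shell
# Find the default-route interface and print its MTU
dev=$(ip -o route show default 2>/dev/null \
    | awk '{for (i = 1; i < NF; i++) if ($i == "dev") { print $(i+1); exit }}')
if [ -n "$dev" ]; then
  echo "MTU on $dev: $(cat /sys/class/net/"$dev"/mtu)"
else
  echo "no default route found"
fi
```

Set daemon.json's mtu key to the printed value or lower; on VPN-connected hosts the tunnel interface usually reports something below 1500.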

Sixth, never put more than a few hundred containers on a single bridge network. The Linux bridge emulates a Layer 2 switch, and as with any broadcast domain, ARP traffic scales quadratically. Bridge networks become unstable and inter-container communications may break when 1,000 or more containers connect to a single network, due to kernel processing of ARP broadcast clones flooding every bridge port simultaneously. For large container counts, use multiple smaller networks with explicit subnet assignments and connect only the containers that need to communicate.

Seventh, if you enable IPv6, set an explicit fixed-cidr-v6 using a /64 ULA prefix. The ip6tables option is on by default once "ipv6": true is set, but the address assignment behavior depends entirely on you providing a correct /64 subnet in fixed-cidr-v6 or in default-address-pools. Without it, Docker generates random ULA /64 subnets for each network (based on the daemon's randomly generated ID), which are not predictable or documentable. Also verify the ip6_tables kernel module is loaded on hardened hosts, since its absence causes Docker daemon startup failures on systems using the legacy iptables backend.

Eighth, do not treat internal: true networks as air-gapped. A Docker network created with internal: true (or --internal) blocks outbound routing to external networks — but containers on that network can still reach the Docker host's own IP address directly. If your host runs any services that containers should not access (a metadata API, a local database, an internal credential store), those services are fully reachable from an "internal" container. The iptables rules Docker creates for internal networks block external routing but do not prevent communication with the host itself. Add explicit iptables DROP rules for the bridge gateway address if you need true isolation from the host.

Ninth, do not assume host.docker.internal resolves inside containers on Linux. On Docker Desktop for macOS and Windows, the special hostname host.docker.internal resolves to the host machine's address and is automatically injected into containers. On Linux with Docker Engine, it does not exist by default. Code that connects to host.docker.internal works on a developer's macOS machine and fails silently on a Linux CI runner or production server. The fix on Linux is to add an explicit host entry to your Compose service or run command: extra_hosts: ["host.docker.internal:host-gateway"]. The special value host-gateway is resolved by Docker to the host's bridge gateway address at container start time. Alternatively, add it daemon-wide via daemon.json's default-network-opts key.
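In a Compose file, that portable fix looks like this (the service and image names are placeholders):

```yaml
services:
  app:
    image: myapp:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

Because host-gateway is resolved by the engine at container start, the same file works unchanged on Docker Desktop and on Linux hosts.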

Tenth, clean up regularly. Orphaned networks, stopped containers, and unused volumes accumulate over time and create stale resource reservations. Run docker system prune periodically to reclaim them.

maintenance routine
# Remove all stopped containers
$ docker container prune -f

# Remove all unused networks
$ docker network prune -f

# Remove dangling images and build cache
$ docker system prune -f

# Full audit: show all networks, containers, and volumes
$ docker system df

Wrapping Up

Container networking conflicts are among the most frustrating infrastructure problems because the symptoms rarely point to the cause. A VPN that drops when a Compose stack starts, a container that cannot reach the internet, a port that refuses to bind despite nothing visibly using it -- these all trace back to the same fundamental issue: container runtimes make aggressive assumptions about the host network, and those assumptions collide with reality.

The categories covered here -- subnet overlaps, MTU mismatches, port collisions, DNS failures (including ndots and nameserver ceiling edge cases), firewall interference, IPv6 dual-stack gaps, the multi-network gateway trap, and direct namespace inspection -- account for the vast majority of container networking problems on Linux. Each has a clear diagnostic path and a permanent fix. The common thread is that prevention through explicit configuration is always cheaper than reactive troubleshooting. Configure your address pools, define your subnets, use user-defined networks, set your MTU, assign explicit IPv6 prefixes, and document your port allocations. These investments pay for themselves the first time a new team member spins up a stack without breaking the network.

How to Diagnose and Resolve Container Networking Conflicts on Linux

Step 1: Audit your current network state

Run ip route to display the full routing table, then run docker network inspect on each Docker network to list its assigned subnet. Compare the subnets against your host interfaces, VPN routes, and LAN ranges using ip addr show and ip route show filtered for the 172, 10, and 192.168 prefixes. Any overlap between a Docker subnet and an existing route is a confirmed conflict.

Step 2: Reconfigure container address pools

Edit /etc/docker/daemon.json and set the bip field to a non-overlapping address for the default bridge, then define default-address-pools with base ranges and a size of 24 to control how Docker allocates subnets for all future networks. Choose private ranges from RFC 1918 space that your organization does not use. Restart the Docker daemon with systemctl restart docker and verify the new subnets with docker network inspect bridge.

Step 3: Resolve port collisions and DNS failures

For port conflicts, run ss -tlnp or lsof -i to find the process occupying the port, then either stop it or remap the container to a different host port. For DNS failures on systemd-resolved hosts, configure explicit upstream DNS servers in daemon.json or point the host resolv.conf symlink to /run/systemd/resolve/resolv.conf. Always prefer user-defined bridge networks over the default bridge, as they provide Docker embedded DNS at 127.0.0.11 for container-to-container name resolution.

Step 4: Address firewall rule interference

Check Docker FORWARD chain rules with iptables -L FORWARD -n -v and verify ip_forward is enabled with sysctl net.ipv4.ip_forward. On UFW systems, add rules to the DOCKER-USER chain via /etc/ufw/after.rules rather than UFW commands, since Docker bypasses UFW by inserting PREROUTING rules before UFW evaluates traffic. For Podman with nftables, check for leftover iptables rules after a backend switch and reboot to clear conflicting dual-backend rule sets.

Frequently Asked Questions

Why does Docker break my VPN or local network connectivity?

Docker automatically assigns subnets from within the 172.16.0.0/12 private range (172.17.0.0/16 and up) to its bridge networks. Many corporate VPNs and internal LANs also use addresses within this range. When Docker claims a subnet that overlaps with your existing network infrastructure, the Linux routing table sends traffic to the Docker bridge instead of your VPN or LAN gateway. The fix is to configure custom, non-overlapping address pools in /etc/docker/daemon.json using the bip and default-address-pools directives.

How do I find which process is using a port that Docker needs?

Use ss -tlnp | grep :PORT or lsof -i :PORT to identify which process owns a specific port. The output shows the process name, PID, and user. If the process is another Docker container, run docker ps --format to list all containers and their port mappings. Once identified, either stop the conflicting process, remap your container to a different host port, or remove the stale container.

Why can my Docker containers not resolve DNS hostnames even though the host works fine?

On distributions using systemd-resolved, the host resolv.conf points to 127.0.0.53, which is the stub listener on the host loopback interface. Containers operate in their own network namespace and cannot reach that address. The solution is to either configure Docker to use upstream DNS servers directly via daemon.json, point the host resolv.conf symlink to /run/systemd/resolve/resolv.conf which contains the real upstream servers, or use user-defined bridge networks which provide Docker's embedded DNS server at 127.0.0.11.

How do I safely run Docker and Podman on the same Linux host?

Running rootful Docker and rootful Podman simultaneously causes firewall conflicts because Docker injects an iptables rule that blocks all forwarding traffic, breaking external connectivity for Podman containers. The cleanest solution is to run Podman rootless, which uses userspace networking via pasta or slirp4netns and never touches host iptables or nftables rules. If you must run both rootful, you can add an explicit iptables rule to allow Podman traffic before Docker FORWARD DROP takes effect, or revert Podman Netavark to iptables mode via containers.conf.