You need the answer right now, so here it is. To see which process is holding a specific port, run one of these two commands -- at least one of them will be present on any modern Linux system:

terminal
# Using lsof -- shows process name, PID, and user
$ sudo lsof -i :8080

# Using ss -- faster on large systems, ships with iproute2
$ sudo ss -tulpn | grep :8080

The output of lsof -i :8080 gives you the process name, its PID, the user running it, and whether it's listening or has an established connection. That's enough to decide what to do next. If you need to kill it: sudo kill <PID>, escalating to kill -9 only if it ignores SIGTERM. If you just needed that one fact, you're done. But if you want to understand why this keeps happening, what these tools actually show you, and how to handle edge cases like TIME_WAIT sockets and IPv6 bindings -- keep reading.

lsof -i: The Common Approach

lsof stands for "list open files," and in Linux, network sockets are files. The -i flag filters to internet connections. Without a port argument, lsof -i lists every open network socket on the system -- which is useful for auditing but overwhelming as a first pass. Adding :PORT scopes the output to exactly what you need.

lsof output explained
$ sudo lsof -i :8080
COMMAND   PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
node     4821    kandi   22u  IPv6  98432      0t0  TCP *:8080 (LISTEN)

Reading left to right: COMMAND is the process name, PID is what you pass to kill, USER tells you who owns the process, and NAME shows the address and state. The *:8080 means it's listening on all interfaces. If you saw 127.0.0.1:8080, the process is only bound to localhost and won't accept external connections.

Note

You need sudo to see processes owned by other users. Without it, lsof only shows processes running under your own account. On a multi-user server, always run with elevated privileges or you'll get an incomplete picture.

Filtering by protocol

By default, lsof -i :8080 shows both TCP and UDP. If you want to narrow it down, you can specify the protocol directly:

protocol filtering
# TCP only
$ sudo lsof -i TCP:8080

# UDP only
$ sudo lsof -i UDP:8080

# All sockets for a specific PID (-a ANDs the filters; without it, lsof ORs them)
$ sudo lsof -a -i -p 4821

# All listening TCP ports (useful for auditing)
$ sudo lsof -i TCP -s TCP:LISTEN

That last one -- -i TCP -s TCP:LISTEN -- is worth bookmarking. It gives you every TCP port currently open and listening on the system along with the owning process. It's a fast way to audit what's exposed before deploying a service or tightening firewall rules.

ss: The Modern Replacement for netstat

If you've been using netstat for years, know that it's been deprecated on many distributions. On Debian, Ubuntu, Fedora, and RHEL-based systems, netstat may not even be installed by default. ss -- part of the iproute2 package -- is its replacement, and it's considerably faster because it queries the kernel directly rather than parsing /proc.

ss usage
# -t TCP, -u UDP, -l listening only, -p show process, -n no DNS resolution
$ sudo ss -tulpn | grep :8080

# Show all listening TCP sockets with process info
$ sudo ss -tlpn

# Show established connections on port 443
$ sudo ss -tnp state established '( dport = :443 or sport = :443 )'

The -n flag is important on servers: without it, ss tries to resolve IP addresses and service names via DNS, which can stall the output badly when DNS is slow or unreachable. Always use -n when you want a fast result.

The process information from ss -p appears in a slightly different format. You'll see something like users:(("node",pid=4821,fd=22)) at the end of each line. The PID is right there in that string.

Pro Tip

On systems with thousands of connections, ss is dramatically faster than lsof. If you're querying a busy database server or load balancer and lsof seems to hang, switch to ss. The output format requires slightly more parsing, but the speed difference is worth it.

fuser: Quick Kill Workflows

fuser is a lesser-known tool in this space but has one advantage: it can identify and kill a port's owner in a single command. It's part of psmisc, which ships on most distributions.

fuser usage
# Show PID using TCP port 8080
$ sudo fuser 8080/tcp
8080/tcp:             4821

# Show PID with verbose output (process name, user, etc.)
$ sudo fuser -v 8080/tcp

# Kill whatever is using port 8080 (sends SIGKILL)
$ sudo fuser -k 8080/tcp

# Kill with a specific signal (SIGTERM is cleaner)
$ sudo fuser -k -TERM 8080/tcp
Caution

fuser -k sends SIGKILL by default, which doesn't give the process a chance to clean up. For databases, message brokers, or any service that writes to disk, prefer -TERM first and only escalate to SIGKILL if the process doesn't exit within a few seconds. Abrupt termination can leave lock files or corrupt in-flight writes.

What About netstat?

You'll still see netstat in a lot of older tutorials and Stack Overflow answers, so it's worth knowing. On systems where it's still available (install it with sudo apt install net-tools on Debian/Ubuntu), the equivalent command is:

netstat (legacy)
# -t TCP, -u UDP, -l listening, -p show PID/program, -n no DNS
$ sudo netstat -tulpn | grep :8080

# All listening ports
$ sudo netstat -tlnp

The flags are almost identical to ss because ss was designed to be a drop-in replacement. netstat is part of the net-tools package, which Wikipedia's netstat entry describes as "mostly obsolete" on Linux, superseded by ss from the iproute2 package. If you're writing scripts or documentation that needs to run on older systems (RHEL 6, older CentOS, ancient Debian releases), netstat may be the safer choice. For anything current, prefer ss. Source: Wikipedia: netstat, Wikipedia: iproute2.

Edge Cases That Catch People Out

TIME_WAIT sockets

A port can appear "in use" even after you've killed the owning process. This is the TCP TIME_WAIT state -- the kernel keeps the socket around for a period (the TCP specification calls for twice the Maximum Segment Lifetime; Linux hard-codes 60 seconds) to handle any delayed packets still in transit. You'll see this in ss output as:

TIME_WAIT example
$ sudo ss -tn | grep 8080
TIME-WAIT   0    0    192.168.1.10:8080    192.168.1.5:54321

There's no process attached to a TIME_WAIT socket -- it's managed entirely by the kernel. You can't kill a process to free it. On Linux, the TIME_WAIT duration is hard-coded at 60 seconds in the kernel source (TCP_TIMEWAIT_LEN) and cannot be changed via sysctl. A common misconception is that net.ipv4.tcp_fin_timeout controls this -- it doesn't. That sysctl governs the FIN_WAIT-2 state, which is a separate part of TCP teardown. Your actual options for TIME_WAIT are: wait it out (60 seconds on Linux), set SO_REUSEADDR in your application code to allow binding to ports in TIME_WAIT, or enable net.ipv4.tcp_tw_reuse to allow reuse of TIME_WAIT sockets for new outbound connections where safe. For development, the simplest fix is just picking a different port temporarily.
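
To gauge how large the TIME_WAIT backlog actually is, you can summarize sockets per state. A minimal sketch -- state_summary is a made-up helper name, and it assumes ss -tan-style output with a single header line:

```shell
# Count TCP sockets per state -- TIME-WAIT spikes show up immediately.
state_summary() {
  awk 'NR > 1 { count[$1]++ } END { for (s in count) print s, count[s] }'
}

# Live usage (needs iproute2):  sudo ss -tan | state_summary
# Demo on a captured sample:
printf '%s\n' \
  'State Recv-Q Send-Q Local Peer' \
  'LISTEN 0 128 0.0.0.0:22 0.0.0.0:*' \
  'TIME-WAIT 0 0 10.0.0.1:8080 10.0.0.2:51000' \
  'TIME-WAIT 0 0 10.0.0.1:8080 10.0.0.3:51001' | state_summary
```

The output order of awk's for-in loop is unspecified, so pipe through sort if you want stable output.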

Technical Reference

The Linux kernel hard-codes TCP_TIMEWAIT_LEN at 60 seconds in include/net/tcp.h. This is documented by Vincent Bernat's detailed analysis of Linux TCP TIME_WAIT behavior. The value is intentionally non-tunable -- proposals to make it configurable have been declined on the grounds that TIME_WAIT serves an important correctness function. See: Coping with the TCP TIME_WAIT state on busy Linux servers.

IPv4 vs IPv6 bindings

On modern Linux systems, a process binding to :::8080 (the IPv6 wildcard) will often also accept IPv4 connections -- depending on the IPV6_V6ONLY socket option and the kernel's net.ipv6.bindv6only setting. This means lsof -i4:8080 might show nothing while the port is clearly in use. Always check both address families:

IPv4 and IPv6
# Check IPv4 only (no space -- the address family is part of the -i spec)
$ sudo lsof -i4:8080

# Check IPv6 only
$ sudo lsof -i6:8080

# Check both (default, no flag needed)
$ sudo lsof -i :8080

# ss shows both -- look at the Local Address column
$ sudo ss -tlpn | grep 8080
LISTEN  0  128  0.0.0.0:8080  0.0.0.0:*  users:(("node",pid=4821,fd=22))
# or, bound to the IPv6 wildcard (older ss versions print :::8080 instead):
LISTEN  0  128  [::]:8080     [::]:*     users:(("node",pid=4821,fd=22))

Ports held by systemd socket units

If you're using systemd socket activation, the socket may be held open by systemd itself -- not by the application process. In this pattern, systemd binds to the port and passes the socket file descriptor to the service when a connection arrives. Running lsof -i :8080 may show systemd as the owner, not your application.

systemd socket activation
# Check for active socket units
$ systemctl list-units --type=socket --state=active

# Inspect a specific socket unit
$ systemctl status myapp.socket

# Stop the socket unit to release the port
$ sudo systemctl stop myapp.socket

If you kill the process but not the socket unit, systemd will simply re-acquire the port. To release the port -- and keep it released -- you need to stop the .socket unit itself.
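
If you suspect systemd is the holder, you can map a port straight to its socket unit by parsing systemctl list-sockets output. A sketch -- socket_for_port is a made-up helper name, and it assumes the default LISTEN/UNIT/ACTIVATES column layout:

```shell
# Given "systemctl list-sockets" rows, print the unit holding a port.
# Field 1 is the LISTEN address, field 2 the socket UNIT.
socket_for_port() {   # usage: ... | socket_for_port PORT
  awk -v p=":$1$" '$1 ~ p { print $2 }'
}

# Live usage:  systemctl list-sockets --no-legend | socket_for_port 8080
# Demo on a captured sample:
printf '%s\n' '0.0.0.0:8080 myapp.socket myapp.service' | socket_for_port 8080
# -> myapp.socket
```

The anchored regex (":8080$") keeps port 80 from matching 8080 and vice versa.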

What Port Occupancy Tells You About Security

Finding unexpected processes on ports is a meaningful security signal. A standard web server should have predictable listeners: 22 for SSH, 80 and 443 for HTTP/HTTPS, maybe a database port on localhost. Anything else deserves investigation.

Running a regular audit of listening ports is a low-effort, high-value habit. The command below gives you a clean snapshot of everything listening on your system, sorted by port number:

full port audit
$ sudo ss -tlpnH | sort -t: -k2 -n   # -H drops the header line so it doesn't get sorted in

# Or with lsof, showing just listening TCP sockets
$ sudo lsof -i TCP -s TCP:LISTEN -n -P | sort -k9

Pay particular attention to any process listening on 0.0.0.0 or :: (all interfaces) that you didn't intentionally configure. A database listening on all interfaces instead of localhost is a common misconfiguration that has caused serious breaches. A process you don't recognize listening on any port is worth investigating immediately -- check the full binary path with ls -la /proc/<PID>/exe and review its network activity with lsof -a -p <PID> -i.
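
That check can be made mechanical by filtering the audit down to wildcard bindings only. A sketch -- wildcard_listeners is a made-up helper name, and the pattern assumes modern ss address formats (0.0.0.0 and [::]; it also tolerates the older * form):

```shell
# Print listeners bound to all interfaces; feed it "ss -tlnH" output.
# Field 4 is the Local Address:Port column.
wildcard_listeners() {
  awk '$4 ~ /^(0\.0\.0\.0|\[::\]|\*):/ { print $4 }'
}

# Live usage:  sudo ss -tlnH | wildcard_listeners
# Demo on a captured sample:
printf '%s\n' \
  'LISTEN 0 128 0.0.0.0:22     0.0.0.0:*' \
  'LISTEN 0 128 127.0.0.1:5432 0.0.0.0:*' \
  'LISTEN 0 128 [::]:80        [::]:*' | wildcard_listeners
# -> 0.0.0.0:22
#    [::]:80
```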

Warning

If you find a process listening on an unexpected port and you can't identify it from its name, don't immediately kill it. First check ls -la /proc/<PID>/exe for the binary path, then cat /proc/<PID>/cmdline | tr '\0' ' ' for the full command line. Killing an unknown process might be taking down a critical service that has a non-obvious name.

The Attacker's Side of the Same Commands

Here's something worth sitting with: the same commands you just learned are used by attackers during post-exploitation to map the environment they've landed in. This isn't hypothetical -- it's documented behavior. Understanding the adversarial use of these tools is part of understanding their full weight.

MITRE ATT&CK T1049 — System Network Connections Discovery covers exactly this: adversaries running ss, lsof, and netstat to enumerate active connections and listening services after initial access. The purpose is to find lateral movement paths, identify services that could be exploited next, and understand what the machine communicates with. MITRE's own documentation for T1049 states that on Linux, netstat and lsof can be used to list current connections -- and Atomic Red Team test #4 for this technique specifically tests ss as the preferred modern alternative. Source: attack.mitre.org/techniques/T1049.

what an attacker's recon looks like
# Enumerate all listening services -- T1049
$ ss -tulpn

# Find internal services not exposed externally (localhost-only)
$ ss -tlpn | grep 127.0.0.1

# Map all outbound connections (identify C2 beaconing candidates)
$ ss -tn state established

# Check what a specific suspicious process is connected to (-a ANDs the filters)
$ lsof -a -p <PID> -i

Attackers aren't just looking for what's exposed outward. Services bound to 127.0.0.1 that are only reachable from the local machine are actually interesting targets -- internal APIs, admin interfaces, and database ports that weren't designed to be hardened because they were assumed to be unreachable. Once an attacker has a shell, those localhost-only services become accessible.

MITRE ATT&CK T1571 — Non-Standard Port describes the other direction: malware and command-and-control frameworks deliberately bind to unusual ports to blend into traffic or avoid detection. Finding a process listening on port 31337, 4444, or an arbitrary high port on a production server warrants immediate scrutiny. Legitimate software tends to use well-known ports or document what it uses. Random high ports from processes with vague names do not. Source: attack.mitre.org/techniques/T1571.

MITRE ATT&CK T1543.002 — Create or Modify System Process: Systemd Service connects to the earlier section on socket activation. Adversaries with sufficient privilege have been observed creating rogue .service and .socket unit files to establish persistent listeners that survive reboots and appear to belong to the system. MITRE's documentation includes real-world examples from groups like Sandworm (Industroyer) and TeamTNT targeting Kubernetes environments. Source: attack.mitre.org/techniques/T1543/002.

Threat Signal

When auditing a potentially compromised system, don't rely solely on lsof or ss. A rootkit operating at the kernel level (T1014 — Rootkit) can patch kernel data structures or hook system calls to hide ports and processes from these tools. If you suspect active compromise, boot from trusted media or compare /proc/net/tcp output against what ss reports. Discrepancies are a strong indicator of kernel-level tampering.

/proc/net: What the Kernel Actually Knows

Every tool covered in this article -- lsof, ss, fuser, netstat -- is reading data that ultimately comes from the Linux kernel's virtual filesystem at /proc. Understanding this layer matters both for incident response and for understanding why tool output can sometimes be misleading.

The kernel exposes raw socket tables at these paths:

/proc/net socket tables
# Raw TCP socket table (IPv4)
$ cat /proc/net/tcp

# Raw TCP socket table (IPv6)
$ cat /proc/net/tcp6

# UDP sockets
$ cat /proc/net/udp

# Unix domain sockets
$ cat /proc/net/unix

The raw output of /proc/net/tcp isn't human-friendly -- local addresses are in little-endian hex, and the port numbers are hex too. But it matters in one important scenario: if you're on a system where ss and lsof have been tampered with, /proc/net/tcp is a lower-level source that's harder (though not impossible) to falsify without a full kernel rootkit. For incident response, comparing the two is a useful sanity check. The iproute2 project's own documentation notes that ss was designed to address /proc/net/tcp scaling problems -- on systems with large numbers of sockets, reading the proc file becomes slow, whereas ss uses kernel netlink sockets for direct, fast retrieval. Source: iproute2 ss documentation (Alexey Kuznetsov).

decoding /proc/net/tcp by hand
# Each row: sl local_address rem_address state ...
$ cat /proc/net/tcp | awk '{print $2, $3, $4}' | head -5
local_address rem_address   st
0100007F:1F90 00000000:0000 0A   # 0A = LISTEN, 0x1F90 = port 8080, 0x7F000001 = 127.0.0.1

# Hex to decimal for port: printf '%d\n' 0x1F90 = 8080
$ printf '%d\n' 0x1F90
8080
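
Going one step further, the whole hex address can be decoded in shell. The IP bytes are stored little-endian, so the dotted quad reads right to left. A small bash sketch -- decode_addr is my own name, not a standard tool:

```shell
# Decode a /proc/net/tcp address like 0100007F:1F90 into 127.0.0.1:8080.
decode_addr() {
  local hex=${1%%:*} port=${1##*:}
  # Little-endian storage: the last hex pair is the first octet.
  printf '%d.%d.%d.%d:%d\n' \
    "0x${hex:6:2}" "0x${hex:4:2}" "0x${hex:2:2}" "0x${hex:0:2}" "0x${port}"
}

decode_addr 0100007F:1F90   # -> 127.0.0.1:8080
```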

The per-process view lives at /proc/<PID>/fd/ -- each open file descriptor is a symlink. Network sockets appear as socket:[inode]. Matching the inode number from /proc/<PID>/fd/ against the inode column in /proc/net/tcp is exactly what lsof does internally when it maps ports to processes.
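
That inode join can be reproduced by hand. A sketch -- inode_to_local is a made-up helper, and the field position assumes the standard /proc/net/tcp layout where the inode is field 10:

```shell
# Find which local address a socket inode belongs to, mirroring what lsof
# does internally when it maps ports to processes.
inode_to_local() {   # usage: inode_to_local INODE FILE...
  awk -v ino="$1" '$10 == ino { print $2 }' "${@:2}"
}

# Live usage (pid 4821 and inode 98432 are the examples from earlier):
#   ls -l /proc/4821/fd | grep -oP 'socket:\[\K[0-9]+'
#   inode_to_local 98432 /proc/net/tcp /proc/net/tcp6
```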

Containers and Network Namespaces

If your system runs Docker, Podman, or any container runtime, lsof and ss will not show you the full picture by default. Each container runs in its own network namespace, with its own port table. A process inside a container binding to port 8080 is invisible to ss run on the host -- because the host and the container are looking at different kernel socket tables.

inspecting container network namespaces
# List all network namespaces
$ sudo ip netns list

# Run ss inside a specific namespace
$ sudo ip netns exec <ns-name> ss -tlpn

# For Docker: find the container PID, then enter its namespace
$ CPID=$(docker inspect --format '{{.State.Pid}}' <container_id>)
$ sudo nsenter -t $CPID -n ss -tlpn

# Or just ask Docker what ports are mapped to the host
$ docker ps --format "table {{.Names}}\t{{.Ports}}"

What is visible on the host is the port-forwarding side of the equation. When Docker maps a container's internal port 8080 to host port 8080, the host-side listener belongs to Docker's proxy process (docker-proxy) or to the kernel's iptables NAT rules directly, depending on your Docker configuration. Running ss -tlpn on the host will show you docker-proxy as the owner of that port -- not the application inside the container.

Note

On systems using Docker with --net=host, the container shares the host's network namespace entirely. In that configuration, ss and lsof on the host will show container processes directly. This is a significant security consideration -- a compromised container with host networking has full visibility into all host-level socket activity.

For Kubernetes environments, the layer above this is even more abstracted. Pod-level port queries belong inside individual pod namespaces, while service exposure is handled by kube-proxy via iptables or IPVS rules at the node level. Running ss on a Kubernetes worker node will show you that service-proxying infrastructure, not application ports directly. Use kubectl exec -it <pod> -- ss -tlpn to query inside a pod's namespace.

What These Tools Can Miss

These tools are reliable under normal operating conditions. But there are scenarios -- relevant to both security responders and curious engineers -- where they give you an incomplete picture.

Kernel rootkits (MITRE ATT&CK T1014) can hook the system calls or kernel data structures that ss and lsof rely on. A well-written rootkit can present a clean port list while hiding a backdoor listener. On a suspected compromised host, always cross-reference tool output against raw /proc/net/tcp reads and, where possible, out-of-band network monitoring (captured traffic, firewall logs) that doesn't depend on the host's own reporting.

LD_PRELOAD hijacks can intercept the library calls used by user-space tools without touching the kernel. A malicious shared library preloaded into lsof's environment could filter results before they're displayed. Running lsof with a clean environment (env -i sudo lsof -i :PORT) helps, but this is cat-and-mouse territory on a compromised machine.

Raw sockets and eBPF programs can receive and transmit traffic without appearing in the normal socket tables at all. ss and lsof enumerate sockets in the conventional socket API. A program using raw sockets (requires CAP_NET_RAW) bypasses the port abstraction entirely. These show up differently -- or not at all -- in standard port queries.

cross-reference approach for incident response
# Compare ss output against the raw kernel tables -- counting like with like:
# ss -tanH lists all TCP sockets with no header; the grep skips /proc header rows
# If ss shows fewer entries, something may be hiding them
$ ss -tanH | wc -l
$ cat /proc/net/tcp /proc/net/tcp6 | grep -c ':'

# Check for suspicious LD_PRELOAD in any running process
# (-a forces text mode: environ is a NUL-separated binary file)
$ sudo grep -a LD_PRELOAD /proc/*/environ 2>/dev/null

# List loaded kernel modules -- rootkits often appear here if not hidden
$ lsmod | sort

# Check eBPF programs currently loaded
$ sudo bpftool prog list
Important Context

On a healthy, uncompromised system, none of this evasion applies. ss and lsof are accurate and authoritative. The evasion scenarios above matter specifically during active incident response -- when you already have reason to distrust the system's own reporting. Routine sysadmin work doesn't require this level of skepticism.

Scripting Port Checks

If you're checking ports in automation, CI pipelines, or healthcheck scripts, you want something robust and parseable. Here are patterns worth keeping in your toolkit:

scripting examples
# Check if a port is in use -- returns 0 if occupied, 1 if free
$ ss -tln | grep -q ':8080 ' && echo "in use" || echo "free"

# Get just the PID using ss (no lsof dependency)
$ ss -tlpn | grep ':8080 ' | grep -oP 'pid=\K[0-9]+'

# Wait for a port to become free before proceeding
$ while ss -tln | grep -q ':8080 '; do sleep 1; done

# In a bash script: check before binding
$ if lsof -Pi :8080 -sTCP:LISTEN -t >/dev/null 2>&1; then
    echo "Port 8080 is taken. Exiting."
    exit 1
  fi
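
For CI and healthcheck scripts, the wait loop above is worth generalizing with a timeout so a stuck port fails the job instead of hanging it forever. A sketch -- wait_until is a made-up helper, not a standard tool:

```shell
# Retry a command once per second until it succeeds or the timeout expires.
wait_until() {   # usage: wait_until TIMEOUT_SECONDS COMMAND...
  local timeout=$1 elapsed=0
  shift
  until "$@"; do
    elapsed=$((elapsed + 1))
    [ "$elapsed" -ge "$timeout" ] && return 1   # gave up
    sleep 1
  done
}

# Wait up to 30s for port 8080 to be free before starting a service:
#   wait_until 30 bash -c '! ss -tln | grep -q ":8080 "'
```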

The -t flag to lsof is especially useful in scripts -- it outputs only the PID with no headers, making it easy to capture and pass directly to kill:

one-liner kill script
# Kill whatever is on port 8080 in one line
$ sudo kill -9 $(sudo lsof -t -i:8080)

# Safer version: check first
$ PID=$(sudo lsof -t -i:8080) && [ -n "$PID" ] && sudo kill -TERM $PID
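
The "safer version" can be taken one step further: send SIGTERM, give the process a grace period, and escalate to SIGKILL only if it's still alive -- matching the advice from the fuser section. A sketch, for a single PID; term_then_kill is my own name, not a standard tool:

```shell
# SIGTERM first, SIGKILL only if the process outlives the grace period.
term_then_kill() {   # usage: term_then_kill PID [GRACE_SECONDS]
  local pid=$1 grace=${2:-3} i
  kill -TERM "$pid" 2>/dev/null || return 0   # no such process: nothing to do
  for i in $(seq "$grace"); do
    kill -0 "$pid" 2>/dev/null || return 0    # exited cleanly
    sleep 1
  done
  kill -KILL "$pid" 2>/dev/null || true       # still alive: force it
}

# Combine with lsof's -t flag (run under sudo for other users' processes):
#   term_then_kill "$(lsof -t -i:8080)"
```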

Quick Reference

Here's the full cheat sheet for the commands covered in this article:

cheat sheet
# --- lsof ---
$ sudo lsof -i :PORT             # who owns this port
$ sudo lsof -i TCP:PORT          # TCP only
$ sudo lsof -t -i :PORT          # PID only (for scripting)
$ sudo lsof -i TCP -s TCP:LISTEN # all listening TCP sockets
$ sudo lsof -a -p PID -i         # all sockets for a PID (-a ANDs the filters)

# --- ss ---
$ sudo ss -tulpn | grep :PORT   # who owns this port
$ sudo ss -tlpn                 # all listening TCP sockets
$ sudo ss -tn                   # all established TCP connections

# --- fuser ---
$ sudo fuser PORT/tcp           # show PID
$ sudo fuser -v PORT/tcp        # verbose: name, user, PID
$ sudo fuser -k -TERM PORT/tcp  # send SIGTERM to owner

# --- netstat (legacy) ---
$ sudo netstat -tulpn | grep :PORT

Wrapping Up

Port conflicts are one of those problems that feel mysterious until you've seen the underlying model once. A port isn't held by a name -- it's held by a file descriptor, owned by a process, which has a PID. lsof and ss are simply ways of reading that kernel state and presenting it in human-readable form.

The fast answer is almost always sudo lsof -i :PORT or sudo ss -tulpn | grep :PORT. The deeper skill is knowing what to do after you find the process -- whether that's killing it cleanly, understanding TIME_WAIT, chasing down a systemd socket unit, or recognizing that an unexpected listener might be worth a security investigation rather than just a quick kill.

The tools in this article are the same tools an attacker uses to map your system from the inside. That symmetry is worth remembering. When you run ss -tlpn as a sysadmin performing an audit (T1049 in your own environment), you're doing exactly what post-exploitation recon looks like from the other side. The difference is authorization and intent -- but the kernel doesn't care. It returns the same data to both.

On hardened or container-heavy systems, add a layer of skepticism: know that namespaces fragment your visibility, that docker-proxy can obscure the real owner of a port, and that on a truly compromised host, the tools themselves may be lying. For those scenarios, /proc/net/tcp and out-of-band traffic capture are your ground truth.

Every listening port is a decision someone made -- or forgot they made. Auditing them regularly is one of the cheapest security habits you can build. Knowing what you're auditing for is what separates maintenance from defense.