Pinggy is a localhost tunneling service that uses SSH under the hood to expose local ports to the public internet -- no software installation required beyond an SSH client, which ships with nearly every Linux distribution by default. A single command can make your local web server reachable from anywhere. The problem is that the command dies the moment your terminal closes, your machine reboots, or the SSH connection drops.

The fix is systemd. By registering the tunnel as a system service, you get automatic startup at boot, supervised restart on failure, centralized logging through the journal, and fine-grained control over when the tunnel starts relative to the network being available. This guide walks through every step -- from generating the right SSH key through to a production-hardened unit file with sensible restart policies and security constraints.

What This Covers

This guide applies to any Linux distribution running systemd -- Debian, Ubuntu, Fedora, RHEL, Arch, and their derivatives. The Pinggy commands use both the classic SSH method and the Pinggy CLI binary. Both approaches are covered. You need a standard user account with sudo access.

Full request / response path
[Diagram] Local App (localhost:8000) → SSH process (-R0, port 443, supervised by systemd) → Pinggy Server (a.pinggy.io) → Public Internet (*.a.pinggy.io) → Remote Client (browser / curl); responses return along the same path.
Quick Reference -- Key Commands
Generate SSH Key
ssh-keygen -t ed25519 -f ~/.ssh/pinggy_tunnel -N ""
Reload systemd
sudo systemctl daemon-reload
Enable + Start
sudo systemctl enable --now pinggy-tunnel.service
Check Status
sudo systemctl status pinggy-tunnel.service
Follow Live Logs
journalctl -u pinggy-tunnel.service -f
Reset After Failure
sudo systemctl reset-failed pinggy-tunnel.service
Verify Unit Syntax
systemd-analyze verify /etc/systemd/system/pinggy-tunnel.service
Score Hardening
systemd-analyze security pinggy-tunnel.service
Lock Config File
sudo chmod 600 /etc/pinggy/tunnel.json
Check Local Port
ss -tlnp | grep 8000

How Pinggy Works Under the Hood

Before writing a unit file, it pays to understand precisely what you are automating. Pinggy's documentation describes it as a tunneling service built on top of SSH remote port forwarding. When you run the standard Pinggy command, you are establishing an SSH connection to a.pinggy.io on port 443 (not the standard SSH port 22 -- this is intentional, since port 443 passes through nearly every firewall as HTTPS traffic). The -R0:localhost:PORT flag asks the remote server to allocate a random public port and forward all traffic on that port back to your local machine.

The result is a public URL -- something like https://randomstring.a.pinggy.io -- that routes through Pinggy's infrastructure to your machine. The free tier assigns a new random URL on each connection. Pinggy Pro users get persistent subdomains and custom domain support, meaning the URL stays the same across reconnections.

According to pinggy.io, the service is designed to be the quickest path to exposing a localhost project publicly via a secure tunnel with a shareable URL -- no configuration overhead required.

Beyond the SSH method, Pinggy also provides a CLI binary -- a standalone executable for Linux, macOS, and Windows -- that wraps the same SSH tunnel with quicker reconnection logic and a more ergonomic set of flags. The CLI is particularly useful for systemd automation because it supports autoreconnect in its saved configuration format and handles transient network drops more gracefully than a raw SSH command.

Both methods -- the SSH command and the CLI -- need a running network connection to work. That single fact drives a significant portion of the unit file design decisions below.

systemd service lifecycle
[Diagram] inactive → activating → active → deactivating → inactive; a stop is clean, while a crash sends the unit through Restart=on-failure back to activating (if under the burst limit) or into the failed state once the limit is hit.

Step 1: The SSH Key Prerequisite

SSH normally prompts for a password or passphrase. Inside a systemd service, there is no terminal, no interactive prompt, and nothing to enter a passphrase into. The service will hang and eventually time out. To avoid this, the SSH key used for the tunnel must have an empty passphrase.

If you already have an SSH key at ~/.ssh/id_rsa or ~/.ssh/id_ed25519 with no passphrase, you can reuse it. If you are not sure, or if you want a dedicated key for the Pinggy service, generate a new one. If you want a full walkthrough of the process, see the guide on generating an Ed25519 SSH key pair. The short version:

terminal
$ ssh-keygen -t ed25519 -C "pinggy-tunnel" -f ~/.ssh/pinggy_tunnel -N ""

The -N "" flag sets the passphrase to empty. The -f flag writes the key to a dedicated file so it does not interfere with your existing keys. The -t ed25519 flag selects the modern Ed25519 algorithm, which is faster than RSA and generally considered at least as secure for this use case.

Key File Permissions

SSH will refuse to use a private key file that is world-readable. After generating the key, verify that ~/.ssh/pinggy_tunnel has permissions 600. Run chmod 600 ~/.ssh/pinggy_tunnel if not. The .pub file should be 644.
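A quick way to confirm both the permission bits and the key type after generation (paths assume the dedicated key from the previous step):

```shell
# Show the private key's permission bits (expect 600)...
stat -c '%a' ~/.ssh/pinggy_tunnel

# ...and the public key's size, SHA256 fingerprint, and type (ED25519).
ssh-keygen -l -f ~/.ssh/pinggy_tunnel.pub
```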

You also need to accept the Pinggy host key the first time you connect, or bypass the check for automated use. The standard Pinggy command already includes -o StrictHostKeyChecking=no for this reason. That option is acceptable here because the tunnel is outbound -- you are not authenticating a remote server's identity in the traditional sense; you are establishing a tunnel to Pinggy's known infrastructure. For maximum security, you can instead do an interactive connection once to record the host key in ~/.ssh/known_hosts, then remove the StrictHostKeyChecking=no flag from the service.

Step 2: Writing the Startup Shell Script

Pinggy's own documentation recommends wrapping the tunnel command in a shell script rather than calling SSH directly from ExecStart. This is sound practice: it keeps the unit file clean, allows you to add logic (environment checks, logging preamble, conditional flags) without modifying the unit file, and makes the script independently testable.

Create the script at a system path so it runs correctly regardless of which user the service runs as:

$ sudo nano /usr/local/sbin/pinggy-tunnel.sh

Here is the minimal version using the SSH method, forwarding local port 8000 to a public URL:

/usr/local/sbin/pinggy-tunnel.sh
#!/bin/sh
# Pinggy SSH tunnel -- systemd-managed
# Forwards localhost:8000 to a Pinggy public URL

exec ssh \
  -p 443 \
  -R0:localhost:8000 \
  -o StrictHostKeyChecking=no \
  -o ServerAliveInterval=30 \
  -o ServerAliveCountMax=3 \
  -o ExitOnForwardFailure=yes \
  -i /home/youruser/.ssh/pinggy_tunnel \
  a.pinggy.io

Replace youruser with the actual username whose SSH key you want to use, and replace 8000 with the port your local service is listening on. The options here deserve explanation:

Flag What it does
-p 443 Connects to Pinggy on port 443, bypassing firewalls that block port 22.
-R0:localhost:8000 Requests a dynamically allocated remote port forwarded to local port 8000. The 0 tells the server to pick an available port.
-o ServerAliveInterval=30 Sends a keepalive packet every 30 seconds to detect dead connections.
-o ServerAliveCountMax=3 If 3 consecutive keepalives go unanswered (90 seconds total), SSH exits. systemd then restarts it.
-o ExitOnForwardFailure=yes If remote port forwarding fails at connection time (e.g., token conflict), SSH exits immediately rather than connecting without the tunnel. This ensures systemd knows the start failed.
-i /path/to/key Specifies the private key explicitly, avoiding any dependency on SSH agent state.

If you are using a Pinggy token (required for persistent subdomains or Pro features), the destination changes slightly:

/usr/local/sbin/pinggy-tunnel.sh (with token)
#!/bin/sh
# Pinggy SSH tunnel with authentication token

exec ssh \
  -p 443 \
  -R0:localhost:8000 \
  -o StrictHostKeyChecking=no \
  -o ServerAliveInterval=30 \
  -o ServerAliveCountMax=3 \
  -o ExitOnForwardFailure=yes \
  -i /home/youruser/.ssh/pinggy_tunnel \
  YOUR_TOKEN@a.pinggy.io

Replace YOUR_TOKEN with the token from your Pinggy dashboard. Notice the use of exec at the start of both scripts. This replaces the shell process with the SSH process, so systemd's process tracking points directly to the SSH binary rather than a wrapper shell. It makes systemctl status output cleaner and ensures signals (like SIGTERM on stop) reach SSH directly.
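The effect of exec is easy to demonstrate: because the shell is replaced in place rather than forking a child, the PID printed before and after exec is the same number. A minimal sketch:

```shell
# The first $$ is the wrapper shell's PID; after exec, the new shell
# occupies the same process, so the second $$ prints the same number.
sh -c 'echo "wrapper PID: $$"; exec sh -c "echo \"exec PID:    \$\$\""'
```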

Make the script executable:

$ sudo chmod +x /usr/local/sbin/pinggy-tunnel.sh

Step 3: Writing the systemd Unit File

With the script in place, the unit file is the control layer that tells systemd how, when, and under what conditions to run it. Create it at the standard system unit location:

$ sudo nano /etc/systemd/system/pinggy-tunnel.service

Here is a well-constructed unit file that goes beyond Pinggy's minimal documentation example:

/etc/systemd/system/pinggy-tunnel.service
[Unit]
Description=Pinggy Localhost Tunnel
Documentation=https://pinggy.io/docs/run_tunnel_on_startup/linux/
After=network-online.target
Wants=network-online.target
StartLimitIntervalSec=120s
StartLimitBurst=5

[Service]
Type=simple
User=youruser
Group=youruser
ExecStart=/usr/local/sbin/pinggy-tunnel.sh
Restart=on-failure
RestartSec=10s
StandardOutput=journal
StandardError=journal
SyslogIdentifier=pinggy-tunnel

# Environment (adjust HOME if running as a system user)
Environment=HOME=/home/youruser

[Install]
WantedBy=multi-user.target

Every directive here has a specific purpose. The [Unit] section controls ordering and dependencies. The [Service] section controls how the process runs. The [Install] section controls when the service is pulled into the boot sequence.

Understanding the Network Dependency

The choice between network.target and network-online.target is one of the most commonly misunderstood aspects of writing network-dependent services. The official systemd documentation on this topic is explicit: network.target only indicates that the network management software has started -- not that any interface has a configured IP address. For a service that needs to make an outbound TCP connection (which is exactly what Pinggy's SSH tunnel requires), starting after network.target alone is a race you will eventually lose.

"network-online.target is a target that actively waits until the network is 'up'." -- systemd.io

network-online.target is the correct choice here because Pinggy needs a routable IP address and DNS resolution to connect to a.pinggy.io. The Wants=network-online.target line is a soft dependency -- if the network target fails (rare, but possible), the service will still attempt to start rather than aborting entirely. The After=network-online.target line ensures ordering: systemd will not start the tunnel service until the network target is satisfied, regardless of whether the dependency is hard or soft.

Enabling network-online.target

On systems using NetworkManager, the wait service is NetworkManager-wait-online.service. On systems using systemd-networkd, it is systemd-networkd-wait-online.service. One of these must be enabled for network-online.target to work. Check with systemctl is-enabled NetworkManager-wait-online.service. If disabled, enable it with sudo systemctl enable NetworkManager-wait-online.service. On older Debian systems using ifupdown, enable ifupdown-wait-online.service instead.

Restart Policy and Rate Limiting

Restart=on-failure tells systemd to restart the service whenever the process exits with a non-zero exit code, is killed by a signal, or times out -- but not when you explicitly stop it with systemctl stop. This is the right policy for a tunnel service: you want automatic recovery from network drops and Pinggy server timeouts, but you do not want the service bouncing back when you intentionally shut it down for maintenance.

RestartSec=10s adds a 10-second cooldown between restart attempts. Without this, a persistent failure (e.g., the Pinggy service being temporarily unreachable) would cause rapid restart loops that consume CPU and flood the journal.

StartLimitIntervalSec=120s combined with StartLimitBurst=5 implements a circuit breaker: if the service fails to start 5 times within 120 seconds, systemd stops trying and marks the service as failed. This prevents an infinite restart loop when something is fundamentally broken (wrong token, SSH key issue, misconfigured local port). You can manually reset this with sudo systemctl reset-failed pinggy-tunnel.service once you have fixed the underlying issue.

The User Directive and HOME

Running the service as your regular user account (rather than root) means it has access to your SSH key in ~/.ssh/ and your user's known_hosts file. The Environment=HOME=/home/youruser line is important: when systemd launches a service as a specific user, it does not always set HOME to that user's home directory, and SSH uses HOME to locate the .ssh/ directory. Setting it explicitly removes that ambiguity.

Dedicated System User

For a cleaner setup, consider creating a dedicated system user for the tunnel: sudo useradd -r -s /bin/false -d /var/lib/pinggy pinggy. Copy or generate an SSH key into /var/lib/pinggy/.ssh/ and set appropriate ownership. Then set User=pinggy and Environment=HOME=/var/lib/pinggy in the unit file. This follows the principle of least privilege and keeps tunnel credentials separate from your personal user account.

Hardening the Unit File

The unit file shown above is functional, but systemd offers a set of sandboxing directives that limit what the tunnel process can access on the host. These are worth adding to any service that runs persistently and communicates with the internet. If you want to go further at the SSH layer itself, the guide on hardening SSH beyond the basics covers certificate authentication, jump hosts, and fail2ban configurations. For the unit file, add the directives below to the [Service] section:

/etc/systemd/system/pinggy-tunnel.service -- with hardening
[Unit]
Description=Pinggy Localhost Tunnel
Documentation=https://pinggy.io/docs/run_tunnel_on_startup/linux/
After=network-online.target
Wants=network-online.target
StartLimitIntervalSec=120s
StartLimitBurst=5

[Service]
Type=simple
User=pinggy
Group=pinggy
ExecStart=/usr/local/sbin/pinggy-tunnel.sh
Restart=on-failure
RestartSec=10s
StandardOutput=journal
StandardError=journal
SyslogIdentifier=pinggy-tunnel
Environment=HOME=/var/lib/pinggy

# Sandboxing directives
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/var/lib/pinggy/.ssh /run
PrivateTmp=yes
RestrictSUIDSGID=yes

[Install]
WantedBy=multi-user.target

Each directive has a defined effect on what the process can reach:

Directive What it restricts
NoNewPrivileges=yes Prevents the process from gaining additional privileges through setuid binaries or file capabilities.
ProtectSystem=strict Mounts the entire filesystem read-only except for /dev, /proc, and /sys. Combined with ReadWritePaths, this means the tunnel process cannot write to arbitrary locations on disk.
ProtectHome=yes Makes /home, /root, and /run/user inaccessible to the service. Since the tunnel runs as a dedicated system user with its home at /var/lib/pinggy, this has no operational impact but prevents access to other users' home directories.
ReadWritePaths Carves out specific paths that the process is allowed to write to within the otherwise read-only view. The SSH process needs /var/lib/pinggy/.ssh for key access and /run for the optional URL capture script.
PrivateTmp=yes Gives the service its own private /tmp mount, isolated from the system /tmp.
RestrictSUIDSGID=yes Prevents the service from creating setuid or setgid files.
Verifying Hardening

After enabling these directives and reloading, run systemd-analyze security pinggy-tunnel.service. This command scores the unit file's hardening posture and lists additional directives you could apply. A score below 4 is considered well-hardened for a network-facing service.

Alternative: Using the Pinggy CLI Binary

SSH method
  • No binary to install or update
  • Works on any machine with an SSH client
  • Credentials stay in the shell script
  • Relies on systemd restart for reconnection
  • More verbose unit file configuration

Pinggy CLI
  • Standalone binary to download and keep updated
  • Built-in autoreconnect handles connection drops
  • Credentials live in a saved JSON configuration file
  • Simpler ExecStart -- no wrapper script required

The Pinggy CLI is a standalone binary that wraps the SSH tunnel with faster reconnection logic and supports persistent JSON configuration files. According to Pinggy's CLI documentation, it "provides more robust tunnels with quicker reconnections when your tunnels are interrupted." For a systemd service where uptime matters, this is a meaningful advantage.

Download the CLI binary for your architecture from pinggy.io/cli and place it at a predictable system path:

terminal
# Download (replace with actual release URL from pinggy.io/cli)
$ wget https://pinggy.io/cli/linux_amd64/pinggy -O /tmp/pinggy
$ sudo mv /tmp/pinggy /usr/local/bin/pinggy
$ sudo chmod +x /usr/local/bin/pinggy

The CLI supports saved configuration files. Save a config once interactively, then reference it from the unit file. This keeps credentials out of the unit file itself and allows you to update tunnel settings without touching systemd:

terminal -- save a config
# Run once interactively to save configuration
$ pinggy --token YOUR_TOKEN -l http://localhost:8000 --saveconf /etc/pinggy/tunnel.json

The saved JSON file looks like this (tokens and options included):

/etc/pinggy/tunnel.json
{
    "configname": "production-tunnel",
    "type": "http",
    "localaddress": "localhost:8000",
    "serverport": 443,
    "serveraddress": "a.pinggy.io",
    "token": "YOUR_TOKEN",
    "autoreconnect": true,
    "force": true,
    "httpsOnly": true
}

Note "autoreconnect": true and "force": true. The reconnect flag makes the CLI retry automatically on connection drop. The force flag disconnects any existing tunnel using the same token before connecting, which prevents the "A tunnel with the same token is already active" error that can occur when a previous run did not clean up cleanly.
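Before pointing systemd at the file, it is worth catching JSON typos early. A minimal sketch, assuming python3 is available (any JSON validator works just as well):

```shell
# Fail loudly if the config is not valid JSON or lacks the fields the tunnel
# relies on; the path matches the --saveconf location used above.
python3 -c 'import json; c = json.load(open("/etc/pinggy/tunnel.json")); print("ok:", c["type"], c["localaddress"])'
```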

The corresponding unit file using the CLI is simpler because the startup script can call the CLI directly:

/etc/systemd/system/pinggy-tunnel.service (CLI variant)
[Unit]
Description=Pinggy Localhost Tunnel (CLI)
Documentation=https://pinggy.io/docs/cli/
After=network-online.target
Wants=network-online.target
StartLimitIntervalSec=120s
StartLimitBurst=5

[Service]
Type=simple
User=youruser
Group=youruser
ExecStart=/usr/local/bin/pinggy --conf /etc/pinggy/tunnel.json
Restart=on-failure
RestartSec=10s
StandardOutput=journal
StandardError=journal
SyslogIdentifier=pinggy-tunnel

[Install]
WantedBy=multi-user.target

Step 4: Enabling, Starting, and Verifying

With the unit file saved, you need to reload systemd's configuration before it will recognize the new service:

$ sudo systemctl daemon-reload

Always run daemon-reload after creating or modifying a unit file. Without it, systemd continues operating from its cached state and may not reflect your changes. Enable the service to start at boot, and start it immediately in one command:

$ sudo systemctl enable --now pinggy-tunnel.service

The --now flag combines enable and start. Enabling writes the symlinks that pull the service into multi-user.target; starting launches the process immediately without requiring a reboot. Verify the service came up correctly:

terminal
$ sudo systemctl status pinggy-tunnel.service
 pinggy-tunnel.service - Pinggy Localhost Tunnel
     Loaded: loaded (/etc/systemd/system/pinggy-tunnel.service; enabled)
     Active: active (running) since Thu 2026-03-12 11:04:22 UTC; 8s ago
   Main PID: 3847 (ssh)
      Tasks: 1 (limit: 4915)
     Memory: 4.1M
        CPU: 0.081s
     CGroup: /system.slice/pinggy-tunnel.service
             └─3847 ssh -p 443 -R0:localhost:8000 ...

Mar 12 11:04:22 hostname ssh[3847]: Tunnel URL: https://abcd1234.a.pinggy.io

You should see active (running) and a tunnel URL in the output. If the status shows failed or is stuck in activating, move directly to the logging section below.

Viewing Logs and Monitoring the Tunnel

Because the unit file sets StandardOutput=journal and SyslogIdentifier=pinggy-tunnel, all output from the tunnel process goes into the systemd journal under that identifier. This includes the tunnel URL that Pinggy prints on connection -- which is particularly useful on the free tier, where the URL changes each time.

journal queries
# Follow live output from the tunnel service
$ journalctl -u pinggy-tunnel.service -f

# Show all logs since last boot
$ journalctl -u pinggy-tunnel.service -b

# Show the last 50 lines without paging
$ journalctl -u pinggy-tunnel.service -n 50 --no-pager

# Extract just the tunnel URL from logs
$ journalctl -u pinggy-tunnel.service -b | grep -i "pinggy.io"

The -f flag follows the log in real time, equivalent to tail -f. The -b flag limits output to the current boot, which is useful for seeing the URL assigned after the last reboot without wading through historical entries.
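To get just the bare URL rather than the whole log line, grep -o prints only the matching portion. A sketch, assuming the "Tunnel URL: https://..." line shape shown in the status output above:

```shell
# Sample journal line of the documented shape:
line="Mar 12 11:04:22 hostname ssh[3847]: Tunnel URL: https://abcd1234.a.pinggy.io"

# -o emits only the match, yielding the bare URL.
echo "$line" | grep -oE 'https://[A-Za-z0-9.-]+\.pinggy\.io'

# In practice, feed journalctl through the same filter and keep the newest match:
#   journalctl -u pinggy-tunnel.service -b \
#     | grep -oE 'https://[A-Za-z0-9.-]+\.pinggy\.io' | tail -1
```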

Persisting Logs Across Reboots

By default on many distributions, the journal is stored in /run/log/journal/ and does not survive a reboot. To make logs persistent, create /var/log/journal/ and restart journald: sudo mkdir -p /var/log/journal && sudo systemctl restart systemd-journald. After this, journalctl -u pinggy-tunnel.service without -b will show the full history including the URL from previous sessions.

Troubleshooting Common Failures

Several failure modes appear regularly when automating Pinggy with systemd. Here is a systematic approach to each one.

Service Starts Then Immediately Exits

This is almost always an SSH configuration issue. Check the journal immediately:

$ journalctl -u pinggy-tunnel.service -n 30 --no-pager

Look for SSH error messages. Common causes are: wrong path to the private key (Warning: Identity file not accessible), passphrase-protected key (Enter passphrase for key -- the process hangs then times out), or a port conflict (Error: remote port forwarding failed). The -o ExitOnForwardFailure=yes flag in the script ensures the last case produces an immediate exit rather than a silent no-tunnel connection.

Service Fails to Start During Boot But Works When Started Manually

This is the classic symptom of a network race condition. The service is starting before a routable IP address exists, the SSH connection to a.pinggy.io fails immediately, and the restart policy eventually gives up. Confirm this by checking whether network-online.target is being reached:

terminal
# Check if network-online.target was reached this boot
$ systemctl is-active network-online.target

# Check which wait service is enabled
$ systemctl is-enabled NetworkManager-wait-online.service
$ systemctl is-enabled systemd-networkd-wait-online.service

# Enable the appropriate one
$ sudo systemctl enable NetworkManager-wait-online.service

StartLimitBurst Reached -- Service Stuck in Failed State

When the service hits the StartLimitBurst ceiling, systemd stops attempting restarts and the service enters a permanently failed state until you intervene. Fix the underlying problem first, then reset the failure counter:

$ sudo systemctl reset-failed pinggy-tunnel.service && sudo systemctl start pinggy-tunnel.service

Token Already In Use Error

If the SSH process was killed without cleanly closing the tunnel (e.g., a hard reboot or OOM kill), Pinggy's server may still consider the previous tunnel active. The error message in the journal will read something like "Login is not allowed: A tunnel with the same token is already active." Adding force to the SSH destination (YOUR_TOKEN+force@a.pinggy.io) -- or setting "force": true in the CLI config -- disconnects the previous session and allows the new one to connect. Make sure that flag is present in your script or config file.

Verifying the Unit File Syntax

Before blaming the tunnel itself, verify the unit file has no syntax errors:

$ systemd-analyze verify /etc/systemd/system/pinggy-tunnel.service

A clean file produces no output. Any output indicates a problem that will cause the service to behave unexpectedly.

Advanced Patterns

Multiple Tunnels

If you need to expose multiple local ports simultaneously, create separate unit files rather than cramming multiple SSH processes into one service. Name them descriptively -- for example, pinggy-web-tunnel.service and pinggy-api-tunnel.service, each with its own startup script.

Each gets its own SyslogIdentifier, making logs easy to isolate. Each can be individually stopped, restarted, or disabled without affecting the others.

Automatically Capturing the Tunnel URL

On the free tier, the tunnel URL changes on every reconnection. If downstream processes (webhooks, DNS records, notification systems) need the current URL, you can capture it from the journal using a simple systemd oneshot service that runs after the tunnel starts:

/usr/local/sbin/capture-pinggy-url.sh
#!/bin/sh
# Wait for the tunnel URL to appear in the journal, then write it to a file
journalctl -u pinggy-tunnel.service -f --no-pager \
  | grep --line-buffered -m 1 "a.pinggy.io" \
  | awk '{ print $NF }' \
  > /run/pinggy-current-url

Here grep -m 1 stops after the first match, so the pipeline exits once the URL has been captured. Other services can read the URL from /run/pinggy-current-url at startup. Note that /run/ is a tmpfs mount and is cleared on reboot -- appropriate for a value that changes on every tunnel reconnection.
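A sketch of the companion oneshot unit that could run this script -- the unit name, ordering, and RemainAfterExit choice are assumptions to adapt:

```ini
[Unit]
Description=Capture current Pinggy tunnel URL
# Start only once the tunnel service is up; stop if the tunnel stops.
After=pinggy-tunnel.service
BindsTo=pinggy-tunnel.service

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/capture-pinggy-url.sh
# Keep the unit "active" after the script exits so other units can order against it.
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```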

TCP and TLS Tunnels

The unit file structure is identical for TCP and TLS tunnels. Only the SSH destination changes. For a TCP tunnel (useful for SSH access, database connections, or any raw TCP service):

/usr/local/sbin/pinggy-tcp-tunnel.sh
#!/bin/sh
# TCP tunnel -- bypasses HTTP layer for raw protocol forwarding

exec ssh \
  -p 443 \
  -R0:localhost:22 \
  -o StrictHostKeyChecking=no \
  -o ServerAliveInterval=30 \
  -o ServerAliveCountMax=3 \
  -o ExitOnForwardFailure=yes \
  -i /home/youruser/.ssh/pinggy_tunnel \
  tcp@a.pinggy.io

The tcp@ prefix in the destination tells Pinggy to create a TCP tunnel rather than an HTTP tunnel; with a token, combine them as YOUR_TOKEN+tcp@a.pinggy.io. For the CLI, set "type": "tcp" in the configuration file.

Keeping Tokens Out of Scripts and Unit Files

If you are using a Pinggy token -- required for persistent subdomains, custom domains, or Pro features -- it is a credential that should not appear in plain text inside a shell script or a JSON file with permissive permissions. Two patterns handle this correctly.

Using an EnvironmentFile

systemd supports loading environment variables from a file at service start time. Store the token in a file readable only by the service user, then reference the variable in your startup script:

/etc/pinggy/tunnel.env
# Restrict this file: sudo chmod 600 /etc/pinggy/tunnel.env
PINGGY_TOKEN=your_token_here
terminal -- create and lock down the env file
$ sudo mkdir -p /etc/pinggy
$ sudo touch /etc/pinggy/tunnel.env
$ sudo chmod 600 /etc/pinggy/tunnel.env
$ sudo chown pinggy:pinggy /etc/pinggy/tunnel.env

Add EnvironmentFile to the [Service] section of the unit file, then reference $PINGGY_TOKEN in your startup script:

unit file [Service] addition
# Add to [Service] section -- the leading dash means: continue if file is absent
EnvironmentFile=-/etc/pinggy/tunnel.env
/usr/local/sbin/pinggy-tunnel.sh -- token from env
#!/bin/sh
# Token injected at runtime via EnvironmentFile -- not stored in this script
exec ssh \
  -p 443 \
  -R0:localhost:8000 \
  -o StrictHostKeyChecking=no \
  -o ServerAliveInterval=30 \
  -o ServerAliveCountMax=3 \
  -o ExitOnForwardFailure=yes \
  -i /var/lib/pinggy/.ssh/pinggy_tunnel \
  "${PINGGY_TOKEN}@a.pinggy.io"
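You can sanity-check the expansion outside systemd by loading the env file into a shell manually. This is only an approximation -- systemd's EnvironmentFile parser is not a shell -- but plain KEY=value lines behave identically:

```shell
# set -a exports every variable assigned while sourcing the file,
# roughly mimicking how systemd injects the values into the service.
set -a
. /etc/pinggy/tunnel.env
set +a
echo "destination: ${PINGGY_TOKEN}@a.pinggy.io"
```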

Permissions for the CLI Config File

If you are using the CLI method with a saved JSON config at /etc/pinggy/tunnel.json, that file contains your token in plain text. The same restriction applies:

$ sudo chmod 600 /etc/pinggy/tunnel.json && sudo chown pinggy:pinggy /etc/pinggy/tunnel.json

A world-readable config file in /etc/ means any local user on the system can read your Pinggy token. Run ls -la /etc/pinggy/ to verify permissions after making changes.

Security Posture: When Is a Persistent Tunnel Appropriate?

A systemd-managed Pinggy tunnel means a port on your machine is reachable from the public internet any time the system is running. This is appropriate for development servers, webhooks, home lab services, and remote access to machines behind NAT. It is not a substitute for a reverse proxy with TLS termination, rate limiting, and authentication in front of sensitive production workloads. Before making a service permanently public, confirm what it exposes. A local dev server with no authentication is a very different risk profile from a webhook receiver that only accepts POST requests from a known IP range. Pinggy Pro's IP allowlist feature and HTTP Basic Auth options provide additional controls when open access is not acceptable.

The Silent Failure: When the Local Service Goes Down

There is one failure mode this setup does not handle automatically: the tunnel can stay healthy while the local application it is forwarding has crashed. From systemd's perspective, the SSH process is still running and the service shows active (running). From the outside, requests to the public URL return connection errors because nothing is listening on localhost:8000.

This is invisible to the tunnel's health checks because the tunnel is not responsible for the application -- it only forwards traffic.

If the local application is a systemd service, declare a dependency directly in the tunnel unit file. The tunnel will not start until the application is running, and if the application stops, the tunnel stops with it:

unit file [Unit] section -- application binding
[Unit]
Description=Pinggy Localhost Tunnel
After=network-online.target myapp.service
Wants=network-online.target
# BindsTo: if myapp stops, the tunnel stops automatically
BindsTo=myapp.service

BindsTo is stronger than Requires: if myapp.service stops for any reason, systemd automatically stops the tunnel service. When the application restarts, restart the tunnel manually or add PartOf=myapp.service to link their lifecycles fully.
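Put together, a [Unit] section that propagates both stops and restarts from the application might look like this sketch (myapp.service is the placeholder name from above):

```ini
[Unit]
Description=Pinggy Localhost Tunnel
After=network-online.target myapp.service
Wants=network-online.target
# BindsTo: stop the tunnel when myapp stops
BindsTo=myapp.service
# PartOf: also restart the tunnel when myapp is restarted
PartOf=myapp.service
```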

For processes that are not systemd services, a pre-check in the startup script accomplishes the same goal:

/usr/local/sbin/pinggy-tunnel.sh -- with pre-check
#!/bin/sh
# Verify the local app is accepting connections before opening the public tunnel
if ! nc -z localhost 8000 2>/dev/null; then
  echo "Local service not listening on :8000, aborting."
  exit 1
fi

exec ssh \
  -p 443 \
  -R0:localhost:8000 \
  -o StrictHostKeyChecking=no \
  -o ServerAliveInterval=30 \
  -o ServerAliveCountMax=3 \
  -o ExitOnForwardFailure=yes \
  -i /var/lib/pinggy/.ssh/pinggy_tunnel \
  "${PINGGY_TOKEN}@a.pinggy.io"

A non-zero exit triggers Restart=on-failure, so systemd retries after RestartSec. The tunnel will not open to the internet until the application is actually responding on the expected port.

Verifying the Setup End to End

After enabling and starting the service, confirm the full path works -- not just that the SSH process is alive. Start by pulling the assigned URL from the journal:

terminal
# Get the assigned tunnel URL from this boot's logs
$ journalctl -u pinggy-tunnel.service -b | grep -i "pinggy.io"

# Test HTTP response from the same machine (uses public routing)
$ curl -I https://abcd1234.a.pinggy.io

# Check that the local app is the one responding
$ curl -v https://abcd1234.a.pinggy.io/

# If curl returns connection errors but the service is active, check the local port
$ ss -tlnp | grep 8000

A successful response confirms the SSH tunnel is established, Pinggy is routing traffic, and the local application is accepting and responding to requests. If the service shows active (running) but curl times out, the problem is downstream of the tunnel -- either the local application is not listening or it is bound to a different address than localhost. The guide on finding which process is using a port on Linux covers the ss, lsof, and fuser approaches for pinning down exactly what is or is not bound to a given port.

Test After Every Reboot

The first time you configure the service, reboot the machine and confirm the tunnel comes up automatically without any intervention. Run journalctl -u pinggy-tunnel.service -b immediately after reboot. This verifies that network-online.target was reached, the service was pulled into the boot sequence correctly, and the tunnel URL was assigned. A clean boot test is the only reliable way to catch race conditions that do not appear when starting the service manually.

Wrapping Up

Automating Pinggy with systemd is straightforward once you understand the pieces involved. The SSH key must have no passphrase. The unit file must declare a dependency on network-online.target, not just network.target, and the appropriate wait service must be enabled for that target to function. The restart policy should include a rate limit so a persistent failure does not produce an infinite restart loop. Logs go to the journal by default and survive reboots if you create /var/log/journal/.

Tokens and credentials belong in a 600-permission file loaded via EnvironmentFile, not hardcoded in shell scripts. The hardening directives in the unit file limit what the tunnel process can reach on your filesystem. If the local service the tunnel forwards can go down independently, either bind the two units together with BindsTo or add a pre-connection check to the startup script.

The Pinggy CLI binary is worth considering for production deployments: its saved configuration format separates credentials from the unit file, and its built-in reconnection logic handles transient network interruptions more gracefully than the raw SSH command. Either method -- SSH or CLI -- integrates cleanly with systemd's supervision model once the unit file is correctly structured.

For teams that need a fixed, stable URL across reconnections, the Pinggy Pro persistent subdomain feature eliminates the problem of a changing URL entirely. Combined with the systemd service described here, you get a tunnel that starts at boot, recovers from failures automatically, and never requires manual intervention for routine restarts.

Step-by-Step Recap

Step 1: Generate a passphrase-free SSH key

Generate a dedicated Ed25519 SSH key with no passphrase so systemd can authenticate to Pinggy without an interactive prompt. Run: ssh-keygen -t ed25519 -C pinggy-tunnel -f ~/.ssh/pinggy_tunnel -N ""

Step 2: Write the startup shell script

Create a shell script at /usr/local/sbin/pinggy-tunnel.sh that runs the SSH tunnel command with keepalive and failure options. Make it executable with chmod +x /usr/local/sbin/pinggy-tunnel.sh.

Step 3: Create the systemd unit file

Create /etc/systemd/system/pinggy-tunnel.service with [Unit], [Service], and [Install] sections. Set Wants=network-online.target and After=network-online.target, Restart=on-failure, RestartSec=10s, and sandboxing directives including NoNewPrivileges=yes and ProtectSystem=strict.
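Putting those directives together, a minimal version of the unit file might look like the sketch below. The dedicated pinggy user and the /etc/pinggy/tunnel.env path are assumptions for illustration; substitute whatever account and credentials file your setup uses.

```ini
# /etc/systemd/system/pinggy-tunnel.service -- minimal sketch
[Unit]
Description=Pinggy SSH tunnel
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=pinggy
ExecStart=/usr/local/sbin/pinggy-tunnel.sh
# Token loaded from a 600-permission file, not hardcoded (assumed path)
EnvironmentFile=/etc/pinggy/tunnel.env
Restart=on-failure
RestartSec=10s
# Sandboxing
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes

[Install]
WantedBy=multi-user.target
```

Run systemd-analyze verify on the file (as in the quick reference) to catch typos before enabling it.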

Step 4: Enable and start the service

Reload systemd with sudo systemctl daemon-reload, then enable and start the service in one command: sudo systemctl enable --now pinggy-tunnel.service. Verify it is running with sudo systemctl status pinggy-tunnel.service.

Step 5: Confirm the tunnel URL and monitor logs

Check the journal for the public tunnel URL assigned by Pinggy: journalctl -u pinggy-tunnel.service -b | grep -i pinggy.io. Follow live output with journalctl -u pinggy-tunnel.service -f.

Frequently Asked Questions

Why does my Pinggy service fail to start at boot but work when started manually?

This is a network race condition. The service starts before a routable IP address is available, so the SSH connection to a.pinggy.io fails immediately. Fix it by ensuring network-online.target is active: set After=network-online.target and Wants=network-online.target in your unit file, and enable NetworkManager-wait-online.service or systemd-networkd-wait-online.service on your distribution.
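If the unit is already installed, the dependency can be added without touching the original file by using a drop-in override, as sketched here:

```ini
# Created with: sudo systemctl edit pinggy-tunnel.service
# (systemd writes this to /etc/systemd/system/pinggy-tunnel.service.d/override.conf)
[Unit]
Wants=network-online.target
After=network-online.target
```

Then enable the wait service that matches your network stack, for example sudo systemctl enable NetworkManager-wait-online.service, and run sudo systemctl daemon-reload before the next boot test.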

Why does my Pinggy systemd service start and then immediately exit?

This is almost always an SSH configuration issue. Check the journal with journalctl -u pinggy-tunnel.service -n 30 --no-pager. Common causes are a wrong path to the private key, a passphrase-protected key (which hangs because systemd has no terminal), or a port conflict caught by ExitOnForwardFailure=yes.

How do I find the public Pinggy URL assigned to my tunnel?

Because the unit file routes all output to the systemd journal, run journalctl -u pinggy-tunnel.service -b | grep -i pinggy.io to extract the tunnel URL from the current boot's logs. On the free tier the URL changes each time the tunnel reconnects, so this command is the reliable way to retrieve it.