There's a gap between "I got a Pinggy tunnel running on my laptop" and "I have Pinggy tunnels deployed consistently across my infrastructure, version-controlled, and self-healing after reboots." That gap is exactly what infrastructure-as-code tools are designed to close -- and OpenTofu does it without licensing restrictions, without vendor lock-in, and without requiring you to pay for a commercial platform just to run a workflow.
This guide covers the full stack: how Pinggy actually establishes tunnels under the hood, how to build systemd services that survive reboots and reconnect on failure, and how to wire all of it into an OpenTofu configuration that you can version, apply, and tear down like any other piece of infrastructure.
OpenTofu writes the systemd unit and invokes systemctl. systemd owns the SSH process lifecycle. The SSH process holds the reverse tunnel to Pinggy. Traffic enters at Pinggy's public URL, travels back down the tunnel, and exits at your local service port.
What Pinggy Is (and How It Actually Works)
Pinggy is a localhost tunneling service that gives your locally running service a public URL, reachable from anywhere on the internet -- through firewalls and NAT without any port forwarding or DNS configuration. According to Pinggy's documentation, it supports HTTP, HTTPS, TCP, UDP, and TLS tunnels, and unlike many competing tools, it requires no client software download at all on the free tier -- just an SSH client, which ships with virtually every Linux distribution.
The underlying mechanism is SSH reverse tunneling. When you run the standard Pinggy command, you are establishing an SSH connection to Pinggy's servers on port 443 and instructing your local SSH client to forward incoming connections from a remote port on Pinggy's infrastructure back down that tunnel to a local port on your machine. The -R0:localhost:8000 flag is the core of this: -R specifies remote port forwarding, and the 0 tells the server to allocate a random available port and return the public URL over the connection.
```shell
# The canonical free-tier tunnel command
$ ssh -p 443 -R0:localhost:8000 -o StrictHostKeyChecking=no a.pinggy.io

# With a Pro token for a persistent subdomain
$ ssh -p 443 -R0:localhost:8000 -o StrictHostKeyChecking=no token@pro.pinggy.io

# With an auto-reconnect loop (needed for long-running production tunnels)
$ while true; do
    ssh -p 443 -o ServerAliveInterval=60 -R0:localhost:8000 token@pro.pinggy.io
    sleep 2
  done
```
The ServerAliveInterval=60 option tells your SSH client to send keepalive packets every 60 seconds, which prevents the connection from being silently dropped by intermediate firewalls or NAT devices. If the server stops responding, SSH will exit -- at which point the while true loop immediately restarts it. This is fine for manual sessions, but running a while true loop as a background process on a server is fragile: it doesn't survive reboots, doesn't integrate with logging, and provides no way to check status or manage restart delays.
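If you do run a wrapper loop interactively, a slightly smarter version would also back off between retries instead of reconnecting every two seconds throughout a long outage. A minimal sketch of the delay arithmetic such a wrapper might use (`backoff_delay` is a hypothetical helper, not part of Pinggy's tooling):

```shell
#!/bin/sh
# backoff_delay: exponential retry delay for a reconnect wrapper, capped at 60s.
# Purely illustrative -- the systemd approach below supersedes wrapper loops.
backoff_delay() {
  attempt=$1
  delay=2
  i=1
  # Double the delay for each prior failed attempt, up to the cap
  while [ "$i" -lt "$attempt" ] && [ "$delay" -lt 60 ]; do
    delay=$(( delay * 2 ))
    i=$(( i + 1 ))
  done
  if [ "$delay" -gt 60 ]; then delay=60; fi
  echo "$delay"
}

backoff_delay 1   # first retry: 2s
backoff_delay 3   # third retry: 8s
backoff_delay 10  # capped: 60s
```

In the wrapper, you would `sleep "$(backoff_delay "$n")"` in place of the fixed `sleep 2`.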
Pinggy also ships a dedicated CLI binary (downloadable from pinggy.io/cli) that includes built-in auto-reconnection without needing a wrapper loop, plus JSON config file support, a web debugger, and more user-friendly flag syntax. For production deployments managed via OpenTofu, the CLI binary is a better choice than raw SSH: its autoreconnect: true config option handles reconnection internally, and it can be controlled by systemd just as easily.
OpenTofu: The Open-Source IaC Engine
OpenTofu is a fork of the last MPL-licensed version of Terraform (1.5.x), cut before HashiCorp relicensed its products under the more restrictive Business Source License in August 2023. The fork is managed by the Linux Foundation, and was accepted into the Cloud Native Computing Foundation as a sandbox project in April 2025. OpenTofu 1.11 introduced ephemeral resources, write-only attributes, and the enabled meta-argument for conditional resource creation; the current stable release as of publication is 1.11.5.
According to Scalr's learning center (February 2026), OpenTofu functions as a compatible replacement for Terraform, maintaining existing configuration support while introducing new capabilities.
For this guide's purposes, you can treat OpenTofu and Terraform as interchangeable at the HCL syntax level. Replace terraform with tofu in any command. The key reason to choose OpenTofu here -- rather than shell scripts or Ansible -- is that IaC gives you a declarative description of desired state, a plan phase that shows exactly what will change before anything runs, and a state file that tracks what was actually deployed. When you want to tear down tunnels from ten machines, you run tofu destroy. When you want to confirm the systemd service is enabled on every node in your fleet, you run tofu plan and the diff tells you.
Prerequisite: Passwordless SSH Keys
Pinggy requires that your SSH connection not prompt for a password or passphrase. If the SSH process is started by systemd or OpenTofu without an interactive terminal attached, any password prompt will cause the process to hang or fail immediately. Pinggy's official Linux startup documentation makes this the first prerequisite: you must generate a key pair with no passphrase.
```shell
# Generate a key specifically for Pinggy, no passphrase (-N "")
$ ssh-keygen -t ed25519 -f ~/.ssh/pinggy_key -N "" -C "pinggy-tunnel-$(hostname)"

# Verify the key exists and is the right type
$ ssh-keygen -l -f ~/.ssh/pinggy_key.pub
256 SHA256:abc123... pinggy-tunnel-myserver (ED25519)

# Test that the connection works without prompts
$ ssh -p 443 -i ~/.ssh/pinggy_key -o StrictHostKeyChecking=no \
      -o BatchMode=yes -R0:localhost:8000 a.pinggy.io
```
Using a dedicated key rather than your default ~/.ssh/id_ed25519 is good hygiene. If the Pinggy service account is ever compromised or you need to revoke its access, you can do so without affecting other SSH operations on the machine. It also makes the systemd unit file explicit about which key it uses, which is important when you're reading that unit file months later during an incident.
Your default ~/.ssh/id_ed25519 is likely authorized on every system you manage. A dedicated pinggy_key is used solely to suppress the SSH password prompt during the handshake with Pinggy -- it is not used for authorization on Pinggy's side (Pinggy authenticates via the token in the username field). If the passphrase-free key file is ever accessed by an attacker, it has no value beyond the Pinggy tunnel itself. Revoking it means rotating one key with one purpose, not auditing every authorized_keys file across your infrastructure.
There's also an operational clarity argument: six months from now, reading a unit file that says -i /etc/pinggy/pinggy_key is immediately interpretable. Reading one that says -i /home/deploy/.ssh/id_ed25519 raises questions about intent, key scope, and whether the service is using a shared identity.
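That intent can be made enforceable with a small preflight check on the key file's permissions before the service ever starts. This is a hypothetical helper (`check_key_perms` is not a standard tool), sketched assuming GNU `stat` on Linux:

```shell
#!/bin/sh
# Verify a private key is readable only by its owner, as good hygiene
# (and ssh itself, for overly-open keys) expects. Non-zero exit on failure.
check_key_perms() {
  key=$1
  mode=$(stat -c '%a' "$key")   # GNU coreutils stat; prints e.g. 600
  if [ "$mode" = "600" ] || [ "$mode" = "400" ]; then
    echo "OK: mode $mode"
  else
    echo "FAIL: mode $mode (expected 600)"
    return 1
  fi
}

# Demo against a temporary file standing in for /etc/pinggy/pinggy_key
tmp=$(mktemp)
chmod 600 "$tmp"
check_key_perms "$tmp"   # OK: mode 600
rm -f "$tmp"
```

A check like this fits naturally as an `ExecStartPre=` line or a CI lint step.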
Building the systemd Service Unit
The systemd service unit is the foundation of persistent Pinggy operation on Linux. Everything else -- OpenTofu, logging, restart policies -- builds on top of it. A naive unit file works, but a production-hardened one handles the subtleties of how SSH behaves when a tunnel drops.
```ini
[Unit]
Description=Pinggy SSH Tunnel
Documentation=https://pinggy.io/docs/
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=pinggy
Group=pinggy

# The tunnel command. Adjust token and local port as needed.
ExecStart=/usr/bin/ssh \
    -p 443 \
    -i /etc/pinggy/pinggy_key \
    -o StrictHostKeyChecking=no \
    -o ServerAliveInterval=30 \
    -o ServerAliveCountMax=3 \
    -o ExitOnForwardFailure=yes \
    -o BatchMode=yes \
    -R0:localhost:8000 \
    token@pro.pinggy.io

# Restart on any non-zero exit. SSH exits 0 only on clean disconnect.
Restart=on-failure
RestartSec=10s

# Log output goes to the journal
StandardOutput=journal
StandardError=journal
SyslogIdentifier=pinggy-tunnel

# Hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=read-only
PrivateTmp=true

[Install]
WantedBy=multi-user.target
```
A few decisions in that unit file deserve explanation. After=network-online.target with Wants=network-online.target ensures the tunnel doesn't attempt to start before the network stack is fully initialized -- without this, the service will fail immediately at boot on most systems and keep restarting until the network comes up, flooding the journal with unnecessary failures. The ExitOnForwardFailure=yes SSH option is critical: it causes SSH to exit with a non-zero status if the remote port binding fails (for example if a tunnel with the same token is already active), which in turn triggers systemd's Restart=on-failure policy. Without it, SSH can appear to stay running while actually forwarding nothing.
Without ExitOnForwardFailure=yes, SSH succeeds at establishing the connection to Pinggy's server but the remote port binding fails quietly. SSH eventually exits 0 (success), systemd sees a clean exit, and does not restart the service. Your tunnel is dead, your logs show nothing alarming, and you discover the problem when an external request fails.
With the flag set, SSH exits non-zero on binding failure. systemd's Restart=on-failure fires, waits RestartSec, and retries. The failure becomes visible and self-healing.
Raw SSH client:

- No download required -- ships with every Linux distro
- Transparent -- you see exactly what flags are set
- Auditable for security teams
- Auto-reconnect needs an external loop or systemd restart policy
- No JSON config -- flags only
- No built-in web debugger

Pinggy CLI binary:

- Built-in autoreconnect: true config option
- JSON config file -- easier to template with OpenTofu
- Web debugger for inspecting live tunnel traffic
- Better flag ergonomics for complex tunnel types (UDP, TLS)
- Additional binary to download, verify, and update
- One more dependency in your provisioning chain
The unit runs as a dedicated pinggy user with no login shell. Create it with sudo useradd --system --no-create-home --shell /usr/sbin/nologin pinggy. Because --no-create-home means /home/pinggy will not exist, place the SSH key at a path like /etc/pinggy/pinggy_key (the default used by the OpenTofu variables below), with ownership and permissions set to pinggy:pinggy 600, and point the unit's -i flag at it. Running a tunnel as root is unnecessary and introduces needless risk.
The ServerAliveInterval=30 and ServerAliveCountMax=3 combination gives the tunnel 90 seconds to recover from a keepalive failure before SSH gives up and exits. Systemd then waits the configured RestartSec=10s before restarting it. In practice, this means a dropped internet connection causes a tunnel outage of at most two minutes before the service is reconnected -- acceptable for most development and internal tooling scenarios.
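The arithmetic behind that claim is worth making explicit; the numbers come straight from the unit file's settings:

```shell
#!/bin/sh
# Worst-case outage window derived from the unit's keepalive and restart settings.
interval=30     # ServerAliveInterval: seconds between keepalive probes
count=3         # ServerAliveCountMax: unanswered probes before ssh gives up
restart_sec=10  # RestartSec: systemd's delay before relaunching

detect=$(( interval * count ))        # seconds until ssh exits after the link dies
relaunch=$(( detect + restart_sec ))  # seconds until systemd starts a new ssh

echo "detection: ${detect}s, relaunch: ${relaunch}s"  # detection: 90s, relaunch: 100s
```

The remaining slack up to the "two minutes" figure is the time the fresh SSH process needs to reconnect and rebind the remote port.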
The OpenTofu Configuration
With the systemd unit design settled, the OpenTofu configuration has three jobs: write the unit file to disk, create the dedicated system user, and ensure the service is enabled and running. The key primitive here is terraform_data with a local-exec provisioner -- the modern replacement for null_resource that ships built into OpenTofu without requiring any external provider. Per the OpenTofu documentation, terraform_data is a provider-free resource that participates fully in the dependency graph and state lifecycle.
OpenTofu's own documentation is emphatic on this point: provisioners should be used only when no provider-based alternative exists. For managing Linux services on a single machine, there is no provider -- hence the use of local-exec here. If you are managing a fleet via SSH, look at the remote-exec provisioner or, better, Ansible called from OpenTofu via local-exec. The pattern below is appropriate for single-machine or CI-runner deployments where OpenTofu runs directly on the target host.
Start by laying out a project directory:
```
pinggy-infra/
├── main.tf
├── variables.tf
├── outputs.tf
└── templates/
    └── pinggy-tunnel.service.tpl
```
The template file lets you inject values -- the token, the local port, the key path -- at apply time rather than hardcoding them into the unit file. OpenTofu's templatefile() function handles the rendering.
```ini
[Unit]
Description=Pinggy SSH Tunnel (${tunnel_label})
Documentation=https://pinggy.io/docs/
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=${service_user}
Group=${service_user}

ExecStart=/usr/bin/ssh \
    -p 443 \
    -i ${ssh_key_path} \
    -o StrictHostKeyChecking=no \
    -o ServerAliveInterval=30 \
    -o ServerAliveCountMax=3 \
    -o ExitOnForwardFailure=yes \
    -o BatchMode=yes \
    -R0:localhost:${local_port} \
    ${pinggy_token}@pro.pinggy.io

Restart=on-failure
RestartSec=10s

StandardOutput=journal
StandardError=journal
SyslogIdentifier=pinggy-${tunnel_label}

NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=read-only
PrivateTmp=true

[Install]
WantedBy=multi-user.target
```
```hcl
variable "pinggy_token" {
  description = "Pinggy Pro access token from dashboard.pinggy.io"
  type        = string
  sensitive   = true
}

variable "local_port" {
  description = "Local port to expose through the tunnel"
  type        = number
  default     = 8000
}

variable "tunnel_label" {
  description = "Short label used in the service name and log identifier"
  type        = string
  default     = "main"
}

variable "service_user" {
  description = "System user that runs the tunnel service"
  type        = string
  default     = "pinggy"
}

variable "ssh_key_path" {
  description = "Absolute path to the passphrase-free SSH private key"
  type        = string
  default     = "/etc/pinggy/pinggy_key"
}
```
```hcl
terraform {
  required_version = ">= 1.8"
  # No providers required -- terraform_data is built in
}

# 1. Render the systemd unit file content from the template
locals {
  unit_content = templatefile("${path.module}/templates/pinggy-tunnel.service.tpl", {
    pinggy_token = var.pinggy_token
    local_port   = var.local_port
    tunnel_label = var.tunnel_label
    service_user = var.service_user
    ssh_key_path = var.ssh_key_path
  })

  service_name = "pinggy-${var.tunnel_label}.service"
}

# 2. Create the dedicated system user
resource "terraform_data" "pinggy_user" {
  provisioner "local-exec" {
    command = <<-EOT
      if ! id -u ${var.service_user} >/dev/null 2>&1; then
        sudo useradd \
          --system \
          --no-create-home \
          --shell /usr/sbin/nologin \
          ${var.service_user}
        echo "Created system user: ${var.service_user}"
      else
        echo "System user ${var.service_user} already exists, skipping."
      fi
    EOT
  }
}

# 3. Create the SSH key directory with correct ownership
resource "terraform_data" "pinggy_key_dir" {
  depends_on = [terraform_data.pinggy_user]

  provisioner "local-exec" {
    command = <<-EOT
      sudo mkdir -p /etc/pinggy
      sudo chown ${var.service_user}:${var.service_user} /etc/pinggy
      sudo chmod 700 /etc/pinggy
      echo "Key directory ready at /etc/pinggy"
    EOT
  }
}

# 4. Write the systemd unit file
resource "terraform_data" "pinggy_unit_file" {
  # Trigger redeployment whenever the rendered content changes
  triggers_replace = local.unit_content

  # Destroy-time provisioners may only reference "self", so carry the
  # service name via input and read it back as self.input below.
  input = local.service_name

  provisioner "local-exec" {
    command = <<-EOT
      sudo tee /etc/systemd/system/${local.service_name} > /dev/null <<'UNITEOF'
      ${local.unit_content}
      UNITEOF
      echo "Wrote /etc/systemd/system/${local.service_name}"
    EOT
  }

  provisioner "local-exec" {
    when    = destroy
    command = "sudo rm -f /etc/systemd/system/${self.input} && sudo systemctl daemon-reload"
  }
}

# 5. Reload systemd and enable + start the service
resource "terraform_data" "pinggy_service" {
  depends_on       = [terraform_data.pinggy_unit_file]
  triggers_replace = local.unit_content

  # Same self.input pattern for the destroy-time provisioner
  input = local.service_name

  provisioner "local-exec" {
    command = <<-EOT
      sudo systemctl daemon-reload
      sudo systemctl enable ${local.service_name}
      sudo systemctl restart ${local.service_name}
      echo "Service ${local.service_name} enabled and started"
    EOT
  }

  provisioner "local-exec" {
    when    = destroy
    command = <<-EOT
      sudo systemctl stop ${self.input} || true
      sudo systemctl disable ${self.input} || true
      echo "Service ${self.input} stopped and disabled"
    EOT
  }
}
```
The triggers_replace = local.unit_content pattern is what makes this configuration converge correctly on changes. Every time you update a variable -- the port number, the token, the tunnel label -- OpenTofu compares the newly rendered unit content against the value recorded in the state file and marks those resources for replacement when they differ. That replacement runs the destroy-time provisioner first (stopping the old service), then the create-time provisioner (writing the new unit file and restarting). Without this trigger, OpenTofu would consider the resources already created and skip re-execution on subsequent applies.

A terraform_data resource with local-exec has no way to introspect whether the systemd unit file on disk matches what the template would render. Once it runs and records success in the state file, OpenTofu considers the resource satisfied -- re-applying changes nothing.

triggers_replace creates an explicit dependency: if the value passed to it changes, OpenTofu treats the resource as requiring replacement. Since local.unit_content is the fully rendered unit file, any variable change -- port, token, label -- changes that value, which triggers destruction (stopping the old service) and re-creation (writing and starting the new one).
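The decision rule is simple enough to show in plain shell -- this is only an illustration of what triggers_replace achieves, not code OpenTofu actually runs:

```shell
#!/bin/sh
# Convergence in miniature: act only when desired state differs from recorded state.
recorded='-R0:localhost:8000'   # what the state file says was deployed
rendered='-R0:localhost:9090'   # what the template renders after a variable change

if [ "$rendered" != "$recorded" ]; then
  action="replace"   # destroy-time provisioner first, then create-time
else
  action="no-op"
fi
echo "plan: $action"   # plan: replace
```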
Any tofu apply that changes tunnel parameters therefore drops the tunnel briefly while the service is replaced. For production services, schedule applies during low-traffic windows or use Pinggy's persistent subdomain feature to minimize reconnection impact.

Outputs, Variables File, and Applying
```hcl
output "service_name" {
  description = "Name of the deployed systemd service"
  value       = local.service_name
}

output "status_command" {
  description = "Command to check tunnel status"
  value       = "systemctl status ${local.service_name}"
}

output "logs_command" {
  description = "Command to follow tunnel logs"
  value       = "journalctl -u ${local.service_name} -f"
}
```
Create a terraform.tfvars file to hold your values. Never commit the token to source control -- use an environment variable or a secrets manager in CI.
```hcl
pinggy_token = "xGBTh6cy58q"
local_port   = 8000
tunnel_label = "myapp"
service_user = "pinggy"
ssh_key_path = "/etc/pinggy/pinggy_key"
```
Add *.tfvars and *.tfstate* to your .gitignore. The state file records the full rendered unit content -- your token included -- and should be treated as a secret. For shared or CI deployments, store state remotely in an S3 bucket, GCS, or any other OpenTofu-supported backend.
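A minimal .gitignore for the project directory might look like this (the .terraform/ entry covers the working directory that tofu init creates; adjust to taste):

```
# Secrets and state stay out of version control
*.tfvars
*.tfstate*

# Local working directory created by tofu init
.terraform/
```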
Now initialize and apply:
```shell
# Initialize (downloads no providers -- terraform_data is built in)
$ tofu init

OpenTofu has been successfully initialized!

# Preview what will run
$ tofu plan

Plan: 4 to add, 0 to change, 0 to destroy.

# Apply (will prompt for confirmation unless -auto-approve is set)
$ tofu apply

# Verify the service is running
$ systemctl status pinggy-myapp.service
  Active: active (running) since Thu 2026-03-12 09:15:04 UTC; 12s ago

# Watch the tunnel logs live
$ journalctl -u pinggy-myapp.service -f
```
Provisioning the SSH Key via OpenTofu
The one gap in the configuration above is that it does not generate or place the SSH key -- it assumes the key already exists at var.ssh_key_path. You have two reasonable options.
The first is to generate it outside OpenTofu as a one-time operation and treat it as a pre-existing secret. This is the simpler approach and appropriate for single-machine deployments where you control the key lifecycle manually.
The second is to generate the key as part of the OpenTofu run, which is useful for ephemeral environments or CI runners. Add a resource that depends on pinggy_key_dir, since the directory must exist before the key can be written into it:
```hcl
resource "terraform_data" "pinggy_ssh_key" {
  depends_on = [terraform_data.pinggy_key_dir]

  # Destroy-time provisioners may only reference "self", so carry the
  # key path via input and read it back as self.input below.
  input = var.ssh_key_path

  provisioner "local-exec" {
    command = <<-EOT
      KEY_PATH="${var.ssh_key_path}"
      if [ ! -f "$KEY_PATH" ]; then
        sudo -u ${var.service_user} ssh-keygen \
          -t ed25519 \
          -f "$KEY_PATH" \
          -N "" \
          -C "pinggy-tunnel-$(hostname)"
        echo "Generated new SSH key at $KEY_PATH"
      else
        echo "SSH key already exists at $KEY_PATH, skipping generation."
      fi
    EOT
  }

  provisioner "local-exec" {
    when    = destroy
    command = "sudo rm -f ${self.input} ${self.input}.pub || true"
  }
}
```
A passphrase-free private key placed on disk is a credential that anyone with root access can read. Restrict access with strict filesystem permissions (600, owned by the pinggy service user), run the service as a non-root user, and ensure the system has appropriate protections against privilege escalation. If your threat model requires stronger key protection, investigate using an SSH agent with the key loaded from a hardware token or secrets manager at tunnel start time.
Managing Multiple Tunnels
One of the advantages of the IaC approach becomes apparent when you need more than one tunnel. If your application exposes both an HTTP API on port 8000 and a separate metrics endpoint on port 9090, you need two Pinggy connections -- two tokens, two service units, two SSH processes. With the configuration above, you can use OpenTofu's for_each to manage them from a single declaration:
```hcl
variable "tunnels" {
  description = "Map of tunnel configurations: label => {token, local_port}"
  type = map(object({
    token      = string
    local_port = number
  }))
  sensitive = true
}

locals {
  tunnel_units = {
    for label, cfg in var.tunnels : label => templatefile(
      "${path.module}/templates/pinggy-tunnel.service.tpl",
      {
        pinggy_token = cfg.token
        local_port   = cfg.local_port
        tunnel_label = label
        service_user = var.service_user
        ssh_key_path = var.ssh_key_path
      }
    )
  }
}

resource "terraform_data" "pinggy_unit_files" {
  for_each = local.tunnel_units

  triggers_replace = each.value

  provisioner "local-exec" {
    command = <<-EOT
      sudo tee /etc/systemd/system/pinggy-${each.key}.service > /dev/null <<'UNITEOF'
      ${each.value}
      UNITEOF
      sudo systemctl daemon-reload
      sudo systemctl enable pinggy-${each.key}.service
      sudo systemctl restart pinggy-${each.key}.service
    EOT
  }

  provisioner "local-exec" {
    when    = destroy
    command = <<-EOT
      sudo systemctl stop pinggy-${each.key}.service || true
      sudo systemctl disable pinggy-${each.key}.service || true
      sudo rm -f /etc/systemd/system/pinggy-${each.key}.service
      sudo systemctl daemon-reload
    EOT
  }
}
```
The corresponding terraform.tfvars entry for two tunnels would look like:
```hcl
tunnels = {
  api = {
    token      = "xGBTh6cy58q"
    local_port = 8000
  }
  metrics = {
    token      = "yHCUi7dz69r"
    local_port = 9090
  }
}
```
When you need to remove the metrics tunnel, delete its entry from the map and run tofu apply. OpenTofu will detect the missing key, run the destroy-time provisioner for that resource only, and leave the API tunnel untouched. No manual systemctl disable commands, no risk of accidentally stopping the wrong service.
Logging and Monitoring
With the SyslogIdentifier directive set in the unit file, all tunnel output is tagged and queryable through the standard systemd journal. This integrates cleanly with any log aggregation stack that reads from journald -- Loki with Promtail, Elasticsearch with Filebeat, or simply a systemd-journal-remote setup forwarding to a central log server.
```shell
# Follow live logs for the API tunnel
$ journalctl -u pinggy-api.service -f

# Show all Pinggy tunnel logs since boot
$ journalctl -b SYSLOG_IDENTIFIER=pinggy-api

# Count restart events in the last 24 hours
$ journalctl -u pinggy-api.service --since "24 hours ago" | \
    grep "Started Pinggy" | wc -l

# Check if service is currently active
$ systemctl is-active pinggy-api.service
active
```
A high restart count in the 24-hour query is a useful health signal. If a tunnel is restarting more than a dozen times per day, the SSH connection is unstable -- possibly due to network issues, token conflicts, or the remote port binding failing repeatedly. You can alert on this with a simple cron job or integrate it into a monitoring platform by exporting journal metrics via node_exporter's systemd collector.
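A sketch of that cron-based alert follows; `count_restarts` and the threshold are illustrative assumptions, not a published tool. The counting logic reads journal lines from stdin so it is easy to test and easy to feed from journalctl:

```shell
#!/bin/sh
# Alert when a tunnel unit has restarted too often in the query window.
# Real usage: journalctl -u pinggy-api.service --since "24 hours ago" | count_restarts
count_restarts() {
  grep -c "Started Pinggy" || true   # grep -c still prints 0 on no matches; || true masks its exit 1
}

threshold=12
sample='Mar 12 09:15:04 host systemd[1]: Started Pinggy SSH Tunnel.
Mar 12 09:15:05 host pinggy-tunnel[812]: tunnel established
Mar 12 11:02:17 host systemd[1]: Started Pinggy SSH Tunnel.'

count=$(printf '%s\n' "$sample" | count_restarts)
if [ "$count" -gt "$threshold" ]; then
  echo "ALERT: $count restarts in window"
else
  echo "ok: $count restarts in window"   # ok: 2 restarts in window
fi
```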
Using the Configuration in CI Pipelines
A common pattern is to provision a Pinggy tunnel at the start of an integration test run so that external webhooks or browser-based tests can reach a service running on the CI runner. OpenTofu handles this naturally: the tunnel is provisioned at apply, tests run, and tofu destroy tears the tunnel down at the end of the pipeline.
```yaml
- name: Install OpenTofu
  run: |
    curl --proto '=https' --tlsv1.2 -fsSL \
      https://get.opentofu.org/install-opentofu.sh | sudo bash -s -- --install-method deb

- name: Provision Pinggy tunnel
  env:
    TF_VAR_pinggy_token: ${{ secrets.PINGGY_TOKEN }}
  run: |
    cd pinggy-infra
    tofu init -input=false
    tofu apply -auto-approve -input=false

- name: Run integration tests
  run: npm run test:integration

- name: Destroy tunnel
  if: always()
  env:
    TF_VAR_pinggy_token: ${{ secrets.PINGGY_TOKEN }}
  run: |
    cd pinggy-infra
    tofu destroy -auto-approve -input=false
```
The if: always() condition on the destroy step ensures the tunnel is cleaned up even if the test step fails. The Pinggy token is passed via environment variable, following OpenTofu's convention that an environment variable named TF_VAR_<name> supplies the value for the variable <name> -- keeping the secret out of the command line and out of log output.
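The naming rule is mechanical -- strip the TF_VAR_ prefix and the remainder is the variable name. A trivial illustration:

```shell
#!/bin/sh
# TF_VAR_pinggy_token supplies the value for variable "pinggy_token".
env_name="TF_VAR_pinggy_token"
var_name="${env_name#TF_VAR_}"   # POSIX prefix strip
echo "variable \"$var_name\""    # variable "pinggy_token"
```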
Wrapping Up
Before closing, a quick troubleshooting checklist for the failure modes that come up most often:

- Tunnel process is running but nothing is reachable: the unit is missing ExitOnForwardFailure=yes. SSH connected but port binding failed silently; the process is alive but forwarding nothing. Add the flag and restart.
- Service fails repeatedly at boot: the unit lacks After=network-online.target. SSH starts before the network is ready. Add the directive and Wants=network-online.target to the [Unit] section.
- Template edits never redeploy: the resource has no triggers_replace value. Feed the rendered unit content into triggers_replace so any template change forces re-execution.
- Service hangs or dies immediately on start: the key has a passphrase or BatchMode=yes is missing. Regenerate the key with -N "" and add -o BatchMode=yes to the ExecStart command.
- Token is visible in ps aux output: the token is on the command line. Use the Pinggy CLI with a config file (not a flag), or pass it via environment variable using EnvironmentFile= in the unit with restricted file permissions.
- tofu destroy aborts partway: a destroy provisioner is missing the || true guard. If the service isn't running (already stopped), systemctl stop returns non-zero and OpenTofu aborts the destroy. Always append || true to gracefully handle already-stopped services.

Manually running SSH tunnel commands works fine until it doesn't -- until you forget which machines have tunnels running, until a token rotates and half your services are silently disconnected, until a new team member needs to reproduce your setup and has no idea where to start. Encoding tunnel configuration in OpenTofu gives you something you can read, diff, plan, and apply with confidence.
The patterns here -- terraform_data with local-exec, triggers_replace keyed to rendered content, systemd unit hardening, destroy-time provisioner cleanup -- apply well beyond Pinggy. Any time you need to manage Linux service configuration from an IaC workflow without a dedicated provider, this is the structure to reach for. And because OpenTofu operates under the Mozilla Public License via the Linux Foundation, you're not trading one vendor dependency for another.
Infrastructure should be reproducible by definition. If it can only be reproduced by whoever set it up originally, configuration drift is already underway.