Open /etc/ssh/sshd_config as root, find the line that reads PermitRootLogin, set its value to no, save the file, and restart the SSH service with sudo systemctl restart ssh. That is the complete answer. Your root account is no longer reachable directly over SSH.
But there is a lot more happening behind that single directive than the change itself, and if you are managing any server that faces the internet, you will want to understand all of it. This article walks through the full configuration, explains why root SSH access is treated as such a serious exposure, covers the options you may not know exist, and ends with a hardening checklist that goes well beyond the one-liner.
Before touching sshd_config on a remote server, open a second terminal and confirm you have an active, working SSH session as a non-root user with sudo privileges. If you lock yourself out through a misconfiguration, you will need console access to recover. Never close your only session until you have tested the changes.
The Configuration Change
The file that controls SSH server behavior on Ubuntu is /etc/ssh/sshd_config. It is read by the SSH daemon (sshd) at startup and on reload. The directive you need is PermitRootLogin.
```shell
# Open the config file with your preferred editor
$ sudo nano /etc/ssh/sshd_config
```
Search for PermitRootLogin in the file. On a fresh Ubuntu install it is often commented out or set to prohibit-password. Change it to read exactly:
If your sshd_config contains no PermitRootLogin line at all -- not even a commented one -- OpenSSH falls back to its compiled-in default, which is prohibit-password. That is not the same as no. Do not assume an absent line means root login is disabled. Add the directive explicitly, regardless of whether the line exists already. Adding a line that was not there before is valid -- you do not need to find an existing one to edit.
```
PermitRootLogin no
```
Save the file, then test the configuration for syntax errors before restarting the daemon. This step prevents you from accidentally dropping your SSH connection due to a typo.
```shell
# Test the config file for syntax errors first
$ sudo sshd -t

# No output means no errors. Now restart the service.
$ sudo systemctl restart ssh

# Verify it is running cleanly
$ sudo systemctl status ssh
```
On Ubuntu 22.04 and later, the service name is ssh. On some older Ubuntu versions you may see it as sshd. If systemctl restart ssh returns an error, try sudo systemctl restart sshd.
Starting with Ubuntu 22.10, Canonical switched OpenSSH to systemd socket-based activation by default. Under this model, ssh.socket owns the listening port, not ssh.service directly. Authentication-related changes like PermitRootLogin take effect with sudo systemctl restart ssh as shown. However, if you are also changing the Port or ListenAddress directives on Ubuntu 24.04, you must additionally restart the socket: run sudo systemctl daemon-reload then sudo systemctl restart ssh.socket. Failing to do so leaves the old port active even after ssh.service is restarted. To confirm which model your system is using, run systemctl status ssh.socket -- if the unit exists and is active, you are on socket-based activation.
Use sudo systemctl reload ssh instead of restart if you want to apply the new config without terminating existing sessions. The reload sends a SIGHUP to the daemon, which re-reads its config file. Active connections are unaffected.
Verifying It Worked
Do not just assume the change took effect. From a different machine or terminal window, attempt an SSH connection as root and confirm it fails cleanly.
```shell
$ ssh root@your-server-ip
root@your-server-ip: Permission denied (publickey).

# Or if password auth is still enabled:
Permission denied, please try again.
```
A clean rejection of the root login attempt confirms the directive is active. You can also inspect the auth log on the server to see the explicit denial recorded:
```shell
# Ubuntu 20.04 and earlier
$ sudo tail -f /var/log/auth.log

# Ubuntu 22.04+ (journald-based)
$ sudo journalctl -u ssh -f

# You will see entries like:
sshd[12345]: ROOT LOGIN REFUSED FROM 203.0.113.42 port 52001
```
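Those refusal entries are also useful raw data: a short pipeline shows which addresses are hammering the root account hardest. The sketch below runs against sample log lines (the PIDs, IPs, and ports are illustrative), but the same pipeline works on a real auth.log:

```shell
# Tally refused root attempts per source IP.
# The log lines below are a representative sample, not output from a live system.
log='sshd[12345]: ROOT LOGIN REFUSED FROM 203.0.113.42 port 52001
sshd[12346]: ROOT LOGIN REFUSED FROM 203.0.113.42 port 52002
sshd[12347]: ROOT LOGIN REFUSED FROM 198.51.100.9 port 41113'

# Field 6 is the source address; count attempts per IP, busiest first
tally=$(printf '%s\n' "$log" | awk '/ROOT LOGIN REFUSED/ { print $6 }' | sort | uniq -c | sort -rn)
echo "$tally"
```

On a live server, replace the sample variable with `sudo grep 'ROOT LOGIN REFUSED' /var/log/auth.log` (or the equivalent journalctl query) feeding the same awk/sort/uniq chain.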
Why This Matters: The Threat Model
SSH brute force bots are not sophisticated. They connect to port 22, try root as the username, and cycle through millions of common passwords. This is automated, constant, and costs the attacker almost nothing. What makes root SSH access particularly dangerous is that there is only one root account and its username is always known. Attackers do not need to guess the username at all -- half the work is already done for them.
By disabling direct root login you introduce a second unknown: the attacker now has to guess both a valid username and that user's credentials before they can even attempt privilege escalation to root. That is a meaningfully harder problem. Log observation on any internet-facing server shows that the vast majority of automated login attempts target the root account specifically.
Attackers already know the username. PermitRootLogin no forces them to guess it.
There is also an audit trail argument. When multiple administrators share a server, all actions taken as root through direct SSH are attributed to "root" with no way to distinguish who ran what. When everyone logs in as their own named user and escalates with sudo, every privileged action is logged with the originating user's identity. That traceability matters for incident response and compliance.
The attacks this configuration change directly mitigates map to the following MITRE ATT&CK Enterprise techniques. Understanding where your controls land in the framework tells you exactly which adversary behaviors you are cutting off -- and which ones remain in play.
T1110.001 and T1110.003 (password guessing and spraying) are the direct threats neutralized by disabling root SSH login. Because root is the one account with a known username on every system, bots targeting it are executing a pure T1110.001 campaign -- no username enumeration required. Setting PermitRootLogin no collapses the attack surface for those sub-techniques immediately.
T1021.004 is broader -- it covers any adversary using valid credentials to reach a machine over SSH, not just brute force. Disabling root narrows the valid-account space they can exploit, but this technique remains relevant for any non-root user with weak or compromised credentials.
T1078 (Valid Accounts) is where a successful brute force escalates to. If an attacker gets in as root, they have a valid account with no privilege escalation step required. Removing root from the SSH surface forces them to clear an additional hurdle before T1078 becomes exploitable.
T1098.004 (SSH Authorized Keys) is the persistence technique attackers use after they are in -- adding their own public key to authorized_keys so they can return even after the initial credential is revoked. See the authorized_keys hygiene section below.
What Happens After a Successful Root Login
The threat model section explains why root SSH access is a dangerous opening. But it is worth being explicit about what an attacker actually does once they have it -- because understanding the post-access kill chain is what makes this hardening feel urgent rather than theoretical.
Root access over SSH delivers an interactive shell with unrestricted privileges. From that position, an attacker's first moves are typically fast and automated. They will establish persistence before doing anything else, so that even if the initial entry vector is closed, they can still return.
Persistence (T1098.004, T1053.003)
The simplest persistence mechanism is adding their own SSH public key to /root/.ssh/authorized_keys. This is MITRE ATT&CK technique T1098.004 and it is trivially fast -- a single line appended to a file. The moment that key is in place, they can return indefinitely with a valid authentication that bypasses any password rotation you do afterward. They may also drop a cron job (T1053.003) or install a systemd service to maintain access or run callbacks even if the authorized_keys file is cleaned.
Credential Access (T1003)
With root on a Linux system, an attacker can read /etc/shadow directly, pulling hashed passwords for every account on the machine. Even if you disable root SSH afterward, those hashes can be cracked offline. Any password reused elsewhere -- other servers, cloud provider consoles, admin portals -- becomes compromised. This is why a single successful root login is not just one machine's problem.
Lateral Movement (T1021.004)
Root's ~/.ssh/ directory often contains private keys used to connect to other servers -- deployment keys, backup keys, keys to internal infrastructure. An attacker with root can silently read and copy all of them, then use those keys to pivot laterally across the environment without triggering any additional authentication failures. This is how a brute-forced internet-facing server becomes a foothold into an entire internal network.
Impact: What They Are Actually There For
Persistence, credential access, and lateral movement are setup steps. They are the attacker getting comfortable. The end goal varies by actor, but the categories are consistent and all of them are enabled by the root access that PermitRootLogin no is designed to prevent from ever existing.
Cryptomining. The lowest-effort monetization. The attacker installs a miner, redirects CPU cycles to generate cryptocurrency, and tries to stay quiet enough that the server keeps running. Victims notice via billing spikes on cloud instances or suddenly degraded application performance. This is the most common outcome on compromised internet-facing servers with no other strategic value.
Data exfiltration. If the server stores database credentials, application secrets, customer records, or intellectual property, all of it is reachable from a root shell. The attacker identifies what is valuable, compresses it, and exfiltrates it -- often over encrypted channels that look indistinguishable from normal HTTPS traffic. The breach may not surface for months.
Ransomware deployment. With root, an attacker can encrypt all mounted filesystems and any reachable network shares. On a server with database backups or shared storage attached, this can mean data loss that no amount of operating system recovery can fix. Ransomware operators increasingly target Linux infrastructure specifically because backups and primary data are often co-located there.
Botnet recruitment. The compromised server becomes one node in a larger infrastructure used for DDoS attacks, spam campaigns, or as a relay for further attacks against other targets. The original owner faces abuse complaints and potential IP blocklisting while the actual attacker has moved on.
None of these scenarios require sophistication beyond gaining the initial root shell. That is the entire point of PermitRootLogin no -- if root is not reachable over SSH, the attacker has to first compromise a non-root account and then escalate separately. Each additional step is another opportunity for detection.
A successful root SSH login is not an incident with a bounded blast radius. It is potentially a full-environment compromise. The attacker has credentials, persistence, access to private keys, and no privilege escalation needed. This is why PermitRootLogin no is not optional hardening -- it is the minimum acceptable configuration for any internet-facing Linux server.
Understanding All the PermitRootLogin Values
The PermitRootLogin directive has four possible values, and understanding the differences between them matters if you are auditing a system someone else configured:
- yes -- Root can log in via SSH using any authentication method. This is the value you never want on an internet-facing server.
- prohibit-password -- Root can log in, but only using public key authentication. Password-based root login is rejected. This is the Ubuntu default on some versions and is a reasonable middle ground for servers where automated tooling requires root SSH access.
- forced-commands-only -- Root can log in only if a command= option is set in the authorized_keys file, and only that specific command can run. Useful for automated backup scripts or monitoring agents that need root-level access without allowing an interactive shell.
- no -- Root login is refused entirely, regardless of authentication method. This is the recommended setting for all general-purpose servers.
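For forced-commands-only, the command= option lives in the authorized_keys entry itself. A sketch of what such an entry might look like -- the key material, comment, and script path are placeholders, not values from this article's setup:

```
# /root/.ssh/authorized_keys -- illustrative entry; key and script path are placeholders
command="/usr/local/bin/backup.sh",no-port-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAAC3Placeholder... backup@backup-host
```

With PermitRootLogin forced-commands-only set, a connection authenticated by this key runs the backup script and nothing else; the no-pty and no-forwarding options close off the remaining interactive avenues.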
On Ubuntu 22.04 and 24.04, a fresh installation may show PermitRootLogin prohibit-password as the default, or the line may be absent entirely (which also defaults to prohibit-password per the OpenSSH documentation). An absent or commented-out directive does not mean root login is disabled -- it means you are relying on the compiled-in default. Always set it explicitly.
Ubuntu's Drop-In Config Files: A Gotcha
Starting with Ubuntu 22.04, OpenSSH on Ubuntu supports a drop-in directory at /etc/ssh/sshd_config.d/. Files in this directory ending in .conf are included automatically via an Include directive at the top of sshd_config, and because sshd keeps the first value it obtains for most keywords, the drop-ins take precedence over settings in the main file. This trips up many administrators who set PermitRootLogin no in the main file but find the change has no effect.
```shell
# Check for drop-in files that might override your setting
$ ls /etc/ssh/sshd_config.d/

# On cloud instances (AWS, GCP, Azure, DigitalOcean), you may see:
50-cloud-init.conf

# Check its contents
$ cat /etc/ssh/sshd_config.d/50-cloud-init.conf

# It might contain:
PermitRootLogin yes
```
Cloud providers sometimes inject their own drop-in files during instance provisioning. If a drop-in file sets PermitRootLogin yes, it wins: drop-ins are read before the body of the main config, and sshd keeps the first value it obtains for each keyword. The fix is to either edit or remove the conflicting drop-in file, or add your own drop-in that sorts ahead of it -- files are included alphabetically and the first value wins, so a file starting with 01- takes precedence over one starting with 50-.
```shell
# Create a drop-in that sorts ahead of the cloud provider's file
$ echo "PermitRootLogin no" | sudo tee /etc/ssh/sshd_config.d/01-disable-root.conf

# Verify the effective configuration
$ sudo sshd -T | grep permitrootlogin
permitrootlogin no
```
The sshd -T command is your single source of truth. It prints the complete effective configuration after all files are processed and merged, so what you see there is exactly what the running daemon is using. Get into the habit of running it after any SSH config change.
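The two rules at work here -- alphabetical include order and first-value-wins -- are easy to verify without touching a live daemon. This sketch simulates the merge on throwaway files, with awk standing in for sshd's first-match behavior:

```shell
# Simulate sshd's drop-in merge: lexical include order, first value wins
dir=$(mktemp -d)
printf 'PermitRootLogin yes\n' > "$dir/50-cloud-init.conf"
printf 'PermitRootLogin no\n'  > "$dir/01-disable-root.conf"

# Shell glob expansion is lexical, matching the include order;
# awk keeps only the first PermitRootLogin it sees, as sshd does
effective=$(cat "$dir"/*.conf | awk '/^PermitRootLogin/ { print; exit }')
echo "$effective"   # PermitRootLogin no -- 01- is read before 50-
rm -rf "$dir"
```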
Making Sure You Have a Working Non-Root User First
Disabling root SSH access is only safe if you already have a non-root user with sudo access that you can log in as. If you skip this step and disable root login, you will lock yourself out entirely on a server where root was your only account.
```shell
# Create a new user
$ sudo adduser kandi

# Add them to the sudo group
$ sudo usermod -aG sudo kandi

# Verify the group membership
$ groups kandi
kandi : kandi sudo

# Test by opening a NEW terminal and logging in as that user
$ ssh kandi@your-server-ip

# Confirm sudo works
kandi@server:~$ sudo whoami
root
```
Only after confirming that your non-root user can log in and successfully run sudo commands should you make the change to PermitRootLogin.
What Does Adding a User to the sudo Group Actually Grant?
The sudo group on Ubuntu is configured in /etc/sudoers to allow its members to run any command as root by prefixing it with sudo and authenticating with their own password. This matters for security in two specific ways.
First, privilege escalation is explicit and deliberate. A non-root user browsing the wrong directory or running a misconfigured script cannot cause root-level damage unless they consciously prefix the command with sudo. The mental friction is intentional. Second, and more importantly for incident response, every sudo invocation is logged with the originating username, the exact command run, and the timestamp. When multiple people share a server and all use named accounts with sudo, the audit trail shows who did what. When everyone SSHs directly as root, you get a log full of entries attributed to root with no way to separate your colleague's deployment script from an attacker's persistence implant.
On Ubuntu, sudo actions are logged to /var/log/auth.log on older releases and to journald on Ubuntu 22.04 and later. You can review them with sudo journalctl -t sudo (filtering by syslog identifier) or with grep sudo /var/log/auth.log. Each line includes the user who invoked sudo, the working directory, and the full command string -- exactly the information an incident responder needs.
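The log format is easy to mine. The line below is a representative sample (hostname, user, and command are illustrative, not from a live system), and the same extraction works on real auth.log entries:

```shell
# Representative sudo log line -- hostname, user, and command are illustrative
line='Jan 10 12:00:01 web1 sudo:    kandi : TTY=pts/0 ; PWD=/home/kandi ; USER=root ; COMMAND=/usr/bin/apt update'

# The token after "sudo:" is the invoking user; COMMAND= holds the full command
user=$(printf '%s\n' "$line" | sed 's/.*sudo: *//; s/ : .*//')
cmd=$(printf '%s\n' "$line"  | sed 's/.*COMMAND=//')
echo "$user ran: $cmd"   # kandi ran: /usr/bin/apt update
```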
Adding a user to sudo is an administrative privilege that should not be handed out casually. On a server with multiple accounts, consider using more granular sudoers rules that allow specific commands rather than full root access. The full-sudo grant shown above is appropriate for a primary administrator account, not for every user on the system.
Going Further: What Else to Change in sshd_config
Since you already have the file open, there are several other settings worth reviewing at the same time. The directives most articles stop at -- disabling password auth, restricting users, tightening login timeouts -- are a good foundation. But they address access control, not the underlying cryptographic surface or the detection layer. A hardened SSH configuration needs both.
Disable Password Authentication Entirely
If you are using SSH key pairs (which you should be), there is no reason to leave password authentication enabled at all. An attacker cannot brute-force a key they do not have.
```
PasswordAuthentication no
ChallengeResponseAuthentication no
KbdInteractiveAuthentication no
```
Before setting PasswordAuthentication no, make absolutely certain your SSH public key is correctly installed in ~/.ssh/authorized_keys for your non-root user and that you can log in with it successfully. Disabling password auth while you only have password access will lock you out.
Restrict Which Users Can Log In
The AllowUsers directive creates an explicit allowlist. Only usernames that appear on this list can authenticate via SSH, even if their credentials are valid. This is a powerful control for multi-user systems.
```
# Only these users can SSH in -- everyone else is denied
AllowUsers kandi deploy

# Alternatively, restrict by group
AllowGroups sshusers
```
Reduce the Authentication Window
By default, SSH gives a connecting client 120 seconds to complete authentication. Tightening this reduces the window for slow brute-force attempts and leaves fewer half-open connections on busy servers.
```
# Seconds to complete authentication
LoginGraceTime 30

# Max failed auth attempts before disconnecting
MaxAuthTries 3

# Max simultaneous unauthenticated connections
MaxStartups 10:30:60
```
The MaxStartups value uses a start:rate:full format. The example above means: allow up to 10 unauthenticated connections without any throttling; once 10 are open, start randomly dropping new connections, beginning at a 30% probability and rising as more pile up; refuse all new connections once 60 unauthenticated sessions exist. This mitigates certain types of connection flooding.
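The ramp between start and full is linear in OpenSSH's implementation. A small sketch of the resulting drop probabilities under 10:30:60, assuming that linear formula:

```shell
# Drop probability for MaxStartups start:rate:full, assuming the linear ramp
# used by OpenSSH: p = rate + (100 - rate) * (n - start) / (full - start)
for n in 5 10 35 60; do
  awk -v n="$n" 'BEGIN {
    start = 10; rate = 30; full = 60
    if (n < start)      p = 0
    else if (n >= full) p = 100
    else                p = rate + (100 - rate) * (n - start) / (full - start)
    printf "%2d unauthenticated connections -> %3.0f%% drop chance\n", n, p
  }'
done
```

So 5 open connections are never dropped, the 10th crosses into the 30% zone, and the odds climb steadily until a hard refusal at 60.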
Disable X11 Forwarding and Other Unused Features
Every feature you do not need is an attack surface you are carrying for no benefit. On a headless server, X11 forwarding serves no purpose.
```
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PrintMotd no
```
If you use SSH tunneling legitimately -- for example, forwarding a database port from a remote server to your local machine -- do not disable AllowTcpForwarding. Only disable features you have confirmed you do not use. Blanket hardening that breaks your own workflows is counterproductive.
Agent Forwarding: Why Disabling It Matters
SSH agent forwarding (AllowAgentForwarding) is disabled in the hardened config above, and it deserves a separate explanation because it is both useful and genuinely dangerous in a way that is not obvious.
When agent forwarding is enabled, a connection from your local machine to Server A carries your SSH agent's socket with it. From Server A, you can then SSH into Server B using the private key that lives on your local machine -- without copying the private key anywhere. In a bastion host setup, this sounds ideal. The problem is that while you are connected to Server A, any process on Server A with root access can use your forwarded agent to authenticate as you to any server your local key can reach. You do not have to do anything. The attacker or malicious process just needs to find the SSH_AUTH_SOCK environment variable and use it.
This is MITRE ATT&CK technique T1563.001 -- SSH Hijacking. The attacker does not steal your private key -- they hijack the live agent socket and issue authentication requests through it while your connection is open.
The safer alternative for bastion host setups is SSH ProxyJump. Instead of forwarding your agent through the bastion, the client makes two independent connections -- one to the bastion and one to the target -- without any socket exposure on the intermediate host.
```shell
# Instead of agent forwarding through a bastion, use ProxyJump
$ ssh -J bastion-user@bastion-ip target-user@internal-server-ip

# Or configure it persistently in ~/.ssh/config
Host internal-server
    HostName 10.0.1.50
    User youruser
    ProxyJump bastion-user@bastion-ip
```
ProxyJump was introduced in OpenSSH 7.3 (2016) and is available on all supported Ubuntu releases. If you have been using agent forwarding for bastion access, switching to ProxyJump is a direct trade: the same reach with none of the agent hijacking exposure.
Lock Down the Cryptographic Algorithms
The directives above address access control. The directives below address what many tutorials skip entirely: the cryptographic algorithms the SSH daemon offers to clients during the handshake. Older Ubuntu installations ship with defaults that still include weak or deprecated algorithms for compatibility. Explicitly restricting to modern algorithms closes a class of downgrade and cipher-negotiation attacks that no amount of authentication hardening addresses.
The key exchange algorithms (KexAlgorithms), symmetric ciphers (Ciphers), and message authentication codes (MACs) should all be restricted to currently recommended values. The ssh-audit hardening guides and the Mozilla infosec team both maintain guidance on current safe defaults. The values below reflect recommendations for Ubuntu 22.04+ with OpenSSH 8.9 or later, aligned with ssh-audit guidance (last updated April 2025 to include sntrup761x25519-sha512 for post-quantum readiness).
Colin Watson, Debian OpenSSH maintainer, recommends using the subtraction syntax (prefixing the list with -) rather than an explicit positive list when the goal is removing weak algorithms -- on the grounds that it more precisely communicates intent and has been supported since OpenSSH 7.5. (ssh-audit issue #324)
The explicit positive list below remains the safest approach for internet-facing production servers where you have full control over clients -- it ensures no algorithm outside the list can be negotiated regardless of OpenSSH version or future additions. On environments with mixed client ages or automated tooling, the subtraction approach (-weak-algo) may be more practical. Always run ssh-audit from your actual client machines before finalizing.
```
# Key exchange: prioritize post-quantum hybrid, then Curve25519, then strong DH
# sntrup761x25519-sha512 added in OpenSSH 8.5; provides post-quantum forward secrecy
KexAlgorithms sntrup761x25519-sha512@openssh.com,curve25519-sha256,curve25519-sha256@libssh.org,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha256

# Ciphers: AEAD-only (authenticated encryption -- no separate MAC needed)
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr

# MACs: ETM (Encrypt-then-MAC) variants only -- not MAC-then-encrypt
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com

# Host key types: ED25519 first, then RSA-SHA2 -- no legacy RSA-SHA1 or ECDSA
HostKeyAlgorithms ssh-ed25519,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256,rsa-sha2-256-cert-v01@openssh.com

# Verify effective algorithms after reload (OpenSSH 6.8+)
# $ sudo sshd -T | grep -E "kexalgorithms|ciphers|macs|hostkeyalgorithms"
```
Restricting algorithms to a strong explicit set can break connections from older SSH clients or legacy automation tooling that was compiled against an earlier OpenSSH. Before applying to a production server, run ssh-audit (covered in the audit section below) against the updated configuration from your actual client machines to confirm they can still connect. In environments with mixed client ages, you may need to retain diffie-hellman-group-exchange-sha256 or one of the aes-ctr ciphers. If sntrup761x25519-sha512@openssh.com is not recognized on your version of Ubuntu, your OpenSSH is pre-8.5 -- omit that entry or upgrade.
Enforce File Permission Checks with StrictModes
The StrictModes directive instructs the SSH daemon to refuse connections when the user's home directory or .ssh directory has permissions that would allow other users to write to them. It is on by default, but it is often inadvertently disabled during troubleshooting and then left off. Confirm it is explicitly set.
```
# Refuse connections if home dir / .ssh permissions are too permissive
StrictModes yes
```
If a shared hosting environment or a misconfigured deployment script makes a home directory world-writable, StrictModes yes is the last line of defense against an attacker planting keys there and using them.
Use Match Blocks to Restrict Access by Network or Address
Global directives in sshd_config apply to every incoming connection. Match blocks let you apply different rules to specific users, groups, source addresses, or combinations. This is significantly more powerful than most administrators realize and opens up controls that no other single directive can provide.
A common and high-value use case: allow SSH from internal network addresses only, and if you must allow external connections, restrict them to specific users with certificate-based authentication required. Another: allow a deploy user from a CI/CD system's known IP range but deny it interactive shell access entirely.
```
# Global defaults (restrictive)
PasswordAuthentication no
AllowUsers kandi deploy monitor

# Deploy user: allowed only from CI subnet, no interactive shell
Match User deploy Address 10.10.5.0/24
    ForceCommand /usr/local/bin/deploy-entrypoint.sh
    AllowTcpForwarding no
    X11Forwarding no
    PermitTTY no

# Monitoring agent: read-only forced command, no terminal
Match User monitor
    ForceCommand /usr/local/bin/collect-metrics.sh
    PermitTTY no
    AllowTcpForwarding no
```
ForceCommand inside a Match block is one of the most underused SSH hardening controls available. A user whose SSH access is locked to a specific forced command cannot get an interactive shell even if they have valid credentials and an authorized key. This is how you give a backup agent, a deploy pipeline, or a monitoring script the specific access it needs without handing it a full shell.
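What does such a forced command actually look like? A minimal sketch of a wrapper like /usr/local/bin/deploy-entrypoint.sh -- the allowed subcommands here are hypothetical. sshd runs the ForceCommand instead of whatever the client requested, and exposes the client's original request in the SSH_ORIGINAL_COMMAND environment variable for the wrapper to vet:

```shell
# Hypothetical forced-command wrapper, written as a function for illustration.
# sshd runs the ForceCommand instead of the client's requested command and
# puts that request in SSH_ORIGINAL_COMMAND, so the wrapper can allowlist it.
deploy_entrypoint() {
  case "${SSH_ORIGINAL_COMMAND:-}" in
    "deploy app"|"deploy status")
      # The real script would exec the deployment tooling here
      echo "allowed: $SSH_ORIGINAL_COMMAND"
      ;;
    *)
      echo "rejected: ${SSH_ORIGINAL_COMMAND:-<no command>}" >&2
      return 1
      ;;
  esac
}

SSH_ORIGINAL_COMMAND="deploy status" deploy_entrypoint   # prints "allowed: deploy status"
```

Anything outside the allowlist -- including a plain `ssh deploy@server` with no command, which would normally open a shell -- falls through to the rejection branch.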
Improve Logging Verbosity for Detection
The default LogLevel INFO records successful and failed authentication events. It does not log key fingerprints used during authentication, which matters for identifying which authorized key was used when investigating a suspicious login. VERBOSE adds that detail without flooding logs the way DEBUG would.
```
# Log key fingerprints used during auth -- essential for T1098.004 detection
LogLevel VERBOSE

# Confirm what gets logged with VERBOSE set
# Accepted publickey for kandi from 203.0.113.10 port 52341 ssh2: ED25519 SHA256:abc123...
# ^ The fingerprint tells you *which* key authenticated -- not just "a key"
```
Without VERBOSE, you know someone logged in as kandi. With it, you know exactly which key they used. During incident response -- particularly when investigating a potential T1098.004 persistence implant -- that fingerprint is the difference between knowing what to revoke and guessing.
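Those VERBOSE lines are also grep-able for a quick inventory of which keys are actually in use. The entries below are samples (PIDs, IPs, and fingerprints are illustrative); an unfamiliar user/fingerprint pair in the real output is exactly the T1098.004 signal to chase:

```shell
# Sample VERBOSE auth entries -- PIDs, IPs, and fingerprints are illustrative
log='sshd[801]: Accepted publickey for kandi from 203.0.113.10 port 52341 ssh2: ED25519 SHA256:abc123
sshd[802]: Accepted publickey for deploy from 10.10.5.4 port 40100 ssh2: ED25519 SHA256:def456
sshd[803]: Accepted publickey for kandi from 198.51.100.7 port 50222 ssh2: ED25519 SHA256:abc123'

# Count logins per (user, fingerprint) pair
tally=$(printf '%s\n' "$log" \
  | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^SHA256:/) print $5, $i }' \
  | sort | uniq -c)
echo "$tally"
```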
A Complete Hardened sshd_config Block
Here is a consolidated view of the recommended directives for a typical Ubuntu server. Two directives in this block have not been covered yet and are worth a brief note before you copy-paste. PermitEmptyPasswords no prevents any user account with no password set from authenticating over SSH -- it is not a common scenario, but if one ever exists on your system, a missing directive leaves that account reachable. ClientAliveInterval and ClientAliveCountMax together control what happens to idle connections. The daemon will send a keepalive message every 300 seconds; if the client does not respond after 2 attempts, the connection is dropped. This closes orphaned sessions that no longer have a human on the other end -- sessions that an attacker could otherwise resume through certain hijacking techniques.
```
# Core access control
PermitRootLogin no
PasswordAuthentication no
# ChallengeResponseAuthentication is deprecated in OpenSSH 9.x -- use KbdInteractiveAuthentication
KbdInteractiveAuthentication no
PermitEmptyPasswords no
StrictModes yes

# User allowlist (adjust to your actual usernames)
AllowUsers youruser

# Reduce authentication window
LoginGraceTime 30
MaxAuthTries 3
MaxStartups 10:30:60

# Disable unused features
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PrintMotd no

# Keep-alive to detect dead connections
ClientAliveInterval 300
ClientAliveCountMax 2

# Verbose logging: captures key fingerprints for T1098.004 detection
LogLevel VERBOSE
```
Generating and Deploying SSH Key Pairs
Disabling password authentication is covered in the hardening section, but it is only safe if SSH key authentication is already in place. Many articles skip over key generation entirely, which leaves readers knowing they should use keys but unsure how to set them up. Here is the complete setup.
Generate a Key Pair on Your Local Machine
Run this on the computer you connect from, not on the server:
```shell
# Generate an Ed25519 key pair (preferred -- faster and more secure than RSA)
$ ssh-keygen -t ed25519 -C "yourname@hostname-$(date +%Y%m)"

# If you need RSA for compatibility, use a 4096-bit key
$ ssh-keygen -t rsa -b 4096 -C "yourname@hostname-$(date +%Y%m)"

# You will be prompted for a save location (accept the default)
# and a passphrase -- use one. It protects the key if your laptop is stolen.
```
Ed25519 keys are shorter, faster to verify, and considered more resistant to side-channel attacks than RSA. Use Ed25519 unless you are connecting to legacy systems that do not support it. When in doubt, ssh -Q key on the remote server will list supported key types.
Copy the Public Key to the Server
```shell
# The cleanest method -- copies the key and sets permissions correctly
$ ssh-copy-id -i ~/.ssh/id_ed25519.pub youruser@your-server-ip

# Manual method if ssh-copy-id is not available
$ cat ~/.ssh/id_ed25519.pub | ssh youruser@your-server-ip \
    "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"
```
The permissions on ~/.ssh/ and ~/.ssh/authorized_keys are not optional. SSH will silently refuse to use an authorized_keys file with overly permissive ownership or modes. The directory must be 700 (owner only) and the file must be 600 (owner read/write only).
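You can see the required modes in action on a throwaway directory rather than risking your real ~/.ssh -- a small sketch:

```shell
# Demonstrate the required modes on a throwaway directory, not your real ~/.ssh
fakehome=$(mktemp -d)
mkdir "$fakehome/.ssh"
touch "$fakehome/.ssh/authorized_keys"

chmod 700 "$fakehome/.ssh"                  # directory: owner only
chmod 600 "$fakehome/.ssh/authorized_keys"  # file: owner read/write only

# stat -c '%a' is GNU coreutils; on BSD/macOS use stat -f '%Lp' instead
dperm=$(stat -c '%a' "$fakehome/.ssh")
fperm=$(stat -c '%a' "$fakehome/.ssh/authorized_keys")
echo "dir=$dperm file=$fperm"   # dir=700 file=600
rm -rf "$fakehome"
```

Running the same two stat commands against your real ~/.ssh is a quick way to diagnose the classic "key is installed but SSH silently ignores it" symptom.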
Windows Clients
Windows 10 version 1809 and later ship with OpenSSH as an optional feature, and Windows 11 includes it by default. The key generation steps are identical -- run ssh-keygen from PowerShell or Windows Terminal. The key pair is saved to C:\Users\YourName\.ssh\ rather than ~/.ssh/, but the filenames and format are the same.
```powershell
# Generate key pair -- same command as on Linux/macOS
PS> ssh-keygen -t ed25519 -C "yourname@windows-hostname"

# ssh-copy-id is not available on Windows -- use this instead
PS> type $env:USERPROFILE\.ssh\id_ed25519.pub | ssh youruser@your-server-ip "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys && chmod 700 ~/.ssh"

# Test key-based login
PS> ssh -o PreferredAuthentications=publickey youruser@your-server-ip
```
If you use PuTTY rather than the built-in OpenSSH client, key generation is done with PuTTYgen, which produces .ppk format keys by default. To use the same key with both PuTTY and OpenSSH-based clients, export the public key in OpenSSH format from PuTTYgen's "Conversions" menu before copying it to authorized_keys. The .ppk private key file itself is only understood by PuTTY tools -- the OpenSSH client cannot use it.
Test Before Disabling Passwords
# Test key-based login explicitly -- -o prevents password fallback
$ ssh -o PreferredAuthentications=publickey youruser@your-server-ip

# If this succeeds WITHOUT prompting for a password, your key is installed correctly
# Only then should you set PasswordAuthentication no in sshd_config
Authorized Keys Hygiene (T1098.004)
The authorized_keys file is a common target for attackers who already have access to a system. Adding an entry there is technique T1098.004 -- Account Manipulation: SSH Authorized Keys -- and it is a persistence technique that survives password resets, reboots, and even reinstalling most services. The file just sits there.
On any server with more than one administrator or any history of shared credentials, the authorized_keys file for every user -- especially root, even with root SSH disabled -- deserves a periodic audit.
# List all authorized_keys files on the system
$ sudo find /home /root -name "authorized_keys" 2>/dev/null

# Review entries in a specific user's file
$ sudo cat /home/youruser/.ssh/authorized_keys

# Check for recently modified authorized_keys files (last 7 days)
$ sudo find /home /root -name "authorized_keys" -mtime -7 2>/dev/null

# Verify the fingerprint of a key to identify its owner
$ ssh-keygen -l -f /home/youruser/.ssh/authorized_keys
Every time someone leaves the team, every time a laptop is lost or replaced, and every time a contractor's engagement ends, the corresponding SSH public key should be removed from all authorized_keys files on all servers they had access to. Failing to rotate keys is one of the most common SSH security failures encountered during incident response -- an organization believes they have closed access, but a stale key on one server keeps the door open.
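Removing a departed user's key is a one-line filter once you know which entry is theirs (the trailing comment field usually identifies the owner). A safe demonstration against a scratch file follows -- in practice you would operate on ~/.ssh/authorized_keys itself, and "alice@laptop" is a hypothetical key comment:

```shell
# Build a scratch authorized_keys with two example entries
AK=$(mktemp)
cat > "$AK" <<'EOF'
ssh-ed25519 AAAAC3NzaExampleKeyOne bob@desktop
ssh-ed25519 AAAAC3NzaExampleKeyTwo alice@laptop
EOF

# Keep a backup, then filter out the departing user's entries
cp "$AK" "$AK.bak"
grep -v 'alice@laptop' "$AK.bak" > "$AK"

cat "$AK"    # only bob@desktop's entry remains
```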
For environments with more than a handful of users or servers, consider centralizing SSH key management rather than maintaining individual authorized_keys files. Options include configuring AuthorizedKeysCommand in sshd_config to fetch authorized keys from a central source at login time, or using an identity provider like HashiCorp Vault's SSH secrets engine to issue short-lived signed certificates instead of persistent key pairs.
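As a sketch, the sshd_config side of the AuthorizedKeysCommand approach looks like the fragment below. Here /usr/local/bin/fetch-keys is a hypothetical script that prints the public keys for the user passed as its argument; the %u token is expanded by sshd to the login username:

```
# Fetch keys from a central source instead of per-user files
AuthorizedKeysCommand /usr/local/bin/fetch-keys %u
AuthorizedKeysCommandUser nobody
```

Keys returned by the command are used in addition to any authorized_keys files; set AuthorizedKeysFile none alongside it if the command should be the only source.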
Two-Factor Authentication for SSH
SSH key authentication is significantly stronger than passwords, but keys can be stolen -- particularly if they are stored without a passphrase or on a compromised workstation. Adding a second factor means an attacker who acquires your private key still cannot log in without something you physically possess.
FIDO2 Hardware Keys (OpenSSH 8.2+)
OpenSSH 8.2, which ships with Ubuntu 20.04 and later, introduced native support for FIDO2 hardware security keys such as YubiKey and Google Titan. These keys require physical interaction to complete authentication -- the user must touch the key. This technique directly counters remote key theft because physical presence is required.
# Generate a FIDO2-backed key (touch your hardware key when prompted)
$ ssh-keygen -t ed25519-sk -C "yubikey-$(date +%Y%m)"

# Copy the public key to the server as normal
$ ssh-copy-id -i ~/.ssh/id_ed25519_sk.pub youruser@your-server-ip

# Every subsequent login requires a physical key touch to complete
The -t ed25519-sk type creates a non-resident key where the private key handle is stored as a file on disk, bound to the hardware key. Adding -O resident stores the key entirely on the hardware device, allowing you to log in from any machine by plugging in the hardware key. Resident keys are more portable but require a FIDO2 device with enough storage.
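A sketch of the resident-key workflow -- these commands require a FIDO2 device with resident-key support plugged in, and will prompt for the device PIN and a touch:

```
# Generate a resident key -- the key material lives on the device itself
$ ssh-keygen -t ed25519-sk -O resident

# On any other machine, download the resident keys from the plugged-in device
$ ssh-keygen -K
```

After ssh-keygen -K writes the key files locally, you can use them like any other identity with ssh -i.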
TOTP via google-authenticator-libpam
If hardware keys are not available, time-based one-time passwords (TOTP) via PAM provide a software-based second factor. This is compatible with authenticator apps including Google Authenticator, Authy, and any TOTP-compliant application.
# Install the PAM module
$ sudo apt install libpam-google-authenticator -y

# Run the setup as the user who will be using MFA
$ google-authenticator
# Follow the prompts -- scan the QR code with your authenticator app

# Edit PAM SSH config
$ sudo nano /etc/pam.d/sshd
# Add this line at the end:
auth required pam_google_authenticator.so

# Update sshd_config to require both key and TOTP
$ sudo nano /etc/ssh/sshd_config
# Set these two directives:
# ChallengeResponseAuthentication is deprecated in OpenSSH 9.x -- use KbdInteractiveAuthentication
KbdInteractiveAuthentication yes
AuthenticationMethods publickey,keyboard-interactive

# Reload SSH
$ sudo systemctl reload ssh
After configuring PAM-based MFA, open a second terminal and test login before closing your existing session. An error in /etc/pam.d/sshd can prevent all SSH logins entirely. Always keep a recovery session open until you confirm the new configuration works end to end.
What to Do If You Lock Yourself Out
It happens. A misconfigured drop-in file, a PAM change that breaks auth, a typo in AllowUsers that excludes your own username. When SSH stops working and you cannot get back in, you have a few paths depending on your environment.
Cloud Instances (AWS, GCP, Azure, DigitalOcean)
All major cloud providers offer an out-of-band console that does not use SSH. This is your primary recovery path.
# AWS: EC2 Instance Connect or Session Manager (no SSH needed)
# Also: stop instance, detach root volume, attach to recovery instance, edit configs

# GCP: Serial console via Cloud Console UI
# Also: gcloud compute instances add-metadata to inject new SSH key

# Azure: Serial console via Azure Portal
# Also: az vm run-command invoke to run commands without SSH

# DigitalOcean: Droplet Console in the web dashboard (VNC-based)
Physical or VPS Access
For a physical server or a VPS with KVM/IPMI access, boot into recovery mode. On Ubuntu, hold Shift during boot (or press Esc repeatedly on UEFI systems) to get the GRUB menu, select the recovery kernel, and choose a root shell. From there you can edit /etc/ssh/sshd_config and any drop-in files directly.
# Remount filesystem as read-write first
# mount -o remount,rw /

# Fix the problematic config file
# nano /etc/ssh/sshd_config

# Check for and fix drop-in file overrides
# ls /etc/ssh/sshd_config.d/ && cat /etc/ssh/sshd_config.d/*.conf

# Test the config before rebooting
# sshd -t

# Reboot normally
# reboot
Before editing sshd_config or any PAM file on a production server, copy the working version to a backup: sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak.$(date +%Y%m%d). Recovery from a lockout becomes a one-liner: copy the backup back and reload.
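The backup-and-restore cycle can be sketched end to end. The demonstration below uses a scratch file so it is safe to run anywhere; in practice the target is /etc/ssh/sshd_config and the copies need sudo:

```shell
# Stand-in for /etc/ssh/sshd_config with a known-good directive
CONF=$(mktemp)
echo 'PermitRootLogin no' > "$CONF"

# Take a dated backup before editing
BACKUP="$CONF.bak.$(date +%Y%m%d)"
cp "$CONF" "$BACKUP"

# ...a bad edit happens...
echo 'PermitRootLogin banana' > "$CONF"

# Recovery is a single copy back from the backup
cp "$BACKUP" "$CONF"
cat "$CONF"    # PermitRootLogin no
```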
Configuration changes to sshd_config control what the daemon allows. They do not block the attacker from repeatedly attempting connections. That is where fail2ban comes in -- it watches log files for repeated authentication failures and automatically adds firewall rules to ban the offending IP address for a configurable period.
# Install fail2ban
$ sudo apt update && sudo apt install fail2ban -y

# Create a local override config (never edit jail.conf directly)
$ sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

# Edit the [sshd] section in jail.local
$ sudo nano /etc/fail2ban/jail.local
In the [sshd] section of jail.local, set the key parameters:
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 5
bantime = 1h
findtime = 10m
# Enable and start fail2ban
$ sudo systemctl enable --now fail2ban

# Check its status and see current bans
$ sudo fail2ban-client status sshd
Consider Moving SSH Off Port 22
This is a debated hardening step. Changing SSH from port 22 to a high, non-standard port does not add real security -- any competent attacker will scan all ports -- but it does dramatically reduce the noise from automated scanners that only probe port 22. On servers where the auth log is flooded with bot traffic, this single change can reduce that noise by more than 90%, making legitimate connection failures much easier to spot.
# Change the listening port (use any unused port above 1024)
Port 2222
If you use UFW, update the firewall rules before reloading SSH -- otherwise you will lock yourself out:
# Allow the new port BEFORE reloading SSH
$ sudo ufw allow 2222/tcp

# Reload SSH
$ sudo systemctl reload ssh

# Test the new port from another terminal
$ ssh -p 2222 youruser@your-server-ip

# Once confirmed working, remove the old port rule
$ sudo ufw delete allow ssh
Auditing Your SSH Configuration
There are several tools that can evaluate your SSH server configuration against known security benchmarks and report on what to improve. ssh-audit is the most widely used.
# Install ssh-audit
$ sudo apt install ssh-audit

# Audit the local SSH server
$ ssh-audit localhost

# Or audit a remote server from your workstation
$ ssh-audit your-server-ip
The output grades your server's key exchange algorithms, ciphers, and MAC algorithms against current recommendations. Even after disabling root login and password auth, older Ubuntu installations may still be offering weak ciphers inherited from default configuration. The ssh-audit report will surface exactly which algorithms to remove.
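Acting on the report means adding explicit algorithm lists to sshd_config. The lists below are illustrative only, not authoritative -- take the exact names from ssh-audit's output for your OpenSSH version, and test from a second session before closing your current one:

```
# Restrict to modern algorithms (illustrative lists -- follow ssh-audit's recommendations)
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
```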
Wrapping Up
Disabling root SSH login is genuinely a one-line change: set PermitRootLogin no in /etc/ssh/sshd_config and reload the service. It directly mitigates MITRE ATT&CK techniques T1110.001 and T1021.004 by eliminating the only SSH username that every attacker already knows. But the value of that change is only as durable as the surrounding configuration.
A drop-in file from a cloud provider can silently override it. A user with a weak password can still be brute-forced if password auth is on. An attacker can still hammer the connection indefinitely without fail2ban in place. A stale SSH key in authorized_keys (T1098.004) keeps a door open you thought you closed months ago. And none of it matters much if you have no recovery path when a config change goes wrong at 2am on a remote server.
The full picture -- verified with sshd -T, strengthened with key-only authentication and optionally a second factor, locked down with AllowUsers, monitored with fail2ban, and audited with ssh-audit -- is the configuration a server that faces the internet needs. The one-liner gets you started. Everything in this article keeps it standing.
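One final check closes the loop on that verification step: print the effective values of the directives this article touched, straight from the running daemon's merged configuration (sshd -T requires root and reflects drop-in files too):

```
$ sudo sshd -T | grep -Ei '^(permitrootlogin|passwordauthentication|port|allowusers|authenticationmethods)'
```

If every line matches what you intended, the hardening is actually in effect -- not just written in a file that something else overrides.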