Here is the command you came for: to watch /var/log/syslog in real time, open a terminal and run:

$ sudo tail -F /var/log/syslog
New lines will stream to your screen as the system writes them. Press Ctrl+C to stop. Note the capital -F flag -- not lowercase -f. The difference matters in production and is explained in detail below. On Red Hat-based distributions like RHEL, CentOS, Rocky, and AlmaLinux, the equivalent file is /var/log/messages rather than /var/log/syslog. On Amazon Linux 2023, there is no flat syslog file at all by default -- use journalctl -f instead. If you are not sure which applies to your system, all distributions and file locations are covered below -- along with more capable techniques for when basic tail is not enough.
What tail -f Does Under the Hood
The tail command prints the last lines of a file. By default it shows the last 10 lines. The -f flag stands for follow: instead of exiting after printing, tail keeps the file open and waits for new content, printing each new line as it arrives. This is sometimes called "tailing" a log file, and it is one of the most common daily actions for anyone who administers Linux systems.
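A quick sanity check of those defaults on a throwaway file (the /tmp path is purely illustrative):

```shell
# 15 numbered lines; tail's default shows the last 10
seq 1 15 > /tmp/tail-default-demo.txt
tail /tmp/tail-default-demo.txt        # prints 6 through 15
tail -n 3 /tmp/tail-default-demo.txt   # explicit count: prints 13, 14, 15
```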
When you run tail -f /var/log/syslog, the terminal becomes a live window into your system's activity. Authentication events, kernel messages, cron jobs, network events, daemon starts and stops -- all of it flows through syslog and appears on screen in real time.
"output appended data as the file grows" -- GNU coreutils
tailman page description of the-fflag
One important detail: on GNU/Linux, the default -f is equivalent to --follow=descriptor. This means tail tracks the open file descriptor it received when it first opened the file, not the filename itself. Under the default create logrotate method -- where the old file is renamed and a new file is created -- tail -f will follow the renamed file and stop receiving new entries. That is why capital -F is the safer default for production log monitoring. It is equivalent to --follow=name --retry, meaning tail follows the filename and will reopen the new file if the old one disappears.
On Debian and Ubuntu, the default rsyslog logrotate configuration at /etc/logrotate.d/rsyslog rotates /var/log/syslog daily, keeping 7 rotations. It uses a postrotate block to send a HUP signal to rsyslog after renaming the file, which causes rsyslog to reopen the log. With -f, you may briefly lose entries during that window. With -F, tail detects the rename and follows the new file automatically.
Where Syslog Lives on Different Distributions
Log file locations vary by distribution family. Knowing which file to watch is the first practical requirement:
- Debian, Ubuntu, Linux Mint: /var/log/syslog -- rotated daily by default, 7 rotations retained
- RHEL, CentOS, Rocky Linux, AlmaLinux, Fedora: /var/log/messages
- openSUSE, SLES: /var/log/messages
- Arch Linux: syslog is handled by systemd-journald only; no flat file by default unless rsyslog is installed separately
- Alpine Linux: /var/log/messages via BusyBox syslogd, or /var/log/syslog if using sysklogd; BusyBox tail supports -f and -F but lacks long options like --pid
- Amazon Linux 2023: no flat syslog file by default; it does not install rsyslog and relies entirely on systemd-journald. Use journalctl -f to follow logs in real time. If you need /var/log/messages, install rsyslog manually: sudo dnf install rsyslog && sudo systemctl enable --now rsyslog
Authentication logs are separated from the general syslog stream. Look for /var/log/auth.log on Debian-based systems and /var/log/secure on Red Hat-based ones. Kernel ring buffer messages go to /var/log/kern.log on Debian systems. When chasing a specific problem, knowing which file contains the relevant messages saves significant time.
Alpine Linux and other minimal distributions ship BusyBox tail by default. BusyBox tail supports core options including -n, -f, -F, -c, -q, -v, and -s, but does not support long options such as --version or --pid. Install the coreutils package to get the full GNU tail with all options if needed.
Reading syslog requires elevated privileges on most systems. You will typically need to run tail -f as root or with sudo, or ensure your user belongs to the adm group (Debian/Ubuntu) or systemd-journal group. Running as root for casual log-watching is not great practice -- add yourself to the appropriate group instead.
Controlling How Much You See
The default behavior of tail -f shows the last 10 lines before beginning to follow. You can control that with the -n flag:
# Show last 50 lines, then follow
$ tail -n 50 -f /var/log/syslog

# Show last 100 lines, then follow
$ tail -n 100 -f /var/log/syslog

# Show everything from the beginning of the file, then follow
$ tail -n +1 -f /var/log/syslog

# Shorthand: -n and the number combined
$ tail -100f /var/log/syslog
The -n +1 variant tells tail to start from line 1, effectively printing the entire file before entering follow mode. This is useful when you want to search through historical context while also watching for new entries -- though for large syslog files, piping through a pager or using grep first is more practical.
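The +K form is easy to confirm on a throwaway file (path illustrative):

```shell
# -n +K starts output at line K instead of counting back from the end
printf 'alpha\nbeta\ngamma\n' > /tmp/tail-plus-demo.txt
tail -n +2 /tmp/tail-plus-demo.txt   # prints: beta, gamma
tail -n +1 /tmp/tail-plus-demo.txt   # whole file, same as cat here
```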
Filtering Output with grep
Raw syslog output can be overwhelming on a busy server. The most powerful immediate upgrade to tail -f is piping through grep to filter for only what you care about:
# Watch only SSH-related entries
$ tail -f /var/log/syslog | grep -i ssh

# Watch for errors and failures only
$ tail -f /var/log/syslog | grep -iE 'error|fail|critical|denied'

# Watch a specific process by name
$ tail -f /var/log/syslog | grep nginx

# Exclude noisy entries (inverted match)
$ tail -f /var/log/syslog | grep -v 'systemd\[1\]'

# Watch for a specific IP address
$ tail -f /var/log/auth.log | grep '192.168.1.105'
The --line-buffered flag for grep is worth knowing about. By default, grep buffers its output, which means matched lines might not appear on screen immediately when piped from a live stream. Adding --line-buffered forces grep to flush each matched line right away:
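The flag slots in directly after grep. The live syslog form runs until interrupted, so the printf line below is a self-contained stand-in showing the same flag on synthetic input:

```shell
# Real-world form (runs until Ctrl+C):
#   tail -f /var/log/syslog | grep --line-buffered -i ssh

# Self-contained stand-in: same flag, synthetic input
printf 'sshd start\ncron run\nsshd stop\n' | grep --line-buffered -i ssh
```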
This is a subtle but important detail. Without --line-buffered, you may sit watching what appears to be no output for seconds or minutes when matching events are actually firing, because the output is being held in a buffer before being flushed to the terminal.
Chain multiple grep calls to layer your filters. For example, tail -f /var/log/auth.log | grep --line-buffered 'sshd' | grep -v 'Accepted' shows all SSH daemon events except successful logins -- useful when you want to catch failed attempts and disconnects without the noise of normal traffic.
Following Multiple Files at Once
tail -f accepts multiple file arguments. When you provide more than one, it prefixes each line with the filename so you can tell the streams apart:
# Follow syslog and auth.log simultaneously
$ tail -f /var/log/syslog /var/log/auth.log

# Output will look like this:
# ==> /var/log/syslog <==
# Mar 18 09:14:22 hostname cron[1234]: (root) CMD (/usr/bin/apt)
# ==> /var/log/auth.log <==
# Mar 18 09:14:25 hostname sshd[5678]: Failed password for invalid user ...

# Using a glob to tail all logs in /var/log
$ sudo tail -f /var/log/*.log
Following multiple files is genuinely useful when troubleshooting an issue that crosses subsystem boundaries -- for example, a web application throwing errors that may originate from either the application server, the auth layer, or the kernel network stack. Watching all three simultaneously lets you correlate timestamps without switching between terminal windows.
tail -f and Log Rotation
Log rotation is a common source of confusion when using tail -f. The logrotate utility runs daily (via cron) on most Linux systems and uses the create method by default: it renames the current log file (e.g., syslog becomes syslog.1) and creates a fresh empty syslog. If you are using lowercase -f, tail holds the original file descriptor -- so after rotation it continues following the renamed syslog.1, which receives no new entries. You stop seeing live output without any error message, which can be deeply confusing.
On Debian and Ubuntu, the default configuration at /etc/logrotate.d/rsyslog rotates /var/log/syslog daily and retains 7 rotated copies, then sends a HUP signal to rsyslog so it reopens the new file. Under copytruncate mode (used by some applications that cannot be signaled), logrotate instead copies the file and truncates the original in place -- this preserves the inode, so tail -f continues working, but risks a brief window of log loss between the copy and the truncate.
The solution in either case is to use -F (capital F) instead of -f:

$ tail -F /var/log/syslog
With -F, tail follows the filename. If the file disappears and reappears (as it does under create-mode rotation), tail detects this, reopens the new file, and continues streaming. You will briefly see a message like tail: '/var/log/syslog' has been replaced; following new file in your terminal, and then output continues uninterrupted from the new file.
-F is equivalent to --follow=name --retry. The --retry part tells tail to keep trying to open the file if it temporarily disappears, rather than giving up. On systems where log rotation runs frequently or during high-traffic periods, -F is the safer default choice. Use lowercase -f only when you know you are monitoring a file that is never rotated (such as a short-lived temporary log).
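The rotation behavior is easy to reproduce without waiting for logrotate, using a scratch file and a rename (the /tmp paths and timings are illustrative; assumes GNU tail and the coreutils timeout command):

```shell
# Simulate create-mode rotation: rename the "log", create a fresh one
log=/tmp/rotate-demo.log
out=/tmp/rotate-demo.out
echo "before rotation" > "$log"
timeout 3 tail -F "$log" > "$out" 2>/dev/null &  # follow by NAME
sleep 1
mv "$log" "$log.1"               # what logrotate's create method does
echo "after rotation" > "$log"   # fresh file under the original name
wait || true                     # timeout ends tail; ignore its exit status
grep "after rotation" "$out"     # present: -F reopened the new file
rm -f "$log" "$log.1" "$out"
```

Run the same experiment with lowercase -f and the grep finds nothing: tail is still attached to the renamed file.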
Adding Context to What You See
Syslog entries already include timestamps, but the default format is the system's local time without a year or timezone, which can cause confusion when browsing archived logs from a different date. Understanding the syslog line format helps you parse what you are reading:
# Format: TIMESTAMP HOSTNAME PROCESS[PID]: MESSAGE
Mar 18 09:42:11 webserver01 sshd[8821]: Failed password for root from 203.0.113.45 port 52034 ssh2

# Each component carries specific meaning:
# Mar 18 09:42:11 -- local time, RFC 3164 legacy format (no year, no timezone)
# webserver01     -- the hostname that generated the event
# sshd            -- the process name
# [8821]          -- the PID of the process
# Failed password -- the actual log message from the daemon
The lack of year and timezone in syslog timestamps is a known limitation of the legacy RFC 3164 format. rsyslog supports high-precision ISO 8601 timestamps per RFC 5424, but most distributions ship with the legacy format enabled by default for backward compatibility. To switch to ISO 8601 timestamps in /var/log/syslog, add $ActionFileDefaultTemplate RSYSLOG_FileFormat to /etc/rsyslog.conf and restart rsyslog.
When you need to prepend your own timestamp to output -- for instance, when watching a subprocess that does not log timestamps itself -- there are three practical patterns:
# Method 1: shell while-read loop (portable, no extra packages)
$ tail -F /var/log/syslog | while IFS= read -r line; do echo "$(date '+%Y-%m-%d %H:%M:%S') $line"; done

# Method 2: ts from the moreutils package (cleaner output)
$ tail -F /var/log/syslog | ts '%Y-%m-%d %H:%M:%S'

# Method 3: awk strftime (gawk built-in, no extra packages on most systems)
$ tail -F /var/log/syslog | awk '{ print strftime("[%Y-%m-%d %H:%M:%S]"), $0 }'
The awk method using strftime is a useful middle ground: more portable than ts (which requires the moreutils package) and more predictable on high-throughput streams than the while read loop. Note that strftime is a gawk extension and may not be available in all awk implementations -- verify with awk --version before relying on it in scripts.
GNU tail supports a --pid=PID flag: tail -F --pid=1234 /var/log/syslog. When the process with that PID terminates, tail exits automatically rather than hanging. This is useful when monitoring a deployment script or a short-lived process and you want tail to clean up when the process finishes. Note: --pid is a GNU coreutils extension and is not available in BusyBox tail.
When to Use journalctl Instead
On any modern Linux distribution running systemd, journalctl is often a better tool for live log monitoring than tail -f on syslog. The journal collects structured log entries from all system services and the kernel, with rich metadata attached to every entry -- the originating unit, PID, user ID, priority level, and more.
On Debian and Ubuntu, rsyslog and systemd-journald run side by side. Journald collects logs and forwards them to the socket /run/systemd/journal/syslog; rsyslog reads that socket via its imuxsock module (some distributions instead use rsyslog's imjournal module to read the journal directly) and writes the flat text files you tail. On Amazon Linux 2023, rsyslog is not installed by default, so /var/log/messages does not exist unless you install rsyslog manually. On Amazon Linux 2023, journalctl is the primary log interface -- full stop.
To follow the journal in real time, the equivalent of tail -f is:

$ journalctl -f
The real power comes from the filtering options:
# Follow a specific service unit only
$ journalctl -f -u nginx.service

# Follow errors and above only (emergency, alert, critical, error)
$ journalctl -f -p err

# Follow the kernel only
$ journalctl -f -k

# Follow with output in JSON (useful for piping to parsers)
$ journalctl -f -o json

# Follow two services simultaneously
$ journalctl -f -u nginx.service -u postgresql.service

# On Amazon Linux 2023 (no flat syslog file by default)
$ journalctl -f
$ journalctl -f -u sshd.service
The syslog priority levels used by -p are worth memorizing: emerg (0), alert (1), crit (2), err (3), warning (4), notice (5), info (6), debug (7). Passing a level to -p shows that level and all levels with a lower number -- so -p err shows errors, critical messages, alerts, and emergencies, but not warnings or info. These priority levels are defined in RFC 5424, the IETF standard for the syslog protocol.
Not all systems forward journal entries to syslog, and not all syslog entries appear in the journal. On Debian/Ubuntu with both rsyslog and systemd-journald running, journald forwards entries to rsyslog by default (via the /run/systemd/journal/syslog socket), meaning entries appear in both. On systems where ForwardToSyslog=no is set in /etc/systemd/journald.conf, syslog receives nothing from journald. Always check which logging backend your distribution defaults to before assuming a flat log file contains complete event data.
Upgrading Your Setup with multitail
For sustained real-time log monitoring across multiple files with color-coding and split-pane views, multitail is a tool worth having available. It provides a more comfortable interface than juggling multiple tail -f sessions in separate terminal panes:
# Install multitail
$ sudo apt install multitail   # Debian/Ubuntu
$ sudo dnf install multitail   # RHEL/Fedora

# Watch two logs in a split screen
$ multitail /var/log/syslog /var/log/auth.log

# Watch with a color scheme applied to each file
$ multitail -ci green /var/log/syslog -ci yellow /var/log/auth.log

# Merge two files into one view with shared scroll
$ multitail -I /var/log/syslog -I /var/log/kern.log
multitail also supports real-time filtering per pane, color-coded pattern matching, and the ability to execute commands and treat their output as a log source. For a sysadmin spending extended time in a log-watching session, it is considerably less fatiguing than raw tail.
Practical Security Monitoring with tail -f
Watching syslog in real time is a legitimate front-line security monitoring technique, particularly on smaller systems that do not yet have a SIEM or centralized log aggregation platform. The following patterns cover the events worth watching for:
# Watch for brute-force SSH attempts
$ tail -F /var/log/auth.log | grep --line-buffered 'Failed password'

# Watch for successful logins (who is getting in)
$ tail -F /var/log/auth.log | grep --line-buffered 'Accepted'

# Watch for sudo usage
$ tail -F /var/log/auth.log | grep --line-buffered 'sudo'

# Watch for new user creation or privilege escalation
$ tail -F /var/log/auth.log | grep --line-buffered -E 'useradd|usermod|passwd|groupadd'

# Watch for PAM authentication events
$ tail -F /var/log/auth.log | grep --line-buffered 'pam_unix'

# Watch for kernel-level security events (SELinux/AppArmor)
$ tail -F /var/log/syslog | grep --line-buffered -i 'DENIED\|AVC\|apparmor'

# On RHEL/Rocky/AlmaLinux -- auth events are in /var/log/secure
$ tail -F /var/log/secure | grep --line-buffered 'Failed password'
A rapid burst of Failed password entries from a single IP is a textbook brute-force indicator. Seeing Accepted password or Accepted publickey from an IP you do not recognize warrants immediate investigation. Sudden sudo activity from accounts that do not normally use elevated privileges is equally worth flagging.
Real-time log monitoring is not a replacement for intrusion detection systems, but it is the fastest possible feedback loop during an active incident. When something is happening right now, watching the log stream live tells you things a dashboard summary cannot. For a deeper treatment of locking down the SSH daemon itself, the SSH audit and hardening guide covers the practitioner-level configuration changes that reduce your attack surface before the brute-force attempts even begin.
If someone runs rm /var/log/auth.log while you have tail -F open, your terminal will display tail: '/var/log/auth.log' has become inaccessible: No such file or directory and then keep retrying. This is a useful security indicator in itself -- if a log file vanishes unexpectedly on a production system, that warrants investigation. Under tail -f (lowercase), the behavior differs: tail continues holding the deleted file's inode open and will still receive entries from the logging daemon (which also holds the file open) until the daemon reopens the file. The file is not fully released until all processes holding it close their file descriptors.
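The underlying unlink semantics can be verified in a few lines with a scratch file and a held-open descriptor (path illustrative):

```shell
# File data outlives the filename while any descriptor stays open
f=/tmp/unlink-demo.txt
echo "still here" > "$f"
exec 3< "$f"                 # hold the file open, as tail and the daemon do
rm "$f"                      # removes the NAME; the inode survives
ls "$f" 2>/dev/null || echo "name is gone"
cat <&3                      # prints: still here
exec 3<&-                    # closing the last descriptor frees the inode
```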
Beyond Basic Monitoring: Advanced Patterns
The patterns in the section above cover what to watch for. This section covers how to act on what you see: automating responses and building more durable monitoring setups that outlast a single terminal session.
Rate-Limiting and Automated Blocking with fail2ban
Watching Failed password entries in a terminal is useful for awareness, but it does not stop an ongoing brute-force attack. fail2ban reads the same log files you are watching, applies configurable thresholds, and automatically writes firewall rules to block offending IPs. Installing and configuring it is a meaningful step beyond passive watching:
# Install fail2ban (Debian/Ubuntu)
$ sudo apt install fail2ban

# Copy the default config to a local override (never edit the .conf directly)
$ sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

# Check which jails are currently active
$ sudo fail2ban-client status

# Watch the SSH jail in real time
$ sudo fail2ban-client status sshd

# Tail the fail2ban log to see bans as they happen
$ tail -F /var/log/fail2ban.log | grep --line-buffered 'Ban\|Unban'
This shifts the monitoring posture from observation to automated defense. You can still tail /var/log/fail2ban.log to watch bans in real time, but now each detection event results in an actual firewall rule rather than a log entry you have to act on manually.
Correlating Events Across Multiple Log Files with awk
When an incident spans multiple subsystems -- a failed login followed by a cron job firing followed by a file change -- you need to correlate timestamps across files. A simple awk pipeline can merge streams from multiple files and sort by timestamp:
# Tag each line with its source file before merging
$ tail -F /var/log/auth.log /var/log/syslog | \
    awk '/^==>/ { file=$2; next } { print file, $0 }'

# Highlight lines from auth.log in red via ANSI codes (track the current
# file from tail's ==> headers, then color while it is auth.log)
$ tail -F /var/log/syslog /var/log/auth.log | \
    awk '/^==>.*auth\.log/ { red=1; next } /^==>/ { red=0; next }
         red { print "\033[31m" $0 "\033[0m"; next } { print }'

# Build a live event timeline: prepend a wall-clock timestamp to each line
$ tail -F /var/log/syslog /var/log/auth.log | \
    grep --line-buffered -v '^==>' | \
    awk '{ print strftime("[%H:%M:%S]"), $0 }'
Persistent Monitoring with screen or tmux
A tail -F session in a regular terminal dies the moment the SSH connection drops or the terminal window closes. For monitoring that needs to survive disconnections, run it inside a terminal multiplexer. The approach is straightforward but important for production incident response:
# Start a named tmux session for log monitoring
$ tmux new-session -d -s logwatch

# Run tail in the background session (survives SSH disconnect)
$ tmux send-keys -t logwatch \
    "tail -F /var/log/auth.log | tee /tmp/auth-$(date +%Y%m%d).txt" Enter

# Reattach to the session from any SSH connection
$ tmux attach -t logwatch

# Split the pane to watch syslog in the same session
$ tmux split-window -h -t logwatch
$ tmux send-keys -t logwatch "tail -F /var/log/syslog" Enter
Combined with tee to a timestamped file, this gives you a monitoring session that persists through connection drops, captures everything to disk, and remains accessible from any SSH session on that host.
Triggering Alerts from the Log Stream
Watching logs manually is not practical at scale or during off-hours. You can build lightweight alerting directly into a shell pipeline by using while read loops that execute actions when specific patterns appear:
# Send a desktop notification (via notify-send) when an SSH login occurs
$ tail -F /var/log/auth.log | grep --line-buffered 'Accepted' | \
    while IFS= read -r line; do
      notify-send "SSH Login Detected" "$line"
    done

# Write a structured alert entry to a separate alert log
$ tail -F /var/log/auth.log | grep --line-buffered 'Failed password' | \
    while IFS= read -r line; do
      echo "$(date --iso-8601=seconds) ALERT: $line" >> /var/log/ssh-alerts.log
    done

# Rate-count: alert only when failures exceed a threshold in a window
$ tail -F /var/log/auth.log | grep --line-buffered 'Failed password' | \
    awk 'BEGIN{count=0} { count++; if (count % 10 == 0) \
      system("echo \"10 failures reached\" | mail -s \"Brute-force alert\" [email protected]") }'
These are intentionally simple patterns. For production alerting, a dedicated tool like swatch, logcheck, or a lightweight SIEM integration is more appropriate. But the shell pipeline approach is useful during incident response when you need a targeted alert running in minutes, not hours.
Using auditd for Events That Syslog Does Not Capture
tail -F on syslog catches authentication events, service starts, and kernel messages -- but it does not capture file access, system call activity, or network connections at the process level. The Linux Audit framework (auditd) handles that layer, and its log is at /var/log/audit/audit.log:
# Watch the audit log in real time (requires root or audit group membership)
$ sudo tail -F /var/log/audit/audit.log

# Watch for file access events on a specific path
$ sudo auditctl -w /etc/passwd -p rwa -k passwd-watch
$ sudo tail -F /var/log/audit/audit.log | grep --line-buffered 'passwd-watch'

# Decode recent audit entries into human-readable form (ausearch reads the
# audit logs directly; it does not decode a stream piped from tail)
$ sudo ausearch -i -ts recent

# Watch command executions recorded at the execve syscall boundary
$ sudo tail -F /var/log/audit/audit.log | grep --line-buffered 'type=EXECVE'
auditd and syslog complement each other: syslog tells you what services and daemons reported; the audit log tells you what the kernel actually observed at the system call boundary. Watching both during an active investigation gives you a fuller picture than either alone.
Forwarding the Live Stream to a Central Log Server
On multi-host environments, watching individual logs per server does not scale. rsyslog can forward events to a central log server in real time, and you can then tail the aggregated stream on a single host. The configuration is simpler than most assume:
# Forward all syslog traffic to a central log server over TCP
*.* @@logserver.internal:514

# Forward only auth events (more selective)
auth,authpriv.* @@logserver.internal:514
# Once rsyslog is configured to receive and write to a host-specific file:
$ tail -F /var/log/remote/webserver01/syslog | grep --line-buffered 'Failed'

# Or watch all hosts simultaneously from the aggregated stream
$ tail -F /var/log/remote/*/syslog | grep --line-buffered -E 'Failed|Accepted|sudo'
This turns a single tail -F session on the log server into a fleet-wide monitoring view. It is a meaningful architectural step up from per-host tailing and is a natural precursor to introducing a proper SIEM -- because the centralized log stream can feed both your terminal session and a forwarding agent simultaneously.
Capturing Live Log Output to a File
In incident response or troubleshooting scenarios, you may want to both watch log output on screen and save it to a file for later analysis. The tee command handles this cleanly:
# Watch syslog and save output to a timestamped file
$ tail -F /var/log/syslog | tee /tmp/syslog-capture-$(date +%Y%m%d-%H%M%S).txt

# Filter first, then capture only matching lines
$ tail -F /var/log/auth.log | grep --line-buffered 'Failed' | tee /tmp/failed-logins.txt

# Append to an existing capture file
$ tail -F /var/log/syslog | tee -a /tmp/ongoing-capture.txt
The $(date +%Y%m%d-%H%M%S) pattern in the filename embeds the current timestamp, so each capture session creates a distinct file. This is useful when you run multiple monitoring sessions across an investigation and want a clear record of what was observed at what time.
Permissions: The Right Way to Grant Log Access
Running sudo tail -f /var/log/syslog works, but using sudo for read-only log access is heavier than it needs to be. On Debian and Ubuntu systems, adding a user to the adm group grants read access to most system logs without needing sudo:
# Add a user to the adm group (Debian/Ubuntu)
$ sudo usermod -aG adm youruser

# Add a user to the systemd-journal group for journalctl access
$ sudo usermod -aG systemd-journal youruser

# Verify group membership (takes effect after next login)
$ groups youruser

# Verify which group owns syslog
$ ls -l /var/log/syslog
-rw-r----- 1 syslog adm 2.1M Mar 18 09:44 /var/log/syslog
The group change takes effect on the next login. Until then, the user's current session retains the old group memberships. You can either log out and back in, or use newgrp adm in the current session to activate the new group without a full re-login.
Do not add service accounts or application users to the adm group. That group grants broad log read access across the system, which is appropriate for human administrators but not for automated processes. If an application needs access to its own log file, configure the log file's ownership and permissions directly rather than broadening group membership.
Watching Logs on Remote Systems
Watching logs on a remote server over SSH is straightforward and does not require any extra tooling beyond an SSH connection:
# SSH in and tail in a single command
$ ssh [email protected] "sudo tail -F /var/log/syslog"

# SSH with a pseudo-terminal (needed if sudo prompts for password)
$ ssh -t [email protected] "sudo tail -F /var/log/syslog"

# Tail auth.log on the remote host and filter locally
$ ssh [email protected] "sudo tail -F /var/log/auth.log" | grep 'Failed'
The -t flag allocates a pseudo-terminal, which is necessary when the remote command needs an interactive terminal -- for example, if sudo is configured to require a password and you have not run a recent sudo session on that host. Without -t, sudo's password prompt may not appear correctly, and the command will appear to hang.
Wrapping Up
tail -F /var/log/syslog is one of the commands you type without thinking after a few years on Linux. But the gap between knowing the command and knowing how to use it well -- the right flag, the right file for your distribution, the right handling for log rotation, the right filter, and the right choice of when journalctl is the better tool -- is where the real skill lives.
The commands in this article are part of the GNU coreutils package (tail), part of the POSIX standard toolset (grep), and part of systemd (journalctl). All man pages referenced here are available via man tail, man grep, man journalctl, and man logrotate on any Linux system where those packages are installed.