If you searched for "view last 50 lines of a log file linux," here is your answer immediately: use tail -n 50 /var/log/syslog (swap in whichever log path you need). That command prints the final 50 lines of the file to your terminal and exits. Done. No scrolling through ten thousand lines of history, no opening a text editor on a gigabyte-sized file, no drama.
Now that you have the quick win, let's make sure you also have the full picture. The tail command is one of the tools a Linux administrator reaches for most often, and there is far more to it than a line count. This article covers everything from basic syntax to live log following, combining tail with grep, handling logrotate, watching multiple files simultaneously, exiting automatically with --pid, and understanding when tail is not the right tool at all.
Think of every log file as a rolling tape that a process writes to in one direction. tail is a reader positioned at the write head -- it sees only what is most recent, and it can move with the tape as new content arrives. That mental model explains almost every behavioral quirk covered in this article: why -f can go silent after rotation (it held onto the old tape reel), why -F is safer in production (it tracks the label on the reel, not the reel itself), and why grep buffering matters (a filter sitting between the tape and you must not hold up the stream). When something behaves unexpectedly, return to this picture first.
The Basic Syntax
The tail command reads the end of a file. By default, with no flags, it outputs the last 10 lines. The -n flag lets you specify a different line count.
# View the last 50 lines of a log file
$ tail -n 50 /var/log/syslog

# Obsolete shorthand -- works but not POSIX-compliant, avoid in scripts
$ tail -50 /var/log/syslog

# Default behavior -- last 10 lines
$ tail /var/log/syslog

# Last 100 lines of the auth log
$ tail -n 100 /var/log/auth.log
The -n flag accepts a plain number (count from end) or a number preceded by + (start from that line number and print to the end of the file). These two forms behave very differently, and mixing them up is a common source of confusion.
# Print the LAST 50 lines (count from the end)
$ tail -n 50 /var/log/syslog

# Print from line 50 to the END of the file (skip the first 49 lines)
$ tail -n +50 /var/log/syslog
tail -n +2 file.csv is a classic idiom for stripping the header row from a CSV file before piping it somewhere else. The + prefix form is rarely used for log reading, but it shows up constantly in shell scripting.
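Here is a self-contained demonstration of the header-stripping idiom; the file path and sample data are invented for illustration:

```shell
# Create a small throwaway CSV (illustrative data)
printf 'name,qty\napples,3\npears,5\n' > /tmp/sample.csv

# Strip the header: print from line 2 through the end
tail -n +2 /tmp/sample.csv
# apples,3
# pears,5

rm -f /tmp/sample.csv
```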
Common Log File Paths
If you are not sure which log to look at, here are the paths you will reach for most often on Debian/Ubuntu and RHEL/CentOS/Fedora systems.
# General system messages (Debian/Ubuntu)
$ tail -n 50 /var/log/syslog

# General system messages (RHEL/CentOS/Fedora)
$ tail -n 50 /var/log/messages

# Authentication events -- logins, sudo, SSH
$ tail -n 50 /var/log/auth.log

# Kernel ring buffer messages
$ tail -n 50 /var/log/kern.log

# Apache access log
$ tail -n 50 /var/log/apache2/access.log

# Nginx error log
$ tail -n 50 /var/log/nginx/error.log

# systemd journal (use journalctl, not tail directly)
$ journalctl -n 50
On modern systems using systemd, the most important logs live in the journal rather than in flat files. journalctl -n 50 is the direct equivalent of tail -n 50 for the journal. You can still use tail on /var/log/syslog or /var/log/messages if your system is configured to forward journal output to those files, but many minimal installations no longer do that by default.
Following a Log in Real Time with -f
Printing the last N lines is useful for a snapshot. Following a file live as new entries are written is where tail becomes indispensable for active debugging. The -f flag (follow) keeps the file open and continuously appends new output to your terminal as lines are written. If you want a focused guide on applying this specifically to syslog, see how to monitor syslog in real time with tail -f.
Combining -n and -f gives you the last N lines of context and then live output going forward -- exactly what you want when you start watching a log mid-stream.
Press Ctrl+C to stop following. On Linux, tail -f uses the inotify kernel interface (available since kernel 2.6.13, August 2005) to watch for file changes, so output is triggered by actual file events rather than polling. On filesystems where inotify is unavailable -- such as NFS mounts, some network filesystems, or when following symlinks on older kernels -- tail automatically falls back to polling every second. You will see the message tail: inotify cannot be used, reverting to polling in those cases. The -s flag sets a custom polling interval in seconds when polling is active.
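A self-contained sketch of the combined -n/-f pattern. The demo file and the timeout wrapper exist only so the example terminates on its own; in real use you would point tail at a live log and stop it with Ctrl+C:

```shell
# Create a demo log, then append to it while tail is following
demo=/tmp/demo-follow.log        # illustrative path
printf 'line1\nline2\nline3\n' > "$demo"

# Append a line one second from now, in the background
( sleep 1; echo 'line4 (arrived while following)' >> "$demo" ) &

# Last 2 lines of context, then live output; timeout ends the demo after 2s
timeout 2 tail -n 2 -f "$demo"
# line2
# line3
# line4 (arrived while following)

rm -f "$demo"
```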
The difference between -f and -F
-f follows the file by file descriptor (equivalent to --follow=descriptor). If the file gets rotated -- moved or renamed by logrotate -- -f will keep watching the old inode and stop showing new output written to the newly created file. -F is shorthand for --follow=name --retry: it follows by filename, and when inotify is active it responds to inode changes without any periodic reopening. When running without inotify (polling mode), it reopens the file periodically to detect rotation. The --retry portion means it keeps trying to reopen the file even if it becomes temporarily inaccessible -- useful when a log file is briefly absent during rotation.
# Follows the file descriptor -- can break after log rotation
$ tail -f /var/log/nginx/access.log

# Follows the filename -- survives log rotation gracefully
$ tail -F /var/log/nginx/access.log
In production environments where logrotate runs on a schedule, prefer -F over -f for long-running monitoring sessions. You will save yourself the confusion of a terminal that appears active but has silently stopped showing new log entries after midnight rotation.
Filtering Output with grep
A busy log file produces a lot of noise. Piping tail output into grep lets you focus on the entries that matter. The combination is one of the most common patterns in Linux log work.
# Last 100 lines, show only errors
$ tail -n 100 /var/log/syslog | grep -i "error"

# Follow live, filter for a specific IP address
$ tail -F /var/log/nginx/access.log | grep "192.168.1.100"

# Follow live, exclude noisy health-check endpoint
$ tail -F /var/log/nginx/access.log | grep -v "GET /health"

# Follow live, match multiple patterns with extended regex
$ tail -F /var/log/auth.log | grep -E "Failed|Invalid|refused"

# Case-insensitive search for "critical" or "fatal"
$ tail -n 200 /var/log/syslog | grep -iE "critical|fatal"
One subtlety when following live with -f or -F: GNU grep already line-buffers when its output goes straight to a terminal, but it switches to block buffering when its output is piped into another command (tee, a second grep, a script). In that case matches sit in grep's stdio buffer and arrive in unpredictable batches. The --line-buffered flag forces a flush after every line and fixes this.
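To see the difference, compare a buffered and a line-buffered version of the same pipeline (log and output paths follow the article's earlier examples):

```shell
# Block-buffered: matches can sit in grep's stdio buffer for a long time
# before tee sees them, because grep's output is a pipe, not a terminal
tail -F /var/log/syslog | grep "ERROR" | tee /tmp/errors.log

# Line-buffered: each match is flushed to tee the moment it arrives
tail -F /var/log/syslog | grep --line-buffered "ERROR" | tee /tmp/errors.log
```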
Saving Output While Watching Live
Sometimes you want to watch a log stream in real time and keep a copy of what you saw. The tee command splits output between your terminal and a file simultaneously. This is useful when you are monitoring a deployment or troubleshooting a race condition and you want a record you can grep over afterward.
# Watch live and save everything to a file at the same time
$ tail -F /var/log/syslog | tee /tmp/session-$(date +%Y%m%d-%H%M%S).log

# Append to an existing file instead of overwriting it
$ tail -F /var/log/nginx/error.log | tee -a /tmp/nginx-errors.log

# Filter first, then save only the matching lines
$ tail -F /var/log/syslog | grep --line-buffered "ERROR" | tee /tmp/errors-only.log
The date +%Y%m%d-%H%M%S idiom in the first example stamps the filename with the session start time, which makes it easy to find the file later without overwriting previous captures.
If you are piping through both grep --line-buffered and tee, keep tee at the end of the chain. Putting it before grep means the saved file contains everything, not just the filtered lines -- which may or may not be what you want, but it is worth being deliberate about.
Watching Multiple Files at Once
tail accepts multiple filenames on the same command line. When you pass more than one file, it prefixes each block of output with a header showing which file the lines came from. Combined with -f or -F, this becomes a simple multi-log monitor.
# Last 20 lines of each file, with filename headers
$ tail -n 20 /var/log/nginx/access.log /var/log/nginx/error.log

# Follow both files live
$ tail -F /var/log/nginx/access.log /var/log/nginx/error.log

# Use a glob to watch all logs in a directory
$ tail -F /var/log/nginx/*.log

# Suppress the filename headers with -q (quiet mode)
$ tail -q -n 50 /var/log/nginx/access.log /var/log/nginx/error.log
When monitoring a directory glob with tail -F /var/log/nginx/*.log, the glob is expanded at the time the command runs. Files created after the command starts will not be added automatically. For dynamic file discovery, tools like multitail or purpose-built log shippers such as Filebeat handle this better.
Exiting Automatically with --pid
One underused flag: --pid=PID. When combined with -f or -F, it tells tail to exit automatically shortly after the specified process ID no longer exists. This is exactly what you want when you are tailing a build log, a deployment script, or a test run -- you do not have to remember to press Ctrl+C when the process finishes. The process and tail must be running on the same machine for this to work.
# Start a long-running process and capture its PID
$ ./deploy.sh > /var/log/deploy.log 2>&1 &
$ DEPLOY_PID=$!

# Follow the log and exit automatically when the deploy process ends
$ tail -f --pid=$DEPLOY_PID /var/log/deploy.log

# Or in one shot: make + tail that exits when make finishes
$ make -j4 > build.log 2>&1 & tail -f --pid=$! build.log

# Watch multiple processes -- repeat --pid for each one (coreutils 9.3+)
$ tail -f --pid=1234 --pid=5678 /var/log/app.log
The --pid flag is a clean substitute for patterns like while kill -0 $PID 2>/dev/null; do sleep 1; done. It is built into tail and works with both -f and -F. Watching more than one process by repeating the flag requires a recent GNU coreutils (9.3 or later); older versions honor only the last --pid given. According to the GNU coreutils manual, tail exits shortly after all specified PIDs no longer exist -- not the instant the last process dies, but within one polling interval.
Reading by Bytes Instead of Lines
The -c flag reads a specific number of bytes from the end of the file rather than a line count. This is less common for log reading but useful when you need a predictable output size regardless of line length, or when working with binary log formats.
# Last 1024 bytes of a log file
$ tail -c 1024 /var/log/syslog

# Binary suffixes: K=1024, M=1024^2, G=1024^3 (kibibytes/mebibytes/gibibytes)
# Decimal suffixes: KB=1000, MB=1000^2, GB=1000^3 -- use K/M/G for log work
$ tail -c 10K /var/log/syslog

# Follow and print last 1MiB of output
$ tail -c 1M -f /var/log/apache2/access.log
tail and Log Rotation
Log rotation is where tail users get tripped up. The default logrotate behavior (the create directive) renames the current log file -- for example, syslog becomes syslog.1 -- and then creates a new empty file under the original name. A process using tail -f holds an open file descriptor to the original inode. After rotation, that inode is now named syslog.1, but tail -f keeps watching it. New log entries are written to the freshly created syslog (a different inode entirely), and tail -f misses all of them silently.
Some configurations use copytruncate instead, which copies the log contents to a backup file and then truncates the original to zero size. In that scenario, tail -f behaves differently: it detects the truncation, prints a "file truncated" message, and resumes reading from the start of the now-empty file. Neither behavior is what you want for long-running monitoring. Use -F.
# Read the rotated (previous) log -- plain .1 rotation
$ tail -n 50 /var/log/syslog.1

# Read a gzip-compressed rotated log with zcat
$ zcat /var/log/syslog.2.gz | tail -n 50

# Read multiple compressed rotated logs in sequence
$ zcat /var/log/syslog.*.gz | tail -n 100

# Use -F to survive rotation during a live session
$ tail -F /var/log/syslog
If you need to search across an entire log history including rotated files, zgrep is your friend. It works like grep but transparently handles gzip-compressed files alongside plain text ones.
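For example (search string and syslog paths follow the article's earlier examples; exact rotation names vary by distribution):

```shell
# One pass over the live log, the plain .1 rotation, and every .gz archive
zgrep "connection refused" /var/log/syslog /var/log/syslog.1 /var/log/syslog.*.gz

# Simpler: let the glob cover current and rotated files together;
# -h suppresses the per-file name prefix so the output sorts cleanly
zgrep -h "connection refused" /var/log/syslog* | sort
```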
Permissions and sudo
Many system log files are readable only by root or by members of the adm or syslog group. If tail returns a permissions error, you have a few options.
# Run tail with elevated privileges
$ sudo tail -n 50 /var/log/auth.log

# Check which group owns the log file
$ ls -l /var/log/auth.log
-rw-r----- 1 syslog adm 204800 Mar 18 14:22 /var/log/auth.log

# Add your user to the adm group to read without sudo (requires re-login)
$ sudo usermod -aG adm youruser

# Follow with sudo -- works the same way
$ sudo tail -F /var/log/auth.log
Adding a user to the adm group grants read access to all logs owned by that group, which can include sensitive authentication and audit data. On shared or multi-user systems, think carefully about which users should have that access. Prefer role-specific access controls over broad group membership when possible.
Performance Considerations with Large Files
tail is designed for this exact use case and handles large files efficiently. It seeks to the end of the file rather than reading it from the beginning, which means performance is not affected by file size. A 10-gigabyte log file and a 10-kilobyte log file take essentially the same time to process with tail -n 50.
The main performance concern is not tail itself but rather the downstream pipeline. Piping into a slow consumer (grep with a complex regex, an unbuffered script) on an extremely high-throughput log can cause the pipeline to back up. For production log analysis at scale, purpose-built tools like lnav, goaccess, or a log aggregation stack (Elasticsearch, Loki, Splunk) are better choices than tail | grep chains.
There is a subtler pipeline pressure point worth understanding: when tail -F generates output faster than downstream processes can consume it, the operating system's pipe buffer fills up. On Linux, the default pipe buffer is 64 KiB. When it fills, tail blocks -- it stops reading from the source file until the consumer clears space. For moderate log rates this is invisible, but under a log burst (say, a misconfigured application emitting tens of thousands of lines per second), a slow grep pattern or a script with a sleep loop will create observable latency between the log event and your terminal output. The fix is to process downstream as fast as possible, or to rate-limit at the source (application-side, or via a log shipper with buffering built in).
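One practical mitigation when the bottleneck is downstream buffering rather than slow processing: stdbuf (GNU coreutils) forces line buffering on filters that have no --line-buffered flag of their own. The sed tag below is just an example transform:

```shell
# Without stdbuf, sed block-buffers into the pipe and output arrives in bursts;
# stdbuf -oL makes it flush after every line
tail -F /var/log/syslog | stdbuf -oL sed 's/^/[syslog] /' | tee /tmp/tagged.log
```

(GNU sed also has its own -u flag for unbuffered output; stdbuf is the general-purpose tool that works with most stdio-based programs.)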
Extracting structured fields with awk
When grep alone is too blunt -- you need to match a condition in one field and print a different field -- awk is the right next step. It understands column-based log formats natively and can perform arithmetic on extracted values, which grep cannot.
# Nginx combined log format: $remote_addr - - [$time] "$request" $status $bytes
# Print only the IP and status code for requests that returned 5xx
$ tail -n 200 /var/log/nginx/access.log | awk '$9 >= 500 {print $1, $9}'

# Count 5xx errors per IP in the last 500 lines, sort by frequency
$ tail -n 500 /var/log/nginx/access.log | awk '$9 >= 500 {count[$1]++} END {for (ip in count) print count[ip], ip}' | sort -rn | head

# Live stream: alert when response bytes exceed 10MB on a single request
# (awk has no --line-buffered flag; fflush() forces per-record output)
$ tail -F /var/log/nginx/access.log | awk '$10 > 10485760 {print "LARGE RESPONSE:", $0; fflush()}'

# Extract slow requests from an app log where field 4 is duration in ms
$ tail -n 1000 /var/log/app/requests.log | awk '$4 > 2000 {print $1, $2, $4 "ms", $5}'
When piping tail -F into awk for live stream processing, the same buffering concern applies as with grep: awk buffers its output when writing into a pipe rather than a terminal, and awk has no --line-buffered flag. The fix is to call fflush() after the print statement, which flushes output after every record and is portable across awk implementations.
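A minimal sketch of the fflush() pattern, reusing the 5xx filter from above (log and output paths as in the earlier examples):

```shell
# Print IP and status for 5xx responses, flushing after every record
# so tee receives each match immediately instead of in buffered batches
tail -F /var/log/nginx/access.log \
  | awk '$9 >= 500 {print $1, $9; fflush()}' \
  | tee /tmp/5xx-live.log
```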
tail on macOS and BSD
Everything covered so far applies to GNU tail, which ships with Linux. macOS and other BSD-based systems include a different implementation -- BSD tail -- and there are a handful of differences worth knowing before you copy a command from this article onto a Mac.
The core flags (-n, -f, -F, -c, -q) behave the same way on both. The differences show up in the edges.
# --pid is GNU-only -- on macOS it fails with an illegal-option error
$ tail -f --pid=$DEPLOY_PID /var/log/deploy.log

# --retry is GNU-only -- macOS -F already implies retry behavior
# macOS -F is equivalent to GNU --follow=name --retry

# K/M/G byte suffixes are GNU-only
# On macOS, use raw byte counts with -c
$ tail -c 10240 /var/log/system.log    # 10 KiB, macOS-safe

# Install GNU coreutils on macOS with Homebrew to get full feature parity
$ brew install coreutils

# GNU tail is then available as gtail
$ gtail -f --pid=$DEPLOY_PID /var/log/deploy.log
macOS uses Unified Logging (the os_log subsystem) rather than flat text files for many system and application logs. The equivalent of journalctl on macOS is log stream for live output and log show for historical queries. Many application logs still land in ~/Library/Logs/ or /var/log/ as plain text and are fully accessible with tail.
When to Use Something Other Than tail
tail is the right tool when you want a quick snapshot of recent activity or want to follow a live stream. There are several situations where a different tool serves better. Choosing the right tool is not just a convenience decision -- using tail for a job that calls for grep over the full log history or for journalctl means you may be reading logs that do not contain the event you are looking for, while the evidence sits in a rotated archive or a different log file entirely.
Before reaching for tail, ask three questions: (1) Do I need recent-only data, or could the event have happened at any point in the file's history? (2) Is the log a plain text file, or is it inside systemd's journal or a container runtime? (3) Do I need to scroll backward, search interactively, or just observe a stream? The answers map directly to the tool choices below.
grep on the full file
If you are looking for a specific error that could have occurred anywhere in the log history, not just the last N lines, run grep directly on the file. tail only shows you recent entries -- anything older than your line count is invisible to it.
To add temporal context around a match -- see the lines before and after the event -- use grep -B (before) and -A (after). This is significantly more useful than a bare match when diagnosing cascading failures, where the cause appears several lines before the symptom you searched for.
# Show 5 lines before and 10 lines after each OOM event
$ grep -B 5 -A 10 "out of memory" /var/log/kern.log

# Show 3 lines of context around each authentication failure
$ grep -C 3 "authentication failure" /var/log/auth.log

# Count total occurrences across the full log history
$ grep -c "authentication failure" /var/log/auth.log

# Find the same event across all rotated log files in sequence
$ zgrep -h "authentication failure" /var/log/auth.log* | sort
less for interactive browsing
When you need to scroll backward through a log, search interactively, or read a large file without loading it entirely into memory, less is the right tool. Opening a 2 GB log file in nano or cat is painful; opening it in less is instant.
# Open file at the end (like tail) but stay interactive
$ less +G /var/log/syslog
# Inside less: / to search forward, ? to search backward, G to jump to end

# Follow mode inside less (like tail -f but scrollable)
$ less +F /var/log/syslog
# Press Ctrl+C to stop following, then scroll freely, then F to resume
The less +F pattern is worth internalizing separately from tail -F. It gives you the same live stream, but you can interrupt it with Ctrl+C, scroll back through the buffer to read earlier entries, and then press F to resume following. This toggle is unavailable with tail, which requires you to kill the process and run a new command with a different line count to look back.
journalctl for systemd logs
On systemd systems, service logs written to the journal are not accessible as plain text files. tail cannot read them. Use journalctl instead, which has its own -n and -f equivalents.
# Last 50 journal entries (equivalent to tail -n 50)
$ journalctl -n 50

# Follow journal live (equivalent to tail -f)
$ journalctl -f

# Last 50 lines for a specific service
$ journalctl -u nginx.service -n 50

# Follow a specific service live
$ journalctl -u nginx.service -f

# Priority filter -- errors and above only
$ journalctl -p err -n 50
docker logs and container runtimes
If your service runs inside a Docker container, the log file on the host is not where the output lives. Docker captures stdout and stderr from the container process and stores them in its own logging driver. Running tail on the host will either find nothing or show raw JSON metadata files that are not meant to be read directly.
# Last 50 lines from a container (equivalent to tail -n 50)
$ docker logs --tail 50 container_name

# Follow container output live (equivalent to tail -F)
$ docker logs -f container_name

# Last 50 lines then follow, with timestamps prepended
$ docker logs --tail 50 -f -t container_name

# For Podman (rootless containers) -- syntax is identical
$ podman logs --tail 50 -f container_name

# If your container logs TO a file (via a volume mount), tail works normally
$ tail -F /mnt/app-logs/app.log
The underlying JSON log files Docker maintains -- typically at /var/lib/docker/containers/<id>/<id>-json.log -- are technically readable with tail, but they contain raw JSON with log lines wrapped in metadata fields. You would need to pipe through jq to make the output readable. Use docker logs instead; it handles the parsing for you.
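If you do need to inspect the raw file directly, a sketch of the jq approach -- assuming the default json-file logging driver, whose per-line objects carry the captured text in a field named "log":

```shell
# Each line of the raw file is a JSON object like:
#   {"log":"GET /health 200\n","stream":"stdout","time":"2024-03-18T14:22:01Z"}
# Extract just the captured log text with jq
sudo tail -n 50 /var/lib/docker/containers/<id>/<id>-json.log | jq -r '.log'
```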
lnav for colorized, structured log viewing
lnav (Log File Navigator) is a terminal-based log viewer that understands common log formats, colorizes output by severity, allows SQL-style queries against log data, and handles multiple files and compressed archives. It is the tool to install when tail | grep chains start feeling unwieldy.
Building Alerting Pipelines from tail
Watching a log stream is passive. A more powerful pattern is wiring tail -F into a notification or action pipeline that does something when a specific pattern appears -- without a dedicated log aggregation stack. This approach has real limitations at scale, but it fills a meaningful gap for small services, self-hosted tools, or environments where standing up a full observability stack is not worth the overhead.
The general structure is: tail -F logfile | grep --line-buffered "PATTERN" | while read line; do ACTION; done. The action can be a curl request to a webhook, a message to a Slack channel, an entry into a separate alert file, or a systemd service restart. The key requirement is that every step in the chain must be non-blocking -- if the action takes more than a second or two, you will miss subsequent matches.
# Send a Slack webhook message when a CRITICAL error appears in the log
$ tail -F /var/log/app/app.log | grep --line-buffered "CRITICAL" | \
    while read -r line; do
      curl -s -X POST -H 'Content-type: application/json' \
        --data "{\"text\": \"CRITICAL: $line\"}" \
        https://hooks.slack.com/services/YOUR/WEBHOOK/URL
    done

# Deduplicated alerting: only fire if the same pattern hasn't fired in 60s
$ tail -F /var/log/app/app.log | grep --line-buffered "CRITICAL" | \
    while read -r line; do
      now=$(date +%s)
      last=$(cat /tmp/last-alert 2>/dev/null || echo 0)
      if [ $((now - last)) -gt 60 ]; then
        echo "$now" > /tmp/last-alert
        echo "ALERT $(date): $line" >> /var/log/app/alerts.log
      fi
    done

# Count error frequency: alert only when rate exceeds threshold per minute
# (systime() is a GNU awk extension)
$ tail -F /var/log/nginx/error.log | grep --line-buffered "\[error\]" | \
    awk 'BEGIN {t = systime(); n = 0}
    {
      n++
      if (systime() - t >= 60) {
        if (n > 20) { print "RATE ALERT: " n " errors/min"; fflush() }
        t = systime(); n = 0
      }
    }'
These shell-based alerting pipelines are fragile in ways that dedicated tools are not. They do not survive reboots, they cannot handle log rotation without -F, they offer no deduplication without extra logic, and a fast log burst will generate a flood of webhook requests. For anything beyond a development machine or a lightly loaded service, a proper log shipper (Filebeat, Promtail, Vector) with an alerting backend is the correct architecture. Use the patterns above to understand the mechanics and to prototype, not as production infrastructure.
The incident investigation workflow
Understanding the individual commands is one layer. Knowing how to sequence them under pressure is another. Here is a systematic approach for when something is actively broken and you are starting cold on an unfamiliar system.
# ── STEP 1: Orient -- what changed recently? ─────────────────────────────
# Check the last 100 lines of syslog for the shape of recent activity
$ sudo tail -n 100 /var/log/syslog | less

# Or on systemd systems, show all logs from the last 15 minutes
$ journalctl --since "15 minutes ago" | less

# ── STEP 2: Identify -- what services are producing errors? ──────────────
# Count error-level events by systemd unit in the last hour
$ journalctl --since "1 hour ago" -p err | awk '{print $5}' | sort | uniq -c | sort -rn | head

# Scan for error/critical/fatal patterns across all common log paths at once
$ sudo tail -n 50 /var/log/syslog /var/log/auth.log /var/log/kern.log | grep -iE "error|critical|fatal|failed|denied"

# ── STEP 3: Focus -- isolate the specific service log ────────────────────
# Follow the specific service log, filtered for errors and saved
$ sudo tail -n 100 -F /var/log/nginx/error.log | grep --line-buffered -iE "error|crit" | tee /tmp/incident-$(date +%Y%m%d-%H%M%S).log

# ── STEP 4: Correlate -- check if auth/kernel events coincide ────────────
# Watch two files simultaneously, filter both for relevant patterns
$ sudo tail -F /var/log/nginx/error.log /var/log/auth.log | grep --line-buffered -iE "error|failed|refused"

# ── STEP 5: Scope -- how long has this been happening? ───────────────────
# Search back through rotated logs for first occurrence of the error
$ zgrep -h "connection refused" /var/log/syslog* | head -3
The triage sequence above is deliberate about saving output to a timestamped file in step 3. During an active incident, the temptation is to stare at the stream without capturing it. Having a session log means you can grep it for patterns after the fact, share it with a colleague, or attach it to a post-incident review without relying on terminal scroll history.
Quick Reference
Here is the full set of patterns covered in this article, collected in one place for easy copying.
# ── BASIC READING ────────────────────────────────────
# Last 50 lines
$ tail -n 50 /var/log/syslog

# Last 10 lines (default)
$ tail /var/log/syslog

# From line 50 to end of file
$ tail -n +50 /var/log/syslog

# Last 1K (1024 bytes)
$ tail -c 1K /var/log/syslog

# ── LIVE FOLLOWING ───────────────────────────────────
# Follow by file descriptor (fast, breaks on rotation)
$ tail -f /var/log/syslog

# Follow by name (survives logrotate)
$ tail -F /var/log/syslog

# Last 50 lines then follow
$ tail -n 50 -f /var/log/syslog

# ── FILTERING ────────────────────────────────────────
# Snapshot filtered by keyword
$ tail -n 100 /var/log/syslog | grep -i "error"

# Live stream filtered (line-buffered to avoid batching)
$ tail -F /var/log/syslog | grep --line-buffered "ERROR"

# Exclude pattern from live stream
$ tail -F /var/log/nginx/access.log | grep -v "GET /health"

# ── MULTIPLE FILES ───────────────────────────────────
# View last 20 lines of each file
$ tail -n 20 /var/log/nginx/access.log /var/log/nginx/error.log

# Follow multiple files live
$ tail -F /var/log/nginx/*.log

# Quiet mode -- suppress filename headers
$ tail -q -n 50 /var/log/nginx/access.log /var/log/nginx/error.log

# ── ROTATED LOGS ─────────────────────────────────────
# Previous rotation
$ tail -n 50 /var/log/syslog.1

# Gzip-compressed rotation
$ zcat /var/log/syslog.2.gz | tail -n 50

# ── PERMISSIONS ──────────────────────────────────────
# Read a root-owned log
$ sudo tail -n 50 /var/log/auth.log

# ── SAVING OUTPUT ────────────────────────────────────
# Watch live and save to a timestamped file
$ tail -F /var/log/syslog | tee /tmp/session-$(date +%Y%m%d-%H%M%S).log

# Filter then save matching lines only
$ tail -F /var/log/syslog | grep --line-buffered "ERROR" | tee /tmp/errors.log

# ── CONTAINER LOGS ───────────────────────────────────
$ docker logs --tail 50 container_name
$ docker logs -f container_name
$ docker logs --tail 50 -f -t container_name

# ── JOURNALCTL EQUIVALENTS ───────────────────────────
$ journalctl -n 50
$ journalctl -f
$ journalctl -u nginx.service -n 50 -f
Wrapping Up
The command you came here for is simple: tail -n 50 /path/to/logfile. But knowing just the number flag leaves out the patterns that make log work on Linux feel fluid. The difference between -f and -F will save you from silently broken monitoring sessions. Piping through grep --line-buffered will keep your live streams responsive. Knowing when to reach for journalctl, less +F, or zcat instead of tail is what separates the person who can navigate a system under pressure from the person who is still scrolling.
The deeper skill is not command syntax -- it is building a coherent picture of what a system is doing from fragments of evidence. Log files are not documentation; they are traces left by processes that were not designed to explain themselves. tail gives you the most recent traces. Adding grep filters those traces by what matters. Adding awk lets you quantify them. Piping into a save file captures the investigation while it is live. Each layer transforms raw data into something you can reason about.
Most production problems leave evidence at multiple log levels and in multiple files simultaneously. The multi-file patterns and the incident triage sequence in this article are the habits that surface that cross-file evidence quickly. The alerting pipeline patterns are the bridge from reactive (you checked the log) to proactive (the log tells you). Understanding where tail ends and where purpose-built tools begin is the final layer -- not because tail is inadequate, but because knowing its limits is what lets you use it confidently within them.
The terminal rewards specificity. The more precisely you can express what you want to see, the faster you find it.