When something goes wrong at the hardware or driver level on a Linux system, dmesg is where you look first. It exposes the kernel ring buffer -- a fixed-size circular log maintained by the kernel itself, written to before any userspace logging daemon is running. That means it captures everything from the moment the kernel starts: memory detection, CPU topology, driver initialization, USB device plug events, disk errors, and kernel panics.

This article is a practical quick-reference. It covers the flags you'll actually use, how to filter signal from noise, how to work with timestamps correctly, and what common kernel messages are telling you.

What Is the Kernel Ring Buffer?

The kernel maintains a fixed-size in-memory buffer where it writes diagnostic messages. Because it is circular, once it fills up, older messages are overwritten by newer ones. On a busy system, or one that has been up for a long time, early boot messages may have already been rotated out.

The dmesg command reads this buffer and prints it to stdout. On modern Linux systems using systemd-journald, kernel messages are also captured by the journal -- which means journalctl -k can give you persistent access to kernel messages even after the ring buffer has rotated. But dmesg remains the fastest way to get there, and it works even on systems without systemd.

Note

Many distributions set kernel.dmesg_restrict to 1 by default (the sysctl has existed since kernel 2.6.37). Unprivileged users will then see an empty buffer or a permission error. Use sudo dmesg to ensure you get the full output.
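
You can check whether this restriction is active by reading the sysctl directly. A minimal sketch (the sysctl.d filename is an illustrative choice, not a convention you must follow):

```shell
# 1 = unprivileged reads blocked, 0 = anyone may read the buffer
cat /proc/sys/kernel/dmesg_restrict

# Equivalent, via the sysctl tool
sysctl kernel.dmesg_restrict

# To relax it persistently (a policy decision, not a recommendation):
#   echo 'kernel.dmesg_restrict = 0' | sudo tee /etc/sysctl.d/10-dmesg.conf
#   sudo sysctl --system
```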

Essential Flags

Raw dmesg output is readable but rarely comfortable. These flags make it usable.

Human-Readable Timestamps

By default, each message is prefixed with seconds elapsed since boot -- not a real clock time. On a system that has been running for weeks, that number is nearly meaningless. Use -T to convert it to wall-clock time:

$ sudo dmesg -T

The output now shows a proper date and time for each message. Combine with | less for comfortable scrolling through long output.

Warning

The -T timestamps are computed by combining the kernel's monotonic clock offset with the current wall time at the moment dmesg runs. If the system clock was adjusted (NTP sync, timezone change, manual set) after boot, early messages may show slightly inaccurate times. For forensic-grade timestamps, use journald, which records wall-clock time at message ingestion.
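
The arithmetic -T performs can be reproduced by hand, which is useful when you only have a raw timestamp from a log excerpt. A sketch of the same calculation (the `raw` value is a hypothetical timestamp, and `date -d` is GNU date):

```shell
# boot_epoch = now - uptime; message wall time = boot_epoch + raw seconds
raw=1234                                   # hypothetical raw dmesg timestamp
uptime=$(cut -d' ' -f1 /proc/uptime)       # e.g. "98765.43"
boot_epoch=$(( $(date +%s) - ${uptime%.*} ))
date -d "@$(( boot_epoch + raw ))"
```

This also makes the Warning above concrete: the calculation uses the current wall clock, so any clock adjustment since boot shifts the result.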

Follow Mode

To watch new kernel messages in real time -- for example, while plugging in a USB device or loading a driver -- use -w (or --follow):

$ sudo dmesg -Tw

This is the kernel equivalent of tail -f on a syslog file. The terminal stays open and prints new messages as they arrive. Press Ctrl+C to exit.

Filter by Log Level

The kernel assigns each message a log level from 0 (emergency) to 7 (debug). Use -l to filter by one or more levels:

log level filtering
# Show only errors and critical messages
$ sudo dmesg -T -l err,crit

# Show only warnings
$ sudo dmesg -T -l warn

# Show everything at notice level and above (0-5)
$ sudo dmesg -T -l emerg,alert,crit,err,warn,notice

The available log levels, from highest to lowest severity, are: emerg, alert, crit, err, warn, notice, info, and debug. On a healthy system, you should see nothing at emerg or alert. Regular err messages often indicate driver problems or hardware failures worth investigating.
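
For a quick health overview, you can tally messages by severity using dmesg -x (--decode), which prefixes each line with its facility and level. A sketch; the live form is sudo dmesg -x piped into the same awk, and the sample lines below are fabricated to mirror the -x output format:

```shell
# Count kernel messages per log level from `dmesg -x`-style output
printf '%s\n' \
  'kern  :err   : [  100.1] ata1.00: I/O error' \
  'kern  :warn  : [  200.2] thermal: throttling active' \
  'kern  :err   : [  300.3] ata1.00: I/O error' |
awk -F: '{lvl=$2; gsub(/ /,"",lvl); count[lvl]++}
         END {for (l in count) print count[l], l}'
```

On the sample input this reports two err lines and one warn line; a nonzero count at crit or above is your cue to read the matching lines in full.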

Filter by Facility

Kernel messages are also tagged with a facility indicating which subsystem produced them. Use -f to filter:

facility filtering
# Kernel messages only (excludes userspace daemon messages)
$ sudo dmesg -T -f kern

# Combine facility and level filters
$ sudo dmesg -T -f kern -l err

Common facilities include kern (the kernel itself), user (userspace programs), daemon (system daemons), and syslog. In practice, kern is the one you'll use most when diagnosing hardware and driver issues.

Searching and Filtering Output

For targeted investigation, pipe dmesg through grep. This is faster than scrolling and essential on systems with verbose kernel output.

grep patterns
# Find USB-related messages
$ sudo dmesg -T | grep -i usb

# Find disk and storage errors
$ sudo dmesg -T | grep -i -E 'error|fail|I/O error|ata[0-9]'

# Find network interface events
$ sudo dmesg -T | grep -i eth0
$ sudo dmesg -T | grep -i 'link is'

# Find out of memory (OOM) killer events
$ sudo dmesg -T | grep -i 'oom\|killed process'

# Find CPU-related messages (thermal, microcode)
$ sudo dmesg -T | grep -i 'cpu\|thermal\|microcode'
Pro Tip

Add -B 3 or -A 3 to your grep to show a few lines of context before or after each match. Kernel error messages are frequently preceded by a device identification line that tells you exactly which piece of hardware is involved: sudo dmesg -T | grep -i 'i/o error' -B 3

Reading Common Kernel Messages

Knowing the commands is only part of it. Here is a guide to what you'll commonly see and what it means.

Boot and Hardware Detection

The first section of a fresh boot covers memory detection, CPU topology, and ACPI table parsing. Lines like BIOS-provided physical RAM map and ACPI: IRQ0 used by override are normal and informational. What you want to watch for are lines tagged with [Firmware Bug] or ACPI BIOS Error -- these indicate firmware issues that may cause instability.
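
A single grep surfaces these firmware warnings from the full boot output. The live form is sudo dmesg piped into the same pattern; the sample lines here are fabricated for illustration:

```shell
# Match [Firmware Bug] and ACPI BIOS Error/Warning lines, count matches
printf '%s\n' \
  '[    0.004000] ACPI BIOS Error (bug): Could not resolve symbol' \
  '[    0.120000] [Firmware Bug]: the BIOS has corrupted hw-PMU resources' \
  '[    0.200000] ACPI: IRQ0 used by override' |
grep -ciE '\[firmware bug\]|acpi bios (error|warning)'
# prints: 2  (the ACPI IRQ override line is normal and not matched)
```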

Driver Initialization

After the kernel loads, drivers initialize and report what they found. A successfully loaded driver usually prints something like:

example kernel output
[Fri Mar 20 09:14:02 2026] e1000e 0000:00:1f.6 eth0: renamed from enp0s31f6
[Fri Mar 20 09:14:02 2026] e1000e 0000:00:1f.6 eth0: NIC Link is Up 1000 Mbps Full Duplex

If a driver fails to load or encounters an error, you'll see lines containing failed, error, or can't. Record these before you start troubleshooting; correlating their timestamps with your application logs helps tie a driver problem to the runtime behavior you actually observed.

Storage and Filesystem Errors

Disk errors are among the most important things dmesg surfaces. An ATA or SCSI error looks like this:

disk error example
[Fri Mar 20 14:23:11 2026] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
[Fri Mar 20 14:23:11 2026] ata1.00: failed command: READ FPDMA QUEUED
[Fri Mar 20 14:23:11 2026] blk_update_request: I/O error, dev sda, sector 4096000

Repeated I/O errors on the same device are a strong signal of impending disk failure. Cross-reference with smartctl -a /dev/sda to check SMART data. A single isolated error after a cable reseat or power event is less concerning, but a stream of them is not something to defer.
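
To see at a glance whether errors cluster on one device, extract and count the I/O error lines. The live form is sudo dmesg piped into the same grep; the sample lines are fabricated to mirror the messages above:

```shell
# Tally I/O errors per device; a rising count on one disk means check SMART
printf '%s\n' \
  '... I/O error, dev sda, sector 4096000' \
  '... I/O error, dev sda, sector 4096008' \
  '... I/O error, dev sdb, sector 81920' |
grep -o 'I/O error, dev [a-z]*' | sort | uniq -c
#   2 I/O error, dev sda
#   1 I/O error, dev sdb
```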

OOM Killer Events

When the system runs out of memory and the OOM killer activates, dmesg records exactly which process was killed and why. The output is verbose but structured:

OOM killer output (excerpt)
[Fri Mar 20 03:41:07 2026] Out of memory: Killed process 8921 (java) total-vm:4194304kB,
                             anon-rss:3801088kB, file-rss:4096kB, shmem-rss:0kB
[Fri Mar 20 03:41:07 2026] oom_reaper: reaped process 8921 (java), now anon-rss:0kB

If you find OOM events in dmesg on a production system, the immediate diagnosis is that something consumed more memory than was available. Whether the fix is adding swap, increasing RAM, tuning the JVM heap, or capping a runaway process depends on context -- but dmesg tells you it happened and which process paid the price.
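
Because the OOM report is structured, the victim's PID, command, and resident-set size can be pulled out with one sed expression. A sketch; the live form is sudo dmesg piped into the same sed, and the output field names are illustrative choices:

```shell
# Extract PID, command name, and anonymous RSS from an OOM kill line
printf '%s\n' \
  '[Fri Mar 20 03:41:07 2026] Out of memory: Killed process 8921 (java) total-vm:4194304kB, anon-rss:3801088kB, file-rss:4096kB, shmem-rss:0kB' |
sed -n 's/.*Killed process \([0-9]*\) (\([^)]*\)).*anon-rss:\([0-9]*\)kB.*/pid=\1 cmd=\2 anon_rss_kB=\3/p'
# prints: pid=8921 cmd=java anon_rss_kB=3801088
```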

Clearing the Buffer and Checking Its Size

You can clear the kernel ring buffer with sudo dmesg -C. This is useful before a test run -- plug in a device, reproduce a condition, then read dmesg to see only messages from that event. Clearing requires root and only affects the in-memory buffer, not the journal.

buffer management
# Clear the buffer (root required)
$ sudo dmesg -C

# Print and then clear in one step
$ sudo dmesg -c

# The buffer's size is fixed at boot (CONFIG_LOG_BUF_SHIFT, overridable
# with the log_buf_len= kernel parameter); check the build-time default:
$ grep CONFIG_LOG_BUF_SHIFT /boot/config-$(uname -r)
Caution

Clearing the ring buffer with dmesg -C is irreversible for any messages not already captured by journald or a syslog daemon. On a system where journald persistence is configured, kernel messages are already saved to disk -- so clearing is safe. On a system logging only to the ring buffer, you lose those messages permanently.

dmesg vs. journalctl -k

On systemd-based systems, journalctl -k is often a better tool for historical kernel message investigation because journald persists messages across reboots. The two commands overlap significantly but differ in important ways.

dmesg reads directly from the in-memory ring buffer. It is always available, even on minimal systems without systemd, and is the correct tool for live diagnostics and situations where you need the most recent kernel output instantly. It also works over SSH before systemd-journald has fully started -- which matters during early-boot troubleshooting.

journalctl -k queries the journal database, which survives reboots (if persistence is configured) and supports richer filtering syntax. Use journalctl -k -b -1 to see kernel messages from the previous boot -- something dmesg cannot do on its own, since the ring buffer is cleared at shutdown. For investigating a crash that required a hard reboot, journalctl -k -b -1 is the right starting point, assuming persistent journald logging was enabled before the event.

Use dmesg for what is happening now. Use journalctl -k for what happened last night.

How to Use dmesg for Common Diagnostic Tasks

Step 1: Run dmesg with human-readable timestamps

Run sudo dmesg -T to print the full kernel ring buffer with human-readable wall-clock timestamps instead of the default seconds-since-boot format. Pipe through less for comfortable scrolling: sudo dmesg -T | less

Step 2: Filter kernel messages by log level

Use sudo dmesg -l err,warn to display only error and warning messages. Combine with -T for readable timestamps. Use sudo dmesg -l crit,alert,emerg to surface only the highest-severity kernel messages.

Step 3: Search dmesg output for a specific device or driver

Pipe dmesg through grep to find messages about a specific device or subsystem. For example: sudo dmesg -T | grep -i usb to inspect USB-related events, or sudo dmesg -T | grep -i 'error\|fail' to catch all error-related lines in one pass.

Frequently Asked Questions

What does dmesg stand for?

dmesg stands for "diagnostic message" (sometimes cited as "display message"). It prints the kernel ring buffer, which holds messages written by the Linux kernel during boot and while the system is running -- covering hardware detection, driver initialization, kernel errors, and system events.

Why do dmesg timestamps show seconds instead of a real date?

By default, dmesg shows timestamps as seconds elapsed since boot, not wall-clock time. To see human-readable dates and times, run dmesg -T (or dmesg --ctime). The option comes from util-linux and is available on all current major distributions.

Does dmesg require root to run?

On most modern distributions, the kernel restricts access to dmesg output for unprivileged users via the kernel.dmesg_restrict sysctl (available since kernel 2.6.37). Running dmesg without sudo may return an empty buffer or a permission error. Use sudo dmesg to ensure you see the full output.