How to Use dmesg to Diagnose Hardware and Kernel Issues

By Admin · Mar 15, 2026 · Updated Apr 23, 2026

dmesg displays the kernel ring buffer — a log of messages from the kernel about hardware detection, driver loading, and system events. It is your primary tool for diagnosing hardware problems, driver issues, and boot failures.

Basic dmesg Usage

# View all kernel messages
dmesg

# View with human-readable timestamps
dmesg -T

# View with ISO 8601 timestamps (sortable and unambiguous)
dmesg --time-format=iso

# Follow new messages in real time (like tail -f)
dmesg -w

# Show only recent messages (last 30 lines)
dmesg | tail -30

Filtering Messages by Priority

# Show only error-level messages
dmesg -l err

# Show errors and warnings
dmesg -l err,warn

# Priority levels (0-7):
# emerg (0) — System is unusable
# alert (1) — Action must be taken immediately
# crit  (2) — Critical conditions
# err   (3) — Error conditions
# warn  (4) — Warning conditions
# notice(5) — Normal but significant
# info  (6) — Informational
# debug (7) — Debug messages

# Show only critical and above
dmesg -l emerg,alert,crit
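To see at a glance how the buffer breaks down by level, a short loop over the level names works. This is a sketch: levels with no messages, or a dmesg restricted to root, simply report 0.

```shell
# Count buffered messages per priority level.
for lvl in emerg alert crit err warn notice info debug; do
  printf '%-7s %s\n' "$lvl" "$(dmesg -l "$lvl" 2>/dev/null | wc -l)"
done
```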

Filtering by Facility

# Show only kernel messages (hardware and drivers log here)
dmesg -f kern

# Show only userspace messages
dmesg -f user

# Common facilities: kern, user, daemon, syslog

Diagnosing Common Issues

Disk Errors

# Check for disk errors (bad sectors, I/O errors)
dmesg -T | grep -iE "error|fail|bad|i/o"

# Look for messages about specific disks
dmesg -T | grep -iE "sd[a-z]|vd[a-z]|nvme"

# Common disk error messages:
# "I/O error, dev sda" — Physical disk problem
# "EXT4-fs error" — Filesystem corruption
# "task blocked for more than 120 seconds" — Disk too slow/hung

Memory Issues

# Check for Out-of-Memory (OOM) events
dmesg -T | grep -iE "oom|out of memory|killed process"

# Look for memory hardware errors
dmesg -T | grep -iE "mce|memory|ecc|edac"

# Example OOM message:
# Out of memory: Killed process 1234 (mysqld) total-vm:2048000kB
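When you find an OOM kill, the victim's PID and name can be extracted from the message. The sed pattern below is a sketch matched against the example line; the exact message format varies between kernel versions.

```shell
# Extract PID and process name from an OOM-killer message.
# Sample line; on a real system, pipe `dmesg` into the same sed.
line="Out of memory: Killed process 1234 (mysqld) total-vm:2048000kB"
echo "$line" | sed -n 's/.*Killed process \([0-9]*\) (\([^)]*\)).*/PID \1: \2/p'
# Prints: PID 1234: mysqld
```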

Network Issues

# Check network interface status
dmesg -T | grep -iE "eth|link|network|nic"

# Look for link up/down events
dmesg -T | grep -iE "link is|carrier"

# Example messages:
# "eth0: link up" — Network cable connected
# "eth0: link down" — Network cable disconnected or NIC failure

CPU Issues

# Check for CPU errors
dmesg -T | grep -iE "cpu|mce|microcode"

# Machine Check Exceptions indicate hardware problems
dmesg -T | grep -i "machine check"

After a Crash or Unexpected Reboot

# Check what happened before the last crash
# dmesg shows current boot only; for previous boot:
journalctl -b -1 -p err    # Previous boot, errors only
journalctl -b -1 | tail -50 # Last 50 lines of previous boot

# Common crash indicators in dmesg:
# "Kernel panic" — Critical kernel error
# "BUG:" — Kernel bug detected
# "Call Trace:" — Stack trace (shows where the crash happened)
# "RIP:" — Instruction pointer at crash point

Clearing dmesg

# Clear the kernel ring buffer (useful after resolving issues)
sudo dmesg -C

# Note: This does not delete the messages from journald
# They are still available via: journalctl -k

Creating a Diagnostic Report

#!/bin/bash
# Save a complete diagnostic report (run with sudo if unprivileged
# dmesg access is blocked by kernel.dmesg_restrict)
echo "=== System Info ===" > /tmp/diag-report.txt
uname -a >> /tmp/diag-report.txt
echo "" >> /tmp/diag-report.txt
echo "=== dmesg Errors ===" >> /tmp/diag-report.txt
dmesg -T -l err,crit,alert,emerg >> /tmp/diag-report.txt
echo "" >> /tmp/diag-report.txt
echo "=== dmesg Warnings ===" >> /tmp/diag-report.txt
dmesg -T -l warn >> /tmp/diag-report.txt
echo "" >> /tmp/diag-report.txt
echo "=== Memory ===" >> /tmp/diag-report.txt
free -h >> /tmp/diag-report.txt
echo "" >> /tmp/diag-report.txt
echo "=== Disk ===" >> /tmp/diag-report.txt
df -h >> /tmp/diag-report.txt
echo "Report saved to /tmp/diag-report.txt"

Make checking dmesg part of your troubleshooting routine. When something goes wrong, dmesg often contains the first clue about the root cause.
