How to Set Up Email Alerts for Server Events

By Admin · Mar 2, 2026 · Updated Apr 23, 2026 · 4 min read

Proactive email alerting ensures you know about critical server events before they escalate into outages. On your Breeze instance, a handful of small shell scripts, cron jobs, and systemd units is all it takes to get notified when disk space runs low, a service crashes, CPU or memory usage spikes, or suspicious login activity occurs.

Prerequisites

  • A working mail transfer agent (Postfix or msmtp) on your Breeze server
  • The mailutils or mailx package installed
  • A destination email address for receiving alerts

Install the mail utilities if they are not already present:

sudo apt install -y mailutils
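
Before wiring up any alerts, confirm that outbound mail actually delivers (substitute your own address for the placeholder):

echo "Mail relay test from $(hostname)" | mail -s "[Breeze Alert] Test" admin@yourdomain.com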

Disk Space Alerts

Create a script that checks disk usage and sends an alert when it exceeds a threshold:

sudo nano /usr/local/bin/disk-alert.sh
#!/bin/bash
THRESHOLD=85
MAILTO="admin@yourdomain.com"
HOSTNAME=$(hostname)

# Skip pseudo filesystems (tmpfs, snap images) that legitimately sit at 100%
df --output=pcent,target -x tmpfs -x devtmpfs -x squashfs | tail -n +2 | while read -r usage mount; do
    pct=$(echo "$usage" | tr -d '% ')
    if [ "$pct" -ge "$THRESHOLD" ]; then
        echo "WARNING: Disk usage on $mount is at ${pct}% on $HOSTNAME" | \
            mail -s "[Breeze Alert] Disk space critical on $HOSTNAME" "$MAILTO"
    fi
done
sudo chmod +x /usr/local/bin/disk-alert.sh

Schedule it with cron to run every hour:

echo "0 * * * * root /usr/local/bin/disk-alert.sh" | sudo tee /etc/cron.d/disk-alert

Service Health Monitoring

Monitor critical services and alert when they go down:

sudo nano /usr/local/bin/service-alert.sh
#!/bin/bash
SERVICES=("nginx" "mysql" "postfix" "php8.2-fpm")
MAILTO="admin@yourdomain.com"
HOSTNAME=$(hostname)

for svc in "${SERVICES[@]}"; do
    if ! systemctl is-active --quiet "$svc"; then
        echo "CRITICAL: $svc is not running on $HOSTNAME" | \
            mail -s "[Breeze Alert] $svc down on $HOSTNAME" "$MAILTO"

        # The service is down; attempt an automatic restart
        # (no sudo needed when the script runs as root from cron)
        systemctl restart "$svc"
    fi
done
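
As with the disk check, make the script executable and schedule it with cron; a five-minute interval is a common choice (adjust to taste):

sudo chmod +x /usr/local/bin/service-alert.sh
echo "*/5 * * * * root /usr/local/bin/service-alert.sh" | sudo tee /etc/cron.d/service-alert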

CPU and Memory Alerts

Alert when CPU load or memory usage spikes above safe levels:

sudo nano /usr/local/bin/resource-alert.sh
#!/bin/bash
MAILTO="admin@yourdomain.com"
HOSTNAME=$(hostname)
CPU_THRESHOLD=90
MEM_THRESHOLD=90

# Check CPU load (1-minute average vs core count)
CORES=$(nproc)
LOAD=$(awk -v cores="$CORES" '{print int($1 * 100 / cores)}' /proc/loadavg)
if [ "$LOAD" -ge "$CPU_THRESHOLD" ]; then
    top -bn1 | head -20 | mail -s "[Breeze Alert] High CPU on $HOSTNAME (${LOAD}%)" "$MAILTO"
fi

# Check memory usage
MEM_USED=$(free | awk '/^Mem:/ {printf "%.0f", $3/$2 * 100}')
if [ "$MEM_USED" -ge "$MEM_THRESHOLD" ]; then
    free -h | mail -s "[Breeze Alert] High memory on $HOSTNAME (${MEM_USED}%)" "$MAILTO"
fi
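
Make this script executable and schedule it the same way (the five-minute interval is illustrative, not a requirement):

sudo chmod +x /usr/local/bin/resource-alert.sh
echo "*/5 * * * * root /usr/local/bin/resource-alert.sh" | sudo tee /etc/cron.d/resource-alert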

Security Event Alerts

Monitor for suspicious activity such as failed SSH logins:

sudo nano /usr/local/bin/ssh-alert.sh
#!/bin/bash
MAILTO="admin@yourdomain.com"
HOSTNAME=$(hostname)
THRESHOLD=5

# Count failures logged during the previous clock hour (assumes the
# traditional "%b %e %H:%M:%S" syslog timestamp format in auth.log)
FAILED=$(grep "Failed password" /var/log/auth.log | \
    awk -v d="$(date --date='1 hour ago' '+%b %e %H')" '$0 ~ d' | wc -l)

if [ "$FAILED" -ge "$THRESHOLD" ]; then
    grep "Failed password" /var/log/auth.log | tail -20 | \
        mail -s "[Breeze Alert] $FAILED failed SSH logins on $HOSTNAME" "$MAILTO"
fi
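
Because the script examines the previous clock hour, scheduling it a few minutes past each hour keeps the window fully covered:

sudo chmod +x /usr/local/bin/ssh-alert.sh
echo "5 * * * * root /usr/local/bin/ssh-alert.sh" | sudo tee /etc/cron.d/ssh-alert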

Using systemd for Service Failure Alerts

Configure systemd to email you whenever a critical service fails:

sudo nano /etc/systemd/system/alert-email@.service
[Unit]
Description=Send email alert for %i failure

[Service]
Type=oneshot
ExecStart=/bin/bash -c 'echo "Service %i failed on $(hostname) at $(date)" | mail -s "[Breeze Alert] %i failed" admin@yourdomain.com'

Then add OnFailure=alert-email@%n.service to any unit file you want to monitor:

[Unit]
Description=My Application
OnFailure=alert-email@%n.service
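
After creating or editing unit files, reload systemd; you can also fire the alert unit by hand to confirm the mail path (the "test" instance name here is just a placeholder):

sudo systemctl daemon-reload
sudo systemctl start alert-email@test.service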

Best Practices

  • Avoid alert fatigue — set sensible thresholds and add rate limiting so repeat alerts are suppressed (see the sketch after this list)
  • Use a dedicated alert mailbox — keep alerts separate from your regular email
  • Test your alerts — deliberately trigger each condition to verify delivery
  • Combine with a monitoring stack — for more advanced setups, feed alerts into tools like Prometheus Alertmanager
  • Include context — always include the hostname, timestamp, and relevant diagnostic output in alert messages
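
One lightweight way to implement the rate limiting mentioned above is a timestamp marker file. A minimal sketch follows; the path and one-hour window are illustrative, and the snippet would go just before the mail command in any of the scripts above:

# Suppress repeat alerts: exit early if we already alerted within the last hour
STATE=/var/tmp/disk-alert.last
NOW=$(date +%s)
if [ -f "$STATE" ] && [ $((NOW - $(cat "$STATE"))) -lt 3600 ]; then
    exit 0
fi
echo "$NOW" > "$STATE"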
