## Why S3-Compatible Storage?

Object storage is cheap, durable, and accessible from anywhere:

| Provider | Price (per GB/mo) | S3-Compatible |
|---|---|---|
| AWS S3 | $0.023 | Yes (native) |
| Backblaze B2 | $0.006 | Yes |
| Wasabi | $0.007 | Yes |
| MinIO | Free (self-hosted) | Yes |
| DigitalOcean Spaces | $0.020 | Yes |
## Setup

### Install the AWS CLI

```shell
# Debian/Ubuntu package (this ships AWS CLI v1; AWS also offers a v2 installer)
sudo apt install -y awscli
```
### Configure Credentials

```shell
aws configure
# AWS Access Key ID: your-key
# AWS Secret Access Key: your-secret
# Default region name: us-east-1
# Default output format: json
```
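To confirm the credentials actually work before wiring up backups, a quick check (requires network access to AWS; bucket name as used later in this guide):

```shell
# Show which IAM identity the CLI is using
aws sts get-caller-identity

# Confirm the bucket is reachable with these credentials
aws s3 ls s3://my-backups/
```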
For non-AWS providers, add a profile with a custom endpoint to `~/.aws/config` (the `endpoint_url` setting requires a reasonably recent AWS CLI):

```ini
[profile backblaze]
endpoint_url = https://s3.us-west-004.backblazeb2.com
region = us-west-004
```
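The access keys for that profile go in `~/.aws/credentials` under the same profile name, most easily via `aws configure --profile backblaze`. Commands then select the profile explicitly:

```shell
# List buckets on Backblaze B2 through the S3-compatible API
aws s3 ls --profile backblaze
```

Exporting `AWS_PROFILE=backblaze` makes it the default for the current shell session instead of passing `--profile` every time.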
## Backup Script

```bash
#!/bin/bash
# /opt/backup/db-to-s3.sh
set -euo pipefail

BUCKET="s3://my-backups/database"
DATE=$(date +%Y%m%d_%H%M)
TMP="/tmp/db-backup-${DATE}"
RETENTION_DAYS=30

# Dump the database (--single-transaction gives a consistent InnoDB snapshot)
mysqldump --single-transaction --routines --triggers \
    -u backup_user -p"${DB_PASS}" mydb | gzip > "${TMP}.sql.gz"

# Upload to S3
aws s3 cp "${TMP}.sql.gz" "${BUCKET}/${DATE}.sql.gz" \
    --storage-class STANDARD_IA

# Clean up local temp file
rm -f "${TMP}.sql.gz"

# Remove old backups from S3 ("aws s3 ls" prints: date time size name)
aws s3 ls "${BUCKET}/" | while read -r file_date file_time file_size file_name; do
    # Skip prefix ("PRE") lines, which have no filename field
    if [[ -z "${file_name}" ]]; then continue; fi
    if [[ $(date -d "${file_date}" +%s) -lt $(date -d "-${RETENTION_DAYS} days" +%s) ]]; then
        aws s3 rm "${BUCKET}/${file_name}"
    fi
done

echo "[$(date)] Backup uploaded: ${DATE}.sql.gz"
```
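One defensive addition worth considering before the upload step: verify the gzip stream is intact, so a truncated dump never becomes your only backup. A minimal sketch (the filename here is a stand-in for the script's `${TMP}.sql.gz`):

```shell
#!/bin/bash
set -euo pipefail
TMP="/tmp/db-backup-demo"   # stand-in for the script's ${TMP}

# Create a sample compressed dump for illustration
echo "CREATE TABLE t (id INT);" | gzip > "${TMP}.sql.gz"

# gzip -t decompresses and checks the CRC without writing any output;
# a truncated or corrupt archive exits non-zero and (with set -e) aborts
gzip -t "${TMP}.sql.gz"
echo "archive OK: $(stat -c%s "${TMP}.sql.gz") bytes"

rm -f "${TMP}.sql.gz"
```

Dropping the `gzip -t` line in between the dump and the upload in the script above costs one extra decompression pass but catches a silently failed dump.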
## S3 Lifecycle Rules

Instead of the manual cleanup loop, configure lifecycle rules:

```json
{
  "Rules": [{
    "ID": "expire-old-backups",
    "Status": "Enabled",
    "Filter": {"Prefix": "database/"},
    "Expiration": {"Days": 30},
    "Transitions": [{
      "Days": 7,
      "StorageClass": "GLACIER"
    }]
  }]
}
```
This keeps backups in Standard-IA (the class used at upload) for 7 days, moves them to Glacier for cheap archival, then deletes them after 30 days. Be aware that Standard-IA and Glacier carry minimum storage-duration charges (30 and 90 days respectively), so early transitions and deletions incur pro-rated fees.
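Assuming the JSON above is saved as `lifecycle.json`, it can be applied and verified with the `s3api` subcommands (bucket name as in the earlier examples):

```shell
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-backups \
    --lifecycle-configuration file://lifecycle.json

# Confirm the rules took effect
aws s3api get-bucket-lifecycle-configuration --bucket my-backups
```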
## Restoration

```shell
# List available backups
aws s3 ls s3://my-backups/database/

# Download and restore
aws s3 cp s3://my-backups/database/20260315_0300.sql.gz /tmp/
gunzip < /tmp/20260315_0300.sql.gz | mysql -u root -p mydb
```
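One caveat once the lifecycle rule above kicks in: objects that have transitioned to Glacier are not directly downloadable. You must first issue a restore request and wait (hours, for the Standard tier) for a temporary copy:

```shell
# Request a temporary 2-day copy of an archived backup
aws s3api restore-object \
    --bucket my-backups \
    --key database/20260315_0300.sql.gz \
    --restore-request '{"Days": 2, "GlacierJobParameters": {"Tier": "Standard"}}'

# Poll until the Restore header shows ongoing-request="false",
# then download with "aws s3 cp" as usual
aws s3api head-object --bucket my-backups --key database/20260315_0300.sql.gz
```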
## Cron Schedule

```
# Daily at 3 AM
0 3 * * * DB_PASS=your-password /opt/backup/db-to-s3.sh >> /var/log/db-backup.log 2>&1
```

Rather than embedding the password in the crontab, consider sourcing `DB_PASS` from a root-only file inside the script.
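If a backup ever runs longer than the interval between cron fires, two copies could race each other on the same temp files. A common guard is `flock`, which refuses to start a second instance while the first holds the lock (the lock path here is an arbitrary choice):

```shell
#!/bin/bash
# flock -n exits non-zero immediately if another process holds the lock,
# instead of queueing up behind it
flock -n /tmp/db-backup.lock -c 'echo "got the lock; running backup"'
```

In the crontab, this means prefixing the command: `flock -n /var/lock/db-backup.lock /opt/backup/db-to-s3.sh`.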
**Tip:** Use IAM policies to restrict the backup user to only the specific S3 bucket. If the server is compromised, the attacker can't access your other S3 data.
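A minimal policy of that shape might look like the following sketch (bucket name and prefix are placeholders matching the earlier examples; adjust to your setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-backups/database/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-backups"
    }
  ]
}
```

`ListBucket` applies to the bucket ARN itself, while the object actions need the `/database/*` object ARN; the cleanup loop in the backup script requires all four.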