
Common Mistakes New VPS Users Make (And How to Avoid Them)

By Admin · Mar 15, 2026 · Updated Apr 23, 2026 · 4 min read

Every VPS administrator has made mistakes that caused downtime, security incidents, or data loss. Learning from others' mistakes is far less painful than learning from your own. Here are the most common pitfalls and how to avoid them.

Mistake 1: Running Everything as Root

The most common and dangerous mistake. Root has unlimited power, and a single typo can destroy your server.

The Problem

# As root, this typo deletes your entire web directory:
rm -rf /var/www /myapp    # Note: space between /var/www and /myapp
# Intended: rm -rf /var/www/myapp
# Actual: Deletes /var/www AND tries to delete /myapp

The Fix

# Create a regular user and use sudo
adduser deploy
usermod -aG sudo deploy

# Disable root SSH login
# Match the default commented-out line too (e.g. "#PermitRootLogin prohibit-password")
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl restart sshd
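Before restarting, it's worth validating the edited config and keeping an escape hatch open. A sketch of a safe workflow:

```shell
# Validate sshd_config syntax first -- a broken config plus a
# restart can lock you out of the server entirely
sudo sshd -t && sudo systemctl restart sshd

# Keep your current SSH session open and confirm the new settings
# work from a SECOND terminal before logging out
ssh deploy@your-server
```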

Mistake 2: No Firewall

A fresh VPS with no firewall exposes every running service to the internet. Automated scanners find open ports within minutes.

The Fix

# Set up UFW immediately
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
sudo ufw enable

# Verify only expected ports are open
sudo ufw status numbered
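Two optional hardening steps on top of the rules above. Note the example IP `203.0.113.10` is a placeholder, not a value from this guide:

```shell
# "limit" instead of "allow" rate-limits SSH: UFW blocks an IP
# that opens 6 or more connections within 30 seconds
sudo ufw limit ssh

# If you always connect from a fixed IP, restrict SSH to it
# (replace 203.0.113.10 with your own address)
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp
```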

Mistake 3: No Backups

"It won't happen to me" is the mantra of every admin who's lost data. Hardware fails, software bugs corrupt data, and human error is inevitable.

The Fix

# Automated daily database backup
cat > /usr/local/bin/backup-db.sh <<'EOF'
#!/bin/bash
BACKUP_DIR=/var/backups/mysql
DATE=$(date +%F)
mkdir -p "$BACKUP_DIR"
mysqldump --all-databases | gzip > "$BACKUP_DIR/all-databases-$DATE.sql.gz"
# Keep only last 14 days
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +14 -delete
EOF
chmod +x /usr/local/bin/backup-db.sh

# Schedule it via cron.d (entries there need a user field)
echo "0 3 * * * root /usr/local/bin/backup-db.sh" | sudo tee /etc/cron.d/backup-db
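Don't wait for 3 AM to find out the script is broken. Run it once by hand and sanity-check the output (this sketch assumes the script writes to /var/backups/mysql, as in the example above):

```shell
# Run the backup manually once
sudo /usr/local/bin/backup-db.sh

# The archive should exist, be non-trivial in size, and
# decompress cleanly (gzip -t verifies integrity without extracting)
ls -lh /var/backups/mysql/
sudo gzip -t /var/backups/mysql/all-databases-$(date +%F).sql.gz && echo "backup OK"
```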

Mistake 4: Using Password Authentication for SSH

Password-based SSH is vulnerable to brute-force attacks. Even with a strong password, thousands of bots are trying to log in every day.

The Fix

# Check how many brute-force attempts you're getting
sudo grep -c "Failed password" /var/log/auth.log
# Often thousands per day on a fresh server!

# Switch to key-based authentication
# On your LOCAL machine:
ssh-keygen -t ed25519
ssh-copy-id deploy@your-server

# Then disable password auth on the server:
sudo nano /etc/ssh/sshd_config
# Set: PasswordAuthentication no
# Set: PubkeyAuthentication yes
sudo systemctl restart sshd
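Key-based auth stops brute-forcing from succeeding, but fail2ban (item 9 on the checklist below) stops the noise too by banning repeat offenders. A minimal sketch, with illustrative retry/ban values:

```shell
# Install fail2ban to ban IPs after repeated failed logins
sudo apt install fail2ban

# Enable the sshd jail via a local override (never edit jail.conf,
# it gets overwritten on upgrades)
sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[sshd]
enabled = true
maxretry = 5
bantime = 1h
EOF

sudo systemctl restart fail2ban
sudo fail2ban-client status sshd
```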

Mistake 5: Not Updating Software

Unpatched software is the #1 vector for server compromises. Known vulnerabilities are actively exploited by automated tools.

The Fix

# Enable automatic security updates
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# Also update regularly
sudo apt update && sudo apt upgrade -y
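Two useful checks around the upgrade cycle on Debian/Ubuntu systems:

```shell
# Preview what would be upgraded before committing
apt list --upgradable

# Some updates (notably the kernel) only take effect after a
# reboot; Debian/Ubuntu flag this with a marker file
[ -f /var/run/reboot-required ] && echo "Reboot needed"
```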

Mistake 6: Exposing Database Ports to the Internet

# Check if MySQL is listening on all interfaces (BAD)
ss -tuln | grep 3306
# 0.0.0.0:3306 means it's accessible from anywhere!

# Fix: Bind MySQL to localhost only
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
# Set: bind-address = 127.0.0.1
sudo systemctl restart mysql

# Same for PostgreSQL, Redis, MongoDB, etc.
# NEVER expose database ports to the public internet
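If you do need to reach the database from your workstation, tunnel over SSH instead of opening the port. The username `appuser` below is a placeholder for your own database user:

```shell
# Run on your LOCAL machine: forward local port 3306 to the
# server's loopback-bound MySQL
ssh -L 3306:127.0.0.1:3306 deploy@your-server

# Then point your client at localhost; traffic travels inside
# the encrypted SSH session
mysql -h 127.0.0.1 -P 3306 -u appuser -p
```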

Mistake 7: Ignoring Disk Space

# Logs can fill up a disk quickly
# Set up log rotation (usually pre-configured, but verify)
cat /etc/logrotate.d/nginx

# Monitor disk usage
df -h
# Set up an alert when disk usage exceeds 80%
# Add a cron.d entry (in crontab syntax, % is special and must be escaped as \%):
echo '*/30 * * * * root [ $(df / --output=pcent | tail -1 | tr -d " \%") -gt 80 ] && echo "Disk usage critical" | mail -s "Disk Alert" you@example.com' | sudo tee /etc/cron.d/disk-alert
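When the alert fires, the next question is what is actually eating the disk. A quick triage sketch:

```shell
# Largest directories first; -x stays on the root filesystem
sudo du -xh / --max-depth=2 2>/dev/null | sort -h | tail -20

# The systemd journal is a frequent offender -- cap its size
sudo journalctl --vacuum-size=200M
```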

Mistake 8: No Monitoring

If your server goes down at 2 AM and nobody is watching, you won't know until users complain. Set up basic monitoring from day one.

The Fix

  • Uptime monitoring — Use a free service like UptimeRobot or Hetrixtools
  • Resource monitoring — Install netdata for real-time dashboards
  • Log monitoring — At minimum, check logs weekly for errors
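An external service should be your primary uptime check (a server can't report its own total outage), but a self-hosted probe catches application-level failures. A minimal sketch, assuming `mail` is configured and using example addresses:

```shell
# Curl the site every 5 minutes; -f makes curl fail on HTTP
# errors, so a 500 also triggers the alert
echo '*/5 * * * * root curl -fsS --max-time 10 https://example.com > /dev/null || echo "Site unreachable" | mail -s "Uptime Alert" you@example.com' | sudo tee /etc/cron.d/uptime-check
```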

Mistake 9: Editing Config Files Without Backups

# Always backup before editing critical configs
sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
sudo nano /etc/nginx/nginx.conf

# Test Nginx config before reloading
sudo nginx -t
# If the test passes, reload:
sudo systemctl reload nginx
# If it fails, restore the backup:
sudo cp /etc/nginx/nginx.conf.bak /etc/nginx/nginx.conf

Mistake 10: Not Testing Restores

Having backups is useless if you can't restore them. The most common backup mistake is never testing the restore process.

The Fix

# Schedule quarterly restore tests:
# 1. Spin up a test VPS
# 2. Restore your latest backup
# 3. Verify the application works
# 4. Destroy the test server
# 5. Document the process and any issues found
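For the gzipped mysqldump from Mistake 3, step 2 looks roughly like this. The filename date and the `myapp.users` table are illustrative placeholders, not values from this guide:

```shell
# Stream the compressed dump straight into MySQL on the test server
gunzip -c /var/backups/mysql/all-databases-2026-04-01.sql.gz | mysql -u root -p

# Spot-check that the data actually came back
mysql -u root -p -e "SHOW DATABASES; SELECT COUNT(*) FROM myapp.users;"
```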

Quick Reference: First-Day Security Checklist

  1. Create a non-root user with sudo
  2. Set up SSH key authentication
  3. Disable root login and password auth
  4. Enable the firewall (allow only SSH, HTTP, HTTPS)
  5. Update all packages
  6. Enable automatic security updates
  7. Bind databases to localhost
  8. Set up automated backups
  9. Install fail2ban
  10. Set up uptime monitoring
