Resolving "Too Many Open Files" Errors

By Admin · Feb 25, 2026 · Updated Apr 23, 2026

What Does It Mean?

Linux limits the number of file descriptors each process can open, and also enforces a system-wide cap. When a limit is reached, the affected process can no longer open files, create sockets, or accept connections: the failing call returns EMFILE ("Too many open files") for the per-process limit, or ENFILE ("Too many open files in system") for the system-wide one.
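
To see the failure in isolation, you can lower the soft limit in a throwaway shell; this is a minimal sketch, and the change affects only that shell and its children:

# fds 0-2 (stdin, stdout, stderr) are already open, so a limit of 3 leaves
# no room for cat to open another file
bash -c 'ulimit -n 3; cat /etc/hostname'
# cat: /etc/hostname: Too many open files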

Check Current Limits

# System-wide limit
cat /proc/sys/fs/file-max

# Per-user soft/hard limits
ulimit -Sn  # Soft limit
ulimit -Hn  # Hard limit

# Limit for a specific process
cat /proc/PID/limits | grep "open files"

# How many files are currently open
cat /proc/sys/fs/file-nr
# Columns: allocated handles, allocated but unused, system-wide maximum
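
To check both the limit and the current usage for a running service in one go, a sketch assuming pgrep is available and the process is named nginx (run with sudo if the process belongs to another user):

PID=$(pgrep -o nginx)                 # oldest process matching the name
grep "open files" /proc/$PID/limits   # the limits that apply to it
ls /proc/$PID/fd | wc -l              # descriptors it currently has open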

Increase Limits Temporarily

ulimit -n 65535
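
This affects only the current shell and processes started from it, and a non-root user cannot raise the soft limit beyond the hard limit. To raise the limit of a process that is already running without restarting it, prlimit (part of util-linux) can change it in place; PID below is a placeholder for the target process ID:

# Raise both the soft and hard nofile limits of a running process (needs root)
sudo prlimit --pid PID --nofile=65535:65535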

Increase Limits Permanently

Edit /etc/security/limits.conf:

* soft nofile 65535
* hard nofile 65535
root soft nofile 65535
root hard nofile 65535
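
These limits are applied by pam_limits at login, so they only affect new sessions; log out and back in, then verify from the fresh shell:

# In a new login session
ulimit -Sn  # should now report 65535
ulimit -Hn

Note that systemd services do not read limits.conf; set their limits in the unit file as described in the next section.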

For Systemd Services

# Create a drop-in override for the service (opens an editor)
sudo systemctl edit nginx

# Add these lines to the override file, then save and exit:
[Service]
LimitNOFILE=65535

# Reload systemd and restart the service
sudo systemctl daemon-reload
sudo systemctl restart nginx
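
To confirm the new limit took effect, a quick check (the --value flag needs a reasonably recent systemd):

# What systemd has configured for the unit
systemctl show nginx --property=LimitNOFILE

# What the running main process actually has
grep "open files" /proc/$(systemctl show nginx --property=MainPID --value)/limits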

System-Wide Maximum

# Add to /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/)
fs.file-max = 2097152    # system-wide cap on open file handles
fs.nr_open = 1048576     # highest value a per-process nofile limit can be raised to

# Apply the changes without rebooting
sudo sysctl -p
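
Verify that the new values are active:

sysctl fs.file-max fs.nr_open
cat /proc/sys/fs/file-nr   # current usage against the new maximum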

Find What Is Opening Files

# Count open files per process
lsof | awk '{print $1, $2}' | sort | uniq -c | sort -rn | head -20

# Open files for specific process
lsof -p PID | wc -l

# Open files for specific user
lsof -u www-data | wc -l
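
Note that lsof also lists memory-mapped files, working directories, and other entries that do not count against the descriptor limit, so its totals overstate real usage. Counting the entries under /proc/PID/fd gives the exact number; the loop below is a sketch that ranks processes this way (run it as root to see other users' processes):

# Rank processes by number of open file descriptors
for p in /proc/[0-9]*; do
  echo "$(ls "$p/fd" 2>/dev/null | wc -l) $(cat "$p/comm" 2>/dev/null) (pid ${p##*/})"
done | sort -rn | head -20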

Common Causes

  • Web server with too many concurrent connections
  • Database with many client connections
  • Application not closing file handles or sockets (a descriptor leak; see the check below)
  • Log files being opened but never closed
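
To confirm a leak, watch the suspect process's descriptor count over time; a count that climbs steadily and never drops points to handles being opened but not closed (PID is a placeholder):

# Re-check the fd count every 5 seconds (Ctrl-C to stop)
watch -n 5 'ls /proc/PID/fd | wc -l'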
