
How to Configure System Resource Limits with ulimit

By Admin · Mar 2, 2026 · Updated Apr 24, 2026

Resource limits control how much of the system's resources a single process or user can consume. Properly configured limits prevent runaway processes from exhausting memory, file descriptors, or CPU time, which could crash your entire server. They also ensure fair resource allocation in multi-user environments and are essential for tuning high-performance applications like databases and web servers.

Understanding Soft and Hard Limits

Linux resource limits come in two types:

  • Soft limit: The current enforced limit. A user or process can raise it up to the hard limit.
  • Hard limit: The absolute maximum that only root can increase. Acts as a ceiling for soft limits.
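You can see the relationship between the two in a quick shell session (the exact numbers depend on your system):

```shell
# Read the hard ceiling for open files
hard=$(ulimit -Hn)
echo "hard limit: $hard"

# Lowering the soft limit always works...
ulimit -Sn 1024
echo "soft limit is now $(ulimit -Sn)"

# ...and a non-root user may raise it again, but only up to $hard.
# Raising the hard limit itself requires root.
```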

View your current limits with:

# Show all current limits
ulimit -a

# Show specific limits
ulimit -n    # Open files (soft)
ulimit -Hn   # Open files (hard)
ulimit -u    # Max user processes
ulimit -v    # Virtual memory (KB)

Common Resource Limits

Flag  Resource               Default    Recommended (Server)
-n    Open files             1024       65535
-u    Max user processes     7823       65535
-l    Max locked memory      64 KB      unlimited
-s    Stack size             8192 KB    8192 KB
-f    Max file size          unlimited  unlimited
-v    Virtual memory         unlimited  unlimited
-c    Core dump size         0          unlimited (for debugging)

Temporary Changes with ulimit

Set limits for the current shell session:

# Increase open file limit for this session
ulimit -n 65535

# Increase max processes
ulimit -u 65535

# These changes only last until the session ends

Persistent Changes via limits.conf

For permanent changes, edit /etc/security/limits.conf:

# Format: <domain> <type> <item> <value>

# Increase open files for all users
*               soft    nofile          65535
*               hard    nofile          65535

# Increase max processes for all users
*               soft    nproc           65535
*               hard    nproc           65535

# Specific settings for the web server user
www-data        soft    nofile          65535
www-data        hard    nofile          65535

# Database user settings
mysql           soft    nofile          65535
mysql           hard    nofile          65535
mysql           soft    memlock         unlimited
mysql           hard    memlock         unlimited
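Note that limits.conf is read by PAM at session start, so the entries above apply only to new login sessions. A quick way to verify without a full re-login (using the mysql user from the example above):

```shell
# Start a fresh login session as the target user and print its limits
su - mysql -s /bin/sh -c 'ulimit -n; ulimit -l'
```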

Systemd Service Limits

Services started by systemd do not go through PAM, so they ignore limits.conf. Instead, configure limits in the service unit file:

# Edit the service override
sudo systemctl edit nginx.service

# Add under [Service]
[Service]
LimitNOFILE=65535
LimitNPROC=65535
LimitMEMLOCK=infinity

# Reload and restart
sudo systemctl daemon-reload
sudo systemctl restart nginx
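Since systemctl edit opens an interactive editor, a non-interactive equivalent is to write the drop-in file directly and then confirm systemd picked it up. The sketch below assumes nginx as in the example above; the drop-in filename is arbitrary:

```shell
# Create the drop-in directory and override file by hand
sudo mkdir -p /etc/systemd/system/nginx.service.d
sudo tee /etc/systemd/system/nginx.service.d/override.conf >/dev/null <<'EOF'
[Service]
LimitNOFILE=65535
LimitNPROC=65535
EOF

# Reload, restart, and confirm the new limit is in effect
sudo systemctl daemon-reload
sudo systemctl restart nginx
systemctl show nginx -p LimitNOFILE
```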

Kernel-Level Limits

Some limits require kernel parameter changes via sysctl:

# Maximum system-wide open files
sudo sysctl -w fs.file-max=2097152

# Maximum number of PIDs
sudo sysctl -w kernel.pid_max=4194304

# Make persistent in /etc/sysctl.conf
echo "fs.file-max = 2097152" | sudo tee -a /etc/sysctl.conf
echo "kernel.pid_max = 4194304" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
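To judge whether fs.file-max actually needs raising, compare current system-wide usage against the limit:

```shell
# Three fields: allocated handles, unused handles, system maximum
cat /proc/sys/fs/file-nr

# The maximum should match the third field above
sysctl fs.file-max
```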

Verifying Limits for Running Processes

Check the actual limits applied to a running process:

# Find the PID
pidof nginx

# View its limits
cat /proc/<PID>/limits
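In practice you usually want a single limit rather than the whole table. Both of the following work (again assuming nginx; prlimit ships with util-linux):

```shell
# Filter the limits table for the open-files row
grep "Max open files" "/proc/$(pidof -s nginx)/limits"

# prlimit prints the same information, and as root can even change
# limits of a live process
sudo prlimit --pid "$(pidof -s nginx)" --nofile
```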

Application-Specific Tuning

  • Nginx / LiteSpeed: Set worker_rlimit_nofile in the config and LimitNOFILE in the systemd unit to handle thousands of concurrent connections
  • MySQL / MariaDB: Increase open_files_limit in my.cnf and ensure the systemd unit matches with LimitNOFILE
  • Redis: Requires high nofile limits and vm.overcommit_memory=1 in sysctl for background saving
  • Elasticsearch: Needs LimitMEMLOCK=infinity and vm.max_map_count=262144

Troubleshooting

  • If limits do not apply after reboot, ensure the pam_limits.so module is enabled in /etc/pam.d/common-session
  • Verify with su - username -c "ulimit -a" to see limits as a specific user
  • Check /var/log/syslog for "too many open files" errors that indicate limits are too low
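The checks above can be run as one short diagnostic pass. Note that /etc/pam.d/common-session is the Debian/Ubuntu path; RHEL-family systems use /etc/pam.d/system-auth instead:

```shell
# 1. Confirm pam_limits is loaded for login sessions
grep pam_limits /etc/pam.d/common-session

# 2. View limits as the user the service runs as
sudo su - www-data -s /bin/sh -c "ulimit -a"

# 3. Hunt for file-descriptor exhaustion in the logs
grep -i "too many open files" /var/log/syslog
```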
