How to Optimize TCP Performance with Kernel Tuning

By Admin · Mar 15, 2026 · Updated Apr 23, 2026

Linux kernel TCP parameters significantly affect network throughput, latency, and connection handling. Proper tuning can dramatically improve performance for high-traffic web servers, API endpoints, and real-time applications.

Understanding TCP Buffers

# TCP buffers determine how much data can be in-flight
# Larger buffers = higher throughput for high-latency connections
# Too large = memory waste, bufferbloat

# View current buffer settings
sysctl net.core.rmem_max        # Maximum receive buffer
sysctl net.core.wmem_max        # Maximum send buffer
sysctl net.ipv4.tcp_rmem        # TCP receive buffer (min, default, max)
sysctl net.ipv4.tcp_wmem        # TCP send buffer (min, default, max)
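A common sizing rule is to make the maximum buffer at least the bandwidth-delay product (BDP) of the path. A minimal sketch of the arithmetic, using illustrative figures for a 1 Gbit/s link with 50 ms RTT (substitute your own link speed and RTT):

```shell
# Rule of thumb: max buffer >= bandwidth-delay product (BDP)
BANDWIDTH_BPS=1000000000   # link speed in bits per second (1 Gbit/s)
RTT_MS=50                  # round-trip time in milliseconds

# BDP in bytes = bandwidth (bytes/s) * RTT (s)
BDP=$(( BANDWIDTH_BPS / 8 * RTT_MS / 1000 ))
echo "BDP: ${BDP} bytes"
```

Here the BDP works out to about 6 MB, so the 16 MB (16777216-byte) maximums used below leave comfortable headroom.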

High-Traffic Web Server Tuning

# /etc/sysctl.d/99-tcp-tuning.conf

# Increase buffer sizes for high throughput
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Increase connection backlog
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.core.netdev_max_backlog = 65535

# Enable TCP Fast Open
net.ipv4.tcp_fastopen = 3

# Allow reuse of TIME_WAIT sockets for new outgoing connections
net.ipv4.tcp_tw_reuse = 1

# Increase ephemeral port range
net.ipv4.ip_local_port_range = 1024 65535

# Enable BBR congestion control (requires the tcp_bbr module --
# see "Enable BBR Congestion Control" below)
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

# Reduce keepalive time
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 3

# Apply
sudo sysctl -p /etc/sysctl.d/99-tcp-tuning.conf
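After applying, it is worth spot-checking that the kernel actually accepted the new values, since a setting can be silently skipped (for example, when a required module is not loaded). A small sketch, reading a few of the tuned keys directly from /proc:

```shell
# Each line should echo the value set in the tuning file above
for key in net/core/rmem_max net/core/somaxconn net/ipv4/tcp_fastopen; do
    printf '%s = %s\n' "$key" "$(cat /proc/sys/$key)"
done
```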

Enable BBR Congestion Control

# BBR (Bottleneck Bandwidth and RTT) improves throughput significantly
# Especially on long-distance or lossy connections

# Check current congestion control
sysctl net.ipv4.tcp_congestion_control

# Enable BBR
sudo modprobe tcp_bbr
echo "tcp_bbr" | sudo tee /etc/modules-load.d/bbr.conf
echo "net.core.default_qdisc=fq" | sudo tee /etc/sysctl.d/99-bbr.conf
echo "net.ipv4.tcp_congestion_control=bbr" | sudo tee -a /etc/sysctl.d/99-bbr.conf
sudo sysctl -p /etc/sysctl.d/99-bbr.conf

# Verify
sysctl net.ipv4.tcp_congestion_control
# Should output: bbr
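If the output is not `bbr`, check which algorithms the kernel currently offers; `bbr` only appears in this list once the tcp_bbr module has been loaded:

```shell
# List the congestion control algorithms available right now
cat /proc/sys/net/ipv4/tcp_available_congestion_control
# cubic is the usual default; bbr shows up only after `modprobe tcp_bbr`
```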

Connection Tracking Tuning

# For servers handling many connections (load balancers, proxies)
# Add to /etc/sysctl.d/99-tcp-tuning.conf; requires the nf_conntrack module
net.netfilter.nf_conntrack_max = 1048576
net.netfilter.nf_conntrack_tcp_timeout_established = 600
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 30

# Check current connection tracking
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
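When the count approaches the maximum, new connections are dropped with "nf_conntrack: table full" kernel errors, so the utilization ratio is the number to watch. A minimal sketch (the /proc files only exist while the nf_conntrack module is loaded):

```shell
# Report conntrack table utilization as a percentage
count=$(cat /proc/sys/net/netfilter/nf_conntrack_count)
max=$(cat /proc/sys/net/netfilter/nf_conntrack_max)
echo "conntrack: ${count}/${max} ($(( count * 100 / max ))%)"
```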

Benchmarking Network Performance

# Install iperf3
sudo apt install iperf3

# Server mode
iperf3 -s

# Client mode (from another machine)
iperf3 -c server-ip -t 30 -P 4
# -t 30: run for 30 seconds
# -P 4: 4 parallel streams

# Compare before and after tuning
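For a scripted before/after comparison, iperf3's JSON output (`-J`) is easier to work with than the human-readable summary. A sketch — the `server-ip` placeholder and the `after.json` filename are illustrative:

```shell
# Run the same test in JSON mode and capture the results
iperf3 -c server-ip -t 30 -P 4 -J > after.json

# Extract the aggregate receive rate with python3 (no jq dependency assumed)
python3 -c 'import json; d = json.load(open("after.json")); print("%.2f Gbit/s" % (d["end"]["sum_received"]["bits_per_second"] / 1e9))'
```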

File Descriptor Limits

# Each network connection uses a file descriptor
# Check current limits
ulimit -n

# Increase for the current session
ulimit -n 65535

# Increase permanently (applies to new login sessions)
echo "* soft nofile 65535" | sudo tee -a /etc/security/limits.conf
echo "* hard nofile 65535" | sudo tee -a /etc/security/limits.conf

# For systemd services (limits.conf does not apply to them)
sudo systemctl edit nginx
# Add under [Service]:
#   LimitNOFILE=65535
# Then restart the service to pick up the new limit
sudo systemctl restart nginx
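To confirm a limit actually reached a running process, read the process's own view of it under /proc. nginx is used here only as an example service name:

```shell
# Show the open-file limit as seen by the first nginx process
pid=$(pidof nginx | awk '{print $1}')
grep 'Max open files' "/proc/${pid}/limits"

# The same check works for the current shell
grep 'Max open files' /proc/self/limits
```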
