Time to First Byte (TTFB) measures the time between a client sending a request and receiving the first byte of the response. It is not itself a Core Web Vital, but it puts a floor under Largest Contentful Paint (LCP): nothing can render before the first byte arrives. High TTFB usually indicates server-side problems that no amount of frontend optimization can fix. This guide breaks TTFB into its components and provides systematic fixes for each layer.
Understanding TTFB Components
TTFB consists of several measurable stages:
TTFB = DNS Lookup + TCP Connection + TLS Handshake + Server Processing + Network Transit
Typical breakdown for a slow page (800ms TTFB):
- DNS: 50ms (6%)
- TCP: 30ms (4%)
- TLS: 100ms (12%)
- Server: 550ms (69%)
- Transit: 70ms (9%)
Measuring TTFB
# Detailed timing with curl
curl -o /dev/null -w "\
DNS: %{time_namelookup}s\n\
Connect: %{time_connect}s\n\
TLS: %{time_appconnect}s\n\
TTFB: %{time_starttransfer}s\n\
Total: %{time_total}s\n\
" https://example.com
# Multiple samples for reliability
for i in $(seq 1 10); do
curl -s -o /dev/null -w "%{time_starttransfer}\n" https://example.com
done | awk '{ sum += $1; n++ } END { printf "Avg TTFB: %.3fs\n", sum/n }'
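The mean hides tail latency, which is often what users actually feel. A small helper (the name ttfb_stats is mine) that computes a nearest-rank median and p95 from one sample per line on stdin; pipe the sampling loop above into it:

```shell
# Nearest-rank median and p95 for TTFB samples (one value per line on stdin)
ttfb_stats() {
  sort -n | awk '{ a[NR] = $1 }
  END {
    if (NR == 0) exit 1
    m = a[int((NR + 1) / 2)]            # lower median (nearest rank)
    i = int(NR * 0.95); if (i < 1) i = 1
    printf "median: %.3fs p95: %.3fs\n", m, a[i]
  }'
}
# usage (requires network):
# for i in $(seq 1 10); do
#   curl -s -o /dev/null -w "%{time_starttransfer}\n" https://example.com
# done | ttfb_stats
```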
# From browser: DevTools > Network tab > select request > Timing tab
# Shows "Waiting for server response" (the server-processing portion of TTFB)
Layer 1: DNS Optimization
# Check DNS resolution time
dig +stats example.com | grep "Query time"
# Solutions:
# 1. Use a fast DNS provider (Cloudflare, Route53, Google Cloud DNS)
# 2. Reduce DNS lookups with connection pooling
# 3. Preconnect to required origins in HTML
<link rel="dns-prefetch" href="//api.example.com">
<link rel="preconnect" href="https://cdn.example.com" crossorigin>
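To compare providers concretely, you can sample query time per resolver. A sketch (the resolver IPs are just examples); dig prints ";; Query time: N msec" in its stats block, and the helper extracts the number so it can be scripted:

```shell
# Extract the millisecond query time from dig's stats output
query_ms() { grep "Query time" | grep -oE "[0-9]+"; }

# usage (requires network):
# for r in 1.1.1.1 8.8.8.8; do
#   echo "$r: $(dig @"$r" example.com | query_ms)ms"
# done
```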
Layer 2: TLS Optimization
# Nginx TLS optimization
ssl_protocols TLSv1.3 TLSv1.2;
ssl_prefer_server_ciphers off;
# Enable TLS 1.3 0-RTT (saves a round trip for returning visitors)
ssl_early_data on;
# 0-RTT data is replayable; forward this flag so the application can
# reject unsafe early requests with 425 Too Early
proxy_set_header Early-Data $ssl_early_data;
# OCSP stapling (prevents client-side OCSP lookup)
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 8.8.8.8 valid=300s;
ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
# Session tickets for TLS resumption
ssl_session_tickets on;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
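To confirm resumption actually works end-to-end, openssl s_client is handy: its output contains a line starting "New," for a full handshake and "Reused," for a resumed session. A sketch (the helper name and scratch path are mine):

```shell
# Pull the handshake kind ("New" or "Reused") out of s_client output
handshake_kind() { grep -oE "^(New|Reused)" | head -n 1; }

# usage (requires network):
# openssl s_client -connect example.com:443 -sess_out /tmp/sess </dev/null 2>/dev/null | handshake_kind   # New
# openssl s_client -connect example.com:443 -sess_in  /tmp/sess </dev/null 2>/dev/null | handshake_kind   # Reused
```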
Layer 3: Server Processing (The Biggest Opportunity)
Application-Level Caching
# Full-page cache (most impactful for CMS sites)
# Nginx microcaching for dynamic content
proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=app:10m max_size=1g inactive=60m;
server {
location / {
proxy_cache app;
proxy_cache_valid 200 60s;
proxy_cache_use_stale error timeout updating;
proxy_cache_background_update on;
proxy_cache_lock on;
add_header X-Cache-Status $upstream_cache_status;
proxy_pass http://backend;
}
}
# This alone can reduce TTFB from 500ms to a few tens of milliseconds
# for cache hits, at the cost of content being up to 60s stale
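To verify the microcache is serving hits, read back the X-Cache-Status header added in the config above; the first request to a URL should report MISS and a repeat within 60s should report HIT. A sketch (the helper name is mine):

```shell
# Print the X-Cache-Status response header value (HIT, MISS, BYPASS, ...)
cache_status() { awk 'tolower($1) == "x-cache-status:" { sub(/\r$/, "", $2); print $2 }'; }

# usage (requires network):
# curl -s -o /dev/null -D - https://example.com | cache_status
```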
Continuous TTFB Monitoring
#!/usr/bin/env bash
# check-ttfb.sh: sample TTFB for each URL, append to a log, alert on regressions
# (the URL list is illustrative)
for url in https://example.com https://example.com/api; do
  ttfb=$(curl -s -o /dev/null -w "%{time_starttransfer}" "$url")
  echo "$(date -Is) $url ${ttfb}s" >> /var/log/ttfb.log
  # Alert if TTFB exceeds threshold
  if (( $(echo "$ttfb > 1.0" | bc -l) )); then
    echo "ALERT: TTFB ${ttfb}s for $url"
  fi
done
# Run every 5 minutes via cron:
# */5 * * * * /usr/local/bin/check-ttfb.sh
TTFB Targets
- Excellent: Under 200ms — achievable with edge caching or well-cached dynamic content
- Good: 200-500ms — typical for optimized dynamic applications
- Needs improvement: 500ms-1s — usually indicates missing caching or slow queries
- Poor: Over 1s — significant server-side issues requiring investigation
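The targets above can be applied mechanically when scripting alerts or reports. A small helper (the name ttfb_bucket is mine) mapping a measurement in seconds to its bucket:

```shell
# Map a TTFB measurement (seconds) to the target buckets above
ttfb_bucket() {
  awk -v t="$1" 'BEGIN {
    t += 0  # force numeric comparison
    if      (t < 0.2)  print "excellent"
    else if (t < 0.5)  print "good"
    else if (t <= 1.0) print "needs improvement"
    else               print "poor"
  }'
}
```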
Summary
Reducing TTFB requires working through each layer systematically: fast DNS, optimized TLS, aggressive caching, efficient database queries, and edge delivery. For most sites, the single biggest improvement comes from implementing page caching — either Varnish, Nginx microcaching, or CDN edge caching. Start by measuring each component with curl timing, focus on the slowest layer first, and work your way through until TTFB consistently stays under 200ms for cached pages and under 500ms for dynamic content.