The drop-in

Drop the file in /etc/sysctl.d/ rather than editing /etc/sysctl.conf — that way distro upgrades won't clobber your changes, and you can rm the file to revert.

sudo nano /etc/sysctl.d/99-network-tuning.conf

Paste:

# --- TCP behaviour ---
net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_workaround_signed_windows = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_low_latency = 1
net.ipv4.tcp_mtu_probing = 1
net.ipv4.ip_no_pmtu_disc = 0
net.ipv4.tcp_frto = 2
net.ipv4.tcp_moderate_rcvbuf = 1

# --- Congestion control ---
# 'bbr' is available on kernels >= 4.9 (the stock default is CUBIC).
# 'illinois' is a good high-bandwidth/high-latency choice on older kernels.
net.ipv4.tcp_congestion_control = bbr
net.core.default_qdisc = fq

# --- Socket buffer sizes (bytes) ---
net.core.wmem_default = 262144
net.core.wmem_max = 16777216
net.core.rmem_default = 262144
net.core.rmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# --- Connection backlog ---
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 16384
net.ipv4.tcp_max_syn_backlog = 8192

Apply without rebooting:

sudo sysctl --system
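sysctl --system exits quietly, so it's easy to miss a key that didn't take (a typo, or a knob the kernel doesn't have). The helper below is a sketch: it reads each key = value line back from /proc/sys (which is what sysctl -n prints) and flags mismatches.

```shell
# Sketch: read each "key = value" line from a sysctl drop-in, look up the
# live value in /proc/sys, and flag anything that did not apply.
check_sysctl_file() {
  while IFS= read -r line; do
    case $line in \#*|'') continue ;; esac    # skip comments and blank lines
    key=${line%%=*};   key=$(echo $key)       # unquoted echo trims whitespace
    want=${line#*=};   want=$(echo $want)
    path="/proc/sys/$(echo "$key" | tr . /)"  # net.core.foo -> net/core/foo
    have=$(cat "$path" 2>/dev/null)
    have=$(echo $have)                        # normalise tabs to single spaces
    [ "$have" = "$want" ] || echo "MISMATCH $key: want '$want', have '$have'"
  done < "$1"
}
```

Run it as check_sysctl_file /etc/sysctl.d/99-network-tuning.conf; silence means every key matched.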

What each knob actually does

tcp_rfc1337 — protects against the TIME-WAIT assassination hazard described in RFC 1337. Cheap to enable; no real downside.
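To see how much TIME-WAIT churn a box actually carries, count the sockets in that state. The one-liner below reads /proc/net/tcp directly, where the fourth column is the socket state and 06 means TIME_WAIT; it's equivalent to ss -tan state time-wait without needing iproute2.

```shell
# Count sockets currently in TIME-WAIT (state code 06 in /proc/net/tcp).
cat /proc/net/tcp /proc/net/tcp6 2>/dev/null | awk '$4 == "06"' | wc -l
```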

tcp_window_scaling, tcp_sack — on by default in any modern kernel; listed for completeness.

tcp_mtu_probing = 1 — enables blackhole-MTU detection. Useful when a path has an undeclared MTU bottleneck (PPPoE, IPv6 tunnels, some VPNs).

tcp_congestion_control = bbr — Google's BBR algorithm tends to outperform CUBIC (the default) on lossy or long-fat networks. It's available on Linux 4.9+, and most distros from 2020 onward ship it as a module that loads on demand. Pair it with the fq queueing discipline (net.core.default_qdisc = fq): kernels before 4.13 depend on fq for BBR's pacing, and it remains the recommended qdisc even on newer kernels that can pace internally.

rmem / wmem — the receive/send buffers. The defaults are too small for 1 Gbps+ servers; raising the maximum to 16 MiB gives the kernel room to grow the per-socket buffer for high-bandwidth flows. The kernel still negotiates per-connection sizing — you're just lifting the ceiling.
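The 16 MiB ceiling isn't magic: size it to the path's bandwidth-delay product (BDP), the amount of data that must be in flight to keep the pipe full. A quick back-of-envelope with hypothetical figures (1 Gbit/s link, 100 ms RTT):

```shell
# BDP = bandwidth (bytes/s) * round-trip time (s).
bw_bits_per_s=1000000000      # hypothetical: 1 Gbit/s path
rtt_ms=100                    # hypothetical: 100 ms RTT
bdp_bytes=$(( bw_bits_per_s / 8 * rtt_ms / 1000 ))
echo "BDP: $bdp_bytes bytes"  # 12500000 bytes, roughly 12 MiB
# A 16 MiB (16777216-byte) ceiling covers this path with headroom.
```

If your paths are faster or longer (10 Gbit/s, intercontinental RTTs), redo the arithmetic and raise the ceiling accordingly.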

somaxconn / netdev_max_backlog — raise the listen-backlog and packet-receive-queue ceilings. If short connection bursts or SYN floods leave kernel: TCP: request_sock_TCP: Possible SYN flooding in dmesg, these (together with tcp_max_syn_backlog) are the right knobs. An nf_conntrack: table full message is a different problem: that's the connection-tracking table, raised via net.netfilter.nf_conntrack_max.
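Note that somaxconn is a ceiling, not a grant: the kernel clamps the backlog an application passes to listen(2) to min(backlog, somaxconn), so the application must ask for a large backlog as well. You can read the live cap straight from procfs:

```shell
# The effective accept-queue limit is min(listen() backlog, somaxconn).
cat /proc/sys/net/core/somaxconn
```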

Verify

Read back the active value of any tunable:

sysctl net.ipv4.tcp_congestion_control
sysctl net.core.somaxconn

Confirm BBR is loaded and available:

sysctl net.ipv4.tcp_available_congestion_control
# Expected: reno cubic bbr
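If bbr is missing from that list, the tcp_bbr module isn't loaded. Setting net.ipv4.tcp_congestion_control = bbr normally pulls it in automatically, and sudo modprobe tcp_bbr loads it by hand. A quick check that needs no extra tooling:

```shell
# tcp_bbr appears in /proc/modules when loaded as a module; a kernel
# with BBR compiled in won't list it there, but it will still show up
# in tcp_available_congestion_control.
grep -w tcp_bbr /proc/modules || echo "tcp_bbr not in /proc/modules"
```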

Tip

Run iperf3 against another well-connected box (or a public iperf3 server) before and after applying changes, and compare. Tuning blind regresses performance as often as it improves it; measure on your actual workload.

Beyond sysctl: the tuned package

Red Hat's tuned daemon ships a set of named profiles (throughput-performance, network-latency, virtual-guest, etc.) that bundle sysctl, CPU governor, and IRQ-affinity tweaks. On RHEL/Rocky/AlmaLinux it's installed by default; on Debian/Ubuntu it's apt install tuned.

sudo apt install tuned
sudo systemctl enable --now tuned
sudo tuned-adm profile throughput-performance
tuned-adm active

If a named profile gets you 90% of the way there, that's much easier to reason about (and audit) than a hand-rolled drop-in.

What I left out

The original version of this note included net.ipv4.tcp_fack and net.ipv4.tcp_frto_response. Both are obsolete: FACK loss detection was retired in the 4.15 rework, and the FRTO response knob disappeared even earlier, so on a current kernel setting them either produces a "no such file or directory" error from sysctl or does nothing. I dropped them.

tcp_congestion_control = illinois still works on older kernels, but BBR has supplanted it on almost any modern Linux machine you'd actually deploy.