TCP Tuning

By default, CentOS uses the following TCP parameters:

One can use "sysctl -w" to change these parameters at runtime. For changes to persist after a reboot, edit /etc/sysctl.conf (typically as root) and reload it with "sysctl -p".
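For example, to raise the maximum receive window immediately and record the change for future boots (the value is the FasterData recommendation discussed below):

sysctl -w net.core.rmem_max=16777216
echo "net.core.rmem_max = 16777216" >> /etc/sysctl.conf
sysctl -p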

A description of all of these parameters can be found in the Linux kernel documentation (Documentation/networking/ip-sysctl.txt).

Other helpful sites: SpeedGuide.net, PSC High Performance.

The output below is trimmed to the TCP-relevant net.ipv4 and net.core parameters:

$ sysctl -a

net.ipv4.tcp_slow_start_after_idle = 1
net.ipv4.tcp_dma_copybreak = 4096
net.ipv4.tcp_workaround_signed_windows = 0
net.ipv4.tcp_base_mss = 512
net.ipv4.tcp_mtu_probing = 0
net.ipv4.tcp_abc = 0
net.ipv4.tcp_congestion_control = bic
net.ipv4.tcp_tso_win_divisor = 3
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_no_metrics_save = 0
net.ipv4.ipfrag_max_dist = 64
net.ipv4.ipfrag_secret_interval = 600
net.ipv4.tcp_low_latency = 0
net.ipv4.tcp_frto = 0
net.ipv4.tcp_tw_reuse = 0
net.ipv4.icmp_ratemask = 6168
net.ipv4.icmp_ratelimit = 1000
net.ipv4.tcp_adv_win_scale = 2
net.ipv4.tcp_app_win = 31
net.ipv4.tcp_rmem = 4096        87380   4194304
net.ipv4.tcp_wmem = 4096        16384   4194304
net.ipv4.tcp_mem = 196608       262144  393216
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_ecn = 0
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_fack = 1
net.ipv4.tcp_orphan_retries = 0
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_stdurg = 0
net.ipv4.tcp_abort_on_overflow = 0
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_fin_timeout = 60
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_max_tw_buckets = 180000
net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_syn_retries = 5
net.ipv4.tcp_retrans_collapse = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1

net.core.netdev_budget = 300
net.core.somaxconn = 128
net.core.xfrm_larval_drop = 0
net.core.xfrm_acq_expires = 30
net.core.xfrm_aevent_rseqth = 2
net.core.xfrm_aevent_etime = 10
net.core.optmem_max = 20480
net.core.message_burst = 10
net.core.message_cost = 5
net.core.netdev_max_backlog = 1000
net.core.dev_weight = 64
net.core.rmem_default = 129024
net.core.wmem_default = 129024
net.core.rmem_max = 131071
net.core.wmem_max = 131071

net.core.rmem_max: Maximum TCP receive window. FasterData recommends 16MB (16777216) for 10G paths with a few parallel streams, or 32MB for very long end-to-end 10G or 40G paths.

net.core.wmem_max: Maximum TCP send window. FasterData recommends 16MB (16777216) for 10G paths with a few parallel streams, or 32MB for very long end-to-end 10G or 40G paths.

net.ipv4.tcp_rmem: Per-connection memory for TCP receive buffers, given as three values: the min, default, and max number of bytes. FasterData recommends 4096 87380 16777216, and changing only the 3rd value if you need more than 16MB.

net.ipv4.tcp_wmem: Per-connection memory for TCP send buffers, in the same min/default/max format. FasterData recommends 4096 87380 16777216, again changing only the 3rd value.

net.core.netdev_max_backlog: FasterData recommends 30000, and increasing it for 10G or faster links.

net.ipv4.tcp_congestion_control: According to FasterData, htcp (Hamilton TCP) or cubic performs better than the default on long, high-speed links.
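The commands below apply these recommendations. Note that htcp is usually built as a loadable kernel module; to see which algorithms are currently available, and to load Hamilton TCP if it is missing:

sysctl net.ipv4.tcp_available_congestion_control
modprobe tcp_htcp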

sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 87380 16777216"
sysctl -w net.core.netdev_max_backlog=30000
sysctl -w net.ipv4.tcp_congestion_control=htcp
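(Multi-value parameters such as tcp_rmem must be quoted on the sysctl command line.) To make these settings persist across reboots, the equivalent lines can be added to /etc/sysctl.conf:

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 87380 16777216
net.core.netdev_max_backlog = 30000
net.ipv4.tcp_congestion_control = htcp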

SpeedGuide also recommends the following settings:

sysctl -w net.ipv4.tcp_sack=1
sysctl -w net.ipv4.tcp_window_scaling=1
sysctl -w net.ipv4.tcp_timestamps=0

Setting tcp_timestamps = 0 is controversial, and we chose to leave it enabled: FasterData points out that some alternative congestion control algorithms require accurate timestamps.

The Ethernet cards we are testing with are:

eth1: Tigon3 [partno(BCM95704A7) rev 2003 PHY(5704)] (PCI:66MHz:64-bit) 10/100/1000Base-T Ethernet 00:e0:81:2c:5e:bd
eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] WireSpeed[1] TSOcap[1]
eth1: dma_rwctrl[763f0000] dma_mask[64-bit]
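These lines come from the kernel log (dmesg). Offload capabilities such as TSO, along with the current link settings, can also be inspected at runtime with ethtool:

ethtool -k eth1
ethtool eth1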

Also, we will experiment with the MTU, using jumbo frames of size 9000:

ifconfig eth1 mtu 9000
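An ifconfig change like this does not survive a reboot; on CentOS the MTU can be made permanent in the interface configuration file (assuming eth1 is managed by the standard network scripts):

# /etc/sysconfig/network-scripts/ifcfg-eth1
MTU=9000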

And to test that full-size frames traverse the path without fragmentation (-M do sets the Don't Fragment bit; 8972 bytes of ICMP payload plus 8 bytes of ICMP header and 20 bytes of IP header equals 9000):

ping -M do -s 8972 192.168.0.3
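If some hop on the path cannot carry 9000-byte frames, ping reports an error such as "Frag needed and DF set" rather than silently fragmenting the packet.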