====== TCP Tuning ======

By default, CentOS uses the following TCP parameters (the full "sysctl -a" listing is shown below):
  
One can use "sysctl -w" to change these parameters. In order for changes to persist after a reboot, one needs to edit "/etc/sysctl.conf" (typically as root).
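
For example, to persist a single setting and apply it immediately (the parameter and value here are only an illustration; the recommended values are discussed further down):
<code>
# append the setting to /etc/sysctl.conf (as root) ...
echo "net.core.rmem_max = 16777216" >> /etc/sysctl.conf

# ... then load /etc/sysctl.conf without waiting for a reboot
sysctl -p
</code>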

Click [[http://www.frozentux.net/ipsysctl-tutorial/ipsysctl-tutorial.html#AEN454|here]] for a description of all of these parameters.

Other helpful sites:

[[http://www.speedguide.net/articles/linux-tweaking-121|SpeedGuide.net]]

[[http://www.psc.edu/networking/projects/tcptune/|PSC High Performance]]

<code>
$ sysctl -a

net.ipv4.tcp_slow_start_after_idle = 1
net.ipv4.tcp_dma_copybreak = 4096
net.ipv4.tcp_workaround_signed_windows = 0
net.ipv4.tcp_base_mss = 512
net.ipv4.tcp_mtu_probing = 0
net.ipv4.tcp_abc = 0
net.ipv4.tcp_congestion_control = bic
net.ipv4.tcp_tso_win_divisor = 3
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_no_metrics_save = 0
net.ipv4.ipfrag_max_dist = 64
net.ipv4.ipfrag_secret_interval = 600
net.ipv4.tcp_low_latency = 0
net.ipv4.tcp_frto = 0
net.ipv4.tcp_tw_reuse = 0
net.ipv4.icmp_ratemask = 6168
net.ipv4.icmp_ratelimit = 1000
net.ipv4.tcp_adv_win_scale = 2
net.ipv4.tcp_app_win = 31
net.ipv4.tcp_rmem = 4096        87380   4194304
net.ipv4.tcp_wmem = 4096        16384   4194304
net.ipv4.tcp_mem = 196608       262144  393216
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_ecn = 0
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_fack = 1
net.ipv4.tcp_orphan_retries = 0
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_stdurg = 0
net.ipv4.tcp_abort_on_overflow = 0
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_fin_timeout = 60
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_max_tw_buckets = 180000
net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_syn_retries = 5
net.ipv4.tcp_retrans_collapse = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1

net.core.netdev_budget = 300
net.core.somaxconn = 128
net.core.xfrm_larval_drop = 0
net.core.xfrm_acq_expires = 30
net.core.xfrm_aevent_rseqth = 2
net.core.xfrm_aevent_etime = 10
net.core.optmem_max = 20480
net.core.message_burst = 10
net.core.message_cost = 5
net.core.netdev_max_backlog = 1000
net.core.dev_weight = 64
net.core.rmem_default = 129024
net.core.wmem_default = 129024
net.core.rmem_max = 131071
net.core.wmem_max = 131071
</code>

**net.core.rmem_max**: Maximum TCP receive window (the largest receive socket buffer the kernel will grant). FasterData recommends 16MB (**16777216**) for 10G paths with a few parallel streams, or 32MB for very long end-to-end 10G or 40G paths; the sizing follows from the path's bandwidth-delay product, as worked through below.

**net.core.wmem_max**: Maximum TCP send window (the largest send socket buffer the kernel will grant). FasterData recommends 16MB (**16777216**) for 10G paths with a few parallel streams, or 32MB for very long end-to-end 10G or 40G paths.

**net.ipv4.tcp_rmem**: Memory reserved for TCP receive buffers, per connection. Format: min, default, and max number of bytes to use. FasterData recommends **4096 87380 16777216**, changing only the 3rd parameter if more than 16MB is needed.

**net.ipv4.tcp_wmem**: Memory reserved for TCP send buffers, per connection. Format: min, default, and max number of bytes to use. FasterData recommends **4096 87380 16777216**, changing only the 3rd parameter if more than 16MB is needed.

**net.core.netdev_max_backlog**: Maximum number of packets queued on the input side when the interface receives them faster than the kernel can drain them. FasterData recommends **30000**, increasing it for 10G or faster links.

**net.ipv4.tcp_congestion_control**: According to [[http://fasterdata.es.net/fasterdata/host-tuning/linux/expert/|FasterData]], **htcp** (Hamilton TCP) or **cubic** is better for long, high-speed links.

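Why 16MB? The window needed to keep a path full equals its bandwidth-delay product (BDP). A worked example (the 1 Gbit/s and 100 ms figures are illustrative, not from FasterData):
<code>
# BDP = bandwidth * round-trip time
#
# Example: a 1 Gbit/s path with 100 ms RTT
#   (1,000,000,000 bit/s / 8) * 0.100 s = 12,500,000 bytes (~12 MB)
#
# The 16MB (16777216) maximum covers such a path with headroom; a 10G path
# already needs 16MB at ~13 ms RTT, which is why longer paths need 32MB.
</code>
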
<code>
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 87380 16777216"
sysctl -w net.core.netdev_max_backlog=30000
sysctl -w net.ipv4.tcp_congestion_control=htcp
</code>
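
Note that **htcp** is only accepted if its congestion control module is loaded. A quick check (module name tcp_htcp on the stock CentOS kernel; adjust for your kernel):
<code>
# list the algorithms the running kernel currently offers
sysctl net.ipv4.tcp_available_congestion_control

# if htcp is missing, load its module, then verify the active algorithm
/sbin/modprobe tcp_htcp
sysctl net.ipv4.tcp_congestion_control
</code>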

[[http://www.speedguide.net/articles/linux-tweaking-121|SpeedGuide]] also recommends the following settings:
<code>
sysctl -w net.ipv4.tcp_sack=1
sysctl -w net.ipv4.tcp_window_scaling=1
sysctl -w net.ipv4.tcp_timestamps=0
</code>
Setting tcp_timestamps to 0 is controversial, and we chose to leave timestamps enabled: FasterData points out that some alternative congestion control algorithms require accurate timestamps.

The Ethernet cards we are testing with are:
<code>
eth1: Tigon3 [partno(BCM95704A7) rev 2003 PHY(5704)] (PCI:66MHz:64-bit) 10/100/1000Base-T Ethernet 00:e0:81:2c:5e:bd
eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] WireSpeed[1] TSOcap[1]
eth1: dma_rwctrl[763f0000] dma_mask[64-bit]
</code>
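
The TSOcap[1] flag above indicates the card supports TCP segmentation offload. The offload settings can be inspected and toggled with ethtool (a sketch; ethtool must be installed, and older drivers may not support every flag):
<code>
# show the current offload settings for eth1
ethtool -k eth1

# enable TCP segmentation offload (TSO builds on checksum offload)
ethtool -K eth1 tx on tso on
</code>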

Also, we will experiment with the MTU (using jumbo frames of size 9000):

<code>
ifconfig eth1 mtu 9000
</code>
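
To keep the jumbo MTU across reboots on CentOS, it can be added to the interface configuration file (a sketch of the stock ifcfg layout; adjust the device name to match):
<code>
# /etc/sysconfig/network-scripts/ifcfg-eth1 (excerpt)
DEVICE=eth1
MTU=9000
</code>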

And to test that 9000-byte frames actually traverse the path without fragmentation:
<code>
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do forbids fragmentation
ping -M do -s 8972 192.168.0.3
</code>
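
Finally, to measure the end-to-end effect of the tuning, a simple throughput test (a sketch using iperf, assumed installed on both hosts; 192.168.0.3 is the same test host as above):
<code>
# on the receiving host (192.168.0.3)
iperf -s

# on the sending host: a 30-second test requesting a 16MB socket buffer
iperf -c 192.168.0.3 -t 30 -w 16M
</code>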