====== TCP Tuning ======

One can use "sysctl -w" to change these parameters. For changes to persist after a reboot, edit "/etc/sysctl.conf" (typically as root).

Click [[http://www.frozentux.net/ipsysctl-tutorial/ipsysctl-tutorial.html#AEN454|here]] for a description of all of these parameters.

Other helpful sites:

[[http://www.speedguide.net/articles/linux-tweaking-121|SpeedGuide.net]]

[[http://www.psc.edu/networking/projects/tcptune/|PSC High Performance]]

By default, CentOS uses the following TCP parameters:

<code>
$ sysctl -a
  
</code>
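Rather than scanning the full listing, individual keys can also be queried directly (a sketch; the /proc path is the kernel's own view of the same key):

<code>
sysctl net.ipv4.tcp_rmem
cat /proc/sys/net/ipv4/tcp_rmem
</code>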
  
**net.core.rmem_max**: Maximum TCP receive window. FasterData recommends 16MB (**16777216**) for 10G paths with a few parallel streams, or 32MB for very long end-to-end 10G or 40G paths.
  
  
**net.core.wmem_max**: Maximum TCP send window. FasterData recommends 16MB (**16777216**) for 10G paths with a few parallel streams, or 32MB for very long end-to-end 10G or 40G paths.
  
**net.ipv4.tcp_rmem**: Memory reserved for TCP receive buffers (per-connection defaults). Format: min, default, and max number of bytes to use. FasterData recommends **4096 87380 16777216**, changing only the 3rd parameter to use more than 16MB.
  
**net.ipv4.tcp_wmem**: Memory reserved for TCP send buffers (per-connection defaults). Format: min, default, and max number of bytes to use. FasterData recommends **4096 87380 16777216**, changing only the 3rd parameter to use more than 16MB.

**net.core.netdev_max_backlog**: FasterData recommends **30000**, increasing it for 10G or faster links.
**net.ipv4.tcp_congestion_control**: According to [[http://fasterdata.es.net/fasterdata/host-tuning/linux/expert/|FasterData]], **htcp** (Hamilton TCP) or **cubic** is better for long, high-speed links.

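The 16MB window recommendation follows from the bandwidth-delay product: to keep a link full, the window must hold a full round-trip's worth of data in flight. A rough sketch (the 10 Gbit/s rate and 13 ms RTT here are illustrative assumptions, not measurements from our hosts):

<code>
# Bandwidth-delay product: bytes in flight = rate (bytes/s) * RTT (s)
# 10 Gbit/s = 10000000000 / 8 bytes/s, RTT = 13 ms
echo $((10000000000 / 8 * 13 / 1000))
# prints 16250000, i.e. ~16.25MB, close to the 16777216 maximum
</code>

A longer RTT or a faster link scales the requirement linearly, which is why FasterData suggests 32MB for very long paths.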
<code>
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 87380 16777216"
sysctl -w net.core.netdev_max_backlog=30000
sysctl -w net.ipv4.tcp_congestion_control=htcp
</code>

Note that the multi-value settings must be quoted on the "sysctl -w" command line, or only the first number is applied.
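To make these settings survive a reboot, the equivalent lines can go in /etc/sysctl.conf (a sketch mirroring the values above; they are applied at boot, or immediately with "sysctl -p"):

<code>
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 87380 16777216
net.core.netdev_max_backlog = 30000
net.ipv4.tcp_congestion_control = htcp
</code>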
[[http://www.speedguide.net/articles/linux-tweaking-121|SpeedGuide]] also recommends the following settings (note that "sysctl -w" does not accept spaces around the "="):

<code>
sysctl -w net.ipv4.tcp_sack=1
sysctl -w net.ipv4.tcp_window_scaling=1
sysctl -w net.ipv4.tcp_timestamps=0
</code>
Setting tcp_timestamps to 0 is controversial, however, and we chose to leave timestamps enabled: FasterData points out that some alternative congestion control protocols require accurate timestamps.

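Before selecting htcp it is worth confirming the running kernel actually offers it; on a stock kernel the algorithm may need its module loaded first (a sketch; tcp_htcp is the usual module name, and modprobe requires root):

<code>
sysctl net.ipv4.tcp_available_congestion_control
modprobe tcp_htcp
sysctl net.ipv4.tcp_available_congestion_control
</code>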
The Ethernet cards we are testing with are:

<code>
eth1: Tigon3 [partno(BCM95704A7) rev 2003 PHY(5704)] (PCI:66MHz:64-bit) 10/100/1000Base-T Ethernet 00:e0:81:2c:5e:bd
eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] WireSpeed[1] TSOcap[1]
eth1: dma_rwctrl[763f0000] dma_mask[64-bit]
</code>

Also, we will experiment with the MTU (using jumbo frames of size 9000):

<code>
ifconfig eth1 mtu 9000
</code>

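On newer systems the same change can be made with the "ip" tool from iproute2, which also confirms the new value (a sketch; the interface name eth1 is taken from above, and root is required):

<code>
ip link set dev eth1 mtu 9000
ip link show dev eth1
</code>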
And to test (the payload must be 8972, not 9000 minus 8: 8972 bytes of ICMP data plus the 8-byte ICMP header and 20-byte IP header exactly fill the 9000-byte MTU, and "-M do" forbids fragmentation):

<code>
ping -M do -s 8972 192.168.0.3
</code>
tcptuning.1303517581.txt.gz · Last modified: 2011/04/23 00:13 by sbwood