Difference between net.core.rmem_max and net.ipv4.tcp_rmem
net.core.rmem_max is the overall (global) maximum receive buffer size, while net.ipv4.tcp_rmem applies to just that protocol.
As for the priority question: it seems that the TCP setting takes precedence over the common max setting, which is a bit confusing. Setting net.core.rmem_max has no effect on the current TCP setting (just tested on CentOS 5).
A more accurate name for tcp_rmem's third value would have been default_max, but that was probably too long.
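This relationship can be observed directly. A minimal sketch in Python, assuming the standard Linux procfs paths, that reads both limits and reports the ceiling that applies to autotuned TCP sockets under the behavior described above:

```python
# Sketch: read both limits from the standard Linux procfs paths and
# report which one caps TCP receive autotuning, per the observation
# above that tcp_rmem's max takes precedence for TCP sockets.

def read_sysctl(path):
    with open(path) as f:
        return f.read().split()

core_rmem_max = int(read_sysctl("/proc/sys/net/core/rmem_max")[0])
tcp_min, tcp_default, tcp_max = (
    int(v) for v in read_sysctl("/proc/sys/net/ipv4/tcp_rmem")
)

print(f"net.core.rmem_max = {core_rmem_max}")
print(f"net.ipv4.tcp_rmem = {tcp_min} {tcp_default} {tcp_max}")
# Under the behavior described above, autotuned TCP receive buffers
# grow up to tcp_rmem's max regardless of net.core.rmem_max.
print(f"autotuning ceiling for TCP = {tcp_max}")
```

On that reading, Case 1 in the question would give an autotuned ceiling of 8388608 and Case 2 would give 7388608.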
Author: bydsky
Updated on September 18, 2022

Comments
-
bydsky, over 1 year:
What's the difference between net.core.rmem_max and the third value of net.ipv4.tcp_rmem? Which has the higher priority for TCP connections?
For the two examples below, what's the max buffer for TCP connections?

Case 1:
sysctl -w net.core.rmem_max=7388608
sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'

Case 2:
sysctl -w net.core.rmem_max=8388608
sysctl -w net.ipv4.tcp_rmem='4096 87380 7388608'
-
Nils, over 8 years: Priority related to TCP?
-
bydsky, over 8 years: @Nils Yes, for TCP connections.
-
nh2, over 8 years: Your explanation makes sense, but this conflicts with what man tcp says about tcp_rmem's max value: "the maximum size of the receive buffer used by each TCP socket. This value does not override the global net.core.rmem_max" -- see also stackoverflow.com/questions/31546835/…. Is man tcp wrong?
-
Nils, over 8 years: @nh2 That would not be the first time a man page has been wrong.
-
Wildcard, about 7 years: How exactly did you test it?
-
Jordan Pilat, over 6 years: @Nils, simply reading the values won't tell you whether one overrides the other -- you have to actually try to get a TCP buffer that exceeds the net.core.[wmem/rmem]_max limit in order to test such overriding.
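That test can be sketched in Python. This relies on the documented Linux behavior for the explicit-setsockopt path (as opposed to autotuning): a requested SO_RCVBUF value is clamped at net.core.rmem_max and then doubled for kernel bookkeeping overhead, and tcp_rmem is not consulted on this path.

```python
import socket

# Sketch of the test described above: explicitly request a receive
# buffer well above net.core.rmem_max and inspect what the kernel
# actually grants. On Linux, an explicit setsockopt(SO_RCVBUF) request
# is clamped at rmem_max and then doubled for bookkeeping overhead;
# tcp_rmem's max plays no role on this path.
with open("/proc/sys/net/core/rmem_max") as f:
    rmem_max = int(f.read())

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rmem_max * 4)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()

print(f"rmem_max={rmem_max} requested={rmem_max * 4} granted={granted}")
```

If granted comes back as roughly twice rmem_max rather than the larger requested value, the clamp is in effect, and raising tcp_rmem's max alone would not change that result.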
-
nh2, over 3 years: I've reported the apparent man page bug here: bugzilla.kernel.org/show_bug.cgi?id=209327