I experimented with Jumbo frames.
I found that the server-side setting has no noticeable influence (the server simultaneously runs other services - nfs, httpd ... , and samba).
But(!) the client side, running on Linux
(Debian Squeeze, Wheezy, and Fedora 15, 14, and 11 were tested),
nearly freezes whenever any MTU setting other than the default 1500 is used while copying from the server to the client (> 100 MB).
The parent of the copy process freezes, and from time to time the machine becomes completely blocked - the process cannot be killed, and the system cannot be halted/rebooted even from the console. A manual reset was necessary.
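For reference, the MTU change that triggers the hang can be made with `ip link`; the interface name and MTU value below are placeholders for this particular setup, not the exact commands used:

```shell
# Hypothetical reproduction sketch; eth0 is a placeholder for the
# client's network interface.
ip link set dev eth0 mtu 9000    # any value other than 1500 triggers the hang
ip link show dev eth0            # verify that the new MTU took effect
```

On older systems the equivalent would be `ifconfig eth0 mtu 9000`.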
The number of connections watched by netstat kept growing until netstat itself froze.
/* The Windows client (XP Professional was the only one available) shows no change when the MTU is changed - it neither freezes nor improves the speed. */
The most frequent error message appearing in <dmesg> is
CIFS VFS: Send error in read = -11
I have no idea how to understand/explain this behavior.
The switch was originally suspected because it does not support Jumbo frames. But the bug was reproduced and tested even over a direct server-to-client cable connection.
P.S. This bug was reproduced on CIFS versions 1.61/1.74.
On the server side, Samba 2.2 ... 3.5 was running on x86/amd64 Linux kernels 2.6.28 ... 3.0.0.
P.P.S. Please try to reproduce this bug before asking me for all the version numbers, tons of logs, and the type of cables used.
P.P.P.S. The most similar, though not identical, bug found (without any answer) is <bug 7381>. <bug 4368> contains a note about a Jumbo frame, but is not very close.
Since yesterday: I found a 32-bit Debian-based eliveCD Linux
(kernel 2.6.30/CIFS 1.58)
where there was no problem transferring files using Jumbo frames.
CIFS was mounted as usual using mount.cifs.
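The mount was of the usual form; the server address, share name, mount point, and username below are placeholders, not the actual values from this setup:

```shell
# Hypothetical mount invocation; //server/share, /mnt/cifs and
# user=someuser are placeholders.
mount.cifs //server/share /mnt/cifs -o user=someuser
```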
Monitoring connections with "netstat | grep 445" showed that a connection gets stuck in FIN_WAIT2 at the same moment the following line is generated:
CIFS VFS: No response for cmd 50 mid <xxx>
This appears during large data transfers while the MTU is set greater than 1500.
The mount procedure, directory listing, and copying small amounts of data are not affected.
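A simple way to watch the stuck connections accumulate, assuming the SMB session runs over the standard port 445:

```shell
# Print the number of SMB-port connections stuck in FIN_WAIT2,
# once per second, until interrupted.
while sleep 1; do
    netstat -tn | grep ':445' | grep -c FIN_WAIT2
done
```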
If the number of connections grows, then it looks like you have a network issue there, I guess - maybe not all the network components there support jumbo frames correctly? This does not look like a cifs vfs issue to me, in any case.