I am running a file server with smbd here.
The server is a 2.8 GHz 64-bit Celeron D with 2 GB RAM, running 64-bit Debian testing (all packages up to date), Samba version 3.0.23d.
It is running as a pure file server so its only CPU load is serving files.
The client I am doing the tests from is WinXP Pro SP2.
It is connected via Intel PRO/1000 gigabit Ethernet (both the server and the client use e1000 chipsets).
I have done a raw TCP benchmark (using netio), and the network IS capable of transferring 120 MB/s.
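For reference, the kind of raw TCP throughput test netio performs can be sketched in a few lines of Python. This is a minimal loopback demo, not netio itself; a real test would run the sender on the server and the receiver on the client:

```python
import socket
import threading
import time

CHUNK = 64 * 1024          # bytes per send/recv call
TOTAL = 64 * 1024 * 1024   # transfer 64 MiB for the demo

def serve(srv):
    # Accept one connection and stream TOTAL bytes of zeros to it.
    conn, _ = srv.accept()
    with conn:
        buf = b"\0" * CHUNK
        sent = 0
        while sent < TOTAL:
            conn.sendall(buf)
            sent += CHUNK

def measure():
    """Return (bytes_received, elapsed_seconds) for a loopback transfer."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # ephemeral port; a real test uses the server's address
    srv.listen(1)
    t = threading.Thread(target=serve, args=(srv,))
    t.start()
    cli = socket.create_connection(srv.getsockname())
    received = 0
    start = time.monotonic()
    while received < TOTAL:
        data = cli.recv(CHUNK)
        if not data:
            break
        received += len(data)
    elapsed = time.monotonic() - start
    cli.close()
    t.join()
    srv.close()
    return received, elapsed

if __name__ == "__main__":
    n, secs = measure()
    print(f"{n / secs / 1e6:.1f} MB/s over loopback")
```

Over loopback this measures mostly memory bandwidth; across the actual gigabit link it would show the ~120 MB/s ceiling mentioned above.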
The problem: the maximum transfer rate when reading from the file server is limited to almost exactly 50 MB/s. "Reading from the file server" means I use a program on the client that does nothing but read the file; in particular, it does not write to disk, so there is no bottleneck there. (If you don't believe me, I can show you the source code of the benchmark program; I wrote it myself ;)
The 50 MB/s is reached when the benchmark file is completely cached in the file server's RAM but, of course, not in the client's RAM (achieved by running the following command on the server several times: dd if=testfile of=/dev/null). I am using 512 MB test files; the problem also occurs with 100 MB and 1 GB files, so it is independent of file size.
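The client-side benchmark described above boils down to timed sequential reads that discard the data. A minimal sketch (the file name and chunk size are arbitrary; in the real test the path would point at a file on the SMB share):

```python
import os
import time

def read_benchmark(path, chunk_size=64 * 1024):
    """Sequentially read `path` in chunks, discarding the data.
    Returns (bytes_read, seconds).  Nothing is written to disk."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            total += len(data)
    return total, time.monotonic() - start

if __name__ == "__main__":
    # Create a small local file just so the demo is self-contained.
    path = "testfile.bin"
    with open(path, "wb") as f:
        f.write(os.urandom(1024 * 1024))
    n, secs = read_benchmark(path)
    print(f"read {n} bytes at {n / secs / 1e6:.1f} MB/s")
    os.remove(path)
```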
CPU usage while reading:
smbd is using around 10%, top says:
Cpu(s): 2.0%us, 6.0%sy, 0.0%ni, 82.0%id, 0.0%wa, 0.0%hi, 10.0%si, 0.0%st
The only network option my smb.conf contains is:
socket options = TCP_NODELAY IPTOS_LOWDELAY IPTOS_THROUGHPUT SO_SNDBUF=8192 SO_RCVBUF=8192
- which is everything I could find by googling for Samba network speed optimization. I also tried SNDBUF/RCVBUF values of 16 KB and 64 KB; the speed does not seem to change.
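For what it's worth, an 8 KB socket buffer is on the small side for gigabit: the bandwidth-delay product says how much data must be in flight to keep the link full, and a window smaller than that caps throughput. A quick back-of-the-envelope sketch (the 0.2 ms RTT is an assumed LAN value, not measured on this setup; and since raising the buffers reportedly changed nothing, this is probably not the whole story here):

```python
# Bandwidth-delay product: bytes that must be "in flight" to keep the link busy.
bandwidth_bps = 1_000_000_000      # gigabit Ethernet
rtt_s = 0.0002                     # assumed LAN round-trip time: 0.2 ms
bdp_bytes = bandwidth_bps / 8 * rtt_s
print(f"BDP = {bdp_bytes:.0f} bytes")               # 25000 bytes

# If an 8 KiB window were actually in effect, it, not the wire,
# would limit throughput to window / RTT:
buf = 8192
max_mb_s = buf / rtt_s / 1e6
print(f"8 KiB window caps at {max_mb_s:.2f} MB/s")  # 40.96 MB/s
```

Note that SO_SNDBUF/SO_RCVBUF are only hints; the effective TCP window also depends on kernel autotuning and the peer's advertised window.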
I have had this problem since I set up the file server almost a year ago, and my Debian packages have always been up to date.
I also tested from a friend's notebook with a fresh XP installation and did not get over 50 MB/s, so I doubt that my XP installation is broken.
Furthermore, I have now set up a 1.5 TB file server with a Core 2 Duo (2.4 GHz) for a friend, and he has the same problem: his clients also only get 50 MB/s.
Here are the answers to the questions I was asked when reporting the problem on the samba mailing list:
- I am not using LDAP.
- The network is using full-duplex.
- To repeat it: it is not a hard disk problem; no disks are involved in the actual benchmark.
Can someone please investigate this issue?
I am willing to help testing.
Thanks, Leo B.
The e1000 driver is known to have problems that tend not to show up with pure FTP tests but only with traffic going in both directions. Even if you are only reading in one direction via SMB, there is still traffic going from the client to the server.
Make sure that you definitely have the current e1000 Linux kernel driver from the appropriate Intel website.
Doh, I intentionally bought Intel boards because I wanted performance and reliability. Well, at least I don't think there are any faster onboard NICs. The AMD chips use too much CPU time, and the no-name NICs are even worse.
I am using kernel 126.96.36.199; shouldn't that be recent enough? I thought some Intel people regularly update the in-kernel drivers, so that I don't need to download patches and such from intel.com?
Besides, has anyone actually confirmed that these issues are the cause of my problem? Can you give me a URL to a page that describes the e1000 issues?
Kernel 188.8.131.52 e1000_main.c says:
#define DRV_VERSION "7.1.9-k4"
Then I downloaded "INTEL(R)_QSK_2_0_GPL_SOURCES.TAR.gz" from intel.com, and it contains the same driver version. So my driver should be up to date.
Given that without jumbo frames ~30 MB/s is usually the upper limit of the I/O rate you get on gigabit Ethernet, and taking into account that server and client have to ACK packets and that SMB can't be as fast as plain dummy traffic, I don't really see why you complain about poor performance if you get 50 MB/s (!) ...
"given the fact that without jumbo frames ~30MB/s is usually the upper limit of I/O rates you get on GBit ethernet"
Sorry, but this is nonsense. It might apply to someone using a "gamer board" with a Marvell onboard NIC connected via PCI and a no-name switch. I am using "professional" hardware here ;) two Intel NICs and a Netgear gigabit switch.
I repeat: I have tested TCP throughput with an application called "netio". It simply creates a TCP connection between client and server and measures its speed, and it was able to transfer 120 MB/s constantly in both directions. The CPU usage was well below 50%; I think it was around 30%.
(The bad thing about cheap gamer NICs is the PCI connection. The Intel NICs are connected directly to the north bridge!)
Björn is partly right. You will probably not be able to get the nominal transfer speed you see in a pure TCP test, because the SMB protocol is quite a bit more complex than a plain TCP data stream: the client has to request each chunk individually. Tuning it from 50 MB/s to maybe 80 MB/s can be quite a difficult task that involves a lot of components and good analysis.
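The chunk-by-chunk effect can be modelled roughly: with strictly synchronous reads, throughput is bounded by the read size divided by the per-request round-trip plus transmission time. A back-of-the-envelope sketch in Python (the 64 KiB read size and the 0.2 ms RTT are assumptions, not measurements from this setup):

```python
# Rough model of synchronous, one-request-at-a-time SMB reads:
# each chunk costs one network round trip plus its serialization time.
read_size = 64 * 1024        # assumed bytes per SMB read request
rtt_s = 0.0002               # assumed LAN round-trip time: 0.2 ms
wire_mb_s = 120.0            # measured raw TCP throughput from the netio test

serialize_s = read_size / (wire_mb_s * 1e6)   # time to push one chunk onto the wire
per_chunk_s = rtt_s + serialize_s             # total cost of one request/response
throughput_mb_s = read_size / per_chunk_s / 1e6
print(f"modelled ceiling: {throughput_mb_s:.0f} MB/s")
```

Under these assumed numbers the model lands well below the 120 MB/s wire speed, which is consistent with the "50 to maybe 80 MB/s" range discussed above; larger reads or pipelined requests raise the ceiling.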
If you get 50 MB/s without further tuning, I would say this is not a genuine bug in Samba, so I will mark this as invalid. If you like, feel free to pursue this further on the mailing lists for tuning tips.
(In reply to comment #7)
> Partly Björn is right. You will probably not be able to get the nominal
> transfer speed that you get with a pure TCP test.
> This is because the SMB
> protocol is quite a bit more complex than a pure TCP data stream.
Think about that again. To the Ethernet hardware, a TCP data packet is just a TCP data packet; there are no "synthetic" TCP tests. It is all the same data to the hardware; the switch doesn't recognize "oh, these are some beautiful benchmark packets, let's transfer them faster".
So if Samba is slower than the "test" software, this has to be caused by Samba somehow.
If the complex SMB protocol is not able to send its ACK packets fast enough, then I consider that a bug.
> The client
> has to request chunk per chunk individually. Tuning it from 50MB to maybe 80MB
> can be a quite difficult task that involves a lot of components and good analysis.
Exactly. Because it requires good analysis, I posted it here and not to a mailing list. The only people who can do that analysis are developers.
> If you get 50MB without further tuning I would say this is not a genuine bug in
> Samba, so I will mark this as invalid.
The problem only occurs with Samba, not with the program I used to test the TCP speed.
So if it only happens with Samba, it IS genuine to Samba.
> If you like, feel free to further pursue
> this on the mailing lists for tuning tips.
People there will tell me to do the same SO_SNDBUF etc. tuning five hundred more times even though I have already done it, and everyone else will just ignore my mail. I have already posted to the samba mailing list.
Here on Bugzilla I at least have a chance that some interested, bored developer starts investigating the issue someday and eventually finds out what causes it. It does not harm anyone if the bug stays open here for some months, does it?
So please just lower the priority instead of closing it! If the bug does not interest you, that does not mean you have to close it!
[Besides - I know we cannot make any demands of open-source developers - BUT consider that my mate and I both earn no money; we each saved for about a year to be able to buy our 1 TB file servers and each spent over 1000 EUR, and now you two gigabit-Ethernet pessimists just come along and close our bug report so that our shiny new servers stay limited to 50 MB/s forever, just because you have heard somewhere that 30 MB/s is typical for gigabit. That just is not fair!]
(In reply to comment #8)
> Think about that again. To the ethernet hardware, a TCP data
> packet is just a TCP data packet, there are no "synthetic" TCP
> tests, it is all just the same data to the hardware, the switch
> doesn't recognize "Oh these are some beautiful benchmark packets,
> let's transfer them faster". So if Samba is slower than the
> "test"-software, this has to be caused by Samba somehow.
> If the complex samba protocol is not able to send its ACK-packets
> fast enough then I consider this as a bug.
What do you get on Gigabit with a Windows 2003 server?
Also, please be careful that you do not associate overhead in
the CIFS protocol with performance issues in Samba. You seem
to be mixing the two. I'll leave this open for now.
(In reply to comment #9)
> What do you get on Gigabit with a Windows 2003 server?
I will try to test that very soon.
> Also, please be careful that you do not associate overhead in
> the CIFS protocol with performance issues in Samba. You seem
> to be mixing the two. I'll leave this open for now.
Well, what do you mean by "overhead"? If you mean bandwidth overhead: my WinXP network graph shows that at 50 MB/s the link is still about 60% idle, so no overhead is eating the bandwidth.
For now, I've created an attachment of a tcpdump of the benchmark.
Created attachment 2258 [details]
tcpdump of the benchmark
In my experience, 50 MB/s is pretty good for single-stream performance from Windows. It is certainly possible to write a client that gets better performance (e.g., cifsfs), but with WinXP this is about as good as it gets. No Samba bug here, IMHO.
I have now tested with a fresh WinXP x64 installation and got 70 MB/s. After installing SP2 it went down to 50 MB/s again. That is certainly weird, isn't it?
Very odd. Is it reproducible if you uninstall SP2?