Bug 10879 - smbget 70x slower than mount.cifs for file transfer
Summary: smbget 70x slower than mount.cifs for file transfer
Status: NEW
Alias: None
Product: Samba 4.1 and newer
Classification: Unclassified
Component: libsmbclient
Version: 4.13.3
Hardware: x64 Linux
Importance: P5 normal
Target Milestone: ---
Assignee: Jeremy Allison
QA Contact: Samba QA Contact
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-10-15 12:13 UTC by Keir Lawson
Modified: 2020-12-30 09:15 UTC
CC List: 6 users

See Also:



Description Keir Lawson 2014-10-15 12:13:03 UTC
When copying a large file from my office NAS to my laptop via smbget I get a speed of around 1.6 MB/s; using mount with the cifs filesystem option I get around 100 Mbps. I am using Samba 4.1.12 on Fedora 20.
Comment 1 inf3rno 2016-02-26 13:04:14 UTC
Same here. https://bugzilla.gnome.org/show_bug.cgi?id=762384 I measured 86 MB/s vs. 19 MB/s on Fedora with GNOME 3. It is disappointing that this has not been fixed for more than a year. :S
Comment 2 inf3rno 2016-02-26 13:11:58 UTC
I measured the speeds with different methods. What is interesting here is that there is an "smbclient" command, and when I use it the speeds are normal:

# works only if I mount the location with nautilus
smbget -u root -w WORKGROUP smb://192.168.0.186/asmedia-hdd/testfile
# it requires a password even if I mount with nautilus
wifi: 5.3MB/s
cable: 18.5MB/s

nautilus
# password saved by nautilus
wifi: 5.2MB/s
cable: 18.1MB/s

nautilus CIFS mounted
# password hardcoded to fstab
wifi: 11.9MB/s
cable: 85.8MB/s

smbclient //192.168.0.186/asmedia-hdd -W WORKGROUP -U root
# requires password
get testfile testfile
wifi: 11.9MB/s
cable: 83.4MB/s

Is this "smbclient" part of the Samba project?
Comment 3 Stefan Metzmacher 2016-02-26 14:07:02 UTC
I think the main difference is that smbget and gvfs use
a much smaller buffer size. They do sequential reads
of ~64k buffers, while smbclient uses a 16 MByte buffer by default.

The underlying code chunks the higher-level buffer size
into multiple parallel network requests of possibly smaller
sizes. The network buffer size also depends on the protocol
in use. With SMB1 we can do up to 50 parallel 64k reads,
with SMB 2.0.2 we can do parallel 64k reads depending on the available
credits, and with SMB 2.1 and higher (including SMB3) we can do
parallel 1 MByte or even 8 MByte reads depending on the available
credits.
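
As an illustration of how the application-level buffer size feeds into this, here is a minimal sketch (not code from this report) of reading a file through libsmbclient with a large user buffer, so the library can split each smbc_read() into parallel network requests. It assumes the libsmbclient headers are installed; the URL and WORKGROUP/root credentials are the placeholders already used in this thread, and the password is a dummy.

```
/* Minimal sketch: read an SMB file through libsmbclient with a large
 * application buffer. Compile with: gcc read.c -lsmbclient
 * URL and credentials are placeholders from this thread. */
#include <libsmbclient.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

static void auth_fn(const char *server, const char *share,
                    char *workgroup, int wglen,
                    char *username, int unlen,
                    char *password, int pwlen)
{
    (void)server; (void)share;
    snprintf(workgroup, wglen, "WORKGROUP");
    snprintf(username, unlen, "root");
    snprintf(password, pwlen, "secret");   /* dummy placeholder */
}

int main(void)
{
    const size_t bufsize = 16 * 1024 * 1024;  /* 16 MiB, as suggested above */
    char *buf = malloc(bufsize);
    ssize_t n;
    int fd;

    if (buf == NULL || smbc_init(auth_fn, 0) < 0) {
        perror("smbc_init");
        return 1;
    }
    fd = smbc_open("smb://192.168.0.186/asmedia-hdd/testfile", O_RDONLY, 0);
    if (fd < 0) {
        perror("smbc_open");
        return 1;
    }
    while ((n = smbc_read(fd, buf, bufsize)) > 0) {
        /* consume the data; a real client would write it out to disk */
    }
    smbc_close(fd);
    free(buf);
    return 0;
}
```

With a 64k buffer in the same loop, each smbc_read() can return at most 64k per call, which matches the slow smbget/gvfs behaviour described above.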
Comment 4 Stefan Metzmacher 2016-02-26 14:12:14 UTC
(In reply to Stefan Metzmacher from comment #3)

Can you test smbget with the option --blocksize=16777216?
Comment 5 Stefan Metzmacher 2016-02-26 14:15:11 UTC
(In reply to inf3rno from comment #2)

Yes, "smbclient" and "smbget" are both part on the Samba project...
Comment 6 inf3rno 2016-02-27 05:50:54 UTC
(In reply to Stefan Metzmacher from comment #5)

I remember measuring the speed with a CIFS mount and 64k write & 1M read buffer sizes. Both writes and reads were fast, so I have doubts about the 64k buffer size being the bottleneck, but of course I can measure the speeds with different buffer sizes.
Comment 7 inf3rno 2016-03-06 14:12:08 UTC
(In reply to Stefan Metzmacher from comment #4)

I measured smbget with different block sizes as you suggested. The results are interesting. I used a file with a size of 1.55 GB.

With the default block size the speed was between 23 and 24 MB/s. With other block sizes (1K, 2K, 4K, 8K, 16K, 32K, 64K) the speeds were very similar, between 38 and 58 MB/s. The distribution was random and did not depend on the block size. It still did not reach the speed of the CIFS-mounted partition, which was between 70 and 90 MB/s. A simple mount and copy in the GUI with nautilus was 18-19 MB/s, so that is slower than the smbget copy. There is probably another bottleneck in the system, not just the block size.

With smaller block sizes the progress refresh rate was much more frequent than with bigger block sizes. With the default block size the progress refresh rate was similar to the 1K refresh rate, i.e. very fast. I assume that the default block size is much smaller than 1K, and that is why the data transfer is so slow.


copy with nautilus (18-19MB/s)
```
# mount with nautilus on GUI
# copy with nautilus on GUI
```

copy with smbget default block size (23-24MB/s)
```
# mount with nautilus on GUI
smbget -u root -w WORKGROUP smb://192.168.0.186/asmedia-hdd/testfile
```

copy with smbget 16K (1-64K) block size (38-57MB/s)
```
# mount with nautilus on GUI
smbget -u root -w WORKGROUP smb://192.168.0.186/asmedia-hdd/testfile --blocksize=16384
```

copy with CIFS mount (70-90MB/s)
```
# mount in fstab
//192.168.0.186/asmedia-hdd /media/asmedia-hdd cifs rw,workgroup=WORKGROUP,username=root,password=...,noauto,users,iocharset=utf8 0 0
# copy with nautilus on GUI
```
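
To take the GUI tools and page-cache effects out of the measurement, a small benchmark along the following lines could time raw libsmbclient reads at several buffer sizes directly. This is a hedged sketch, not code from this report: it reuses the placeholder credentials and test URL from this thread, and the chosen sizes (64k, 1M, 16M) are just examples.

```
/* Hedged benchmark sketch: time smbc_read() throughput for several
 * application buffer sizes against the same test file.
 * Compile with: gcc bench.c -lsmbclient */
#include <libsmbclient.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void auth_fn(const char *server, const char *share,
                    char *workgroup, int wglen,
                    char *username, int unlen,
                    char *password, int pwlen)
{
    (void)server; (void)share;
    snprintf(workgroup, wglen, "WORKGROUP");
    snprintf(username, unlen, "root");
    snprintf(password, pwlen, "secret");   /* dummy placeholder */
}

int main(void)
{
    const size_t sizes[] = { 64 * 1024, 1024 * 1024, 16 * 1024 * 1024 };
    const char *url = "smb://192.168.0.186/asmedia-hdd/testfile";
    size_t i;

    if (smbc_init(auth_fn, 0) < 0) {
        perror("smbc_init");
        return 1;
    }
    for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
        char *buf = malloc(sizes[i]);
        struct timespec t0, t1;
        long long total = 0;
        ssize_t n;
        double secs;

        int fd = smbc_open(url, O_RDONLY, 0);
        if (buf == NULL || fd < 0) {
            perror("smbc_open");
            return 1;
        }
        clock_gettime(CLOCK_MONOTONIC, &t0);
        while ((n = smbc_read(fd, buf, sizes[i])) > 0) {
            total += n;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        smbc_close(fd);

        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("blocksize %zu: %.1f MB/s\n", sizes[i], total / secs / 1e6);
        free(buf);
    }
    return 0;
}
```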
Comment 8 Björn Jacke 2020-12-30 09:15:56 UTC
--blocksize=16777216 or even 1/10 of it makes the throughput catch up with smbclient in a 1 GB LAN environment. We should probably consider increasing the smbget/libsmbclient default block size.