Bug 5597 - vfs_proxy fork not improving performance at 250ms latency
Summary: vfs_proxy fork not improving performance at 250ms latency
Status: RESOLVED WONTFIX
Alias: None
Product: Samba 4.0
Classification: Unclassified
Component: File services
Version: unspecified
Hardware: x86 Linux
Importance: P3 normal
Target Milestone: ---
Assignee: Samjam
QA Contact: samba4-qa@samba.org
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2008-07-08 13:27 UTC by Avi Norowitz
Modified: 2010-01-19 14:07 UTC
CC List: 1 user

See Also:


Attachments
smb.conf on smbpc (371 bytes, text/plain)
2008-07-08 13:37 UTC, Avi Norowitz
smb.conf on smbps (382 bytes, text/plain)
2008-07-08 13:37 UTC, Avi Norowitz
Debug (-d 3) output on smbpc (569.79 KB, text/plain)
2008-07-08 13:38 UTC, Avi Norowitz
Debug (-d 3) output on smbps (562.72 KB, text/plain)
2008-07-08 13:38 UTC, Avi Norowitz

Description Avi Norowitz 2008-07-08 13:27:11 UTC
I am testing the vfs_proxy fork of Samba 4 (maintained by Sam Liddicott a.k.a. Amin Azez):

http://repo.or.cz/w/Samba/vfs_proxy.git

I am conducting this test using the following servers:

Hostname: lpt-112
OS: Windows XP Professional SP3
LAN1 IP: 192.168.220.43/24

Hostname: smbpc
Samba version: vfs_proxy fork of Samba 4 (retrieved from GIT on July 7, 2008)
OS: CentOS Linux 4 i386
eth0 IP: 192.168.220.86/24
eth1 IP: 192.168.227.20/24

Hostname: smbps
Samba version: vfs_proxy fork of Samba 4 (retrieved from GIT on July 7, 2008)
OS: CentOS Linux 4 i386
eth0 IP: 192.168.220.85/24
eth1 IP: 192.168.227.10/24

Hostname: fs
Samba version: samba-3.0.25b-1.el4_6.4 for CentOS 4
OS: CentOS Linux 4 i386
eth0 IP: 192.168.220.72/24

lpt-112 is intended to simulate a Windows client at a branch office.

smbpc is intended to simulate a proxy at a branch office.

smbps is intended to simulate a proxy at a main office.

fs is intended to simulate a file server at a main office.

lpt-112 communicates with smbpc and smbps over the 192.168.220.0/24 network.

smbpc communicates with smbps over the 192.168.227.0/24 network.

smbps communicates with fs over the 192.168.220.0/24 network.

I used netem on the eth1 interfaces of both smbpc and smbps to simulate 250ms of round-trip latency between the hosts (125ms each way):

tc qdisc add dev eth1 root netem delay 125ms
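
For reference, a minimal sketch of how the simulated delay can be checked, assuming ping is allowed across the eth1 network (the interface name and address are the ones listed above):

# run on both smbpc and smbps to add 125ms in each direction
tc qdisc add dev eth1 root netem delay 125ms
# from smbpc, the round trip to smbps (192.168.227.10) should now measure roughly 250ms
ping -c 3 192.168.227.10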

The share \\smbpc\proxy uses the VFS proxy module to connect to \\smbps\proxy (192.168.227.10), which uses the proxy module to connect to \\fs\testuser1 (192.168.220.72).

The share \\smbpc\cifs uses the VFS cifs module to connect to \\smbps\cifs (192.168.227.10), which uses the proxy module to connect to \\fs\testuser1 (192.168.220.72). (I set this up to compare performance. I assume the cifs module should not be performing any WAN optimizations.)
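
For orientation only, here is a minimal sketch of what the [proxy] share on smbpc might look like. The real settings are in the attached smb.conf files; the option names below (ntvfs handler, cifs:server, cifs:share) are assumptions borrowed from the Samba 4 ntvfs cifs backend, and the proxy read-ahead values are placeholders.

# hypothetical sketch -- see the attached smb.conf files for the real configuration
[proxy]
	ntvfs handler = proxy
	cifs:server = 192.168.227.10
	cifs:share = proxy
	# read-ahead tuning knobs discussed below; values are placeholders
	proxy:cache-readahead = 32768
	proxy:cache-readaheadblock = 4096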

I am using this 926kB Excel workbook for this test:

http://warrantypartsdirect.dell.com/AMER/program/OptiPlex_Part_Guide.xls

If I open the workbook on \\smbpc\proxy (192.168.220.86) from lpt-112 using Excel 2003, the workbook opens in about 75 seconds.

If I open the workbook on \\smbpc\cifs (192.168.220.86) from lpt-112 using Excel 2003, the workbook opens in about 90 seconds.

The times vary if I repeat the tests, but the general tendency is that the share using the proxy module doesn't perform substantially better than the cifs module (and often it performs worse).

This 926kB Excel workbook is just one example; the proxy module doesn't seem to improve performance for any Excel, Word, or PDF document I try to open, save, upload, etc.

I've experimented with different values for proxy:cache-readahead and proxy:cache-readaheadblock, but none of the values seem to substantially improve performance.

I will attach the smb.conf files for each server and the smbd output at debug level 3 that was generated when I ran the \\smbpc\proxy test.
Comment 1 Avi Norowitz 2008-07-08 13:37:16 UTC
Created attachment 3394 [details]
smb.conf on smbpc
Comment 2 Avi Norowitz 2008-07-08 13:37:43 UTC
Created attachment 3395 [details]
smb.conf on smbps
Comment 3 Avi Norowitz 2008-07-08 13:38:29 UTC
Created attachment 3396 [details]
Debug (-d 3) output on smbpc
Comment 4 Avi Norowitz 2008-07-08 13:38:55 UTC
Created attachment 3397 [details]
Debug (-d 3) output on smbps
Comment 5 Avi Norowitz 2008-07-08 15:35:04 UTC
I made an error in this paragraph:

"The share \\smbpc\cifs uses the VFS cifs module to connect to \\smbps\cifs
(192.168.227.10), which uses the *proxy* module to connect to \\fs\testuser1
(192.168.220.72). (I set this up to compare performance. I assume the cifs
module should not be performing any WAN optimizations.)"

I intended to write:

"The share \\smbpc\cifs uses the VFS cifs module to connect to \\smbps\cifs
(192.168.227.10), which uses the *cifs* module to connect to \\fs\testuser1
(192.168.220.72). (I set this up to compare performance. I assume the cifs
module should not be performing any WAN optimizations.)"
Comment 6 Samjam 2008-07-17 04:35:49 UTC
Andrew, please assign this bug to me.

The lack of performance at high latency is because of the current read-ahead model.

Read-ahead is designed to keep the pipe full of data.

Currently it works by having the client proxy request data in advance; however, the client doesn't know how well the read request will compress (and if it is compressing against the cache, the ratio can easily be >100:1), so it doesn't know how many "uncompressed bytes" to read ahead by to keep the pipeline full.

For un-cached data that will zlib compress, you can counter this by dividing the read-ahead size by the compression ratio, as long as the ratio is modest; e.g. if data compresses to 50% of its size, double the read-ahead.

If you are getting very high rates of compression (with zlib, or on a warm cache) then the read-ahead becomes too high, and the server will drop the connection, thinking that it has an abusive client with hundreds of outstanding read requests.
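
As a rough illustration (the link speed and read size here are assumed for the sake of arithmetic, not measured on this setup): a 10 Mbit/s link at 250ms RTT needs about 312 KB in flight to stay full, so at a 100:1 effective compression ratio the proxy would have to read ahead roughly 31 MB of uncompressed data, i.e. hundreds of outstanding 64 KB reads:

echo $(( 10000000 / 8 / 4 ))       # bytes in flight at 10 Mbit/s, 250ms RTT: 312500
echo $(( 312500 * 100 ))           # uncompressed read-ahead needed at 100:1: 31250000
echo $(( 312500 * 100 / 65536 ))   # outstanding 64 KB read requests: ~476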

Clearly we need a new read-ahead model which keeps the pipe full at higher compression ratios. I'm working on this.
Comment 7 Avi Norowitz 2008-07-17 19:39:48 UTC
Sam,

Thanks for looking into this.

I will try experimenting with very high values for read-ahead and see if it improves performance.

Do you still want that tcptrace/xplot graph you asked me for on IRC?

Comment 8 Matthias Dieter Wallnöfer 2008-11-24 14:47:45 UTC
Is this still an issue?
Comment 9 Samjam 2008-11-25 02:15:12 UTC
I imagine it is still an issue.
I'm just finishing rebasing against the newly merged Samba master; then I can git-push.
Comment 10 Samjam 2009-03-26 07:06:41 UTC
I'm still stuck on obtaining a measure of connection capacity.
Comment 11 Samjam 2009-07-02 05:33:16 UTC
I now have a solution to filling the pipe during read-ahead without knowing the capacity of the pipe. Work is in progress on this.
Comment 12 Matthias Dieter Wallnöfer 2009-12-15 10:08:49 UTC
Sam, are you still working on this?
Comment 13 Matthias Dieter Wallnöfer 2010-01-09 10:49:24 UTC
Is there still anyone working on this "vfs_proxy" fork?
Otherwise I will close this bug report as "WONTFIX" soon, since we as the mainstream project aren't responsible for it.
Comment 14 Matthias Dieter Wallnöfer 2010-01-19 14:07:12 UTC
Well, no comments have been posted about the actual state of this s4 vfs_proxy fork. I will close the related bugs since the project seems dead (I got no response after two posts).