Bug 5124 - Parallelize the rsync run using multiple threads and/or connections
Status: NEW
Product: rsync
Classification: Unclassified
Component: core
Version: 3.0.0
Hardware: All
OS: All
Importance: P3 enhancement
Target Milestone: ---
Assigned To: Wayne Davison
QA Contact: Rsync QA Contact
Depends on:
Blocks:
Reported: 2007-12-06 03:38 UTC by Andrew J. Kroll
Modified: 2016-09-21 23:00 UTC
CC List: 3 users

Description Andrew J. Kroll 2007-12-06 03:38:55 UTC
I would love to see rsync work even better, and one huge lesson could be learned by taking ideas from other tools. One such tool is lftp. I would totally go bananas if rsync could scan directories while syncing in parallel; the gains from doing this are phenomenal. Since lftp already has a very good working model for how to accomplish this, I suggest taking a look at it. If you have never experienced how much better lftp works than wget, give it a try and you will see exactly what I am talking about. Granted, the way rsync does its job is vastly different, and for good reasons, but with today's systems, with their dual cores, hyperthreading, and plain amazing speeds, I see no reason not to at least offer the option to do such a very useful thing. Having limits set on the server side and limits set from the client side would also be a good thing, which is something ftpds and httpds have.

Thanks to all who have helped to develop this killer tool! Let's make it get faster now!
Comment 1 Matt McCutchen 2007-12-06 07:02:03 UTC
How does the feature you want differ from the incremental recursion mode that is being added to rsync 3.0.0?
Comment 2 Andrew J. Kroll 2007-12-07 04:40:17 UTC
It does so in parallel, via fork()... I have not looked to see if rsync is doing the same thing, but the point is that lftp opens multiple sockets to do its job.
Comment 3 Matt McCutchen 2009-10-28 09:34:30 UTC
A stab at a more meaningful summary, and some thoughts:

My first reaction to the suggestion to use multiple connections is that it's a gimmick to get a higher total bandwidth allocation from routers that allocate bandwidth per connection; IMO, that would not be an appropriate goal.  But there's another more fundamental benefit, even if the total bandwidth were to remain the same: loss of a single packet won't stall the rsync run because the other connections can continue (at least for a while) without that packet.

But why stop at several streams?  Rsync could use datagrams (UDP) and just act on packets as they arrive, so that loss of a packet doesn't affect /any/ of the other packets.  The only drawback is that we have really good tooling for working with streams (pipes, nc, port forwarding, TLS, etc.), while the tooling for datagrams is nonexistent or less mature (there is Datagram TLS, but I've never tried it).

Rather than implement the UDP stuff ad-hoc for rsync, I would like to see it adopt an application-level scheduler that maintains a list of active tasks (scanning a directory, transferring a file, etc.) and handles the rudiments of accepting a packet and calling the appropriate routine to take the next step on that task.  If the scheduler would support asynchronous I/O, rsync could use that to dramatically cut time blocked on I/O by letting the OS decide the order in which to fulfill requests based on the actual layout of the files on disk.  Once rsync exposes a set of available tasks to the scheduler, it becomes trivial to vary the number of OS threads in which the tasks run.  This would be awesome but is probably better pursued in a successor to rsync.
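To make the scheduler idea concrete, here is a minimal sketch (the names, fields, and structure are mine for illustration, not anything in rsync) of a poll()-based task list: each active task exposes a file descriptor and a step callback, and one pass of the loop advances whichever tasks have input ready.

/* Hypothetical sketch of the application-level scheduler described
 * above: a task table plus a poll() loop.  Real code would also handle
 * writes, timeouts, and task completion. */
#include <poll.h>
#include <stddef.h>

#define MAX_TASKS 64

struct task {
    int fd;                       /* socket/file this task waits on */
    void (*step)(struct task *);  /* take the next step on this task */
};

static struct task tasks[MAX_TASKS];
static size_t ntasks;

/* One scheduler pass: block until some task's fd is readable, then let
 * each ready task advance by one step (scan a directory, hash a block,
 * send a chunk, ...). */
static void scheduler_run_once(void)
{
    struct pollfd fds[MAX_TASKS];
    for (size_t i = 0; i < ntasks; i++) {
        fds[i].fd = tasks[i].fd;
        fds[i].events = POLLIN;
    }
    if (poll(fds, ntasks, -1) <= 0)
        return;
    for (size_t i = 0; i < ntasks; i++) {
        if (fds[i].revents & POLLIN)
            tasks[i].step(&tasks[i]);
    }
}

Because the task set is explicit, running the tasks on a varying number of OS threads, as suggested above, becomes a matter of handing slices of the table to each thread.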
Comment 4 Haravikk 2014-01-14 17:40:22 UTC
I see this is quite old, and to be honest I'm not completely familiar with rsync's implementation, but better rsync performance benefits everyone, so I thought I'd chip in my thoughts.

While UDP would be a good option, it's a fairly complex one to implement, as you'd essentially be reinventing the wheel when it comes to re-requesting lost packets and so on. There may be newer libraries that could help with this: many BitTorrent clients, for example, now use µTP (Micro Transport Protocol), which is basically just UDP with some failure tolerance added, though this would still require some form of TLS/SSL support for widespread adoption.

Personally, I don't think the number of TCP connections is the problem, as a single connection should be capable of utilising all available bandwidth. That said, one of the problems with TCP is its self-adjusting window size: to get the most out of a connection you really need to utilise it at a constant rate, otherwise the congestion window shrinks. This means any long pause waiting for the next chunk of the file-list can cause performance to drop until the next file starts being sent and the window grows back.
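(On Linux you can actually watch the effect being described, since the kernel exposes its congestion-window state via TCP_INFO. This fragment is illustrative only and assumes sock is an already-connected TCP socket; sample it before and after an idle pause to see the window shrink.)

/* Illustrative only: print the sender's congestion window and RTT for
 * a connected TCP socket.  Linux-specific (TCP_INFO). */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>

static void print_cwnd(int sock)
{
    struct tcp_info ti;
    socklen_t len = sizeof ti;
    if (getsockopt(sock, IPPROTO_TCP, TCP_INFO, &ti, &len) == 0)
        printf("cwnd: %u segments, rtt: %u us\n",
               ti.tcpi_snd_cwnd, ti.tcpi_rtt);
}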

An alternative fix for this problem is to do something similar to Google's SPDY protocol for HTTP, which multiplexes several logical streams over a single TCP connection. Basically, rsync would add its own framing information to packets, allowing them to be quickly routed to/from multiple threads at each end while sending all packets over a single connection. This means you can have file-list packets mixed in with packets from various different files being transferred; TCP will continue to ensure they arrive in the correct order, and all rsync has to do is set up an appropriate number of threads for generating chunks of the file-list, performing delta comparisons, and transferring files. You end up with one thread acting as a message dispatch service for this single connection, taking all messages received and routing them to the appropriate worker threads, and packing outgoing messages ready to send down the TCP connection; the worker threads then perform file/folder comparison for different parts of the sync operation.
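As a rough sketch of the framing this would need (the channel numbers, struct layout, and helper here are hypothetical, not an existing rsync or SPDY wire format), every message carries a small header naming its logical stream, and a dispatcher on each end routes payloads to worker threads by channel:

/* Hypothetical framing for multiplexing several logical streams over
 * one TCP connection.  Header fields are sent in host byte order and
 * partial writes are ignored, for brevity. */
#include <stdint.h>
#include <unistd.h>

enum channel {
    CH_FILE_LIST = 0,   /* incremental file-list chunks */
    CH_FILE_DATA = 1,   /* file contents / delta data */
    CH_CONTROL   = 2    /* acks, errors, flow control */
};

struct frame_hdr {
    uint8_t  channel;   /* which worker this frame belongs to */
    uint32_t length;    /* payload bytes that follow */
};

/* Send one framed message; the receiving dispatcher reads the header
 * first, then hands the next 'length' bytes to the matching worker. */
static int send_frame(int sock, uint8_t channel, const void *buf,
                      uint32_t len)
{
    struct frame_hdr hdr = { channel, len };
    if (write(sock, &hdr, sizeof hdr) != sizeof hdr)
        return -1;
    return write(sock, buf, len) == (ssize_t)len ? 0 : -1;
}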

This latter option is still complex and a lot of work, but IMO it's the best way to do things (if rsync isn't doing it already), and it would allow rsync to run multiple file/folder comparisons simultaneously, depending upon the hardware at each end and the current speed of the sync operation.
Comment 5 Andrew J. Kroll 2014-01-19 03:10:35 UTC
Actually, having two or three TCP streams at the same time has proven to be faster, because it can scan ahead. It is well established that downloading one large file while downloading several smaller ones makes the entire transfer faster, because the handshake turn-around is hidden. It has nothing to do with getting around per-connection bandwidth limiters, although in some cases it can help with that too. Another proven case is your typical modern web browser: there is a very good reason multiple connections are used to load in those pretty pictures you see. It is all about hiding latency by using TCP as a double buffer. What is needed is the ability to be scanning on one side while transferring a file, and when the scan finds a match that needs updating, to start sending it on a second stream. Again, look at how lftp does it; the concept simply works fantastically. You get multiple directory scans in parallel, and data is sent as soon as it needs updating, while scanning continues.

UDP? Interesting idea, but not needed. Just run more than one scan-and-send process.
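The overlap being described (one process scanning while another sends) is easy to picture with a plain fork() and a pipe. This toy program is not rsync code, just the shape of the idea: the parent "scans" and emits paths, and the child "transfers" them as they arrive.

/* Toy illustration: parent scans while child transfers, concurrently.
 * In real life the child would hold its own connection to the remote
 * side rather than printing. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int pfd[2];
    if (pipe(pfd) != 0)
        return 1;

    if (fork() == 0) {              /* child: the "transfer" stream */
        close(pfd[1]);
        FILE *in = fdopen(pfd[0], "r");
        char path[4096];
        while (fgets(path, sizeof path, in))
            printf("transferring %s", path);
        return 0;
    }

    close(pfd[0]);                  /* parent: the "scan" stream */
    const char *found[] = { "a/file1\n", "a/b/file2\n", NULL };
    for (int i = 0; found[i]; i++)  /* pretend these were just matched */
        write(pfd[1], found[i], strlen(found[i]));
    close(pfd[1]);
    wait(NULL);
    return 0;
}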
Comment 6 clawsoon 2015-07-09 20:17:08 UTC
I would also love to see multi-stream rsync.  A quick Google shows many examples of people hacking up unsatisfactory versions of multi-stream rsync with find, xargs, parallel, and for-in.  Check out this ugly-but-effective hack, for example:

http://codereaper.com/blog/2014/the-dream-of-multi-threaded-rsync/

Or this one, written in Perl:

http://moo.nac.uci.edu/~hjm/parsync/

Or these many suggestions:

https://wiki.ncsa.illinois.edu/display/~wglick/Parallel+Rsync

You can also try this, if you want:

threads=24; src=/src/; dest=/dest/; rsync -aL -f"+ */" -f"- *" "$src" "$dest" && (cd "$src" && find . -type f | xargs -n1 -P"$threads" -I% rsync -az % "$dest"/%)

(from https://www.linkedin.com/pulse/20140731160907-45133456-tech-tip-running-rsync-over-multiple-threads)

As you can see, lots of people want to do this, even if they're not saying so in this Bugzilla thread.  I'm currently tripling my transfer speed by using xargs to launch 4 rsync processes in parallel, after failing to get any improvement from playing with TCP socket options.

But trying to create a general purpose multi-stream script to do it leads to ugly, ugly hacks.  All of them have at least one of these faults:

 - They don't take advantage of the new, faster directory-traversal code in recent rsync versions.

 - They don't work with a remote rsync server.

 - Their parallelism gets much worse if one of the subdirectories ends up with much more data than the others.

 - They choke on unexpected characters in filenames (e.g. space, newline).

All of these problems would go away if rsync had native multi-streaming.