Bug 3099 - Please parallelize filesystem scan
Summary: Please parallelize filesystem scan
Alias: None
Product: rsync
Classification: Unclassified
Component: core
Version: 2.6.9
Hardware: All Linux
Importance: P3 enhancement
Target Milestone: ---
Assignee: Wayne Davison
QA Contact: Rsync QA Contact
Depends on:
Reported: 2005-09-14 09:30 UTC by H. Peter Anvin
Modified: 2015-07-17 14:37 UTC

See Also:

One possible way to reorder the checksum computation. (4.56 KB, patch)
2005-09-15 16:23 UTC, Wayne Davison
Improved patch for early checksums (4.79 KB, patch)
2005-09-16 09:47 UTC, Wayne Davison

Description H. Peter Anvin 2005-09-14 09:30:54 UTC
I just had the unpleasant experience of doing an rsync --checksum of two
terabytes worth of data.  Much to my chagrin, it actually took *longer* than it
would have taken to just wipe the filesystem clean and start over, because the
two checksumming passes were done in a serial fashion -- first one machine, then
the other.  Each took well over 24 hours to complete.

Please parallelize the filesystem scan phases.  There is absolutely no reason
for one machine to sit and wait for the other when it comes to searching its own
filesystem.
Comment 1 Wayne Davison 2005-09-15 00:33:34 UTC
That would take a redesign of the rsync protocol.  The plans for a
new-protocol rsync already include making the transfer incremental, which
would obviate the need for this request.
Comment 2 H. Peter Anvin 2005-09-15 13:49:34 UTC
Pardon me for being dense, but how could it possibly require a change to the
rsync protocol for the second host in the sequence to pre-scan its filesystem,
so that that data is available when needed?
Comment 3 Wayne Davison 2005-09-15 16:23:22 UTC
Created attachment 1448 [details]
One possible way to reorder the checksum computation.

> how could it possibly require a change to the rsync protocol for the
> second host in the sequence to pre-scan its filesystem, so that that
> data is available when needed?

The only way to know what to scan is to look at the file list from the sender
(since the receiver usually doesn't know anything other than the destination
directory, and options such as -R, --exclude, and --files-from can radically
limit what files need to be scanned).

I suppose it would be possible for the receiver to compute the full-file
checksums as the file list is arriving from the sender (yes, the sender sends
the list incrementally as it is created), but the code currently doesn't know
if the destination spec is a file or a directory until after it receives the
file list, so the code would need to attempt a chdir to the destination arg
and skip the pre-caching if that fails.
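The overlap described here (checksumming files as their names arrive, instead of waiting for the complete list) can be sketched as follows. This is a minimal Python illustration of the idea, not rsync's actual C code; the function and variable names are invented for this sketch, and MD5 stands in for rsync's full-file checksum.

```python
import hashlib
import queue
import threading

def checksum_worker(paths, results):
    """Consume file paths as they 'arrive' and compute full-file MD5 sums."""
    while True:
        path = paths.get()
        if path is None:  # sentinel: the file list is complete
            break
        md5 = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                md5.update(chunk)
        results[path] = md5.hexdigest()

def prescan_while_list_arrives(incoming_paths):
    """Overlap checksumming with file-list arrival instead of serializing them.

    `incoming_paths` stands in for the incremental file list from the sender.
    """
    paths = queue.Queue()
    results = {}
    worker = threading.Thread(target=checksum_worker, args=(paths, results))
    worker.start()
    for path in incoming_paths:
        paths.put(path)   # hand each name to the checksum thread immediately
    paths.put(None)       # signal end of list
    worker.join()
    return results
```

In this sketch the receiver's checksum pass finishes shortly after the last file-list entry arrives, rather than only starting at that point.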

One bad thing about this solution is that we really should be making the
sending side not pre-compute the checksums before the start of the transfer
phase (to be like the generator, which computes the checksums while looking for
files to transfer). Computing them during the transfer makes it more likely
that the file's data will still be in the disk cache when a file
needs to be updated. Thus, changing the receiving side to pre-compute the
checksums before starting the transfer seems to be going in the wrong direction
(though it might speed up a large transfer where few files were different, it
might also slow down a large transfer where many files were changed).

The attached patch implements a simple pre-scan that works with basic options.
It could be improved to handle things like --compare-dest better, but I think
it basically works.  If you'd care to run some speed tests, maybe you could
persuade me that this kluge would be worth looking at further (I'm not
considering it at the moment).
Comment 4 Wayne Davison 2005-09-16 09:47:05 UTC
Created attachment 1452 [details]
Improved patch for early checksums

This version of the patch fixes a few potential problems with the first one.
Comment 5 Wayne Davison 2005-09-16 09:47:59 UTC
I've reopened this suggestion to consider the attached patch.
Comment 6 Arie Skliarouk 2013-02-10 06:45:30 UTC
Any hope for the bug to be resolved? It is really inconvenient to have a production database down for twice as long as is really necessary.
Comment 7 Rainer 2015-07-17 09:01:04 UTC

I'm experiencing the very same problem: I'm trying to sync a set of VMWare disk files (about 2.5TB) with relatively few changes, and direct copying is still faster than checksumming by quite a large margin, because the sequential checksumming on source and target simply doubles the time needed.

I think the point is that the GigE link between the PC and the NAS achieves about 80MB/s, and the HDD read rate is not much higher (approx. 130MB/s). 

When doing the checksumming on source and target in parallel, we could ideally (if nothing changed) reach the read rate of the HDDs as 'transfer' bandwidth, because this is the speed at which we can verify that the data is the same on source and target. The current sequential approach reduces the initial check to half the HDD read rate, so transferring unchanged files will only yield about 65MB/s in my case, which is slower than simple copying.

Is this patch you proposed some years ago something I can apply to and try on a current rsync version? If not, could you update it to the 3.1.x version so I can benchmark the parallel checksumming in my situation?

Best Regards
Comment 8 Chip Schweiss 2015-07-17 14:37:21 UTC
I would argue that, optionally, all directory scanning should be made parallel. Modern file systems perform best when request queues are kept full; the current mode of rsync scanning directories does nothing to take advantage of this.

I currently use scripts to split a couple dozen or so rsync jobs into literally hundreds of jobs. This reduces execution time from what would be days to a couple of hours every night. There are lots of scripts like this appearing on the net because the current state of rsync is inadequate.
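The kind of per-directory splitting such scripts perform can be sketched as below. This is a hedged illustration, not any particular script from the net: the rsync flags, paths, and function names are placeholders, and a real script would also handle files at the top level and directories created on the fly.

```python
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

def build_rsync_cmds(src_root, dest_root, subdirs):
    """One rsync invocation per top-level subdirectory (flags illustrative)."""
    return [
        ["rsync", "-a", "--checksum",
         os.path.join(src_root, d) + "/",
         os.path.join(dest_root, d) + "/"]
        for d in subdirs
    ]

def run_parallel(cmds, max_jobs=8):
    """Run the per-directory jobs with a bounded pool so the filesystem's
    request queues stay full, instead of one serial scan."""
    with ThreadPoolExecutor(max_workers=max_jobs) as pool:
        futures = [pool.submit(subprocess.run, c, check=True) for c in cmds]
        return [f.result() for f in futures]
```

Usage would be along the lines of `run_parallel(build_rsync_cmds("/data", "backup:/data", os.listdir("/data")))`, with `max_jobs` tuned to the storage's queue depth.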

This ticket could reasonably be combined with bug 5124.