To copy a very large directory that cannot be copied in one go (see bug 5727), I restricted the transfer to files within part of the range of file sizes. However, it turns out that rsync still loads all files into the in-memory list of files to copy, so rsync still crashes even when I reduce the number of files to copy by 95%. (Copying subdirectory by subdirectory doesn't work for me because there are loads of hardlinks between directories, but since files of size XX won't be hardlinked to files of size YY, I can copy in stages by limiting the file sizes.)
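For reference, the staged approach described above (presumably using --min-size/--max-size) would look something like this minimal sketch; the source path, destination, and size boundaries are invented for illustration. Hardlinked files always have identical sizes, so a link group never straddles a size band and -H can still reproduce the links within each stage. As the report notes, though, rsync still builds the full file list in memory on every run.

    # Stage 1: files up to 1 MiB
    rsync -aH --max-size=1M /bigdir/ backup:/bigdir/
    # Stage 2: files from 1 MiB up to 100 MiB
    rsync -aH --min-size=1M --max-size=100M /bigdir/ backup:/bigdir/
    # Stage 3: files of 100 MiB and up
    # (files exactly on a boundary show up in two stages, which is harmless)
    rsync -aH --min-size=100M /bigdir/ backup:/bigdir/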
Those options just affect which files in the file list may be transferred. If they actually excluded files from the file list, the receiving side would not be able to correctly handle a --delete option. You should instead subset the directory using excludes, such as --exclude='dir/[a-m]*' for one run and --exclude='dir/[n-z]*' for the other (or something similar).
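A sketch of that split, assuming for illustration a source of /bigdir/ and a destination of backup:/bigdir/ (the [a-m]/[n-z] patterns are only examples and would need extending to cover digits, uppercase names, and so on):

    # Run 1: transfer everything except dir/[n-z]*
    rsync -aH --exclude='dir/[n-z]*' /bigdir/ backup:/bigdir/
    # Run 2: transfer everything except dir/[a-m]*
    rsync -aH --exclude='dir/[a-m]*' /bigdir/ backup:/bigdir/

One caveat: -H can only reproduce hardlinks among files that appear in the same run's file list, so links that cross the split would not be preserved, which is the reporter's hardlink concern.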
Actually, I was too hasty in closing this. Filtering the file list would work OK as long as the receiver applied a similar filter. Delete would behave differently than it does with the current options, but not deleting certain out-of-range files would be comparable to the exclusions that happen under the current filter rules.
I do not want to delete files on the receiving side, so there is no need to "prepare" for that possibility. And as I said, there are enormous numbers of hardlinks between the directories that would be broken if I copied per-directory. Couldn't there be something like if (otherside_needs_full_list || filter(curpath)) send_this_file(curpath); in the code that sends the file list?
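To make the proposal concrete, here is a hypothetical, self-contained sketch of that decision logic. It is not actual rsync source; names such as otherside_needs_full_list, size_filter_matches, and send_this_file are invented for illustration:

    /* Hypothetical sketch of the suggested file-list change: only send an
     * entry when the receiver needs the complete list (e.g. because --delete
     * is in effect) or the entry passes the sender-side size filter. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct entry { const char *path; int64_t size; };

    static bool otherside_needs_full_list = false;  /* e.g. set when --delete is used */
    static int64_t min_size = 0;                    /* would come from --min-size */
    static int64_t max_size = INT64_MAX;            /* would come from --max-size */

    static bool size_filter_matches(const struct entry *e)
    {
        return e->size >= min_size && e->size <= max_size;
    }

    static void send_this_file(const struct entry *e)
    {
        printf("sending %s (%lld bytes)\n", e->path, (long long)e->size);
    }

    int main(void)
    {
        struct entry files[] = {
            { "dir/small", 9 },
            { "dir/medium", 500 },
            { "dir/large", 50000 },
        };
        min_size = 10;
        max_size = 1000;

        for (size_t i = 0; i < sizeof files / sizeof files[0]; i++) {
            /* The one-line idea from the comment above. */
            if (otherside_needs_full_list || size_filter_matches(&files[i]))
                send_this_file(&files[i]);
        }
        return 0;  /* prints only dir/medium */
    }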
I am not inclined to change how these behave.