Bug 2218 - inplace-if-low-disk
Summary: inplace-if-low-disk
Status: CLOSED WONTFIX
Alias: None
Product: rsync
Classification: Unclassified
Component: core
Version: 2.6.3
Hardware: All
OS: Linux
Importance: P3 enhancement
Target Milestone: ---
Assignee: Wayne Davison
QA Contact: Rsync QA Contact
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2005-01-06 10:23 UTC by Baldvin Kovacs
Modified: 2005-04-01 11:21 UTC

See Also:


Description Baldvin Kovacs 2005-01-06 10:23:55 UTC
Maybe it seems a bit perverse, but there is a feature I'd really
like: "--inplace-if-low-disk". Before writing a file, rsync could
check the available disk space, and if it is less than the size of
the file to be written, it could use --inplace mode for that file.

Every once in a while I keep huge files on user partitions (usually
partition backups). I have scripts that use rsync to mirror those
filesystems, and they fail whenever one of those huge files changes.

(I am not a complete idiot, just almost. It's quite a big system
with hundreds of users. We don't have much space available, so the
only really huge partition is /u... After a break-in I usually have
to dd many partitions, and /u is the only place for that...)

Mathematically there could also be a --fit-aggressively option or
something like it, which would compute an order for transferring the
files such that no extra space is ever needed on the receiving side.
However, an algorithm like that would be quite a lot of work.

That's why I'm asking your opinion of --inplace-if-low-disk. It would
solve the usual case (where there are two kinds of files: small ones
and extra-large ones).

If you accept the idea, I am also willing to code a little... :)

Baldvin
Comment 1 Wayne Davison 2005-02-13 22:17:51 UTC
Because the current --inplace option has the potential to be very inefficient
(the man page notes that it really only works well with appended data or
record-oriented changes), it would probably be a bad idea to have the largest
file(s) in the transfer become the least efficient in their updates.

The current CVS has an option that sets a maximum size for the files to
transfer: --max-size=1g (for instance).  That can at least be used to avoid
overflowing the disk.  You could then make a second pass without the
--max-size option, using --temp-dir (pointing at a drive with more free
space) to update the remaining files.  Not ideal, certainly, but not too
bad either.
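The two-pass workaround described above might look like this in practice. The paths, host name, and size limit are invented for illustration; --max-size and --temp-dir are real rsync options:

```shell
# Pass 1: everything except the huge files. Each temporary copy on the
# receiver is bounded by the size limit, so this pass is unlikely to
# overflow the destination.
rsync -a --max-size=1g /data/ backuphost:/mirror/data/

# Pass 2: the remaining (large) files, staging their temporary copies
# on a filesystem with more free room.
rsync -a --temp-dir=/u/rsync-tmp /data/ backuphost:/mirror/data/
```

Note that --temp-dir applies on the receiving side, so in this sketch /u/rsync-tmp would have to exist on backuphost.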