Bug 13433 - out_of_memory in receive_sums on large files
Summary: out_of_memory in receive_sums on large files
Status: RESOLVED FIXED
Alias: None
Product: rsync
Classification: Unclassified
Component: core
Version: 3.1.3
Hardware: All
OS: All
Importance: P5 normal
Target Milestone: ---
Assignee: Wayne Davison
QA Contact: Rsync QA Contact
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2018-05-11 16:27 UTC by Kevin Day
Modified: 2020-07-26 09:54 UTC

See Also:


Description Kevin Day 2018-05-11 16:27:55 UTC
I'm attempting to rsync a 4TB file. It fails with:

generating and sending sums for 0
count=33554432 rem=0 blength=131072 s2length=6 flength=4398046511104
chunk[0] offset=0 len=131072 sum1=8d15ed6f
chunk[1] offset=131072 len=131072 sum1=3d66e7f7
[omitted]
chunk[6550] offset=858521600 len=131072 sum1=d70deab6
chunk[6551] offset=858652672 len=131072 sum1=657e34df
send_files(0, /bay3/b.tc)
count=33554432 n=131072 rem=0
ERROR: out of memory in receive_sums [sender]
[sender] _exit_cleanup(code=22, file=util2.c, line=105): entered
rsync error: error allocating core memory buffers (code 22) at util2.c(105) [sender=3.1.3]

This is getting called:

	if (!(s->sums = new_array(struct sum_buf, s->count)))
		out_of_memory("receive_sums");

And the size of a sum_buf (40 bytes) multiplied by the number of sums (33554432) exceeds MALLOC_MAX.

How is this supposed to work/why is it breaking here, when I'm pretty sure I've transferred files bigger than this before?
Comment 1 Dave Gordon 2018-05-16 18:58:30 UTC
Maybe try --block-size=10485760 --protocol=29 as mentioned here:
https://bugzilla.samba.org/show_bug.cgi?id=10518#c8
Comment 2 Kevin Day 2018-05-16 23:07:26 UTC
(In reply to Dave Gordon from comment #1)

It looks like that's no longer allowed?

rsync: --block-size=10485760 is too large (max: 131072)
rsync error: syntax or usage error (code 1) at main.c(1591) [client=3.1.3]


#define MAX_BLOCK_SIZE ((int32)1 << 17)

        if (block_size > MAX_BLOCK_SIZE) {
                snprintf(err_buf, sizeof err_buf,
                         "--block-size=%lu is too large (max: %u)\n", block_size, MAX_BLOCK_SIZE);
                return 0;
        }

OLD_MAX_BLOCK_SIZE is still defined, but options.c would need to be patched to allow looser block sizes when protocol_version < 30.
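A hypothetical sketch of such a loosened check (not the actual rsync change; the helper name is invented, and OLD_MAX_BLOCK_SIZE is assumed to be the 512 MiB historical limit from rsync.h):

```c
#include <stdint.h>

/* Limits as defined in rsync.h (OLD_MAX_BLOCK_SIZE value assumed). */
#define MAX_BLOCK_SIZE     ((int32_t)1 << 17)  /* 128 KiB, protocol >= 30 */
#define OLD_MAX_BLOCK_SIZE ((int32_t)1 << 29)  /* 512 MiB, older protocols */

/* Hypothetical helper: accept larger --block-size values when the
 * negotiated protocol is older than 30. */
static int block_size_ok(long block_size, int protocol_version)
{
    int32_t max = protocol_version < 30 ? OLD_MAX_BLOCK_SIZE : MAX_BLOCK_SIZE;
    return block_size <= max;
}
```

With a check like this, `--block-size=10485760 --protocol=29` from comment 1 would pass the option parser instead of being rejected.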
Comment 3 Kevin Day 2018-05-16 23:12:00 UTC
Just adding --protocol=29 falls back to the older chunk-generator code, which automatically selects 2 MB chunks; that is enough to make this work without a malloc error.
Comment 4 Ben RUBSON 2018-05-19 16:46:41 UTC
util2.c:#define MALLOC_MAX 0x40000000

Which is 1 GB.

(1 GB / 40 bytes per sum) x 131072 bytes per block ≈ 3276 GB,
which is thus the maximum file size with protocol_version >= 30.

Did you try increasing MALLOC_MAX on the sending side?

Btw, it would be interesting to know why MAX_BLOCK_SIZE has been limited to 128 KB.
rsync.h:#define MAX_BLOCK_SIZE ((int32)1 << 17)
Comment 5 MulticoreNOP 2020-06-24 11:48:33 UTC
Might be related to bug #12769.
Comment 6 Wayne Davison 2020-07-26 09:54:01 UTC
You can specify a larger malloc sanity check in the latest rsync (which will also let you know when the limit is exceeded instead of claiming that it is out of memory).