Bug 2785 - rsync gives following error: buffer overflow in receive_file_entry
Status: CLOSED DUPLICATE of bug 2784
Product: rsync
Classification: Unclassified
Component: core
Hardware: All
OS: All
Importance: P3 major
Target Milestone: ---
Assigned To: Wayne Davison
QA Contact: Rsync QA Contact
Depends on:
Reported: 2005-06-09 10:56 UTC by Daniel Carnes
Modified: 2006-03-12 02:58 UTC
CC List: 0 users


Description Daniel Carnes 2005-06-09 10:56:23 UTC
If I execute the following command: 
rsync -vrptgz --rsh="ssh $BCPSERVER rsync rsyncd --daemon --config=$RSYNCDCONFIGFILE --port=$RSYNCDPORT" --rsync-path=$RSYNCPATH
and I have a symbolic link in the directory represented by $RSYNCSOURCE then I 
get the following output:

building file list ...
5 files to consider
overflow: linkname_len=1862797370
ERROR: buffer overflow in receive_file_entry
rsync error: error allocating core memory buffers (code 22) at util.c(126)
rsync: connection unexpectedly closed (4 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(359)

If I add the -l option to the command, rsync seems to complete fine; however, 
symbolic links matching the pattern symlink -> /foo/bar get rsynced as 
symlink -> foo/bar (the leading slash is dropped).

I looked at the source and found the code that generates the errors I am 
seeing. Below is the relevant section from flist.c:

	if (preserve_links && S_ISLNK(mode)) {
		linkname_len = read_int(f) + 1; /* count the '\0' */
		if (linkname_len <= 0 || linkname_len > MAXPATHLEN) {
			rprintf(FERROR, "overflow: linkname_len=%d\n",
				linkname_len - 1);

I cannot figure out why this section of code executes at all: given the rsync 
arguments we are using (no -l), preserve_links should not be set, so the 
preserve_links && S_ISLNK(mode) condition should be false.
Comment 1 Daniel Carnes 2005-06-09 11:20:39 UTC
This is a duplicate of bug 2784; sorry.

*** This bug has been marked as a duplicate of 2784 ***