With a samba 3.5.6 install, setting the parameter "large readwrite" to "no" (which is often recommended in various wikis / blogs / communities) means the file is not truncated when it is rewritten / saved. When the new content is smaller than the old one, trailing garbage is left at the end, leaving a corrupted file on disk.
My issue occurred on a Debian install, though another user was experiencing the same behaviour with a NAS device:
Test script to reproduce the behaviour:
echo "thisisabigfatpandaonwheelsgoingdownthestreet" > /mnt/test/newfile.txt
echo -n "blah" > /mnt/test/newfile.txt
Expected output of the second cat command:
blah
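For reference, the shell redirection above opens the file with O_TRUNC, so after the second echo the file should shrink to 4 bytes. Below is a minimal C equivalent of that rewrite (not part of the original report; path and content just mirror the test script) that also verifies the resulting size:

/* Minimal sketch of what the shell's ">" redirection does: open with
 * O_TRUNC, write the shorter content, then check the on-disk size.
 * Not taken from the report; on the affected mount the reported symptom
 * is that the old tail survives, so the size check would fail. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/mnt/test/newfile.txt";
	const char *data = "blah";
	struct stat st;
	int fd;

	fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, data, strlen(data)) != (ssize_t)strlen(data)) {
		perror("write");
		close(fd);
		return 1;
	}
	close(fd);

	if (stat(path, &st) < 0) {
		perror("stat");
		return 1;
	}
	printf("size on disk: %lld, expected: %zu\n",
	       (long long)st.st_size, strlen(data));
	return st.st_size == (off_t)strlen(data) ? 0 : 1;
}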
Changes to the (debian) default smb.conf:
diff -pruN /usr/share/samba/smb.conf /etc/samba/smb.conf
--- /usr/share/samba/smb.conf 2012-04-30 08:53:47.000000000 +0200
+++ /etc/samba/smb.conf 2013-01-31 16:51:24.000000000 +0100
@@ -31,6 +31,7 @@
#======================= Global Settings =======================
+ large readwrite = no
## Browsing/Identification ###
@@ -323,3 +324,9 @@
; preexec = /bin/mount /cdrom
; postexec = /bin/umount /cdrom
+ writable = yes
+ path = /export/test
+ valid users = testuser
+ create mask = 0644
+ guest ok = no
Mount on the test client:
//smbserver/test /mnt/test cifs rw,user=testuser,passwd=xxx 0 0
Related bug in the Debian bugtracker:
Turning off "large readwrite" causes smbd to no longer negotiate CAP_W2K_SMBS support. This causes many problems with Windows and Mac clients.
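To illustrate the coupling, here is a toy model of that negotiation, not the smbd negprot source; the bit values follow the usual SMB1 capability layout but are only here for illustration:

/* Toy model of the behaviour described above: the same switch that turns
 * off large read/write also stops CAP_W2K_SMBS from being advertised in
 * the negotiate response. Not copied from the Samba source. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CAP_W2K_SMBS     0x00002000
#define CAP_LARGE_READX  0x00004000
#define CAP_LARGE_WRITEX 0x00008000

static uint32_t negotiated_caps(bool large_readwrite)
{
	uint32_t caps = 0;
	if (large_readwrite)
		caps |= CAP_LARGE_READX | CAP_LARGE_WRITEX | CAP_W2K_SMBS;
	return caps;
}

int main(void)
{
	printf("large readwrite = yes -> 0x%08x\n", negotiated_caps(true));
	printf("large readwrite = no  -> 0x%08x\n", negotiated_caps(false));
	return 0;
}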
The real fix is just not to do that. This parameter should just be removed, I think.
"large readwrite" is already marked as an "advanced option".
There is only so much we can do to prevent people from shooting themselves in the foot.
I'm inclined to mark this wontfix.
(In reply to comment #3)
> I'm inclined to mark this wontfix.
Rargh, wrong comment on the wrong bug.
Jeff: this smells like a cifs vfs bug, or...?
(In reply to comment #5)
> Jeff: this smells like a cifs vfs bug, or...?
Maybe. There's no mention of what kernel the test client is running in the debian bug report. Without that, it's hard to say...
There is a problem with negotiating a wsize that's "too large" in that we currently have a hard time dealing with short write responses. Fixing that is unfortunately non-trivial, and even if we did fix it you'd get terrible performance from the client since it'd have to go back and fill in the holes while writing. So, we take great pains to try and get the wsize to something optimal during negotiation.
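As a rough user-space analogy (this is not the in-kernel cifs code): every short write forces the caller to loop and resend the unwritten tail, which is exactly the extra round-tripping the client tries to avoid by negotiating a wsize the server can always satisfy in one request:

/* Rough user-space analogy, not the in-kernel cifs code: a short write
 * forces the caller to loop and resend the unwritten tail, one extra
 * request per retry. */
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

static int write_all(int fd, const char *buf, size_t len)
{
	while (len > 0) {
		ssize_t n = write(fd, buf, len);
		if (n < 0) {
			if (errno == EINTR)
				continue;       /* interrupted, retry */
			return -1;              /* real error */
		}
		/* short write: advance past what was accepted and resend */
		buf += n;
		len -= (size_t)n;
	}
	return 0;
}

int main(void)
{
	static const char msg[] = "short writes are retried transparently\n";
	return write_all(STDOUT_FILENO, msg, sizeof(msg) - 1) ? 1 : 0;
}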
In any case, when this option is disabled, we don't get these flags set in the negotiate response, right?
...and with current clients it would end up setting a wsize like this:
/*
 * no CAP_LARGE_WRITE_X or is signing enabled without CAP_UNIX set?
 * Limit it to max buffer offered by the server, minus the size of the
 * WRITEX header, not including the 4 byte RFC1001 length.
 */
if (!(server->capabilities & CAP_LARGE_WRITE_X) ||
    (!(server->capabilities & CAP_UNIX) && server->sign))
	wsize = min_t(unsigned int, wsize,
		      server->maxBuf - sizeof(WRITE_REQ) + 4);
...so it's unclear to me why it would be sending large write requests with those flags disabled. Perhaps a capture between client and server would be a good idea?
The client used the same kernel version as the server:
Linux 2.6.32-5-amd64 (SMP w/2 CPU cores)
The test setup has already been torn down, so I unfortunately can't provide a packet trace right now; maybe I can if I find some spare time in the future.