The Samba-Bugzilla – Bug 12819
[PATCH] sync() on receiving side for data consistency
Last modified: 2018-02-13 20:17:49 UTC
Created attachment 13253
Here is a patch that calls sync() once files have been received, for data consistency.
Thank you !
This seems wrong to me. If the OS is failing to manage write buffers and file access between processes, you would have a lot bigger problems in every process all through the system, and this wouldn't fix it.
Similarly, if rsync were corrupting data, a lot of people would already know about it. It gets used way too much and too heavily for anything like this to go unnoticed for more than a day, let alone 15 or more years.
It's almost axiomatic: No matter what problem you think you have, no matter what language or OS or platform, if you think it's fixed by either sleep() or sync(), it's not.
Thank you for your feedback Brian.
I don't have any problem.
I just want to be sure that when the client (sender) has finished its transfer, its data is on the server's (receiver's) disks before it disconnects.
So that when it disconnects correctly / successfully, its data is for sure on the disks.
"On disks" means on the platters, so that if there is a failure (hardware, power...), the data is safe, not lost.
Of course, disks that do not lie about the sync() command must be used (data must be on the platters, not only in the disks' cache). As well as a robust filesystem, some redundancy... (but that's off-topic here).
Perhaps we could make it an option, so that those whose OS fails to manage write buffers would not be degraded even more... But they should certainly look at their performance issue first.
How about just using a post-xfer command on the server side that does 'sync'?
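For reference, that suggestion would look something like this in rsyncd.conf (module name and path are made up for the example):

```ini
[backup]
    path = /srv/backup
    read only = false
    # Run once on the server after each transfer finishes.
    post-xfer exec = /bin/sync
```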
Yes Paul, I thought about it, but the sync command may not be available if the server (receiver) is chrooted (for example when using the patch proposed in #12817).
Any program could make this same "just to be safe" argument practically every time it closes a file it has written, for any reason. If it wrote anything, it was always for some reason, and it wants to know for sure that the data really got safely written. There is nothing special about rsync in that regard. cp might as well have it. The ">" operator in bash might as well have it.
The kernel, the VFS, and the hardware drivers all already do whatever is necessary in that regard, and it's generally wrong for any application to try to do it itself. Otherwise the disk would be in a constant state of sync()'ing and never actually manage to get any other work done. Consider a multiuser host with 500 rsync receivers. Each individual sync() is incredibly disruptive to all other processes. "Everyone hold up while we flush the disk buffer..." The entire system waits while that happens.
That way just leads to things like the example you just used: lower layers that start lying about sync() to upper layers because too many apps use it when they shouldn't. "Fine, if apps are going to sync all the time, that ends up being 86 times a second across all procs running at any given moment, which is unsupportable, so we'll just make sync() a no-op stub and do it when it's actually required, and apps can sync()-away to their hearts' content."
I think the only reason rsync might have to sync is if you built rsync as a self-contained bootable executable like memtest86, or possibly as an MS-DOS executable.
Think of it this way: write() already makes a certain promise that it will not return until it has done its job, and it will not assert success when it can't. Essentially the man page for any syscall is a contract. In fact, all APIs are contracts.
write() in turn relies on various other calls to even lower layers to keep their promises too, to manage the in-kernel buffer or the cache on a RAID card, etc.
All of these things MUST be relied on rather than second-guessed. It would be insane, for example, for write() to say "I can't really be sure this disk driver has really done its thing. I'd better force it to sync before I return to the application." Or "I can't really be sure malloc() really allocated the memory. I'd better malloc 3 or 4 copies, compare them, and use whichever copies agree with each other..." It's insane.
You write(), you check the return value, and you're done. The low level hardware is someone else's job, and you won't be doing a better job than they already did.
And what about a power failure between two ZFS transaction groups?
Note that my patch simply adds a sync() just after recv_files(), so one sync() per connection, not per write operation.
Quite low workload actually :)
But we could make this an rsync option, so that one can enable / disable it as one sees fit.
You tell me: what ABOUT a power failure between two ZFS transaction groups, or any other filesystem operations?
This does not improve or solve any problem that the fs and all the other layers aren't already handling. This is simply a misguided idea, however sensible and attractive it seems.