Bug 6843 - flocks do not get sent to the physical file system, causing protocol interoperability issues with NFS
Alias: None
Product: CifsVFS
Classification: Unclassified
Component: kernel fs
Version: 2.6
Hardware: Other
OS: Windows XP
Importance: P3 normal
Target Milestone: ---
Assignee: Sachin Prabhu
QA Contact: cifs QA contact
Depends on:
Reported: 2009-10-24 17:56 UTC by Barry Sabsevitz (mail address dead)
Modified: 2021-03-10 20:07 UTC
CC List: 2 users

See Also:


Description Barry Sabsevitz (mail address dead) 2009-10-24 17:56:06 UTC
I'm not sure if this is by design or if it is a bug/missing functionality. If I mount a Samba share on a Linux client and issue an flock() call, it looks like Samba does not send it down to the physical file system the way it does with byte-range locks. Instead, Samba appears to store the flock request in its tdbs. The issue that arises is with having Samba and NFS operate together as follows:
1. mount a file system via NFS.
2. export the same file system via Samba.
3. Have a process on the NFS mount grab an flock.
4. Have a process on the CIFS mount grab an flock.

The process issuing the flock via NFS will not see that the process issuing the flock via Samba/CIFS holds the lock, because Samba does not send the request down to the physical file system.

I know that interoperability between CIFS and NFS is difficult, but I was wondering whether Samba should send flock requests down to the physical file system as it does with byte-range locks. To reproduce: write one program that grabs an exclusive flock via NFS and another that grabs an exclusive flock via Samba; neither will see the lock the other holds.

Is this something that is in-plan but hasn't been done yet or is this missing functionality that should be fixed or is there a design reason this functionality was not put into Samba?

Comment 1 Jeremy Allison 2009-10-24 19:23:17 UTC
This is not a server bug. The CIFSFS client doesn't send flocks to the server.
(There is no .flock entry in the struct file_operations in the cifs client code).

Note this entry from the Linux NFS FAQ:

  D10. I'm trying to use flock()/BSD locks to lock files used on multiple clients, but the files become corrupted. How come?

    A. flock()/BSD locks act only locally on Linux NFS clients prior to 2.6.12. Use fcntl()/POSIX locks to ensure that file locks are visible to other clients.

    Here are some ways to serialize access to an NFS file.

        * Use the fcntl()/POSIX locking API. This type of locking provides byte-range locking across multiple clients via the NLM protocol, or via NFSv4.
        * Use a separate lockfile, and create hard links to it. See the description in the O_EXCL section of the creat(2) man page.

From the Linux NFS client source code:

        /* We're simulating flock() locks using posix locks on the server */
        fl->fl_owner = (fl_owner_t)filp;
        fl->fl_start = 0;
        fl->fl_end = OFFSET_MAX;

So flocks are simulated by using a fcntl byte range lock across the whole file. The Samba server will cope with this. 

Portable code should not be using flock anyway; it's a BSD-ism.

I'm re-categorizing this bug as a CIFSFS client bug and re-assigning to Jeff Layton.

Jeff, just steal the same code that the nfs client code uses to simulate flock by putting a byte range lock across the whole file.

Comment 2 Jeff Layton 2009-11-02 07:06:51 UTC
Ok, might be quite some time before I have a chance to add this feature. I'm a little busy squashing bugs at the moment...
Comment 3 Jeff Layton 2011-11-07 18:30:25 UTC
Reassigning to Sachin as he has a patchset for this...
Comment 4 Sachin Prabhu 2011-11-07 20:08:27 UTC
Posted a preliminary patch to cifs-list.

Comment 5 Sachin Prabhu 2021-03-10 20:07:28 UTC
Just noticed that this old bz is still open. The original set of patches I proposed back then was never merged into the cifs module; the functionality has since been provided by Steve French.

Closing this issue.