static int cephwrap_kernel_flock(struct vfs_handle_struct *handle, files_struct *fsp,
                                 uint32_t share_mode, uint32_t access_mask)
{
        DBG_DEBUG("[CEPH] kernel_flock\n");
        /*
         * We must return zero here and pretend all is good.
         * One day we might have this in CEPH.
         */
        return 0;
}

This is presumably just a workaround for the SMB_VFS_KERNEL_FLOCK() call that is made on open when "kernel share modes = true" (the default). In this default case, the open fails if the flock call returns -1. I think it makes sense to copy the vfs_gluster behaviour here and return -1 (errno = ENOSYS), alongside a new vfs_ceph manpage entry which explicitly recommends setting "kernel share modes = no".
Created attachment 14334 [details] fix for 4.7.next @Jeremy: JFYI, this will (loudly) break existing deployments using vfs_ceph without a corresponding "kernel share modes = no" configuration. I'd like to proceed with the backport nevertheless, but would understand if you only want it in the 4.9 branch.
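For affected deployments, the corresponding configuration is a one-line change in the share definition. A sketch, where the share name and path are hypothetical and only the "vfs objects" and "kernel share modes" lines reflect the fix discussed above:

```
[cephfs_share]
        path = /cephfs/share
        vfs objects = ceph
        kernel share modes = no
```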
Created attachment 14335 [details] fix for 4.8.next
The v4-9-test branch is already carrying the fix.
Actually I'm OK with these going into release branches. It's not a VFS ABI change, and people using ceph backends are going to be technically sophisticated enough to cope with these changes. Re-assigning to Karolin for inclusion in 4.8.next, 4.7.next.
Pushed to autobuild-v4-{8,7}-test.
(In reply to Karolin Seeger from comment #5) Pushed to both branches. Closing out bug report. Thanks!