Bug 14682 - vfs_shadow_copy2: core dump in make_relative_path
Summary: vfs_shadow_copy2: core dump in make_relative_path
Status: ASSIGNED
Alias: None
Product: Samba 4.1 and newer
Classification: Unclassified
Component: VFS Modules
Version: 4.14.4
Hardware: x64 FreeBSD
Importance: P5 normal
Target Milestone: ---
Assignee: Jeremy Allison
QA Contact: Samba QA Contact
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2021-04-01 07:41 UTC by Peter Eriksson
Modified: 2021-05-28 12:33 UTC

See Also:


Attachments
git-am fix for master. (4.52 KB, patch)
2021-05-27 05:52 UTC, Jeremy Allison

Description Peter Eriksson 2021-04-01 07:41:47 UTC
Just got a couple of core dumps from smbd due to SIGSEGV in make_relative_path in vfs_shadow_copy2.c 

The "(null)" in abs_path and the NULL cwd at frame 11 look a bit suspicious.

FreeBSD 12.2, ZFS filesystem, Samba 4.14.2


GDB backtrace:
(gdb) bt
#0  0x00000008103e1c2a in thr_kill () from /lib/libc.so.7
#1  0x00000008103e0084 in raise () from /lib/libc.so.7
#2  0x0000000810356279 in abort () from /lib/libc.so.7
#3  0x000000080440c3e9 in dump_core () at ../../source3/lib/dumpcore.c:338
#4  0x000000080441adaa in smb_panic_s3 (why=<optimized out>) at ../../source3/lib/util.c:850
#5  0x0000000801291349 in smb_panic (why=why@entry=0x7fffffffd350 "Signal 11: Segmentation fault")
    at ../../lib/util/fault.c:197
#6  0x00000008012913c5 in fault_report (sig=11) at ../../lib/util/fault.c:81
#7  sig_fault (sig=11) at ../../lib/util/fault.c:92
#8  0x000000080934bb70 in ?? () from /lib/libthr.so.3
#9  0x000000080934b13f in ?? () from /lib/libthr.so.3
#10 <signal handler called>
#11 0x0000000814c312d6 in make_relative_path (
    abs_path=0x814f9a4f0 "/(null)/iei/kansli/Ekonomi/Avdelningar/...redacted...", cwd=0x0)
    at ../../source3/modules/vfs_shadow_copy2.c:440
#12 _shadow_copy2_strip_snapshot_internal (mem_ctx=0x814f9a080, handle=handle@entry=0x8114040c0, 
    smb_fname=smb_fname@entry=0x7fffffffdf60, ptimestamp=ptimestamp@entry=0x7fffffffde98, 
    pstripped=pstripped@entry=0x7fffffffdea0, psnappath=psnappath@entry=0x0, _already_converted=0x0, 
    function=0x814c3ba10 <__FUNCTION__.24443> "shadow_copy2_stat") at ../../source3/modules/vfs_shadow_copy2.c:656
#13 0x0000000814c31405 in _shadow_copy2_strip_snapshot (mem_ctx=<optimized out>, handle=handle@entry=0x8114040c0, 
    orig_name=orig_name@entry=0x7fffffffdf60, ptimestamp=ptimestamp@entry=0x7fffffffde98, 
    pstripped=pstripped@entry=0x7fffffffdea0, 
    function=function@entry=0x814c3ba10 <__FUNCTION__.24443> "shadow_copy2_stat")
    at ../../source3/modules/vfs_shadow_copy2.c:689
#14 0x0000000814c354e4 in shadow_copy2_stat (handle=0x8114040c0, smb_fname=0x7fffffffdf60)
    at ../../source3/modules/vfs_shadow_copy2.c:1177
#15 0x000000080171eeb5 in smb_vfs_call_stat (handle=<optimized out>, smb_fname=smb_fname@entry=0x7fffffffdf60)
    at ../../source3/smbd/vfs.c:2172
#16 0x0000000801722e0a in stat_cache_lookup (conn=0x814fbb0e0, posix_paths=<optimized out>, pp_name=0x810da0e20, 
    pp_dirpath=pp_dirpath@entry=0x7fffffffe0f8, pp_start=pp_start@entry=0x7fffffffe0e8, twrp=132616044000000000, 
    pst=0x810da0e38) at ../../source3/smbd/statcache.c:337
#17 0x000000080170a474 in unix_convert (mem_ctx=mem_ctx@entry=0x810dbf280, conn=conn@entry=0x814fbb0e0, 
    orig_path=orig_path@entry=0x810dbf690 "iei/kansli/Ekonomi/Avdelningar/...redacted...", twrp=twrp@entry=132616044000000000, 
    smb_fname_out=smb_fname_out@entry=0x7fffffffe268, ucf_flags=ucf_flags@entry=0)
    at ../../source3/smbd/filename.c:1120
#18 0x000000080170bfe6 in filename_convert_internal (ctx=ctx@entry=0x810dbf280, conn=0x814fbb0e0, 
    smbreq=smbreq@entry=0x0, 
    name_in=0x810dbf690 "iei/kansli/Ekonomi/Avdelningar/...redacted...", ucf_flags=0, twrp=132616044000000000, _smb_fname=0x7fffffffe340)
    at ../../source3/smbd/filename.c:1936
#19 0x000000080170c5b4 in filename_convert (ctx=ctx@entry=0x810dbf280, conn=<optimized out>, 
    name_in=<optimized out>, ucf_flags=<optimized out>, twrp=<optimized out>, 
    pp_smb_fname=pp_smb_fname@entry=0x7fffffffe340) at ../../source3/smbd/filename.c:2029
#20 0x0000000801750220 in smbd_smb2_create_send (in_context_blobs=..., 
    in_name=0x810debca0 "iei\\kansli\\Ekonomi\\Avdelningar\\...redacted...", in_create_options=<optimized out>, in_create_disposition=<optimized out>, 
    in_share_access=<optimized out>, in_file_attributes=<optimized out>, in_desired_access=<optimized out>, 
    in_impersonation_level=<optimized out>, in_oplock_level=<optimized out>, smb2req=0x810deb7b0, 
    ev=<optimized out>, mem_ctx=0x810deb7b0) at ../../source3/smbd/smb2_create.c:946
#21 smbd_smb2_request_process_create (smb2req=smb2req@entry=0x810deb7b0) at ../../source3/smbd/smb2_create.c:268
#22 0x0000000801746429 in smbd_smb2_request_dispatch (req=req@entry=0x810deb7b0)
    at ../../source3/smbd/smb2_server.c:3303
#23 0x0000000801747342 in smbd_smb2_io_handler (fde_flags=<optimized out>, xconn=0x810db1fe0)
    at ../../source3/smbd/smb2_server.c:4913
#24 smbd_smb2_connection_handler (ev=<optimized out>, fde=<optimized out>, flags=<optimized out>, 
    private_data=<optimized out>) at ../../source3/smbd/smb2_server.c:4951
#25 0x0000000802344eb9 in tevent_common_invoke_fd_handler (fde=fde@entry=0x810d9d900, flags=<optimized out>, 
    removed=removed@entry=0x0) at ../../lib/tevent/tevent_fd.c:138
#26 0x0000000802347495 in poll_event_loop_poll (tvalp=0x7fffffffe550, ev=0x810da0060)
    at ../../lib/tevent/tevent_poll.c:569
#27 poll_event_loop_once (ev=0x810da0060, location=<optimized out>) at ../../lib/tevent/tevent_poll.c:626
#28 0x000000080234468f in _tevent_loop_once (ev=ev@entry=0x810da0060, 
    location=location@entry=0x8018737a0 "../../source3/smbd/process.c:4232") at ../../lib/tevent/tevent.c:772
#29 0x0000000802344866 in tevent_common_loop_wait (ev=0x810da0060, 
    location=0x8018737a0 "../../source3/smbd/process.c:4232") at ../../lib/tevent/tevent.c:895
#30 0x00000008023448c2 in _tevent_loop_wait (ev=ev@entry=0x810da0060, 
    location=location@entry=0x8018737a0 "../../source3/smbd/process.c:4232") at ../../lib/tevent/tevent.c:914
#31 0x0000000801734d64 in smbd_process (ev_ctx=ev_ctx@entry=0x810da0060, msg_ctx=msg_ctx@entry=0x810d99300, 
    dce_ctx=dce_ctx@entry=0x810d8a0c0, sock_fd=sock_fd@entry=50, interactive=interactive@entry=false)
    at ../../source3/smbd/process.c:4232
#32 0x000000000102e7fd in smbd_accept_connection (ev=0x810da0060, fde=<optimized out>, flags=<optimized out>, 
    private_data=<optimized out>) at ../../source3/smbd/server.c:1020
#33 0x0000000802344eb9 in tevent_common_invoke_fd_handler (fde=fde@entry=0x810d994c0, flags=<optimized out>, 
    removed=removed@entry=0x0) at ../../lib/tevent/tevent_fd.c:138
#34 0x0000000802347495 in poll_event_loop_poll (tvalp=0x7fffffffe7b0, ev=0x810da0060)
    at ../../lib/tevent/tevent_poll.c:569
#35 poll_event_loop_once (ev=0x810da0060, location=<optimized out>) at ../../lib/tevent/tevent_poll.c:626
#36 0x000000080234468f in _tevent_loop_once (ev=ev@entry=0x810da0060, 
    location=location@entry=0x1036d90 "../../source3/smbd/server.c:1367") at ../../lib/tevent/tevent.c:772
#37 0x0000000802344866 in tevent_common_loop_wait (ev=0x810da0060, 
    location=0x1036d90 "../../source3/smbd/server.c:1367") at ../../lib/tevent/tevent.c:895
#38 0x00000008023448c2 in _tevent_loop_wait (ev=ev@entry=0x810da0060, 
    location=location@entry=0x1036d90 "../../source3/smbd/server.c:1367") at ../../lib/tevent/tevent.c:914
#39 0x00000000010302fb in smbd_parent_loop (parent=0x810d99760, ev_ctx=0x810da0060)
    at ../../source3/smbd/server.c:1367
#40 main (argc=<optimized out>, argv=<optimized out>) at ../../source3/smbd/server.c:2220
Comment 1 Peter Eriksson 2021-04-01 09:24:34 UTC
It seems the "priv" struct mostly contains NULL pointers at this time:

(gdb) frame 12
#12 _shadow_copy2_strip_snapshot_internal (mem_ctx=0x814f9a080, handle=handle@entry=0x8114040c0, 
    smb_fname=smb_fname@entry=0x7fffffffdf60, ptimestamp=ptimestamp@entry=0x7fffffffde98, 
    pstripped=pstripped@entry=0x7fffffffdea0, psnappath=psnappath@entry=0x0, _already_converted=0x0, 
    function=0x814c3ba10 <__FUNCTION__.24443> "shadow_copy2_stat") at ../../source3/modules/vfs_shadow_copy2.c:656
656	in ../../source3/modules/vfs_shadow_copy2.c

(gdb) print *priv
$17 = {config = 0x8114c11a0, snaps = 0x8114be160, shadow_cwd = 0x0, shadow_connectpath = 0x0, shadow_realpath = 0x0}



Hmm.. some wild idea (possibly not related):

#11 0x0000000813e982d6 in make_relative_path (abs_path=0x814400cf0 "/(null)/iei/kansli/Ekonomi/Avdelningar/14002 jel eu-projekt/STEM", cwd=0x0)
    at ../../source3/modules/vfs_shadow_copy2.c:440

...

#17 0x000000080170a474 in unix_convert (mem_ctx=mem_ctx@entry=0x810deaf90, conn=conn@entry=0x81452d5e0, 
    orig_path=orig_path@entry=0x810deb3a0 "iei/kansli/Ekonomi/Avdelningar/14002 jel eu-projekt/STEM/Uppföljningar/Uppföljning STEM staff.xlsx", 
    twrp=twrp@entry=132616044000000000, smb_fname_out=smb_fname_out@entry=0x7fffffffe268, ucf_flags=ucf_flags@entry=0) at ../../source3/smbd/filename.c:1120

There is a ZFS snapshot directory in the top directory ("iei", in the real world /export/liu/iei). The rest of the tree is just normal directories. "STEM" is the last one with only ASCII characters; the next level down ("Uppföljningar") contains non-ASCII Unicode characters.

The user trying to access the share at this time seems to have the correct access rights to the files, so it's not ACL-related (I think).

I've been trying to reproduce using smbclient in a test share but so far no luck.
Comment 2 Jeremy Allison 2021-04-01 16:38:32 UTC
Yep, make_relative_path() is only called from one place:

        if (pstripped != NULL) {
                stripped = talloc_strdup(mem_ctx, abs_path);
                if (stripped == NULL) {
                        ret = false;
                        goto out;
                }

                if (smb_fname->base_name[0] != '/') {
                        ret = make_relative_path(priv->shadow_cwd, stripped);

so looks like priv->shadow_cwd == NULL here.

What config does shadow_copy2 use for this share in your smb.conf?
Comment 3 Peter Eriksson 2021-04-01 18:12:49 UTC
Parts from our smb.conf:

;; VFS objects to enable
vfs objects = shadow_copy2 zfsacl full_audit

;; Snapshots/Previous Versions
shadow:snapdir = .zfs/snapshot
shadow:format = auto-%Y-%m-%d.%H:%M:%S
shadow:sort = desc
shadow:localtime = yes
shadow:snapdirseverywhere = yes

...
[homes]
browseable = false
printable = false
public = false
writeable = true

...

[liu]
copy = homes
comment = LIU Directories
path = /export/liu
create mask = 0700
directory mask = 0700
inherit owner = no


The snapshot directory typically contains 50 snapshots and looks like this:

root@filur06:/export/liu/iei/.zfs/snapshot # ls
auto-2020-04-01.23:00:00	auto-2021-03-14.22:00:00	auto-2021-03-31.11:00:00	auto-2021-04-01.09:00:00
auto-2020-05-01.23:00:00	auto-2021-03-21.22:00:00	auto-2021-03-31.12:00:00	auto-2021-04-01.10:00:00
auto-2020-06-01.23:00:00	auto-2021-03-25.21:00:00	auto-2021-03-31.13:00:00	auto-2021-04-01.11:00:00
auto-2020-07-01.23:00:00	auto-2021-03-26.21:00:00	auto-2021-03-31.14:00:00	auto-2021-04-01.12:00:00
auto-2020-08-01.23:00:00	auto-2021-03-27.21:00:00	auto-2021-03-31.15:00:00	auto-2021-04-01.13:00:00
auto-2020-09-01.23:00:00	auto-2021-03-28.21:00:00	auto-2021-03-31.16:00:00	auto-2021-04-01.14:00:00
auto-2020-10-01.23:00:00	auto-2021-03-28.22:00:00	auto-2021-03-31.17:00:00	auto-2021-04-01.15:00:00
auto-2020-11-01.23:00:00	auto-2021-03-29.21:00:00	auto-2021-03-31.18:00:00	auto-2021-04-01.16:00:00
auto-2020-12-01.23:00:00	auto-2021-03-30.21:00:00	auto-2021-03-31.19:00:00	auto-2021-04-01.17:00:00
auto-2021-01-01.23:00:00	auto-2021-03-31.07:00:00	auto-2021-03-31.20:00:00	auto-2021-04-01.18:00:00
auto-2021-02-01.23:00:00	auto-2021-03-31.08:00:00	auto-2021-03-31.21:00:00	auto-2021-04-01.19:00:00
auto-2021-03-01.23:00:00	auto-2021-03-31.09:00:00	auto-2021-04-01.07:00:00
auto-2021-03-07.22:00:00	auto-2021-03-31.10:00:00	auto-2021-04-01.08:00:00


Btw, I have been thinking about modifying the "snapdirseverywhere" code a bit. We currently have to set it since the snapshot directory isn't directly in the share root but one (or a couple of) levels down (i.e., liu/iei).
Comment 4 Peter Eriksson 2021-05-07 10:36:32 UTC
Just got another core dump. This time on 4.14.4; looks like the same bug:

(gdb) bt
#0  0x0000000804b66c2a in thr_kill () from /lib/libc.so.7
#1  0x0000000804b65084 in raise () from /lib/libc.so.7
#2  0x0000000804adb279 in abort () from /lib/libc.so.7
#3  0x00000008029b32f9 in dump_core () at ../../source3/lib/dumpcore.c:338
#4  0x00000008029c1cba in smb_panic_s3 (why=<optimized out>) at ../../source3/lib/util.c:850
#5  0x00000008012912e9 in smb_panic (
    why=why@entry=0x7fffffffd350 "Signal 11: Segmentation fault") at ../../lib/util/fault.c:197
#6  0x0000000801291365 in fault_report (sig=11) at ../../lib/util/fault.c:81
#7  sig_fault (sig=11) at ../../lib/util/fault.c:92
#8  0x0000000801517b70 in ?? () from /lib/libthr.so.3
#9  0x000000080151713f in ?? () from /lib/libthr.so.3
#10 <signal handler called>
#11 0x0000000813965976 in make_relative_path (
    abs_path=0x819ae2490 "/(null)/marwi56/MigratedFrom/IEI/axel/2021/Q1", cwd=0x0)
    at ../../source3/modules/vfs_shadow_copy2.c:440
#12 _shadow_copy2_strip_snapshot_internal (mem_ctx=0x819ae2080, handle=handle@entry=0x80fd17120, 
    smb_fname=smb_fname@entry=0x7fffffffdf60, ptimestamp=ptimestamp@entry=0x7fffffffde98, 
    pstripped=pstripped@entry=0x7fffffffdea0, psnappath=psnappath@entry=0x0, 
    _already_converted=0x0, function=0x8139700b0 <__FUNCTION__.24443> "shadow_copy2_stat")
    at ../../source3/modules/vfs_shadow_copy2.c:656
#13 0x0000000813965aa5 in _shadow_copy2_strip_snapshot (mem_ctx=<optimized out>, 
    handle=handle@entry=0x80fd17120, orig_name=orig_name@entry=0x7fffffffdf60, 
    ptimestamp=ptimestamp@entry=0x7fffffffde98, pstripped=pstripped@entry=0x7fffffffdea0, 
    function=function@entry=0x8139700b0 <__FUNCTION__.24443> "shadow_copy2_stat")
    at ../../source3/modules/vfs_shadow_copy2.c:689
#14 0x0000000813969b84 in shadow_copy2_stat (handle=0x80fd17120, smb_fname=0x7fffffffdf60)
    at ../../source3/modules/vfs_shadow_copy2.c:1177
#15 0x000000080171ea9a in smb_vfs_call_stat (handle=<optimized out>, 
    smb_fname=smb_fname@entry=0x7fffffffdf60) at ../../source3/smbd/vfs.c:2172
#16 0x00000008017229ef in stat_cache_lookup (conn=0x819aa79e0, posix_paths=<optimized out>, 
    pp_name=0x819f5a7e0, pp_dirpath=pp_dirpath@entry=0x7fffffffe0f8, 
    pp_start=pp_start@entry=0x7fffffffe0e8, twrp=132647544000000000, pst=0x819f5a7f8)
    at ../../source3/smbd/statcache.c:337
#17 0x000000080170a035 in unix_convert (mem_ctx=mem_ctx@entry=0x80fcd5680, 
    conn=conn@entry=0x819aa79e0, 
    orig_path=orig_path@entry=0x80fcd5a90 "marwi56/MigratedFrom/IEI/Axel/2021/Q1/Manuella periodiseringar Q1 2021 rättad.xlsx", twrp=twrp@entry=132647544000000000, 
    smb_fname_out=smb_fname_out@entry=0x7fffffffe268, ucf_flags=ucf_flags@entry=0)
    at ../../source3/smbd/filename.c:1120
#18 0x000000080170bba7 in filename_convert_internal (ctx=ctx@entry=0x80fcd5680, 
    conn=0x819aa79e0, smbreq=smbreq@entry=0x0, 
    name_in=0x80fcd5a90 "marwi56/MigratedFrom/IEI/Axel/2021/Q1/Manuella periodiseringar Q1 2021 rättad.xlsx", ucf_flags=0, twrp=132647544000000000, _smb_fname=0x7fffffffe340)
    at ../../source3/smbd/filename.c:1936
#19 0x000000080170c175 in filename_convert (ctx=ctx@entry=0x80fcd5680, conn=<optimized out>, 
    name_in=<optimized out>, ucf_flags=<optimized out>, twrp=<optimized out>, 
    pp_smb_fname=pp_smb_fname@entry=0x7fffffffe340) at ../../source3/smbd/filename.c:2029
#20 0x000000080174fe22 in smbd_smb2_create_send (in_context_blobs=..., 
    in_name=0x813defc70 "marwi56\\MigratedFrom\\IEI\\Axel\\2021\\Q1\\Manuella periodiseringar Q1 2021 rättad.xlsx", in_create_options=<optimized out>, in_create_disposition=<optimized out>, 
    in_share_access=<optimized out>, in_file_attributes=<optimized out>, 
    in_desired_access=<optimized out>, in_impersonation_level=<optimized out>, 
    in_oplock_level=<optimized out>, smb2req=0x813def7b0, ev=<optimized out>, 
    mem_ctx=0x813def7b0) at ../../source3/smbd/smb2_create.c:946
#21 smbd_smb2_request_process_create (smb2req=smb2req@entry=0x813def7b0)
    at ../../source3/smbd/smb2_create.c:268
#22 0x000000080174600e in smbd_smb2_request_dispatch (req=req@entry=0x813def7b0)
    at ../../source3/smbd/smb2_server.c:3303
#23 0x0000000801746f27 in smbd_smb2_io_handler (fde_flags=<optimized out>, xconn=0x80fcc8fe0)
    at ../../source3/smbd/smb2_server.c:4913
#24 smbd_smb2_connection_handler (ev=<optimized out>, fde=<optimized out>, 
    flags=<optimized out>, private_data=<optimized out>) at ../../source3/smbd/smb2_server.c:4951
#25 0x0000000802142ea9 in tevent_common_invoke_fd_handler (fde=fde@entry=0x80fcb4900, 
    flags=<optimized out>, removed=removed@entry=0x0) at ../../lib/tevent/tevent_fd.c:138
#26 0x0000000802145485 in poll_event_loop_poll (tvalp=0x7fffffffe550, ev=0x80fcb7060)
    at ../../lib/tevent/tevent_poll.c:569
#27 poll_event_loop_once (ev=0x80fcb7060, location=<optimized out>)
    at ../../lib/tevent/tevent_poll.c:626
#28 0x000000080214267f in _tevent_loop_once (ev=ev@entry=0x80fcb7060, 
    location=location@entry=0x801873440 "../../source3/smbd/process.c:4232")
    at ../../lib/tevent/tevent.c:772
#29 0x0000000802142856 in tevent_common_loop_wait (ev=0x80fcb7060, 
    location=0x801873440 "../../source3/smbd/process.c:4232") at ../../lib/tevent/tevent.c:895
#30 0x00000008021428b2 in _tevent_loop_wait (ev=ev@entry=0x80fcb7060, 
    location=location@entry=0x801873440 "../../source3/smbd/process.c:4232")
    at ../../lib/tevent/tevent.c:914
#31 0x0000000801734949 in smbd_process (ev_ctx=ev_ctx@entry=0x80fcb7060, 
    msg_ctx=msg_ctx@entry=0x80fcb0300, dce_ctx=dce_ctx@entry=0x80fca10c0, 
    sock_fd=sock_fd@entry=49, interactive=interactive@entry=false)
    at ../../source3/smbd/process.c:4232
#32 0x000000000102dfbd in smbd_accept_connection (ev=0x80fcb7060, fde=<optimized out>, 
    flags=<optimized out>, private_data=<optimized out>) at ../../source3/smbd/server.c:1020
#33 0x0000000802142ea9 in tevent_common_invoke_fd_handler (fde=fde@entry=0x80fcb04c0, 
    flags=<optimized out>, removed=removed@entry=0x0) at ../../lib/tevent/tevent_fd.c:138
#34 0x0000000802145485 in poll_event_loop_poll (tvalp=0x7fffffffe7b0, ev=0x80fcb7060)
    at ../../lib/tevent/tevent_poll.c:569
#35 poll_event_loop_once (ev=0x80fcb7060, location=<optimized out>)
    at ../../lib/tevent/tevent_poll.c:626
#36 0x000000080214267f in _tevent_loop_once (ev=ev@entry=0x80fcb7060, 
    location=location@entry=0x1036550 "../../source3/smbd/server.c:1367")
    at ../../lib/tevent/tevent.c:772
#37 0x0000000802142856 in tevent_common_loop_wait (ev=0x80fcb7060, 
    location=0x1036550 "../../source3/smbd/server.c:1367") at ../../lib/tevent/tevent.c:895
#38 0x00000008021428b2 in _tevent_loop_wait (ev=ev@entry=0x80fcb7060, 
    location=location@entry=0x1036550 "../../source3/smbd/server.c:1367")
    at ../../lib/tevent/tevent.c:914
#39 0x000000000102fabb in smbd_parent_loop (parent=0x80fcb0760, ev_ctx=0x80fcb7060)
    at ../../source3/smbd/server.c:1367
#40 main (argc=<optimized out>, argv=<optimized out>) at ../../source3/smbd/server.c:2220
Comment 5 Peter Eriksson 2021-05-07 15:08:05 UTC
Hmm...

(gdb) print priv->config[0]
$6 = {gmt_format = 0x813da2360 "auto-%Y-%m-%d.%H:%M:%S", use_sscanf = false, 
  use_localtime = true, snapdir = 0x80fca63a0 ".zfs/snapshot", delimiter = 0x80fca6cd0 "_GMT", 
  snapdirseverywhere = true, crossmountpoints = false, fixinodes = false, 
  sort_order = 0x80fd39b60 "desc", snapdir_absolute = false, 
  mount_point = 0x80fd37160 "/export/staff", rel_connectpath = 0x0, 
  snapshot_basepath = 0x80fdd61e0 "/export/staff/.zfs/snapshot"}

That is not the right snapshot directory for that user. It should have been

  /export/staff/marwi56/.zfs/snapshot/

But "snapdirseverywhere" is set so it should have found the one in the 'marwi56' subdir.

The user probably connected to this share using \\server\staff and then cd'd down into the home directory (\\server\staff\marwi56). There are .zfs/snapshot directories in /export/staff and /export/staff/marwi56.


Looking at the data in "priv", it seems store_cwd_data() hasn't been called, since that is the function that sets "priv->shadow_cwd".

The priv->snaps list of snapshots also seems empty:

(gdb) print priv->snaps[0]
$4 = {snaplist = 0x0, regex = 0x0, fetch_time = 0}
Comment 6 Jeremy Allison 2021-05-07 17:34:03 UTC
If shadow_copy2_chdir() successfully calls SMB_VFS_NEXT_CHDIR(), then store_cwd_data() *must* be called.

Could a chdir() be failing somewhere?
Comment 7 Peter Eriksson 2021-05-07 18:29:16 UTC
I wish I knew. I've sent an email to the user that triggered this bug to try and find out what he was attempting to do (so I can try to reproduce it) but so far no response... 

And all my attempts at reproducing it (accessing the "previous versions" via smbclient from a Linux client and via Windows 10) with my own user always fail (or rather succeed - no core dump :-). Sigh.

Nothing suspicious in the log files either. 

Annoying this.
Comment 8 Jeremy Allison 2021-05-07 19:48:08 UTC
Yeah there's definitely a bug here, but I don't immediately see it. Thanks for persevering. I'm sure eventually we'll be able to get the info we need. I'm leaving this one "assigned" so as soon as you find more info I'll get email.
Comment 9 Jeremy Allison 2021-05-27 05:52:40 UTC
Created attachment 16631 [details]
git-am fix for master.
Comment 10 Jeremy Allison 2021-05-27 05:55:02 UTC
This one looks suspiciously like the bug I just created:

https://bugzilla.samba.org/show_bug.cgi?id=14721

I've posted the fix I added there for you to try.

It's not conclusive, but it's a valgrind error in approximately the right area.
Comment 11 Peter Eriksson 2021-05-28 12:33:54 UTC
I’ll have a look at testing that patch. Seems related to reading/following symbolic links, if I’m not mistaken? Hmm.. I don’t think the user who triggered the bug for us has any symbolic links in their directory. I'll have to check that, though.

Hm… There might be a case where they access their home via a symbolic link - if they were to mount the share directly via \\server.domain\user, then access is via a symlink farm (/export/home/user -> ../staff/user). But normally users access it via AD DFS \\domain\home\user, which would point them to \\server.domain\staff\user - but you never know if someone has been creative :-)

- Peter