Crash previously reported in bug 14226, but it seems to be unrelated.

  frame #10: 0x00007ffffffff003
  frame #11: 0x0000000801a012b5 libsmbd-base-samba4.so`call_trans2qfilepathinfo(conn=0x0000000815a57960, req=0x0000000815a97a60, tran_call=5, pparams=0x0000000815a4c040, total_params=210, ppdata=0x0000000815a4c050, total_data=0, max_data_bytes=528) at trans2.c:6291:11
  frame #12: 0x00000008019f5e2b libsmbd-base-samba4.so`handle_trans2(conn=0x0000000815a57960, req=0x0000000815a97a60, state=0x0000000815a4bfe0) at trans2.c:9776:3
  frame #13: 0x00000008019f4f1c libsmbd-base-samba4.so`reply_trans2(req=0x0000000815a97a60) at trans2.c:10017:3
  frame #14: 0x0000000801a4db1d libsmbd-base-samba4.so`switch_message(type='2', req=0x0000000815a97a60) at process.c:1724:2
  frame #15: 0x0000000801a53d4e libsmbd-base-samba4.so`construct_reply(xconn=0x0000000815adc960, inbuf=0x0000000000000000, size=282, unread_bytes=0, seqnum=0, encrypted=false, deferred_pcd=0x0000000000000000) at process.c:1760:14
  frame #16: 0x0000000801a538e5 libsmbd-base-samba4.so`process_smb(xconn=0x0000000815adc960, inbuf="", nread=282, unread_bytes=0, seqnum=0, encrypted=false, deferred_pcd=0x0000000000000000) at process.c:2015:3
  frame #17: 0x0000000801a5788e libsmbd-base-samba4.so`smbd_server_connection_read_handler(xconn=0x0000000815adc960, fd=42) at process.c:2615:2
  frame #18: 0x0000000801a51574 libsmbd-base-samba4.so`smbd_server_connection_handler(ev=0x0000000815a52060, fde=0x0000000815a4aa00, flags=1, private_data=0x0000000815adc960) at process.c:2642:3
  frame #19: 0x000000081283bcad libtevent.so.0`tevent_common_invoke_fd_handler + 141
  frame #20: 0x000000081283eade libtevent.so.0`___lldb_unnamed_symbol61$$libtevent.so.0 + 1934
  frame #21: 0x000000081283aed1 libtevent.so.0`_tevent_loop_once + 225
  frame #22: 0x000000081283b15b libtevent.so.0`tevent_common_loop_wait + 91
  frame #23: 0x0000000801a52455 libsmbd-base-samba4.so`smbd_process(ev_ctx=0x0000000815a52060, msg_ctx=0x0000000815a4a300, sock_fd=42, interactive=false) at process.c:4135:8
  frame #24: 0x0000000001030339 smbd`smbd_accept_connection(ev=0x0000000815a52060, fde=0x0000000815a4baa0, flags=1, private_data=0x0000000815b6e380) at server.c:1010:3
  frame #25: 0x000000081283bcad libtevent.so.0`tevent_common_invoke_fd_handler + 141
  frame #26: 0x000000081283eade libtevent.so.0`___lldb_unnamed_symbol61$$libtevent.so.0 + 1934
  frame #27: 0x000000081283aed1 libtevent.so.0`_tevent_loop_once + 225
  frame #28: 0x000000081283b15b libtevent.so.0`tevent_common_loop_wait + 91
  frame #29: 0x000000000102d0e6 smbd`smbd_parent_loop(ev_ctx=0x0000000815a52060, parent=0x0000000815a4aa00) at server.c:1355:8
  frame #30: 0x000000000102aa15 smbd`main(argc=3, argv=0x00007fffffffecf8) at server.c:2187:2
  frame #31: 0x00000000010286d9 smbd`_start + 153

(lldb) frame select 11
frame #11: 0x0000000801a012b5 libsmbd-base-samba4.so`call_trans2qfilepathinfo(conn=0x0000000815a57960, req=0x0000000815a97a60, tran_call=5, pparams=0x0000000815a4c040, total_params=210, ppdata=0x0000000815a4c050, total_data=0, max_data_bytes=528) at trans2.c:6291:11
   6288                 return;
   6289         }
   6290
-> 6291         status = smbd_do_qfilepathinfo(conn, req, req, info_level,
   6292                                        fsp, smb_fname,
   6293                                        delete_pending, write_time_ts,
   6294                                        ea_list,

(lldb) p *req
(smb_request) $0 = {
  cmd = '2'
  flags2 = 51271
  smbpid = 6324
  mid = 15616
  seqnum = 0
  vuid = 12094
  tid = 23957
  wct = '\x0f'
  vwv = 0x0000000815a97905
  buflen = 213
  buf = 0x0000000815a97925 <no value available>
  inbuf = 0x0000000815a978e0 <no value available>
  outbuf = 0x0000000000000000 <no value available>
  unread_bytes = 0
  encrypted = false
  conn = 0x0000000815a57960
  sconn = 0x0000000815a52420
  xconn = 0x0000000815adc960
  pcd = {
    handlers = 0x0000000000000000
    context = 0x0000000000000000
  }
  chain_fsp = 0x0000000000000000
  async_priv = 0x66207463656e6e6f
  smb2req = 0x0000000000000000
  priv_paths = 0x0000000000000000
  chain = 0x0000000000000000
  request_time = (tv_sec = 1589456919, tv_usec = 127340)
  posix_pathnames = false
}

(lldb) p *conn
(connection_struct) $1 = {
  next = 0x0000000000000000
  prev = 0x0000000815a57960
  sconn = 0x0000000815a52420
  tcon = 0x0000000815b6e6a0
  cnum = 23957
  params = 0x0000000815be8300
  force_user = false
  vuid_cache = 0x0000000815a6cc60
  printer = false
  ipc = false
  read_only = false
  share_access = 2032127
  ts_res = TIMESTAMP_SET_NT_OR_BETTER
  connectpath = 0x0000000815aeb560 "***anon***"
  origpath = 0x0000000815af6e60 "***anon***"
  cwd_fname = 0x0000000815adf680
  tcon_done = true
  vfs_handles = 0x0000000815b6fbe0
  session_info = 0x0000000815a49ca0
  force_group_gid = 1007
  vuid = 12094
  lastused = 1589456905
  lastused_count = 1589456924
  num_files_open = 0
  num_smb_operations = 19
  encrypt_level = -1
  encrypted_tid = false
  case_sensitive = false
  case_preserve = true
  short_case_preserve = true
  fs_capabilities = 3
  base_share_dev = 2449487824
  hide_list = 0x0000000000000000
  veto_list = 0x0000000000000000
  veto_oplock_list = 0x0000000000000000
  aio_write_behind_list = 0x0000000000000000
  pending_trans = 0x0000000000000000
  spoolss_pipe = 0x0000000000000000
}
Do you have a Wireshark trace or a client reproducer for this?
No, because I don't know what the root cause is, and it comes in waves :) like nothing for a few days and then suddenly 20 crashes in 2 minutes (so dumping the traffic would be hard).

-rw------- 1 root wheel 112181248 May 14 13:47 0-smbd-22370.core
-rw------- 1 root wheel 112177152 May 14 13:47 0-smbd-22945.core
-rw------- 1 root wheel 112177152 May 14 13:47 0-smbd-23005.core
-rw------- 1 root wheel 112308224 May 14 13:48 0-smbd-23518.core
-rw------- 1 root wheel 109940736 May 14 13:48 0-smbd-23602.core
-rw------- 1 root wheel 107986944 May 14 13:48 0-smbd-23643.core
-rw------- 1 root wheel 110084096 May 14 13:48 0-smbd-23665.core
-rw------- 1 root wheel 107790336 May 14 13:48 0-smbd-23720.core
-rw------- 1 root wheel 107790336 May 14 13:48 0-smbd-23748.core
-rw------- 1 root wheel 109940736 May 14 13:48 0-smbd-23774.core
-rw------- 1 root wheel 107790336 May 14 13:48 0-smbd-23799.core
-rw------- 1 root wheel 107790336 May 14 13:48 0-smbd-23848.core
-rw------- 1 root wheel 107790336 May 14 13:48 0-smbd-23925.core
-rw------- 1 root wheel 107790336 May 14 13:48 0-smbd-23960.core
-rw------- 1 root wheel 107790336 May 14 13:48 0-smbd-23977.core
-rw------- 1 root wheel 107790336 May 14 13:48 0-smbd-24029.core
-rw------- 1 root wheel 107790336 May 14 13:48 0-smbd-24038.core
-rw------- 1 root wheel 107790336 May 14 13:48 0-smbd-24048.core

All of the crashes above are from one client, though, and since I have per-client logging, it looks like this:

[2020/05/14 13:48:23.755220, 2] ../../source3/smbd/service.c:851(make_connection_snum)
  *** (ipv4:192.168.3.70:56814) connect to service *** initially as user *** (uid=1173, gid=1007) (pid 24029)
[2020/05/14 13:48:24.116011, 0] ../../lib/util/fault.c:79(fault_report)
  ...(error report etc)...
[2020/05/14 13:48:24.902280, 2] ../../source3/smbd/service.c:851(make_connection_snum)
  *** (ipv4:192.168.3.70:56815) connect to service *** initially as user *** (uid=1173, gid=1007) (pid 24038)
[2020/05/14 13:48:25.203236, 0] ../../lib/util/fault.c:79(fault_report)

So there are no real file operations in between (at least none that are logged).
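In case it helps to check whether all of those cores land in the same call_trans2qfilepathinfo frame, here is a minimal sketch that batch-dumps a backtrace from each core with lldb. It assumes lldb is on the PATH; the smbd binary path and the core-file glob below are placeholders and need to be adjusted to the actual system:

#!/usr/bin/env python3
# Sketch: run "bt" against every smbd core file via lldb in batch mode,
# so the faulting frames can be compared across cores.
import glob
import subprocess

SMBD_BINARY = "/usr/local/sbin/smbd"   # assumption: adjust to the real smbd path
CORE_GLOB = "0-smbd-*.core"            # assumption: run from the directory holding the cores

for core in sorted(glob.glob(CORE_GLOB)):
    print("=== %s ===" % core)
    # --batch makes lldb execute the -o commands and exit;
    # "bt" prints the backtrace of the selected (faulting) thread.
    subprocess.run(
        ["lldb", "--batch", "-c", core, "-o", "bt", SMBD_BINARY],
        check=False,
    )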