We've hit the following panic a couple of times but have not come up with a 100% reproduction case yet. The following appears in log.smbd:

[2012/05/03 18:27:58.244338, 0] lib/util.c:1117(smb_panic)
  PANIC (pid 1214): internal error
[2012/05/03 18:27:58.270128, 0] lib/util.c:1221(log_stack_trace)
  BACKTRACE: 27 stack frames:
   #0 smbd(log_stack_trace+0x2e) [0x871e5e]
   #1 smbd(smb_panic+0x32) [0x871f82]
   #2 smbd(+0x42bc4b) [0x860c4b]
   #3 [0x130400]
   #4 smbd(security_token_has_sid+0x1a) [0x8a83ca]
   #5 smbd(se_access_check+0xbe) [0x88b39e]
   #6 smbd(smb1_file_se_access_check+0x8b) [0x54763b]
   #7 /usr/lib/samba/vfs/acl_xattr.so(+0x3f7a) [0x38abf7a]
   #8 smbd(smb_vfs_call_opendir+0x3e) [0x556a8e]
   #9 smbd(OpenDir+0xab) [0x4f020b]
   #10 smbd(close_file+0x45a) [0x5505ca]
   #11 smbd(file_close_conn+0x5f) [0x4eb6bf]
   #12 smbd(close_cnum+0x24) [0x572174]
   #13 smbd(conn_close_all+0xb7) [0x4f4b47]
   #14 smbd(+0x701b8c) [0xb36b8c]
   #15 smbd(+0x702041) [0xb37041]
   #16 smbd(+0x132f81) [0x567f81]
   #17 smbd(tevent_common_check_signal+0x188) [0x886af8]
   #18 smbd(run_events_poll+0x2c) [0x883d0c]
   #19 smbd(smbd_process+0x97c) [0x56e63c]
   #20 smbd(+0x6fffc0) [0xb34fc0]
   #21 smbd(run_events_poll+0x37e) [0x88405e]
   #22 smbd(+0x44f236) [0x884236]
   #23 smbd(_tevent_loop_once+0x98) [0x885008]
   #24 smbd(main+0x1332) [0xb364b2]
   #25 /lib/libc.so.6(__libc_start_main+0xf3) [0xffe3f3]
   #26 smbd(+0xa47e1) [0x4d97e1]

I used gdb on the core file and got the following:

(gdb) bt
#0  0x00130424 in __kernel_vsyscall ()
#1  0x010130ef in raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#2  0x01014a25 in abort () at abort.c:93
#3  0x00861466 in dump_core () at lib/fault.c:391
#4  0x00871fba in smb_panic (why=0xc8c011 "internal error") at lib/util.c:1133
#5  0x00860c4b in fault_report (sig=11) at lib/fault.c:53
#6  sig_fault (sig=11) at lib/fault.c:76
#7  <signal handler called>
#8  security_token_has_sid (token=0x0, sid=0x1fb4840) at ../libcli/security/security_token.c:109
#9  0x0088b39e in se_access_check (sd=0x1fb4680, token=0x0, access_desired=<value optimized out>, access_granted=0xbffef5fc) at ../libcli/security/access_check.c:226
#10 0x0054763b in smb1_file_se_access_check (conn=0x1f946b8, sd=0x1fb4680, token=0x0, access_desired=1, access_granted=0xbffef5fc) at smbd/open.c:61
#11 0x038abf7a in opendir_acl_common (handle=0x1f8d7d8, fname=0x1fb3670 "exampleapiunits", mask=0x0, attr=0) at modules/vfs_acl_common.c:839
#12 0x00556a8e in smb_vfs_call_opendir (handle=<value optimized out>, fname=0x1fb3670 "exampleapiunits", mask=0x0, attributes=0) at smbd/vfs.c:1218
#13 0x004f020b in OpenDir (mem_ctx=0x1fb35a0, conn=0x1f946b8, name=0x1f9d840 "exampleapiunits", mask=0x0, attr=0) at smbd/dir.c:1384
#14 0x005505ca in rmdir_internals (req=0x0, fsp=0x1f96250, close_type=SHUTDOWN_CLOSE) at smbd/close.c:831
#15 close_directory (req=0x0, fsp=0x1f96250, close_type=SHUTDOWN_CLOSE) at smbd/close.c:1045
#16 close_file (req=0x0, fsp=0x1f96250, close_type=SHUTDOWN_CLOSE) at smbd/close.c:1101
#17 0x004eb6bf in file_close_conn (conn=0x1f946b8) at smbd/files.c:156
#18 0x00572174 in close_cnum (conn=0x1f946b8, vuid=100) at smbd/service.c:1334
#19 0x004f4b47 in conn_close_all (sconn=0x1f83b48) at smbd/conn.c:242
#20 0x00b36b8c in exit_server_common (how=SERVER_EXIT_NORMAL, reason=0xb57a27 "termination signal") at smbd/server_exit.c:104
#21 0x00b37041 in exit_server_cleanly (explanation=0xb57a27 "termination signal") at smbd/server_exit.c:205
#22 0x00567f81 in smbd_sig_term_handler (ev=0x1f83ad8, se=0x1f92c90, signum=15, count=1, siginfo=0x0, private_data=0x0) at smbd/process.c:931
#23 0x00886af8 in tevent_common_check_signal (ev=0x1f83ad8) at ../lib/tevent/tevent_signal.c:366
#24 0x00883d0c in run_events_poll (ev=0x1f83ad8, pollrtn=0, pfds=0x0, num_pfds=0) at lib/events.c:193
#25 0x0056e63c in smbd_server_connection_loop_once (sconn=0x1f83b48) at smbd/process.c:995
#26 smbd_process (sconn=0x1f83b48) at smbd/process.c:3158
#27 0x00b34fc0 in smbd_accept_connection (ev=0x1f83ad8, fde=0x1f96ce8, flags=1, private_data=0x1f97c98) at smbd/server.c:511
#28 0x0088405e in run_events_poll (ev=0x1f83ad8, pollrtn=1, pfds=0x1f97ce0, num_pfds=5) at lib/events.c:286
#29 0x00884236 in s3_event_loop_once (ev=0x1f83ad8, location=0xd0f807 "smbd/server.c:844") at lib/events.c:349
#30 0x00885008 in _tevent_loop_once (ev=0x1f83ad8, location=0xd0f807 "smbd/server.c:844") at ../lib/tevent/tevent.c:494
#31 0x00b364b2 in smbd_parent_loop (argc=1, argv=0x8004) at smbd/server.c:844
#32 main (argc=1, argv=0x8004) at smbd/server.c:1326

The crash happens on Samba 3.6.4 with the acl_xattr VFS module enabled (a custom compile of 3.6.4 with the patch from Samba bug 8857 applied). The systems that hit this crash are handling a fairly large number of SMB connections when it occurs. Please let me know if there is any other information I can provide.
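For reference, the gdb backtrace above was produced by loading the core file against the same smbd binary, along the lines of the session below (the binary and core paths here are illustrative assumptions, not our exact ones):

$ gdb /usr/sbin/smbd /var/log/samba/cores/smbd/core
(gdb) bt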
It's a NULL pointer being returned from get_current_nttok(handle->conn) when the current smbd is closing itself down...

Jeremy.
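To make the failure concrete: frame #8 above shows security_token_has_sid() entered with token=0x0, so the first read of token->num_sids in the loop condition is the fault that the SIGSEGV handler turns into the "internal error" panic. A minimal standalone sketch of that failure mode (simplified stand-in types, not Samba's real definitions):

#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for the real Samba types; for illustration only. */
struct dom_sid;

struct security_token {
        uint32_t num_sids;
        struct dom_sid **sids;
};

/* Mirrors frame #8 of the backtrace: when token arrives as NULL, the very
 * first dereference (token->num_sids) faults, and smbd's fault handler
 * converts that into smb_panic("internal error"). */
static bool has_sid_sketch(const struct security_token *token,
                           const struct dom_sid *sid)
{
        uint32_t i;

        (void)sid; /* SID comparison elided in this sketch. */
        for (i = 0; i < token->num_sids; i++) { /* crashes if token == NULL */
                /* compare token->sids[i] against sid here */
        }
        return false;
}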
This function is:

const struct security_token *get_current_nttok(connection_struct *conn)
{
        return current_user.nt_user_token;
}

So what I'd really like to know is the contents of the current_user struct from gdb. Thanks!

Jeremy.
(gdb) frame 8
#8  security_token_has_sid (token=0x0, sid=0x1fb4840) at ../libcli/security/security_token.c:109
109             for (i = 0; i < token->num_sids; i++) {
(gdb) p current_user
$3 = {conn = 0x0, vuid = 0, ut = {uid = 0, gid = 0, ngroups = 0, groups = 0x0}, nt_user_token = 0x0}

Let me know if this was not the right frame/function. (I'm not very good at gdb!)
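Reading that print back: every field of the global current_user has been zeroed by the time the ACL check runs. A rough reconstruction of the struct shape implied by gdb's output (field types are guesses where the print doesn't show them):

#include <stdint.h>
#include <sys/types.h>

struct connection_struct;  /* opaque here */
struct security_token;     /* opaque here */

/* Shape inferred from "$3 = {conn = 0x0, vuid = 0, ut = {...},
 * nt_user_token = 0x0}" above; not Samba's exact declaration. */
struct current_user_sketch {
        struct connection_struct *conn;        /* 0x0 */
        uint16_t vuid;                         /* 0   */
        struct {
                uid_t uid;                     /* 0   */
                gid_t gid;                     /* 0   */
                uint32_t ngroups;              /* 0   */
                gid_t *groups;                 /* 0x0 */
        } ut;
        struct security_token *nt_user_token;  /* 0x0: the NULL passed down */
};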
No, that makes perfect sense to me. The thing I don't understand is why current_user has been nulled out in this context.

Jeremy.
Ok, I think I see what might be happening here. Might not get to this until Monday, but I have an idea for a fix.

Jeremy.
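Until a real fix lands, one defensive workaround that would at least avoid the crash (a sketch only, under the assumption that refusing the access check on a torn-down connection is acceptable; the actual resolution turned out to be the bug 8837 patchset referenced in the next comment):

#include <stddef.h>
#include <stdint.h>

typedef uint32_t sketch_ntstatus;                  /* stand-in for NTSTATUS */
#define SKETCH_NT_STATUS_OK            0x00000000u
#define SKETCH_NT_STATUS_ACCESS_DENIED 0xC0000022u /* NT_STATUS_ACCESS_DENIED */

struct security_token; /* opaque here */

/* Hypothetical guard, NOT the shipped fix: on the SIGTERM/exit path
 * current_user has already been reset, so get_current_nttok() can return
 * NULL; bail out instead of dereferencing it. */
static sketch_ntstatus access_check_guarded(const struct security_token *token)
{
        if (token == NULL) {
                return SKETCH_NT_STATUS_ACCESS_DENIED;
        }
        /* ...normal se_access_check() logic would run here... */
        return SKETCH_NT_STATUS_OK;
}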
Oh! This is a bug we already found and fixed... I'm pretty sure this is bug #8837.

Can you apply the patchset from here:

https://attachments.samba.org/attachment.cgi?id=7443

and see if it fixes your problem? Thanks!

Jeremy.
I'm having a hard time reliably reproducing the problem (without the patch), but based on what I read in bug 8837, everything matches up with what happens for us. I'm going to operate on the assumption that this is a duplicate of that bug and mark it as such. Thanks for the assistance!

*** This bug has been marked as a duplicate of bug 8837 ***