Bug 14301 - smbd panic on force-close share during async io
Summary: smbd panic on force-close share during async io
Status: RESOLVED FIXED
Alias: None
Product: Samba 4.1 and newer
Classification: Unclassified
Component: File services
Version: 4.9.13
Hardware: All
OS: All
Importance: P5 normal
Target Milestone: ---
Assignee: Karolin Seeger
QA Contact: Samba QA Contact
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2020-02-27 20:51 UTC by Lev
Modified: 2023-07-12 08:54 UTC
CC List: 4 users

See Also:


Attachments
Quick patch. (1.82 KB, patch)
2020-02-27 22:09 UTC, Jeremy Allison
no flags Details
git-am fix for master (33.42 KB, patch)
2020-03-02 21:46 UTC, Jeremy Allison
no flags Details
git-am fix for master (47.94 KB, patch)
2020-03-05 23:09 UTC, Jeremy Allison
jra: ci-passed-
Details
git-am fix for master. (50.44 KB, patch)
2020-03-06 18:13 UTC, Jeremy Allison
no flags Details
git-am for master. WIP. (19.17 KB, patch)
2020-03-10 22:47 UTC, Jeremy Allison
no flags Details
git-am for master. Updated WIP (76.70 KB, patch)
2020-03-13 01:19 UTC, Jeremy Allison
no flags Details
git-am WIP fix for master. (78.24 KB, application/mbox)
2020-03-13 20:15 UTC, Jeremy Allison
no flags Details
git-am fix for master (81.95 KB, patch)
2020-03-16 21:13 UTC, Jeremy Allison
jra: ci-passed+
Details
git-am fix for 4.12.next. (72.51 KB, patch)
2020-03-21 00:09 UTC, Jeremy Allison
slow: review+
Details
raw patch for disconnect with aio outstanding. (1.15 KB, patch)
2020-06-16 01:29 UTC, Jeremy Allison
no flags Details
raw patch for disconnect with aio outstanding. (1.43 KB, patch)
2020-06-16 01:32 UTC, Jeremy Allison
no flags Details
git-am fix for 4.12.next (1.15 KB, patch)
2020-06-17 01:20 UTC, Jeremy Allison
no flags Details
supplemental git-am fix for 4.12.x (1.38 KB, patch)
2020-06-17 21:48 UTC, Jeremy Allison
no flags Details
git-am supplemental fix for master. (8.00 KB, patch)
2020-06-22 22:10 UTC, Jeremy Allison
no flags Details
git-am supplemental fix for master. (8.49 KB, patch)
2020-06-23 01:07 UTC, Jeremy Allison
no flags Details
git-am supplemental fix for master. (8.61 KB, patch)
2020-06-23 22:52 UTC, Jeremy Allison
no flags Details
git-am fix for 4.12.next. (9.04 KB, patch)
2020-06-25 22:07 UTC, Jeremy Allison
slow: review+
Details

Description Lev 2020-02-27 20:51:50 UTC
Asynchronous read allocates tevent_req in vfswrap_pread_send():

schedule_smb2_aio_read ->
    smb_vfs_call_pread_send ->
        vfswrap_pread_send ->
            req = tevent_req_create(mem_ctx, &state, struct vfswrap_pread_state);

This tevent_req is then added to the fsp->aio_requests array by aio_add_req_to_fsp().

If the share is force-closed while the asynchronous read is still in progress, close_normal_file() frees the entries in fsp->aio_requests:

msg_force_tdis ->
    conn_force_tdis ->
        smbXsrv_tcon_disconnect ->
            close_cnum ->
                file_close_conn ->
                    close_file ->
                        close_normal_file ->
                            talloc_free(fsp->aio_requests[0]);

Then, when the read request completes, it tries to access the tevent_req that has already been freed, and smbd aborts on the use-after-free:

#6  0x00007f248bde5bc4 in talloc_abort_access_after_free () at ../lib/talloc/talloc.c:515
#7  0x00007f248bde5c4b in talloc_chunk_from_ptr (ptr=0x555a26a66120) at ../lib/talloc/talloc.c:532
#8  0x00007f248bde791e in __talloc_get_name (ptr=0x555a26a66120) at ../lib/talloc/talloc.c:1548
#9  0x00007f248bde7ab9 in _talloc_get_type_abort (ptr=0x555a26a66120, name=0x7f248cc08a25 "struct tevent_req", location=0x7f248cc08a00 "../source3/modules/vfs_default.c:752") at ../lib/talloc/talloc.c:1605
#10 0x00007f248c9b5900 in vfs_pread_done (subreq=0x555a26a61040) at ../source3/modules/vfs_default.c:751
#11 0x00007f248b7ce697 in _tevent_req_notify_callback (req=0x555a26a61040, location=0x7f2486b3c9c0 "../lib/pthreadpool/pthreadpool_tevent.c:421") at ../lib/tevent/tevent_req.c:139
#12 0x00007f248b7ce7f8 in tevent_req_finish (req=0x555a26a61040, state=TEVENT_REQ_DONE, location=0x7f2486b3c9c0 "../lib/pthreadpool/pthreadpool_tevent.c:421") at ../lib/tevent/tevent_req.c:191
#13 0x00007f248b7ce825 in _tevent_req_done (req=0x555a26a61040, location=0x7f2486b3c9c0 "../lib/pthreadpool/pthreadpool_tevent.c:421") at ../lib/tevent/tevent_req.c:197
#14 0x00007f2486b37d54 in pthreadpool_tevent_job_done (ctx=0x555a26978be0, im=0x555a26a61890, private_data=0x555a26a615f0) at ../lib/pthreadpool/pthreadpool_tevent.c:421
#15 0x00007f248b7cd585 in tevent_common_invoke_immediate_handler (im=0x555a26a61890, removed=0x0) at ../lib/tevent/tevent_immediate.c:165
#16 0x00007f248b7cd68b in tevent_common_loop_immediate (ev=0x555a26978be0) at ../lib/tevent/tevent_immediate.c:202
#17 0x00007f248b7d7d2c in epoll_event_loop_once (ev=0x555a26978be0, location=0x7f248cc5bae0 "../source3/smbd/process.c:4130") at ../lib/tevent/tevent_epoll.c:918
#18 0x00007f248b7d45d2 in std_event_loop_once (ev=0x555a26978be0, location=0x7f248cc5bae0 "../source3/smbd/process.c:4130") at ../lib/tevent/tevent_standard.c:110
#19 0x00007f248b7cc02f in _tevent_loop_once (ev=0x555a26978be0, location=0x7f248cc5bae0 "../source3/smbd/process.c:4130") at ../lib/tevent/tevent.c:772
#20 0x00007f248b7cc381 in tevent_common_loop_wait (ev=0x555a26978be0, location=0x7f248cc5bae0 "../source3/smbd/process.c:4130") at ../lib/tevent/tevent.c:895
#21 0x00007f248b7d4674 in std_event_loop_wait (ev=0x555a26978be0, location=0x7f248cc5bae0 "../source3/smbd/process.c:4130") at ../lib/tevent/tevent_standard.c:141
#22 0x00007f248b7cc424 in _tevent_loop_wait (ev=0x555a26978be0, location=0x7f248cc5bae0 "../source3/smbd/process.c:4130") at ../lib/tevent/tevent.c:914
#23 0x00007f248cad8c98 in smbd_process (ev_ctx=0x555a26978be0, msg_ctx=0x555a26992880, sock_fd=39, interactive=false) at ../source3/smbd/process.c:4130
#24 0x0000555a24bb74a6 in smbd_accept_connection (ev=0x555a26978be0, fde=0x555a269c6490, flags=1, private_data=0x555a26a193f0) at ../source3/smbd/server.c:1044

The issue is easy to reproduce: configure "aio read size = 1" so that all read requests become asynchronous, and add a sleep to vfs_pread_do(). Then call "smbcontrol smbd close-share" during the sleep, and smbd will panic once the read request completes.
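For reference, a minimal share definition for this kind of reproduction might look like the following (share name and path are examples; "aio read size = 1" is the setting that forces every read through the async path):

```ini
[vol-1]
    path = /srv/vol-1
    read only = no
    aio read size = 1
```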
Comment 1 Jeremy Allison 2020-02-27 22:09:58 UTC
Created attachment 15825 [details]
Quick patch.

OK, this is what I'm looking at. Not tested yet. If it works it'll need doing in pwrite and fsync also.
Comment 2 Jeremy Allison 2020-02-27 22:19:15 UTC
The other way to do this would be to add a cancel function that is called from the SHUTDOWN close path in source3/smbd/close.c.

But we would still need the change to make vfs_pread_done() take the state pointer, not the req pointer, as the passed-in callback data, because removing the entries in the fsp->aio_requests array still depends on talloc_free() being called on req.
Comment 3 Jeremy Allison 2020-02-27 23:28:39 UTC
Yes, this patch works. Let me try and create a more comprehensive fix for all the async operations.

Can you test this for me please?
Comment 4 Jeremy Allison 2020-02-28 01:02:19 UTC
Still looking at the pwrite and fsync async code paths to see if they suffer from the same problem.
Comment 5 Jeremy Allison 2020-02-28 05:26:36 UTC
FYI. I've tested my async pread patch and reproduced the crash, and my patch (somewhat cleaned-up as a 3-patch set of git-am fixes with comments :-) fixes the issue.

I'll try and reproduce with the fsync and pwrite code paths tomorrow and fix them as needed. Then I'll get it into CI. If you could also test the patches yourself I'd appreciate it.
Comment 6 Lev 2020-02-28 13:37:40 UTC
(In reply to Jeremy Allison from comment #5)

Thanks, Jeremy. I applied the patch, and yes, it fixes the smbd panic. However, the "interrupted" read request never returns to the client, so it fails with a timeout. I'm not sure this is the desired behavior; I'd expect to get some I/O error:

# time smbclient -Ulev%lev //127.0.0.1/vol-1 -c "get file"
parallel_read returned NT_STATUS_IO_TIMEOUT

real    0m20.272s
user    0m0.057s
sys     0m0.008s

The sleep I added to vfs_pread_do() was just 10 seconds; I ran "smbcontrol smbd close-share vol-1" during this sleep.

-Lev.
Comment 7 Jeremy Allison 2020-02-28 16:47:46 UTC
Actually, that is the expected behavior, and is the only possible one.

Once you've disconnected the share forcefully all the file handles (fsp's) are gone internally. There's nothing to hang the return on.

It's a destructive behavior for a client, which is why it should be a last resort once all clients are already disconnected.
Comment 8 Jeremy Allison 2020-02-28 18:43:20 UTC
To make things clearer: 'smbcontrol smbd close-share' is a synchronous action that immediately removes all open fsp's and closes all connection structs. Once that's done, the best we can do is drop the unfinished requests as they complete.

Changing that to make 'close-share' asynchronous is a different job. Not saying it can't be done, but that's not how the code is currently designed.
Comment 9 Jeremy Allison 2020-03-02 19:47:08 UTC
OK, I think I've figured out how to do this so that the SHUTDOWN close causes a NT_STATUS_CANCELED return from the IO :-).

Updated patch in ci soon.
Comment 10 Jeremy Allison 2020-03-02 21:46:05 UTC
Created attachment 15837 [details]
git-am fix for master

Here's what I've got in ci right now. It should fix the issue you had with the previous patch where outstanding requests got dropped when the share was force-disconnected (we now reply to them with NT_STATUS_INVALID_HANDLE, as the underlying handle on the file got closed).
Comment 11 Jeremy Allison 2020-03-02 22:42:23 UTC
OK, hand-tested this under valgrind with SMB1/2/3 and 20+ outstanding requests being cancelled, and it's rock-solid! WooHoo :-). Once it passes ci I'll try and get it merged.
Comment 13 Jeremy Allison 2020-03-03 23:18:48 UTC
Now with added test.
Comment 14 Jeremy Allison 2020-03-05 23:09:27 UTC
Created attachment 15844 [details]
git-am fix for master

What I have in CI now. Fixes vfs_aio_pthread. Tested to destruction under valgrind plus has the vfs_delay_inject test.
Comment 15 Jeremy Allison 2020-03-06 05:09:46 UTC
Comment on attachment 15844 [details]
git-am fix for master

>From 0cb7622fc5ef57453362785ff9759edef4e39e9f Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Thu, 27 Feb 2020 16:30:51 -0800
>Subject: [PATCH 01/24] s3: VFS: vfs_default: Add tevent_req pointer to state
> struct in vfswrap_pread_state.
>
>We will need this to detect when this request is outstanding but
>has been destroyed in a SHUTDOWN_CLOSE on this file.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_default.c | 2 ++
> 1 file changed, 2 insertions(+)
>
>diff --git a/source3/modules/vfs_default.c b/source3/modules/vfs_default.c
>index a30f3ba1d31..4bb4adf5f7e 100644
>--- a/source3/modules/vfs_default.c
>+++ b/source3/modules/vfs_default.c
>@@ -780,6 +780,7 @@ static ssize_t vfswrap_pwrite(vfs_handle_struct *handle, files_struct *fsp, cons
> }
> 
> struct vfswrap_pread_state {
>+	struct tevent_req *req;
> 	ssize_t ret;
> 	int fd;
> 	void *buf;
>@@ -809,6 +810,7 @@ static struct tevent_req *vfswrap_pread_send(struct vfs_handle_struct *handle,
> 		return NULL;
> 	}
> 
>+	state->req = req;
> 	state->ret = -1;
> 	state->fd = fsp->fh->fd;
> 	state->buf = data;
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 4b5704442f1b6f75aad501235f27a74c548d14fe Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Thu, 27 Feb 2020 16:34:51 -0800
>Subject: [PATCH 02/24] s3: VFS: vfs_default. Pass in struct
> vfswrap_pread_state as the callback data to the subreq.
>
>Find the req we're finishing off by looking inside vfswrap_pread_state.
>In a shutdown close the caller calls talloc_free(req), so we can't
>access it directly as callback data.
>
>The next commit will NULL out the vfswrap_pread_state->req pointer
>when a caller calls talloc_free(req), and the request is still in
>flight.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_default.c | 9 ++++-----
> 1 file changed, 4 insertions(+), 5 deletions(-)
>
>diff --git a/source3/modules/vfs_default.c b/source3/modules/vfs_default.c
>index 4bb4adf5f7e..b8c36180b7c 100644
>--- a/source3/modules/vfs_default.c
>+++ b/source3/modules/vfs_default.c
>@@ -827,7 +827,7 @@ static struct tevent_req *vfswrap_pread_send(struct vfs_handle_struct *handle,
> 	if (tevent_req_nomem(subreq, req)) {
> 		return tevent_req_post(req, ev);
> 	}
>-	tevent_req_set_callback(subreq, vfs_pread_done, req);
>+	tevent_req_set_callback(subreq, vfs_pread_done, state);
> 
> 	talloc_set_destructor(state, vfs_pread_state_destructor);
> 
>@@ -868,10 +868,9 @@ static int vfs_pread_state_destructor(struct vfswrap_pread_state *state)
> 
> static void vfs_pread_done(struct tevent_req *subreq)
> {
>-	struct tevent_req *req = tevent_req_callback_data(
>-		subreq, struct tevent_req);
>-	struct vfswrap_pread_state *state = tevent_req_data(
>-		req, struct vfswrap_pread_state);
>+	struct vfswrap_pread_state *state = tevent_req_callback_data(
>+		subreq, struct vfswrap_pread_state);
>+	struct tevent_req *req = state->req;
> 	int ret;
> 
> 	ret = pthreadpool_tevent_job_recv(subreq);
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 4d018cd2c2d0d077523234fbfb484a47b043d096 Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Thu, 27 Feb 2020 16:40:46 -0800
>Subject: [PATCH 03/24] s3: VFS: vfs_default. Protect vfs_pread_done() from
> accessing a freed req pointer.
>
>If the fsp is forced closed by a SHUTDOWN_CLOSE whilst the
>request is in flight (share forced closed by smbcontrol),
>then we set state->req = NULL in the state destructor.
>
>The existing state destructor prevents the state memory
>from being freed, so when the thread completes and calls
>vfs_pread_done(), just throw away the result if
>state->req == NULL.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_default.c | 20 ++++++++++++++++++++
> 1 file changed, 20 insertions(+)
>
>diff --git a/source3/modules/vfs_default.c b/source3/modules/vfs_default.c
>index b8c36180b7c..21bc9c7adf7 100644
>--- a/source3/modules/vfs_default.c
>+++ b/source3/modules/vfs_default.c
>@@ -863,6 +863,15 @@ static void vfs_pread_do(void *private_data)
> 
> static int vfs_pread_state_destructor(struct vfswrap_pread_state *state)
> {
>+	/*
>+	 * This destructor only gets called if the request is still
>+	 * in flight, which is why we deny it by returning -1. We
>+	 * also set the req pointer to NULL so the _done function
>+	 * can detect the caller doesn't want the result anymore.
>+	 *
>+	 * Forcing the fsp closed by a SHUTDOWN_CLOSE can cause this.
>+	 */
>+	state->req = NULL;
> 	return -1;
> }
> 
>@@ -877,6 +886,17 @@ static void vfs_pread_done(struct tevent_req *subreq)
> 	TALLOC_FREE(subreq);
> 	SMBPROFILE_BYTES_ASYNC_END(state->profile_bytes);
> 	talloc_set_destructor(state, NULL);
>+	if (req == NULL) {
>+		/*
>+		 * We were shutdown closed in flight. No one
>+		 * wants the result, and state has been reparented
>+		 * to the NULL context, so just free it so we
>+		 * don't leak memory.
>+		 */
>+		DBG_NOTICE("pread request abandoned in flight\n");
>+		TALLOC_FREE(state);
>+		return;
>+	}
> 	if (ret != 0) {
> 		if (ret != EAGAIN) {
> 			tevent_req_error(req, ret);
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 828f0d2015765d89a481ee9f09405b7ad739a0dc Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Thu, 27 Feb 2020 16:44:39 -0800
>Subject: [PATCH 04/24] s3: VFS: vfs_default: Add tevent_req pointer to state
> struct in vfswrap_pwrite_state.
>
>We will need this to detect when this request is outstanding but
>has been destroyed in a SHUTDOWN_CLOSE on this file.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_default.c | 2 ++
> 1 file changed, 2 insertions(+)
>
>diff --git a/source3/modules/vfs_default.c b/source3/modules/vfs_default.c
>index 21bc9c7adf7..cbc8335cd12 100644
>--- a/source3/modules/vfs_default.c
>+++ b/source3/modules/vfs_default.c
>@@ -929,6 +929,7 @@ static ssize_t vfswrap_pread_recv(struct tevent_req *req,
> }
> 
> struct vfswrap_pwrite_state {
>+	struct tevent_req *req;
> 	ssize_t ret;
> 	int fd;
> 	const void *buf;
>@@ -958,6 +959,7 @@ static struct tevent_req *vfswrap_pwrite_send(struct vfs_handle_struct *handle,
> 		return NULL;
> 	}
> 
>+	state->req = req;
> 	state->ret = -1;
> 	state->fd = fsp->fh->fd;
> 	state->buf = data;
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 6ae4c8e1b2af38ba5ced0871fa3e6a47755d152a Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Thu, 27 Feb 2020 16:49:38 -0800
>Subject: [PATCH 05/24] s3: VFS: vfs_default. Pass in struct
> vfswrap_pwrite_state as the callback data to the subreq.
>
>Find the req we're finishing off by looking inside vfswrap_pwrite_state.
>In a shutdown close the caller calls talloc_free(req), so we can't
>access it directly as callback data.
>
>The next commit will NULL out the vfswrap_pwrite_state->req pointer
>when a caller calls talloc_free(req), and the request is still in
>flight.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_default.c | 9 ++++-----
> 1 file changed, 4 insertions(+), 5 deletions(-)
>
>diff --git a/source3/modules/vfs_default.c b/source3/modules/vfs_default.c
>index cbc8335cd12..641764e41f1 100644
>--- a/source3/modules/vfs_default.c
>+++ b/source3/modules/vfs_default.c
>@@ -976,7 +976,7 @@ static struct tevent_req *vfswrap_pwrite_send(struct vfs_handle_struct *handle,
> 	if (tevent_req_nomem(subreq, req)) {
> 		return tevent_req_post(req, ev);
> 	}
>-	tevent_req_set_callback(subreq, vfs_pwrite_done, req);
>+	tevent_req_set_callback(subreq, vfs_pwrite_done, state);
> 
> 	talloc_set_destructor(state, vfs_pwrite_state_destructor);
> 
>@@ -1017,10 +1017,9 @@ static int vfs_pwrite_state_destructor(struct vfswrap_pwrite_state *state)
> 
> static void vfs_pwrite_done(struct tevent_req *subreq)
> {
>-	struct tevent_req *req = tevent_req_callback_data(
>-		subreq, struct tevent_req);
>-	struct vfswrap_pwrite_state *state = tevent_req_data(
>-		req, struct vfswrap_pwrite_state);
>+	struct vfswrap_pwrite_state *state = tevent_req_callback_data(
>+		subreq, struct vfswrap_pwrite_state);
>+	struct tevent_req *req = state->req;
> 	int ret;
> 
> 	ret = pthreadpool_tevent_job_recv(subreq);
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 0c7f9dacc930db61588e5217c093c609a57ab8f1 Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Thu, 27 Feb 2020 16:51:35 -0800
>Subject: [PATCH 06/24] s3: VFS: vfs_default. Protect vfs_pwrite_done() from
> accessing a freed req pointer.
>
>If the fsp is forced closed by a SHUTDOWN_CLOSE whilst the
>request is in flight (share forced closed by smbcontrol),
>then we set state->req = NULL in the state destructor.
>
>The existing state destructor prevents the state memory
>from being freed, so when the thread completes and calls
>vfs_pwrite_done(), just throw away the result if
>state->req == NULL.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_default.c | 20 ++++++++++++++++++++
> 1 file changed, 20 insertions(+)
>
>diff --git a/source3/modules/vfs_default.c b/source3/modules/vfs_default.c
>index 641764e41f1..3425ee31dcb 100644
>--- a/source3/modules/vfs_default.c
>+++ b/source3/modules/vfs_default.c
>@@ -1012,6 +1012,15 @@ static void vfs_pwrite_do(void *private_data)
> 
> static int vfs_pwrite_state_destructor(struct vfswrap_pwrite_state *state)
> {
>+	/*
>+	 * This destructor only gets called if the request is still
>+	 * in flight, which is why we deny it by returning -1. We
>+	 * also set the req pointer to NULL so the _done function
>+	 * can detect the caller doesn't want the result anymore.
>+	 *
>+	 * Forcing the fsp closed by a SHUTDOWN_CLOSE can cause this.
>+	 */
>+	state->req = NULL;
> 	return -1;
> }
> 
>@@ -1026,6 +1035,17 @@ static void vfs_pwrite_done(struct tevent_req *subreq)
> 	TALLOC_FREE(subreq);
> 	SMBPROFILE_BYTES_ASYNC_END(state->profile_bytes);
> 	talloc_set_destructor(state, NULL);
>+	if (req == NULL) {
>+		/*
>+		 * We were shutdown closed in flight. No one
>+		 * wants the result, and state has been reparented
>+		 * to the NULL context, so just free it so we
>+		 * don't leak memory.
>+		 */
>+		DBG_NOTICE("pwrite request abandoned in flight\n");
>+		TALLOC_FREE(state);
>+		return;
>+	}
> 	if (ret != 0) {
> 		if (ret != EAGAIN) {
> 			tevent_req_error(req, ret);
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 6a6f281b0ceced08dc0bac42140ffb0a0ca5e4ae Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Thu, 27 Feb 2020 16:53:10 -0800
>Subject: [PATCH 07/24] s3: VFS: vfs_default: Add tevent_req pointer to state
> struct in vfswrap_fsync_state.
>
>We will need this to detect when this request is outstanding but
>has been destroyed in a SHUTDOWN_CLOSE on this file.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_default.c | 2 ++
> 1 file changed, 2 insertions(+)
>
>diff --git a/source3/modules/vfs_default.c b/source3/modules/vfs_default.c
>index 3425ee31dcb..28b8c04dee4 100644
>--- a/source3/modules/vfs_default.c
>+++ b/source3/modules/vfs_default.c
>@@ -1078,6 +1078,7 @@ static ssize_t vfswrap_pwrite_recv(struct tevent_req *req,
> }
> 
> struct vfswrap_fsync_state {
>+	struct tevent_req *req;
> 	ssize_t ret;
> 	int fd;
> 
>@@ -1102,6 +1103,7 @@ static struct tevent_req *vfswrap_fsync_send(struct vfs_handle_struct *handle,
> 		return NULL;
> 	}
> 
>+	state->req = req;
> 	state->ret = -1;
> 	state->fd = fsp->fh->fd;
> 
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 354418f43854fe9e42bc5a431db1b2cfdd336895 Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Thu, 27 Feb 2020 16:54:47 -0800
>Subject: [PATCH 08/24] s3: VFS: vfs_default. Pass in struct
> vfswrap_fsync_state as the callback data to the subreq.
>
>Find the req we're finishing off by looking inside vfswrap_fsync_state.
>In a shutdown close the caller calls talloc_free(req), so we can't
>access it directly as callback data.
>
>The next commit will NULL out the vfswrap_fsync_state->req pointer
>when a caller calls talloc_free(req), and the request is still in
>flight.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_default.c | 9 ++++-----
> 1 file changed, 4 insertions(+), 5 deletions(-)
>
>diff --git a/source3/modules/vfs_default.c b/source3/modules/vfs_default.c
>index 28b8c04dee4..f9d958a003d 100644
>--- a/source3/modules/vfs_default.c
>+++ b/source3/modules/vfs_default.c
>@@ -1116,7 +1116,7 @@ static struct tevent_req *vfswrap_fsync_send(struct vfs_handle_struct *handle,
> 	if (tevent_req_nomem(subreq, req)) {
> 		return tevent_req_post(req, ev);
> 	}
>-	tevent_req_set_callback(subreq, vfs_fsync_done, req);
>+	tevent_req_set_callback(subreq, vfs_fsync_done, state);
> 
> 	talloc_set_destructor(state, vfs_fsync_state_destructor);
> 
>@@ -1156,10 +1156,9 @@ static int vfs_fsync_state_destructor(struct vfswrap_fsync_state *state)
> 
> static void vfs_fsync_done(struct tevent_req *subreq)
> {
>-	struct tevent_req *req = tevent_req_callback_data(
>-		subreq, struct tevent_req);
>-	struct vfswrap_fsync_state *state = tevent_req_data(
>-		req, struct vfswrap_fsync_state);
>+	struct vfswrap_fsync_state *state = tevent_req_callback_data(
>+		subreq, struct vfswrap_fsync_state);
>+	struct tevent_req *req = state->req;
> 	int ret;
> 
> 	ret = pthreadpool_tevent_job_recv(subreq);
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From d76aa10c02d666dfbf554967ca5ce9c85eec6110 Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Thu, 27 Feb 2020 16:56:41 -0800
>Subject: [PATCH 09/24] s3: VFS: vfs_default. Protect vfs_fsync_done() from
> accessing a freed req pointer.
>
>If the fsp is forced closed by a SHUTDOWN_CLOSE whilst the
>request is in flight (share forced closed by smbcontrol),
>then we set state->req = NULL in the state destructor.
>
>The existing state destructor prevents the state memory
>from being freed, so when the thread completes and calls
>vfs_fsync_done(), just throw away the result if
>state->req == NULL.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_default.c | 20 ++++++++++++++++++++
> 1 file changed, 20 insertions(+)
>
>diff --git a/source3/modules/vfs_default.c b/source3/modules/vfs_default.c
>index f9d958a003d..fac7fa30ab7 100644
>--- a/source3/modules/vfs_default.c
>+++ b/source3/modules/vfs_default.c
>@@ -1151,6 +1151,15 @@ static void vfs_fsync_do(void *private_data)
> 
> static int vfs_fsync_state_destructor(struct vfswrap_fsync_state *state)
> {
>+	/*
>+	 * This destructor only gets called if the request is still
>+	 * in flight, which is why we deny it by returning -1. We
>+	 * also set the req pointer to NULL so the _done function
>+	 * can detect the caller doesn't want the result anymore.
>+	 *
>+	 * Forcing the fsp closed by a SHUTDOWN_CLOSE can cause this.
>+	 */
>+	state->req = NULL;
> 	return -1;
> }
> 
>@@ -1165,6 +1174,17 @@ static void vfs_fsync_done(struct tevent_req *subreq)
> 	TALLOC_FREE(subreq);
> 	SMBPROFILE_BYTES_ASYNC_END(state->profile_bytes);
> 	talloc_set_destructor(state, NULL);
>+	if (req == NULL) {
>+		/*
>+		 * We were shutdown closed in flight. No one
>+		 * wants the result, and state has been reparented
>+		 * to the NULL context, so just free it so we
>+		 * don't leak memory.
>+		 */
>+		DBG_NOTICE("fsync request abandoned in flight\n");
>+		TALLOC_FREE(state);
>+		return;
>+	}
> 	if (ret != 0) {
> 		if (ret != EAGAIN) {
> 			tevent_req_error(req, ret);
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 277ca97cadd49d722ed1a1178e46362bf483f3ac Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Fri, 28 Feb 2020 15:33:35 -0800
>Subject: [PATCH 10/24] s3: VFS: vfs_glusterfs: Add tevent_req pointer to state
> struct in vfs_gluster_pread_state.
>
>We will need this to detect when this request is outstanding but
>has been destroyed in a SHUTDOWN_CLOSE on this file.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_glusterfs.c | 2 ++
> 1 file changed, 2 insertions(+)
>
>diff --git a/source3/modules/vfs_glusterfs.c b/source3/modules/vfs_glusterfs.c
>index d4b68fba376..6598aadad17 100644
>--- a/source3/modules/vfs_glusterfs.c
>+++ b/source3/modules/vfs_glusterfs.c
>@@ -713,6 +713,7 @@ static ssize_t vfs_gluster_pread(struct vfs_handle_struct *handle,
> }
> 
> struct vfs_gluster_pread_state {
>+	struct tevent_req *req;
> 	ssize_t ret;
> 	glfs_fd_t *fd;
> 	void *buf;
>@@ -748,6 +749,7 @@ static struct tevent_req *vfs_gluster_pread_send(struct vfs_handle_struct
> 		return NULL;
> 	}
> 
>+	state->req = req;
> 	state->ret = -1;
> 	state->fd = glfd;
> 	state->buf = data;
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From c364533fc66a330a13cd0c7e8c580ca9da2ee5c5 Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Fri, 28 Feb 2020 15:35:46 -0800
>Subject: [PATCH 11/24] s3: VFS: vfs_glusterfs. Pass in struct
> vfs_gluster_pread_state as the callback data to the subreq.
>
>Find the req we're finishing off by looking inside vfs_gluster_pread_state.
>In a shutdown close the caller calls talloc_free(req), so we can't
>access it directly as callback data.
>
>The next commit will NULL out the vfs_gluster_pread_state->req pointer
>when a caller calls talloc_free(req), and the request is still in
>flight.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_glusterfs.c | 9 ++++-----
> 1 file changed, 4 insertions(+), 5 deletions(-)
>
>diff --git a/source3/modules/vfs_glusterfs.c b/source3/modules/vfs_glusterfs.c
>index 6598aadad17..84b284152c6 100644
>--- a/source3/modules/vfs_glusterfs.c
>+++ b/source3/modules/vfs_glusterfs.c
>@@ -766,7 +766,7 @@ static struct tevent_req *vfs_gluster_pread_send(struct vfs_handle_struct
> 	if (tevent_req_nomem(subreq, req)) {
> 		return tevent_req_post(req, ev);
> 	}
>-	tevent_req_set_callback(subreq, vfs_gluster_pread_done, req);
>+	tevent_req_set_callback(subreq, vfs_gluster_pread_done, state);
> 
> 	talloc_set_destructor(state, vfs_gluster_pread_state_destructor);
> 
>@@ -812,10 +812,9 @@ static int vfs_gluster_pread_state_destructor(struct vfs_gluster_pread_state *st
> 
> static void vfs_gluster_pread_done(struct tevent_req *subreq)
> {
>-	struct tevent_req *req = tevent_req_callback_data(
>-		subreq, struct tevent_req);
>-	struct vfs_gluster_pread_state *state = tevent_req_data(
>-		req, struct vfs_gluster_pread_state);
>+	struct vfs_gluster_pread_state *state = tevent_req_callback_data(
>+		subreq, struct vfs_gluster_pread_state);
>+	struct tevent_req *req = state->req;
> 	int ret;
> 
> 	ret = pthreadpool_tevent_job_recv(subreq);
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From e46bd7ace5f14979ba6f66250236fab3f256084f Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Fri, 28 Feb 2020 15:38:04 -0800
>Subject: [PATCH 12/24] s3: VFS: vfs_glusterfs. Protect
> vfs_gluster_pread_done() from accessing a freed req pointer.
>
>If the fsp is forced closed by a SHUTDOWN_CLOSE whilst the
>request is in flight (share forced closed by smbcontrol),
>then we set state->req = NULL in the state destructor.
>
>The existing state destructor prevents the state memory
>from being freed, so when the thread completes and calls
>vfs_gluster_pread_done(), just throw away the result if
>state->req == NULL.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_glusterfs.c | 20 ++++++++++++++++++++
> 1 file changed, 20 insertions(+)
>
>diff --git a/source3/modules/vfs_glusterfs.c b/source3/modules/vfs_glusterfs.c
>index 84b284152c6..7924f123cca 100644
>--- a/source3/modules/vfs_glusterfs.c
>+++ b/source3/modules/vfs_glusterfs.c
>@@ -807,6 +807,15 @@ static void vfs_gluster_pread_do(void *private_data)
> 
> static int vfs_gluster_pread_state_destructor(struct vfs_gluster_pread_state *state)
> {
>+	/*
>+	 * This destructor only gets called if the request is still
>+	 * in flight, which is why we deny it by returning -1. We
>+	 * also set the req pointer to NULL so the _done function
>+	 * can detect the caller doesn't want the result anymore.
>+	 *
>+	 * Forcing the fsp closed by a SHUTDOWN_CLOSE can cause this.
>+	 */
>+	state->req = NULL;
> 	return -1;
> }
> 
>@@ -821,6 +830,17 @@ static void vfs_gluster_pread_done(struct tevent_req *subreq)
> 	TALLOC_FREE(subreq);
> 	SMBPROFILE_BYTES_ASYNC_END(state->profile_bytes);
> 	talloc_set_destructor(state, NULL);
>+	if (req == NULL) {
>+		/*
>+		 * We were shutdown closed in flight. No one
>+		 * wants the result, and state has been reparented
>+		 * to the NULL context, so just free it so we
>+		 * don't leak memory.
>+		 */
>+		DBG_NOTICE("gluster pread request abandoned in flight\n");
>+		TALLOC_FREE(state);
>+		return;
>+	}
> 	if (ret != 0) {
> 		if (ret != EAGAIN) {
> 			tevent_req_error(req, ret);
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 8f36bc0a13de006b7a10a0c6739b5cfe5c4a312b Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Fri, 28 Feb 2020 15:47:52 -0800
>Subject: [PATCH 13/24] s3: VFS: vfs_glusterfs: Add tevent_req pointer to state
> struct in vfs_gluster_pwrite_state.
>
>We will need this to detect when this request is outstanding but
>has been destroyed in a SHUTDOWN_CLOSE on this file.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_glusterfs.c | 2 ++
> 1 file changed, 2 insertions(+)
>
>diff --git a/source3/modules/vfs_glusterfs.c b/source3/modules/vfs_glusterfs.c
>index 7924f123cca..456e0c8a498 100644
>--- a/source3/modules/vfs_glusterfs.c
>+++ b/source3/modules/vfs_glusterfs.c
>@@ -873,6 +873,7 @@ static ssize_t vfs_gluster_pread_recv(struct tevent_req *req,
> }
> 
> struct vfs_gluster_pwrite_state {
>+	struct tevent_req *req;
> 	ssize_t ret;
> 	glfs_fd_t *fd;
> 	const void *buf;
>@@ -908,6 +909,7 @@ static struct tevent_req *vfs_gluster_pwrite_send(struct vfs_handle_struct
> 		return NULL;
> 	}
> 
>+	state->req = req;
> 	state->ret = -1;
> 	state->fd = glfd;
> 	state->buf = data;
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 3c2bff21114102a197ff035431e7f7487264020a Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Fri, 28 Feb 2020 15:53:19 -0800
>Subject: [PATCH 14/24] s3: VFS: vfs_glusterfs. Pass in struct
> vfs_gluster_pwrite_state as the callback data to the subreq.
>
>Find the req we're finishing off by looking inside vfs_gluster_pwrite_state.
>In a shutdown close the caller calls talloc_free(req), so we can't
>access it directly as callback data.
>
>The next commit will NULL out the vfs_gluster_pwrite_state->req pointer
>when a caller calls talloc_free(req), and the request is still in
>flight.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_glusterfs.c | 9 ++++-----
> 1 file changed, 4 insertions(+), 5 deletions(-)
>
>diff --git a/source3/modules/vfs_glusterfs.c b/source3/modules/vfs_glusterfs.c
>index 456e0c8a498..52c33725b8d 100644
>--- a/source3/modules/vfs_glusterfs.c
>+++ b/source3/modules/vfs_glusterfs.c
>@@ -926,7 +926,7 @@ static struct tevent_req *vfs_gluster_pwrite_send(struct vfs_handle_struct
> 	if (tevent_req_nomem(subreq, req)) {
> 		return tevent_req_post(req, ev);
> 	}
>-	tevent_req_set_callback(subreq, vfs_gluster_pwrite_done, req);
>+	tevent_req_set_callback(subreq, vfs_gluster_pwrite_done, state);
> 
> 	talloc_set_destructor(state, vfs_gluster_pwrite_state_destructor);
> 
>@@ -972,10 +972,9 @@ static int vfs_gluster_pwrite_state_destructor(struct vfs_gluster_pwrite_state *
> 
> static void vfs_gluster_pwrite_done(struct tevent_req *subreq)
> {
>-	struct tevent_req *req = tevent_req_callback_data(
>-		subreq, struct tevent_req);
>-	struct vfs_gluster_pwrite_state *state = tevent_req_data(
>-		req, struct vfs_gluster_pwrite_state);
>+	struct vfs_gluster_pwrite_state *state = tevent_req_callback_data(
>+		subreq, struct vfs_gluster_pwrite_state);
>+	struct tevent_req *req = state->req;
> 	int ret;
> 
> 	ret = pthreadpool_tevent_job_recv(subreq);
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From ecc735fad2cc2af0012bac655aca162e0155c0c5 Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Fri, 28 Feb 2020 15:55:36 -0800
>Subject: [PATCH 15/24] s3: VFS: vfs_glusterfs. Protect
> vfs_gluster_pwrite_done() from accessing a freed req pointer.
>
>If the fsp is forced closed by a SHUTDOWN_CLOSE whilst the
>request is in flight (share forced closed by smbcontrol),
>then we set state->req = NULL in the state destructor.
>
>The existing state destructor prevents the state memory
>from being freed, so when the thread completes and calls
>vfs_gluster_pwrite_done(), just throw away the result if
>state->req == NULL.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_glusterfs.c | 20 ++++++++++++++++++++
> 1 file changed, 20 insertions(+)
>
>diff --git a/source3/modules/vfs_glusterfs.c b/source3/modules/vfs_glusterfs.c
>index 52c33725b8d..4e978f168d6 100644
>--- a/source3/modules/vfs_glusterfs.c
>+++ b/source3/modules/vfs_glusterfs.c
>@@ -967,6 +967,15 @@ static void vfs_gluster_pwrite_do(void *private_data)
> 
> static int vfs_gluster_pwrite_state_destructor(struct vfs_gluster_pwrite_state *state)
> {
>+	/*
>+	 * This destructor only gets called if the request is still
>+	 * in flight, which is why we deny it by returning -1. We
>+	 * also set the req pointer to NULL so the _done function
>+	 * can detect the caller doesn't want the result anymore.
>+	 *
>+	 * Forcing the fsp closed by a SHUTDOWN_CLOSE can cause this.
>+	 */
>+	state->req = NULL;
> 	return -1;
> }
> 
>@@ -981,6 +990,17 @@ static void vfs_gluster_pwrite_done(struct tevent_req *subreq)
> 	TALLOC_FREE(subreq);
> 	SMBPROFILE_BYTES_ASYNC_END(state->profile_bytes);
> 	talloc_set_destructor(state, NULL);
>+	if (req == NULL) {
>+		/*
>+		 * We were shutdown closed in flight. No one
>+		 * wants the result, and state has been reparented
>+		 * to the NULL context, so just free it so we
>+		 * don't leak memory.
>+		 */
>+		DBG_NOTICE("gluster pwrite request abandoned in flight\n");
>+		TALLOC_FREE(state);
>+		return;
>+	}
> 	if (ret != 0) {
> 		if (ret != EAGAIN) {
> 			tevent_req_error(req, ret);
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 01fdf01ed6157c176f1c9635c3be18fe9a1df8fd Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Fri, 28 Feb 2020 15:57:20 -0800
>Subject: [PATCH 16/24] s3: VFS: vfs_glusterfs: Add tevent_req pointer to state
> struct in vfs_gluster_fsync_state.
>
>We will need this to detect when this request is outstanding but
>has been destroyed in a SHUTDOWN_CLOSE on this file.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_glusterfs.c | 2 ++
> 1 file changed, 2 insertions(+)
>
>diff --git a/source3/modules/vfs_glusterfs.c b/source3/modules/vfs_glusterfs.c
>index 4e978f168d6..d5d402d72ab 100644
>--- a/source3/modules/vfs_glusterfs.c
>+++ b/source3/modules/vfs_glusterfs.c
>@@ -1114,6 +1114,7 @@ static int vfs_gluster_renameat(struct vfs_handle_struct *handle,
> }
> 
> struct vfs_gluster_fsync_state {
>+	struct tevent_req *req;
> 	ssize_t ret;
> 	glfs_fd_t *fd;
> 
>@@ -1144,6 +1145,7 @@ static struct tevent_req *vfs_gluster_fsync_send(struct vfs_handle_struct
> 		return NULL;
> 	}
> 
>+	state->req = req;
> 	state->ret = -1;
> 	state->fd = glfd;
> 
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 8d23a186e4fbbe0471eca10db11592334f0cac5a Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Fri, 28 Feb 2020 15:59:16 -0800
>Subject: [PATCH 17/24] s3: VFS: vfs_glusterfs. Pass in struct
> vfs_gluster_fsync_state as the callback data to the subreq.
>
>Find the req we're finishing off by looking inside vfs_gluster_fsync_state.
>In a shutdown close the caller calls talloc_free(req), so we can't
>access it directly as callback data.
>
>The next commit will NULL out the vfs_gluster_fsync_state->req pointer
>when a caller calls talloc_free(req), and the request is still in
>flight.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_glusterfs.c | 9 ++++-----
> 1 file changed, 4 insertions(+), 5 deletions(-)
>
>diff --git a/source3/modules/vfs_glusterfs.c b/source3/modules/vfs_glusterfs.c
>index d5d402d72ab..4706e6f9189 100644
>--- a/source3/modules/vfs_glusterfs.c
>+++ b/source3/modules/vfs_glusterfs.c
>@@ -1158,7 +1158,7 @@ static struct tevent_req *vfs_gluster_fsync_send(struct vfs_handle_struct
> 	if (tevent_req_nomem(subreq, req)) {
> 		return tevent_req_post(req, ev);
> 	}
>-	tevent_req_set_callback(subreq, vfs_gluster_fsync_done, req);
>+	tevent_req_set_callback(subreq, vfs_gluster_fsync_done, state);
> 
> 	talloc_set_destructor(state, vfs_gluster_fsync_state_destructor);
> 
>@@ -1202,10 +1202,9 @@ static int vfs_gluster_fsync_state_destructor(struct vfs_gluster_fsync_state *st
> 
> static void vfs_gluster_fsync_done(struct tevent_req *subreq)
> {
>-	struct tevent_req *req = tevent_req_callback_data(
>-		subreq, struct tevent_req);
>-	struct vfs_gluster_fsync_state *state = tevent_req_data(
>-		req, struct vfs_gluster_fsync_state);
>+	struct vfs_gluster_fsync_state *state = tevent_req_callback_data(
>+		subreq, struct vfs_gluster_fsync_state);
>+	struct tevent_req *req = state->req;
> 	int ret;
> 
> 	ret = pthreadpool_tevent_job_recv(subreq);
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 3f39badb23de37c87e77d56ac71d80007f715f38 Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Fri, 28 Feb 2020 16:01:11 -0800
>Subject: [PATCH 18/24] s3: VFS: vfs_glusterfs. Protect
> vfs_gluster_fsync_done() from accessing a freed req pointer.
>
>If the fsp is forced closed by a SHUTDOWN_CLOSE whilst the
>request is in flight (share forced closed by smbcontrol),
>then we set state->req = NULL in the state destructor.
>
>The existing state destructor prevents the state memory
>from being freed, so when the thread completes and calls
>vfs_gluster_fsync_done(), just throw away the result if
>state->req == NULL.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_glusterfs.c | 20 ++++++++++++++++++++
> 1 file changed, 20 insertions(+)
>
>diff --git a/source3/modules/vfs_glusterfs.c b/source3/modules/vfs_glusterfs.c
>index 4706e6f9189..b5300282b7b 100644
>--- a/source3/modules/vfs_glusterfs.c
>+++ b/source3/modules/vfs_glusterfs.c
>@@ -1197,6 +1197,15 @@ static void vfs_gluster_fsync_do(void *private_data)
> 
> static int vfs_gluster_fsync_state_destructor(struct vfs_gluster_fsync_state *state)
> {
>+	/*
>+	 * This destructor only gets called if the request is still
>+	 * in flight, which is why we deny it by returning -1. We
>+	 * also set the req pointer to NULL so the _done function
>+	 * can detect the caller doesn't want the result anymore.
>+	 *
>+	 * Forcing the fsp closed by a SHUTDOWN_CLOSE can cause this.
>+	 */
>+	state->req = NULL;
> 	return -1;
> }
> 
>@@ -1211,6 +1220,17 @@ static void vfs_gluster_fsync_done(struct tevent_req *subreq)
> 	TALLOC_FREE(subreq);
> 	SMBPROFILE_BYTES_ASYNC_END(state->profile_bytes);
> 	talloc_set_destructor(state, NULL);
>+	if (req == NULL) {
>+		/*
>+		 * We were shutdown closed in flight. No one
>+		 * wants the result, and state has been reparented
>+		 * to the NULL context, so just free it so we
>+		 * don't leak memory.
>+		 */
>+		DBG_NOTICE("gluster fsync request abandoned in flight\n");
>+		TALLOC_FREE(state);
>+		return;
>+	}
> 	if (ret != 0) {
> 		if (ret != EAGAIN) {
> 			tevent_req_error(req, ret);
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 49c8d1ea87e5597ab7b6d4a218d62fcf6779dcd6 Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Mon, 2 Mar 2020 13:11:06 -0800
>Subject: [PATCH 19/24] s3: smbd: Make sure we correctly reply to outstanding
> aio requests with an error on SHUTDOWN_CLOSE.
>
>SHUTDOWN_CLOSE can be called when smbcontrol close-share
>is used to terminate active connections.
>
>Previously we just called talloc_free()
>on the outstanding request, but this
>caused crashes (before the async callback
>functions were fixed not to reference req
>directly) and also leaves the SMB2 request
>outstanding on the processing queue.
>
>Using tevent_req_error() instead
>causes the outstanding SMB1/2/3 request to
>return with NT_STATUS_INVALID_HANDLE
>and removes it from the processing queue.
>
>The callback function called from this
>calls talloc_free(req). The destructor will remove
>itself from the fsp and the aio_requests array.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/smbd/close.c | 30 ++++++++++++++++++++++++------
> 1 file changed, 24 insertions(+), 6 deletions(-)
>
>diff --git a/source3/smbd/close.c b/source3/smbd/close.c
>index f45371e656c..c7be0c8d447 100644
>--- a/source3/smbd/close.c
>+++ b/source3/smbd/close.c
>@@ -652,6 +652,7 @@ static NTSTATUS close_normal_file(struct smb_request *req, files_struct *fsp,
> 	bool is_durable = false;
> 
> 	if (fsp->num_aio_requests != 0) {
>+		unsigned num_requests = fsp->num_aio_requests;
> 
> 		if (close_type != SHUTDOWN_CLOSE) {
> 			/*
>@@ -681,13 +682,30 @@ static NTSTATUS close_normal_file(struct smb_request *req, files_struct *fsp,
> 
> 		while (fsp->num_aio_requests != 0) {
> 			/*
>-			 * The destructor of the req will remove
>-			 * itself from the fsp.
>-			 * Don't use TALLOC_FREE here, this will overwrite
>-			 * what the destructor just wrote into
>-			 * aio_requests[0].
>+			 * Previously we just called talloc_free()
>+			 * on the outstanding request, but this
>+			 * caused crashes (before the async callback
>+			 * functions were fixed not to reference req
>+			 * directly) and also leaves the SMB2 request
>+			 * outstanding on the processing queue.
>+			 *
>+			 * Using tevent_req_error() instead
>+			 * causes the outstanding SMB1/2/3 request to
>+			 * return with NT_STATUS_INVALID_HANDLE
>+			 * and removes it from the processing queue.
>+			 *
>+			 * The callback function called from this
>+			 * calls talloc_free(req). The destructor will remove
>+			 * itself from the fsp and the aio_requests array.
> 			 */
>-			talloc_free(fsp->aio_requests[0]);
>+			tevent_req_error(fsp->aio_requests[0], EBADF);
>+
>+			/* Paranoia to ensure we don't spin. */
>+			num_requests--;
>+			if (fsp->num_aio_requests != num_requests) {
>+				smb_panic("cannot cancel outstanding aio "
>+					"requests");
>+			}
> 		}
> 	}
> 
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 7fcc5cf32b6c92eeaa5e393cb2de6d38ffa0cf8a Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Wed, 4 Mar 2020 13:29:08 -0800
>Subject: [PATCH 20/24] s3: VFS: vfs_aio_pthread. Fix leak of state struct on
> error.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_aio_pthread.c | 1 +
> 1 file changed, 1 insertion(+)
>
>diff --git a/source3/modules/vfs_aio_pthread.c b/source3/modules/vfs_aio_pthread.c
>index d13ce2fdc63..37ba0c2c8a2 100644
>--- a/source3/modules/vfs_aio_pthread.c
>+++ b/source3/modules/vfs_aio_pthread.c
>@@ -308,6 +308,7 @@ static int open_async(const files_struct *fsp,
> 					     fsp->conn->sconn->pool,
> 					     aio_open_worker, opd);
> 	if (subreq == NULL) {
>+		TALLOC_FREE(opd);
> 		return -1;
> 	}
> 	tevent_req_set_callback(subreq, aio_open_handle_completion, opd);
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 98ad33d2ab32c190db650366e4e9d385d3239e18 Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Wed, 4 Mar 2020 13:47:13 -0800
>Subject: [PATCH 21/24] s3: VFS: vfs_aio_pthread: Replace state destructor with
> explicitly called teardown function.
>
>This will allow repurposing a real destructor to allow
>connections structs to be freed whilst the aio open
>request is in flight.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_aio_pthread.c | 22 +++++++++++-----------
> 1 file changed, 11 insertions(+), 11 deletions(-)
>
>diff --git a/source3/modules/vfs_aio_pthread.c b/source3/modules/vfs_aio_pthread.c
>index 37ba0c2c8a2..820e1b89c44 100644
>--- a/source3/modules/vfs_aio_pthread.c
>+++ b/source3/modules/vfs_aio_pthread.c
>@@ -62,6 +62,7 @@ struct aio_open_private_data {
> static struct aio_open_private_data *open_pd_list;
> 
> static void aio_open_do(struct aio_open_private_data *opd);
>+static void opd_free(struct aio_open_private_data *opd);
> 
> /************************************************************************
>  Find the open private data by mid.
>@@ -145,7 +146,7 @@ static void aio_open_handle_completion(struct tevent_req *subreq)
> 			close(opd->ret_fd);
> 			opd->ret_fd = -1;
> 		}
>-		TALLOC_FREE(opd);
>+		opd_free(opd);
> 	}
> }
> 
>@@ -207,16 +208,16 @@ static void aio_open_do(struct aio_open_private_data *opd)
> }
> 
> /************************************************************************
>- Open private data destructor.
>+ Open private data teardown.
> ***********************************************************************/
> 
>-static int opd_destructor(struct aio_open_private_data *opd)
>+static void opd_free(struct aio_open_private_data *opd)
> {
> 	if (opd->dir_fd != -1) {
> 		close(opd->dir_fd);
> 	}
> 	DLIST_REMOVE(open_pd_list, opd);
>-	return 0;
>+	TALLOC_FREE(opd);
> }
> 
> /************************************************************************
>@@ -250,7 +251,7 @@ static struct aio_open_private_data *create_private_open_data(const files_struct
> 	/* Copy our current credentials. */
> 	opd->ux_tok = copy_unix_token(opd, get_current_utok(fsp->conn));
> 	if (opd->ux_tok == NULL) {
>-		TALLOC_FREE(opd);
>+		opd_free(opd);
> 		return NULL;
> 	}
> 
>@@ -262,12 +263,12 @@ static struct aio_open_private_data *create_private_open_data(const files_struct
> 			fsp->fsp_name->base_name,
> 			&opd->dname,
> 			&fname) == false) {
>-		TALLOC_FREE(opd);
>+		opd_free(opd);
> 		return NULL;
> 	}
> 	opd->fname = talloc_strdup(opd, fname);
> 	if (opd->fname == NULL) {
>-		TALLOC_FREE(opd);
>+		opd_free(opd);
> 		return NULL;
> 	}
> 
>@@ -277,11 +278,10 @@ static struct aio_open_private_data *create_private_open_data(const files_struct
> 	opd->dir_fd = open(opd->dname, O_RDONLY);
> #endif
> 	if (opd->dir_fd == -1) {
>-		TALLOC_FREE(opd);
>+		opd_free(opd);
> 		return NULL;
> 	}
> 
>-	talloc_set_destructor(opd, opd_destructor);
> 	DLIST_ADD_END(open_pd_list, opd);
> 	return opd;
> }
>@@ -308,7 +308,7 @@ static int open_async(const files_struct *fsp,
> 					     fsp->conn->sconn->pool,
> 					     aio_open_worker, opd);
> 	if (subreq == NULL) {
>-		TALLOC_FREE(opd);
>+		opd_free(opd);
> 		return -1;
> 	}
> 	tevent_req_set_callback(subreq, aio_open_handle_completion, opd);
>@@ -365,7 +365,7 @@ static bool find_completed_open(files_struct *fsp,
> 		smb_fname_str_dbg(fsp->fsp_name)));
> 
> 	/* Now we can free the opd. */
>-	TALLOC_FREE(opd);
>+	opd_free(opd);
> 	return true;
> }
> 
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From d1a5a94c681bee9c94970e3f3b953cf516ce719a Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Wed, 4 Mar 2020 16:39:39 -0800
>Subject: [PATCH 22/24] s3: VFS: vfs_aio_pthread. Move xconn into state struct
> (opd).
>
>We will need this in future to cause a pending open to
>be rescheduled after the connection struct we're using
>has been shut down with an aio open in flight. This will
>allow a correct error reply to an awaiting client.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_aio_pthread.c | 17 ++++++++---------
> 1 file changed, 8 insertions(+), 9 deletions(-)
>
>diff --git a/source3/modules/vfs_aio_pthread.c b/source3/modules/vfs_aio_pthread.c
>index 820e1b89c44..d5919f83b3f 100644
>--- a/source3/modules/vfs_aio_pthread.c
>+++ b/source3/modules/vfs_aio_pthread.c
>@@ -51,6 +51,7 @@ struct aio_open_private_data {
> 	const char *fname;
> 	char *dname;
> 	connection_struct *conn;
>+	struct smbXsrv_connection *xconn;
> 	const struct security_unix_token *ux_tok;
> 	uint64_t initial_allocation_size;
> 	/* Returns. */
>@@ -91,7 +92,6 @@ static void aio_open_handle_completion(struct tevent_req *subreq)
> 		tevent_req_callback_data(subreq,
> 		struct aio_open_private_data);
> 	int ret;
>-	struct smbXsrv_connection *xconn;
> 
> 	ret = pthreadpool_tevent_job_recv(subreq);
> 	TALLOC_FREE(subreq);
>@@ -128,15 +128,8 @@ static void aio_open_handle_completion(struct tevent_req *subreq)
> 
> 	opd->in_progress = false;
> 
>-	/*
>-	 * TODO: In future we need a proper algorithm
>-	 * to find the correct connection for a fsp.
>-	 * For now we only have one connection, so this is correct...
>-	 */
>-	xconn = opd->conn->sconn->client->connections;
>-
> 	/* Find outstanding event and reschedule. */
>-	if (!schedule_deferred_open_message_smb(xconn, opd->mid)) {
>+	if (!schedule_deferred_open_message_smb(opd->xconn, opd->mid)) {
> 		/*
> 		 * Outstanding event didn't exist or was
> 		 * cancelled. Free up the fd and throw
>@@ -245,6 +238,12 @@ static struct aio_open_private_data *create_private_open_data(const files_struct
> 		.mid = fsp->mid,
> 		.in_progress = true,
> 		.conn = fsp->conn,
>+		/*
>+		 * TODO: In future we need a proper algorithm
>+		 * to find the correct connection for a fsp.
>+		 * For now we only have one connection, so this is correct...
>+		 */
>+		.xconn = fsp->conn->sconn->client->connections,
> 		.initial_allocation_size = fsp->initial_allocation_size,
> 	};
> 
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 564d3c1a3ae1997fdb91130704457dec09564ba8 Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Thu, 5 Mar 2020 10:22:00 -0800
>Subject: [PATCH 23/24] s3: VFS: vfs_aio_pthread: Make aio opens safe against
> connection teardown.
>
>Allocate state off the awaiting fsp, and add a destructor
>that catches deallocation of that fsp (caused by the
>deallocation of the containing conn struct) and record
>that fact in state. Moving to allocating off fsp instead
>of the NULL context is needed so we can detect (by the
>destructor firing) when the conn struct is torn down.
>That allows us to NULL out the saved conn struct pointer
>so we know not to access deallocated memory.
>
>This allows us to safely complete when the openat()
>returns and then return the error NT_STATUS_NETWORK_NAME_DELETED
>to the client open request.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> source3/modules/vfs_aio_pthread.c | 55 ++++++++++++++++++++++++++++++-
> 1 file changed, 54 insertions(+), 1 deletion(-)
>
>diff --git a/source3/modules/vfs_aio_pthread.c b/source3/modules/vfs_aio_pthread.c
>index d5919f83b3f..2ad39107e64 100644
>--- a/source3/modules/vfs_aio_pthread.c
>+++ b/source3/modules/vfs_aio_pthread.c
>@@ -95,6 +95,37 @@ static void aio_open_handle_completion(struct tevent_req *subreq)
> 
> 	ret = pthreadpool_tevent_job_recv(subreq);
> 	TALLOC_FREE(subreq);
>+
>+	/*
>+	 * We're no longer in flight. Remove the
>+	 * destructor used to preserve opd so
>+	 * a talloc_free actually removes it.
>+	 */
>+	talloc_set_destructor(opd, NULL);
>+
>+	if (opd->conn == NULL) {
>+		/*
>+		 * We were shutdown closed in flight. No one
>+		 * wants the result, and state has been reparented
>+		 * to the NULL context, so just free it so we
>+		 * don't leak memory.
>+		 */
>+		DBG_NOTICE("aio open request for %s/%s abandoned in flight\n",
>+			opd->dname,
>+			opd->fname);
>+		if (opd->ret_fd != -1) {
>+			close(opd->ret_fd);
>+			opd->ret_fd = -1;
>+		}
>+		/*
>+		 * Find outstanding event and reschedule so the client
>+		 * gets an error message return from the open.
>+		 */
>+		schedule_deferred_open_message_smb(opd->xconn, opd->mid);
>+		opd_free(opd);
>+		return;
>+	}
>+
> 	if (ret != 0) {
> 		bool ok;
> 
>@@ -221,7 +252,7 @@ static struct aio_open_private_data *create_private_open_data(const files_struct
> 					int flags,
> 					mode_t mode)
> {
>-	struct aio_open_private_data *opd = talloc_zero(NULL,
>+	struct aio_open_private_data *opd = talloc_zero(fsp,
> 					struct aio_open_private_data);
> 	const char *fname = NULL;
> 
>@@ -285,6 +316,22 @@ static struct aio_open_private_data *create_private_open_data(const files_struct
> 	return opd;
> }
> 
>+static int opd_inflight_destructor(struct aio_open_private_data *opd)
>+{
>+	/*
>+	 * Setting conn to NULL allows us to
>+	 * discover the connection was torn
>+	 * down which kills the fsp that owns
>+	 * opd.
>+	 */
>+	DBG_NOTICE("aio open request for %s/%s cancelled\n",
>+		opd->dname,
>+		opd->fname);
>+	opd->conn = NULL;
>+	/* Don't let opd go away. */
>+	return -1;
>+}
>+
> /*****************************************************************
>  Setup an async open.
> *****************************************************************/
>@@ -317,6 +364,12 @@ static int open_async(const files_struct *fsp,
> 		opd->dname,
> 		opd->fname));
> 
>+	/*
>+	 * Add a destructor to protect us from connection
>+	 * teardown whilst the open thread is in flight.
>+	 */
>+	talloc_set_destructor(opd, opd_inflight_destructor);
>+
> 	/* Cause the calling code to reschedule us. */
> 	errno = EINPROGRESS; /* Maps to NT_STATUS_MORE_PROCESSING_REQUIRED. */
> 	return -1;
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
>
>From 341e516697506fef2b6061ff949ecb7acb38562a Mon Sep 17 00:00:00 2001
>From: Jeremy Allison <jra@samba.org>
>Date: Tue, 3 Mar 2020 13:31:18 -0800
>Subject: [PATCH 24/24] s3: tests: Add samba3.blackbox.force-close-share
>
>Checks server stays up whilst writing to a force closed share.
>Uses existing aio_delay_inject share to delay writes while
>we force close the share.
>
>BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>
>Signed-off-by: Jeremy Allison <jra@samba.org>
>---
> .../script/tests/test_force_close_share.sh    | 100 ++++++++++++++++++
> source3/selftest/tests.py                     |   9 ++
> 2 files changed, 109 insertions(+)
> create mode 100755 source3/script/tests/test_force_close_share.sh
>
>diff --git a/source3/script/tests/test_force_close_share.sh b/source3/script/tests/test_force_close_share.sh
>new file mode 100755
>index 00000000000..ebfff5af77c
>--- /dev/null
>+++ b/source3/script/tests/test_force_close_share.sh
>@@ -0,0 +1,100 @@
>+#!/bin/bash
>+#
>+# Test smbcontrol close-share command.
>+#
>+# Copyright (C) 2020 Volker Lendecke
>+# Copyright (C) 2020 Jeremy Allison
>+#
>+# Note this is designed to be run against
>+# the aio_delay_inject share which is preconfigured
>+# with 2 second delays on pread/pwrite.
>+
>+if [ $# -lt 5 ]; then
>+    echo Usage: test_share_force_close.sh \
>+	 SERVERCONFFILE SMBCLIENT SMBCONTROL IP aio_delay_inject_sharename
>+exit 1
>+fi
>+
>+CONF=$1
>+SMBCLIENT=$2
>+SMBCONTROL=$3
>+SERVER=$4
>+SHARE=$5
>+
>+incdir=$(dirname $0)/../../../testprogs/blackbox
>+. $incdir/subunit.sh
>+
>+failed=0
>+
>+# Create the smbclient communication pipes.
>+rm -f smbclient-stdin smbclient-stdout smbclient-stderr
>+mkfifo smbclient-stdin smbclient-stdout smbclient-stderr
>+
>+# Create a large-ish testfile
>+rm testfile
>+head -c 10MB /dev/zero >testfile
>+
>+CLI_FORCE_INTERACTIVE=1; export CLI_FORCE_INTERACTIVE
>+
>+${SMBCLIENT} //${SERVER}/${SHARE} ${CONF} -U${USER}%${PASSWORD} \
>+	     < smbclient-stdin > smbclient-stdout 2>smbclient-stderr &
>+CLIENT_PID=$!
>+
>+sleep 1
>+
>+exec 100>smbclient-stdin  101<smbclient-stdout 102<smbclient-stderr
>+
>+# consume the smbclient startup messages
>+head -n 1 <&101
>+head -n 1 <&102
>+
>+# Ensure we're putting a fresh file.
>+echo "del testfile" >&100
>+echo "put testfile" >&100
>+
>+sleep 1
>+
>+# Close the aio_delay_inject share whilst we have outstanding writes.
>+
>+testit "smbcontrol" ${SMBCONTROL} ${CONF} smbd close-share ${SHARE} ||
>+    failed=$(expr $failed + 1)
>+
>+sleep 1
>+
>+# If we get one or more NT_STATUS_NETWORK_NAME_DELETED
>+# or NT_STATUS_INVALID_HANDLE on stderr from the writes we
>+# know the server stayed up and didn't crash when the
>+# close-share removed the share.
>+#
>+# BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301
>+#
>+COUNT=$(head -n 2 <&102 |
>+	    grep -e NT_STATUS_NETWORK_NAME_DELETED -e NT_STATUS_INVALID_HANDLE |
>+	    wc -l)
>+
>+testit "Verify close-share did cancel the file put" \
>+       test $COUNT -ge 1 || failed=$(expr $failed + 1)
>+
>+kill ${CLIENT_PID}
>+
>+# Rerun smbclient to remove the testfile on the server.
>+rm -f smbclient-stdin smbclient-stdout smbclient-stderr testfile
>+mkfifo smbclient-stdin smbclient-stdout
>+
>+${SMBCLIENT} //${SERVER}/${SHARE} ${CONF} -U${USER}%${PASSWORD} \
>+	     < smbclient-stdin > smbclient-stdout &
>+CLIENT_PID=$!
>+
>+sleep 1
>+
>+exec 100>smbclient-stdin  101<smbclient-stdout
>+
>+echo "del testfile" >&100
>+
>+sleep 1
>+
>+kill ${CLIENT_PID}
>+
>+rm -f smbclient-stdin smbclient-stdout testfile
>+
>+testok $0 $failed
>diff --git a/source3/selftest/tests.py b/source3/selftest/tests.py
>index 6e9d3ddb144..ff6ab1a6b15 100755
>--- a/source3/selftest/tests.py
>+++ b/source3/selftest/tests.py
>@@ -816,6 +816,15 @@ plantestsuite("samba3.blackbox.close-denied-share", "simpleserver:local",
>                '$SERVER_IP',
>                "tmp"])
> 
>+plantestsuite("samba3.blackbox.force-close-share", "simpleserver:local",
>+              [os.path.join(samba3srcdir,
>+                            "script/tests/test_force_close_share.sh"),
>+               configuration,
>+               os.path.join(bindir(), "smbclient"),
>+               os.path.join(bindir(), "smbcontrol"),
>+               '$SERVER_IP',
>+               "aio_delay_inject"])
>+
> plantestsuite("samba3.blackbox.open-eintr", "simpleserver:local",
>               [os.path.join(samba3srcdir,
>                             "script/tests/test_open_eintr.sh"),
>-- 
>2.25.0.265.gbab2e86ba0-goog
>
Comment 16 Jeremy Allison 2020-03-06 05:10:14 UTC
Withdrawing patch as it fails CI. I think I know why, should be able to fix tomorrow.
Comment 17 Jeremy Allison 2020-03-06 05:13:05 UTC
(In reply to Jeremy Allison from comment #16)

What? How did comment #15 end up in that state? I don't know how to turn a comment back into an attachment - Bugzilla still has its mysteries :-(.
Comment 18 Jeremy Allison 2020-03-06 18:13:20 UTC
Created attachment 15847 [details]
git-am fix for master.

What I have in ci-test right now.
Comment 19 Jeremy Allison 2020-03-06 21:02:57 UTC
This version (https://bugzilla.samba.org/attachment.cgi?id=15847) passes CI. Proposing for inclusion in master.
Comment 20 Stefan Metzmacher 2020-03-09 16:34:02 UTC
Sorry to come late to the discussion, but I think we need to handle
a forced tdis like a normal tdis from the client, and wait until all
requests have finished, otherwise we risk data corruption!

If the pthreadpool schedules the already queued job, after the main thread
has already closed and reused the fd value, we may read/write from/to the
wrong file!
Comment 21 Stefan Metzmacher 2020-03-09 16:36:42 UTC
(In reply to Stefan Metzmacher from comment #20)

I needed something similar for the tcp disconnect for multi-channel
https://git.samba.org/?p=metze/samba/wip.git;a=commitdiff;h=5d9e91fa88720acd2a2ac7e08082350378cd19f6
Comment 22 Stefan Metzmacher 2020-03-09 16:44:29 UTC
conn_force_tdis() needs to get the logic from smbd_smb2_tdis_send()
and mark the tcon as disconnected immediately and wait on the pending
requests before calling smbXsrv_tcon_disconnect() in the background.

Then all the changes to the async state destructors can be reverted.
Comment 23 Jeremy Allison 2020-03-09 16:47:43 UTC
(In reply to Stefan Metzmacher from comment #20)

We already handle this on close() with:

fsp->deferred_close

and don't we already handle this in SMB2 inside:

smbd_smb2_tdis_send()


638         /*
639          * Now we add our own waiter to the end of the queue,
640          * this way we get notified when all pending requests are finished
641          * and send to the socket.
Comment 24 Jeremy Allison 2020-03-09 16:51:04 UTC
(In reply to Stefan Metzmacher from comment #20)

> If the pthreadpool schedules the already queued job, after the main thread
> has already closed and reused the fd value, we may read/write from/to the
> wrong file!

I don't think that's possible. If the fd gets closed whilst the pread/pwrite/fsync is in progress, then either the action completes or the syscall returns EBADF.

Can you explain how the reuse might happen ?
Comment 25 Jeremy Allison 2020-03-09 17:08:19 UTC
(In reply to Stefan Metzmacher from comment #21)

> I needed something similar for the tcp disconnect for multi-channel
> https://git.samba.org/?p=metze/samba/wip.git;a=commitdiff;h=5d9e91fa88720acd2a2ac7e08082350378cd19f6

That code isn't in master. What is it related to ?
Comment 26 Ralph Böhme 2020-03-09 17:14:44 UTC
(In reply to Stefan Metzmacher from comment #20)
As already pointed out by Jeremy: there's no fd reuse as there's no fd close because there's no fsp close as we defer the fsp close. :)
Comment 27 Jeremy Allison 2020-03-09 17:20:55 UTC
(In reply to Ralph Böhme from comment #26)

> As already pointed out by Jeremy: there's no fd reuse as there's no fd close 
> because there's no fsp close as we defer the fsp close. :)

Actually that's not correct, Ralph. In close_normal_file() when called with SHUTDOWN_CLOSE we don't defer the close. We cause the outstanding io to return by calling tevent_req_error() (or the nt version) to cause the running pthreads to get their containing destructor to set up for ignoring the return, then we call fd_close().

So the fd does get closed. I'm just not sure how, once the io is outstanding on an fd, closing and re-opening it in the kernel can cause reuse and io on the wrong file.

Wouldn't that be a terrible Linux internal bug if that were so ?
Comment 28 Ralph Böhme 2020-03-09 17:51:12 UTC
(In reply to Jeremy Allison from comment #27)
D'oh, you're right. Sorry, too late at night...
Comment 29 Jeremy Allison 2020-03-09 17:55:12 UTC
Ralph pointed out on the phone that we don't know the aio is in flight yet, so this can possibly be delayed until the close() -> reopen() has occurred and then the thread uses the new (incorrect) fd.

But as I pointed out, this is a problem we *already* have, with both shutdown close and smb1 tdis. So we're no worse off :-).

So we have 2 problems.

1). Crash bug
2). Possible data corruption under extreme load conditions/insane thread scheduler.

and problem #2 we've had ever since we added aio into SMB1 :-).

My plan is to first push Ralph's fixes, so problem #1 is fixed. Secondly, work on making smb1_tdis and force_close async. Once that's done we can revert the tevent req callback changes. Note I still think the vfs_aio_pthread() async open changes are safe, as they're opening a *new* fd, not re-using an existing one - so there's no chance of re-use there.

Then we can synthesize what we've learned and use the full set to come up with a streamlined patchset containing only the back-ported fixes for 4.12, 4.11.

Sound like a plan ?
Comment 30 Stefan Metzmacher 2020-03-09 20:23:53 UTC
(In reply to Jeremy Allison from comment #29)

Yes, I meant the scheduling races.

The plan sounds good, thanks!
Comment 31 Jeremy Allison 2020-03-09 21:57:09 UTC
FYI, I'm working on making SMB1 tdis async. Should have patches to review later this week.
Comment 32 Jeremy Allison 2020-03-10 21:26:06 UTC
Just got async smb1_tdis and force_connection_close code working... under hand testing (and valgrind)!

Using the same wait_queue technique as the async smb2 tdis code. Causes closes to be delayed until all outstanding aio is finished on the connection. Prevents any new aio being added via the:

tcon->status = NT_STATUS_NETWORK_NAME_DELETED

method.

Works with the vfs_default and vfs_glusterfs state changes reverted.

I think the aio_pthread.c code changes should be left in as there's no possibility of fd reuse there, plus it can't use the fsp->aio_requests method (as it's trying to open a new fsp, not do aio on an existing one).

Should remove any data race possibility!

Now to finish off and try CI..

Progress :-).
Comment 33 Jeremy Allison 2020-03-10 21:44:45 UTC
Ah. There's still a couple of issues.

1). SMB1exit call -> calls reply_exit() -> file_close_pid() -> close_file(..., SHUTDOWN_CLOSE).

I can fix this one by making reply_exit() async in the same way I fixed SMB1tdis.

2). smbXsrv_session_logoff() -> file_close_user() -> close_file(..., SHUTDOWN_CLOSE).

This one (smbXsrv_session_logoff()) I'm not sure how to fix. There's no easy async point for this call.

Metze, can you give me some advice here ?
Comment 34 Jeremy Allison 2020-03-10 22:42:53 UTC
OK, looking closer:

For SMB1: reply_ulogoff() calls smbXsrv_session_logoff(). I could make reply_ulogoff an async point by invalidating the user and then waiting on all open fsp's to have no aio requests.

For SMB2: smbd_smb2_logoff_shutdown_done() calls smbXsrv_session_logoff(). This is already an async call, so I could just invalidate the user so we get no more aio and then add another subreq in here to cause it to wait until aio is finished.

The final case is from smbXsrv_session_logoff_all() - but that is only called from server_exit(), in which case the callback functions for the outstanding aio will never get called anyway.

So it looks like by adding async to SMB1 reply_exit(), SMB1 reply_ulogoff() and adding more async in the internals of smbd_smb2_logoff_shutdown_done() that should cover all the SHUTDOWN_CLOSE cases with outstanding aio.

Phew. That took a while to figure out. Metze, can you confirm my analysis before I spend a week more coding ? :-).
Comment 35 Jeremy Allison 2020-03-10 22:47:44 UTC
Created attachment 15854 [details]
git-am for master. WIP.

Work in progress. Makes smb1_tdis and conn_force_close async.

Posting here so I don't lose it.
Comment 36 Stefan Metzmacher 2020-03-11 08:16:42 UTC
(In reply to Jeremy Allison from comment #33)

For SMB2 we already use smb2srv_session_shutdown_send/recv, which has basically
the same logic. And that's used before smbXsrv_session_logoff() is called.

smb2srv_session_shutdown_send() also invalidates the session and prevents
further requests. So smbd_smb2_logoff_shutdown_done() doesn't need any
change at all.
Comment 37 Stefan Metzmacher 2020-03-11 08:32:51 UTC
(In reply to Jeremy Allison from comment #34)


Yes, reply_ulogoff needs the same logic as reply_tdis.

smbd_smb2_logoff_shutdown_done() is already ok and doesn't need any change.

Yes, smbXsrv_session_logoff_all() can be ignored, as we call exit(),
which also terminates the worker threads.

For reply_exit() I think we should also mark the files as closed.
(Without looking at the code) Something like fsp->op->global->status = NT_STATUS_FILE_CLOSED.

Just a minor comment on the style: I'd use forward declarations for
conn_force_tdis_done() and smbd_smb1_request_tdis_done() (I'd actually call this reply_tdis_done()), and move them after conn_force_tdis() and reply_tdis().

I'm not sure (and in the end I don't care much), but I'd try to see
how the diff would look like if the #if 0 patches stop after
conn_force_tdis_recv() and smbd_smb1_tdis_recv().

We also need to check if END_PROFILE() works ok if it's deferred into
the callback. I guess there was a reason why we had to do more work for the
async smb2 profiling...

Thanks for all the work to get this cleaned up!
Comment 38 Jeremy Allison 2020-03-11 15:48:53 UTC
Metze wrote:

> smbd_smb2_logoff_shutdown_done() is already ok and doesn't need any change.

Oh thank goodness, that will save me a lot of work :-).

> I'm not sure (and in the end I don't care much), but I'd try to see
> how the diff would look like if the #if 0 patches stop after
> conn_force_tdis_recv() and smbd_smb1_tdis_recv().

OK, I'll take a look at that. The diff looked really messy originally, which is why I did it the #if 0 way; it makes the patches much clearer for reviewers.

> We also need to check if END_PROFILE() works ok if it's deferred into
> the callback.

I did look at that. I thought it was just updating a global counter, but I'll check again.

Thanks a *lot* for the preliminary review !
Comment 39 Jeremy Allison 2020-03-11 22:01:31 UTC
We're in luck. For reply_exit() we already have a bool flag in fsp, fsp->closing.

This currently only gets set in smbd_smb2_close_send() and is only checked in smbd_smb2_lock_cancel() in order to return the correct error message when cancelling an outstanding blocking lock request.

So, if we set fsp->closing in SMB1 reply_exit() (and we should also add it in
SMB1 reply_close to match SMB2) and then add checks for fsp->closing == true in source3/smbd/files.c, in the same places we already check for fsp->deferred_close to refuse to allow an fsp to be returned, we have a way for reply_exit() to mark the files it wants to close as "closing in progress" so we can call close_file() on them once any aio completes.
Comment 40 Jeremy Allison 2020-03-11 22:20:51 UTC
Using fsp->closing everywhere will also allow us to remove fsp->deferred_close from the fsp struct, and remove source3/lib/tevent_wait.[ch] as this is the only place it's used.
Comment 41 Jeremy Allison 2020-03-11 22:21:32 UTC
Of course removing fsp->deferred_close is a VFS ABI change so we'll only be able to do it in master.
Comment 42 Jeremy Allison 2020-03-13 01:19:27 UTC
Created attachment 15858 [details]
git-am for master. Updated WIP

Just uploading latest WIP so anyone else interested can follow along.

I'm going to be hand-testing this over the next few days under valgrind etc.

Once it's working under valgrind I'll submit to gitlab-CI and let others formally kick the tires on it :-).
Comment 43 Jeremy Allison 2020-03-13 20:15:17 UTC
Created attachment 15861 [details]
git-am WIP fix for master.

Get the revert-commit list correct.
Comment 44 Jeremy Allison 2020-03-14 01:05:12 UTC
Ah. Fails to compile when configured with --with-profiling-data.

I'll see if there's a way to add this without modifying struct smb_request {}, which needs to be stable for the VFS ABI.
Comment 45 Jeremy Allison 2020-03-14 05:56:07 UTC
Hmm. Think I've found a way to do this by re-using smb1req->async_priv, which is a generic void pointer already used for different purposes by different SMB1 calls.

A couple of extra macros, SMB1_PROFILE_SEND() and SMB1_PROFILE_RECV(), should do the trick I think.
Comment 46 Jeremy Allison 2020-03-14 06:07:26 UTC
Or I could do what all other users of tevent-based async SMB1 calls do (reply_close(), reply_lockread(), reply_read_and_X() etc. etc.) and just ignore the problem by calling START_PROFILE()/END_PROFILE() in both the initial and async return functions. I think I'll do that :-).

Sucks, but I'm not trying to boil the ocean here, and SMB1 is dying anyway. If anyone seriously objects I do know how to fix this properly for all SMB1 async calls.
Comment 47 Jeremy Allison 2020-03-16 21:13:49 UTC
Created attachment 15867 [details]
git-am fix for master

https://gitlab.com/samba-team/samba/-/merge_requests/1219

Version I've submitted to master.
Comment 48 Jeremy Allison 2020-03-21 00:09:33 UTC
Created attachment 15870 [details]
git-am fix for 4.12.next.

Cherry-picked relevant fixes that went into master, git-squashed the related ones. I don't think this is worth back-porting to 4.11, probably easier as a 4.12.next patch only.
Comment 49 Ralph Böhme 2020-04-06 14:12:12 UTC
Reassigning to Karolin for inclusion in 4.12.
Comment 50 Karolin Seeger 2020-04-07 08:09:03 UTC
(In reply to Ralph Böhme from comment #49)
Pushed to autobuild-v4-12-test.
Comment 51 Karolin Seeger 2020-04-09 07:19:27 UTC
(In reply to Karolin Seeger from comment #50)
Pushed to v4-12-test branches.
Closing out bug report.

Thanks!
Comment 52 fst@highdefinition.ch 2020-06-15 23:24:40 UTC
Is it possible that this bug has not been resolved correctly? Please advise if I should open a new bug. I am getting hundreds of crashes daily with 4.12.3 on F32. I fear this bug is a combination of up to 5 different bugs that I might have to post separately.

Jun 15 13:52:05 www smbd_audit[1224476]: [2020/06/15 13:52:05.058519,  0] ../../source3/smbd/close.c:648(assert_no_pending_aio)
Jun 15 13:52:05 www smbd_audit[1224476]:   assert_no_pending_aio: fsp->num_aio_requests=1
Jun 15 13:52:05 www smbd_audit[1224476]: [2020/06/15 13:52:05.058548,  0] ../../source3/lib/util.c:829(smb_panic_s3)
Jun 15 13:52:05 www smbd_audit[1224476]:   PANIC (pid 1224476): can not close with outstanding aio requests
Jun 15 13:52:05 www smbd_audit[1224476]: [2020/06/15 13:52:05.059230,  0] ../../lib/util/fault.c:264(log_stack_trace)
Jun 15 13:52:05 www smbd_audit[1224476]:   BACKTRACE: 33 stack frames:
Jun 15 13:52:05 www smbd_audit[1224476]:    #0 /lib64/libsamba-util.so.0(log_stack_trace+0x34) [0x7f54ac0227b4]
Jun 15 13:52:05 www smbd_audit[1224476]:    #1 /lib64/libsmbconf.so.0(smb_panic_s3+0x27) [0x7f54aba8bbc7]
Jun 15 13:52:05 www smbd_audit[1224476]:    #2 /lib64/libsamba-util.so.0(smb_panic+0x31) [0x7f54ac0228b1]
Jun 15 13:52:05 www smbd_audit[1224476]:    #3 /usr/lib64/samba/libsmbd-base-samba4.so(+0x1ed5cf) [0x7f54abe355cf]
Jun 15 13:52:05 www smbd_audit[1224476]:    #4 /usr/lib64/samba/libsmbd-base-samba4.so(close_file+0xc3) [0x7f54abe35f53]
Jun 15 13:52:05 www audit[1224476]: ANOM_ABEND auid=4294967295 uid=0 gid=0 ses=4294967295 pid=1224476 comm="smbd" exe="/usr/sbin/smbd" sig=6 res=1
Jun 15 13:52:05 www smbd_audit[1224476]:    #5 /usr/lib64/samba/libsmbd-base-samba4.so(file_close_user+0x3d) [0x7f54abdce6fd]
Jun 15 13:52:05 www smbd_audit[1224476]:    #6 /usr/lib64/samba/libsmbd-base-samba4.so(smbXsrv_session_logoff+0x51) [0x7f54abe817e1]
Jun 15 13:52:05 www smbd_audit[1224476]:    #7 /usr/lib64/samba/libsmbd-base-samba4.so(+0x239b8a) [0x7f54abe81b8a]
Jun 15 13:52:05 www smbd_audit[1224476]:    #8 /usr/lib64/samba/libdbwrap-samba4.so(+0x5c56) [0x7f54aaa61c56]
Jun 15 13:52:05 www smbd_audit[1224476]:    #9 /usr/lib64/samba/libdbwrap-samba4.so(+0x5e8f) [0x7f54aaa61e8f]
Jun 15 13:52:05 www smbd_audit[1224476]:    #10 /usr/lib64/samba/libdbwrap-samba4.so(dbwrap_traverse+0xb) [0x7f54aaa5fdcb]
Jun 15 13:52:05 www smbd_audit[1224476]:    #11 /usr/lib64/samba/libsmbd-base-samba4.so(smbXsrv_session_logoff_all+0x5c) [0x7f54abe81d5c]
Jun 15 13:52:05 www smbd_audit[1224476]:    #12 /usr/lib64/samba/libsmbd-base-samba4.so(+0x23f42e) [0x7f54abe8742e]
Jun 15 13:52:05 www smbd_audit[1224476]:    #13 /usr/lib64/samba/libsmbd-base-samba4.so(+0x23f9c4) [0x7f54abe879c4]
Jun 15 13:52:05 www smbd_audit[1224476]:    #14 /usr/lib64/samba/libsmbd-shim-samba4.so(exit_server_cleanly+0x18) [0x7f54ab4599f8]
Jun 15 13:52:05 www smbd_audit[1224476]:    #15 /usr/lib64/samba/libsmbd-base-samba4.so(smbd_server_connection_terminate_ex+0x162) [0x7f54abe61332]
Jun 15 13:52:05 www smbd_audit[1224476]:    #16 /lib64/libtevent.so.0(tevent_common_invoke_fd_handler+0x81) [0x7f54ab3dff11]
Jun 15 13:52:05 www smbd_audit[1224476]:    #17 /lib64/libtevent.so.0(+0xe417) [0x7f54ab3e6417]
Jun 15 13:52:05 www smbd_audit[1224476]:    #18 /lib64/libtevent.so.0(+0xc57b) [0x7f54ab3e457b]
Jun 15 13:52:05 www smbd_audit[1224476]:    #19 /lib64/libtevent.so.0(_tevent_loop_once+0x98) [0x7f54ab3df598]
Jun 15 13:52:05 www smbd_audit[1224476]:    #20 /lib64/libtevent.so.0(tevent_common_loop_wait+0x1b) [0x7f54ab3df87b]
Jun 15 13:52:05 www smbd_audit[1224476]:    #21 /lib64/libtevent.so.0(+0xc50b) [0x7f54ab3e450b]
Jun 15 13:52:05 www smbd_audit[1224476]:    #22 /usr/lib64/samba/libsmbd-base-samba4.so(smbd_process+0x7c7) [0x7f54abe53667]
Jun 15 13:52:05 www smbd_audit[1224476]:    #23 /usr/sbin/smbd(+0xf531) [0x55aa53384531]
Jun 15 13:52:05 www smbd_audit[1224476]:    #24 /lib64/libtevent.so.0(tevent_common_invoke_fd_handler+0x81) [0x7f54ab3dff11]
Jun 15 13:52:05 www smbd_audit[1224476]:    #25 /lib64/libtevent.so.0(+0xe417) [0x7f54ab3e6417]
Jun 15 13:52:05 www smbd_audit[1224476]:    #26 /lib64/libtevent.so.0(+0xc57b) [0x7f54ab3e457b]
Jun 15 13:52:05 www smbd_audit[1224476]:    #27 /lib64/libtevent.so.0(_tevent_loop_once+0x98) [0x7f54ab3df598]
Jun 15 13:52:05 www smbd_audit[1224476]:    #28 /lib64/libtevent.so.0(tevent_common_loop_wait+0x1b) [0x7f54ab3df87b]
Jun 15 13:52:05 www smbd_audit[1224476]:    #29 /lib64/libtevent.so.0(+0xc50b) [0x7f54ab3e450b]
Jun 15 13:52:05 www smbd_audit[1224476]:    #30 /usr/sbin/smbd(main+0x1be8) [0x55aa5337ea78]
Jun 15 13:52:05 www smbd_audit[1224476]:    #31 /lib64/libc.so.6(__libc_start_main+0xf2) [0x7f54ab0e8042]
Jun 15 13:52:05 www smbd_audit[1224476]:    #32 /usr/sbin/smbd(_start+0x2e) [0x55aa5337ed8e]
Jun 15 13:52:05 www smbd_audit[1224476]: [2020/06/15 13:52:05.059517,  0] ../../source3/lib/dumpcore.c:317(dump_core)
Jun 15 13:52:05 www smbd_audit[1224476]:   coredump is handled by helper binary specified at /proc/sys/kernel/core_pattern
Jun 15 13:52:05 www smbd_audit[1224476]:
Jun 15 13:52:05 www audit: BPF prog-id=12349 op=LOAD
Jun 15 13:52:05 www audit: BPF prog-id=12350 op=LOAD
Jun 15 13:52:05 www audit: BPF prog-id=12351 op=LOAD
Jun 15 13:52:05 www systemd[1]: Started Process Core Dump (PID 1224483/UID 0).
Jun 15 13:52:05 www audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@4074-1224483-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 15 13:52:05 www systemd-coredump[1224485]: Process 1224476 (smbd) of user 0 dumped core.

  Stack trace of thread 1224476:
  #0  0x00007f54ab0fda25 raise (libc.so.6 + 0x3ca25)
  #1  0x00007f54ab0e6895 abort (libc.so.6 + 0x25895)
  #2  0x00007f54aba7c7c4 dump_core (libsmbconf.so.0 + 0x4d7c4)
  #3  0x00007f54aba8bc0a smb_panic_s3 (libsmbconf.so.0 + 0x5cc0a)
  #4  0x00007f54ac0228b1 smb_panic (libsamba-util.so.0 + 0x198b1)
  #5  0x00007f54abe355cf assert_no_pending_aio.constprop.0.isra.0 (libsmbd-base-samba4.so + 0x1ed5cf)
  #6  0x00007f54abe35f53 close_file (libsmbd-base-samba4.so + 0x1edf53)
  #7  0x00007f54abdce6fd file_close_user (libsmbd-base-samba4.so + 0x1866fd)
  #8  0x00007f54abe817e1 smbXsrv_session_logoff (libsmbd-base-samba4.so + 0x2397e1)
  #9  0x00007f54abe81b8a smbXsrv_session_logoff_all_callback (libsmbd-base-samba4.so + 0x239b8a)
  #10 0x00007f54aaa61c56 db_rbt_traverse_internal.constprop.0 (libdbwrap-samba4.so + 0x5c56)
  #11 0x00007f54aaa61e8f db_rbt_traverse (libdbwrap-samba4.so + 0x5e8f)
  #12 0x00007f54aaa5fdcb dbwrap_traverse (libdbwrap-samba4.so + 0x3dcb)
  #13 0x00007f54abe81d5c smbXsrv_session_logoff_all (libsmbd-base-samba4.so + 0x239d5c)
  #14 0x00007f54abe8742e exit_server_common (libsmbd-base-samba4.so + 0x23f42e)
  #15 0x00007f54abe879c4 smbd_exit_server_cleanly (libsmbd-base-samba4.so + 0x23f9c4)
  #16 0x00007f54ab4599f8 exit_server_cleanly (libsmbd-shim-samba4.so + 0x9f8)
  #17 0x00007f54abe61332 smbd_server_connection_terminate_ex (libsmbd-base-samba4.so + 0x219332)
  #18 0x00007f54ab3dff11 tevent_common_invoke_fd_handler (libtevent.so.0 + 0x7f11)
  #19 0x00007f54ab3e6417 epoll_event_loop_once (libtevent.so.0 + 0xe417)
  #20 0x00007f54ab3e457b std_event_loop_once (libtevent.so.0 + 0xc57b)
  #21 0x00007f54ab3df598 _tevent_loop_once (libtevent.so.0 + 0x7598)
  #22 0x00007f54ab3df87b tevent_common_loop_wait (libtevent.so.0 + 0x787b)
  #23 0x00007f54ab3e450b std_event_loop_wait (libtevent.so.0 + 0xc50b)
  #24 0x00007f54abe53667 smbd_process (libsmbd-base-samba4.so + 0x20b667)
  #25 0x000055aa53384531 smbd_accept_connection (smbd + 0xf531)
  #26 0x00007f54ab3dff11 tevent_common_invoke_fd_handler (libtevent.so.0 + 0x7f11)
  #27 0x00007f54ab3e6417 epoll_event_loop_once (libtevent.so.0 + 0xe417)
  #28 0x00007f54ab3e457b std_event_loop_once (libtevent.so.0 + 0xc57b)
  #29 0x00007f54ab3df598 _tevent_loop_once (libtevent.so.0 + 0x7598)
  #30 0x00007f54ab3df87b tevent_common_loop_wait (libtevent.so.0 + 0x787b)
  #31 0x00007f54ab3e450b std_event_loop_wait (libtevent.so.0 + 0xc50b)
  #32 0x000055aa5337ea78 main (smbd + 0x9a78)
  #33 0x00007f54ab0e8042 __libc_start_main (libc.so.6 + 0x27042)
  #34 0x000055aa5337ed8e _start (smbd + 0x9d8e)

  Stack trace of thread 1224477:
  #0  0x00007f54ab3c61b8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0x101b8)
  #1  0x00007f54aaa23282 pthreadpool_server (libmessages-dgm-samba4.so + 0x7282)
  #2  0x00007f54ab3bf432 start_thread (libpthread.so.0 + 0x9432)
  #3  0x00007f54ab1c29d3 __clone (libc.so.6 + 0x1019d3)

  Stack trace of thread 1224482:
  #0  0x00007f54ab3c61b8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0x101b8)
  #1  0x00007f54aaa23282 pthreadpool_server (libmessages-dgm-samba4.so + 0x7282)
  #2  0x00007f54ab3bf432 start_thread (libpthread.so.0 + 0x9432)
  #3  0x00007f54ab1c29d3 __clone (libc.so.6 + 0x1019d3)
Jun 15 13:52:05 www systemd[1]: systemd-coredump@4074-1224483-0.service: Succeeded.
Jun 15 13:52:05 www audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@4074-1224483-0 comm="systemd"


Further bugs and possible connections: The client is the "Roon Essentials" software running on an "Elac Discovery" device. I am just a user of this device. They are using a standard linux CIFS mount. Besides this panic crash, there are a lot of other very annoying errors popping up all the time on the server, most likely causing service interruptions and crashes on the client:

[2020/06/16 00:38:23.396588,  3] ../../source3/smbd/smb2_server.c:3264(smbd_smb2_request_error_ex)
  smbd_smb2_request_error_ex: smbd_smb2_request_error_ex: idx[1] status[NT_STATUS_END_OF_FILE] || at ../../source3/smbd/smb2_read.c:133
18152 occurrences within the last 14 hours

[2020/06/16 00:38:23.410623,  3] ../../source3/smbd/smb2_server.c:3264(smbd_smb2_request_error_ex)
  smbd_smb2_request_error_ex: smbd_smb2_request_error_ex: idx[1] status[NT_STATUS_OBJECT_NAME_NOT_FOUND] || at ../../source3/smbd/smb2_create.c:334
53572 occurrences within the last 14 hours

[2020/06/16 00:39:10.327604,  3] ../../source3/smbd/smb2_server.c:3264(smbd_smb2_request_error_ex)
  smbd_smb2_request_error_ex: smbd_smb2_request_error_ex: idx[1] status[STATUS_NO_MORE_FILES] || at ../../source3/smbd/smb2_query_directory.c:159
39171 occurrences within the last 14 hours

I am not sure if those errors are in any way related to the crash. Maybe a samba bug is provoking a misbehaviour in cifs which is then causing a panic in samba which is then also causing a core dump on the client.

The client itself is then reporting this in dmesg (further description at the end):


Jun 15 09:35:33 var-som-mx6 user.err kernel: CIFS VFS: No task to wake, unknown frame received! NumMids 1
Jun 15 09:35:33 var-som-mx6 user.debug kernel: 00000000: 4d090000 424d53fe 00010040 00000000  ...M.SMB@.......
Jun 15 09:35:33 var-som-mx6 user.debug kernel: 00000010: 00000008 00000003 00000000 0000002c  ............,...
Jun 15 09:35:33 var-som-mx6 user.debug kernel: 00000020: 00000000 0000002c 00000000 5db72925  ....,.......%).]
Jun 15 09:35:33 var-som-mx6 user.debug kernel: 00000030: 00000000 00000000 00000000 00000000  ................
Jun 15 09:35:33 var-som-mx6 user.debug kernel: 00000040: 00000000                             ....
Jun 15 09:35:33 var-som-mx6 user.err kernel: CIFS VFS: SMB response too long (36944 bytes)
Jun 15 09:35:33 var-som-mx6 user.err kernel: CIFS VFS: SMB response too long (32848 bytes)
Jun 15 09:35:33 var-som-mx6 user.err kernel: CIFS VFS: SMB response too long (131152 bytes)
Jun 15 09:35:33 var-som-mx6 user.err kernel: CIFS VFS: Send error in read = -11
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: SMB response too long (262224 bytes)
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: Send error in read = -11
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: SMB response too long (524368 bytes)
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: Send error in read = -11
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: No task to wake, unknown frame received! NumMids 1
Jun 15 09:35:34 var-som-mx6 user.debug kernel: 00000000: c70b0000 424d53fe 00010040 00000000  .....SMB@.......
Jun 15 09:35:34 var-som-mx6 user.debug kernel: 00000010: 00000008 00000003 00000000 0000002d  ............-...
Jun 15 09:35:34 var-som-mx6 user.debug kernel: 00000020: 00000000 0000002d 00000000 9e3cbce1  ....-.........<.
Jun 15 09:35:34 var-som-mx6 user.debug kernel: 00000030: 00000000 00000000 00000000 00000000  ................
Jun 15 09:35:34 var-som-mx6 user.debug kernel: 00000040: 00000000                             ....
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: SMB response too long (36944 bytes)
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: Send error in read = -11
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: SMB response too long (65616 bytes)
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: Send error in read = -11
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: SMB response too long (32848 bytes)
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: SMB response too long (36944 bytes)
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: SMB response too long (131152 bytes)
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: SMB response too long (524368 bytes)
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: SMB response too long (36944 bytes)
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: Send error in read = -11
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: SMB response too long (36944 bytes)
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: Send error in read = -11
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: SMB response too long (1048656 bytes)
Jun 15 09:35:34 var-som-mx6 user.err kernel: CIFS VFS: Send error in read = -11
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: Stacktrace:
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at <unknown> <0xffffffff>
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at (wrapper managed-to-native) System.IO.MonoIO.GetFileAttributes (string,System.IO.MonoIOError&) <0x0003b>
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at System.IO.MonoIO.ExistsDirectory (string,System.IO.MonoIOError&) [0x00002] in <4303af347dd845aaa4057f77fd626b35>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at System.IO.Directory.Exists (string) [0x00011] in <4303af347dd845aaa4057f77fd626b35>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at System.IO.FileStream..ctor (string,System.IO.FileMode,System.IO.FileAccess,System.IO.FileShare,int,bool,System.IO.FileOptions) [0x000c2] in <4303af347dd845aaa4057f77fd626b35>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at System.IO.FileStream..ctor (string,System.IO.FileMode,System.IO.FileAccess,System.IO.FileShare) [0x0000d] in <4303af347dd845aaa4057f77fd626b35>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at (wrapper remoting-invoke-with-check) System.IO.FileStream..ctor (string,System.IO.FileMode,System.IO.FileAccess,System.IO.FileShare) [0x0001e] in <4303af347dd845aaa4057f77fd626b35>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at System.IO.File.Open (string,System.IO.FileMode,System.IO.FileAccess,System.IO.FileShare) [0x00004] in <4303af347dd845aaa4057f77fd626b35>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at Base.IO.LongPathFile.Open (string,System.IO.FileMode,System.IO.FileAccess,System.IO.FileShare) [0x00004] in <f1cadd5a9b7a453a93d4872736b93f0f>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at Sooloos.Media.MediaFile.Create (string) [0x000aa] in <e0e551505c8c42fbb4803a0e1f14213e>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at Sooloos.Media.MediaFile.TryCreate (string,Sooloos.Media.MediaFile&) [0x00002] in <e0e551505c8c42fbb4803a0e1f14213e>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at Sooloos.Media.TagExtractionService.GetCover (string,Base.ByteBuffer&) [0x00008] in <e0e551505c8c42fbb4803a0e1f14213e>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at Sooloos.Storage.TemporaryImageCache.ExtractTagImage (string,Sooloos.Media.Tags,Sooloos.Media.MediaFile,Base.ByteBuffer) [0x0008c] in <a3fa5fb1877944e09c84f655cb969ff0>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at Sooloos.Storage.DirectoryStorage.GetLocalPathSync (System.Sooid) [0x00069] in <1b120c0991fc4ef98377fbec581897c6>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at Sooloos.Broker.Music.ImportUtils/<>c__DisplayClass13_0.<LoadImageData>b__0 (Sooloos.CallingThread) [0x0000c] in <a515eccfa1d445d3b0aae9a66cdc76a5>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at Sooloos.CallingThread/<>c__DisplayClass10_0.<DoThreadPoolWork>b__0 (object) [0x0000c] in <5c04a0e754e24a11800c20bacb7142c2>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at ThreadUtil._DoWorkItem (ThreadUtil/_WorkItem) [0x0000c] in <f1cadd5a9b7a453a93d4872736b93f0f>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at ThreadUtil._WorkerThread () [0x000b1] in <f1cadd5a9b7a453a93d4872736b93f0f>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at System.Threading.ThreadHelper.ThreadStart_Context (object) [0x0001f] in <4303af347dd845aaa4057f77fd626b35>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at System.Threading.ExecutionContext.RunInternal (System.Threading.ExecutionContext,System.Threading.ContextCallback,object,bool) [0x00073] in <4303af347dd845aaa4057f77fd626b35>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at System.Threading.ExecutionContext.Run (System.Threading.ExecutionContext,System.Threading.ContextCallback,object,bool) [0x00004] in <4303af347dd845aaa4057f77fd626b35>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at System.Threading.ExecutionContext.Run (System.Threading.ExecutionContext,System.Threading.ContextCallback,object) [0x0002f] in <4303af347dd845aaa4057f77fd626b35>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at System.Threading.ThreadHelper.ThreadStart () [0x00014] in <4303af347dd845aaa4057f77fd626b35>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer:   at (wrapper runtime-invoke) object.runtime_invoke_void__this__ (object,intptr,intptr,intptr) [0x0004f] in <4303af347dd845aaa4057f77fd626b35>:0
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: /proc/self/maps:
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 00010000-004dd000 r-xp 00000000 b3:04 96283      /opt/roon/RoonMono/bin/mono-sgen
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 004ed000-004ef000 rw-p 004cd000 b3:04 96283      /opt/roon/RoonMono/bin/mono-sgen
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 0085f000-027a4000 rw-p 00000000 00:00 0          [heap]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 65bb8000-65c69000 r-xp 00000000 b3:04 106601     /opt/roon/Appliance/libjpegdds.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 65c69000-65c78000 ---p 000b1000 b3:04 106601     /opt/roon/Appliance/libjpegdds.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 65c78000-65c7b000 rw-p 000b0000 b3:04 106601     /opt/roon/Appliance/libjpegdds.so
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 66801000-66a00000 rw-p 00000000 00:00 0          [stack:1437]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 66a01000-66c00000 rw-p 00000000 00:00 0          [stack:1436]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 66c01000-66d00000 rw-p 00000000 00:00 0          [stack:1435]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 66e01000-66f00000 rw-p 00000000 00:00 0          [stack:1434]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 68015000-68194000 rw-p 00000000 00:00 0          [stack:1433]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 68954000-68a54000 rw-s 003f0000 b3:05 187158     /var/roon/RoonEssentialsServer/Database/Core/3cbabd1f96d044d4a7b12ce3373eb161/broker_2.db/024532.log
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 69c47000-6a000000 rw-p 00000000 00:00 0          [stack:1425]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6a0a5000-6a224000 rw-p 00000000 00:00 0          [stack:1428]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6bf78000-6bf98000 rw-s 00010000 b3:05 227793     /var/roon/RoonEssentialsServer/Database/Core/3cbabd1f96d044d4a7b12ce3373eb161/transport/zone_16018de72489b0dd1344bb117ccf6dbc47af.db/000401.log
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6cc5c000-6cc6c000 r--p 00000000 b3:04 106558     /opt/roon/Appliance/Roon.Audio.Signal.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6cf01000-6d700000 rw-p 00000000 00:00 0          [stack:1422]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6da44000-6da54000 rw-s 00000000 b3:05 227794     /var/roon/RoonEssentialsServer/Database/Core/3cbabd1f96d044d4a7b12ce3373eb161/transport/zone_16018de72489b0dd1344bb117ccf6dbc47af.db/MANIFEST-000399
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6df2c000-6df3c000 rw-s 00000000 b3:05 170852     /var/roon/RoonEssentialsServer/Database/Orbit/orbitnew.db/000003.log
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6df5c000-6df6c000 rw-s 00000000 b3:05 170853     /var/roon/RoonEssentialsServer/Database/Orbit/orbitnew.db/MANIFEST-000002
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6e314000-6e354000 rw-s 00030000 b3:05 130527     /var/roon/RoonEssentialsServer/Cache/httpcache_2.db/MANIFEST-002178
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6ee1c000-6ee2c000 rw-s 00000000 b3:05 187099     /var/roon/RoonEssentialsServer/Database/Core/3cbabd1f96d044d4a7b12ce3373eb161/clientdata.db/045527.log
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6f202000-6f20e000 r--p 00000000 b3:04 106625     /opt/roon/Appliance/Roon.Audio.Meridian.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6f55f000-6f56f000 rw-s 00000000 b3:05 130515     /var/roon/RoonEssentialsServer/Cache/tidal_2.db/MANIFEST-000778
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6f709000-6f710000 r--p 00000000 b3:04 106532     /opt/roon/Appliance/Identifier.Messages.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6f730000-6f73c000 r--p 00000000 b3:04 106576     /opt/roon/Appliance/Imagoo.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6f73d000-6f7fc000 rw-p 00000000 00:00 0          [stack:1423]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6f9b8000-6f9c0000 r--p 00000000 b3:04 106617     /opt/roon/Appliance/Roon.Audio.Devialet.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 6f9c0000-6f9d2000 r--p 00000000 b3:04 106581     /opt/roon/Appliance/Roon.Audio.AirPlay.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 70105000-7010c000 r--p 00000000 b3:04 106603     /opt/roon/Appliance/Roon.Audio.Raat.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7010d000-7020c000 rw-p 00000000 00:00 0          [stack:1421]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7020c000-705f0000 r--p 00000000 b3:04 106552     /opt/roon/Appliance/Roon.Broker.Api.Remote.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 70900000-70903000 r-xp 00000000 b3:04 106598     /opt/roon/Appliance/libroonsearch.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 70903000-70913000 ---p 00003000 b3:04 106598     /opt/roon/Appliance/libroonsearch.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 70913000-70914000 rw-p 00003000 b3:04 106598     /opt/roon/Appliance/libroonsearch.so
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 70954000-70964000 rw-s 00000000 b3:05 170840     /var/roon/RoonEssentialsServer/Cache/smc.db/MANIFEST-000002
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 70a7d000-70d03000 rw-p 00000000 00:00 0          [stack:1419]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 71100000-71104000 r--p 00000000 b3:04 106528     /opt/roon/Appliance/Pebble.Messages.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7114a000-71151000 r--p 00000000 b3:04 104473     /opt/roon/RoonMono/lib/mono/gac/I18N.West/4.0.0.0__0738eb9f132ed756/I18N.West.pdb
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 71151000-71163000 r--p 00000000 b3:04 104477     /opt/roon/RoonMono/lib/mono/gac/I18N.West/4.0.0.0__0738eb9f132ed756/I18N.West.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 71194000-71293000 rw-p 00000000 00:00 0          [stack:1418]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 712c3000-71300000 r--p 00000000 b3:04 106578     /opt/roon/Appliance/Jint.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 71301000-71400000 rw-p 00000000 00:00 0          [stack:1408]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 71401000-71500000 rw-p 00000000 00:00 0          [stack:1407]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 71501000-71600000 rw-p 00000000 00:00 0          [stack:1406]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 71601000-71700000 rw-p 00000000 00:00 0          [stack:1438]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 71701000-71800000 rw-p 00000000 00:00 0          [stack:1404]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 71902000-7190c000 r--p 00000000 b3:04 104470     /opt/roon/RoonMono/lib/mono/gac/I18N/4.0.0.0__0738eb9f132ed756/I18N.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7190c000-7191c000 rw-s 00000000 b3:05 130464     /var/roon/RoonEssentialsServer/Cache/httpcache.db/MANIFEST-024759
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7192c000-71942000 r--p 00000000 b3:04 106585     /opt/roon/Appliance/Roon.Storage.Directory.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 71982000-71ba9000 r-xp 00000000 b3:04 106583     /opt/roon/Appliance/libroonmedia.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 71ba9000-71bb9000 ---p 00227000 b3:04 106583     /opt/roon/Appliance/libroonmedia.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 71bb9000-71bc5000 rw-p 00227000 b3:04 106583     /opt/roon/Appliance/libroonmedia.so
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 71bce000-71bde000 rw-s 00000000 b3:05 187143     /var/roon/RoonEssentialsServer/Database/Core/3cbabd1f96d044d4a7b12ce3373eb161/clientdata.db/MANIFEST-045526
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 71bde000-71bee000 r--p 00000000 b3:04 106557     /opt/roon/Appliance/Roon.Audio.UPnP.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 71bee000-71c00000 r--p 00000000 b3:04 106621     /opt/roon/Appliance/Roon.Tidal.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72001000-72100000 rw-p 00000000 00:00 0          [stack:1397]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72101000-72200000 rw-p 00000000 00:00 0          [stack:1429]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72314000-72324000 rw-s 00000000 b3:05 187159     /var/roon/RoonEssentialsServer/Database/Core/3cbabd1f96d044d4a7b12ce3373eb161/broker_2.db/MANIFEST-024530
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72344000-7235e000 r--p 00000000 b3:04 106535     /opt/roon/Appliance/Roon.Backup.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7237e000-723af000 r--p 00000000 b3:04 106561     /opt/roon/Appliance/ICSharpCode.SharpZipLib.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 723bf000-724e7000 r-xp 00000000 b3:01 5645       /usr/lib/libstdc++.so.6.0.21
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 724e7000-724f7000 ---p 00128000 b3:01 5645       /usr/lib/libstdc++.so.6.0.21
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 724f7000-724fc000 r--p 00128000 b3:01 5645       /usr/lib/libstdc++.so.6.0.21
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 724fc000-724fe000 rw-p 0012d000 b3:01 5645       /usr/lib/libstdc++.so.6.0.21
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72601000-72700000 rw-p 00000000 00:00 0          [stack:1395]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72701000-72800000 rw-p 00000000 00:00 0          [stack:1394]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72801000-72900000 rw-p 00000000 00:00 0          [stack:1393]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72901000-72a00000 rw-p 00000000 00:00 0          [stack:1441]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72b07000-72b13000 r--p 00000000 b3:04 106622     /opt/roon/Appliance/Roon.Storage.ITunes.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72b13000-72b50000 r-xp 00000000 b3:04 106606     /opt/roon/Appliance/libleveldb.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72b50000-72b57000 ---p 0003d000 b3:04 106606     /opt/roon/Appliance/libleveldb.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72b57000-72b59000 rw-p 0003c000 b3:04 106606     /opt/roon/Appliance/libleveldb.so
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72ba9000-72bc6000 r--p 00000000 b3:04 106588     /opt/roon/Appliance/Roon.Audio.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72bc6000-72be0000 r--p 00000000 b3:04 106533     /opt/roon/Appliance/Roon.Http.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72be0000-72c00000 r--p 00000000 b3:04 106536     /opt/roon/Appliance/Roon.FileSystem.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72d01000-72d40000 rw-p 00000000 00:00 0          [stack:1391]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72d41000-72d80000 rw-p 00000000 00:00 0          [stack:1390]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72d81000-72dc0000 rw-p 00000000 00:00 0          [stack:1389]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72dc1000-72e00000 rw-p 00000000 00:00 0          [stack:1388]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72f00000-72f07000 r--p 00000000 b3:04 106551     /opt/roon/Appliance/Roon.Songkick.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72f17000-72f6d000 r--p 00000000 b3:04 106611     /opt/roon/Appliance/Roon.Metadata.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72f6d000-72f77000 r--p 00000000 b3:04 106523     /opt/roon/Appliance/LevelDb.Database.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72f77000-72f84000 r--p 00000000 b3:04 106594     /opt/roon/Appliance/Roon.Broker.Remoting.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72fc7000-72fd8000 r-xp 00000000 b3:01 6470       /lib/libresolv-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72fd8000-72fe7000 ---p 00011000 b3:01 6470       /lib/libresolv-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72fe7000-72fe8000 r--p 00010000 b3:01 6470       /lib/libresolv-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72fe8000-72fe9000 rw-p 00011000 b3:01 6470       /lib/libresolv-2.22.so
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72feb000-72fef000 r-xp 00000000 b3:01 6471       /lib/libnss_dns-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72fef000-72ffe000 ---p 00004000 b3:01 6471       /lib/libnss_dns-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72ffe000-72fff000 r--p 00003000 b3:01 6471       /lib/libnss_dns-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 72fff000-73000000 rw-p 00004000 b3:01 6471       /lib/libnss_dns-2.22.so
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 73001000-73100000 rw-p 00000000 00:00 0          [stack:1426]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 73301000-73400000 rw-p 00000000 00:00 0          [stack:1386]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 73401000-73500000 rw-p 00000000 00:00 0          [stack:1385]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 73600000-73602000 r--p 00000000 b3:04 104469     /opt/roon/RoonMono/lib/mono/gac/I18N/4.0.0.0__0738eb9f132ed756/I18N.pdb
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 73602000-7360b000 r-xp 00000000 b3:01 6600       /lib/libnss_files-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7360b000-7361a000 ---p 00009000 b3:01 6600       /lib/libnss_files-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7361a000-7361b000 r--p 00008000 b3:01 6600       /lib/libnss_files-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7361b000-7361c000 rw-p 00009000 b3:01 6600       /lib/libnss_files-2.22.so
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 73632000-73634000 r-xp 00000000 b3:04 106548     /opt/roon/Appliance/libroonbase.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 73634000-73643000 ---p 00002000 b3:04 106548     /opt/roon/Appliance/libroonbase.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 73643000-73644000 rw-p 00001000 b3:04 106548     /opt/roon/Appliance/libroonbase.so
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 73655000-73754000 rw-p 00000000 00:00 0          [stack:1384]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 73754000-7384c000 r--p 00000000 b3:04 104480     /opt/roon/RoonMono/lib/mono/gac/System.Core/4.0.0.0__b77a5c561934e089/System.Core.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7385c000-73872000 r--p 00000000 b3:04 106542     /opt/roon/Appliance/Messaging.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 73882000-738e5000 r-xp 00000000 b3:04 98335      /opt/roon/RoonMono/lib/libMonoPosixHelper.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 738e5000-738f4000 ---p 00063000 b3:04 98335      /opt/roon/RoonMono/lib/libMonoPosixHelper.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 738f4000-738f5000 rw-p 00062000 b3:04 98335      /opt/roon/RoonMono/lib/libMonoPosixHelper.so
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 738f6000-7392a000 r--p 00000000 b3:04 104451     /opt/roon/RoonMono/lib/mono/gac/Mono.Posix/4.0.0.0__0738eb9f132ed756/Mono.Posix.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7393a000-73987000 r--p 00000000 b3:04 102403     /opt/roon/RoonMono/lib/mono/gac/Mono.Security/4.0.0.0__0738eb9f132ed756/Mono.Security.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 739fb000-73a00000 r--p 00000000 b3:04 106592     /opt/roon/Appliance/Roon.Audio.Mdns.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 73b04000-73b08000 r--p 00000000 b3:04 106541     /opt/roon/Appliance/Roon.Storage.CollectionDump.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 73b28000-73e29000 r--p 00000000 b3:04 104454     /opt/roon/RoonMono/lib/mono/gac/System.Xml/4.0.0.0__b77a5c561934e089/System.Xml.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 73e29000-73e47000 r--p 00000000 b3:04 106499     /opt/roon/RoonMono/lib/mono/gac/System.Configuration/4.0.0.0__b03f5f7f11d50a3a/System.Configuration.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 73e47000-74231000 r--p 00000000 b3:04 106539     /opt/roon/Appliance/Roon.Broker.Core.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 74231000-74261000 r--p 00000000 b3:04 106615     /opt/roon/Appliance/RoonApp.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 74261000-74500000 r--p 00000000 b3:04 100355     /opt/roon/RoonMono/lib/mono/gac/System/4.0.0.0__b77a5c561934e089/System.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 74604000-74609000 r--p 00000000 b3:04 106572     /opt/roon/Appliance/Roon.Broker.Concurrency.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 74609000-74637000 r--p 00000000 b3:04 106597     /opt/roon/Appliance/Metadata.Messages.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 74637000-74692000 r--p 00000000 b3:04 106571     /opt/roon/Appliance/Broker.Messages.Core.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 74692000-74700000 r--p 00000000 b3:04 106618     /opt/roon/Appliance/RoonBase.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 74701000-76000000 rw-p 00000000 00:00 0          [stack:1383]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76000000-7600c000 r--p 00000000 b3:04 106630     /opt/roon/Appliance/Roon.Storage.Core.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7600c000-7601d000 r--p 00000000 b3:04 106573     /opt/roon/Appliance/Roon.Media.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7601d000-7602e000 r--p 00000000 b3:04 106559     /opt/roon/Appliance/Roon.Messages.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7602e000-7604b000 r--p 00000000 b3:04 106619     /opt/roon/Appliance/Base.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7604b000-76054000 r--p 00000000 b3:04 106607     /opt/roon/Appliance/Roon.Client.Models.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76054000-760c0000 r--p 00000000 b3:04 106626     /opt/roon/Appliance/Roon.Broker.Api.dll
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76152000-764f9000 r--p 00000000 b3:04 106510     /opt/roon/RoonMono/lib/mono/4.5/mscorlib.dll
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 764f9000-764fd000 r--p 00000000 b3:04 106527     /opt/roon/Appliance/RoonEssentialsAppliance.exe
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 764fe000-76cfe000 rw-p 00000000 00:00 0          [stack:1382]
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76d3d000-76d4e000 r-xp 00000000 b3:01 6445       /lib/libnsl-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76d4e000-76d5d000 ---p 00011000 b3:01 6445       /lib/libnsl-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76d5d000-76d5e000 r--p 00010000 b3:01 6445       /lib/libnsl-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76d5e000-76d5f000 rw-p 00011000 b3:01 6445       /lib/libnsl-2.22.so
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76d61000-76d67000 r-xp 00000000 b3:01 6493       /lib/libnss_compat-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76d67000-76d76000 ---p 00006000 b3:01 6493       /lib/libnss_compat-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76d76000-76d77000 r--p 00005000 b3:01 6493       /lib/libnss_compat-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76d77000-76d78000 rw-p 00006000 b3:01 6493       /lib/libnss_compat-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76d78000-76e9f000 r-xp 00000000 b3:01 6468       /lib/libc-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76e9f000-76eae000 ---p 00127000 b3:01 6468       /lib/libc-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76eae000-76eb0000 r--p 00126000 b3:01 6468       /lib/libc-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76eb0000-76eb1000 rw-p 00128000 b3:01 6468       /lib/libc-2.22.so
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76eb4000-76ed0000 r-xp 00000000 b3:01 6446       /lib/libgcc_s.so.1
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76ed0000-76edf000 ---p 0001c000 b3:01 6446       /lib/libgcc_s.so.1
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76edf000-76ee0000 rw-p 0001b000 b3:01 6446       /lib/libgcc_s.so.1
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76ee0000-76ef5000 r-xp 00000000 b3:01 6480       /lib/libpthread-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76ef5000-76f05000 ---p 00015000 b3:01 6480       /lib/libpthread-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76f05000-76f06000 r--p 00015000 b3:01 6480       /lib/libpthread-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76f06000-76f07000 rw-p 00016000 b3:01 6480       /lib/libpthread-2.22.so
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76f09000-76f0b000 r-xp 00000000 b3:01 6641       /lib/libdl-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76f0b000-76f1a000 ---p 00002000 b3:01 6641       /lib/libdl-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76f1a000-76f1b000 r--p 00001000 b3:01 6641       /lib/libdl-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76f1b000-76f1c000 rw-p 00002000 b3:01 6641       /lib/libdl-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76f1c000-76f22000 r-xp 00000000 b3:01 6439       /lib/librt-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76f22000-76f31000 ---p 00006000 b3:01 6439       /lib/librt-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76f31000-76f32000 r--p 00005000 b3:01 6439       /lib/librt-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76f32000-76f33000 rw-p 00006000 b3:01 6439       /lib/librt-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76f33000-76f9c000 r-xp 00000000 b3:01 6485       /lib/libm-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76f9c000-76fab000 ---p 00069000 b3:01 6485       /lib/libm-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76fab000-76fac000 r--p 00068000 b3:01 6485       /lib/libm-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76fac000-76fad000 rw-p 00069000 b3:01 6485       /lib/libm-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76fad000-76fcd000 r-xp 00000000 b3:01 6605       /lib/ld-2.22.so
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76fd8000-76fd9000 rw-s 00000000 00:06 6680       /dev/shm/mono.1373
[..]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76fdc000-76fdd000 r--p 0001f000 b3:01 6605       /lib/ld-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 76fdd000-76fde000 rw-p 00020000 b3:01 6605       /lib/ld-2.22.so
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7eba5000-7ebc6000 rw-p 00000000 00:00 0          [stack]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: 7ed9a000-7ed9b000 r-xp 00000000 00:00 0          [sigpage]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: ffff0000-ffff1000 r-xp 00000000 00:00 0          [vectors]
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: Native stacktrace:
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: =================================================================
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: Got a SIGSEGV while executing native code. This usually indicates
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: a fatal error in the mono runtime or one of the native libraries 
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: used by your application.
Jun 15 09:35:34 var-som-mx6 user.notice RoonServer: =================================================================
Jun 15 09:35:35 var-som-mx6 user.notice RoonServer: Error
Jun 15 09:35:37 var-som-mx6 user.notice RoonServer: Initializing
Jun 15 09:35:37 var-som-mx6 user.notice RoonServer: Started
Jun 15 09:35:38 var-som-mx6 user.notice RoonServer: Not responding
Jun 15 09:35:42 var-som-mx6 user.notice RoonServer: has mp3float: 0, aac_fixed: 0

The Roon Essentials software scans the server directory for all music files (flac, mp3) and tries to identify all tracks using the Roon service. This error has been present for years (initially it worked without crashing, but that changed at some point in the past). I cannot revert to those old versions on either the client or the server. So while the client is scanning the directory, it crashes randomly. I was unable to pin the crashes on the client or server to a particular file or directory; it seems to be random. However, setting these options:

	max protocol = NT1
	min protocol = NT1
	server multi channel support = no

made the client run through the indexing process without crashing.
Comment 53 Jeremy Allison 2020-06-16 01:06:42 UTC
Can you log a new bug for this one?

What I think is happening is this:

1). Linux kernel cifsfs has a bug.
See the message:

Jun 15 09:35:33 var-som-mx6 user.err kernel: CIFS VFS: No task to wake, unknown frame received! NumMids 1
Jun 15 09:35:33 var-som-mx6 user.debug kernel: 00000000: 4d090000 424d53fe 00010040 00000000  ...M.SMB@.......
Jun 15 09:35:33 var-som-mx6 user.debug kernel: 00000010: 00000008 00000003 00000000 0000002c  ............,...
Jun 15 09:35:33 var-som-mx6 user.debug kernel: 00000020: 00000000 0000002c 00000000 5db72925  ....,.......%).]
Jun 15 09:35:33 var-som-mx6 user.debug kernel: 00000030: 00000000 00000000 00000000 00000000  ................
Jun 15 09:35:33 var-som-mx6 user.debug kernel: 00000040: 00000000                             ....
Jun 15 09:35:33 var-som-mx6 user.err kernel: CIFS VFS: SMB response too long (36944 bytes)

2). Linux kernel cifsfs client crashes/terminates the TCP connection to Samba - WITH OUTSTANDING AIO (that's the important part).

3). Samba (smbd) server notices the TCP connection has gone and invokes a synchronous smbd_server_connection_terminate_ex() call, which calls directly into exit_server_cleanly() -> eventually -> file_close_user() with the outstanding aio on it.

The assert_no_pending_aio() then fires, causing smbd to panic.

In the case where the underlying transport has gone away, we need to ignore pending aio requests as there's nothing we can do with them and we're exiting anyway.

Ralph, let's chat about this tomorrow.
Comment 54 Jeremy Allison 2020-06-16 01:29:57 UTC
Created attachment 16043 [details]
raw patch for disconnect with aio outstanding.

To quickly get you up and running, here is the raw patch I think you need.

I'll write tests for this.
Comment 55 Jeremy Allison 2020-06-16 01:32:10 UTC
Created attachment 16044 [details]
raw patch for disconnect with aio outstanding.

This one compiles :-).
Comment 56 fst@highdefinition.ch 2020-06-16 09:42:59 UTC
Hi. Thanks for the quick feedback.

> Linux kernel cifsfs has a bug.
Do you have a pointer to this that I can send to the RoonLabs devs, so they might be able to update?


Unfortunately I am still getting crashes. Do you want me to add some debug statements?


Jun 16 11:27:03 www smbd_audit[2116510]: [2020/06/16 11:27:03.055004,  0] ../../lib/util/fault.c:79(fault_report)
Jun 16 11:27:03 www smbd_audit[2116510]:   ===============================================================
Jun 16 11:27:03 www smbd_audit[2116510]: [2020/06/16 11:27:03.055024,  0] ../../lib/util/fault.c:80(fault_report)
Jun 16 11:27:03 www smbd_audit[2116510]:   INTERNAL ERROR: Signal 11 in pid 2116510 (4.12.3)
Jun 16 11:27:03 www smbd_audit[2116510]:   If you are running a recent Samba version, and if you think this problem is not yet fixed in the latest versions, please consider reporting this bug, see https://wiki.samba.org/index.php/Bug_Reporting
Jun 16 11:27:03 www smbd_audit[2116510]: [2020/06/16 11:27:03.055044,  0] ../../lib/util/fault.c:86(fault_report)
Jun 16 11:27:03 www smbd_audit[2116510]:   ===============================================================
Jun 16 11:27:03 www smbd_audit[2116510]: [2020/06/16 11:27:03.055056,  0] ../../source3/lib/util.c:829(smb_panic_s3)
Jun 16 11:27:03 www smbd_audit[2116510]:   PANIC (pid 2116510): internal error
Jun 16 11:27:03 www smbd_audit[2116510]: [2020/06/16 11:27:03.055670,  0] ../../lib/util/fault.c:264(log_stack_trace)
Jun 16 11:27:03 www smbd_audit[2116510]:   BACKTRACE: 35 stack frames:
Jun 16 11:27:03 www smbd_audit[2116510]:    #0 /lib64/libsamba-util.so.0(log_stack_trace+0x34) [0x7f7141d4b7b4]
Jun 16 11:27:03 www smbd_audit[2116510]:    #1 /lib64/libsmbconf.so.0(smb_panic_s3+0x27) [0x7f71417b4bc7]
Jun 16 11:27:03 www smbd_audit[2116510]:    #2 /lib64/libsamba-util.so.0(smb_panic+0x31) [0x7f7141d4b8b1]
Jun 16 11:27:03 www smbd_audit[2116510]:    #3 /lib64/libsamba-util.so.0(+0x19b11) [0x7f7141d4bb11]
Jun 16 11:27:03 www audit[2116510]: ANOM_ABEND auid=4294967295 uid=0 gid=0 ses=4294967295 pid=2116510 comm="smbd" exe="/usr/sbin/smbd" sig=6 res=1
Jun 16 11:27:03 www smbd_audit[2116510]:    #4 /lib64/libpthread.so.0(+0x14a90) [0x7f71410f3a90]
Jun 16 11:27:03 www smbd_audit[2116510]:    #5 /usr/lib64/samba/libsmbd-base-samba4.so(+0x1ed5bc) [0x7f7141b5e5bc]
Jun 16 11:27:03 www smbd_audit[2116510]:    #6 /usr/lib64/samba/libsmbd-base-samba4.so(close_file+0xc3) [0x7f7141b5ef73]
Jun 16 11:27:03 www smbd_audit[2116510]:    #7 /usr/lib64/samba/libsmbd-base-samba4.so(file_close_user+0x3d) [0x7f7141af76fd]
Jun 16 11:27:03 www smbd_audit[2116510]:    #8 /usr/lib64/samba/libsmbd-base-samba4.so(smbXsrv_session_logoff+0x51) [0x7f7141baa801]
Jun 16 11:27:03 www smbd_audit[2116510]:    #9 /usr/lib64/samba/libsmbd-base-samba4.so(+0x239baa) [0x7f7141baabaa]
Jun 16 11:27:03 www smbd_audit[2116510]:    #10 /usr/lib64/samba/libdbwrap-samba4.so(+0x5c56) [0x7f714078ac56]
Jun 16 11:27:03 www smbd_audit[2116510]:    #11 /usr/lib64/samba/libdbwrap-samba4.so(+0x5e8f) [0x7f714078ae8f]
Jun 16 11:27:03 www smbd_audit[2116510]:    #12 /usr/lib64/samba/libdbwrap-samba4.so(dbwrap_traverse+0xb) [0x7f7140788dcb]
Jun 16 11:27:03 www smbd_audit[2116510]:    #13 /usr/lib64/samba/libsmbd-base-samba4.so(smbXsrv_session_logoff_all+0x5c) [0x7f7141baad7c]
Jun 16 11:27:03 www smbd_audit[2116510]:    #14 /usr/lib64/samba/libsmbd-base-samba4.so(+0x23f44e) [0x7f7141bb044e]
Jun 16 11:27:03 www smbd_audit[2116510]:    #15 /usr/lib64/samba/libsmbd-base-samba4.so(+0x23f9e4) [0x7f7141bb09e4]
Jun 16 11:27:03 www smbd_audit[2116510]:    #16 /usr/lib64/samba/libsmbd-shim-samba4.so(exit_server_cleanly+0x18) [0x7f71411829f8]
Jun 16 11:27:03 www smbd_audit[2116510]:    #17 /usr/lib64/samba/libsmbd-base-samba4.so(smbd_server_connection_terminate_ex+0x162) [0x7f7141b8a352]
Jun 16 11:27:03 www smbd_audit[2116510]:    #18 /lib64/libtevent.so.0(tevent_common_invoke_fd_handler+0x81) [0x7f7141108f11]
Jun 16 11:27:03 www smbd_audit[2116510]:    #19 /lib64/libtevent.so.0(+0xe417) [0x7f714110f417]
Jun 16 11:27:03 www smbd_audit[2116510]:    #20 /lib64/libtevent.so.0(+0xc57b) [0x7f714110d57b]
Jun 16 11:27:03 www smbd_audit[2116510]:    #21 /lib64/libtevent.so.0(_tevent_loop_once+0x98) [0x7f7141108598]
Jun 16 11:27:03 www smbd_audit[2116510]:    #22 /lib64/libtevent.so.0(tevent_common_loop_wait+0x1b) [0x7f714110887b]
Jun 16 11:27:03 www smbd_audit[2116510]:    #23 /lib64/libtevent.so.0(+0xc50b) [0x7f714110d50b]
Jun 16 11:27:03 www smbd_audit[2116510]:    #24 /usr/lib64/samba/libsmbd-base-samba4.so(smbd_process+0x7c7) [0x7f7141b7c687]
Jun 16 11:27:03 www smbd_audit[2116510]:    #25 /usr/sbin/smbd(+0xf531) [0x561446182531]
Jun 16 11:27:03 www smbd_audit[2116510]:    #26 /lib64/libtevent.so.0(tevent_common_invoke_fd_handler+0x81) [0x7f7141108f11]
Jun 16 11:27:03 www smbd_audit[2116510]:    #27 /lib64/libtevent.so.0(+0xe417) [0x7f714110f417]
Jun 16 11:27:03 www smbd_audit[2116510]:    #28 /lib64/libtevent.so.0(+0xc57b) [0x7f714110d57b]
Jun 16 11:27:03 www smbd_audit[2116510]:    #29 /lib64/libtevent.so.0(_tevent_loop_once+0x98) [0x7f7141108598]
Jun 16 11:27:03 www smbd_audit[2116510]:    #30 /lib64/libtevent.so.0(tevent_common_loop_wait+0x1b) [0x7f714110887b]
Jun 16 11:27:03 www smbd_audit[2116510]:    #31 /lib64/libtevent.so.0(+0xc50b) [0x7f714110d50b]
Jun 16 11:27:03 www smbd_audit[2116510]:    #32 /usr/sbin/smbd(main+0x1be8) [0x56144617ca78]
Jun 16 11:27:03 www smbd_audit[2116510]:    #33 /lib64/libc.so.6(__libc_start_main+0xf2) [0x7f7140e11042]
Jun 16 11:27:03 www smbd_audit[2116510]:    #34 /usr/sbin/smbd(_start+0x2e) [0x56144617cd8e]
Jun 16 11:27:03 www smbd_audit[2116510]: [2020/06/16 11:27:03.055845,  0] ../../source3/lib/dumpcore.c:317(dump_core)
Jun 16 11:27:03 www smbd_audit[2116510]:   coredump is handled by helper binary specified at /proc/sys/kernel/core_pattern
Jun 16 11:27:03 www smbd_audit[2116510]:
Jun 16 11:27:03 www audit: BPF prog-id=12472 op=LOAD
Jun 16 11:27:03 www audit: BPF prog-id=12473 op=LOAD
Jun 16 11:27:03 www audit: BPF prog-id=12474 op=LOAD
Jun 16 11:27:03 www systemd[1]: Started Process Core Dump (PID 2116566/UID 0).
Jun 16 11:27:03 www audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@4108-2116566-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 16 11:27:03 www systemd-coredump[2116568]: Removed old coredump core.smbd.0.e450d550336f46dc982c197c9fbea594.1070219.1592220246000000000000.lz4.
Jun 16 11:27:03 www abrt-dump-journal-core[1581]: Failed to obtain all required information from journald
Jun 16 11:27:03 www systemd-coredump[2116568]: Process 2116510 (smbd) of user 0 dumped core.

                                                                 Stack trace of thread 2116510:
                                                                 #0  0x00007f7140e26a25 raise (libc.so.6 + 0x3ca25)
                                                                 #1  0x00007f7140e0f895 abort (libc.so.6 + 0x25895)
                                                                 #2  0x00007f71417a57c4 dump_core (libsmbconf.so.0 + 0x4d7c4)
                                                                 #3  0x00007f71417b4c0a smb_panic_s3 (libsmbconf.so.0 + 0x5cc0a)
                                                                 #4  0x00007f7141d4b8b1 smb_panic (libsamba-util.so.0 + 0x198b1)
                                                                 #5  0x00007f7141d4bb11 fault_report (libsamba-util.so.0 + 0x19b11)
                                                                 #6  0x00007f71410f3a90 __restore_rt (libpthread.so.0 + 0x14a90)
                                                                 #7  0x00007f7141b5e5bc assert_no_pending_aio (libsmbd-base-samba4.so + 0x1ed5bc)
                                                                 #8  0x00007f7141b5ef73 close_normal_file (libsmbd-base-samba4.so + 0x1edf73)
                                                                 #9  0x00007f7141af76fd file_close_user (libsmbd-base-samba4.so + 0x1866fd)
                                                                 #10 0x00007f7141baa801 smbXsrv_session_logoff (libsmbd-base-samba4.so + 0x239801)
                                                                 #11 0x00007f7141baabaa smbXsrv_session_logoff_all_callback (libsmbd-base-samba4.so + 0x239baa)
                                                                 #12 0x00007f714078ac56 db_rbt_traverse_internal (libdbwrap-samba4.so + 0x5c56)
                                                                 #13 0x00007f714078ae8f db_rbt_traverse (libdbwrap-samba4.so + 0x5e8f)
                                                                 #14 0x00007f7140788dcb dbwrap_traverse (libdbwrap-samba4.so + 0x3dcb)
                                                                 #15 0x00007f7141baad7c smbXsrv_session_logoff_all (libsmbd-base-samba4.so + 0x239d7c)
                                                                 #16 0x00007f7141bb044e exit_server_common (libsmbd-base-samba4.so + 0x23f44e)
                                                                 #17 0x00007f7141bb09e4 smbd_exit_server_cleanly (libsmbd-base-samba4.so + 0x23f9e4)
                                                                 #18 0x00007f71411829f8 exit_server_cleanly (libsmbd-shim-samba4.so + 0x9f8)
                                                                 #19 0x00007f7141b8a352 smbd_server_connection_terminate_ex (libsmbd-base-samba4.so + 0x219352)
                                                                 #20 0x00007f7141108f11 tevent_common_invoke_fd_handler (libtevent.so.0 + 0x7f11)
                                                                 #21 0x00007f714110f417 epoll_event_loop_once (libtevent.so.0 + 0xe417)
                                                                 #22 0x00007f714110d57b std_event_loop_once (libtevent.so.0 + 0xc57b)
                                                                 #23 0x00007f7141108598 _tevent_loop_once (libtevent.so.0 + 0x7598)
                                                                 #24 0x00007f714110887b tevent_common_loop_wait (libtevent.so.0 + 0x787b)
                                                                 #25 0x00007f714110d50b std_event_loop_wait (libtevent.so.0 + 0xc50b)
                                                                 #26 0x00007f7141b7c687 smbd_process (libsmbd-base-samba4.so + 0x20b687)
                                                                 #27 0x0000561446182531 smbd_accept_connection (smbd + 0xf531)
                                                                 #28 0x00007f7141108f11 tevent_common_invoke_fd_handler (libtevent.so.0 + 0x7f11)
                                                                 #29 0x00007f714110f417 epoll_event_loop_once (libtevent.so.0 + 0xe417)
                                                                 #30 0x00007f714110d57b std_event_loop_once (libtevent.so.0 + 0xc57b)
                                                                 #31 0x00007f7141108598 _tevent_loop_once (libtevent.so.0 + 0x7598)
                                                                 #32 0x00007f714110887b tevent_common_loop_wait (libtevent.so.0 + 0x787b)
                                                                 #33 0x00007f714110d50b std_event_loop_wait (libtevent.so.0 + 0xc50b)
                                                                 #34 0x000056144617ca78 smbd_parent_loop (smbd + 0x9a78)
                                                                 #35 0x00007f7140e11042 __libc_start_main (libc.so.6 + 0x27042)
                                                                 #36 0x000056144617cd8e _start (smbd + 0x9d8e)

                                                                 Stack trace of thread 2116511:
                                                                 #0  0x00007f71410ef1b8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0x101b8)
                                                                 #1  0x00007f714074c282 pthreadpool_server (libmessages-dgm-samba4.so + 0x7282)
                                                                 #2  0x00007f71410e8432 start_thread (libpthread.so.0 + 0x9432)
                                                                 #3  0x00007f7140eeb9d3 __clone (libc.so.6 + 0x1019d3)

                                                                 Stack trace of thread 2116564:
                                                                 #0  0x00007f71410ef1b8 pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0 + 0x101b8)
                                                                 #1  0x00007f714074c282 pthreadpool_server (libmessages-dgm-samba4.so + 0x7282)
                                                                 #2  0x00007f71410e8432 start_thread (libpthread.so.0 + 0x9432)
                                                                 #3  0x00007f7140eeb9d3 __clone (libc.so.6 + 0x1019d3)

                                                                 Stack trace of thread 2116563:
                                                                 #0  0x00007f71410f324f __pread64 (libpthread.so.0 + 0x1424f)
                                                                 #1  0x00007f7141956f5b pread (libsys-rw-samba4.so + 0xf5b)
                                                                 #2  0x00007f7141956fd1 sys_pread_full (libsys-rw-samba4.so + 0xfd1)
                                                                 #3  0x00007f7141aefe48 vfs_pread_do (libsmbd-base-samba4.so + 0x17ee48)
                                                                 #4  0x00007f714074c3a7 pthreadpool_server (libmessages-dgm-samba4.so + 0x73a7)
                                                                 #5  0x00007f71410e8432 start_thread (libpthread.so.0 + 0x9432)
                                                                 #6  0x00007f7140eeb9d3 __clone (libc.so.6 + 0x1019d3)
Comment 57 Jeremy Allison 2020-06-16 21:34:37 UTC
Ah, now I see the problem.

file_close_user() calls close_file() with SHUTDOWN_CLOSE correctly, but with a NULL req pointer. So the check I added for a dead client connection:

        if (close_type == SHUTDOWN_CLOSE &&
                        !NT_STATUS_IS_OK(req->xconn->transport.status)) {
                return;
        }

will dereference NULL (req == NULL) and crash at this point.

Let me look again at an alternate fix...
Comment 58 Jeremy Allison 2020-06-17 01:20:53 UTC
Created attachment 16054 [details]
git-am fix for 4.12.next

Can you test this one instead? I think it might fix it for 4.12.next. The fix for master is different, but these are preliminary fixes anyway, as we'll need regression tests for this code before it can really go in.

I just want to see if I'm correct on this :-).
Comment 59 Jeremy Allison 2020-06-17 21:48:46 UTC
Created attachment 16055 [details]
supplemental git-am fix for 4.12.x

Third time lucky...
Comment 60 Ralph Böhme 2020-06-18 19:52:02 UTC
(In reply to Jeremy Allison from comment #59)
Maybe we could make smbXsrv_client_valid_connections() public and reuse that.
Comment 61 Jeremy Allison 2020-06-18 20:00:41 UTC
Yes, my master patch does that. But smbXsrv_client_valid_connections() doesn't exist in 4.12.x, and that's what the user is using.
Comment 62 Ralph Böhme 2020-06-18 20:06:46 UTC
(In reply to Jeremy Allison from comment #61)
ah, cool! :)
Comment 63 Jeremy Allison 2020-06-18 20:27:48 UTC
From private email:

To: Jeremy Allison <jra@samba.org>
Subject: Re: Attachement 16054
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Thunderbird/68.9.0

Am 18.06.2020 um 21:02 schrieb Jeremy Allison:

> Ping - did this one work ? If you can confirm, then
> I can start the regression tests and get a general
> fix into master, 4.12.next.
>
> Cheers,
>
> Jeremy.
>
Hi Jeremy

Sorry, I thought I had replied. There have been no crashes so far.

So the 4.12.x fix is correct. I'll write some regression tests and move this one forward.
Comment 64 Jeremy Allison 2020-06-22 22:10:51 UTC
Created attachment 16064 [details]
git-am supplemental fix for master.

Storing here so I don't lose it. Contains regression test and proper fix for master.
Comment 65 Jeremy Allison 2020-06-23 01:07:11 UTC
Created attachment 16065 [details]
git-am supplemental fix for master.

Slightly improved comments.
Comment 66 Jeremy Allison 2020-06-23 22:52:45 UTC
Created attachment 16069 [details]
git-am supplemental fix for master.

Got through CI.
Comment 67 Jeremy Allison 2020-06-25 22:07:26 UTC
Created attachment 16091 [details]
git-am fix for 4.12.next.

Back-ported from fix that went into master.
Comment 68 Ralph Böhme 2020-06-26 07:32:06 UTC
Reassigning to Karolin for inclusion in 4.12.
Comment 69 Karolin Seeger 2020-06-26 07:50:07 UTC
(In reply to Ralph Böhme from comment #68)
Pushed to autobuild-v4-12-test.
Comment 70 Karolin Seeger 2020-06-29 06:54:11 UTC
Pushed to v4-12-test.
Closing out bug report.

Thanks!