When a search is in flight and currently being processed against the Elasticsearch server, we set s->pending. The destructor of "s" checks "pending" and rejects deallocation of the object while it is set.

One path where "s" is asked to be deallocated is when the client closes the top-level per-share search connection. This implicitly closes all searches associated with the mds_ctx from mds_ctx_destructor_cb():

    while (mds_ctx->query_list != NULL) {
            /*
             * slq destructor removes element from list.
             * Don't use TALLOC_FREE()!
             */
            talloc_free(mds_ctx->query_list);
    }

When this happens, the Elasticsearch backend query object stays around, along with any active tevent_req request and the tevent_req timer set with tevent_req_set_endtime() in mds_es_search_send(). When that timer later expires, it tries to remove the search from the connection context's list of searches, but as that context is already gone we crash accessing invalid memory one way or another.

I have a patch and need a bug number...
This bug was referenced in samba master:
3254622a307dde7ca12d90ceb58336a6948fa6d2
c0d46796d435174ff71ede9175097fc01546d69f
5b750d6b330a53f96924106eddb5be4224a5fc4a
2fc2c7d4b0b9e5351a6f4f4e3c574e8504b0a536
9b56c7030f86f24a5b21f2a972a641afb556f7ab
9b0e61ff75db0d875da81ada6d2333b01985d264
1150d121b7f6588de1aa37eac810c19dbfc07a71
ac13935a58518a3af34fd49701846b8dbe72b7b0
c9ecd33ad7db1ebf0b45c84b3909da7f5d719856
61c6a00f550a6ffc8fe704e15bc44134befc40c8
This bug was referenced in samba v4-17-test:
3254622a307dde7ca12d90ceb58336a6948fa6d2
c0d46796d435174ff71ede9175097fc01546d69f
5b750d6b330a53f96924106eddb5be4224a5fc4a
2fc2c7d4b0b9e5351a6f4f4e3c574e8504b0a536
9b56c7030f86f24a5b21f2a972a641afb556f7ab
9b0e61ff75db0d875da81ada6d2333b01985d264
1150d121b7f6588de1aa37eac810c19dbfc07a71
ac13935a58518a3af34fd49701846b8dbe72b7b0
c9ecd33ad7db1ebf0b45c84b3909da7f5d719856
61c6a00f550a6ffc8fe704e15bc44134befc40c8
This bug was referenced in samba v4-17-stable (Release samba-4.17.0rc1):
3254622a307dde7ca12d90ceb58336a6948fa6d2
c0d46796d435174ff71ede9175097fc01546d69f
5b750d6b330a53f96924106eddb5be4224a5fc4a
2fc2c7d4b0b9e5351a6f4f4e3c574e8504b0a536
9b56c7030f86f24a5b21f2a972a641afb556f7ab
9b0e61ff75db0d875da81ada6d2333b01985d264
1150d121b7f6588de1aa37eac810c19dbfc07a71
ac13935a58518a3af34fd49701846b8dbe72b7b0
c9ecd33ad7db1ebf0b45c84b3909da7f5d719856
61c6a00f550a6ffc8fe704e15bc44134befc40c8