The Samba-Bugzilla – Bug 6527
idmap falls back to default config until rescan_trusted_domains succeeds
Last modified: 2019-02-16 20:38:35 UTC
The following applies to 3.2, 3.3, 3.4 and master.
The scenario is this:
Samba is a member of a domain DOM1.
DOM1 has a trusted domain DOM2.
samba has a configuration section like this (using the rid backend, for example):

  idmap config DOM2 : backend = rid
  idmap config DOM2 : range = 100000-200000

and a default idmap config using the tdb backend
with a different range, say 500000-600000.
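Taken together, the setup described above might look like this in smb.conf (a sketch only; the exact parameter spellings for the default backend vary between 3.x releases, so check smb.conf(5) for your version):

```ini
[global]
    workgroup = DOM1
    security = domain

    ; default idmap config (tdb backend)
    idmap backend = tdb
    idmap uid = 500000-600000
    idmap gid = 500000-600000

    ; per-domain config for the trusted domain DOM2
    idmap config DOM2 : backend = rid
    idmap config DOM2 : range = 100000-200000
```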
Now winbindd asks its DC for the list of its trusted
domains to build the internal list of domains.
When this request does not succeed (for whatever reason:
incomplete machine account replication after a join,
network problems, ...), winbindd will start answering
requests with an incomplete list of domains.
When a sid2uid call is answered for a SID from DOM2,
this will lead to winbindd assigning a UID from the
default config range 500000-600000 to this SID instead
of an ID from the intended range. Winbindd will also
store this mapping in the idmap cache and keep using it
for quite some time.
The reason is that sid2uid currently works as follows:
first, find_domain_from_sid() is called to look up the
SID in the internal list of domains. This list is
still incomplete and does not contain DOM2, so the SID
is not recognized as belonging to DOM2. Therefore sid2uid
falls back to the default idmap config.
There are several possible ways around this:
* One could only start serving requests after the DC has successfully been contacted and the list of trusted domains has successfully been retrieved.
* One could also stop building the complete list of domains up front and instead add domains as they come along. In the present example of the sid2uid call, sid2uid could try to look up the domain part of the SID with a lookup_sids call. If successful, add the trusted domain and proceed with the mapping.
In both cases it would be important not to process these requests when the trusted domains could not be retrieved.
Is there a chance to fix that for 3.4.1 (scheduled for August 18)?
3.4.1 will be released in two days, and currently there is no patch available, so it won't be fixed in 3.4.1. But it would be nice to address this one in 3.4.2.
Any chance, Michael?
I remember I fixed this some time ago.