Running Samba 3.0.2 on Red Hat 7.3 on an IBM xSeries 345 (2.8 GHz, 1 GB RAM) with
a 1 Gb/s connection to the LAN.
Currently we are serving about 700 different shares on this machine, so we have
a big smb.conf. But we are running into a scalability issue. If we trigger a
reload of smb.conf (either through smbcontrol or just by editing the file), we
can see all smbd processes using a lot of CPU. It seems that each smbd process
does its own parsing of the smb.conf file.
If we have enough connections open (meaning a lot of smbd processes) this
becomes so bad that the smbd processes hang the server. In our case this
starts at around 500 smbd processes. We then have to wait 15 minutes or more
for everything to start running normally again.
Two possible ways to fix this:
- Have the root smbd process do the smb.conf parsing and then distribute the
parsed data to the child processes.
- Make the smb.conf change-check interval an option in smb.conf so that we can
increase the interval to spread the load. This only works if each smbd
process has its own counter.
An initial profile shows over half a million calls to strwicmp. This is mostly
due to checks in add_a_service via getservicebyname and to map_parameter. In
testing I have been able to reduce this to only ~37k calls, but I have not
confirmed this as the source of the hang (although it seems likely).
Lowering priority based on resources.
I can confirm this bug in samba-3.0.13.
Known issue. I have plans for this, possibly in 3.0.22, but it's a major
rework of the smb.conf and parameter code.
Created attachment 1523
Spread the re-load of smb.conf
Jerry, what about something along the lines of the attached (untested) patch
until we have streamlined loadparm.c?
Volker, the patch is probably ok, but I would prefer to have some testing as
proof that it actually helps alleviate the load.