slapd can't handle too many concurrent BINDs

Jeff.Hodges@Stanford.EDU
Fri, 17 May 96 10:41:23 -0700

slapd can't handle too many concurrent BINDs, where "too many" means anything
greater than the per-process file descriptor limit on the server OS.

We noticed this via a poorly behaved client (sendmail) that had a bug wherein
it simply did a BIND for each piece of mail received. We tried it with a
high-volume input and, lo & behold, slapd immediately ran out of file
descriptors and hung.

Perusing our logs (example below) indicates that it died after consuming 63
file descriptors, which is a given with the way we had our (default) limits
set up for that process (see limit(2)).

So, the lessons learned & to-do's here seem to be...

1. a single, gluttonous client can bring down slapd.

1a. corollary: a slew of clients descending at once upon a single instance of
slapd will likewise bring it down, if slew > slapd's current file descriptor
limit.

2. look into allocating more-than-default file descriptors for a process when
deploying slapd, especially if your server may become well-known to a wide
field of potential clients (see limit(2)).
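As a rough sketch of point 2 (this is not actual slapd code; raise_fd_limit()
is a hypothetical helper name), a process can raise its own soft descriptor
limit toward the hard limit at startup using getrlimit(2)/setrlimit(2):

```c
#include <sys/resource.h>

/* Hypothetical startup helper: raise the soft file-descriptor limit
   (the one our slapd hit at 63) up to whatever hard limit the OS
   allows this process.  Returns 0 on success, -1 on failure. */
int raise_fd_limit(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;
    rl.rlim_cur = rl.rlim_max;          /* soft limit -> hard limit */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;
    return 0;
}
```

Raising the hard limit itself generally requires privilege; alternatively the
soft limit can be bumped in the shell before starting slapd (e.g. csh's limit
builtin, see limit(2) on your system).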

3. Can slapd somehow recover more gracefully from this case of running out of
file descriptors? Ostensibly, it could return an error on the BINDs that push
it over the limit, and continue working for the clients that successfully
bound.

thanks,

Jeff

------------------------------------------------------------------------------
conn=768 fd=39 connection from unknown (36.21.0.128)
conn=768 op=0 BIND dn="" method=128
conn=768 op=0 RESULT err=0 tag=97 nentries=0
conn=769 fd=60 connection from unknown (36.21.0.128)
conn=769 op=0 BIND dn="" method=128
conn=769 op=0 RESULT err=0 tag=97 nentries=0
conn=770 fd=61 connection from unknown (36.21.0.128)
conn=770 op=0 BIND dn="" method=128
conn=770 op=0 RESULT err=0 tag=97 nentries=0
conn=771 fd=62 connection from unknown (36.21.0.128)
conn=772 fd=63 connection from unknown (36.21.0.128)
accept() failed errno 24 (Too many open files)