Re: slapd can't handle too many concurrent BINDs

Tim Howes (howes@netscape.com)
Fri, 17 May 1996 11:05:34 -0700

Jeff.Hodges@Stanford.EDU wrote:
>
> slapd can't handle too many concurrent BINDs, where "too many" is defined to
> be greater than the per-process file descriptor allocation on the server OS.
>
> We noticed this via a poorly-behaved client (sendmail) that had a bug wherein
> it simply did a bind for each piece of mail received. We tried it with a
> high-volume input and lo & behold, slapd immediately ran out of file
> descriptors and hung.
>
> Perusing our logs (example below) indicates that it died after consuming 63
> file descriptors, which is exactly what you'd expect given the way we had our
> (default) limits set up for that process (see limit(2)).
>
> So, lessons & to-do's learned here seem to be...
>
> 1. a single, gluttonous client can bring down slapd.
>
> 1a. corollary: the descent of a slew of clients at once upon a single instance
> of slapd will bring down that slapd, if the size of the slew exceeds slapd's
> current file descriptor limit.
>
> 2. look into allocating more-than-default file descriptors for the slapd
> process when deploying it, especially if your server may become well-known to a
> wide field of potential clients (see limit(2), and the sketch just after this
> list).
>
> 3. Can slapd somehow recover more gracefully from this case of running out of
> file descriptors? Ostensibly, it could return an error on the BINDs that push
> it over the limit, and continue working for the clients that successfully
> bound.
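
Regarding point 2 above: here is a minimal sketch (mine, not anything from
slapd or this thread) of checking and raising the soft descriptor limit with
getrlimit(2)/setrlimit(2) before a server starts accepting connections. The
FD_TARGET of 1024 is only an illustrative number; the same effect can usually
be had from the shell (csh's limit builtin, for example) before starting slapd.

/*
 * Sketch: raise the soft per-process descriptor limit toward the hard
 * limit before starting a server.  FD_TARGET is an illustrative number,
 * not a slapd constant.
 */
#include <stdio.h>
#include <sys/resource.h>

#define FD_TARGET 1024

int
main(void)
{
        struct rlimit rl;

        /* Read the current soft (rlim_cur) and hard (rlim_max) limits. */
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
                perror("getrlimit");
                return 1;
        }
        printf("soft limit %lu, hard limit %lu\n",
            (unsigned long) rl.rlim_cur, (unsigned long) rl.rlim_max);

        /* Raise the soft limit, but never past the hard limit imposed
         * by the administrator or the kernel. */
        rl.rlim_cur = (rl.rlim_max < FD_TARGET) ? rl.rlim_max : FD_TARGET;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
                perror("setrlimit");
                return 1;
        }
        printf("soft limit is now %lu descriptors\n",
            (unsigned long) rl.rlim_cur);
        return 0;
}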

I'm sure it could, though I don't have the code written or anything.
I think the approach you suggest (error on subsequent binds until
things quiet down) is the right one. -- Tim
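
Tim's suggested approach, and Jeff's point 3, amount to budgeting descriptors
and turning new connections away instead of hanging. The following is a rough
sketch of that idea only, not slapd's actual accept loop: RESERVED_FDS,
LISTEN_PORT, and the refuse-by-closing policy are all illustrative, and a real
LDAP server would send an error result on the offending BIND before closing
the connection. The two checks around accept() are the point; the normal
per-connection machinery is elided.

/*
 * Sketch: derive a connection budget from RLIMIT_NOFILE and refuse new
 * connections once it is reached, so existing clients keep working.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/resource.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define RESERVED_FDS    8       /* spare descriptors for logs, database files, etc. */
#define LISTEN_PORT     3890    /* illustrative test port, not the LDAP default */

static long max_clients;        /* connection budget, set at startup */
static long active_clients;     /* client connections currently open */

int
main(void)
{
        struct rlimit rl;
        struct sockaddr_in sin;
        int listenfd;

        /* Size the connection budget from the real descriptor limit. */
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
                perror("getrlimit");
                return 1;
        }
        max_clients = (long) rl.rlim_cur - RESERVED_FDS;
        printf("descriptor limit %lu, serving at most %ld clients\n",
            (unsigned long) rl.rlim_cur, max_clients);

        listenfd = socket(AF_INET, SOCK_STREAM, 0);
        if (listenfd < 0) {
                perror("socket");
                return 1;
        }
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        sin.sin_port = htons(LISTEN_PORT);
        if (bind(listenfd, (struct sockaddr *) &sin, sizeof(sin)) < 0 ||
            listen(listenfd, 5) < 0) {
                perror("bind/listen");
                return 1;
        }

        for (;;) {
                int fd = accept(listenfd, NULL, NULL);

                if (fd < 0) {
                        /* Out of descriptors anyway: log it and keep serving
                         * the clients we already have instead of dying. */
                        if (errno == EMFILE || errno == ENFILE)
                                fprintf(stderr, "out of descriptors, deferring accepts\n");
                        sleep(1);
                        continue;
                }
                if (active_clients >= max_clients) {
                        /* Over budget: turn the newcomer away.  A real server
                         * would send an error result on the BIND first. */
                        fprintf(stderr, "too many clients, refusing new connection\n");
                        close(fd);
                        continue;
                }
                active_clients++;
                /* ... hand fd to the normal per-connection machinery, which
                 * decrements active_clients when the connection closes ... */
        }
}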