  1. #1

    Max open file descriptors stuck at 1024 on CentOS 4?

    I'm running a server daemon that over 1000 people connect to at the same time, and it worked perfectly on FreeBSD 5.4. However, after I migrated to CentOS 4, the daemon started printing "accept: Bad file descriptor" errors on the console once it hit 1024 open file descriptors, and it disconnected users trying to connect. ulimit -n quickly revealed that the open file limit was set to 1024 and was responsible. I read up on how to raise the limit in CentOS and tried the following things (in different combinations):


    • Making sure /proc/sys/fs/file-max was big enough
    • Adding "fs.file-max = 2048" to /etc/sysctl.conf
    • Adding "* - nofile 2048" to /etc/security/limits.conf
    • Adding "session required pam_limits.so" to /etc/pam.d/login and /etc/pam.d/sshd
    • Changing the limit using "ulimit -n 65536" as root
    • Restarting sshd several times
    • Rebooting the machine several times


    None of this seemed to work. In fact, at some point the daemon would actually segfault when it hit the limit instead of just disconnecting users attempting to connect. Lowering the limit back to 1024 in /etc/security/limits.conf stopped the crashes and brought me back to the bad file descriptor errors. Once again, the same software worked just fine with over 1024 users on FreeBSD 5.4, so I'm pretty sure the crashing isn't the software's fault.

    Am I overlooking something in the CentOS config that would cause this? I'm out of ideas; I've tried just about everything Google would come up with.
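
    For reference, a process that merely runs out of descriptors gets EMFILE ("Too many open files") from accept(), not EBADF, so the "Bad file descriptor" message is itself a little suspicious. Below is a minimal sketch of an accept loop that tells the two cases apart (illustrative code, not the daemon's actual source):

        #include <errno.h>
        #include <stdio.h>
        #include <sys/socket.h>
        #include <unistd.h>

        /* Accept connections forever, distinguishing "out of descriptors"
           (EMFILE/ENFILE) from a genuinely bad listening fd (EBADF). */
        void accept_loop(int listen_fd)
        {
            for (;;) {
                int client = accept(listen_fd, NULL, NULL);
                if (client >= 0) {
                    /* hand the client off to the rest of the server here */
                    close(client);              /* placeholder */
                    continue;
                }
                if (errno == EINTR)
                    continue;                   /* interrupted by a signal */
                if (errno == EMFILE || errno == ENFILE) {
                    perror("accept: fd limit"); /* limit hit: back off, survive */
                    sleep(1);
                    continue;
                }
                /* EBADF lands here: the listening fd itself has gone bad */
                perror("accept");
                break;
            }
        }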

  2. #2
    Hi,

    Just type

    ulimit -n <newlimit>

    and then check with ulimit -n; it will show the new limit.

    If it gets reset when the server reboots, consider adding the above line to

    /etc/rc.d/rc.local
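
    Alternatively, the daemon can raise its own soft limit up to the hard limit when it starts, which works no matter how it is launched. A minimal C sketch (assumed startup code, not taken from the daemon in question):

        #include <stdio.h>
        #include <sys/resource.h>

        /* Raise the soft RLIMIT_NOFILE (max open fds) to the hard limit. */
        int raise_fd_limit(void)
        {
            struct rlimit rl;
            if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
                perror("getrlimit");
                return -1;
            }
            rl.rlim_cur = rl.rlim_max;  /* soft limit up to the hard ceiling */
            if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
                perror("setrlimit");
                return -1;
            }
            printf("open file limit now %lu\n", (unsigned long)rl.rlim_cur);
            return 0;
        }

    Note that an unprivileged process can only raise the soft limit as far as the hard limit, so the hard limit still has to be set somewhere (limits.conf, or by root before dropping privileges).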

    Be happy.

    Sincerely,
    Carmen [carmen@instacarma.com]
    InstaCarma.com
    24x7 Technical Support and Server Management

  3. #3
    Quote Originally Posted by MikeHart
    • Changing the limit using "ulimit -n 65536" as root
    Thanks for the reply, but I already tried that. The problem is that you can only do it as root, and it only changes the value for root. Normal users still have their own limits, and those can't be raised with ulimit.

    Editing /etc/security/limits.conf and then checking ulimit -n as a normal user does show the change, and the bad file descriptor errors stop, but instead any program that tries to open more than 1024 files/connections segfaults.
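
    One guess at the segfaults, assuming the daemon multiplexes clients with select(): on Linux glibc, fd_set is a fixed bitmap of FD_SETSIZE (1024) bits, and FD_SET() on a descriptor numbered 1024 or higher writes past the end of it, corrupting nearby memory. That would also explain accept() suddenly reporting a bad listening descriptor. FreeBSD lets you redefine FD_SETSIZE at compile time, which may be why the same code survived there. poll() has no such ceiling; a rough sketch of the equivalent wait (illustrative, not the daemon's code):

        #include <poll.h>
        #include <stdio.h>

        /* Block until one of nfds descriptors is readable. Unlike select(),
           poll() takes an array and is not capped at FD_SETSIZE (1024). */
        int wait_readable(struct pollfd *fds, nfds_t nfds)
        {
            int ready = poll(fds, nfds, -1);    /* -1 = no timeout */
            if (ready < 0)
                perror("poll");
            /* entries with fds[i].revents & POLLIN are ready to read */
            return ready;
        }

    Each entry is set up as fds[i].fd = some_fd; fds[i].events = POLLIN; before the call.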

  4. #4
    Hi Mike,

    You can set the limits for groups instead of individual users to change the limits for all users at once.

    You also need to make sure pam_limits is configured in /etc/pam.d/system-auth, or, if you don't want to enable it for every login method, in /etc/pam.d/sshd for ssh, /etc/pam.d/su for su, or /etc/pam.d/login for local logins and telnet. Here are the two session entries you need in the /etc/pam.d/system-auth file:

    session required /lib/security/$ISA/pam_limits.so
    session required /lib/security/$ISA/pam_unix.so

    Now log in to the user account again, since the changes only take effect for new login sessions. Note that the ulimit syntax differs in other shells (csh and tcsh use "limit descriptors", for example).

    To make the change permanent, set the soft limit equal to the hard limit in /etc/security/limits.conf, which is what I prefer:

    username soft nofile 65536
    username hard nofile 65536

    (Use "@groupname" in place of "username" to apply the limit to a group.)
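
    To confirm the new limit actually applies to a fresh login, here is a throwaway C test that eats descriptors until the kernel says no (purely illustrative):

        #include <stdio.h>
        #include <unistd.h>

        /* dup() stdin until the per-process fd limit is hit, then report
           how many descriptors we got. The count plus the 3 standard fds
           should match ulimit -n for the session. */
        int main(void)
        {
            long count = 0;
            while (dup(0) >= 0)
                count++;
            printf("ran out after %ld extra descriptors\n", count);
            return 0;
        }

    Run it as the affected user after logging back in.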

    Sincerely,
    InstaCarma.com
    24x7 Technical Support and Server Management
