I recently had problems with servers running a Java application server: they suddenly began to show strange errors like “broken pipe” or exhausted resources. This is often due to the high number of files that a modern server can open, especially compared to the default limit on Linux systems, which still stands at 1024 per process.
Let’s see how many open files are present on our system and how to resolve, or better yet prevent, this problem.
Check the open files of a process
Step # 1 Find out program PID
Let’s check for a tomcat process
# ps aux | grep tomcat
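The PID can also be captured in a variable for the next steps. A minimal sketch, assuming the process command line matches the name "tomcat" (it falls back to the current shell's PID so there is always something to inspect):

```shell
# pgrep -f matches against the full command line, which catches JVM
# processes launched as "java ... catalina ...". "tomcat" is an assumed name.
pid=$(pgrep -f tomcat | head -n 1)
# Fall back to the current shell's PID if no Tomcat is running
pid=${pid:-$$}
echo "inspecting PID $pid"
```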
Step # 2 List the files opened by PID 12390
Use the lsof command or the /proc/PID file system to display the fd list:
# lsof -p 12390 | wc -l
# cd /proc/12390/fd
# ls -l | wc -l
At this point we can see the total number of files that PID has open; if we are close to 1024, we are going to have problems.
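To see where a process stands relative to its limit, both numbers can be read directly. This sketch inspects the current shell via /proc/self, so it runs even without a Tomcat instance; substitute the target PID for self in practice:

```shell
# Count open file descriptors via /proc; /proc/self is the current shell,
# standing in for the PID found above
fd_count=$(ls /proc/self/fd | wc -l)
# Read the per-process soft limit on open files
limit=$(ulimit -n)
echo "open fds: $fd_count / limit: $limit"
```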
Tuning file descriptor limits on Linux
Linux limits the number of file descriptors that any one process may open; the default limit is 1024 per process. These limits can prevent optimum performance of both benchmarking clients (such as httperf and apachebench) and of the web servers themselves (Apache is not affected, since it uses a process per connection, but single process web servers such as Zeus use a file descriptor per connection, and so can easily fall foul of the default limit).
The open file limit is one of the limits that can be tuned with the ulimit command. The command ulimit -aS displays the current limit, and ulimit -aH displays the hard limit (above which the limit cannot be increased without tuning kernel parameters in /proc).
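If you only care about the open-files limit, you can query it directly instead of reading the full ulimit -a listing:

```shell
# -n selects the open-files limit; -S and -H pick the soft and hard values
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"
```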
The following is an example of the output of ulimit -aH. You can see that the current shell (and its children) is restricted to 1024 open file descriptors.
core file size (blocks)     unlimited
data seg size (kbytes)      unlimited
file size (blocks)          unlimited
max locked memory (kbytes)  unlimited
max memory size (kbytes)    unlimited
open files                  1024
pipe size (512 bytes)       8
stack size (kbytes)         unlimited
cpu time (seconds)          unlimited
max user processes          4094
virtual memory (kbytes)     unlimited
The file descriptor limit can be increased using the following procedure:
Edit /etc/security/limits.conf and add the lines:
* soft nofile 1024
* hard nofile 65535
This will increase the limits for all users of the machine; if you want to do this for a specific user, replace * with that username.
If you log in to that machine via ssh, you may also need to edit /etc/pam.d/login, adding the line:
session required /lib/security/pam_limits.so
Note that you may need to log out and back in again before the changes take effect.
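After logging back in you can confirm the new limit. For a process that is already running, the effective limit can also be read from /proc (shown here for the current shell; use the target PID instead of self):

```shell
# The shell's own soft limit after re-login
ulimit -n
# Effective limits of a running process, as the kernel sees them
max_open=$(grep "Max open files" /proc/self/limits)
echo "$max_open"
```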
Personally, I suggest raising this value tenfold, to 10240, while keeping an eye on memory, which is the resource most affected by this increase.