# Ubuntu Open file limit

## February 23, 2016 by dc

UPDATE: If you run supervisor as root, just set minfds to a suitable value (32K or 64K). When the current hard limit is too low, supervisor will raise it itself as long as it runs as root. It's the simplest way!

Several days ago my Elasticsearch hit a "Too many open files" error. The straightforward fix is to modify the ulimit configuration, but today I ran into the same error again, so my earlier fix had gone down the wrong path.
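For reference, the usual ulimit fix is an entry in /etc/security/limits.conf; a sketch, assuming Elasticsearch runs as a dedicated `elasticsearch` user (user name and values are illustrative):

```
# /etc/security/limits.conf: raise the open-file limit for one user
elasticsearch  soft  nofile  65536
elasticsearch  hard  nofile  65536
```

The soft limit is what a process starts with; the hard limit is the ceiling an unprivileged process may raise it to. As the rest of this post shows, this alone is not enough when the process is started by supervisor.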

Why did I hit that error again?

Elasticsearch is managed by supervisor, which runs as root. Because supervisor is a process management program, it limits the resources of its child processes, so it provides a config option, minfds: the minimum number of file descriptors that must be available to supervisor. If you omit it, the default is 1024, even if your user has a higher nofile value.

Supervisor sets its own soft and hard open-file limits to 1024 and 4096, and a child process cannot exceed the limits it inherits!
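So the fix lives in supervisord.conf; a minimal sketch, with an illustrative value:

```ini
[supervisord]
; Minimum fds available before supervisor starts; when running as root,
; supervisor raises its hard limit to this value, and children inherit it.
minfds=65536
```

After changing this, restart supervisord itself (not just the managed programs), since the limit is applied when the supervisor process starts.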

You can get each Elasticsearch node's process info:

```
curl http://example.com:9200/_nodes/process?pretty=true
```


In the response you will find max_file_descriptors for each node. After adding minfds to the supervisor config, the old 1024 barrier was finally broken!

In Ubuntu (and probably most Linux distributions) there are two levels of open-file limits, kernel space and user space, and the user-space limits must stay within the kernel-space ones.

nr_open in /proc/sys/fs/ is the maximum number of fds a single process can allocate; it is the ceiling for the hard limit in ulimit.

file-max is the maximum number of file handles the Linux kernel will allocate, counted across all processes on the system.

file-nr reports three numbers: the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles; this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles.
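All of these values can be inspected directly; a quick sketch (the /proc paths are standard on Linux):

```shell
# Per-process ceiling: the upper bound for the ulimit hard limit
cat /proc/sys/fs/nr_open

# System-wide maximum number of file handles the kernel will allocate
cat /proc/sys/fs/file-max

# Three columns: allocated, allocated-but-unused, maximum
cat /proc/sys/fs/file-nr

# Current shell's soft and hard open-file limits
ulimit -Sn
ulimit -Hn
```

Comparing `ulimit -Hn` inside the supervised process (e.g. via the _nodes/process API above) with these kernel values is the quickest way to see which layer is actually capping you.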