Open files limit 1024 and network connection timeouts (maximum sockets reached?)

Hello, on 5.8.16-2-MANJARO, “ulimit -a” shows I have a 1024 open-files limit, so from a root terminal (“su”) I tried “ulimit -n 30000”, and it changed the “ulimit -a” output. The “dstat --socket” command shows that the total number of sockets is approaching the 1000 mark. “journalctl -r | head -n 40” shows that my proxy software, shadowsocks-qt5, continues to flood the log with “ss-qt5[1763811]: TCP connection timeout.” messages.
The server this app connects to has an open-files limit of 51200, and the timeouts also happen with a different server I tried.

Can you kindly suggest commands to discover the cause, to find out which software limit is being reached, or to fix this? Thank you.
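One generic way to check whether a specific process is actually hitting its descriptor limit (a diagnostic sketch: the current shell, `$$`, stands in for the ss-qt5 PID, which you would get via `pidof ss-qt5`):

```shell
# Inspect a single process's file-descriptor situation.
# Here the current shell ($$) is used as a stand-in PID; for the proxy,
# you would use: pid=$(pidof ss-qt5)
pid=$$
grep 'open files' "/proc/$pid/limits"   # per-process soft and hard limits
ls "/proc/$pid/fd" | wc -l              # descriptors currently open
```

If the second number is close to the soft limit from the first line, that process really is running out of descriptors; if it is far below, the limit is probably not the cause of the timeouts.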

create a file /etc/sysctl.d/20-max-user-watches.conf
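The contents of that file were not quoted in the thread; given its name, it presumably raises the inotify watch limit, something like the following (the value 524288 is only a commonly used example, an assumption here, not taken from the original post):

```
fs.inotify.max_user_watches = 524288
```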


then reload

sudo sysctl --system

I know nothing about SOCKS, though.

Thanks, but what effect do you expect? I do not know which command or action would tell me whether it helped: the ss-qt5 lines in journalctl stay the same, and dstat continues to show what it displayed before.

A reboot was done; “ulimit -a” still shows:
open files (-n) 1024
But “dstat --socket” shows 2k sockets total, of which 700 are TCP and 13 are UDP. :confused:
journalctl continues to be flooded with “ss-qt5[1763811]: TCP connection timeout.” notices.

17 days later: some people use “ulimit -n 10240” to increase the open-files limit, but I think that is only temporary, and it may not be related to the network?

That’s unrelated to the limit on the number of file descriptors.

The maximum number of open file descriptors has a very significant effect on networking, since each socket takes up a single file descriptor. In the case of TCP, that puts a limit on how many concurrent connections you may have at any given moment.
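This is easy to observe in the shell's own descriptor table (a sketch: opening a plain file here, which consumes a slot exactly the way a socket would):

```shell
# Every open file or socket occupies one slot in /proc/<pid>/fd.
before=$(ls /proc/$$/fd | wc -l)
exec 3</dev/null    # open one extra descriptor (fd 3)
after=$(ls /proc/$$/fd | wc -l)
exec 3<&-           # close it again
echo "fds before: $before, after opening one more: $after"
```

The second count is one higher than the first; a process holding hundreds of TCP connections holds hundreds of these slots, all counted against `ulimit -n`.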

That will only apply temporarily, in the current shell process and its children.
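This can be demonstrated directly: changing the limit in a child shell does not affect the parent (a sketch; 512 is an arbitrary value below the usual 1024 default):

```shell
parent_before=$(ulimit -n)
# Lower the soft limit inside a child shell only.
child=$(bash -c 'ulimit -S -n 512; ulimit -S -n')
parent_after=$(ulimit -n)
echo "parent: $parent_before -> $parent_after, child saw: $child"
```

The parent's limit is unchanged afterwards, which is why an interactive `ulimit -n 30000` in a terminal does nothing for an already-running ss-qt5.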

Anyway, my other app was complaining about reaching the open-files limit, which is 1024 per “ulimit -n”.
I do not know how this differs from ulimit -n, but the system-wide limit on file descriptors is:
$ sysctl fs.file-max
fs.file-max = 9223372036854775807
To change this, it is said to append “fs.file-max = 100000” to sysctl.conf and then run “$ sysctl -p”, but “$ locate sysctl.conf” shows the file is not in the expected path; it is under /etc/ufw/. The value is also in the /proc/sys/fs/file-max file (I guess changing that file has only a temporary effect). Maybe it would work to add a new file to the /etc/sysctl.d/ directory, with 644 permissions, root:root ownership, and the line mentioned above.
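Before raising the system-wide ceiling, it may help to check whether it is being approached at all; the kernel exposes current usage next to the limit (a read-only check, no root needed):

```shell
# /proc/sys/fs/file-nr holds three numbers: allocated file handles,
# allocated-but-unused handles, and the fs.file-max ceiling.
read allocated unused max < /proc/sys/fs/file-nr
echo "file handles in use: $allocated (ceiling: $max)"
```

If `allocated` is nowhere near `max`, the system-wide limit is not the bottleneck, and the per-process `ulimit -n` is the more likely suspect.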
The per-user limit is said to be in:
$ grep -v "#" /etc/security/limits.conf
* hard nofile 31000
I read that fs.file-max should be the number of RAM megabytes:
$ free | grep -i mem | awk '{print $2}'
x 64
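For completeness, here is that quoted rule spelled out. Note that `free` reports kilobytes by default, so its second column is not megabytes; reading /proc/meminfo directly gives the same value and makes the unit explicit:

```shell
# Total RAM in megabytes, derived from /proc/meminfo (MemTotal is in kB).
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
ram_mb=$((ram_kb / 1024))
echo "RAM: ${ram_mb} MB -> candidate fs.file-max per that rule: ${ram_mb}"
```

Whether that rule of thumb is still appropriate on modern kernels is another question; the current default shown above (a huge 64-bit value) suggests the kernel no longer needs a hand-tuned ceiling in most cases.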