Thread count in parallel nut-scanner
should scale down in case of "Too many open files"
#2576
Labels
- enhancement
- Low-hanging fruit (a proposal or issue that is good for newcomers to the codebase, or otherwise a quick win)
- need testing (code looks reasonable, but the feature would better be tested against hardware or OSes)
- nut-scanner
- portability (we want NUT to build and run everywhere possible)
As noted in passing in issue #2575 and in the PRs that dealt with parallelized scans in nut-scanner: depending on platform defaults, the particular OS deployment, and third-party library specifics, nut-scanner may run out of file descriptors despite already trying to adapt its maximums to ulimit information where available.

As seen recently, culminating in commit 2c3a09e of PR #2539 (issue #2511), certain libnetsnmp builds can consume FDs for network sockets, for local filesystem access while looking for per-host configuration files or MIB files, for directory scanning during those searches, etc. This is a variable beyond our control: different implementations and versions of third-party code can behave as they please. Example staged with that commit reverted, scanning a large network range:
What we can do is not abort the scans upon any such hiccup, but check for errno == EMFILE, delay, and retry later (or maybe even actively decrease the thread-maximum variable of the process). We already have a way to detect and report "Running too many scanning threads (NUM), waiting until older ones would finish", so this is about detecting the issue and extending the criteria.