Explore adding client integration with GNU make jobserver protocol #1887

Open
bbannier opened this issue Oct 15, 2024 · 0 comments
Labels
Enhancement Improvement of existing functionality

Comments

@bbannier
Member

When compiling a parser we spawn external processes to compile generated C++ files and link them into an HLTO file. We make sure not to spawn more than nproc parallel processes, but even on powerful systems the compilation jobs in particular can overwhelm the machine when multiple spicyc/spicyz invocations are active at the same time, e.g., in builds of larger setups with recursive invocations of make, or when executing btest with -j. Users then work around this by setting HILTI_JIT_PARALLELISM to a small value to bound the worst case, which likely slows down the average case.

GNU make implements a jobserver which can be used to control parallelism in such scenarios [1]. The server maintains a limited set of "tokens" which programs invoked by make obtain in order to launch jobs. With that, the top-level parallelism from make -j<N> can be respected throughout the build.
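
For orientation, here is a rough sketch of what the client side of the protocol could look like (POSIX only, error handling omitted; JobserverClient and its members are made-up names for illustration, not an existing library API). GNU make advertises the jobserver in MAKEFLAGS via --jobserver-auth=R,W (pipe file descriptors inherited from make) or, since make 4.4, via --jobserver-auth=fifo:PATH. A client implicitly owns one token; for every additional parallel job it reads one byte from the pipe and writes the same byte back when that job finishes:

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <fcntl.h>
#include <optional>
#include <string>
#include <unistd.h>

struct JobserverClient {
    int read_fd = -1;
    int write_fd = -1;

    // Detect a jobserver advertised by GNU make in MAKEFLAGS, either as
    // `--jobserver-auth=R,W` (pipe FDs inherited from make; only usable if
    // make treated us as a sub-make) or, since make 4.4, as
    // `--jobserver-auth=fifo:PATH`.
    static std::optional<JobserverClient> fromEnvironment() {
        const char* makeflags = std::getenv("MAKEFLAGS");
        if ( ! makeflags )
            return std::nullopt;

        std::string flags = makeflags;
        auto pos = flags.find("--jobserver-auth=");
        if ( pos == std::string::npos )
            return std::nullopt;

        auto value = flags.substr(pos + std::strlen("--jobserver-auth="));
        value = value.substr(0, value.find(' '));

        JobserverClient client;
        if ( value.rfind("fifo:", 0) == 0 ) {
            auto path = value.substr(std::strlen("fifo:"));
            client.read_fd = ::open(path.c_str(), O_RDONLY);
            client.write_fd = ::open(path.c_str(), O_WRONLY);
        }
        else if ( std::sscanf(value.c_str(), "%d,%d", &client.read_fd, &client.write_fd) != 2 )
            return std::nullopt;

        if ( client.read_fd < 0 || client.write_fd < 0 )
            return std::nullopt;

        return client;
    }

    // Every client implicitly owns one token. Each *additional* parallel job
    // requires reading one byte from the pipe; this blocks until make has a
    // token available.
    bool acquireToken(char* token) const { return ::read(read_fd, token, 1) == 1; }

    // When the job finishes, the same byte must be written back.
    void releaseToken(char token) const { (void)::write(write_fd, &token, 1); }
};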

We should consider adding functionality so JIT can act as a jobserver client, ideally using a library which implements the server interaction. If a jobserver is detected we would override the current behavior of defaulting to the maximum hardware concurrency; HILTI_JIT_PARALLELISM/HILTI_JIT_SEQUENTIAL would still take precedence. For reference, the current logic looks like this:

// Cap parallelism for background jobs.
//
// - if `HILTI_JIT_SEQUENTIAL` is used all parallelism is disabled and
//   exactly one job is used.
// - if `HILTI_JIT_PARALLELISM` is set it is interpreted as the maximum
//   number of parallel jobs to use
// - by default we use one job per available CPU (on some platforms
//   `std::thread::hardware_concurrency` can return 0, so use one job
//   there)
auto hilti_jit_parallelism = hilti::rt::getenv("HILTI_JIT_PARALLELISM");
uint64_t parallelism = 1;
if ( hilti::rt::getenv("HILTI_JIT_SEQUENTIAL").has_value() )
    parallelism = 1;
else if ( auto e = hilti::rt::getenv("HILTI_JIT_PARALLELISM") )
    parallelism = util::charsToUInt64(e->c_str(), 10, [&]() {
        rt::fatalError(util::fmt("expected unsigned integer but received '%s' for HILTI_JIT_PARALLELISM", *e));
    });
else {
    auto j = std::thread::hardware_concurrency();
    if ( j == 0 )
        rt::warning(
            "could not detect hardware level of concurrency, will use one thread for background compilation. Use "
            "`HILTI_JIT_PARALLELISM` to override");
    parallelism = std::max(j, 1U);
}
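
A possible integration point, sketched below under the assumption that the hypothetical JobserverClient from above exists: explicit HILTI_JIT_SEQUENTIAL/HILTI_JIT_PARALLELISM settings keep precedence, and when a jobserver is detected each background compile job beyond the first is gated on a token instead of a fixed job count derived from std::thread::hardware_concurrency(). runJobs() and its arguments are placeholders for however the JIT tracks its pending C++ compilations, not actual HILTI APIs; without a detected jobserver the static parallelism computed above would still be the fallback.

#include <functional>
#include <optional>
#include <thread>
#include <vector>

void runJobs(const std::vector<std::function<void()>>& jobs,
             std::optional<JobserverClient> jobserver) {
    std::vector<std::thread> workers;

    for ( size_t i = 0; i < jobs.size(); ++i ) {
        char token = 0;
        bool have_token = false;

        // The first job can always run (implicit token); every further job
        // blocks here until the top-level make hands out a token, so the
        // global `-j<N>` limit is respected across all concurrent
        // spicyc/spicyz invocations.
        if ( i > 0 && jobserver )
            have_token = jobserver->acquireToken(&token);

        workers.emplace_back([&jobs, &jobserver, i, token, have_token]() {
            jobs[i]();
            if ( have_token )
                jobserver->releaseToken(token);
        });
    }

    for ( auto& w : workers )
        w.join();
}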

To make this work in BTest invocations we would also need btest to be able to act as both a jobserver and a client.

Footnotes

[1] Ninja is in the process of implementing support for acting as a jobserver client as well as a server itself.

@bbannier added the Enhancement label on Oct 15, 2024