Ensure that client supports any number of virtual cloud providers #24
Comments
Hi @grafjo, @kindlich, have you verified that this does not work, or is it just an assumption inferred from the code? We use the Vavr Future implementation, which uses a dynamic thread pool with up to 32767 backing threads; that should not result in issues for any reasonable setting. I have tried to reproduce the issue with a unit test, without success.
@strieflin we can reproduce this behavior in the wild with small virtual machines, e.g. with 2 CPU cores. The Vavr Future is using a ForkJoinPool, and that one is sized by the available processors / CPU cores: "For applications that require separate or custom pools, a ForkJoinPool may be constructed with a given target parallelism level; by default, equal to the number of available processors."
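To see how small the default pool gets on such a machine, the following sketch simply prints the relevant values (the class name is made up; by default the common ForkJoinPool targets one worker less than the number of available processors, with a minimum of one):

```java
import java.util.concurrent.ForkJoinPool;

public class PoolSizeCheck {
    public static void main(String[] args) {
        // Number of CPU cores visible to the JVM.
        System.out.println("availableProcessors   = " + Runtime.getRuntime().availableProcessors());
        // Parallelism of the common ForkJoinPool, by default max(1, availableProcessors - 1),
        // so a 2-core machine ends up with a single worker thread.
        System.out.println("commonPool parallelism = " + ForkJoinPool.commonPool().getParallelism());
    }
}
```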
In our test case, we were running with:
In there, it would issue 1 HTTP request and not more, since Vavr by default delegates to the Java common ForkJoinPool.
From the Vavr documentation:
So that suggests that it shouldn't be an issue. However, the following snippet does not terminate:

```java
int count = Runtime.getRuntime().availableProcessors();
CyclicBarrier b = new CyclicBarrier(count);
Future.sequence(Stream.range(0, count).map(i -> Future.of(() -> b.await())).toJavaList()).await();
```

Will continue trying to replicate.
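For comparison, a variant that hands each Future an explicit, unbounded executor does terminate regardless of the core count. This is only a sketch, assuming a Vavr version whose Future.of accepts an executor argument; the cached thread pool is just one possible choice:

```java
import io.vavr.collection.Stream;
import io.vavr.concurrent.Future;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExplicitExecutorSketch {
    public static void main(String[] args) {
        int count = Runtime.getRuntime().availableProcessors();
        CyclicBarrier b = new CyclicBarrier(count);
        // A cached thread pool grows on demand, so all `count` blocking tasks
        // can wait on the barrier at the same time, even on a 2-core machine.
        ExecutorService executor = Executors.newCachedThreadPool();
        try {
            Future.sequence(
                Stream.range(0, count)
                      .map(i -> Future.of(executor, () -> b.await()))
                      .toJavaList())
                  .await();
        } finally {
            executor.shutdown();
        }
    }
}
```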
If you want to test with fewer cores, you can use JVM arguments to specify the processor count. Example:
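A flag that does this on HotSpot JVMs is -XX:ActiveProcessorCount (JDK 10+, backported to 8u191+); the invocation below is only illustrative and the jar name is made up:

```
java -XX:ActiveProcessorCount=2 -jar ephemeral-client.jar
```

With this flag set, Runtime.getRuntime().availableProcessors() reports 2 regardless of the physical core count, so the common ForkJoinPool is sized accordingly.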
Error would be:
Do we need similar changes in Amphora/Castor?
Added an issue to verify that Amphora works as expected (see carbynestack/amphora#32).
The ephemeral client uses Java Parallel Streams (JPS) to interact with the virtual cloud providers (VCPs). By default, JPS allocates n threads on an n-core machine. As the HTTP calls are blocking, this results in a timeout on a system with fewer cores than the number of VCPs, because the distributed execution in the backend will only kick off after all VCPs have received an invocation request.
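To make the failure mode concrete, here is a self-contained sketch of the pattern described above. The VCP count and the latch standing in for the backend rendezvous are illustrative assumptions, not the actual client code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.stream.IntStream;

public class ParallelInvocationSketch {
    public static void main(String[] args) {
        int vcpCount = 5; // assumed number of VCPs, larger than a small machine's core count

        // Stand-in for the backend: the distributed execution (and hence each blocking
        // "HTTP call") only completes once every VCP has received its invocation request.
        CountDownLatch allRequestsReceived = new CountDownLatch(vcpCount);

        // Parallel streams run on the common ForkJoinPool, which is sized from the
        // available processors. If fewer worker threads than vcpCount are available,
        // some of the blocking calls below never start, the latch never reaches zero,
        // and the forEach never returns (in the real client this surfaces as a timeout).
        IntStream.range(0, vcpCount).parallel().forEach(vcp -> {
            allRequestsReceived.countDown();   // the request has reached VCP number `vcp`
            try {
                allRequestsReceived.await();   // blocks until all VCPs got their request
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        System.out.println("All VCP invocations completed");
    }
}
```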