I'm trying to use the Python API to parametrize a largish set of molecules as efficiently as possible, and I'm a bit confused about how resources are allocated. I'm currently doing something like this:
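Roughly the following (a minimal sketch of my setup; the molecules are placeholders for my actual batch, and I'm assuming the documented `BespokeExecutor`/`BespokeWorkerConfig` usage from the quick-start):

```python
from openff.toolkit import Molecule
from openff.bespokefit.workflows import BespokeWorkflowFactory
from openff.bespokefit.executor import (
    BespokeExecutor,
    BespokeWorkerConfig,
    wait_until_complete,
)

factory = BespokeWorkflowFactory()

# Placeholder SMILES standing in for my real batch of molecules.
molecules = [Molecule.from_smiles(s) for s in ["CCO", "c1ccccc1", "CCN", "CC(=O)O"]]
schemas = [factory.optimization_schema_from_molecule(m) for m in molecules]

with BespokeExecutor(
    n_fragmenter_workers=1,   # 1 core
    n_optimizer_workers=1,    # 1 core
    n_qc_compute_workers=3,   # 3 workers x 2 cores = 6 cores
    qc_compute_worker_config=BespokeWorkerConfig(n_cores=2),
) as executor:
    task_ids = [executor.submit(input_schema=schema) for schema in schemas]
    results = [wait_until_complete(task_id) for task_id in task_ids]
```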
My impression was that in this case the `BespokeExecutor` runs a total of 5 workers, of which 3 (the QC workers) use 2 cores each, for a total of 8 cores (the limit of my machine): 1 + 1 + 3 × 2 = 8. However, when I run this with 4 schemas I see far more processes, causing oversubscription and thus high inefficiency (after a few minutes I can count at least 15 processes, though it's a bit hard to keep track across the different stages).
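For what it's worth, I believe the executor runs Celery workers and a Redis server under the hood, so I counted roughly like this (a sketch, assuming `psutil` is installed):

```python
import psutil

# Count processes whose command line mentions the executor's backend
# components (Celery workers, Redis server).
hits = [
    p for p in psutil.process_iter(["cmdline"])
    if any(
        token in arg
        for arg in (p.info["cmdline"] or [])
        for token in ("celery", "redis")
    )
]
print(f"{len(hits)} executor-related processes")
```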
What's the best way of handling larger batches like this? Or is this a non-issue? Thank you :)