WIP: Run one node on two processors #139
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master     #139      +/-   ##
==========================================
+ Coverage   93.05%   93.64%   +0.58%
==========================================
  Files          22       22
  Lines        1756     1777      +21
==========================================
+ Hits         1634     1664      +30
+ Misses        122      113       -9
Continue to review full report at Codecov.
Codecov Report
@@            Coverage Diff             @@
##           master     #139      +/-   ##
==========================================
+ Coverage   92.49%   93.05%   +0.56%
==========================================
  Files          22       22
  Lines        1799     1829      +30
==========================================
+ Hits         1664     1702      +38
+ Misses        135      127       -8
Continue to review full report at Codecov.
Options for regression testing
What's still missing is a restart when the problem is almost solved and integral but a restart isn't allowed; this should be an option anyway.
There are problems with some instances, but this is the first run against master and #134 (with one extra process).
Screenshot from 2019-06-04 10-32-49: https://user-images.githubusercontent.com/4931746/58863965-2b49c180-86b4-11e9-94a0-c2ec241a5a78.png
This is what I expected. When using pmap, adding extra processes results in a significant warmup overhead of around 10-30 seconds.
Some options that come to mind:
- don't use multiple processes by default, but provide setting suggestions in the README for solving medium/large problems.
- explore whether Julia's Threads can be used instead of Distributed.
- design some rough problem metrics that can be used to only add processes on larger/harder problems, where the warmup time will have a smaller impact on total runtime.
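As a rough illustration of the Threads-vs-Distributed option, a minimal sketch, assuming the subproblems can be expressed as calls to a function (`solve_subproblem` is a placeholder, not a function from this package):

```julia
# Hypothetical sketch: process independent subproblems with threads from
# Base.Threads instead of Distributed workers. Threads share the session,
# so they avoid the per-worker process startup and compilation warmup.
using Base.Threads

function solve_all(subproblems)
    results = Vector{Any}(undef, length(subproblems))
    @threads for i in eachindex(subproblems)
        results[i] = solve_subproblem(subproblems[i])  # placeholder solver call
    end
    return results
end

# The Distributed alternative discussed above pays the warmup cost per worker:
#   using Distributed; addprocs(2)
#   results = pmap(solve_subproblem, subproblems)
```

Note that threads require the solver calls to be thread-safe, which would need to be checked for Ipopt before switching away from Distributed.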
The overhead you mention isn't there if we do a precompile run, right?
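A precompile run could be sketched as solving a trivial dummy instance once on every worker, so compilation happens before the real solve; `solve` and `dummy_model` here are hypothetical placeholders, not functions from this package:

```julia
# Hypothetical warmup pass: trigger compilation of the solve path on each
# worker before the timed run. @everywhere runs the expression on all workers.
using Distributed
addprocs(2)

@everywhere solve(dummy_model())  # placeholder solver call on a tiny instance
# Subsequent pmap/solve calls on the workers should no longer pay the
# first-call compilation cost, though process startup time itself remains.
```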
At what stage of the solver should we decide this? I'm wondering whether this should be determined before the relaxation or after. The relaxation time might be a good indicator, but only if we know whether Ipopt was called before. Is there a way to find that out?
I would probably decide before solving the relaxation and just use the number of variables and constraints. Some combination of those two properties should be a strong indicator of whether the problem will be solved very quickly. If that proves problematic, I would add the time required to solve the root node relaxation as a third criterion.
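The size-based rule described above could be sketched as a simple predicate; the function name and threshold are assumptions for illustration, not values from this repository:

```julia
# Hypothetical heuristic: only add worker processes when the model is large
# enough that the 10-30s warmup overhead is amortized by the solve time.
function should_add_procs(nvars::Int, ncons::Int; threshold::Int = 5_000)
    # A crude size measure; the right combination and threshold would need
    # to be tuned on benchmark instances.
    return nvars + ncons >= threshold
end

should_add_procs(100, 50)        # small model: solve serially
should_add_procs(4_000, 2_500)   # large model: worth spawning workers
```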
One problem might be that one of the two sub-problems takes longer than the other, so that one processor sits idle, which isn't the case in the standard parallel approach. There we might explore branches that aren't useful, but no processor is idle.
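The imbalance described above can be sketched as two spawned subproblems where the faster worker simply waits; `solve_subproblem`, `sub1`, and `sub2` are hypothetical placeholders:

```julia
# Hypothetical: branch the root node into two subproblems and solve each on
# its own Distributed worker (ids 2 and 3 from an earlier addprocs(2)).
using Distributed

f1 = @spawnat 2 solve_subproblem(sub1)  # placeholder subproblem solves
f2 = @spawnat 3 solve_subproblem(sub2)

# If sub1 finishes much earlier, worker 2 sits idle until sub2 returns,
# unlike a shared-tree parallel search where idle workers pick new nodes.
r1, r2 = fetch(f1), fetch(f2)
```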
Closing due to lack of progress.
First implementation of #138