Evaluation new version and comparison with clingcon 3. #25
The benchmark is completely dominated by the flow_shop class. Also, maybe there is something wrong with the estimate function for the number of clauses. If I increase the limit, then the new clingcon also translates. It still is not able to solve the same instances clingcon-3 does (adding just one more thread with

EDIT: I checked; the estimate function is working for the benchmark. Clingcon-3's estimate is set to 10000 and clingcon-4's to 1000, so this explains the difference. Another difference might be that the new version does not initially add binary clauses to do order propagation. Does clingcon-3 do this? It would be very easy to implement. There must be some reason, because clingcon-3 solves the instances with very few conflicts. Adding clauses to the binary implication graph can drastically change propagation, because such clauses are always propagated first.
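For illustration, the binary order-propagation clauses mentioned above can be sketched as follows. This is a hypothetical sketch, not the actual clingcon code: `chain_clauses` and the DIMACS-style literal encoding are assumptions made for the example.

```python
# Sketch: order literals l(x<=v) for an integer variable x with domain [lb, ub].
# The chain property "x<=v implies x<=v+1" can be added eagerly as binary
# clauses, so the SAT solver propagates them through its binary implication
# graph instead of leaving the work to the theory propagator.

def chain_clauses(order_literal, lb, ub):
    """Yield binary clauses (-l(x<=v), l(x<=v+1)) connecting consecutive
    order literals.

    `order_literal(v)` maps a bound v to a solver literal (assumption:
    negative integers denote negated literals, as in DIMACS).
    """
    for v in range(lb, ub - 1):
        yield (-order_literal(v), order_literal(v + 1))

# Example with a toy mapping: variable x in [0, 3], literals 1..3 for
# x<=0, x<=1, x<=2 (x<=3 is trivially true and needs no literal).
lits = {0: 1, 1: 2, 2: 3}
print(list(chain_clauses(lits.get, 0, 3)))  # [(-1, 2), (-2, 3)]
```

Because these clauses are binary, a CDCL solver keeps them in its binary implication graph and propagates them before longer clauses, which is what can drastically change propagation behaviour.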
It has an option to do so
Still, I have no idea what the difference could be. Maybe we would need to test single instances with full translation ...
It cannot have anything to do with this. As soon as there are no constraints anymore, no reasons are generated, and in the end all variables are fully assigned.
But the translation is slightly different, right?
Too late :) Having translation=10000 helps, but it is still off by a small bit. Also, it solves other instances.
This is an attempt to improve on the behaviour in issue #25. It does not seem to make a difference.
The instances which the old clingcon can solve require 3000 clauses, and it translates them. Why there is still such a big gap, I don't know. You could try to implement the old translation in the new clingcon; maybe you did it differently.

EDIT: Since we only have coefficients of 1 and -1, your estimate should be exact.
Not that I know :) Could also be the sorting...?
I do not know what exactly it does. Maybe it is different. Why not just try to reimplement your old version? The data structures in the new clingcon are quite simple:
We are talking about difference constraints here, right? Sorting by coefficients should not matter. It should only affect the order of clauses, not the number of generated clauses.
- Optimize memory usage to store clauses (it could be optimized even further for short clauses).
- Add a decision heuristic that is aware of chained literals. This option might be a good candidate configuration for the benchmarks in #25, maybe with the best translation/non-translation configuration.
- Fix a bug that was adding a satisfiable instead of a conflicting clause.
Hi @MaxOstrowski, can you also test configurations using
This heuristic has the potential to drastically reduce the choices when solving. The idea is that whenever the solver wants to make an order literal true or false, then the order literal of the lower or upper bound of the same variable is made true or false instead, respectively. Let's say we have variable
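A minimal sketch of that redirection, assuming a simple bounds interface (all names here are hypothetical, not the actual clingcon implementation):

```python
# Sketch of the chained-literal decision heuristic described above.
# When the solver picks order literal l(x<=v) as a decision, deciding
# l(x<=lb(x)) true instead fixes x to its lower bound in one choice,
# because the chain clauses x<=lb -> x<=lb+1 -> ... propagate the rest.

def redirect_decision(var_state, value, sign):
    """Return the (value, sign) actually decided for the variable.

    var_state: object with current bounds .lb and .ub (assumed interface).
    value:     bound of the order literal the solver wanted to decide.
    sign:      True to make l(x<=value) true, False to make it false.
    """
    if sign:
        # Making x<=value true: decide x<=lb instead, fixing x = lb.
        return var_state.lb, True
    # Making x<=value false: decide x<=ub-1 false instead, fixing x = ub.
    return var_state.ub - 1, False

class VS:  # toy stand-in for a variable state with bounds [2, 9]
    lb, ub = 2, 9

print(redirect_decision(VS, 5, True))   # (2, True)  -> x = 2
print(redirect_decision(VS, 5, False))  # (8, False) -> x = 9
```

One decision then fully assigns the variable via propagation along the chain, instead of halving the domain one order literal at a time.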
Rerunning benchmarks with newest version and new options.
While clingcon-3.3.0
The Replacing the former with an ordered The

Another issue is that we have to store all the clauses added during translation and commit them later. I am not sure if the old clingcon had to do it like this, too? But we cannot get rid of this anyway. The last thing that comes to my mind is that the clingo API might also cause more overhead. We should run a profiler to get an idea where most time is spent!
I'm not completely sure if I get everything right: during propagate(..) I directly use clasp.force to change all implied order literals that are not already decided. As reason, I use the literal that was watched, so for
Fixpoint iteration during check is done interleaved with order_literal propagation, as the watch is immediately triggered in the old interface (unlike the list of changes that we have now).
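The forcing step described above can be sketched roughly like this. It is a simplified, hypothetical model: `propagate_order`, the `assignment` mapping, and the `force` callable (standing in for clasp.force) are assumptions for illustration, not the real interface.

```python
# Sketch (hypothetical, simplified): forcing implied order literals during
# propagate(), using the triggering watched literal as the single reason,
# as described above for the old clasp-based interface.

def propagate_order(assignment, force, changed_lit, implied_lits):
    """For each implied order literal not yet assigned, force it with the
    watched literal `changed_lit` as its reason clause.

    assignment:   maps literal -> True/False/None (assumed interface)
    force:        callable(lit, reason), standing in for clasp.force
    changed_lit:  the order literal whose watch fired
    implied_lits: order literals implied by changed_lit
    """
    for lit in implied_lits:
        if assignment.get(lit) is None:        # not already decided
            force(lit, reason=[-changed_lit, lit])

forced = []
assignment = {2: None, 3: True}
propagate_order(assignment,
                lambda lit, reason: forced.append((lit, reason)),
                1, [2, 3])
print(forced)  # [(2, [-1, 2])] -- literal 3 is already assigned
```

The reason is a binary clause, so conflict analysis stays cheap for these propagations.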
Analyzing the current code, I'm unsure if we currently do this.
A vector will certainly speed up order literal lookup. I do not know if you use an ordered vector of value/literal pairs or use a vector with
Right now it is an
This is not possible with the Clingo-API and also does not matter during translation. There is no way around lookups (but we can at least use fast vector lookups).
This is available as an option. The default is to add the clauses
Again, this does not matter during translation and also has no effect when constraints are translated. Otherwise, it is easily possible to implement this too. I could add an option for it.
Something like value - domain_minimum as index. (I had to handle domains with holes, but basically it boils down to this.)
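The offset-indexed lookup amounts to something like the following sketch (the class and its methods are hypothetical names for illustration; holes in domains are ignored here):

```python
# Sketch of the offset-indexed literal lookup described above: store the
# literal for "x <= v" at position v - domain_minimum in a plain vector,
# so a lookup is one subtraction plus one array access.

class OrderLiteralTable:
    def __init__(self, domain_min, domain_max):
        self.domain_min = domain_min
        # one slot per value in [domain_min, domain_max); None = no literal yet
        self.slots = [None] * (domain_max - domain_min)

    def get(self, value):
        return self.slots[value - self.domain_min]

    def set(self, value, literal):
        self.slots[value - self.domain_min] = literal

# Variable with domain [5, 12): literal 7 represents "x <= 6".
table = OrderLiteralTable(5, 12)
table.set(6, 7)
print(table.get(6))  # 7
print(table.get(5))  # None
```

Compared to a balanced map, this trades memory proportional to the domain size for constant-time lookups without any comparisons or pointer chasing.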
Should not make much of a difference imho.
Sure, but this would then be something that the old clingcon did not have (just to mention).
I know, just wanted to communicate every difference.
Sorry, I misread that it is about translation, and already answered above that we do not have to store all clauses before committing. Could you give me a leg up and tell me why again we had to store them all before committing?
Even though we are just talking about (amortized) constant factors here, I think it can make quite a difference, especially when switching to a vector.
Because interleaving adding clauses and literals is quadratic with clasp. Adding the literals first and then the clauses is linear.
clingcon-3.3.0 computes an estimate of variables that are needed for the translation and creates all these literals at once, faking createNewLiteral() afterwards.
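The two strategies just discussed can be sketched with a toy solver (everything here, including `ToySolver` and `translate_batched`, is a hypothetical illustration, not the clasp or clingcon API):

```python
# Sketch of the batching strategy discussed above. Interleaving literal
# creation with clause addition can be quadratic in clasp because each new
# literal resizes solver-internal structures; first creating all literals,
# then adding all buffered clauses, keeps the work linear.

class ToySolver:
    def __init__(self):
        self.num_literals = 0
        self.clauses = []

    def add_literal(self):
        self.num_literals += 1
        return self.num_literals

    def add_clause(self, clause):
        self.clauses.append(clause)

def translate_batched(solver, estimated_literals, make_clauses):
    """clingcon-3-style strategy: reserve all literals up front (based on
    the estimate), buffer the clauses, and commit them in one pass."""
    lits = [solver.add_literal() for _ in range(estimated_literals)]
    buffered = list(make_clauses(lits))   # store clauses, commit later
    for clause in buffered:
        solver.add_clause(clause)
    return lits

s = ToySolver()
translate_batched(
    s, 3, lambda ls: [(-ls[i], ls[i + 1]) for i in range(len(ls) - 1)])
print(s.num_literals, s.clauses)  # 3 [(-1, 2), (-2, 3)]
```

This is also why the estimate matters: reserving literals up front only works if the estimate of how many are needed is reliable.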
We could also follow such a strategy, but I would not implement it like this. The time overhead of storing the clauses should be small. It increases the peak memory of course, which might be trouble for large programs. It can be reduced by running

See the attached profile.pdf. For the meaning of the values, check https://gperftools.github.io/gperftools/cpuprofile.html.
Definitely nice to see where the time is spent (and kind of unexpected that adding clauses takes that much time).
Haven't looked at it in detail, but it looks much nicer...
The times look very similar. The only class where it is really different is the

I also profiled solving (without translation) on a larger instance. The good news is that the solver spends over 70% of its time doing unit propagation. This is how it should be. Any data structure optimization can only increase speed by a factor of

We might still be able to find critical instances where the situation is not like this.
This speeds up propagation time quite a bit. See issue #25 for a discussion.
This addresses some more ideas discussed in issue #25. It speeds up translation quite a bit. When half of the literals in a domain have a value, the VarState switches to a vector representation to speed up lookups. Probably, I overdid the hacking a bit. :)
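The representation switch can be sketched as follows. This is a hypothetical, simplified stand-in for VarState, not the actual implementation; the half-coverage threshold is taken from the description above.

```python
# Sketch: start with a sparse dict of value -> literal and switch to a
# dense vector once half of the domain values have literals, trading
# memory for O(1) offset lookups.

class VarState:
    def __init__(self, domain_min, domain_max):
        self.domain_min = domain_min
        self.size = domain_max - domain_min
        self.by_value = {}      # sparse: value -> literal
        self.dense = None       # dense: list indexed by value - domain_min

    def set_literal(self, value, literal):
        if self.dense is not None:
            self.dense[value - self.domain_min] = literal
            return
        self.by_value[value] = literal
        if 2 * len(self.by_value) >= self.size:   # half the domain covered
            self.dense = [None] * self.size
            for v, lit in self.by_value.items():
                self.dense[v - self.domain_min] = lit

    def get_literal(self, value):
        if self.dense is not None:
            return self.dense[value - self.domain_min]
        return self.by_value.get(value)

vs = VarState(0, 4)
vs.set_literal(1, 10)           # sparse: 1/4 of the domain covered
print(vs.dense is None)         # True
vs.set_literal(3, 11)           # 2/4 covered -> switches to dense
print(vs.dense is None, vs.get_literal(3))  # False 11
```

The switch is amortized: migrating the dict into the vector happens once, and every later lookup is a single indexed access.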
Big improvements on propagation-heavy instances (like doubling the speed). Thanks a lot.
I am done with this issue. Do you still want to investigate the translation?
Unlikely that I will find the time anytime soon.
As you like. If you want to look at it again. 😉 We could also keep it open to not forget about it. Your call.
Some evaluation on the MiniZinc competition 2019 benchmarks (testing some configurations) and an (unfair) comparison to chuffed (chuffed uses more global constraints) can be found here: potassco/flatzingo#11. In short: our base speed seems to be quite good and probably better than chuffed's, but we are lacking a lot of constraints to be comparable (some of them seem to make sense, some are just stupid).
Do you have any more recent performance comparisons with CP solvers, especially using clingcon 5? |
Unfortunately, not really. You could look at the MiniZinc competition results from 2021, but take them with a grain of salt. Quite some performance is left on the table due to double grounding (MiniZinc flattening and regrounding using clingcon).
But the techniques used should still be similar to state of the art solvers.
results.zip