I wanted to build this project on my own machine (Fedora 37, 16GB RAM). Following the readme, I ran the build commands in the sisyphus directory.
The output of dune build contained some slightly concerning warnings:
Warning: Coq Language Versions lower than 0.8 have been deprecated in Dune
3.8 and will be removed in an upcoming Dune version.
File "benchmarks/table/dune", line 6, characters 0-187:
6 | (rule
7 | (target foo.txt)
8 | (deps ./template.tex ../../bin/main.exe (glob_files_rec ../../resources/*.{ml,v}) (glob_files_rec ../../resources/_CoqProject))
9 | (action (run touch foo.txt))
10 | )
Error: No rule found for benchmarks/table/template.tex
[WARNING] var was deprecated in version 2.1 of the opam CLI. Use opam var instead or set OPAMCLI environment variable to 2.0.
...
*** Warning: in file Verify_find_mapi_new.v, library Verify_combinators is required from root Common and has not been found in the loadpath!
*** Warning: in file Verify_find_mapi_new.v, library Verify_opt is required from root Common and has not been found in the loadpath!
*** Warning: in file Verify_find_mapi_new.v, library Tactics is required from root Common and has not been found in the loadpath!
...
And its exit code (obtained with echo $?) was 1, i.e. an error, but I just ignored that (is that ok?).
Then I ran
$ dune runtest
which, after a while, spawned 8 processes (probably because I have 8 cores), each of which took several GB of RAM. Since my Linux system is not very good at dealing with out-of-memory situations, this pushed all my UI processes into swap and completely froze my system. After chatting with @Gopiandcode at PLDI, we concluded that I should try to run only one test at a time, so today I ran
$ dune runtest -j1
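(As a side note, dune's user config file also accepts a jobs setting, so single-job runs could be made the default. This is dune's documented config format, though I haven't verified it on this project:)
$ cat ~/.config/dune/config
(lang dune 3.8)
(jobs 1)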
However, even a single process took 9.9 GB when I screenshotted it (and even more just before I killed it):
(The above screenshot shows my process manager and a tooltip with all the command line args of the executable)
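To keep a runaway test from freezing my desktop again, my plan is to cap the test's memory so the kernel kills it instead of swapping everything out. This uses standard systemd functionality (systemd-run with cgroup v2 memory properties); the 12G limit is just a guess for my 16GB machine:
# cap memory at 12G (arbitrary choice) so the test is killed early rather than swapping
$ systemd-run --user --scope -p MemoryMax=12G -p MemorySwapMax=0 dune runtest -j1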
The paper says that the experiments were run on a machine with 8GB of RAM, so it seems that if I get the setup right on my 16GB machine, I should be able to reproduce them.
Do you have any idea what could be wrong in my setup?
My first idea was that the many >= constraints in sisyphus.opam could cause me to get newer versions than the ones you used a few months ago, but with some quick --version invocations, I couldn't spot any version differences that seem significant:
$ opam --version
2.1.3
$ dune --version
3.8.2
$ coqc --version
The Coq Proof Assistant, version 8.15.2
compiled with OCaml 4.12.0
$ which coqc
~/git/clones/sisyphus/_opam/bin/coqc
$ which dune
~/git/clones/sisyphus/_opam/bin/dune
$ ocamlc --version
4.12.0
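To rule out version drift more systematically, I could also export the exact package set of my switch and compare it against yours (opam switch export is standard opam 2.x; the file name here is arbitrary):
# record every installed package and version in the current switch
$ opam switch export my-switch.export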
Hi @samuelgruetter! Thanks for trying out the tool -- it was great speaking to you in person at PLDI! Apologies also for the delay in my response; I was away for the past week.
Now, w.r.t. your question: hmmm... yes, that is surprising. My guess is that Fedora configures Linux's OOM killer more aggressively than Debian does, and so the process ends up being killed prematurely.
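One quick way to check whether the OOM killer (or systemd-oomd, which recent Fedora enables by default) is involved would be to look for kill events in the logs; these are standard systemd commands:
$ systemctl status systemd-oomd
# any recent kills show up in the journal
$ journalctl -u systemd-oomd --since today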
Here are a couple of things that I would recommend trying:
Try building from the artefact branch (https://github.com/verse-lab/sisyphus/tree/artefact) --- the master branch's code should be up to date, but there are some in-progress components (such as template.tex) that may cause your build to behave strangely.
Try running each experiment individually --- the artefact branch's readme describes how to run individual case-studies; the only ones that use a large amount of memory happen to be the two that take >2 minutes to run (I think these are array_foldi and ssl_to_array if I recall correctly).
Increase the allocated swap for your system --- I'm not sure how much swap macOS supports by default, but maybe the reason the process worked without issue on the Mac was that it had a larger amount of swap.
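For reference, a temporary swap file can be added with standard Linux commands (the 8G size here is arbitrary, and depending on the filesystem extra steps may be needed, e.g. btrfs requires a non-CoW file):
$ sudo fallocate -l 8G /swapfile
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile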