Memory leak... somewhere #794
Due to the nature of the Julia garbage collector, it is a bit difficult to tell if there really is a memory leak just by looking at the memory usage. Can you try to splice in some `gc()` calls? Also, do you have lots of different […]?
Hmm. This is strange. If I call my function ~100 times in a tight loop,

```julia
for i in 1:100
    my_function()
end
```

it fills the memory and the process gets killed. But if I add `gc()` calls, it doesn't. Why wouldn't Julia be able to reclaim the memory in the tight loop? The function takes as input a few elements of a finite field, does a lot of computations, and spits out a few elements of the same finite field. I've already run intensive benchmarks on the subroutines of my function, without ever filling the memory.
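For reference, a self-contained sketch of splicing manual collections into such a tight loop. Here `my_function` is a hypothetical allocating stand-in, not the actual finite-field code from this report:

```julia
# Hypothetical stand-in for the finite-field routine: it just
# allocates a largish temporary array on every call.
my_function() = sum(rand(100_000))

total = 0.0
for i in 1:100
    global total += my_function()
    # The workaround discussed in this thread: force a collection every
    # few iterations so the GC gets a chance to run inside the tight loop.
    i % 10 == 0 && GC.gc()
end
println("done; total = $total")
```

Whether this helps depends on whether the memory is genuinely collectable; if it is, the forced collections cap the resident size at the cost of some speed.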
No.
It should be able to reclaim it. But garbage collectors are not perfect; they can hit corner cases. The main issues seem to be that the Julia collector is very aggressive in grabbing memory and not letting it go, and that it does not adjust to changing conditions on the machine, such as running out of memory or other processes starting up and using memory. In very tight loops there just might not be enough memory-allocation calls for Julia to do enough GC steps, so memory just gets filled up. Hopefully that will all improve with time, but every now and again, when there is a GC issue like this, manually inserting `gc()` calls seems to be the only solution. If the example is simple enough, we could pass it to the Julia people to help them with GC tuning. But we'd first need to check that we are actually counting all the memory used.
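One way to do that last check from within Julia itself, using only standard `Base` tools (nothing Nemo-specific; `work()` is a hypothetical allocating workload):

```julia
# Hypothetical allocating workload standing in for one call of the
# user's function.
work() = string(big(2)^10_000)

work()  # warm up so compilation cost isn't counted below

# Bytes allocated by a single call, as seen by Julia's allocator.
bytes = @allocated work()

# Bytes the GC currently considers live across the whole session.
live = Base.gc_live_bytes()

println("one call allocates $bytes bytes; $live bytes live in total")
```

Comparing per-call allocation against the growth of resident memory over many iterations would show whether the usage Julia accounts for matches what the OS reports.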
It's not that Julia is unable to reclaim the memory; rather, the garbage collector seems to think there is no need to reclaim it at this point. At the moment there is also no way to limit the memory used by Julia; see https://discourse.julialang.org/t/limit-julia-memory/34409 and JuliaLang/julia#17987.
If your code uses multivariate division or GCD then flintlib/flint#619 could explain it, for example.
We should also valgrind the relevant Flint tests. Are all the computations using Flint's fq? |
No multivariates. It only uses prime fields and polynomials over them.
I have the same problem with […]
It is definitely not that simple. It's a full implementation of CSIDH. Individual benchmarks on the subroutines (EC arithmetic, isogeny computations) do not fill the memory; only the full algorithm does. I can share the code in a few days, but ATM we're rushing to finish in time for a deadline on Tuesday.
Given the time constraints, I think your best bet is to splice in some `gc()` calls.
Unfortunately it is similar to all the other examples of this that we have: someone is working on some complex CSIDH implementation and the system runs out of memory. We were hoping this might be a simple example; sorry to hear it isn't. Anyhow, on the Flint side there shouldn't be any reason for it. The finite field code has been valgrinded quite a bit. It's probably just the general Julia GC problems.
I did remove some memory leaks from the threaded polynomial factorisation, but I don't think you could be using it as we don't build with openmp support and I don't think we've had a release since we switched over to pthreads only. Those same leaks don't exist in the standard code as they were just cases of threads exiting in the middle of a function and all the cleanup being at the end of the function. |
Did it work out @defeo? |
Not really. Sprinkling `gc()` calls around did not solve the problem.
OK. Would be interesting to see the example to reproduce. |
I will clean up the code and provide it.
Hi. This took a long time, but we finally put the code online: https://velusqrt.isogeny.org/software.html. We managed to stabilize the code, and in particular I'm not able to reproduce the behavior described in #794 (comment) (and I have no idea why, but in the meantime I had several OS updates and I'm now at Julia 1.4.0).

For some reason, on my 7.5GiB RAM, 2GiB swap laptop, Julia decides that it's wise to eventually occupy up to 3.2GiB of resident memory (and no swap), and never release it, even when I call `GC.gc()`. However, Firefox begs to differ: when Julia reaches ~3GiB, it becomes extremely unstable, and easily freezes my system when I then try to switch a few tabs or do a search.

The issue looks vastly more complex than a Julia/Nemo one, so I'm going to close this issue.
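The gap described here (large resident memory, but little that the GC will give back) can be observed from within Julia itself. A sketch using standard `Base`/`Sys` calls, with a synthetic allocation rather than the real workload:

```julia
# Allocate, drop, and collect, then compare two views of memory:
# what the OS granted the process vs. what the GC thinks is live.
tmp = [rand(1_000_000) for _ in 1:10]  # roughly 80 MiB of temporaries
tmp = nothing
GC.gc()

rss  = Sys.maxrss()            # peak resident set size, in bytes
live = Base.gc_live_bytes()    # bytes still tracked as live by the GC

# On many systems rss stays high even after live drops: freed pages are
# kept by the runtime/allocator rather than returned to the OS.
println("peak RSS: $(rss >> 20) MiB, GC-live: $(live >> 20) MiB")
```

A large, persistent difference between the two numbers matches the behavior reported above, and is an allocator/OS interaction rather than a leak in the usual sense.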
Ok, thanks for the followup. Let's hope this just improves with time. |
Hi,
I'm pretty convinced I hit a memory leak in Nemo. I have a mildly complicated code doing computations in GF(p). When I try to run the code in a loop (for benchmarking), the memory usage of Julia slowly goes up, and before one minute Julia gets killed by the system. If I change `GaloisField` to `FiniteField` I get the same result, which makes me think the leak must be somewhere else (around `fmpz`, maybe?).

I ran `valgrind --leak-check=full --smc-check=all-non-file`, but I'm not sure the output contains much useful information. I'm attaching the full report. Is there anything I can do to get a more useful output from valgrind?