Replies: 2 comments
- Here is another info table profile I did recently, running ghcide on its own code base: http://78.47.113.11/ghcide-old-hi.eventlog.html It shows a lot of potential leaks. Many/most of them seem to be in GHC itself, though. :(
- Does HLS have the ability to record/replay a session? If I could record an hour of interaction and replay it later, that would make showing that a space leak was fixed pretty easy. Probably a big ask. :)
(Is this the best place to connect with the HLS team?)
I'm very motivated to improve HLS's performance on large code bases. (I work in industry and we have about 500 modules across 5 sub-projects.) HLS works to an extent, but I find myself restarting it a lot and carefully managing open editor windows. I'm using HLS with VS Code and GHC 8.8.4 on a MacBook with 16G of memory, so resources are really tight.
To collect data, I built a version of HLS 1.0.0 with `-eventlog` so I could get a heap profile w/o rebuilding all libraries (in any case, building with `-prof` led to weird `dlopen` errors when I used it). I updated my VS Code settings so it launches HLS with RTS flags that enable tracing (glad to share how I did that if anyone is interested; a rough sketch is below). I also limit the memory used by HLS to 12G with the `-M` RTS flag.
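For the curious, the configuration looks roughly like the sketch below. This is from memory, not a copy of my actual settings: it assumes the vscode-haskell extension's `haskell.serverExecutablePath`/`haskell.serverExtraArgs` settings, and it only works if the rebuilt server binary accepts RTS options (i.e. was linked with `-rtsopts`, or has them baked in via `-with-rtsopts`).

```jsonc
// .vscode/settings.json (sketch; setting names assume the vscode-haskell extension)
{
  // Point the extension at the locally built, eventlog-enabled server.
  "haskell.serverExecutablePath": "/path/to/haskell-language-server",

  // -l    write the eventlog
  // -hT   sample the heap by closure type (no -prof build needed)
  // -M12G cap the heap at 12G
  "haskell.serverExtraArgs": "+RTS -l -hT -M12G -RTS"
}
```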
The behavior I see is that HLS consumes all that memory and then appears to stop responding. Activity Monitor shows it is using all the CPU (for example, I saw 1400% just now).
I'm sharing the info below in hopes someone can give me ideas on where to start tracking down & isolating the memory usage I'm seeing. Does the HLS codebase have any instrumentation I can see in the event log? I know user events can be tracked, and it's possible to put "markers" so you can correlate HLS behavior with allocations. Other ideas?
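If ghcide/HLS doesn't already emit markers for the phases I care about, my understanding is that the relevant API is `Debug.Trace.traceEventIO` / `Debug.Trace.traceMarkerIO` from base. Here's a minimal sketch of the kind of instrumentation I have in mind; the helper and the labels are made up:

```haskell
import Control.Exception (bracket_)
import Debug.Trace (traceEventIO, traceMarkerIO)

-- Hypothetical helper: bracket an action with START/END markers so the
-- allocations it causes can be lined up against the heap profile
-- (eventlog2html can show trace markers on the chart).
withMarker :: String -> IO a -> IO a
withMarker label =
  bracket_ (traceMarkerIO ("START " ++ label))
           (traceMarkerIO ("END " ++ label))

main :: IO ()
main = withMarker "typecheck Foo.hs" $ do
  traceEventIO "finer-grained user event"  -- plain user event, also logged
  pure ()
```

With `-l` enabled both kinds of events end up in the eventlog alongside the heap samples, which is exactly the correlation I'm after; `bracket_` is just there so the END marker still gets written if the action throws.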
I'm using eventlog2html to produce graphs of the heap profile shown below.
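For reference, the invocation is just the following (file name illustrative; by default the RTS writes the eventlog next to the binary as `<binary>.eventlog`):

```sh
# Turn the binary eventlog into a self-contained HTML report with the
# area/linechart views shown below.
eventlog2html haskell-language-server.eventlog
# => writes haskell-language-server.eventlog.html
```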
This is a recent heap snapshot. I don't know if I can share the raw data, as it is based on our proprietary code, but you can see how ARR_WORDS, GHC primitive types, FUN, and THUNK all dominate the memory usage. Curiously, `Data.IntMap.Internal.Bin` also seems to have really high memory usage.

Here are the individual snapshots:
- ARR_WORDS
- ghc-prim
- THUNK_1_0
- FUN
- containers-...:Data.IntMap.Internal.Bin
- Entire heap
Unfortunately, the "Normalized Linechart" (see https://mpickering.github.io/eventlog2html/ for details) shows that allocation for almost all of the captured types increases at the end of the run, so it's hard to blame one particular type of allocation.