llama uses a lot of memory with parallel compiles #45

Once the number of parallel compiles climbs (20-30), the llama daemon starts using a lot of memory and the OOM killer gets it. If I adjust the OOM score so it doesn't get killed, it will hit ~5G resident for a build with a maximum of 56 in-flight compiles.

Probably need to do some memory profiling to understand what's happening here.
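One low-friction way to get such a profile, assuming the daemon is a Go process that doesn't already expose pprof (the port here is arbitrary), is to serve the standard library's net/http/pprof endpoints; a minimal sketch:

```go
// Sketch: expose Go's built-in profiling endpoints from the daemon.
// Assumes the daemon doesn't already register pprof handlers.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux
)

func main() {
	// Serve pprof on a side port; the daemon's real work would run below.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
	select {} // placeholder for the daemon's main loop
}
```

A heap profile could then be rendered with `go tool pprof -svg http://localhost:6060/debug/pprof/heap`, producing SVGs like the ones referenced in the comments below.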
Comments
I poked at this a bit, and my bet is that the memory use comes from the GC lagging behind the large RPC calls. The preprocessed source tends to be pretty large (5-10MB), and all of it is read into memory and later copied and compressed.

[Memory profile SVG]

I added some debug logging to the server:

[debug log output]
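For reference, a minimal sketch of that kind of server-side memory logging (hypothetical code, not the actual patch; the interval and fields are arbitrary):

```go
// Sketch: periodically log heap usage and GC counts from the server.
package main

import (
	"log"
	"runtime"
	"time"
)

// logMemStats logs a memory summary every five seconds.
func logMemStats() {
	for range time.Tick(5 * time.Second) {
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		log.Printf("heap=%dMB sys=%dMB numGC=%d",
			m.HeapAlloc>>20, m.Sys>>20, m.NumGC)
	}
}

func main() { go logMemStats(); select {} }
```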
If I use a temporary file to send the preprocessed source to the daemon, net/rpc doesn't appear in the memory profile at all; it's all in the encoding and upload.

[Memory profile SVG]
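A rough sketch of that temp-file handoff, with a hypothetical CompileArgs struct standing in for llama's real RPC types (client and daemon are assumed to share a filesystem):

```go
// Sketch: send preprocessed source via a temp file instead of embedding
// it in the RPC payload. CompileArgs and its fields are hypothetical,
// not llama's actual API.
package main

import (
	"io"
	"log"
	"os"
	"strings"
)

type CompileArgs struct {
	// Path to a temp file holding the preprocessed source, readable by
	// the daemon; only this path crosses the RPC boundary.
	PreprocessedPath string
	// ... other compile flags ...
}

// writePreprocessed streams the preprocessed output to a temp file and
// returns its path, so the multi-megabyte source never sits in the RPC
// request.
func writePreprocessed(src io.Reader) (string, error) {
	f, err := os.CreateTemp("", "llama-cpp-*.i")
	if err != nil {
		return "", err
	}
	defer f.Close()
	if _, err := io.Copy(f, src); err != nil {
		os.Remove(f.Name())
		return "", err
	}
	return f.Name(), nil
}

func main() {
	path, err := writePreprocessed(strings.NewReader("int main() { return 0; }\n"))
	if err != nil {
		log.Fatal(err)
	}
	log.Println("preprocessed source at", path)
}
```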
Preprocessed output tends to be quite large, often multiple megabytes. Rather than transferring it over the RPC interface, use a temp file so that the server doesn't use a lot of memory unnecessarily. This updates nelhage#45.
Rather than reading large files into memory and processing them multiple times (once for hashing and once for compression), use streaming compression so that only the compressed output needs to be held fully in memory. A side effect is that the object ID is now generated by hashing the compressed text (which is cheaper, since there's less to hash after compression). This means the function image needs to be regenerated to match. This updates nelhage#45.
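A sketch of what that streaming path could look like, using gzip and SHA-256 as stand-ins for whatever codec and hash llama actually uses:

```go
// Sketch: compress a file in a streaming fashion while hashing the
// compressed bytes, so only the compressed output is buffered in memory.
package main

import (
	"bytes"
	"compress/gzip"
	"crypto/sha256"
	"encoding/hex"
	"io"
	"log"
	"os"
)

// compressAndHash returns the compressed contents of path and an object
// ID derived from hashing the compressed stream.
func compressAndHash(path string) ([]byte, string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, "", err
	}
	defer f.Close()

	var buf bytes.Buffer // holds only the compressed output
	h := sha256.New()    // hashes the compressed stream as it's written
	zw := gzip.NewWriter(io.MultiWriter(&buf, h))
	if _, err := io.Copy(zw, f); err != nil {
		return nil, "", err
	}
	// Close flushes the final gzip block into both the buffer and the hash.
	if err := zw.Close(); err != nil {
		return nil, "", err
	}
	return buf.Bytes(), hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	data, id, err := compressAndHash(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("object %s: %d compressed bytes", id, len(data))
}
```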