While doing some multi-cloud benchmarks, I noticed that the taxi import would use a "normal" amount of memory (~10 GB) for a long time, then suddenly, over the course of about 90 seconds, spike and OOM even on boxes as large as 128 GB.
It did not happen at the same point in the import each time; among others, I observed it at 115.5 GB, 159.6 GB, 162.4 GB, and 165.3 GB.
I saw this happen on both AWS and Azure instances, but not on OCI. The OCI instances I was using did have >200 GB of RAM, though, and I did not measure their memory usage during the import to see whether it spiked above 128 GB at any point.
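To catch whether the same spike happens on the larger OCI boxes, a simple memory sampler run alongside the import would do. This is just a minimal sketch, not part of the original report: it assumes a Linux host with `/proc/meminfo` and logs "used" memory (total minus available) once per second.

```python
#!/usr/bin/env python3
# Hypothetical memory sampler: run alongside the taxi import and
# watch whether usage spikes past the box size (assumes Linux).
import time

def used_gb():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key.strip()] = int(value.split()[0])  # values are in kB
    # "Used" here is MemTotal minus MemAvailable, the OOM-relevant figure.
    return (info["MemTotal"] - info["MemAvailable"]) / 1024 / 1024

if __name__ == "__main__":
    while True:
        print(f"{time.strftime('%H:%M:%S')} used: {used_gb():.1f} GB", flush=True)
        time.sleep(1)
```

Piping its output to a file during a run would show whether the OCI imports also briefly exceed 128 GB but survive because the instances have headroom.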