
Turn the mirrored allocation module (except for the buffer) into an Allocator #21

Open
gnzlbg opened this issue Feb 1, 2018 · 2 comments

Comments


gnzlbg commented Feb 1, 2018

Currently, each SliceDeque is its own allocator. This means that allocating, growing, and deallocating a SliceDeque requires system calls, because we bypass the global allocator (which recycles memory pages across allocations to avoid system calls).

The target is probably the 0.5 release, but work on this can already start. The goals I want for the allocator are:

  • the allocator should prioritize avoiding system calls for small SliceDeques
  • the allocator should avoid system calls for arbitrarily large SliceDeques that are allocated and deallocated in a loop

My thoughts on how to attack small SliceDeques are:

  • have a couple of free lists (of memory maps, file descriptors, etc., depending on the platform) for small multiples of the allocation granularity (1x, 2x, 4x), each caching a small number N of previously freed allocations for future reuse (N ~= 10?)
  • implement this as a couple of sorted, stack-allocated arrays of file descriptors/memory maps, etc. (a minimal sketch follows this list)
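
To make this concrete, here is a minimal sketch (in Rust) of what a per-size-class cache could look like. It is not the actual implementation: `MirroredBuffer`, `SmallCache`, and the constants are hypothetical placeholders, and the real handle would be platform dependent (memory map, file descriptor, etc.).

```rust
/// Hypothetical handle to a mirrored allocation (e.g. base pointer + fd);
/// the real representation depends on the platform.
struct MirroredBuffer {
    ptr: *mut u8,
    size_in_bytes: usize,
}

const CACHE_SLOTS: usize = 10; // N ~= 10 cached allocations per size class

/// Fixed-capacity free lists for 1x, 2x, and 4x the allocation granularity,
/// stored entirely on the stack (no heap allocation in the allocator itself).
struct SmallCache {
    classes: [[Option<MirroredBuffer>; CACHE_SLOTS]; 3],
}

impl SmallCache {
    /// Try to reuse a previously freed buffer of size class `class`
    /// (0 => 1x, 1 => 2x, 2 => 4x granularity) without a system call.
    fn pop(&mut self, class: usize) -> Option<MirroredBuffer> {
        self.classes[class].iter_mut().find_map(|slot| slot.take())
    }

    /// Cache a freed buffer for reuse. If the class is already full, the
    /// buffer is handed back so the caller can actually unmap it
    /// (which does cost a system call).
    fn push(&mut self, class: usize, buf: MirroredBuffer) -> Result<(), MirroredBuffer> {
        match self.classes[class].iter_mut().find(|slot| slot.is_none()) {
            Some(slot) => {
                *slot = Some(buf);
                Ok(())
            }
            None => Err(buf),
        }
    }
}
```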

My thoughts on how to attack arbitrarily large SliceDeques are:

  • have a single sorted free list (of memory maps, file descriptors, etc.) that can be binary searched for a size
  • collect the last N (~10?) deallocations that don't fit in the small lists (so this list rotates to adapt to allocation patterns)
  • implement this as a stack-allocated array of file descriptors/memory maps, etc., using an Eytzinger layout [0], [1] (see the sketch after this list)
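
To illustrate the lookup on the large list, here is a minimal sketch (in Rust) of a lower-bound search over a free list whose sizes are stored in Eytzinger (BFS) order, following [0]/[1]. The function name and the plain `&[usize]` of sizes are placeholders; the real entries would carry the memory map/file descriptor alongside the size.

```rust
/// Returns the index (1-based) of the smallest cached size >= `wanted`,
/// or `None` if every cached allocation is too small.
///
/// `sizes[1..=n]` holds the cached sizes laid out in Eytzinger (BFS)
/// order; index 0 is unused.
fn eytzinger_lower_bound(sizes: &[usize], wanted: usize) -> Option<usize> {
    let n = sizes.len() - 1;
    let mut k = 1;
    while k <= n {
        // Go right (2k + 1) if this entry is too small, left (2k) otherwise.
        k = 2 * k + usize::from(sizes[k] < wanted);
    }
    // Undo the trailing "right" turns to recover the answer's index.
    k >>= k.trailing_ones() + 1;
    if k == 0 {
        None
    } else {
        Some(k)
    }
}

// For example, the sorted sizes [1, 2, 3] are stored as [_, 2, 1, 3]:
// eytzinger_lower_bound(&[0, 2, 1, 3], 2) == Some(1)  // sizes[1] == 2
// eytzinger_lower_bound(&[0, 2, 1, 3], 4) == None
```

Because the layout is just a permutation of a sorted, fixed-size array, the search needs no per-node pointers and is cache friendly, which is the point of [0].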

Some unknowns I have about this are:

  • Maybe this allocator could implement the Alloc trait? I don't know why we would do this, but it might be worth toying with.
  • Maybe this allocator should be moved into its own crate? I'd prefer to keep it in this crate at first, until "everything works", but the crate can be structured so that a later transition to a workspace with sub-crates is easy, in case other parts of the ecosystem want to reuse this allocator.
  • Maybe this allocator should be a bit more flexible? Currently the memory is mapped to two adjacent memory regions, but @mikeash's blog post does this for N regions, where the user can choose the number of regions. @mikeash, why did you do this? Could you maybe comment on some applications that need three or more memory regions?

[0]: P.-V. Khuong, P. Morin, "Array Layouts for Comparison-Based Searching", 2017.
[1]: https://crates.io/crates/eytzinger

Pinging @bill-myers because he was interested in getting this into the library.

mikeash commented Feb 3, 2018

The "Triplicate" section of this blog post explains why my version of this needs three regions:

https://mikeash.com/pyblog/friday-qa-2012-02-17-ring-buffers-and-mirrored-memory-part-ii.html


gnzlbg commented Mar 11, 2018

I've submitted a PR that implements this: #33.
