feat: add Window and SharedPointerWindow data structures #461
Conversation
lgtm, a couple of minor comments. Feel free to resolve them if you disagree, as I don't feel strongly about either comment; just thought I'd share my view.
```zig
allocator: Allocator,
window: Window(Rc(T)),
center: std.atomic.Value(usize),
lock: std.Thread.RwLock = .{},
```
Unless explicitly required, I prefer using a RwLock wrapper around the data it protects, which in this instance is `window`, as far as I understand. This is a matter of taste to some extent, though, so I'm OK with leaving it as is if you feel more strongly about it.
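To illustrate the shape I mean, here is a hedged Rust sketch (Rust rather than Zig only because its std `RwLock<T>` is the canonical lock-wraps-data design; `SharedWindow` and the trivial `Window` below are made-up stand-ins, not this PR's types). The point is that the protected field can only be reached through a lock guard, so unsynchronized access is impossible by construction:

```rust
use std::sync::RwLock;

// Hypothetical stand-in for the PR's Window(Rc(T)).
struct Window {
    values: Vec<u64>,
}

struct SharedWindow {
    // The lock wraps the data it protects: `window` cannot be
    // touched without first acquiring a read or write guard.
    window: RwLock<Window>,
}

impl SharedWindow {
    fn new() -> Self {
        SharedWindow {
            window: RwLock::new(Window { values: Vec::new() }),
        }
    }

    fn push(&self, v: u64) {
        // Mutation requires the exclusive (write) guard.
        self.window.write().unwrap().values.push(v);
    }

    fn len(&self) -> usize {
        // Reads take the shared guard; many readers can hold it at once.
        self.window.read().unwrap().values.len()
    }
}
```

With a bare `lock` field next to `window`, nothing stops a caller from reading `window` without taking the lock; with the wrapper, the relationship is enforced by the type.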
I agree, and I would argue that I am already using the pattern you are advocating. SharedPointerWindow is the locking wrapper struct that wraps the Window type. It exists to serve as an alternative to RwMux that provides stronger guarantees about synchronization correctness.

The PR description discusses this somewhat. I feel that this struct implements the general pattern of read-write-locking a type that has a map-style API.

Layering on the RwMux struct feels redundant to me in this case. It adds an extra layer when this struct is supposed to serve the same role, and RwMux doesn't actually abstract away any complexity from this struct.
The `lock` held by the struct is primarily used to control synchronisation on the `window` field. Changing the `window` field's type to `RwMux(Window(Rc(T)))` reflects that fact in the type system. I have noticed a preference for this approach in previous PR reviews as well, so thought it worth a mention. At the end of the day I don't feel strongly enough to hold things up, so feel free to resolve this comment.
You could make the same argument about the inner fields within RwMux. RwMux contains a RwLock to protect a specific field that it contains. Obviously you wouldn't wrap the field inside a RwMux with another struct that's just like RwMux, since that would be redundant.
This data structure is analogous to RwMux. I feel that the same reasoning applies.
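To make the analogy concrete, here is a hedged Rust sketch of an RwMux-like wrapper (names and layout are illustrative, not the project's actual RwMux): internally it is just a lock stored next to the field it guards, and access goes through methods that hold the lock. Wrapping its inner field in yet another such wrapper would clearly be redundant, which is the same argument for not wrapping the field inside SharedPointerWindow:

```rust
use std::cell::UnsafeCell;
use std::sync::RwLock;

// Illustrative RwMux-like wrapper: a lock paired with the data
// it protects, accessed only through locking methods.
struct RwMux<T> {
    lock: RwLock<()>,    // guards `data`
    data: UnsafeCell<T>, // only touched while the lock is held
}

// Safe to share across threads because every access to `data`
// happens under the lock.
unsafe impl<T: Send + Sync> Sync for RwMux<T> {}

impl<T> RwMux<T> {
    fn new(value: T) -> Self {
        RwMux {
            lock: RwLock::new(()),
            data: UnsafeCell::new(value),
        }
    }

    fn read<R>(&self, f: impl FnOnce(&T) -> R) -> R {
        let _guard = self.lock.read().unwrap();
        // Shared access only, while the read lock is held.
        f(unsafe { &*self.data.get() })
    }

    fn write<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        let _guard = self.lock.write().unwrap();
        // Exclusive access, while the write lock is held.
        f(unsafe { &mut *self.data.get() })
    }
}
```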
These changes are needed by #459, where they are used to track epoch states.
Previously I used an Lru to track epoch states, but the Lru requires exclusive locks on read operations, which introduces too much contention in a data structure that will be read often by many threads. The data structures in this PR were implemented to handle this use case a bit better. They can be read without an exclusive lock, and are intended to handle the predictable temporal nature of epoch transitions.
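A rough Rust sketch of the access pattern being described, assuming a map from epoch to shared state (the `EpochWindow` type and its methods are hypothetical illustrations, not this PR's API): reads take only the shared lock and so don't contend with each other, while the exclusive lock is confined to the rare, predictable moment when the window advances.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Hypothetical read-mostly window of per-epoch state.
struct EpochWindow<T> {
    inner: RwLock<HashMap<u64, Arc<T>>>, // epoch -> shared state
    span: u64,                           // how many recent epochs to retain
}

impl<T> EpochWindow<T> {
    fn new(span: u64) -> Self {
        EpochWindow { inner: RwLock::new(HashMap::new()), span }
    }

    // Reads take only the shared lock. Contrast with an LRU,
    // whose reads update recency ordering and so need exclusive access.
    fn get(&self, epoch: u64) -> Option<Arc<T>> {
        self.inner.read().unwrap().get(&epoch).cloned()
    }

    // Advancing the window takes the exclusive lock, but epoch
    // transitions are infrequent and predictable.
    fn insert(&self, epoch: u64, state: T) {
        let mut map = self.inner.write().unwrap();
        map.insert(epoch, Arc::new(state));
        // Evict epochs that have fallen out of the window.
        map.retain(|&e, _| e + self.span > epoch);
    }
}
```

Handing out `Arc` (reference-counted) pointers lets a reader keep using an epoch's state after dropping the lock, even if that entry is later evicted.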
`Window` is the basic data structure that supports the idea of tracking a moving window of values. `SharedPointerWindow` is a wrapper for `Window` that adds two features.

`SharedPointerWindow` could be implemented totally generically, enabling the user to specify any arbitrary internal container type, such as an Lru or a HashMap. But this makes the type a little more complex/opaque, so I haven't implemented it generically. This could be done in the future if the behavior is actually needed to wrap multiple different underlying data containers.
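For reference, the fully generic design being declined could look something like this Rust sketch, where the locking wrapper is parameterized over any map-style container via a small trait (the `MapLike` trait and `SharedContainer` names are hypothetical, invented for illustration):

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::RwLock;

// Hypothetical map-style interface any container could implement.
trait MapLike {
    type Key;
    type Value: Clone;
    fn get(&self, key: &Self::Key) -> Option<Self::Value>;
    fn put(&mut self, key: Self::Key, value: Self::Value);
}

// The generic locking wrapper: works over any MapLike container,
// be it a window, an LRU, or a plain hash map.
struct SharedContainer<C: MapLike> {
    inner: RwLock<C>,
}

impl<C: MapLike> SharedContainer<C> {
    fn new(container: C) -> Self {
        SharedContainer { inner: RwLock::new(container) }
    }

    fn get(&self, key: &C::Key) -> Option<C::Value> {
        self.inner.read().unwrap().get(key)
    }

    fn put(&self, key: C::Key, value: C::Value) {
        self.inner.write().unwrap().put(key, value);
    }
}

// A concrete container just implements the trait.
impl<K: Hash + Eq, V: Clone> MapLike for HashMap<K, V> {
    type Key = K;
    type Value = V;
    fn get(&self, key: &K) -> Option<V> {
        HashMap::get(self, key).cloned()
    }
    fn put(&mut self, key: K, value: V) {
        self.insert(key, value);
    }
}
```

The extra trait indirection is exactly the complexity/opacity the comment above is weighing against, which seems reasonable to defer until a second container actually needs wrapping.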