Adding a synchronization channel #142
This is an interesting request. I'm imagining that the caller would have to provide their own implementation of an interface like this:
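(The snippet from the original comment is not preserved in this copy; the following is a minimal guess at the shape of such a caller-supplied interface. The names are hypothetical and not part of the library's API.)

```csharp
using System.Threading;
using System.Threading.Tasks;

// Hypothetical shape only: the snippet from the original comment is not
// preserved here, and these names are not part of the library's API.
public interface ILockReleaseListener
{
    // Completes when the named lock has (probably) been released or the token is
    // cancelled. Spurious completions are fine; the caller just retries.
    Task WaitForReleaseAsync(string lockName, CancellationToken cancellationToken);
}
```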
Is that what you were imagining? I think you can emulate this behavior with the library's existing API:
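(That snippet is also missing from this copy. Below is a rough sketch of the kind of emulation meant here, assuming the TryAcquireAsync(TimeSpan, CancellationToken) shape of recent DistributedLock releases; waitForReleaseSignal stands in for whatever pub/sub wake-up the caller wires up, and pollFallback bounds the sleep in case a notification is missed.)

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Medallion.Threading; // assumed: IDistributedLock / IDistributedSynchronizationHandle

public static class SignaledAcquire
{
    // Rough sketch, not the original snippet. waitForReleaseSignal is whatever
    // external wake-up (e.g. a Redis pub/sub subscription) the caller wires up;
    // pollFallback bounds each sleep in case a notification is missed.
    public static async Task<IDistributedSynchronizationHandle?> TryAcquireWithSignalAsync(
        IDistributedLock @lock,
        Func<CancellationToken, Task> waitForReleaseSignal,
        TimeSpan pollFallback,
        TimeSpan overallTimeout)
    {
        using var timeout = new CancellationTokenSource(overallTimeout);
        try
        {
            while (true)
            {
                // Non-blocking attempt first.
                var handle = await @lock.TryAcquireAsync(TimeSpan.Zero, timeout.Token);
                if (handle != null) { return handle; }

                // Park until the channel reports a release, with a periodic retry as a
                // fallback; a failed or silent channel just degrades to plain polling.
                await Task.WhenAny(
                    waitForReleaseSignal(timeout.Token),
                    Task.Delay(pollFallback, timeout.Token));
            }
        }
        catch (OperationCanceledException)
        {
            return null; // overall timeout elapsed without acquiring the lock
        }
    }
}
```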
Another question: since you are using Redis anyway, why not just use Redis for your distributed locking rather than mixing Redis and Azure? With Redis, we could potentially build pub-sub into the library directly without requiring the caller to wire it up.
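(For illustration only: a release-notification channel on Redis is a few lines with StackExchange.Redis. The channel naming below is made up; the idea is that the provider publishes on release and waiters subscribe to wake up early.)

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis; // assumed client; channel naming below is made up

public static class RedisLockReleaseChannel
{
    private static string ChannelFor(string lockName) => "lock-released:" + lockName;

    // The lock provider would call this right after releasing a lock.
    public static Task PublishReleasedAsync(IConnectionMultiplexer redis, string lockName)
        => redis.GetSubscriber().PublishAsync(ChannelFor(lockName), RedisValue.EmptyString);

    // Waiters subscribe once per lock name and use the callback to wake up.
    public static Task SubscribeAsync(IConnectionMultiplexer redis, string lockName, Action onReleased)
        => redis.GetSubscriber().SubscribeAsync(ChannelFor(lockName), (_, _) => onReleased());
}
```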
That's almost verbatim what I was imagining. :)
Great idea! I'll give it a go.
We're evaluating that, but we have some reservations about the production-worthiness of our Redis setup for this purpose. The RedLock algorithm basically requires multiple masters for conflict-free redundancy, whereas our Redis setup is master-slave with Sentinel failover. But we could certainly add more nodes, so it may be the way forward.
Indeed, and that may be the way to go; it's a great way to test the feasibility in any case. But providing the means to accomplish this for any provider where polling is the only choice could also be really valuable. E.g. for the Azure scenario, an IReleaseEvent based on Service Bus topics could make sense. And BTW, I saw #38. We had that too, using named semaphores, and I believe it reduced the strain on the Azure blob account quite a lot. I'll add that to the implementation of your great idea to wrap the locking mechanisms.
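(A sketch of that local-gating idea, not taken from #38: funnel all callers on one machine through an OS-level named semaphore so that only one of them at a time contends for the remote lock. Names are illustrative.)

```csharp
using System;
using System.Threading;

public static class MachineLocalGate
{
    // Funnel all callers on this machine through an OS-level named semaphore so
    // that only one of them at a time even attempts (and then holds) the remote
    // lock; the rest queue locally instead of hammering the blob account.
    // Note: named semaphores are Windows-only in .NET; elsewhere a process-local
    // SemaphoreSlim keyed by lock name plays a similar (per-process) role.
    public static T RunUnderLocalGate<T>(string lockName, Func<T> acquireRemoteLockAndRun)
    {
        using var gate = new Semaphore(
            initialCount: 1, maximumCount: 1, name: @"Global\lock-gate-" + lockName);
        gate.WaitOne();
        try
        {
            return acquireRemoteLockAndRun();
        }
        finally
        {
            gate.Release();
        }
    }
}
```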
When diagnosing #141 I compared our own distributed lock implementation with this one. One rather big difference is that we have a long "BusyWaitSleepTime", which is allowed to be long because the lock provider also communicates the acquiring and releasing of locks through a pub/sub channel; in our case that channel is a Redis pub/sub. If the channel fails for some reason, the wait timeout is still respected, but in many cases it never gets that far. The implementation is quite simple: a concurrent dictionary of reference-counted CancellationTokenSources, one CancellationTokenSource per active lock key, and that CancellationTokenSource is the token passed to the Task.Delay.
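(A simplified sketch of that bookkeeping, with illustrative names and a plain dictionary plus a lock in place of the ConcurrentDictionary: one CancellationTokenSource per active lock key, cancelled when the pub/sub channel reports a release so the long Task.Delay is cut short and the waiter retries immediately.)

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public sealed class LockReleaseSignals
{
    private sealed class Entry
    {
        public readonly CancellationTokenSource Source = new();
        public int RefCount;
    }

    private readonly Dictionary<string, Entry> entries = new();

    // Called by a waiter between acquisition attempts. The long sleep is fine
    // because the token is cancelled as soon as a release is signaled.
    public async Task WaitForReleaseOrDelayAsync(string key, TimeSpan busyWaitSleepTime)
    {
        Entry entry;
        lock (this.entries)
        {
            if (!this.entries.TryGetValue(key, out var existing))
            {
                existing = new Entry();
                this.entries[key] = existing;
            }
            existing.RefCount++;
            entry = existing;
        }

        try
        {
            await Task.Delay(busyWaitSleepTime, entry.Source.Token);
        }
        catch (TaskCanceledException)
        {
            // Woken early by a release notification; the caller retries the lock now.
        }
        finally
        {
            lock (this.entries)
            {
                if (--entry.RefCount == 0
                    && this.entries.TryGetValue(key, out var current)
                    && current == entry)
                {
                    this.entries.Remove(key); // source disposal elided for brevity
                }
            }
        }
    }

    // Called from the pub/sub subscription when a "released" message arrives.
    public void SignalReleased(string key)
    {
        Entry? entry;
        lock (this.entries)
        {
            // Remove first so that later waiters get a fresh, uncancelled source.
            this.entries.Remove(key, out entry);
        }
        entry?.Source.Cancel(); // wakes everyone currently delayed on this key
    }
}
```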
I would suggest adding such a mechanism to DistributedLock. It would be completely opt-in, and the implementation could be entirely agnostic to the chosen provider, just as in our case, where Azure Blobs and Redis are combined.