Clear old locks when server starts? #103
Btw, if anyone is curious, this is really easy to do without any special API call:

```python
from celery import signals

@signals.worker_ready.connect
def unlock_all(**kwargs):
    # GET doesn't accept patterns, so use KEYS to find the lock keys
    lock_keys = redis_db.keys('qo_*')
    if lock_keys:  # DELETE with no keys raises an error
        redis_db.delete(*lock_keys)
```
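On a busy Redis instance, a variant of the same idea using `SCAN` instead of `KEYS` avoids blocking the server while the keyspace is walked. This is only a sketch: `redis_db` is assumed to be a configured `redis.Redis` client, and `qo_*` is the default celery-once key prefix.

```python
def clear_stale_locks(redis_db, pattern='qo_*'):
    """Delete all lock keys matching `pattern`; returns the number deleted.

    Uses scan_iter (non-blocking, incremental) rather than keys(), which
    blocks Redis while it scans the whole keyspace.
    """
    deleted = 0
    for key in redis_db.scan_iter(match=pattern):
        deleted += redis_db.delete(key)
    return deleted
```

Calling `clear_stale_locks(redis_db)` from a `worker_ready` handler would have the same effect as the snippet above.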
Hi @CalebFenton, you are correct: no such mechanism exists in this project, so thanks for the code snippet. I think the main source of complexity here lies in how to do this generically for the different backends (i.e. Redis or file). I really like celery-singleton's approach, so maybe it is worth lifting directly:

```python
from celery.signals import worker_ready
from celery_once import clear_locks
from somewhere import celery_app

@worker_ready.connect
def unlock_all(**kwargs):
    clear_locks(celery_app)
```

I don't have time to implement such a feature currently, but a PR for this would be welcome if others need it.
Thanks for getting back to me. I think you're right that the hard part is generalizing the approach to work with different backends; I was only thinking in terms of Redis. Perhaps it's a bit outside the scope of the project, since "fixing" this would add complexity you might want to avoid. Maybe it's solved well enough by just giving an example in the docs?
How would the proposed solution work with multiple workers connecting to the same backend? For example

EDIT: actually, now that I think about it, the worst case seems to be an overlap of at most 2 tasks (assuming only one lock key is used), so not a big deal.
Ahh, I think you're right @chpmrc, good find. It would be best to run this code once when celery first starts. I'm not sure if there's a good, clean hook for that. Maybe
Due to the possibility of multiple workers (and multiple Django servers, etc.) I don't think this feature is even possible. I believe the existing mechanism to achieve a similar outcome is the timeout feature.
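For reference, the timeout mentioned above is configurable per task in celery-once, so a stale lock expires on its own after the given number of seconds. A minimal sketch, assuming a Redis backend (the broker and Redis URLs are placeholders):

```python
from celery import Celery
from celery_once import QueueOnce

celery = Celery('tasks', broker='amqp://guest@localhost//')
celery.conf.ONCE = {
    'backend': 'celery_once.backends.Redis',
    'settings': {
        'url': 'redis://localhost:6379/0',
        'default_timeout': 60 * 60,  # locks expire after an hour by default
    },
}

@celery.task(base=QueueOnce, once={'timeout': 60 * 10})
def example_task():
    # per-task override: this lock expires after 10 minutes even if the
    # worker dies mid-run and never releases it
    ...
```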
If the server goes down while a task is running and has acquired a lock, the lock is still in place when the server comes back up. It isn't released until the default timeout has expired, and there doesn't seem to be any clean mechanism for dealing with this.

A similar project called celery-singleton has a `clear_locks` API that removes any locks; it can be called when workers are first ready. Does such a mechanism exist for this project? Maybe I'm missing it. If it doesn't exist, would this be a welcome feature?