
Clear old locks when server starts? #103

Open · CalebFenton opened this issue Jun 14, 2019 · 6 comments

@CalebFenton

If the server goes down while a task is running and holds a lock, the lock is still in place when the server comes back up. It isn't released until the default timeout expires, and there doesn't seem to be any clean mechanism for dealing with this.

A similar project, celery-singleton, has a clear_locks API that removes any locks. It can be called when workers are first ready:

from celery.signals import worker_ready
from celery_singleton import clear_locks
from somewhere import celery_app

@worker_ready.connect
def unlock_all(**kwargs):
    clear_locks(celery_app)

Does such a mechanism exist for this project? Maybe I'm missing it. If it doesn't exist, would this be a welcome feature?

@CalebFenton
Author

Btw, if anyone is curious, this is really easy to do without any special API call:

from celery import signals
from redis import Redis

redis_db = Redis()  # the same Redis instance celery_once is configured against

@signals.worker_ready.connect
def unlock_all(**kwargs):
    lock_keys = redis_db.keys('qo_*')  # celery_once lock keys are prefixed "qo_"
    if lock_keys:
        redis_db.delete(*lock_keys)
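
(One caveat with this sketch: KEYS blocks Redis while it scans the entire keyspace, so on a large database redis_db.scan_iter('qo_*') is a safer way to enumerate the lock keys.)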

@cameronmaske
Owner

cameronmaske commented Jul 18, 2019

Hi @CalebFenton, you are correct, no such mechanism exists in this project, so thanks for the code snippet.

I think the main source of complexity here lies in how to do this in a generic fashion for the different backends (i.e. Redis or file).

I really like celery-singleton's approach, so maybe it is something worth lifting directly, i.e.:

from celery.signals import worker_ready
from celery_once import clear_locks
from somewhere import celery_app

@worker_ready.connect
def unlock_all(**kwargs):
    clear_locks(celery_app)

I don't have time to implement such a feature currently, but a PR for this would be welcome if others need it.
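
For the record, something like the following could work — a rough sketch, assuming each backend grows a clear() method; the method name and the construction details are hypothetical, not existing celery_once API:

from importlib import import_module

def clear_locks(app):
    """Hypothetical helper: drop every outstanding celery_once lock."""
    conf = app.conf.ONCE  # celery_once is configured via the ONCE setting
    module_path, _, class_name = conf['backend'].rpartition('.')
    backend_cls = getattr(import_module(module_path), class_name)
    backend = backend_cls(conf['settings'])  # assumes backends take their settings dict
    # Assumption: each backend (Redis, file, ...) implements clear(), e.g.
    # deleting "qo_*" keys in Redis or unlinking lock files on disk.
    backend.clear()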

@CalebFenton
Author

Thanks for getting back to me. I think you're right that the hard part is generalizing the approach to work with different backends; I was only thinking in terms of Redis. Perhaps it's a bit outside the scope of the project, since "fixing" this would add complexity you might want to avoid. Maybe it's solved "good enough" by just giving an example in the docs?

@chpmrc

chpmrc commented Jan 8, 2020

How would the proposed solution work with multiple workers connecting to the same backend? For example, worker1 might set a lock while worker2 crashes, reinitializes, and removes all locks on connect, which is obviously not OK.

EDIT: actually, now that I think about it, the worst case seems to be an overlap of at most 2 tasks (assuming only one lock key is used), so not a big deal.

@CalebFenton
Author

Ahh, I think you're right @chpmrc, good find. It would be best to run this code once, when celery first starts. I'm not sure if there's a good, clean hook for that. Maybe signals.after_setup_logger, as it looks like that only gets triggered during celery init, once for each global logger.
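
A rough sketch of wiring the same cleanup to that signal, under the assumption above that it only fires once during init (worth verifying before relying on it in a multi-worker deployment):

from celery import signals
from redis import Redis

redis_db = Redis()  # the same Redis instance celery_once points at

@signals.after_setup_logger.connect
def unlock_all(**kwargs):
    # scan_iter avoids blocking Redis the way KEYS can
    for key in redis_db.scan_iter('qo_*'):
        redis_db.delete(key)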

@claytondaley

Due to the possibility of multiple workers (and multiple Django servers, etc.), I don't think this feature is even possible. I believe the existing mechanism to achieve a similar outcome is the timeout feature.
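
If I'm reading celery_once's README right, the timeout can be set globally (default_timeout in the ONCE settings) or per task, which bounds how long a lock orphaned by a crash can linger:

from celery import Celery
from celery_once import QueueOnce

celery = Celery('tasks', broker='amqp://guest@localhost//')
celery.conf.ONCE = {
    'backend': 'celery_once.backends.Redis',
    'settings': {
        'url': 'redis://localhost:6379/0',
        'default_timeout': 60 * 60,  # stale locks expire after an hour
    },
}

@celery.task(base=QueueOnce, once={'timeout': 60 * 10})
def example():
    # a lock left behind by a crash is held for at most 10 minutes
    pass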
