Separate base task from redis logic #9
base: master
Conversation
Separate the base task from redis-specific logic so that alternate backends can be introduced (e.g. one using a Django cache, etc.).
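To illustrate the kind of alternate backend this change is meant to enable, here is a minimal, hypothetical sketch of a Django-cache-based backend. The class name and the get/set/delete method names and signatures are assumptions for illustration only; the actual contract is whatever `celery_unique.backends.BaseBackend` defines and may differ.

    from django.core.cache import cache

    from celery_unique.backends import BaseBackend


    class DjangoCacheBackend(BaseBackend):
        """Store unique-task keys in the configured Django cache instead of Redis."""

        def get(self, key):
            # Return the task id recorded for this unique key, or None if absent.
            return cache.get(key)

        def set(self, key, value, ttl_seconds):
            # Django's cache API expresses the TTL as a timeout in seconds.
            cache.set(key, value, timeout=ttl_seconds)

        def delete(self, key):
            cache.delete(key)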
The codecov bot seems to be posting even though we don't want/need it. We had the same issue on another org, so I dug up how we got rid of it:

This doesn't affect the coverage status report.
@hobarrera Most things noted here are pretty minor, but let me know if there are any questions. Looking good!
In these cases, this method will first revoke any extant task which
matches the same unique key configuration before proceeding to publish
the task. Before returning, a unique task's identifying unique key
will be saved to Redis as a key, with its task id (provided by the
Probably shouldn't mention Redis explicitly here
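For example, the sentence could be kept backend-agnostic; one possible rewording (a suggestion only, not final wording):

    Before returning, a unique task's identifying unique key will be saved
    to the configured uniqueness backend, with its task id (provided by the
    newly-created `AsyncResult` instance) serving as the value.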
will be saved to Redis as a key, with its task id (provided by the
newly-created `AsyncResult` instance) serving as the value.

See ``celery.Task.apply_async()``
Why the double backticks here?
See ``celery.Task.apply_async()``

:param func unique_key: Function used to generate a unique key to
Two things:
- Instead of `func`, this maybe would be better represented as `types.FunctionType`.
- I'm not familiar with the Sphinx syntax of `:param <python type> <param name>:`; usually I see types documented separately with `:type <param name>: <python type>`. This could very well be a valid syntax I'm just not used to, but at any rate, I think we should stay consistent with our usual style (bears mentioning that the docstrings in the `celery_unique.backends` module use `:param` and `:type`).
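For reference, the same parameters documented in the `:param` / `:type` style used by `celery_unique.backends` might look like this (the types shown are the suggestions from the comment above, given only as an illustration):

    :param unique_key: Function used to generate a unique key to identify
        this task. The function will receive the same args and kwargs the
        task is passed.
    :type unique_key: types.FunctionType
    :param backend: A backend to use to cache queued unique tasks.
    :type backend: celery_unique.backends.BaseBackend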
unique_key = None
unique_backend = None

def apply_async(self, args=(), kwargs={}, task_id=None, **options):
The defaults for this method signature should be `None` for two reasons:
- In `celery.Task.apply_async`, they are defined with `None` defaults.
- `kwargs` is currently defaulting to a mutable type (a `dict`), which can (and will?) cause problems.
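A sketch of what the suggested signature change could look like. The forwarding body is illustrative only; the real method also does the unique-key bookkeeping described in the docstring.

    class UniqueTaskMixin:
        unique_key = None
        unique_backend = None

        def apply_async(self, args=None, kwargs=None, task_id=None, **options):
            # Mirror celery.Task.apply_async's None defaults and avoid sharing a
            # mutable default dict between calls. Downstream uses of args/kwargs
            # now need to tolerate None (see the key_generator suggestion below).
            return super().apply_async(
                args=args, kwargs=kwargs, task_id=task_id, **options
            )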
:param func unique_key: Function used to generate a unique key to
    identify this task. The function will receive the same args
    and kwargs the task is passed.
:param UniquenessBackend backend: A backend to use to cache queued
What is `UniquenessBackend` referring to? I think you meant to use `celery_unique.backends.BaseBackend`.
return '{prefix}:{task_name}:{unique_key}'.format(
    prefix=UNIQUE_KEY_PREFIX,
    task_name=self.name,
    unique_key=key_generator(*callback_args, **callback_kwargs),
Following my other comment about the argument defaults for `UniqueTaskMixin.apply_async`, I think this will need to be changed to:

    unique_key=key_generator(
        *(callback_args or ()),
        **(callback_kwargs or {}),
    )
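A tiny self-contained illustration (using a throwaway, hypothetical key_generator) of why those `or` guards are needed once the defaults become `None`:

    def key_generator(*args, **kwargs):
        return repr((args, kwargs))

    callback_args = None
    callback_kwargs = None

    # key_generator(*callback_args) would raise:
    #   TypeError: ... argument after * must be an iterable, not NoneType
    print(key_generator(*(callback_args or ()), **(callback_kwargs or {})))
    # -> "((), {})"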
Finally, the TTL value returned by this method will always be greater
than or equal to 1, in order to ensure compatibility with different
backend's TTL requirements, and that a record produced for a
This is super nitpicky, but the apostrophe should come after the `s` in this case:

    ensure compatibility with different backends' TTL requirements

Or we could make it a bit wordier:

    ensure compatibility with TTL requirements of various backends
Additionally, if an `expires` keyword argument was passed, and its
value represents (either as an integer or timedelta) a shorter duration
of time than the values provided by `eta` or `countdown`, the TTL will
be reduced to the value of `countdown`.
This appears to be wrong in my original docstring, but it should be: the TTL will be reduced to the value of `expires`.
    )
else:
    seconds_until_expiry = task_options['expires']
if seconds_until_expiry < ttl_seconds:
Can we add a blank line before this one? Looks a bit cluttered right now.
if seconds_until_expiry < ttl_seconds:
    ttl_seconds = seconds_until_expiry

if ttl_seconds <= 0:
It shouldn't really ever happen that `ttl_seconds` is a value greater than 0 but less than 1, but it appears at least possible when using `task_options['countdown']`. Instead this should be:

    if ttl_seconds < 1:
        ttl_seconds = 1

Probably more semantically correct anyways, and it ensures that our docstring's assertion that "the TTL value returned by this method will always be greater than or equal to 1" is actually true.
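Putting the pieces of this thread together, here is a hedged sketch of the TTL computation as described in the docstring and comments above, with the suggested clamp applied. The helper and variable names are assumptions; the real implementation may be structured differently.

    from datetime import datetime, timedelta

    def _make_ttl_seconds(task_options):
        now = datetime.utcnow()

        # Base the TTL on `eta` (absolute datetime) or `countdown` (seconds), if given.
        if task_options.get('eta') is not None:
            ttl_seconds = int((task_options['eta'] - now).total_seconds())
        else:
            ttl_seconds = int(task_options.get('countdown') or 0)

        # If `expires` is shorter (as an int or a timedelta), it caps the TTL.
        expires = task_options.get('expires')
        if expires is not None:
            if isinstance(expires, timedelta):
                seconds_until_expiry = int(expires.total_seconds())
            else:
                seconds_until_expiry = expires

            if seconds_until_expiry < ttl_seconds:
                ttl_seconds = seconds_until_expiry

        # Keep the documented guarantee that the TTL is always >= 1.
        if ttl_seconds < 1:
            ttl_seconds = 1

        return ttl_seconds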