
Separate base task from redis logic #9

Open
WhyNotHugo wants to merge 4 commits into master
Conversation

WhyNotHugo (Contributor) commented May 10, 2017

Separate the base task from redis-specific logic so that alternate backends can be introduced (e.g.: one using a django cache, etc.).
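The split described in this PR might look roughly like the sketch below. `BaseBackend` matches the name used later in the review (`celery_unique.backends.BaseBackend`); `InMemoryBackend` and the exact method names are illustrative assumptions, not the module's actual API.

```python
class BaseBackend(object):
    """Minimal interface a uniqueness backend must provide."""

    def get(self, key):
        raise NotImplementedError

    def set(self, key, value, ttl_seconds):
        raise NotImplementedError

    def delete(self, key):
        raise NotImplementedError


class InMemoryBackend(BaseBackend):
    """Toy backend showing the same interface without Redis.

    A real non-Redis backend (e.g. one wrapping a Django cache) would
    honor ttl_seconds; this sketch ignores it for brevity.
    """

    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value, ttl_seconds):
        self._store[key] = value

    def delete(self, key):
        self._store.pop(key, None)
```

With this shape, the task mixin only talks to the backend interface, and swapping Redis for another store is a constructor argument rather than a code change.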

Hugo Osvaldo Barrera added 2 commits May 8, 2017 14:34
Separate the base task from redis-specific logic so that alternate
backends can be introduced (e.g. one using a django cache, etc.).
WhyNotHugo (Contributor, Author) commented

The codecov bot seems to be posting even though we don't want/need it.

We had the same issue on another org, so I dug up how we got rid of it:

This doesn't affect the coverage status report.

TylerHendrickson (Contributor) left a comment

@hobarrera Most things noted here are pretty minor, but let me know if there are any questions. Looking good!

In these cases, this method will first revoke any extant task which
matches the same unique key configuration before proceeding to publish
the task. Before returning, a unique task's identifying unique key
will be saved to Redis as a key, with its task id (provided by the

Probably shouldn't mention Redis explicitly here

will be saved to Redis as a key, with its task id (provided by the
newly-created `AsyncResult` instance) serving as the value.

See ``celery.Task.apply_async()``

Why the double backticks here?


See ``celery.Task.apply_async()``

:param func unique_key: Function used to generate a unique key to

Two things:

  1. Instead of func, this might be better represented as types.FunctionType.
  2. I'm not familiar with the Sphinx syntax of :param <python type> <param name>:; usually I see types documented separately with :type <param name>: <python type>. That could well be valid syntax I'm just not used to, but either way we should stay consistent with our usual style (it bears mentioning that the docstrings in the celery_unique.backends module use :param and :type).
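The separate-field style the reviewer describes would look roughly like this; the helper name and the key prefix are illustrative assumptions (the module builds keys from a UNIQUE_KEY_PREFIX constant), not the actual API.

```python
def make_unique_key(task_name, unique_key):
    """Build the backend storage key for a unique task.

    :param task_name: The registered name of the Celery task.
    :type task_name: str
    :param unique_key: Callable that generates the task's unique key.
    :type unique_key: types.FunctionType
    :returns: The namespaced key.
    :rtype: str
    """
    return 'celery_unique:{}:{}'.format(task_name, unique_key())
```

Both `:param <type> <name>:` and the paired `:param <name>:` / `:type <name>:` forms are valid Sphinx; the point here is consistency with the rest of the codebase.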

unique_key = None
unique_backend = None

def apply_async(self, args=(), kwargs={}, task_id=None, **options):

The defaults for this method signature should be None for two reasons:

  1. In celery.Task.apply_async, they are defined with None defaults.
  2. kwargs is currently defaulting to a mutable type (a dict), which can (and will?) cause problems
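The mutable-default pitfall in point 2 is a standard Python gotcha and easy to demonstrate; these two functions are an illustrative sketch, not code from this PR.

```python
def append_bad(item, items=[]):
    # The default list is created once, at function definition time,
    # and is shared by every call that omits `items`.
    items.append(item)
    return items


def append_good(item, items=None):
    # A None default gets a fresh list on every call.
    if items is None:
        items = []
    items.append(item)
    return items
```

Applied to `apply_async`, defaulting `kwargs={}` means every unique-task invocation that mutates `kwargs` would leak state into later calls, which is why `None` defaults (matching `celery.Task.apply_async`) are the safer signature.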

:param func unique_key: Function used to generate a unique key to
identify this task. The function will receive the same args
and kwargs the task is passed.
:param UniquenessBackend backend: A backend to use to cache queued

What is UniquenessBackend referring to? I think you meant to use celery_unique.backends.BaseBackend.

return '{prefix}:{task_name}:{unique_key}'.format(
prefix=UNIQUE_KEY_PREFIX,
task_name=self.name,
unique_key=key_generator(*callback_args, **callback_kwargs),

Following my other comment about the argument defaults for UniqueTaskMixin.apply_async, I think this will need to be changed to:

unique_key=key_generator(
    *(callback_args or ()),
    **(callback_kwargs or {}),
)


Finally, the TTL value returned by this method will always be greater
than or equal to 1, in order to ensure compatibility with different
backend's TTL requirements, and that a record produced for a

This is super nitpicky, but the apostrophe should come after the s in this case:

ensure compatibility with different backends' TTL requirements

Or we could make it a bit wordier:

ensure compatibility with TTL requirements of various backends

Additionally, if an `expires` keyword argument was passed, and its
value represents (either as an integer or timedelta) a shorter duration
of time than the values provided by `eta` or `countdown`, the TTL will
be reduced to the value of `countdown`.

This appears to be wrong in my original docstring, but it should be

the TTL will be reduced to the value of expires.
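The corrected rule ("the TTL is capped by `expires` when `expires` is shorter") can be sketched as follows; the function name is an assumption for illustration, not the module's actual helper.

```python
from datetime import timedelta


def effective_ttl(ttl_seconds, expires=None):
    # The uniqueness record should not outlive the task itself: if
    # `expires` (an int of seconds or a timedelta) represents a shorter
    # duration than the eta/countdown-derived TTL, use `expires` instead.
    if expires is not None:
        if isinstance(expires, timedelta):
            seconds_until_expiry = expires.total_seconds()
        else:
            seconds_until_expiry = expires
        if seconds_until_expiry < ttl_seconds:
            ttl_seconds = seconds_until_expiry
    return ttl_seconds
```

So a task with a 10-second countdown but a 5-second `expires` gets a 5-second record, while a longer `expires` leaves the countdown-derived TTL untouched.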

)
else:
seconds_until_expiry = task_options['expires']
if seconds_until_expiry < ttl_seconds:

Can we add a blank line before this one? Looks a bit cluttered right now.

if seconds_until_expiry < ttl_seconds:
ttl_seconds = seconds_until_expiry

if ttl_seconds <= 0:

It shouldn't really ever happen that ttl_seconds is a value greater than 0 but less than 1, but it appears at least possible when using task_options['countdown'].

Instead this should be:

if ttl_seconds < 1:
    ttl_seconds = 1

Probably more semantically correct anyway, and it ensures that our docstring's assertion that "the TTL value returned by this method will always be greater than or equal to 1" is actually true.
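The difference between the two checks only shows up for fractional values; this standalone sketch (function name assumed for illustration) makes the case concrete.

```python
def clamp_ttl(ttl_seconds):
    # A fractional countdown (e.g. 0.4s) passes an `<= 0` check but is
    # still below the documented floor of 1 second; `< 1` catches both
    # the non-positive and the fractional cases.
    if ttl_seconds < 1:
        ttl_seconds = 1
    return ttl_seconds
```

With `if ttl_seconds <= 0:`, a TTL of 0.4 would slip through unclamped and could be rejected by a backend that requires a TTL of at least 1.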

@shiftgig shiftgig deleted a comment from codecov-io Dec 14, 2017