New Features:
- Support for multiple job executions. A job can now properly manage multiple executions running simultaneously, allowing future support for long running scheduled jobs.
Breaking Changes:
- Dropped support for Redis server < 4
- `RoundRobinWorker` and `RandomWorker` are deprecated. Use `--dequeue-strategy <round-robin/random>` instead.
- `Job.__init__` requires both `id` and `connection` to be passed in.
- `Job.exists()` requires `connection` argument to be passed in.
- `Queue.all()` requires `connection` argument.
- `@job` decorator now requires `connection` argument (see the sketch after this list).
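
A minimal sketch of the explicit-connection requirement described above, assuming a local Redis server; the `add()` task is illustrative:

```python
from redis import Redis
from rq import Queue
from rq.decorators import job

redis = Redis()

# Queue.all() and the @job decorator now need an explicit connection
queues = Queue.all(connection=redis)

@job('default', connection=redis)
def add(x, y):
    return x + y
```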
- Fixed a bug that may cause jobs from the intermediate queue to be moved to FailedJobRegistry. Thanks @selwin!
- Added `worker_pool.get_worker_process()` to make `WorkerPool` easier to extend. Thanks @selwin!
- Added a way for jobs to wait for their latest result: `job.latest_result(timeout=60)` (example below). Thanks @ajnisbet!
- Fixed an issue where `stopped_callback` is not respected when a job is enqueued via `enqueue_many()`. Thanks @eswolinsky3241!
- `worker-pool` no longer ignores `--quiet`. Thanks @Mindiell!
- Added compatibility with AWS Serverless Redis. Thanks @peter-gy!
- `worker-pool` now starts with scheduler. Thanks @chromium7!
- Fixed a bug that may cause a crash when cleaning the intermediate queue. Thanks @selwin!
- Fixed a bug that may cause canceled jobs to still run dependent jobs. Thanks @fredsod!
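
A rough sketch of waiting on a job's latest result, assuming a running Redis server; `my_task` is an illustrative import and the `return_value` attribute of the returned `Result` object is assumed here:

```python
from redis import Redis
from rq import Queue

from myapp.tasks import my_task  # illustrative import

queue = Queue(connection=Redis())
job = queue.enqueue(my_task)

# Block for up to 60 seconds until a result is available
result = job.latest_result(timeout=60)
if result is not None:
    print(result.return_value)
```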
- Added `Callback(on_stopped='my_callback')` (see the example after this list). Thanks @eswolinsky3241!
- `Callback` now accepts a dotted path to a function as input. Thanks @rishabh-ranjan!
- `queue.enqueue_many()` now supports job dependencies. Thanks @eswolinsky3241!
- `rq worker` CLI script now configures logging based on the `DICT_CONFIG` key present in the config file. Thanks @juur!
- Whenever possible, `Worker` now uses `lmove()` to implement the reliable queue pattern. Thanks @selwin!
- Require `redis>=4.0.0`.
- `Scheduler` should only release locks that it successfully acquires. Thanks @xzander!
- Fixes crashes that may happen by changes to the `as_text()` function in v1.14. Thanks @tchapi!
- Various linting, CI and code quality improvements. Thanks @robhudson!
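
A minimal sketch of the `Callback`-based job callbacks mentioned above; the task and callback imports are illustrative, a local Redis server is assumed, and the `on_stopped` enqueue argument is assumed from the entries above:

```python
from redis import Redis
from rq import Callback, Queue

from myapp.tasks import long_task           # illustrative imports
from myapp.callbacks import report_stopped

queue = Queue(connection=Redis())

job = queue.enqueue(
    long_task,
    # Callback accepts either a callable or a dotted path string
    on_success=Callback("myapp.callbacks.report_success"),
    on_stopped=Callback(report_stopped),
)
```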
- Fixes a crash that happens if Redis connection uses SSL. Thanks @tchapi!
- Fixes a crash if `job.meta` is loaded using the wrong serializer. Thanks @gabriels1234!
- Added `WorkerPool` (beta) that manages multiple workers in a single CLI. Thanks @selwin!
- Added a new `Callback` class that allows more flexibility in declaring job callbacks. Thanks @ronlut!
- Fixed a regression where jobs with an unserializable return value crash RQ. Thanks @tchapi!
- Added `--dequeue-strategy` option to RQ's CLI. Thanks @ccrvlh!
- Added `--max-idle-time` option to RQ's worker CLI. Thanks @ronlut!
- Added `--maintenance-interval` option to RQ's worker CLI. Thanks @ronlut!
- Fixed RQ usage on Windows as well as various other refactorings. Thanks @ccrvlh!
- Show more info on `rq info` CLI command. Thanks @iggeehu!
- `queue.enqueue_jobs()` now properly accounts for job dependencies. Thanks @sim6!
- `TimerDeathPenalty` now properly handles negative/infinite timeout. Thanks @marqueurs404!
- Added `work_horse_killed_handler` argument to `Worker` (example below). Thanks @ronlut!
- Fixed an issue where results aren't properly persisted on synchronous jobs. Thanks @selwin!
- Fixed a bug where job results are not properly persisted when `result_ttl` is `-1`. Thanks @sim6!
- Various documentation and logging fixes. Thanks @lowercase00!
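
A sketch of wiring up the `work_horse_killed_handler` mentioned above; the handler signature shown here (the job, the horse's PID, its exit status, and the `os.wait4()` resource usage) is an assumption, and a local Redis server is assumed:

```python
from redis import Redis
from rq import Worker


def work_horse_killed_handler(job, retpid, ret_val, rusage):
    # Assumed signature: called when a job's work horse is killed
    print(f"work horse for job {job.id} died with status {ret_val}")


worker = Worker(['default'], connection=Redis(),
                work_horse_killed_handler=work_horse_killed_handler)
worker.work()
```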
- Improve Redis connection reliability. Thanks @lowercase00!
- Scheduler reliability improvements. Thanks @OlegZv and @lowercase00!
- Fixed a bug where `dequeue_timeout` ignores `worker_ttl`. Thanks @ronlut!
- Use `job.return_value()` instead of `job.result` when processing callbacks. Thanks @selwin!
- Various internal refactorings to make `Worker` code more easily extendable. Thanks @lowercase00!
- RQ's source code is now black formatted. Thanks @aparcar!
- RQ now stores multiple job execution results. This feature requires Redis server >= 5.0, since it uses Redis Streams. Please refer to the docs for more info. Thanks @selwin!
- Improve performance when enqueueing many jobs at once. Thanks @rggjan!
- Redis server version is now cached in connection object. Thanks @odarbelaeze!
- Properly handle `at_front` argument when jobs are scheduled. Thanks @gabriels1234!
- Add type hints to RQ's code base. Thanks @lowercase00!
- Fixed a bug where exceptions are logged twice. Thanks @selwin!
- Don't delete `job.worker_name` after job is finished. Thanks @eswolinsky3241!
- `queue.enqueue_many()` now supports `on_success` and `on_failure` arguments. Thanks @y4n9squared!
- You can now pass `enqueue_at_front` to `Dependency()` objects to put dependent jobs at the front when they are enqueued. Thanks @jtfidje!
- Fixed a bug where workers may wrongly acquire scheduler locks. Thanks @milesjwinter!
- Jobs should not be enqueued if any one of their dependencies is canceled. Thanks @selwin!
- Fixed a bug when handling jobs that have been stopped. Thanks @ronlut!
- Fixed a bug in handling Redis connections that don't allow the `SETNAME` command. Thanks @yilmaz-burak!
- This will be the last RQ version that supports Python 3.5.
- Allow jobs to be enqueued even when their dependencies fail via `Dependency(allow_failure=True)` (see the sketch below). Thanks @mattchan-tencent, @caffeinatedMike and @selwin!
- When stopped jobs are deleted, they should also be removed from FailedJobRegistry. Thanks @selwin!
- `job.requeue()` now supports `at_front()` argument. Thanks @buroa!
- Added SSL support for sentinel connections. Thanks @nevious!
- `SimpleWorker` now works better on Windows. Thanks @caffeinatedMike!
- Added `on_failure` and `on_success` arguments to `@job` decorator. Thanks @nepta1998!
- Fixed a bug in dependency handling. Thanks @th3hamm0r!
- Minor fixes and optimizations by @xavfernandez, @olaure, @kusaku.
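
A minimal sketch of the `Dependency` options mentioned above (`allow_failure` and `enqueue_at_front`); the task imports are illustrative and a local Redis server is assumed:

```python
from redis import Redis
from rq import Queue
from rq.job import Dependency

from myapp.tasks import build, notify  # illustrative imports

queue = Queue(connection=Redis())
parent = queue.enqueue(build)

# notify() runs even if build() fails, and is placed at the front of the queue
dep = Dependency(jobs=[parent], allow_failure=True, enqueue_at_front=True)
child = queue.enqueue(notify, depends_on=dep)
```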
- BACKWARDS INCOMPATIBLE: synchronous execution of jobs now correctly mimics async job execution. An exception is no longer raised when a job fails; the job status will now be correctly set to `FAILED` and failure callbacks are now properly called when a job is run synchronously. Thanks @ericman93!
- Fixes a bug that could cause job keys to be left over when `result_ttl=0`. Thanks @selwin!
- Allow `ssl_cert_reqs` argument to be passed to Redis. Thanks @mgcdanny!
- Better compatibility with Python 3.10. Thanks @rpkak!
- `job.cancel()` should also remove itself from registries. Thanks @joshcoden!
- Pubsub threads are now launched in `daemon` mode. Thanks @mik3y!
- You can now enqueue jobs from the CLI. See the docs for details. Thanks @rpkak!
- Added a new `CanceledJobRegistry` to keep track of canceled jobs. Thanks @selwin!
- Added custom serializer support to various places in RQ. Thanks @joshcoden!
- `cancel_job(job_id, enqueue_dependents=True)` allows you to cancel a job while enqueueing its dependents. Thanks @joshcoden!
- Added `job.get_meta()` to fetch fresh meta value directly from Redis. Thanks @aparcar!
- Fixes a race condition that could cause jobs to be incorrectly added to FailedJobRegistry. Thanks @selwin!
- Requeueing a job now clears `job.exc_info`. Thanks @selwin!
- Repo infrastructure improvements by @rpkak.
- Other minor fixes by @cesarferradas and @bbayles.
- Added success and failure callbacks. You can now do `queue.enqueue(foo, on_success=do_this, on_failure=do_that)` (example below). Thanks @selwin!
- Added `queue.enqueue_many()` to enqueue many jobs in one go. Thanks @joshcoden!
- Various improvements to CLI commands. Thanks @rpkak!
- Minor logging improvements. Thanks @clavigne and @natbusa!
- Jobs that fail due to hard shutdowns are now retried. Thanks @selwin!
- `Scheduler` now works with custom serializers. Thanks @alella!
- Added support for click 8.0. Thanks @rpkak!
- Enqueueing static methods is now supported. Thanks @pwws!
- Job exceptions no longer get printed twice. Thanks @petrem!
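
A short sketch of the callback and bulk-enqueue APIs mentioned above; `foo`, `do_this` and `do_that` are illustrative imports, and the `Queue.prepare_data()` usage is assumed from this feature's documentation:

```python
from redis import Redis
from rq import Queue

from myapp.tasks import foo, do_this, do_that  # illustrative imports

queue = Queue(connection=Redis())

# Success/failure callbacks
queue.enqueue(foo, on_success=do_this, on_failure=do_that)

# Enqueue many jobs in a single round trip
jobs = queue.enqueue_many([
    Queue.prepare_data(foo, args=(1,)),
    Queue.prepare_data(foo, args=(2,)),
])
```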
- You can now declare multiple job dependencies. Thanks @skieffer and @thomasmatecki for laying the groundwork for multi dependency support in RQ.
- Added `RoundRobinWorker` and `RandomWorker` classes to control how jobs are dequeued from multiple queues. Thanks @bielcardona!
- Added `--serializer` option to `rq worker` CLI. Thanks @f0cker!
- Added support for running asyncio tasks. Thanks @MyrikLD!
- Added a new `STOPPED` job status so that you can differentiate between failed and manually stopped jobs. Thanks @dralley!
- Fixed a serialization bug when used with the job dependency feature. Thanks @jtfidje!
- `clean_worker_registry()` now works in batches of 1,000 jobs to prevent modifying too many keys at once. Thanks @AxeOfMen and @TheSneak!
- Workers will now wait and try to reconnect in case of Redis connection errors. Thanks @Asrst!
- Added `job.worker_name` attribute that tells you which worker is executing a job. Thanks @selwin!
- Added `send_stop_job_command()` that tells a worker to stop executing a job (example below). Thanks @selwin!
- Added `JSONSerializer` as an alternative to the default `pickle`-based serializer. Thanks @JackBoreczky!
- Fixes `RQScheduler` running on Redis with `ssl=True`. Thanks @BobReid!
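
A brief sketch of the two additions above; the queue name and job id are illustrative and a local Redis server is assumed:

```python
from redis import Redis
from rq import Queue
from rq.command import send_stop_job_command
from rq.serializers import JSONSerializer

redis = Redis()

# Use the JSON serializer instead of the default pickle-based one
queue = Queue('default', connection=redis, serializer=JSONSerializer)

# Ask whichever worker is running this job to stop it
# (the job must currently be executing)
send_stop_job_command(redis, 'my-job-id')
```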
- Worker now properly releases scheduler lock when run in burst mode. Thanks @selwin!
- Workers now listen to external commands via pubsub. The first two features taking advantage of this infrastructure are `send_shutdown_command()` and `send_kill_horse_command()`. Thanks @selwin!
- Added `job.last_heartbeat` property that's periodically updated when a job is running. Thanks @theambient!
- Horses are now killed by their parent group. This helps in cleanly killing all related processes if a job uses multiprocessing. Thanks @theambient!
- Fixed scheduler usage with Redis connections that use custom parser classes. Thanks @selwin!
- Scheduler now enqueues jobs in batches to prevent lock timeouts. Thanks @nikkonrom!
- Scheduler now follows RQ worker's logging configuration. Thanks @christopher-dG!
- Scheduler now uses the class of the connection that's used. Thanks @pacahon!
- Fixes a bug that puts retried jobs in `FailedJobRegistry`. Thanks @selwin!
- Fixed a deprecated import. Thanks @elmaghallawy!
- Fixes for Redis server version parsing. Thanks @selwin!
- Retries can now be set through @job decorator. Thanks @nerok!
- Log messages below `logging.ERROR` are now sent to stdout. Thanks @selwin!
- Better logger name for RQScheduler. Thanks @atainter!
- Better handling of exceptions thrown by horses. Thanks @theambient!
- Failed jobs can now be retried. Thanks @selwin!
- Fixed scheduler on Python > 3.8.0. Thanks @selwin!
- RQ is now aware of which version of Redis server it's running on. Thanks @aparcar!
- RQ now uses `hset()` on redis-py >= 3.5.0. Thanks @aparcar!
- Fix incorrect worker timeout calculation in `SimpleWorker.execute_job()`. Thanks @davidmurray!
- Make horse handling logic more robust. Thanks @wevsty!
- Added `job.get_position()` and `queue.get_job_position()`. Thanks @aparcar!
- Longer TTLs for worker keys to prevent them from expiring inside the worker lifecycle. Thanks @selwin!
- Long job args/kwargs are now truncated during logging. Thanks @JhonnyBn!
- `job.requeue()` now returns the modified job. Thanks @ericatkin!
- Reverted changes to the `hmset` command which causes workers on Redis server < 4 to crash. Thanks @selwin!
- Merged in more groundwork to enable jobs with multiple dependencies. Thanks @thomasmatecki!
- Default serializer now uses `pickle.HIGHEST_PROTOCOL` for backward compatibility reasons. Thanks @bbayles!
- Avoid deprecation warnings on redis-py >= 3.5.0. Thanks @bbayles!
- Custom serializer is now supported. Thanks @solababs!
- `delay()` now accepts `job_id` argument. Thanks @grayshirt!
- Fixed a bug that may cause early termination of scheduled or requeued jobs. Thanks @rmartin48!
- When a job is scheduled, always add queue name to a set containing active RQ queue names. Thanks @mdawar!
- Added `--sentry-ca-certs` and `--sentry-debug` parameters to `rq worker` CLI. Thanks @kichawa!
- Jobs cleaned up by `StartedJobRegistry` are given an exception info. Thanks @selwin!
- Python 2.7 is no longer supported. Thanks @selwin!
- Support for infinite job timeout. Thanks @theY4Kman!
- Added `__main__` file so you can now do `python -m rq.cli`. Thanks @bbayles!
- Fixes an issue that may cause zombie processes. Thanks @wevsty!
- `job_id` is now passed to logger during failed jobs. Thanks @smaccona!
- `queue.enqueue_at()` and `queue.enqueue_in()` now support explicit `args` and `kwargs` function invocation (example below). Thanks @selwin!
- `Job.fetch()` now properly handles unpickleable return values. Thanks @selwin!
- `enqueue_at()` and `enqueue_in()` now set job status to `scheduled`. Thanks @coolhacker170597!
- Failed jobs data are now automatically expired by Redis. Thanks @selwin!
- Fixes `RQScheduler` logging configuration. Thanks @FlorianPerucki!
- This release also contains an alpha version of RQ's builtin job scheduling mechanism. Thanks @selwin!
- Various internal API changes in preparation to support multiple job dependencies. Thanks @thomasmatecki!
- `--verbose` or `--quiet` CLI arguments should override `--logging-level`. Thanks @zyt312074545!
- Fixes a bug in `rq info` where it doesn't show workers for empty queues. Thanks @zyt312074545!
- Fixed `queue.enqueue_dependents()` on custom `Queue` classes. Thanks @van-ess0!
- `RQ` and Python versions are now stored in job metadata. Thanks @eoranged!
- Added `failure_ttl` argument to job decorator. Thanks @pax0r!
- Added `max_jobs` to `Worker.work` and `--max-jobs` to `rq worker` CLI. Thanks @perobertson!
- Passing `--disable-job-desc-logging` to `rq worker` now does what it's supposed to do. Thanks @janierdavila!
- `StartedJobRegistry` now properly handles jobs with infinite timeout. Thanks @macintoshpie!
- `rq info` CLI command now cleans up registries when it first runs. Thanks @selwin!
- Replaced the use of `procname` with `setproctitle`. Thanks @j178!
Backward incompatible changes:
- `job.status` has been removed. Use `job.get_status()` and `job.set_status()` instead (see the sketch after this list). Thanks @selwin!
- `FailedQueue` has been replaced with `FailedJobRegistry`:
  - `get_failed_queue()` function has been removed. Please use `FailedJobRegistry(queue=queue)` instead.
  - `move_to_failed_queue()` has been removed.
  - RQ now provides a mechanism to automatically cleanup failed jobs. By default, failed jobs are kept for 1 year.
  - Thanks @selwin!
- RQ's custom job exception handling mechanism has also changed slightly:
  - RQ's default exception handling mechanism (moving jobs to `FailedJobRegistry`) can be disabled by doing `Worker(disable_default_exception_handler=True)`.
  - Custom exception handlers are no longer executed in reverse order.
  - Thanks @selwin!
- `Worker` names are now randomized. Thanks @selwin!
- `timeout` argument on `queue.enqueue()` has been deprecated in favor of `job_timeout`. Thanks @selwin!
- Sentry integration has been reworked:
  - RQ now uses the new sentry-sdk in place of the deprecated Raven library.
  - RQ will look for the more explicit `RQ_SENTRY_DSN` environment variable instead of `SENTRY_DSN` before instantiating Sentry integration.
  - Thanks @selwin!
- Fixed `Worker.total_working_time` accounting bug. Thanks @selwin!
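
A minimal sketch of the replacement APIs referenced in the breaking changes above, assuming a local Redis server; `my_task` is an illustrative import:

```python
from redis import Redis
from rq import Queue, Worker
from rq.registry import FailedJobRegistry

from myapp.tasks import my_task  # illustrative import

redis = Redis()
queue = Queue('default', connection=redis)

# job.status is gone; use the accessor methods instead
job = queue.enqueue(my_task, job_timeout=60)
print(job.get_status())

# FailedQueue has been replaced by FailedJobRegistry
failed = FailedJobRegistry(queue=queue)

# The default exception handler can be disabled per worker
worker = Worker([queue], connection=redis,
                disable_default_exception_handler=True)
```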
- Compatibility with Redis 3.0. Thanks @dash-rai!
- Added `job_timeout` argument to `queue.enqueue()`. This argument will eventually replace the `timeout` argument. Thanks @selwin!
- Added `job_id` argument to `BaseDeathPenalty` class. Thanks @loopbio!
- Fixed a bug which causes long running jobs to time out under `SimpleWorker`. Thanks @selwin!
- You can now override worker's name from config file. Thanks @houqp!
- Horses will now return exit code 1 if they don't terminate properly (e.g. when Redis connection is lost). Thanks @selwin!
- Added `date_format` and `log_format` arguments to `Worker` and `rq worker` CLI. Thanks @shikharsg!
- Added support for Python 3.7. Since `async` is a keyword in Python 3.7, `Queue(async=False)` has been changed to `Queue(is_async=False)`. The `async` keyword argument will still work, but raises a `DeprecationWarning`. Thanks @dchevell!
- `Worker` now periodically sends heartbeats and checks whether child process is still alive while performing long running jobs. Thanks @Kriechi!
- `Job.create` now accepts `timeout` in string format (e.g. `1h`). Thanks @theodesp!
- `worker.main_work_horse()` should exit with return code `0` even if job execution fails. Thanks @selwin!
- `job.delete(delete_dependents=True)` will delete job along with its dependents. Thanks @olingerc!
- Other minor fixes and documentation updates.
- `@job` decorator now accepts `description`, `meta`, `at_front` and `depends_on` kwargs. Thanks @jlucas91 and @nlyubchich!
- Added the capability to fetch workers by queue using `Worker.all(queue=queue)` and `Worker.count(queue=queue)` (example below).
- Improved RQ's default logging configuration. Thanks @samuelcolvin!
- `job.data` and `job.exc_info` are now stored in compressed format in Redis.
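
A tiny sketch of fetching workers by queue, assuming a local Redis server:

```python
from redis import Redis
from rq import Queue, Worker

redis = Redis()
queue = Queue('default', connection=redis)

workers = Worker.all(queue=queue)        # workers listening on this queue
worker_count = Worker.count(queue=queue)
print(worker_count, [w.name for w in workers])
```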
- Fixed an issue where `worker.refresh()` may fail when `birth_date` is not set. Thanks @vanife!
- Fixed an issue where `worker.refresh()` may fail when upgrading from previous versions of RQ.
- `Worker` statistics! `Worker` now keeps track of `last_heartbeat`, `successful_job_count`, `failed_job_count` and `total_working_time`. Thanks @selwin!
- `Worker` now sends heartbeat during suspension check. Thanks @theodesp!
- Added `queue.delete()` method to delete `Queue` objects entirely from Redis. Thanks @theodesp!
- More robust exception string decoding. Thanks @stylight!
- Added `--logging-level` option to command line scripts. Thanks @jiajunhuang!
- Added millisecond precision to job timestamps. Thanks @samuelcolvin!
- Python 2.6 is no longer supported. Thanks @samuelcolvin!
- Fixed an issue where `job.save()` may fail with unpickleable return value.
- Replace `job.id` with `Job` instance in local `_job_stack`. Thanks @katichev!
- `job.save()` no longer implicitly calls `job.cleanup()`. Thanks @katichev!
- Properly catch `StopRequested` in `worker.heartbeat()`. Thanks @fate0!
- You can now pass in timeout in days. Thanks @yaniv-g!
- The core logic of sending job to `FailedQueue` has been moved to `rq.handlers.move_to_failed_queue`. Thanks @yaniv-g!
- RQ CLI commands now accept `--path` parameter. Thanks @kirill and @sjtbham!
- Make `job.dependency` slightly more efficient. Thanks @liangsijian!
- `FailedQueue` now returns jobs with the correct class. Thanks @amjith!
- Refactored APIs to allow custom `Connection`, `Job`, `Worker` and `Queue` classes via CLI. Thanks @jezdez!
- `job.delete()` now properly cleans itself from job registries. Thanks @selwin!
- `Worker` should no longer overwrite `job.meta`. Thanks @WeatherGod!
- `job.save_meta()` can now be used to persist custom job data. Thanks @katichev!
- Added Redis Sentinel support. Thanks @strawposter!
- Make `Worker.find_by_key()` more efficient. Thanks @selwin!
- You can now specify job `timeout` using strings such as `queue.enqueue(foo, timeout='1m')`. Thanks @luojiebin!
- Better unicode handling. Thanks @myme5261314 and @jaywink!
- Sentry should default to HTTP transport. Thanks @Atala!
- Improve `HerokuWorker` termination logic. Thanks @samuelcolvin!
- Fixes a bug that prevents fetching jobs from `FailedQueue` (#765). Thanks @jsurloppe!
- Fixes race condition when enqueueing jobs with dependency (#742). Thanks @th3hamm0r!
- Skip a test that requires Linux signals on MacOS (#763). Thanks @jezdez!
- `enqueue_job` should use Redis pipeline when available (#761). Thanks mtdewulf!
- Better support for Heroku workers (#584, #715)
- Support for connecting using a custom connection class (#741)
- Fix: connection stack in default worker (#479, #641)
- Fix: `fetch_job` now checks that a job requested actually comes from the intended queue (#728, #733)
- Fix: Properly raise exception if a job dependency does not exist (#747)
- Fix: Job status not updated when horse dies unexpectedly (#710)
- Fix: `request_force_stop_sigrtmin` failing for Python 3 (#727)
- Fix `Job.cancel()` method on failed queue (#707)
- Python 3.5 compatibility improvements (#729)
- Improved signal name lookup (#722)
- Jobs that depend on job with result_ttl == 0 are now properly enqueued.
- `cancel_job` now works properly. Thanks @jlopex!
- Jobs that execute successfully now no longer try to remove themselves from the queue. Thanks @amyangfei!
- Worker now properly logs Falsy return values. Thanks @liorsbg!
- `Worker.work()` now accepts `logging_level` argument. Thanks @jlopex!
- Logging related fixes by @redbaron4 and @butla!
- `@job` decorator now accepts `ttl` argument. Thanks @javimb!
- `Worker.__init__` now accepts `queue_class` keyword argument. Thanks @antoineleclair!
- `Worker` now saves warm shutdown time. You can access this property from `worker.shutdown_requested_date`. Thanks @olingerc!
- Synchronous queues now properly set completed job status as finished. Thanks @ecarreras!
- `Worker` now correctly deletes `current_job_id` after failed job execution. Thanks @olingerc!
- `Job.create()` and `queue.enqueue_call()` now accept `meta` argument. Thanks @tornstrom!
- Added `job.started_at` property. Thanks @samuelcolvin!
- Cleaned up the implementation of `job.cancel()` and `job.delete()`. Thanks @glaslos!
- `Worker.execute_job()` now exports `RQ_WORKER_ID` and `RQ_JOB_ID` to OS environment variables. Thanks @mgk!
- `rqinfo` now accepts `--config` option. Thanks @kfrendrich!
- `Worker` class now has `request_force_stop()` and `request_stop()` methods that can be overridden by custom worker classes. Thanks @samuelcolvin!
- Other minor fixes by @VicarEscaped, @kampfschlaefer, @ccurvey, @zfz, @antoineleclair, @orangain, @nicksnell, @SkyLothar, @ahxxm and @horida.
- Job results are now logged on `DEBUG` level. Thanks @tbaugis!
- Modified `patch_connection` so Redis connection can be easily mocked
- Custom exception handlers are now called if Redis connection is lost. Thanks @jlopex!
- Jobs can now depend on jobs in a different queue. Thanks @jlopex!
- Add support for `--exception-handler` command line flag
- Fix compatibility with click>=5.0
- Fix maximum recursion depth problem for very large queues that contain jobs that all fail
(July 8th, 2015)
- Fix compatibility with raven>=5.4.0
(June 3rd, 2015)
- Better API for instantiating Workers. Thanks @RyanMTB!
- Better support for unicode kwargs. Thanks @nealtodd and @brownstein!
- Workers now automatically clean up job registries every hour
- Jobs in `FailedQueue` now have their statuses set properly
- `enqueue_call()` no longer ignores `ttl`. Thanks @mbodock!
- Improved logging. Thanks @trevorprater!
(April 14th, 2015)
- Support SSL connection to Redis (requires redis-py>=2.10)
- Fix to prevent deep call stacks with large queues
(March 9th, 2015)
- Resolve performance issue when queues contain many jobs
- Restore the ability to specify connection params in config
- Record `birth_date` and `death_date` on Worker
- Add support for SSL URLs in Redis (and `REDIS_SSL` config option)
- Fix encoding issues with non-ASCII characters in function arguments
- Fix Redis transaction management issue with job dependencies
(Jan 30th, 2015)
- RQ workers can now be paused and resumed using `rq suspend` and `rq resume` commands. Thanks Jonathan Tushman!
- Jobs that are being performed are now stored in `StartedJobRegistry` for monitoring purposes. This also prevents currently active jobs from being orphaned/lost in the case of hard shutdowns.
- You can now monitor finished jobs by checking `FinishedJobRegistry`. Thanks Nic Cope for helping!
- Jobs with unmet dependencies are now created with `deferred` as their status. You can monitor deferred jobs by checking `DeferredJobRegistry`.
- It is now possible to enqueue a job at the beginning of the queue using `queue.enqueue(func, at_front=True)` (see the sketch after this list). Thanks Travis Johnson!
- Command line scripts have all been refactored to use `click`. Thanks Lyon Zhang!
- Added a new `SimpleWorker` that does not fork when executing jobs. Useful for testing purposes. Thanks Cal Leeming!
- Added `--queue-class` and `--job-class` arguments to `rqworker` script. Thanks David Bonner!
- Many other minor bug fixes and enhancements.
(May 21st, 2014)
- Raise a warning when RQ workers are used with Sentry DSNs using asynchronous transports. Thanks Wei, Selwin & Toms!
(May 8th, 2014)
- Fixed rqworker breaking on Python 2.6. Thanks, Marko!
(May 7th, 2014)
- Properly declare redis dependency.
- Fix a NameError regression that was introduced in 0.4.3.
(May 6th, 2014)
- Make job and queue classes overridable. Thanks, Marko!
- Don't require connection for @job decorator at definition time. Thanks, Sasha!
- Syntactic code cleanup.
(April 28th, 2014)
- Add missing depends_on kwarg to @job decorator. Thanks, Sasha!
(April 22nd, 2014)
- Fix bug where RQ 0.4 workers could not unpickle/process jobs from RQ < 0.4.
(April 22nd, 2014)
- Emptying the failed queue from the command line is now as simple as running `rqinfo -X` or `rqinfo --empty-failed-queue`.
- Job data is unpickled lazily. Thanks, Malthe!
- Removed dependency on the `times` library. Thanks, Malthe!
- Job dependencies! Thanks, Selwin.
- Custom worker classes, via the `--worker-class=path.to.MyClass` command line argument. Thanks, Selwin.
- `Queue.all()` and `rqinfo` now report empty queues, too. Thanks, Rob!
- Fixed a performance issue in `Queue.all()` when issued in large Redis DBs. Thanks, Rob!
- Birth and death dates are now stored as proper datetimes, not timestamps.
- Ability to provide a custom job description (instead of using the default function invocation hint). Thanks, İbrahim.
- Fix: temporary key for the compact queue is now randomly generated, which should avoid name clashes for concurrent compact actions.
- Fix: `Queue.empty()` now correctly deletes job hashes from Redis.
(December 17th, 2013)
- Bug fix where the worker crashes on jobs that have their timeout explicitly removed. Thanks for reporting, @algrs.
(December 16th, 2013)
- Bug fix where a worker could time out before the job was done, removing it from any monitor overviews (#288).
(August 23rd, 2013)
- Some more fixes in command line scripts for Python 3
(August 20th, 2013)
- Bug fix in setup.py
(August 20th, 2013)
- Python 3 compatibility (Thanks, Alex!)
- Minor bug fix where Sentry would break when func cannot be imported
(June 17th, 2013)
- `rqworker` and `rqinfo` have a `--url` argument to connect to a Redis url.
- `rqworker` and `rqinfo` have a `--socket` option to connect to a Redis server through a Unix socket.
- `rqworker` reads `SENTRY_DSN` from the environment, unless specifically provided on the command line.
- `Queue` has a new API that supports paging `get_jobs(3, 7)`, which will return at most 7 jobs, starting from the 3rd.
(February 26th, 2013)
- Fixed bug where workers would not execute builtin functions properly.
(February 18th, 2013)
- Worker registrations now expire. This should prevent `rqinfo` from reporting about ghosted workers. (Thanks, @yaniv-aknin!)
- `rqworker` will automatically clean up ghosted worker registrations from pre-0.3.6 runs.
- `rqworker` grew a `-q` flag, to be more silent (only warnings/errors are shown)
(February 6th, 2013)
- `ended_at` is now recorded for normally finished jobs, too. (Previously only for failed jobs.)
- Adds support for both `Redis` and `StrictRedis` connection types
- Makes `StrictRedis` the default connection type if none is explicitly provided
(January 23rd, 2013)
- Restore compatibility with Python 2.6.
(January 18th, 2013)
- Fix bug where work was lost due to silently ignored unpickle errors.
- Jobs can now access the current `Job` instance from within. Relevant documentation here.
- Custom properties can be set by modifying the `job.meta` dict. Relevant documentation here.
- `rqworker` now has an optional `--password` flag.
- Remove `logbook` dependency (in favor of `logging`)
(September 3rd, 2012)
- Fixes broken `rqinfo` command.
- Improve compatibility with Python < 2.7.
(August 30th, 2012)
- `.enqueue()` now takes a `result_ttl` keyword argument that can be used to change the expiration time of results.
- Queue constructor now takes an optional `async=False` argument to bypass the worker (for testing purposes).
- Jobs now carry status information. To get job status information, like whether a job is queued, finished, or failed, use the property `status`, or one of the new boolean accessor properties `is_queued`, `is_finished` or `is_failed`.
- Job return values are always stored explicitly, even if they have no explicit return value or return `None` (with given TTL of course). This makes it possible to distinguish between a job that explicitly returned `None` and a job that isn't finished yet (see `status` property).
- Custom exception handlers can now be configured in addition to, or to fully replace, moving failed jobs to the failed queue. Relevant documentation here and here.
- `rqworker` now supports passing in configuration files instead of the many command line options: `rqworker -c settings` will source `settings.py`.
- `rqworker` now supports one-flag setup to enable Sentry as its exception handler: `rqworker --sentry-dsn="http://public:[email protected]/1"`. Alternatively, you can use a settings file and configure `SENTRY_DSN = 'http://public:[email protected]/1'` instead.
(August 5th, 2012)
- Reliability improvements
  - Warm shutdown now exits immediately when Ctrl+C is pressed and worker is idle
  - Worker does not leak worker registrations anymore when stopped gracefully
- `.enqueue()` does not consume the `timeout` kwarg anymore. Instead, to pass RQ a timeout value while enqueueing a function, use the explicit invocation instead:

  ```python
  q.enqueue(do_something, args=(1, 2), kwargs={'a': 1}, timeout=30)
  ```
- Add a `@job` decorator, which can be used to do Celery-style delayed invocations:

  ```python
  from redis import StrictRedis
  from rq.decorators import job

  # Connect to Redis
  redis = StrictRedis()

  @job('high', timeout=10, connection=redis)
  def some_work(x, y):
      return x + y
  ```

  Then, in another module, you can call `some_work`:

  ```python
  from foo.bar import some_work

  some_work.delay(2, 3)
  ```
(August 1st, 2012)
- Fix bug where return values that couldn't be pickled crashed the worker
(July 20th, 2012)
- Fix important bug where result data wasn't restored from Redis correctly (affected non-string results only).
(July 18th, 2012)
- `q.enqueue()` accepts instance methods now, too. Objects will be pickle'd along with the instance method, so beware.
- `q.enqueue()` accepts string specification of functions now, too. Example: `q.enqueue("my.math.lib.fibonacci", 5)`. Useful if the worker and the submitter of work don't share code bases.
- Job can be assigned custom attrs and they will be pickle'd along with the rest of the job's attrs. Can be used when writing RQ extensions.
- Workers can now accept explicit connections, like Queues.
- Various bug fixes.
(May 15, 2012)
- Fix broken PyPI deployment.
(May 14, 2012)
- Thread-safety by using context locals
- Register scripts as console_scripts, for better portability
- Various bugfixes.
(March 28, 2012)
- Initially released version.