In qless 0.9.x, each job was processed in a separate fork of the parent process.
The README still states:
The worker forks a child process for each job in order to provide resilience against memory leaks
This appears to have changed in the current master: the SerialWorker, which is also used by the ForkingWorker, processes jobs in a loop.
Was this change made deliberately? For what reasons?
What are the suggestions for getting back the old behavior, so that each job always runs in a fresh, clean process?
If I recall correctly, for forking workers, each worker is still forked from the parent process. However, instead of exiting after it completes, it will persist and continue picking up new jobs. This is problematic if you have jobs that don't clean up properly e.g., ones that continue hogging memory even after they complete.
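The difference between the two behaviors can be sketched with a minimal fork-per-job loop. This is purely illustrative of the old 0.9.x-style pattern, not qless's actual implementation; the job names are hypothetical:

```ruby
# Sketch of the fork-per-job pattern: the parent forks a fresh child for every
# job and waits for it, so any memory the job leaks dies with the child.
jobs = [:job_a, :job_b]

jobs.each do |job|
  pid = fork do
    # Child process: do the work, then exit immediately.
    puts "worker #{Process.pid} performing #{job}"
    exit!(0)
  end
  Process.wait(pid) # Parent: reap the child before picking up the next job.
end
```

In the current master, by contrast, the forked child stays alive after finishing a job and loops back to pick up the next one, which is why leaked memory can accumulate.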
To restore the old behavior, we added the following middleware into the worker:
module Qless
  module StopWorkerMiddleware
    def around_perform(job)
      super
      Rails.logger.info("Job #{job.jid} complete, shutting down child worker process #{Process.pid}")
      stop!
    end
  end
end

Qless::Workers::BaseWorker.class_eval do
  include Qless::StopWorkerMiddleware
end
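The middleware relies on Ruby's method lookup chain: including the module places it between the worker class and wherever the real around_perform is defined, so super reaches the original job execution before stop! runs. A minimal sketch of that mechanism, with hypothetical class names rather than qless's actual class layout:

```ruby
# Illustrative super-chain sketch; BaseWorker/ChildWorker are stand-ins,
# not qless's real classes.
module StopWorkerMiddleware
  def around_perform(job)
    super                                   # run the real job first
    puts "job #{job} done, stopping worker #{Process.pid}"
  end
end

class BaseWorker
  def around_perform(job)
    puts "performing #{job}"
  end
end

class ChildWorker < BaseWorker
  # Including into a subclass places the module between ChildWorker and
  # BaseWorker in the ancestor chain, so `super` reaches the real method.
  include StopWorkerMiddleware
end

ChildWorker.new.around_perform(:example)
```

Note that this only works because around_perform is resolved through the ancestor chain; if the class being reopened defined around_perform directly on itself, the included module's version would be shadowed and never called.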