Jobs are being allowed to run even though there is a lock #734
It only looks like 7 are running; 6 of them would just be waiting for the first one to complete.
I don't fully understand. Are you saying 1 is running and the other 6 are waiting for that one to finish before running? If I execute from the console, unique jobs usually prevents them from running: it would just say "start" then immediately "done", so that only 1 job is running. But for some reason, these jobs are popping up (screenshot below). My worry is that the jobs entail a lot of database reads/writes and are locking each other in the database, preventing them from finishing.
You would probably be better off using `until_executed`. Is it a problem if a subsequent job is dropped when another is already on the queue waiting to be picked off by the Sidekiq worker?
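For reference, the lock strategy is declared via `sidekiq_options` on the worker class. A minimal sketch (the class name and queue are hypothetical, not from this issue):

```ruby
require "sidekiq"

# Hypothetical worker: with lock: :until_executed, only one instance of
# this job may be enqueued or running until the first one has executed.
class MyUniqueJob
  include Sidekiq::Worker
  sidekiq_options lock: :until_executed, queue: :default

  def perform
    # long-running work here
  end
end
```

This is a configuration fragment; it needs a running Sidekiq process and Redis to actually exercise the lock.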
I think I used to have it on `until_executed`, but it was still queueing the jobs, so I changed it to `until_and_while_executing`. I'm going to try installing another Redis; perhaps something is messed up with my jobs. In the meantime I added code using `workers = Sidekiq::Workers.new` to check if there is already a Sidekiq job with my process name; if there is, I don't run the code in the worker. Thank you for the help.
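The guard the commenter describes could be sketched roughly as below. The `already_running?` helper is an assumption (the issue only mentions `Sidekiq::Workers.new`); `Sidekiq::Workers` yields `(process_id, thread_id, work)` entries, and in some Sidekiq versions `work["payload"]` is a JSON string rather than a hash, so both cases are handled:

```ruby
require "json"

# Returns true if any other worker entry is already running a job of the
# given class. `workers` is an enumerable of [process_id, thread_id, work]
# tuples, as yielded by Sidekiq::Workers#each.
def already_running?(workers, job_class, current_jid)
  workers.any? do |_process_id, _thread_id, work|
    payload = work["payload"]
    payload = JSON.parse(payload) if payload.is_a?(String)
    payload["class"] == job_class && payload["jid"] != current_jid
  end
end

# Inside a worker's #perform this would be used as (assumption, not
# verbatim from the issue):
#   return if already_running?(Sidekiq::Workers.new, self.class.name, jid)
```

Note this is only a best-effort guard: it races with other processes polling the queue, which is why a server-side lock like sidekiq-unique-jobs exists in the first place.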
Investigating; something is also strange on our part.
Not sure if this is the same bug, but I am able to reproduce this behavior like so:

```ruby
TestJob.perform_async
sleep(5)
SidekiqUniqueJobs::Orphans::Reaper.call
TestJob.perform_async
```

Sample app here
@DCrow That actually does not reproduce anything. Your Sidekiq process isn't running, and since you have configured a global TTL of 1 hour, the reaper would just consider the lock active after 5 seconds.
@mhenrixon Sorry if I wasn't clear. Prior to executing this script, the Sidekiq process should also be running.
Describe the bug
I'm not sure if I am doing something wrong. The unique jobs used to work, but I recently upgraded sidekiq and sidekiq-unique-jobs.
Below is a job that is supposed to be locked until and while executing, but as you can see there are 2 of them:
```ruby
["worker.1:3:9598347b346b",
 "14frb",
 {"queue"=>"default",
  "payload"=>
   {"retry"=>false,
    "queue"=>"default",
    "backtrace"=>true,
    "lock"=>"until_and_while_executing",
    "args"=>[],
    "class"=>"RunSponsoredAdsQueue",
    "jid"=>"8e11befa3b4600d7c4749941",
    "created_at"=>1661867359.8881626,
    "lock_timeout"=>nil,
    "lock_ttl"=>nil,
    "lock_prefix"=>"uniquejobs",
    "lock_args"=>[],
    "lock_digest"=>"uniquejobs:82157bcb53420755466db7f4590d54e6",
    "enqueued_at"=>1661867361.2786112},
  "run_at"=>1661867361}],
["worker.1:3:9598347b346b",
 "1hwq7",
 {"queue"=>"default",
  "payload"=>
   {"retry"=>false,
    "queue"=>"default",
    "backtrace"=>true,
    "lock"=>"until_and_while_executing",
    "args"=>[],
    "class"=>"RunSponsoredAdsQueue",
    "jid"=>"ecafefaf4c6b71ce01daaee4",
    "created_at"=>1661867957.9982724,
    "lock_timeout"=>nil,
    "lock_ttl"=>nil,
    "lock_prefix"=>"uniquejobs",
    "lock_args"=>[],
    "lock_digest"=>"uniquejobs:82157bcb53420755466db7f4590d54e6",
    "enqueued_at"=>1661867959.0355272},
  "run_at"=>1661867959}],
```
Expected behavior
If 1 job is running and holds the lock, no other job should be allowed to run.
Current behavior
Currently, 7 of these jobs are running at the same time.
Worker class