Replies: 3 comments
-
Why would it take longer to encode 3 jobs from the same source file than 3 jobs from 3 different source files? I'd expect the same file to work better, if anything, because the jobs would share disk cache hits more often.
-
Thank you for your reply. I'll try to explain my observation. I ran into the same problem today with two concurrent jobs.
-
I reproduced your issue and also spotted another problem that could occur after manually stopping a single job. I refactored how encode completions are tracked, so as of 8.11 Beta it should no longer loop endlessly. It might still immediately restart the job if you have encode slots open and there's nothing else in the queue, though.
-
It would be nice if you could add an option to limit the concurrent jobs to one or a custom number, because I have set concurrent jobs to 3, and if I encode variants of the same source file it takes much longer.
The best would be to add another Option like:
Maximum amount of concurrent jobs: "3"
When same source file, limit maximum amount of concurrent jobs to: "1" (<- user defined custom value)
or similar. Thank you for reading and hopefully considering.
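The requested behavior could be sketched as a small scheduler with two caps: a global slot limit and a per-source-file limit. This is only an illustrative sketch, not the application's real API; the class and method names (`EncodeScheduler`, `try_start`, `finish`) and the example file names are hypothetical.

```python
import threading
from collections import defaultdict

class EncodeScheduler:
    """Hypothetical sketch of the requested option: a global cap on
    concurrent jobs plus a per-source-file cap. Names are illustrative,
    not the application's actual implementation."""

    def __init__(self, max_concurrent=3, max_per_source=1):
        self.global_slots = threading.BoundedSemaphore(max_concurrent)
        self.max_per_source = max_per_source
        self.lock = threading.Lock()
        self.per_source = defaultdict(int)  # active job count keyed by source path

    def try_start(self, source_path):
        """Return True if a job reading source_path may start now."""
        with self.lock:
            if self.per_source[source_path] >= self.max_per_source:
                return False  # per-source limit reached, leave the job queued
            if not self.global_slots.acquire(blocking=False):
                return False  # all encode slots busy
            self.per_source[source_path] += 1
            return True

    def finish(self, source_path):
        """Release the slot held by a completed job."""
        with self.lock:
            self.per_source[source_path] -= 1
            self.global_slots.release()

sched = EncodeScheduler(max_concurrent=3, max_per_source=1)
print(sched.try_start("movie.mkv"))   # True  (first variant starts)
print(sched.try_start("movie.mkv"))   # False (same source, per-source limit is 1)
print(sched.try_start("other.mkv"))   # True  (different source file)
```

With `max_per_source=1`, variants of the same file run one after another while jobs from different sources still fill the remaining slots, which matches the suggested "when same source file, limit to 1" option.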