Issue

Currently, whenever one Extractor fails with an error, the remaining Extractors still start processing, even though Integration will be skipped anyway: the Integrator explicitly checks whether there have been any errors and, if so, disallows integrating into the pipeline.

The problem with this is that after the first Extractor fails, other long-running Extractors still try to continue, even though we already know their work will be useless.
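The skip described above can be sketched as follows. This is a hedged stand-in, not pyblish's actual Integrator code; the function and the result-dict shape are illustrative, loosely mirroring the per-plug-in `{"error": ...}` records pyblish collects.

```python
# Illustrative sketch (not pyblish API): an Integrator-style guard that
# refuses to run when any earlier plug-in recorded an error.
def integrate(results):
    if any(result["error"] is not None for result in results):
        return "integration skipped"
    return "integrated"

# One failed extraction is enough to block integration.
print(integrate([{"plugin": "ExtractModel", "error": RuntimeError("disk full")}]))
```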
Solution

We can override Pyblish's default test, which only stops after validation when errors have occurred, with our own test that also stops in our other cases.

For example:

```python
import pyblish.api
import pyblish.logic


def custom_test(**vars):
    # Keep default behavior.
    default_result = pyblish.logic.default_test(**vars)
    if default_result:
        return default_result

    # Add custom behavior:
    # fail on anything after validation having an error.
    after_validation = pyblish.api.ValidatorOrder + 0.5
    if any(order >= after_validation for order in vars["ordersWithErrors"]):
        return "failed after validation"


pyblish.api.register_test(custom_test)
```
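To see what this test logic does without a pyblish install, here is a self-contained sketch. The order values follow pyblish's convention (ValidatorOrder is 1, Extractors sit at order 2); the re-implementation of the default validation check is an approximation for illustration, not pyblish's exact source.

```python
# Standalone approximation of the ordering logic (no pyblish required).
VALIDATOR_ORDER = 1  # pyblish.api.ValidatorOrder

def custom_test(**vars):
    orders = vars["ordersWithErrors"]
    # Approximation of pyblish.logic.default_test:
    # stop when validation (or earlier) errored.
    if any(order < VALIDATOR_ORDER + 0.5 for order in orders):
        return "failed validation"
    # Custom addition: also stop when anything after validation errored.
    if any(order >= VALIDATOR_ORDER + 0.5 for order in orders):
        return "failed after validation"

print(custom_test(ordersWithErrors=[2]))  # an Extractor (order 2) errored
print(custom_test(ordersWithErrors=[]))   # no errors: publish continues
```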
Note that this has the downside that the Cleanup plug-in would not get triggered either, so the local disk (staging dir) might fill up with leftover temporary folders. This could be an additional problem that would need to be taken care of...

Another workaround could be to have our Extractors themselves initially check whether any errors have occurred and, if so, raise an error, while having the Cleanup plug-in always run regardless.
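That second workaround can be sketched like this. These are hypothetical helpers, not pyblish API; in a real plug-in the results would come from the Context's collected results rather than a plain list.

```python
# Hypothetical sketch: an Extractor that bails out early when a prior
# plug-in errored, while a Cleanup step runs no matter what.
def prior_errors(results):
    return any(result["error"] is not None for result in results)

def extract(results):
    if prior_errors(results):
        raise RuntimeError("Skipping extraction: an earlier plug-in failed.")
    return "extracted"  # long-running extraction work would happen here

def cleanup(staging_dir_files):
    # Always runs, so the staging dir never fills up with leftovers.
    staging_dir_files.clear()
```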
This solution would fail the publish as soon as any plug-in fails. That way you would stop long-running extractors.
```python
import pyblish.util

context = pyblish.util.collect()

# Error exit if nothing was collected.
if not context:
    raise ValueError("Nothing collected.")

# Error exit as soon as any error occurs.
for result in pyblish.util.publish_iter(context):
    if result["error"]:
        raise ValueError(result["error"])
```
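A quick way to see the early exit in action without pyblish is to fake the iterator. `fake_publish_iter` below is a made-up stand-in for `pyblish.util.publish_iter`, yielding the same kind of result dicts; the plug-in names are invented for illustration.

```python
# Stand-in for pyblish.util.publish_iter: yields one result dict per
# plug-in. Raising on the first error means later plug-ins never run.
def fake_publish_iter():
    yield {"plugin": "CollectScene", "error": None}
    yield {"plugin": "ValidateNames", "error": Exception("bad name")}
    yield {"plugin": "ExtractHeavy", "error": None}  # never reached

processed = []
try:
    for result in fake_publish_iter():
        processed.append(result["plugin"])
        if result["error"]:
            raise ValueError(result["error"])
except ValueError:
    pass

print(processed)  # the long-running ExtractHeavy was skipped
```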
Thanks @tokejepsen - the example I provided at the top should do so too, and should also work in Pyblish QML and Pyblish Lite. :) Or did I do something wrong in my code example? Or is there a bug in Pyblish?