We are running this model and would like to reduce the processing time. Is there any way to tune hyperparameters and/or parallelise inference so that we can take advantage of more compute?
Adding the --no_flip option to the command line in my case gives roughly a 5-second improvement in the inference phase. There is a minor reduction in image quality, but the time savings make it a worthwhile trade-off.
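
On the parallelisation side, one option is plain data parallelism: split the test images into shards and run one inference process per GPU. The sketch below is only an illustration under assumptions — the directory layout and the `test.py` interface with `--dataroot`, `--gpu_ids`, and `--no_flip` flags are borrowed from pix2pix-style repos and may need adapting to this project's actual options.

```python
# Hypothetical sketch: shard the test set across GPUs and launch one
# inference process per shard. Paths, flag names, and the test.py
# interface are assumptions, not this repo's confirmed API.
import os
import shutil
import subprocess
from multiprocessing import Pool

SRC_DIR = "datasets/my_images/test"   # hypothetical input directory
SHARD_ROOT = "datasets/shards"        # hypothetical working directory
NUM_GPUS = 4                          # number of available GPUs

def make_shards():
    """Copy the test images into NUM_GPUS roughly equal shard directories."""
    files = sorted(os.listdir(SRC_DIR))
    shard_roots = []
    for i in range(NUM_GPUS):
        shard_dir = os.path.join(SHARD_ROOT, f"shard_{i}", "test")
        os.makedirs(shard_dir, exist_ok=True)
        for name in files[i::NUM_GPUS]:
            shutil.copy(os.path.join(SRC_DIR, name), shard_dir)
        shard_roots.append(os.path.dirname(shard_dir))
    return shard_roots

def run_shard(args):
    """Run the (assumed) test script on one shard, pinned to one GPU."""
    gpu_id, shard_root = args
    cmd = [
        "python", "test.py",
        "--dataroot", shard_root,
        "--gpu_ids", str(gpu_id),
        "--no_flip",               # skip the flipped pass, as noted above
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    shards = make_shards()
    with Pool(NUM_GPUS) as pool:
        pool.map(run_shard, list(enumerate(shards)))
```

Since each shard runs in its own process on its own GPU, total wall-clock time should scale roughly with the number of GPUs, minus the fixed per-process model-loading overhead.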