Why does post-processing (even in versions as early as 3.67) take so long that it basically doesn't work anymore? Is it just me? #836
-
I've tried the newest version, and I've gone back several updates, and if I add any post-processing at all, like Codeformer or GPEN, it takes so long that it's basically unusable. I'm trying to figure out if there's something I can do to get its effectiveness and speed back!
Replies: 6 comments 7 replies
-
It's just you. On the contrary, it should be a lot faster since the models were converted from PyTorch to ONNX. Stupid question, but did you verify you're running on CUDA? Or is your VRAM exhausted when using post-processors (most likely)?
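One quick way to check the CUDA question is to ask onnxruntime which execution providers it actually has. This is a minimal sketch, assuming onnxruntime is installed as roop's inference backend (the models were converted to ONNX, per the reply above); the `has_cuda` helper is just for illustration, not part of roop:

```python
def has_cuda(providers):
    """Return True if the CUDA execution provider is in the list."""
    return "CUDAExecutionProvider" in providers

# If only CPUExecutionProvider shows up, inference is silently running
# on the CPU (e.g. plain "onnxruntime" was installed instead of
# "onnxruntime-gpu"), which makes post-processing crawl.
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
    print("Providers:", providers)
    print("CUDA available:", has_cuda(providers))
except ImportError:
    print("onnxruntime is not installed")
```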
-
I'm tellin ya! Somethin ain't right!! I just ordered a 12 GB 3060 that will be here in a couple days, and I have a gut feeling that it still isn't gonna be working right. I might just end up reinstalling Windows and starting fresh; it's been a while anyways. But I know something's going on that is causing the major drop in performance, and it isn't my specs. I don't mean to imply it isn't on my end, but it's just driving me crazy that I haven't been able to figure out what it is.
-
Update: I ran pip cache purge inside the roop folder and it removed about 1700 items, I did a fresh install, and now post-processing seems to be working again! I feel like I've had much better results than this before, but there could be a couple of factors behind that. Once this 3060 gets delivered with that meaty 12 GB, I think I'll have much better results. Currently did this with 3.9; I'm afraid to upgrade now! lol
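For anyone hitting the same thing, the cache-purge-plus-fresh-install sequence described above is roughly the following command fragment (assuming pip is on your PATH and a requirements.txt in the roop folder; exact file names may differ in your install):

```shell
# Purge pip's cache of downloaded wheels so no stale or corrupted
# packages linger, then reinstall the dependencies fresh.
pip cache purge
pip install --force-reinstall -r requirements.txt
```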
-
Glad you could sort it out. I still believe the most important change was reducing the number of threads, because 6 GB of VRAM is really just too little for post-processing in parallel.
-
Looks good to me. The only thing left for you to tweak is the number of threads. Perhaps 3 is already too much currently.
There is a benchmark thread here:
#461
and a general FAQ entry:
https://github.com/C0untFloyd/roop-unleashed/wiki/FAQ#why-is-my-graphics-card-gpu-not--hardly-being-utilized
You know, the most important thing when using a GPU is not running out of VRAM! Because if that happens, memory gets shuffled between your regular RAM and the GPU's video memory, and this can slow processing down to a crawl!
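To see whether VRAM exhaustion is actually the bottleneck, you can watch GPU memory while a render runs. A minimal sketch using nvidia-smi, which ships with the NVIDIA driver; the helper names here are my own, not part of roop:

```python
import subprocess

def parse_meminfo(csv_line):
    """Parse one 'used, total' CSV line from nvidia-smi (values in MiB)."""
    used, total = (int(x.strip()) for x in csv_line.split(",")[:2])
    return used, total

def gpu_memory_mb():
    """Query current GPU memory usage (used_mb, total_mb) via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_meminfo(out.splitlines()[0])
```

If used memory stays pinned near the total (say 5.9 GB of 6 GB) during post-processing, drop the thread count until it no longer maxes out.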