Memory usage keeps increasing over the course of a single training epoch, then drops back to normal once the epoch ends.
I'm not sure whether this is a bug or intended behavior, but it forced me to split my training set into smaller chunks and rotate them in and out every "epoch".
I removed all the wandb logging code (especially the per-epoch logging flags) from my training setup, but that didn't help.
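The chunk-rotation workaround described above can be sketched roughly like this; the function names and the `train_one_pass` callback are hypothetical placeholders, not code from the issue:

```python
import random

def make_chunks(n_items, n_chunks, seed=0):
    """Shuffle indices and split them into n_chunks roughly equal parts."""
    rng = random.Random(seed)
    indices = list(range(n_items))
    rng.shuffle(indices)
    size = (n_items + n_chunks - 1) // n_chunks
    return [indices[i:i + size] for i in range(0, n_items, size)]

def train_in_chunks(dataset, n_chunks, train_one_pass):
    """Run one 'epoch' per chunk so only a fraction of the dataset
    is materialized at a time, bounding peak memory growth."""
    seen = 0
    for chunk in make_chunks(len(dataset), n_chunks):
        subset = [dataset[i] for i in chunk]  # only this chunk lives in memory
        train_one_pass(subset)                # your existing epoch loop goes here
        seen += len(subset)
    return seen
```

With 10 samples and 3 chunks, each call to `train_one_pass` sees at most 4 samples, and every sample is visited exactly once per full rotation.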