
Memory increases during training epoch, limiting training set size. #543

Open
nkemnitz opened this issue Nov 7, 2023 · 0 comments
nkemnitz (Collaborator) commented Nov 7, 2023

Memory keeps increasing during a single training epoch, after which it drops back to normal levels.
I'm not sure if this is a bug or intended behavior, but it meant I had to split my training set into smaller chunks and rotate them in and out every "epoch".

I removed all the wandb logging code (especially the per-epoch logging flags) from my regime, but that didn't help.
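For reference, the chunk-rotation workaround I'm using looks roughly like the sketch below. It assumes a PyTorch-style map dataset and a per-epoch training function; the names (`full_dataset`, `train_one_epoch`, `chunk_size`) are illustrative, not taken from the actual regime.

```python
import random
from torch.utils.data import DataLoader, Subset

def chunked_epochs(full_dataset, train_one_epoch, chunk_size, num_epochs, batch_size=8):
    """Rotate smaller chunks of the full dataset in and out each "epoch"
    so that memory drops back to baseline between chunks.

    Hypothetical sketch: `full_dataset` is any map-style dataset and
    `train_one_epoch(loader)` runs one pass over the given DataLoader.
    """
    indices = list(range(len(full_dataset)))
    for _ in range(num_epochs):
        random.shuffle(indices)
        # Walk through the shuffled indices one chunk at a time; each chunk
        # is treated as its own short "epoch".
        for start in range(0, len(indices), chunk_size):
            chunk = Subset(full_dataset, indices[start:start + chunk_size])
            loader = DataLoader(chunk, batch_size=batch_size, shuffle=False)
            train_one_epoch(loader)
```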
