
Optimize train_CSL_graph_classification.py #84

Open
wants to merge 1 commit into base: master

Conversation

mzamini92

In both the training and evaluation loops, the calls to `loss.detach().item()` are redundant: `.item()` already returns a plain Python number detached from the autograd graph, so `loss.item()` is enough. You can compute the loss without detaching it and only detach it at a later stage if you actually need a tensor that lives outside the graph.
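A minimal sketch of that change, using a toy model and data in place of the actual objects in `train_CSL_graph_classification.py` (the names below are illustrative assumptions, not the repository's code):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the real model, loss, optimizer, and data loader.
model = nn.Linear(8, 3)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
data_loader = [(torch.randn(4, 8), torch.randint(0, 3, (4,))) for _ in range(5)]

epoch_loss = 0.0
for batch_x, batch_labels in data_loader:
    optimizer.zero_grad()
    scores = model(batch_x)
    loss = criterion(scores, batch_labels)
    loss.backward()
    optimizer.step()
    # .item() already returns a Python float detached from the autograd
    # graph, so the extra .detach() call is redundant:
    epoch_loss += loss.item()          # instead of loss.detach().item()
```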
In the `train_epoch_dense` function, you can remove the manual batch handling using `(iter % batch_size)` and instead rely on the `batch_size` parameter of the `data_loader`. The DataLoader automatically handles the batch iteration for you.
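A rough sketch of that idea, assuming the dense loop currently counts single-graph iterations with `(iter % batch_size)`; the dataset here is a toy `TensorDataset` standing in for the CSL data, not the repository's loading code:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for the CSL dataset (assumption): feature vectors plus
# integer class labels.
features = torch.randn(40, 8)
labels = torch.randint(0, 3, (40,))
trainset = TensorDataset(features, labels)

# The DataLoader forms the batches itself, so no (iter % batch_size)
# bookkeeping is needed inside the epoch loop.
train_loader = DataLoader(trainset, batch_size=5, shuffle=True)

model = torch.nn.Linear(8, 3)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for batch_x, batch_labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(batch_x), batch_labels)
    loss.backward()
    optimizer.step()
```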
In both the `train_epoch_sparse` and `train_epoch_dense` functions, you can move the `optimizer.zero_grad()` call outside the loop, just before the loop starts. This will avoid unnecessary repeated calls to `zero_grad()`.
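A hedged sketch of that restructuring, assuming a standard non-accumulating loop like the toy one above: the first `zero_grad()` is hoisted out of the loop, but gradients still need clearing after each `optimizer.step()`, otherwise they would accumulate across batches.

```python
import torch

model = torch.nn.Linear(8, 3)                # toy stand-in for the real model
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
data = [(torch.randn(4, 8), torch.randint(0, 3, (4,))) for _ in range(5)]

optimizer.zero_grad()                        # hoisted initial call
for batch_x, batch_labels in data:
    loss = criterion(model(batch_x), batch_labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()                    # clear gradients for the next
                                             # batch; skipping this would
                                             # accumulate them across batches.
```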