Wrong loss function? #9
I notice that the code uses CrossEntropyLoss for local training:
LG-FedAvg/models/Update.py
Line 28 in 7af0568
And the loss is given log-probabilities as input:
LG-FedAvg/models/Update.py
Line 50 in 7af0568
The output of the CNN models is also log_softmax:
LG-FedAvg/models/Nets.py
Line 104 in 7af0568
But according to the PyTorch documentation, CrossEntropyLoss already applies log_softmax internally:
https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss
I think the loss should instead be computed with NLLLoss when the input has already passed through log_softmax (https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html#torch.nn.NLLLoss).
Or is there a reason the code applies log_softmax twice when calculating the loss?
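To make the relationship concrete, here is a small sketch (with made-up logits and targets, not taken from the repo) showing that CrossEntropyLoss on raw logits is equivalent to NLLLoss on log_softmax output, which is the substitution suggested above:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 10)            # hypothetical raw network outputs
targets = torch.randint(0, 10, (4,))   # hypothetical class labels

# What the CNN forward() in Nets.py effectively returns:
log_probs = F.log_softmax(logits, dim=1)

# Per the PyTorch docs, CrossEntropyLoss == log_softmax + NLLLoss,
# so these two values coincide:
ce_on_logits = F.cross_entropy(logits, targets)
nll_on_log_probs = F.nll_loss(log_probs, targets)
print(torch.allclose(ce_on_logits, nll_on_log_probs))  # True

# The pattern questioned in this issue: CrossEntropyLoss applied to
# log-probabilities runs log_softmax a second time inside the loss.
ce_on_log_probs = F.cross_entropy(log_probs, targets)
```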