Hi,

While running the LPIPS loss based on AlexNet, I obtained a negative value.

While looking at the values contained in `res` (defined in `forward()`), I have noticed that the implementation does not match Eq. 1 from the paper.

Here's Eq. 1:

$$d(x, x_0) = \sum_l \frac{1}{H_l W_l} \sum_{h,w} \left\lVert w_l \odot \left( \hat{y}^l_{hw} - \hat{y}^l_{0hw} \right) \right\rVert_2^2$$

What is implemented, however, applies the per-channel square to the feature difference *before* the learned weights `w_l`, rather than squaring the weighted difference.

The square operation `** 2` at line 94 should be removed and instead applied to `self.lins[kk].model(diffs[kk])` (at lines 98 and 100), and to `diffs[kk]` (at lines 103 and 105).

Thanks in advance,
Guillaume
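To make the discrepancy concrete, here is a minimal pure-Python sketch (all values hypothetical, standing in for one spatial position of one layer) contrasting Eq. 1, which squares the weighted difference, with the weighting of an already-squared difference. With non-negative weights both quantities are non-negative, but they generally differ (per channel, `w` versus `w**2`):

```python
import random

random.seed(0)
w = [random.uniform(0.0, 1.0) for _ in range(8)]  # hypothetical non-negative per-channel weights
d = [random.gauss(0.0, 1.0) for _ in range(8)]    # hypothetical per-channel feature differences

# Eq. 1 from the paper: squared L2 norm of the weighted difference, || w ⊙ d ||_2^2
paper = sum((wi * di) ** 2 for wi, di in zip(w, d))

# The other ordering: weights applied to the already-squared difference, sum(w ⊙ d²)
code = sum(wi * di ** 2 for wi, di in zip(w, d))

print(paper, code)  # two different totals, both >= 0 when all weights are >= 0
```

Both sums are non-negative whenever the weights are, which is the property the reply below relies on; the two orderings coincide only when every weight is 0 or 1.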
If the code is installed and the weights are loaded properly (and weren't changed by accidentally fine-tuning them, for example), it is not possible to get negative values.

Check that the weights are all non-negative by doing the following:

```python
for ll in range(5):
    print(loss_fn_vgg.lins[ll].model[1].weight.flatten())
```
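The printed check can also be collapsed into a single boolean; sketched here with plain Python lists of made-up values standing in for the five flattened weight tensors (`loss_fn_vgg.lins[ll].model[1].weight.flatten()`):

```python
# Made-up stand-ins for the 5 flattened linear-layer weight tensors
stage_weights = [
    [0.03, 0.00, 0.12],
    [0.07, 0.01, 0.00],
    [0.25, 0.09, 0.04],
    [0.11, 0.00, 0.02],
    [0.06, 0.08, 0.01],
]

# These weights multiply squared (hence non-negative) feature differences,
# so the distance cannot go negative as long as every weight is >= 0.
all_non_negative = all(w >= 0.0 for ws in stage_weights for w in ws)
print(all_non_negative)  # True for these values
```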