y's associated with negative samples are occasionally the correct y #8
Jack Hall opened this issue on Jan 12, 2023:
Hi Mohammad,
Thank you so much for your nice implementation of the FF algorithm. I'm not a PyTorch expert, so I'm sure there is a much more efficient way to do this, but I have a possible correction to make: in lines 105 and 106 of main.py you generate the negative samples like so:

rnd = torch.randperm(x.size(0))
x_neg = overlay_y_on_x(x, y[rnd])
With this approach, it is possible that some samples we are treating as "negative" are actually positive samples: y[rnd][i] == y[i] happens occasionally, since there are only 10 label options. For a random permutation of a roughly class-balanced batch, about 1 in 10 of the "negative" samples ends up carrying its own correct label.
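A quick way to see the size of the effect (a standalone sketch, not code from main.py; the label tensor here is a stand-in for the real y):

import torch

# Stand-in for a batch of MNIST labels: 10 roughly balanced classes.
y = torch.randint(0, 10, (50000,))
rnd = torch.randperm(y.size(0))

# Fraction of "negative" samples that still carry their correct label.
collision_rate = (y[rnd] == y).float().mean().item()
print(f"accidental positives: {collision_rate:.3f}")  # ~0.100

So roughly 10% of the supposed negatives are actually correctly labelled samples.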
Here is my hacky, slow workaround; to use it you must import numpy as np. Perhaps someone like yourself with more PyTorch experience can optimize it:

y_neg = y.clone()
for idx, y_samp in enumerate(y):
    # All 10 labels except this sample's true label.
    allowed_indices = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    allowed_indices.pop(y_samp.item())
    # Draw one of the 9 wrong labels uniformly at random.
    y_neg[idx] = torch.tensor(np.random.choice(allowed_indices)).cuda()
x_neg = overlay_y_on_x(x, y_neg)
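One possible vectorized version (a sketch, untested against the repo; it assumes y holds labels 0 through 9):

# Adding a random offset in [1, 9] modulo 10 can never map a label
# back to itself, so y_neg[i] != y[i] is guaranteed.
offsets = torch.randint(1, 10, y.shape, device=y.device)
y_neg = (y + offsets) % 10
x_neg = overlay_y_on_x(x, y_neg)

Each of the 9 wrong labels is drawn with equal probability, matching the loop above, but with no Python-level loop and no host-device transfers.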
Thanks again for posting your implementation publicly, it's certainly a fascinating approach and I will have to spend more time letting the details of the paper marinate :P
Comments

mpezeshki replied:
Thanks for pointing that out! I will keep this in mind for the next update.
Hi @mpezeshki, it seems that the issue mentioned above is not yet fixed in the code. Would you like us to code the fix and submit a PR for you to approve? (That way you would only need to click a few approval-related buttons to get the issue fixed.)