
question about grads of alphas in hard Attention #18

Open · denglixi opened this issue Mar 4, 2016 · 4 comments

denglixi commented Mar 4, 2016

Hello, I am really confused about the gradients of the alphas in hard attention. The relevant source code is at line 1199:

```python
known_grads={alphas: opt_outs['masked_cost'][:, :, None] / 10. *
    (alphas_sample / alphas) + alpha_entropy_c * (tensor.log(alphas) + 1)})
```

Can anyone explain this to me, please?
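
For what it's worth, my best guess so far: this looks like the REINFORCE (score-function) gradient from the stochastic hard attention section of the Show, Attend and Tell paper, plus an entropy regulariser. Since `alphas_sample` is a one-hot draw `s` from a multinoulli with parameters `alphas`, we have `log p(s | alphas) = sum_i s_i * log(alpha_i)`, whose gradient w.r.t. `alphas` is `alphas_sample / alphas`; scaling by `masked_cost / 10.` gives the score-function term. The second term matches the gradient of the negative-entropy regulariser `sum_i alpha_i * log(alpha_i)`, since `d/d_alpha (alpha * log(alpha)) = log(alpha) + 1`. A minimal NumPy sketch of this reading (one timestep, one sample; all names and constants here are illustrative, not the repo's):

```python
import numpy as np

rng = np.random.default_rng(0)
L_loc = 5  # number of attention locations (tiny here for illustration)

alphas = rng.dirichlet(np.ones(L_loc))       # soft attention weights, sum to 1
alphas_sample = rng.multinomial(1, alphas)   # one-hot hard attention draw s
masked_cost = 2.3                            # stand-in for opt_outs['masked_cost']
alpha_entropy_c = 0.002                      # entropy regulariser weight

# Score-function (REINFORCE) term:
#   d/d_alpha [cost * log p(s | alpha)] = cost * (s / alpha),
# because log p(s | alpha) = sum_i s_i * log(alpha_i).
reinforce_term = masked_cost / 10. * (alphas_sample / alphas)

# Entropy regulariser term:
#   d/d_alpha_i (alpha_i * log(alpha_i)) = log(alpha_i) + 1.
entropy_term = alpha_entropy_c * (np.log(alphas) + 1.)

# The combined gradient that known_grads injects for alphas:
grad_alphas = reinforce_term + entropy_term
print(grad_alphas)
```

If that reading is right, `theano.grad` with `known_grads` then backpropagates this hand-specified gradient from `alphas` into the parameters that produced it. Can someone confirm?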

ysjakking commented

@denglixi I am also confused by this. Did you find the answer?

SijieSong commented

@denglixi @ysjakking Did you figure out the answer? I am also confused. Can anyone help me?

shaoxuan92 commented

Me too...

AlvinAi96 commented

@denglixi @ysjakking @shaoxuan92 @SijieSong Did you find any solution? I have also come across this problem.
