
Cannot work with higher TF version #20

Open
bangnguyen89 opened this issue Mar 13, 2018 · 1 comment


@bangnguyen89

I tried to run your code with a higher TF version (1.5), but something goes wrong in these lines:

    decoder_cell = tf.contrib.seq2seq.DynamicAttentionWrapper(
        decoder_cell, attention, state_size * 2)
    wrapper_state = tf.contrib.seq2seq.DynamicAttentionWrapperState(
        self.init_state, self.prev_att)

DynamicAttentionWrapper and DynamicAttentionWrapperState are deprecated in TF 1.5. They were replaced by AttentionWrapper and AttentionWrapperState, but the new input parameters are quite complex. Could you please update the code to be compatible with the new TF version?
Thank you so much
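
For reference, a rough migration sketch under TF 1.5 might look like the following. It assumes `attention` is already an attention mechanism instance (e.g. tf.contrib.seq2seq.LuongAttention) and that a batch_size variable exists in the surrounding model code; neither assumption comes from this repo.

    # Sketch only: AttentionWrapper takes the attention size via the
    # attention_layer_size keyword argument.
    decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
        decoder_cell, attention, attention_layer_size=state_size * 2)

    # AttentionWrapperState carries extra fields (time, alignments,
    # alignment_history, ...), so rather than constructing it by hand,
    # start from zero_state() and clone() in the known pieces.
    wrapper_state = decoder_cell.zero_state(batch_size, tf.float32).clone(
        cell_state=self.init_state, attention=self.prev_att)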

@Sparsh-Bansal

This may be because the pretrained model was trained on TF versions 1.0 and 1.1.
