Releases: lucidrains/DALLE-pytorch

0.0.61

15 Feb 00:02
fix mask for full attention so that last text token can actually pred…

0.0.60

14 Feb 05:25
remove some dead code

0.0.59

10 Feb 20:56
remove ability to do bidirectional attention on text, to keep reposit…

0.0.58

10 Feb 20:49
handle other types being passed to `attn_types` keyword arg

0.0.56

10 Feb 18:53
always use mask if sparse axial or conv attention needs to be padded

0.0.55

10 Feb 18:31
complete sparse attention integration with transformer, user can now …

0.0.54

10 Feb 15:07
do not tie vae codebook to dall-e image embedding by default, and if …

0.0.53

05 Feb 22:22
fix causal masking in sparse conv attention

0.0.52

05 Feb 17:45
allow for dilation in sparse convolutional causal attention

0.0.51

05 Feb 17:28
complete sparse axial causal attention, where each row or column has …