feat(ml): rename option
wr0124 committed Sep 25, 2024
1 parent 4bd6c01 commit cb7e1ec
Showing 3 changed files with 4 additions and 4 deletions.
2 changes: 1 addition & 1 deletion docs/options.md
@@ -70,7 +70,7 @@ Here are all the available options to call with `train.py`
| --G_unet_mha_num_heads | int | 1 | number of heads in the mha architecture |
| --G_unet_mha_res_blocks | array | [2, 2, 2, 2] | distribution of resnet blocks across the UNet stages, should have same size as --G_unet_mha_channel_mults |
| --G_unet_mha_vit_efficient | flag | | if true, use efficient attention in UNet and UViT |
- | --vid_max_sequence_length | int | 25 | max frame number for unet_vid in the PositionalEncoding |
+ | --G_unet_vid_max_sequence_length | int | 25 | max frame number for unet_vid in the PositionalEncoding |
| --G_uvit_num_transformer_blocks | int | 6 | Number of transformer blocks in UViT |

## Algorithm-specific
2 changes: 1 addition & 1 deletion docs/source/options.rst
@@ -130,7 +130,7 @@ Generator
+------------------------------------------------+-----------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| --G_unet_mha_vit_efficient | flag | | if true, use efficient attention in UNet and UViT |
+------------------------------------------------+-----------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
- | --vid_max_sequence_length                     | int             | 25                                                | max frame number for unet_vid in the PositionalEncoding                                                                                                                                                                                   |
+ | --G_unet_vid_max_sequence_length               | int             | 25                                                | max frame number for unet_vid in the PositionalEncoding                                                                                                                                                                                   |
+------------------------------------------------+-----------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| --G_uvit_num_transformer_blocks | int | 6 | Number of transformer blocks in UViT |
+------------------------------------------------+-----------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
4 changes: 2 additions & 2 deletions models/modules/unet_generator_attn/unet_generator_attn_vid.py
@@ -379,7 +379,7 @@ def __init__(
attention_block_types=("Temporal_Self", "Temporal_Self"),
cross_frame_attention_mode=None,
temporal_position_encoding=False,
-        temporal_position_encoding_max_len=None,
+        temporal_position_encoding_max_len=25,
temporal_attention_dim_div=1,
zero_initialize=True,
):
@@ -438,7 +438,7 @@ def __init__(
upcast_attention=False,
cross_frame_attention_mode=None,
temporal_position_encoding=False,
-        temporal_position_encoding_max_len=None,
+        temporal_position_encoding_max_len=25,
):
super().__init__()

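The `--G_unet_vid_max_sequence_length` option (and the new `temporal_position_encoding_max_len=25` default) bounds how many video frames the positional encoding table can index. A minimal sketch of the idea, assuming a standard sinusoidal encoding over the frame axis; the class name `TemporalPositionalEncoding` is hypothetical and the real module in `unet_generator_attn_vid.py` may differ:

```python
import math

import torch


class TemporalPositionalEncoding(torch.nn.Module):
    """Sinusoidal positional encoding over the frame (time) axis.

    Sketch only: illustrates why max_len caps the number of frames
    unet_vid can encode; not the repository's actual implementation.
    """

    def __init__(self, d_model, max_len=25):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1)  # (max_len, 1)
        # Geometric frequency schedule, one frequency per pair of channels.
        div_term = torch.exp(
            torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model)
        )
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        # Precomputed table of shape (max_len, d_model); frames beyond
        # max_len have no entry, hence the configurable limit.
        self.register_buffer("pe", pe)

    def forward(self, x):
        # x: (num_frames, batch, d_model), with num_frames <= max_len.
        return x + self.pe[: x.size(0)].unsqueeze(1)
```

With the default of 25, a clip longer than 25 frames would run past the precomputed table, which is why the limit is exposed as a training option rather than hard-coded.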
