Commit
Signed-off-by: Yu Chin Fabian Lim <[email protected]>
Showing 5 changed files with 178 additions and 1 deletion.
51 changes: 51 additions & 0 deletions
sample-configurations/moe-scattermoe-granite-ep1-padding-free-foak-sample-configuration.yaml
@@ -0,0 +1,51 @@
# FMS Acceleration Plugin Configuration.
#
# Each stanza incorporates various configurations for
# different fine-tuning / training tasks.
plugins:
  # Configurations to accelerate data packing/padding in training
  training:

    # attention module configurations
    # e.g. padding-free modifications to attention layer
    attention:

      # this controls the configurations for padding-free computation of flash attention
      padding_free:
        method: huggingface
    fused_ops_and_kernels:

      # if under training stanza, then putting
      # base_layer and fused_lora will be a misnomer
      # - this should be in peft.quantized
      # However, if it is specified, it will still
      # be read. This is useful in use cases where
      # the yaml is system generated and not shown
      # to a user.

      # activate various unsloth optimizations
      # there are two versions of the plugin
      # - the FastKernel version supports individual kernels
      # - the FastQuantized version is all-or-nothing

      # fast loss triton kernels
      fast_loss: true

      # fast rms norm triton kernels
      fast_rms_layernorm: true

      # fast RoPE embedding triton kernels
      fast_rope_embeddings: true
    moe:

      # expert-parallel for MoE
      scattermoe:

        # The level of expert parallel sharding.
        # - 1 means no sharding
        # - if > 1, please ensure that this divides the world_size. This is because
        #   the devices will be replicated for every ep_degree devices, and
        #   the experts will be sharded within each group.
        # - if > 1, also ensure that it divides the number of experts, as each device
        #   will then have num_of_experts / ep_degree experts.
        ep_degree: 1
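
The two divisibility rules spelled out in the scattermoe comments are easy to check before launching a run. The snippet below is an illustrative sketch only, not part of this commit or of the plugin's API; num_experts and world_size are assumed inputs the caller would supply.

def check_ep_degree(ep_degree: int, num_experts: int, world_size: int) -> None:
    # Hypothetical validation mirroring the comments above; not plugin code.
    if ep_degree < 1:
        raise ValueError("ep_degree must be at least 1")
    if ep_degree > 1:
        # devices are replicated in groups of ep_degree, so the group size
        # must divide the total number of devices
        if world_size % ep_degree != 0:
            raise ValueError(f"ep_degree={ep_degree} must divide world_size={world_size}")
        # experts are sharded within each group, num_of_experts / ep_degree per device
        if num_experts % ep_degree != 0:
            raise ValueError(f"ep_degree={ep_degree} must divide num_experts={num_experts}")

# e.g. the ep1 configuration above is valid for any world size
# (32 experts is a made-up count for illustration):
check_ep_degree(ep_degree=1, num_experts=32, world_size=8)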
51 changes: 51 additions & 0 deletions
sample-configurations/moe-scattermoe-granite-ep2-padding-free-foak-sample-configuration.yaml
@@ -0,0 +1,51 @@
# FMS Acceleration Plugin Configuration.
#
# Each stanza incorporates various configurations for
# different fine-tuning / training tasks.
plugins:
  # Configurations to accelerate data packing/padding in training
  training:

    # attention module configurations
    # e.g. padding-free modifications to attention layer
    attention:

      # this controls the configurations for padding-free computation of flash attention
      padding_free:
        method: huggingface
    fused_ops_and_kernels:

      # if under training stanza, then putting
      # base_layer and fused_lora will be a misnomer
      # - this should be in peft.quantized
      # However, if it is specified, it will still
      # be read. This is useful in use cases where
      # the yaml is system generated and not shown
      # to a user.

      # activate various unsloth optimizations
      # there are two versions of the plugin
      # - the FastKernel version supports individual kernels
      # - the FastQuantized version is all-or-nothing

      # fast loss triton kernels
      fast_loss: true

      # fast rms norm triton kernels
      fast_rms_layernorm: true

      # fast RoPE embedding triton kernels
      fast_rope_embeddings: true
    moe:

      # expert-parallel for MoE
      scattermoe:

        # The level of expert parallel sharding.
        # - 1 means no sharding
        # - if > 1, please ensure that this divides the world_size. This is because
        #   the devices will be replicated for every ep_degree devices, and
        #   the experts will be sharded within each group.
        # - if > 1, also ensure that it divides the number of experts, as each device
        #   will then have num_of_experts / ep_degree experts.
        ep_degree: 2
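
To make the ep_degree: 2 case concrete: with, say, four GPUs, the ranks would form two replica groups of two devices each, and the experts would be split in half within every group. The numbers below are assumptions for illustration only, not values taken from this commit.

# Illustration of the grouping described in the comments above (assumed values).
ep_degree = 2
world_size = 4        # must be a multiple of ep_degree
num_experts = 8       # hypothetical expert count; must also be divisible by ep_degree

groups = [list(range(start, start + ep_degree))
          for start in range(0, world_size, ep_degree)]
experts_per_device = num_experts // ep_degree

print(groups)              # [[0, 1], [2, 3]]  -> devices replicated per group
print(experts_per_device)  # 4                 -> experts sharded within each group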
51 changes: 51 additions & 0 deletions
sample-configurations/moe-scattermoe-granite-ep4-padding-free-foak-sample-configuration.yaml
@@ -0,0 +1,51 @@
# FMS Acceleration Plugin Configuration.
#
# Each stanza incorporates various configurations for
# different fine-tuning / training tasks.
plugins:
  # Configurations to accelerate data packing/padding in training
  training:

    # attention module configurations
    # e.g. padding-free modifications to attention layer
    attention:

      # this controls the configurations for padding-free computation of flash attention
      padding_free:
        method: huggingface
    fused_ops_and_kernels:

      # if under training stanza, then putting
      # base_layer and fused_lora will be a misnomer
      # - this should be in peft.quantized
      # However, if it is specified, it will still
      # be read. This is useful in use cases where
      # the yaml is system generated and not shown
      # to a user.

      # activate various unsloth optimizations
      # there are two versions of the plugin
      # - the FastKernel version supports individual kernels
      # - the FastQuantized version is all-or-nothing

      # fast loss triton kernels
      fast_loss: true

      # fast rms norm triton kernels
      fast_rms_layernorm: true

      # fast RoPE embedding triton kernels
      fast_rope_embeddings: true
    moe:

      # expert-parallel for MoE
      scattermoe:

        # The level of expert parallel sharding.
        # - 1 means no sharding
        # - if > 1, please ensure that this divides the world_size. This is because
        #   the devices will be replicated for every ep_degree devices, and
        #   the experts will be sharded within each group.
        # - if > 1, also ensure that it divides the number of experts, as each device
        #   will then have num_of_experts / ep_degree experts.
        ep_degree: 4
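
For the ep_degree: 4 variant the same arithmetic applies: the world size has to be a multiple of 4, and each device ends up holding a quarter of the experts. The expert count below is a placeholder, not a value from this commit; the real number depends on the Granite MoE checkpoint being tuned.

# Assumed numbers to make the ep_degree: 4 comments concrete.
ep_degree = 4
num_experts = 32                     # placeholder expert count
candidate_world_sizes = [1, 2, 4, 8, 16]

usable = [ws for ws in candidate_world_sizes if ws % ep_degree == 0]
print(usable)                        # [4, 8, 16]
print(num_experts // ep_degree)      # 8 experts resident on each device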