
[Wait for #2607][layer] enable mixed precision - reshape_layer #2610

Closed
wants to merge 14 commits

Conversation

DonghakPark
Member

[layer] enable mixed precision - reshape_layer

enable mixed precision on reshape layer

  • the reshape layer only changes dimensions, so change the dimensions and check the data type (see the sketch below)
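
As a minimal sketch of the idea (the names here are hypothetical, not the actual nntrainer API), the reshape only rewrites the dimensions while keeping the element count and propagating the incoming data type:

```cpp
#include <stdexcept>

// Illustrative sketch only: reshape keeps the element count and
// propagates the input data type to the target dimension.
enum class DataType { FP16, FP32 };

struct Dim {
  unsigned batch, channel, height, width;
  DataType dtype;
  unsigned getDataLen() const { return batch * channel * height * width; }
};

Dim reshape(const Dim &in, Dim target) {
  if (in.getDataLen() != target.getDataLen())
    throw std::invalid_argument("reshape must preserve the element count");
  target.dtype = in.dtype; // only dimensions change; keep the data type
  return target;
}
```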

Self evaluation:

  1. Build test: [X]Passed [ ]Failed [ ]Skipped
  2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK [email protected]

@taos-ci
Collaborator

taos-ci commented May 30, 2024

📝 TAOS-CI Version: 1.5.20200925. Thank you for submitting PR #2610. Please follow the 1 commit/1 PR (one commit per PR) policy to get comments quickly from reviewers. Your PR must pass all verification processes of cibot before starting a review process from reviewers. If you are a new member joining this project, please read the manuals in the documentation folder and wiki page. In order to monitor the progress status of your PR in more detail, visit http://ci.nnstreamer.ai/.

We will add a Var32 Tensor if the Variable Weight is not Full
Precision (FP32). This enables the Weight Update with full precision,
and only the Apply Gradient Process uses this Tensor. Therefore, the
lifespan of this tensor should be "ApplyGradient".

. Modify TensorPool to generate the Weight considering Mixed Precision.
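
A hedged sketch of that flow, with purely illustrative identifiers (TensorSpec, requestWeight, and the Lifespan enum are assumptions, not the real TensorPool API): when the weight type is not FP32, one extra FP32 spec is requested whose lifespan is limited to apply-gradient.

```cpp
#include <string>
#include <vector>

// Illustrative sketch only; identifiers are not the actual nntrainer API.
enum class DataType { FP16, FP32 };
enum class Lifespan { Forward, Backward, ApplyGradient };

struct TensorSpec {
  std::string name;
  DataType dtype;
  Lifespan lifespan;
};

std::vector<TensorSpec> requestWeight(const std::string &name,
                                      DataType weight_type) {
  std::vector<TensorSpec> specs{{name, weight_type, Lifespan::Forward}};
  if (weight_type != DataType::FP32)
    // extra full-precision copy, alive only during apply-gradient
    specs.push_back({name + ":var32", DataType::FP32,
                     Lifespan::ApplyGradient});
  return specs;
}
```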

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
This PR creates the variable fp32 tensor when we create the Weight and
Optimizer Weight.

. update the manager to create the Weight with a var32 tensor, which is
requested from the weight pool.
. update the weight requests with the Weight Spec and the var, grad and
var32 tensors which were already created.
. add clone Tensor with a specific type in tensor.h
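
A minimal sketch of the clone-with-type idea (cloneAs is a hypothetical helper, not the actual tensor.h signature):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of "clone with a specific type": copy the tensor
// contents element-by-element into a buffer of the requested type.
template <typename From, typename To>
std::vector<To> cloneAs(const std::vector<From> &src) {
  std::vector<To> dst(src.size());
  for (std::size_t i = 0; i < src.size(); ++i)
    dst[i] = static_cast<To>(src[i]); // widening FP16 -> FP32 is lossless
  return dst;
}

// Illustrative use, assuming a compiler with the _Float16 extension:
// std::vector<float> var32 = cloneAs<_Float16, float>(fp16_weights);
```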

Resolves:

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
This PR enables FP16 support for the layers below:

. input layer
. mse loss layer

Resolves:

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
This PR includes the mixed precision test case.

. Input - FC - MSE
 : "batch_size=2", "model_tensor_type=FP16-FP16", "loss_scale=128"

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
This commit modifies apply gradient in the optimizer.
We do not need to save optimizer variables in the weight type. Only
the Optimizer needs the optimizer variables, and we should update the
weight with full precision to maintain accuracy. Therefore,
remove the var32 tensors for optimizer variables.
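
A hedged sketch of the resulting update (assuming a compiler with the _Float16 extension; all names are illustrative): the step runs on the FP32 master copy (var32), then the FP16 weight is synced from it.

```cpp
#include <cstddef>
#include <vector>

// Illustrative mixed-precision SGD step, not the actual optimizer code.
void applyGradientMixed(std::vector<float> &var32,     // FP32 master copy
                        std::vector<_Float16> &weight, // FP16 weight
                        const std::vector<_Float16> &grad,
                        float lr, float loss_scale) {
  for (std::size_t i = 0; i < var32.size(); ++i) {
    float g = static_cast<float>(grad[i]) / loss_scale; // unscale gradient
    var32[i] -= lr * g;                          // full-precision update
    weight[i] = static_cast<_Float16>(var32[i]); // copy back to FP16
  }
}
```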

Resolves:

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
This PR adds an is_NaN function to check whether a tensor has NaN
values. This is for checking NaN during mixed precision training.
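
A minimal scalar sketch of such a check (illustrative; the actual implementation can also use SIMD code paths):

```cpp
#include <cmath>
#include <vector>

// Returns true if any element of the tensor data is NaN.
bool is_NaN(const std::vector<float> &data) {
  for (float v : data)
    if (std::isnan(v)) // NaN is the only value for which v != v
      return true;
  return false;
}
```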

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
This PR adds a loss scale parameter in RunContext and uses it to
update the MSE loss.

. Add a Loss Scale Parameter to the RunLayerContext Constructor
. Add an applyLossScale func to update the return derivative in the Loss Layer
. Change the MSE Loss Layer to apply the loss scale to the return derivative
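
A hedged sketch of the scaling step (illustrative signature, not the actual applyLossScale): the loss layer multiplies the derivative it returns so small FP16 gradients do not underflow; they are unscaled again before the weight update.

```cpp
#include <vector>

// Scale the derivative returned by the loss layer by the loss scale.
void applyLossScale(std::vector<float> &ret_deriv, float loss_scale) {
  for (float &d : ret_deriv)
    d *= loss_scale;
}
```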

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
This PR enables Mixed Precision Training. For now only FP16-FP32
is considered. Additional test cases will be added.

. add getSortedLayerIdx to set the graph order for forwarding.
. change clip_weights to lazy_apply_weights to cover both cases.
. add forwarding_op to run forwarding from the layer which has a
gradient with NaN.
. add a while loop to re-run backwarding after resetting the loss scale.
. add setLossScale in RunLayerContext
. add a check of the gradient if mixed precision is enabled.
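
Putting those pieces together, the control flow reads roughly like the sketch below; every callback is an illustrative stand-in for the graph operations named above, not the actual nntrainer code:

```cpp
#include <functional>

// One training iteration with dynamic loss scaling: if any gradient is
// invalid (NaN/Inf), lower the loss scale and redo forward/backward
// instead of applying the bad update.
void runIterationMixed(const std::function<void()> &forwarding_op,
                       const std::function<void(float)> &backwarding,
                       const std::function<bool()> &gradients_valid,
                       const std::function<void()> &lazy_apply_weights,
                       float &loss_scale) {
  for (;;) {
    forwarding_op();         // re-run forwarding on retry
    backwarding(loss_scale); // derivatives scaled by loss_scale
    if (gradients_valid()) {
      lazy_apply_weights();  // deferred update, applied once grads are clean
      return;
    }
    loss_scale /= 2.0f;      // NaN/Inf seen: reset the scale and retry
  }
}
```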

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
jijoongmoon and others added 6 commits May 30, 2024 16:29
This PR adds an infinity value check on Tensor data.
. rename hasNaN to isValid
. add an infinity check in the isValid function; it now checks NaN and Inf
. modify to check blas_avx and blas_neon
. modify graph and model to check is_valid rather than has_nan
. add a unittest for the isValid function
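
A minimal sketch of the renamed check (illustrative): note the sense flips relative to hasNaN, and std::isfinite rejects NaN and +/-Inf in a single test.

```cpp
#include <cmath>
#include <vector>

// True only when every element is finite (neither NaN nor +/-Inf).
bool isValid(const std::vector<float> &data) {
  for (float v : data)
    if (!std::isfinite(v))
      return false;
  return true;
}
```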

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
This PR changes the loss computation to use full precision rather
than half precision to maintain accuracy.
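
A hedged sketch of that change (Half is a stand-in for an FP16 type; illustrative, not the actual loss layer code): the MSE is accumulated in float even when predictions and labels are stored in half precision.

```cpp
#include <cstddef>
#include <vector>

template <typename Half>
float mseFullPrecision(const std::vector<Half> &pred,
                       const std::vector<Half> &label) {
  float sum = 0.0f; // full-precision accumulator
  for (std::size_t i = 0; i < pred.size(); ++i) {
    float d = static_cast<float>(pred[i]) - static_cast<float>(label[i]);
    sum += d * d;
  }
  return sum / static_cast<float>(pred.size());
}
```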

Resolves:

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
This PR enables the Mixed Precision Unittest with the Torch Model.

Resolves:

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
This PR adds torch mixed precision golden data generation and the
input and output for the test.

. some fixes to the test.

Resolves:

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
This PR includes more unittests and fixes for mixed precision.
. Model Unittest
  . 2 fc layers which generate NaN or Inf Gradients from Torch.
  . MSE Loss, and a check of the whole procedure of mixed precision training.
  . Even though the FC model has only one weight, it is good enough
  to validate the mixed precision.
  . The Torch model also works in a similar way to NNTrainer.
  . Some fixes to the execution order of apply gradient when
  mixed precision is on.
  . Update SGD to support Mixed Precision training

Resolves:

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
enable mixed precision on reshape layer
- the reshape layer only changes dimensions, so change the dimensions and check the data type

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK <[email protected]>
Collaborator

@taos-ci taos-ci left a comment

@DonghakPark, 💯 All CI checkers are successfully verified. Thanks.

@jijoongmoon
Collaborator

closed by #2663
