[Wait for #2562 ] [ Weight ] Add Var32 Tensor in Weight @open sesame 05/03 15:14 #2563
Conversation
📝 TAOS-CI Version: 1.5.20200925. Thank you for submitting PR #2563. Please follow the 1 commit/1 PR (one commit per one PR) policy to get comments quickly from reviewers. Your PR must pass all verification processes of cibot before reviewers start the review process. If you are a new member joining this project, please read the manuals in the documentation folder and wiki page. To monitor the progress status of your PR in more detail, visit http://ci.nnstreamer.ai/.
cibot: @jijoongmoon, A builder checker could not be completed because one of the checkers is not completed. In order to find out a reason, please go to http://ci.nnstreamer.ai/nntrainer/ci/repo-workers/pr-checker/2563-202405022049330.26662302017212-69f3534d52f32fdab88ce58bdce83b8705f66bb1/.
It adds a loss scale property as a model common property. Signed-off-by: Jiho Chu <[email protected]>
This PR enables the loss scale factor in Weight.

- Change the WeightSpec to include the loss factor.
- Add LossScaleForMixed property as a layer common property, so that it can set the scale factor in initContext.
- Add loss scale in initContext.
- Set the LossScaleForMixed property when there is a LossScale model property.

Resolves:

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
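For background, the sketch below shows standard mixed-precision loss scaling, which is what this property controls. It is illustrative only and not the nntrainer implementation; the helper names are made up.

```cpp
// Background sketch of standard mixed-precision loss scaling (illustrative
// helpers, not nntrainer code): the loss is multiplied by the scale before
// backprop so small FP16 gradients do not flush to zero, and each gradient
// is divided by the same scale before the weight update.
inline float scaleLoss(float loss, float loss_scale) {
  return loss * loss_scale; // backprop runs on the scaled loss
}

inline float unscaleGradient(float scaled_grad, float loss_scale) {
  return scaled_grad / loss_scale; // recover the true gradient before applying it
}
```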
This PR splits the variable and gradient dims in Var_Grad and Weight. This way we can set different variable and gradient tensor types in Weight.

- Add dim_g for the gradient in WeightSpec.
- Update the manager to support the new WeightSpec.
- Create tensors according to dim_v and dim_g.
- Change weight creation in weight.h accordingly.

Resolves:

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
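As a rough illustration of the split (a sketch only, using the TensorDim copy constructor and setDataType shown later in this review; makeGradientDim is a made-up helper, not the actual manager code):

```cpp
#include <tensor_dim.h>

using TensorDim = ml::train::TensorDim;

// Build dim_g from dim_v with the same shape but possibly a different
// data type, so the gradient tensor can be created independently of the
// variable tensor (e.g. FP16 variable, FP32 gradient).
inline TensorDim makeGradientDim(const TensorDim &dim_v,
                                 TensorDim::DataType grad_type) {
  TensorDim dim_g(dim_v);       // copy the variable's shape
  dim_g.setDataType(grad_type); // override only the data type
  return dim_g;
}
```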
Loss scale is more of a rigid property of the model than a flexible one.

Resolves:

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
cibot: @jijoongmoon, A builder checker could not be completed because one of the checkers is not completed. In order to find out a reason, please go to http://ci.nnstreamer.ai/nntrainer/ci/repo-workers/pr-checker/2563-202405022211050.57468700408936-eefbe653350644e0bf0e43e3276384706241e941/.
cibot: @jijoongmoon, A builder checker could not be completed because one of the checkers is not completed. In order to find out a reason, please go to http://ci.nnstreamer.ai/nntrainer/ci/repo-workers/pr-checker/2563-202405030740520.23194599151611-705f655f1a469bab837b732dd8b13b88f498ccdb/.
cibot: @jijoongmoon, A builder checker could not be completed because one of the checkers is not completed. In order to find out a reason, please go to http://ci.nnstreamer.ai/nntrainer/ci/repo-workers/pr-checker/2563-202405030812070.23390698432922-2165c364e068493e5d5cbf356fdf8dbb5fcf8583/.
cibot: @jijoongmoon, nntrainer/tensor/weight.h does not include Doxygen tags such as @file @brief @author @bug. You must include the Doxygen tags in the source code. Please refer to a Doxygen manual at http://github.com/nnstreamer/TAOS-CI/blob/main/ci/doc/doxygen-documentation.md
@jijoongmoon, 💯 All CI checkers are successfully verified. Thanks.
We will add a Var32 tensor if the variable weight is not full precision (FP32). This enables the weight update with full precision, and only the apply-gradient process uses this tensor. Therefore, the lifespan of this tensor should be "ApplyGradient".

- Modify TensorPool to generate the weight considering mixed precision.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
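A condensed sketch of how the Var32 master copy is set up, based on the snippet reviewed below; makeVar32Spec is a made-up helper, and the TensorPool request itself is omitted because its exact signature is part of this PR's manager changes.

```cpp
#include <string>
#include <utility>
#include <tensor_dim.h>

using TensorDim = ml::train::TensorDim;

// Derive the dim and name of the FP32 master tensor from the weight's
// variable dim; the tensor pool then creates it with an "apply gradient"
// lifespan so it only lives during the weight update step.
inline std::pair<TensorDim, std::string>
makeVar32Spec(const TensorDim &dim_v, const std::string &name) {
  TensorDim var32_dim(dim_v);                       // same shape as the weight
  var32_dim.setDataType(TensorDim::DataType::FP32); // always full precision
  std::string var32_name = name + ":fp32";          // e.g. "fc0:weight:fp32"
  return {var32_dim, var32_name};
}
```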
@jijoongmoon, 💯 All CI checkers are successfully verified. Thanks.
LGTM! (need rebase)
LGTM
TensorDim var32_dim(dim_v);
var32_dim.setDataType(ml::train::TensorDim::DataType::FP32);
Would it be convenient for you to have a new TensorDim constructor that takes a data type?
(e.g., TensorDim var32_dim(dim_v, ml::train::TensorDim::DataType::FP32))
If so, I'll create a new PR with the change.
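A sketch of the convenience being suggested, written as a free helper so it does not presume the final constructor signature; the helper name is made up, and the actual follow-up PR may add a TensorDim constructor overload instead.

```cpp
#include <tensor_dim.h>

using TensorDim = ml::train::TensorDim;

// Copy a dim and override only its data type in one call, which is what
// the proposed TensorDim(dim, data_type) constructor would do.
inline TensorDim makeDimWithType(const TensorDim &src,
                                 TensorDim::DataType dtype) {
  TensorDim dim(src);
  dim.setDataType(dtype);
  return dim;
}

// usage: TensorDim var32_dim = makeDimWithType(dim_v, TensorDim::DataType::FP32);
```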
TensorDim var32_dim(dim_v);
var32_dim.setDataType(ml::train::TensorDim::DataType::FP32);
std::string var32_suffix = ":fp32";
std::string var32_name = name + var32_suffix;
duplicated with lines 78-79?
 * @return false otherwise
 */
bool isMixedPrecision() const {
  return var->getDataType() == ml::train::TensorDim::DataType::FP32;
I'm quite confused here. If this function returns true when the variable is not in full precision, shouldn't it be != instead of ==?
fixed in #2566
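For reference, a sketch of the corrected check (the actual fix lands in #2566; this standalone helper only restates the agreed logic): a weight is mixed precision exactly when its variable is not stored in full FP32.

```cpp
#include <tensor_dim.h>

// A weight is "mixed precision" when its variable tensor is NOT FP32.
inline bool isMixedPrecision(ml::train::TensorDim::DataType var_type) {
  return var_type != ml::train::TensorDim::DataType::FP32;
}
```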
LGTM
LGTM
@jijoongmoon Let's rebase and move on.
closed by #2663
In this PR
We will add a Var32 tensor if the variable weight is not full
precision (FP32). This enables the weight update with full precision,
and only the apply-gradient process uses this tensor. Therefore, the
lifespan of this tensor should be "ApplyGradient".
Also, modify TensorPool to generate the weight considering mixed precision.
Self evaluation:
Signed-off-by: jijoong.moon [email protected]