move tensor concatenation to compute step from update step (#1498)
Summary: Pull Request resolved: #1498

We don't need to concatenate the tensor on every update step. Concatenation is an expensive operation (it creates a new tensor and allocates new memory on every call, since tensors are contiguous), so we can call `tensor.concat` in the compute step instead, which runs only every `compute_interval_step` batches. This optimization should boost the performance of models using AUC with no regression in metric quality.

We've also added a unit test consisting of multiple `update` calls before a single `compute`, ensuring tensor concatenation is done correctly across the `update` and `compute` calls.

Differential Revision: D51176437

fbshipit-source-id: 891c8b1de5f11c4aed68ab2de73cb7a1df335204
1 parent c05a39d · commit 6fbeafe · 3 changed files with 79 additions and 61 deletions
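
The pattern behind the change can be shown with a minimal sketch: buffer each batch's tensors in plain Python lists during `update`, and defer the single concatenation to `compute`. The class and method names below are hypothetical, not the actual metric implementation from this commit.

```python
import torch


class AUCState:
    """Minimal sketch of the buffering pattern: keep per-batch tensors in
    lists during update() and concatenate them only in compute()."""

    def __init__(self) -> None:
        self.predictions: list[torch.Tensor] = []
        self.labels: list[torch.Tensor] = []

    def update(self, predictions: torch.Tensor, labels: torch.Tensor) -> None:
        # Cheap: append references to the incoming batch tensors.
        # No new contiguous tensor is allocated here.
        self.predictions.append(predictions.detach())
        self.labels.append(labels.detach())

    def compute(self) -> tuple[torch.Tensor, torch.Tensor]:
        # Expensive concatenation (new contiguous tensor, fresh allocation)
        # happens once per compute call instead of once per update call.
        return torch.cat(self.predictions), torch.cat(self.labels)
```

Concatenating eagerly inside `update` would re-copy all previously accumulated data on every batch, giving quadratic total copying over a run; deferring the concatenation to `compute` copies each batch only once per compute interval.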