Commit

Merge pull request #198 from kozistr/feature/loss-functions
[Feature] Implement loss functions
kozistr authored Jul 19, 2023
2 parents 19b8519 + 6642951 commit e13aa94
Showing 13 changed files with 373 additions and 104 deletions.
6 changes: 5 additions & 1 deletion README.rst
@@ -16,7 +16,7 @@ pytorch-optimizer

| **pytorch-optimizer** is a collection of optimizers and lr schedulers for PyTorch.
| I re-implemented the algorithms from the original papers, with speed & memory tweaks and plug-ins. It also includes useful and practical optimization ideas.
| Currently, 59 optimizers, 10 lr schedulers, and 10 loss functions are supported!
| Currently, 59 optimizers, 10 lr schedulers, and 13 loss functions are supported!
|
| Highly inspired by `pytorch-optimizer <https://github.com/jettify/pytorch-optimizer>`__.
@@ -270,6 +270,10 @@ You can check the supported loss functions with below code.
+---------------------+-------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
| Bi-Tempered | *The Principle of Unchanged Optimality in Reinforcement Learning Generalization* | | `https://arxiv.org/abs/1906.03361 <https://arxiv.org/abs/1906.03361>`__ | `cite <https://ui.adsabs.harvard.edu/abs/2019arXiv190600336I/exportcitation>`__ |
+---------------------+-------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
| Tversky | *Tversky loss function for image segmentation using 3D fully convolutional deep networks* | | `https://arxiv.org/abs/1706.05721 <https://arxiv.org/abs/1706.05721>`__ | `cite <https://ui.adsabs.harvard.edu/abs/2017arXiv170605721S/exportcitation>`__ |
+---------------------+-------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
| Lovasz Hinge | *A tractable surrogate for the optimization of the intersection-over-union measure in neural networks* | `github <https://github.com/bermanmaxim/LovaszSoftmax>`__ | `https://arxiv.org/abs/1705.08790 <https://arxiv.org/abs/1705.08790>`__ | `cite <https://github.com/bermanmaxim/LovaszSoftmax#citation>`__ |
+---------------------+-------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
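As background on the Tversky-based rows above: the Tversky index generalizes the Dice and Jaccard coefficients by weighting false positives and false negatives separately, and the focal variant (as commonly formulated; the library's exact parameterization may differ) raises its complement to a focusing exponent. A sketch of the usual definitions, with prediction P and ground truth G:

```latex
% Tversky index with false-positive weight \alpha and false-negative weight \beta;
% \alpha = \beta = 0.5 recovers Dice, \alpha = \beta = 1 recovers Jaccard.
\mathrm{TI}(P, G) = \frac{|P \cap G|}{|P \cap G| + \alpha\,|P \setminus G| + \beta\,|G \setminus P|}

% Tversky loss and the commonly used focal variant with focusing exponent \gamma.
\mathcal{L}_{\mathrm{Tversky}} = 1 - \mathrm{TI}, \qquad
\mathcal{L}_{\mathrm{FocalTversky}} = (1 - \mathrm{TI})^{\gamma}
```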

Useful Resources
----------------
6 changes: 6 additions & 0 deletions docs/changelogs/v2.11.1.md
@@ -2,8 +2,14 @@

### Feature

* Implement Tiger optimizer (#192)
* [A Tight-fisted Optimizer](https://github.com/bojone/tiger/blob/main/README_en.md)
* Implement CAME optimizer (#196)
* [Confidence-guided Adaptive Memory Efficient Optimization](https://aclanthology.org/2023.acl-long.243/)
* Implement loss functions (#198)
* Tversky Loss : [Tversky loss function for image segmentation using 3D fully convolutional deep networks](https://arxiv.org/abs/1706.05721)
* Focal Tversky Loss
* Lovasz Hinge Loss : [The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks](https://arxiv.org/abs/1705.08790)
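A minimal usage sketch of the three new loss classes; the constructor arguments (`alpha`, `beta`, `gamma`), their defaults, and the expected input shapes are illustrative assumptions rather than confirmed signatures, so check `docs/loss_api.rst` for the actual API:

```python
import torch

from pytorch_optimizer import FocalTverskyLoss, LovaszHingeLoss, TverskyLoss

# Dummy binary-segmentation batch; whether the losses expect logits or
# probabilities, and in which shape, is an assumption here - check the docs.
y_pred = torch.randn(4, 1, 32, 32, requires_grad=True)
y_true = torch.randint(0, 2, (4, 1, 32, 32)).float()

# Hyper-parameter names and values below are hypothetical, for illustration only.
criteria = {
    'tversky': TverskyLoss(alpha=0.5, beta=0.5),
    'focal_tversky': FocalTverskyLoss(alpha=0.5, beta=0.5, gamma=1.0),
    'lovasz_hinge': LovaszHingeLoss(),
}

for name, criterion in criteria.items():
    loss = criterion(y_pred, y_true)
    print(name, loss.item())
```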

### Diff

24 changes: 24 additions & 0 deletions docs/loss_api.rst
@@ -80,3 +80,27 @@ BinaryBiTemperedLogisticLoss

.. autoclass:: pytorch_optimizer.BinaryBiTemperedLogisticLoss
:members:

.. _TverskyLoss:

TverskyLoss
-----------

.. autoclass:: pytorch_optimizer.TverskyLoss
:members:

.. _FocalTverskyLoss:

FocalTverskyLoss
----------------

.. autoclass:: pytorch_optimizer.FocalTverskyLoss
:members:

.. _LovaszHingeLoss:

LovaszHingeLoss
---------------

.. autoclass:: pytorch_optimizer.LovaszHingeLoss
:members:
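For readers new to the Lovász hinge, here is a standalone sketch of the underlying computation in the spirit of Berman et al.'s reference implementation (it is not the code added in this PR): per-pixel hinge errors are sorted in decreasing order and combined with the gradient of the Lovász extension of the Jaccard loss.

```python
import torch
import torch.nn.functional as F


def lovasz_grad(gt_sorted: torch.Tensor) -> torch.Tensor:
    # Gradient of the Lovász extension of the Jaccard loss w.r.t. sorted errors.
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.cumsum(dim=0)
    union = gts + (1.0 - gt_sorted).cumsum(dim=0)
    jaccard = 1.0 - intersection / union
    if gt_sorted.numel() > 1:
        jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard


def lovasz_hinge(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Binary Lovász hinge over a flattened prediction/label pair.
    logits, labels = logits.reshape(-1), labels.reshape(-1).float()
    signs = 2.0 * labels - 1.0                     # map {0, 1} -> {-1, +1}
    errors = 1.0 - logits * signs                  # hinge errors
    errors_sorted, perm = torch.sort(errors, descending=True)
    grad = lovasz_grad(labels[perm])               # treated as a constant weight
    return torch.dot(F.relu(errors_sorted), grad)
```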