
Bump torchmetrics from 0.11.0 to 1.2.0 #58

Closed
wants to merge 1 commit from dependabot/pip/torchmetrics-1.2.0

Conversation


@dependabot dependabot bot commented on behalf of github Sep 25, 2023

Bumps torchmetrics from 0.11.0 to 1.2.0.

Release notes

Sourced from torchmetrics's releases.

Clustering metrics

Torchmetrics v1.2 is out now! The latest release includes 11 new metrics within a new subdomain: Clustering. In this blog post, we briefly explain what clustering is, why it is useful to measure, and walk through the newly added metrics with code samples.

Clustering - what is it?

Clustering is an unsupervised learning technique. The term unsupervised refers to the fact that we do not have ground-truth targets as we do in classification. The primary goal of clustering is to discover hidden patterns or structures within data without prior knowledge about the meaning or importance of particular features. Clustering is therefore a form of data exploration, in contrast to supervised learning, where the goal is “just” to predict which class a data point belongs to.

The key goal of clustering algorithms is to split data into clusters/sets where data points from the same cluster are more similar to each other than any other points from the remaining clusters. Some of the most common and widely used clustering algorithms are K-Means, Hierarchical clustering, and Gaussian Mixture Models (GMM).

An objective quality evaluation/measure is required regardless of the clustering algorithm or internal optimization criterion used. In general, we can divide all clustering metrics into two categories: extrinsic metrics and intrinsic metrics.

Extrinsic metrics

Extrinsic metrics require some ground-truth labeling, even though they evaluate an unsupervised method. This may seem counter-intuitive at first, since by definition clustering does not use such labels. However, most clustering algorithms are still developed on datasets where labels are available, and these metrics take advantage of that fact.
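For instance, an extrinsic metric such as the newly added RandScore scores a predicted partition against ground-truth labels. A minimal sketch, assuming torchmetrics 1.2.0 is installed (the tensor values are invented for illustration):

```python
import torch
from torchmetrics.clustering import RandScore

# Predicted cluster assignments from some unsupervised algorithm
preds = torch.tensor([0, 0, 1, 1, 2, 2])
# Ground-truth labels, available because the development dataset is annotated
target = torch.tensor([1, 1, 0, 0, 2, 2])

metric = RandScore()
# The Rand score is 1.0 here: the two partitions agree up to a relabeling of clusters
print(metric(preds, target))
```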

Intrinsic metrics

In contrast, intrinsic metrics do not need any ground-truth information. They estimate intra-cluster consistency (the cohesion of all points assigned to a single cluster) against the separation from the other clusters, often by comparing distances in the embedding space.
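An intrinsic metric, by contrast, only needs the data points and their predicted assignments. A minimal sketch with the new CalinskiHarabaszScore, again assuming torchmetrics 1.2.0 (random data, purely illustrative):

```python
import torch
from torchmetrics.clustering import CalinskiHarabaszScore

torch.manual_seed(42)
data = torch.randn(20, 3)            # 20 points in a 3-dimensional embedding space
labels = torch.randint(0, 3, (20,))  # predicted cluster assignment per point; no ground truth used

metric = CalinskiHarabaszScore()
# Higher scores indicate more cohesive, better-separated clusters
print(metric(data, labels))
```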

Update to Mean Average Precision

MeanAveragePrecision, the most widely used metric for object detection in computer vision, now supports two new arguments: average and backend.

  • The average argument controls averaging over multiple classes. Following the core definition, the default is macro averaging: the metric is calculated for each class separately and the results are then averaged together. This remains the default in Torchmetrics, but we now also support average="micro", where every object is essentially treated as belonging to the same class and the returned value is therefore computed over all objects at once.

  • The second argument, backend, indicates which computational backend is used for the internal computations. MeanAveragePrecision is not a simple metric to compute, and because we value correctness we rely on a third-party library for the internals. By default we expect users to have the official pycocotools installed, but with the new argument other backends are supported as well (see the sketch after this list).
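A short usage sketch of both new arguments, assuming torchmetrics 1.2.0 with pycocotools installed (the boxes, scores and labels below are invented for illustration):

```python
import torch
from torchmetrics.detection import MeanAveragePrecision

# One predicted and one ground-truth box for a single image, in the default xyxy format
preds = [{
    "boxes": torch.tensor([[10.0, 10.0, 50.0, 50.0]]),
    "scores": torch.tensor([0.9]),
    "labels": torch.tensor([0]),
}]
target = [{
    "boxes": torch.tensor([[12.0, 12.0, 48.0, 48.0]]),
    "labels": torch.tensor([0]),
}]

# average="micro" pools objects across classes; backend selects the COCO evaluation library
metric = MeanAveragePrecision(average="micro", backend="pycocotools")
metric.update(preds, target)
print(metric.compute()["map"])
```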

[1.2.0] - 2023-09-22

Added

  • Added metric to cluster package:
    • MutualInformationScore (#2008)
    • RandScore (#2025)
    • NormalizedMutualInfoScore (#2029)
    • AdjustedRandScore (#2032)
    • CalinskiHarabaszScore (#2036)
    • DunnIndex (#2049)
    • HomogeneityScore (#2053)
    • CompletenessScore (#2053)
    • VMeasureScore (#2053)
    • FowlkesMallowsIndex (#2066)
    • AdjustedMutualInfoScore (#2058)
    • DaviesBouldinScore (#2071)
  • Added backend argument to MeanAveragePrecision (#2034)

Full Changelog: Lightning-AI/torchmetrics@v1.1.0...v1.2.0

... (truncated)

Changelog

Sourced from torchmetrics's changelog.

[1.2.0] - 2023-09-22

Added

  • Added metric to cluster package:
    • MutualInformationScore (#2008)
    • RandScore (#2025)
    • NormalizedMutualInfoScore (#2029)
    • AdjustedRandScore (#2032)
    • CalinskiHarabaszScore (#2036)
    • DunnIndex (#2049)
    • HomogeneityScore (#2053)
    • CompletenessScore (#2053)
    • VMeasureScore (#2053)
    • FowlkesMallowsIndex (#2066)
    • AdjustedMutualInfoScore (#2058)
    • DaviesBouldinScore (#2071)
  • Added backend argument to MeanAveragePrecision (#2034)

[1.1.2] - 2023-09-11

Fixed

  • Fixed tie breaking in ndcg metric (#2031)
  • Fixed bug in BootStrapper when very few samples were evaluated that could lead to crash (#2052)
  • Fixed bug when creating multiple plots that lead to not all plots being shown (#2060)
  • Fixed performance issues in RecallAtFixedPrecision for large batch sizes (#2042)
  • Fixed bug related to MetricCollection used with custom metrics that have prefix/postfix attributes (#2070)

[1.1.1] - 2023-08-29

Added

  • Added average argument to MeanAveragePrecision (#2018)

Fixed

  • Fixed bug in PearsonCorrCoef when it is updated on single samples at a time (#2019)
  • Fixed support for pixel-wise MSE (#2017)
  • Fixed bug in MetricCollection when used with multiple metrics that return dicts with same keys (#2027)
  • Fixed bug in detection intersection metrics when class_metrics=True resulting in wrong values (#1924)
  • Fixed missing attributes higher_is_better, is_differentiable for some metrics (#2028)

[1.1.0] - 2023-08-22

Added

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [torchmetrics](https://github.com/Lightning-AI/torchmetrics) from 0.11.0 to 1.2.0.
- [Release notes](https://github.com/Lightning-AI/torchmetrics/releases)
- [Changelog](https://github.com/Lightning-AI/torchmetrics/blob/master/CHANGELOG.md)
- [Commits](Lightning-AI/torchmetrics@v0.11.0...v1.2.0)

---
updated-dependencies:
- dependency-name: torchmetrics
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot added the dependencies (Pull requests that update a dependency file) label on Sep 25, 2023

dependabot bot commented on behalf of github Nov 14, 2023

OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting @dependabot ignore this major version or @dependabot ignore this minor version. You can also ignore all major, minor, or patch releases for a dependency by adding an ignore condition with the desired update_types to your config file.

If you change your mind, just re-open this PR and I'll resolve any conflicts on it.

@dependabot dependabot bot deleted the dependabot/pip/torchmetrics-1.2.0 branch November 14, 2023 21:39