
Releases: OML-Team/open-metric-learning

OML 3.1.0

13 Jun 17:29

The update focuses on several components:

  • We added "official" text support and the corresponding Python examples. (Note that text support in Pipelines is not available yet.)

  • We introduced the RetrievalResults (RR) class — a container that stores the gallery items retrieved for given queries.
    RR provides a unified way to visualize predictions and compute metrics (if the ground truths are known).
    It also simplifies post-processing: a post-processor takes an RR object as input and produces an updated RR_upd as output.
    Having both objects allows comparing retrieval results visually or by metrics.
    Moreover, you can easily build a chain of such post-processors.
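The chaining idea above can be sketched with plain Python stand-ins. This is illustrative only: RRLike, chain_postprocessors, and the toy post-processor are hypothetical names, not OML's actual classes.

```python
# Illustrative sketch of the RetrievalResults post-processing pattern;
# NOT OML's implementation, all names here are hypothetical.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RRLike:
    # retrieved gallery ids per query, best match first
    retrieved_ids: List[List[int]]


def chain_postprocessors(rr: RRLike, postprocessors: List[Callable[[RRLike], RRLike]]) -> RRLike:
    # each post-processor maps an RR-like object to an updated one,
    # so they compose naturally into a chain
    for pp in postprocessors:
        rr = pp(rr)
    return rr


# a toy post-processor: truncate each query's results to the top-2 items
truncate_top2 = lambda rr: RRLike([ids[:2] for ids in rr.retrieved_ids])

rr = RRLike(retrieved_ids=[[3, 1, 2], [0, 5, 4]])
rr_upd = chain_postprocessors(rr, [truncate_top2])
print(rr_upd.retrieved_ids)  # [[3, 1], [0, 5]]
```

Because input and output share one container type, comparing `rr` and `rr_upd` (visually or by metrics) needs no extra plumbing.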

    • RR is memory-optimized thanks to batching: it doesn't store the full matrix of query-gallery distances.
      (This doesn't make the search approximate, though.)
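The batching principle can be illustrated with a toy exact nearest-neighbor search over scalar embeddings. This is a sketch of the idea, not OML's code: only one batch of distances plus the running top-k ever live in memory, yet the result is identical to using the full matrix.

```python
# Sketch of memory-efficient exact retrieval via batching (not OML's code):
# distances are computed per gallery batch and only the top-k survive,
# so the full num_queries x num_gallery matrix is never materialized.
from typing import List, Sequence


def exact_topk_batched(queries: Sequence[float], gallery: Sequence[float],
                       k: int, batch_size: int = 2) -> List[List[int]]:
    results = []
    for q in queries:
        best: List[tuple] = []  # (distance, gallery_idx), kept sorted and truncated
        for start in range(0, len(gallery), batch_size):
            batch = gallery[start:start + batch_size]
            # distances for this batch only
            best.extend((abs(q - g), start + i) for i, g in enumerate(batch))
            best = sorted(best)[:k]  # keep only the best k seen so far
        results.append([idx for _, idx in best])
    return results


print(exact_topk_batched([0.0, 10.0], [9.0, 1.0, 2.0, 8.0], k=2))  # [[1, 2], [0, 3]]
```

Since every distance is still computed exactly, the search stays exact; only peak memory changes.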
  • We made Model and Dataset the only classes responsible for processing modality-specific logic.
    Model is responsible for interpreting its input dimensions: for example, BxCxHxW for images or BxLxD for sequences like texts.
    Dataset is responsible for preparing an item: it may use Transforms for images or Tokenizer for texts.
    Metric functions such as calc_retrieval_metrics_rr, together with RetrievalResults, PairwiseReranker, and other classes and functions,
    are unified to work with any modality.

    • We added IVisualizableDataset, which has a .visualize() method that shows a single item. If it's implemented,
      RetrievalResults is able to show the layout of retrieved results.
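The separation of concerns can be sketched as follows. This is illustrative only, not OML's source: the class and function names below are hypothetical, but they mirror the idea that modality-specific logic (how to show an item) lives in the dataset, so generic retrieval code never needs to know whether items are images or texts.

```python
# Illustrative sketch of modality-agnostic design (hypothetical names, not OML's):
# the dataset knows how to visualize its items, generic code just calls .visualize().
from abc import ABC, abstractmethod
from typing import List


class VisualizableSketch(ABC):
    # plays the role of IVisualizableDataset's .visualize() in spirit
    @abstractmethod
    def visualize(self, item: int) -> str: ...


class ToyTextDataset(VisualizableSketch):
    def __init__(self, texts: List[str]):
        self.texts = texts

    def visualize(self, item: int) -> str:
        # for texts, "visualizing" an item is simply returning the string
        return self.texts[item]


def show_retrieved(dataset: VisualizableSketch, retrieved_ids: List[int]) -> List[str]:
    # generic layout code: works for any modality that implements .visualize()
    return [dataset.visualize(i) for i in retrieved_ids]


ds = ToyTextDataset(["a photo of a cat", "a photo of a dog"])
print(show_retrieved(ds, [1, 0]))  # ['a photo of a dog', 'a photo of a cat']
```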

Migration from OML 2.* [Python API]:

The easiest way to catch up with changes is to re-read the examples!

  • The recommended way to validate is to use RetrievalResults and functions like calc_retrieval_metrics_rr,
    calc_fnmr_at_fmr_rr, and others. The EmbeddingMetrics class is kept for use with PyTorch Lightning and inside Pipelines.
    Note that the signatures of EmbeddingMetrics methods have changed slightly; see the Lightning examples for details.
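As a rough, self-contained illustration of computing metrics over retrieved results: the precision@k function below stands in for what calc_retrieval_metrics_rr does over a RetrievalResults object. The function name and metric choice are ours for illustration, not OML's API.

```python
# Hedged sketch of metric computation over retrieved results (not OML's code):
# given per-query retrieved ids and ground-truth ids, average precision@k.
from typing import List, Set


def precision_at_k(retrieved: List[List[int]], gt: List[Set[int]], k: int) -> float:
    # fraction of the top-k retrieved gallery ids that are ground-truth matches,
    # averaged over queries
    per_query = [len(set(r[:k]) & g) / k for r, g in zip(retrieved, gt)]
    return sum(per_query) / len(per_query)


retrieved = [[3, 1, 2], [0, 5, 4]]  # gallery ids per query, best first
gt = [{3, 2}, {5}]                  # ground-truth gallery ids per query
print(precision_at_k(retrieved, gt, k=2))  # (1/2 + 1/2) / 2 = 0.5
```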

  • Since modality-specific logic is confined to Dataset, it no longer outputs PATHS_KEY, X1_KEY, X2_KEY, Y1_KEY, or Y2_KEY.
    Keys that are not modality-specific, such as LABELS_KEY, IS_QUERY_KEY, IS_GALLERY_KEY, and CATEGORIES_KEY, are still in use.

  • inference_on_images is now inference and works with any modality.

  • The Dataset interfaces have changed slightly. For example, there are now IQueryGalleryDataset and IQueryGalleryLabeledDataset interfaces:
    the former is used for inference, the latter for validation. We also added the IVisualizableDataset interface.
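The split between the two interfaces can be sketched like this. The class and method names below are illustrative stand-ins that mirror the release notes in spirit, not OML's actual definitions: inference only needs the query/gallery split, while validation additionally needs labels.

```python
# Illustrative sketch (hypothetical names, not OML's source) of why
# there are two dataset interfaces: validation needs strictly more info.
from abc import ABC, abstractmethod
from typing import List


class QueryGallerySketch(ABC):
    # enough for inference: which items are queries, which are gallery
    @abstractmethod
    def get_query_ids(self) -> List[int]: ...

    @abstractmethod
    def get_gallery_ids(self) -> List[int]: ...


class QueryGalleryLabeledSketch(QueryGallerySketch):
    # validation additionally requires labels to compute metrics
    @abstractmethod
    def get_labels(self) -> List[int]: ...


class ToyDs(QueryGalleryLabeledSketch):
    def get_query_ids(self) -> List[int]:
        return [0]

    def get_gallery_ids(self) -> List[int]:
        return [1, 2]

    def get_labels(self) -> List[int]:
        return [7, 7, 8]


print(ToyDs().get_labels())  # [7, 7, 8]
```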

  • Removed some internals such as IMetricDDP, EmbeddingMetricsDDP, calc_distance_matrix, calc_gt_mask, calc_mask_to_ignore,
    and apply_mask_to_ignore; these changes shouldn't affect you. Also removed the code related to the pipeline with precomputed triplets.

Migration from OML 2.* [Pipelines]:

  • Feature extraction:
    No changes, except for a new optional argument — mode_for_checkpointing = (min | max). It is useful
    for switching between "lower is better" and "higher is better" metrics.
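A hypothetical config fragment showing where the new argument could sit. Only mode_for_checkpointing and its (min | max) values come from the release notes; the surrounding key and the metric name are assumptions for illustration.

```yaml
# Hypothetical fragment of a feature-extraction pipeline config (illustrative only)
metric_for_checkpointing: OVERALL/cmc/1   # assumed metric name
mode_for_checkpointing: max               # "max": higher CMC is better; use "min" for error-like metrics
```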

  • Pairwise-postprocessing pipeline:
    The name and arguments of the postprocessor sub-config changed slightly — pairwise_images is now pairwise_reranker,
    and it no longer needs transforms.
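A hypothetical sketch of the renamed sub-config. Only the rename (pairwise_images to pairwise_reranker) and the dropped transforms come from the release notes; the fields shown are assumptions for illustration.

```yaml
# Hypothetical postprocessor sub-config (illustrative only)
postprocessor:
  name: pairwise_reranker   # was: pairwise_images in OML 2.*
  args:
    top_n: 5                # assumed field, shown for illustration
  # note: no transforms section is needed anymore
```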