@hearbenchmark

HEAR Benchmark

The Holistic Evaluation of Audio Representations (HEAR): a benchmark of diverse audio tasks for audio ML researchers.


The Holistic Evaluation of Audio Representations Benchmark

What audio embedding approach generalizes best to a wide range of downstream tasks across a variety of everyday domains without fine-tuning?

The aim of the HEAR benchmark is to develop a general-purpose audio representation that provides a strong basis for learning in a wide variety of tasks and scenarios. HEAR evaluates audio representations on a benchmark suite of tasks spanning speech, environmental sound, and music.

For more information on HEAR, please visit https://hearbenchmark.com or read our paper: https://arxiv.org/abs/2203.03022.

To submit to the HEAR benchmark leaderboard, follow the instructions on our website and then open a pull request against the hearbenchmark.com repository (listed below).
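Submissions expose their embedding model through the HEAR common API described on hearbenchmark.com: a Python module providing load_model, get_timestamp_embeddings, and get_scene_embeddings, with sample_rate and embedding-size attributes on the returned model object. The sketch below only illustrates that shape; the function and attribute names follow the documented API, while TrivialModel, hop_ms, and the frame-energy "embedding" are made up for the example and are not a real baseline.

```python
# Minimal sketch of a HEAR-API-shaped module (illustrative only).
from typing import Tuple

import torch


class TrivialModel(torch.nn.Module):
    # Attributes the evaluation pipeline reads from the model object.
    sample_rate = 16000            # expected input sample rate in Hz
    scene_embedding_size = 1
    timestamp_embedding_size = 1

    # Hop size used to place timestamps; an arbitrary choice for this sketch.
    hop_ms = 25.0


def load_model(model_file_path: str = "") -> TrivialModel:
    """Return the model object; real submissions would load weights here."""
    return TrivialModel()


def get_timestamp_embeddings(
    audio: torch.Tensor, model: TrivialModel
) -> Tuple[torch.Tensor, torch.Tensor]:
    """audio: (n_sounds, n_samples) -> (embeddings, timestamps).

    embeddings: (n_sounds, n_timestamps, timestamp_embedding_size)
    timestamps: (n_sounds, n_timestamps), frame centres in milliseconds.
    """
    hop = int(model.sample_rate * model.hop_ms / 1000)
    # Split each waveform into non-overlapping frames of `hop` samples.
    frames = audio.unfold(dimension=1, size=hop, step=hop)
    # Trivial per-frame feature: mean energy of the frame.
    embeddings = frames.pow(2).mean(dim=2, keepdim=True)
    n_frames = embeddings.shape[1]
    timestamps = (torch.arange(n_frames) + 0.5) * model.hop_ms
    timestamps = timestamps.expand(audio.shape[0], n_frames)
    return embeddings, timestamps


def get_scene_embeddings(audio: torch.Tensor, model: TrivialModel) -> torch.Tensor:
    """audio: (n_sounds, n_samples) -> (n_sounds, scene_embedding_size)."""
    embeddings, _ = get_timestamp_embeddings(audio, model)
    return embeddings.mean(dim=1)
```

A module of this shape can be checked for API conformance with hear-validator and then evaluated with hear-eval-kit (both listed under Repositories below); see their READMEs for the exact commands.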

Repositories

  • hear2021-submitted-models Public

    Open-source audio embedding models, submitted to the HEAR 2021 challenge

    Python · 11 stars · 1 fork · Updated Nov 14, 2024

  • hear-preprocess Public

    Dataset preprocessing code for the HEAR 2021 NeurIPS competition

    Python · Apache-2.0 · 7 stars · 1 fork · Updated Nov 3, 2024

  • hear-validator Public

    Submission validator for the HEAR Benchmark

    Python · Apache-2.0 · 7 stars · 4 forks · Updated Oct 31, 2024

  • hear-eval-kit Public

    Evaluation kit for the HEAR Benchmark

    Jupyter Notebook · Apache-2.0 · 56 stars · 17 forks · Updated Oct 29, 2024

  • hear-baseline Public

    Simple baseline model for the HEAR benchmark

    Python · Apache-2.0 · 23 stars · 8 forks · Updated Oct 29, 2024

  • hearbenchmark.com Public

    HEAR Benchmark website and leaderboard submissions

    Apache-2.0 · 6 stars · 2 forks · Updated Sep 26, 2023

  • hear-leaderboard Public

    Leaderboard scripts for the HEAR benchmark, using TSP and imputation

    Python · Apache-2.0 · 0 stars · 0 forks · Updated Sep 26, 2023

  • .github Public

    0 stars · 0 forks · Updated Jun 6, 2022

  • hear2021-sample-datasets Public

    HEAR 2021 pre-processed GTZAN Music/Speech Task

    0 stars · 0 forks · Updated Jan 21, 2022

  • hear2021-open-tasks-downsampled Public

    Downsampled versions of the HEAR 2021 open tasks, for rapid testing and CI/CD

    3 stars · 0 forks · Updated Nov 11, 2021

