diff --git a/index.md b/index.md
new file mode 100644
index 0000000..6c6f656
--- /dev/null
+++ b/index.md
@@ -0,0 +1,216 @@
+# AI Audio Datasets (AI-ADS) đŸŽ”
+
+AI Audio Datasets (AI-ADS) đŸŽ” is a collection of Speech, Music, and Sound Effect datasets that can provide training data for Generative AI, AIGC, AI model training, intelligent audio tool development, and audio applications.
+
+## Table of Contents
+
+* [Speech](#speech)
+* [Music](#music)
+* [Sound Effect](#sound-effect)
+
+
+## Project List
+
+### Speech
+
+* [AISHELL-1](http://www.openslr.org/33/) - AISHELL-1 is a corpus for speech recognition research and for building speech recognition systems for Mandarin.
+* [AISHELL-3](https://openslr.org/93/) - AISHELL-3 is a large-scale, high-fidelity multi-speaker Mandarin speech corpus published by Beijing Shell Shell Technology Co., Ltd. It can be used to train multi-speaker Text-to-Speech (TTS) systems. The corpus contains roughly 85 hours of emotion-neutral recordings spoken by 218 native Mandarin Chinese speakers, with a total of 88,035 utterances.
+* [Arabic Speech Corpus](http://en.arabicspeechcorpus.com/) - The Arabic Speech Corpus (1.5 GB) is a Modern Standard Arabic (MSA) speech corpus for speech synthesis. The corpus contains phonetic and orthographic transcriptions of more than 3.7 hours of MSA speech aligned with the recorded speech at the phoneme level. The annotations include word stress marks on the individual phonemes.
+* [AudioMNIST](https://github.com/soerenab/AudioMNIST) - The dataset consists of 30,000 audio samples of spoken digits (0-9) from 60 different speakers.
+* [AVSpeech](https://looking-to-listen.github.io/avspeech/) - AVSpeech is a large-scale audio-visual dataset comprising speech clips with no interfering background signals. The segments are of varying length, between 3 and 10 seconds long, and in each clip the only visible face in the video and audible sound in the soundtrack belong to a single speaking person. In total, the dataset contains roughly 4,700 hours of video segments with approximately 150,000 distinct speakers, spanning a wide variety of people, languages and face poses.
+* [ATIS (Airline Travel Information Systems)](https://github.com/howl-anderson/ATIS_dataset) - ATIS (Airline Travel Information Systems) is a dataset consisting of audio recordings and corresponding manual transcripts of humans asking for flight information on automated airline travel inquiry systems. The data consists of 17 unique intent categories. The original split contains 4,478, 500 and 893 intent-labeled reference utterances in the train, development and test sets respectively.
+* [Carnatic Varnam Dataset](https://zenodo.org/record/1257118#.WyeKMhyxXMU) - The Carnatic Varnam dataset is a collection of 28 solo vocal recordings, recorded for research on intonation analysis of Carnatic ragas. The collection consists of audio recordings, time-aligned tala cycle annotations and swara notations in a machine-readable format.
+* [Casual Conversations](https://ai.facebook.com/datasets/casual-conversations-dataset/) - The Casual Conversations dataset is designed to help researchers evaluate their computer vision and audio models for accuracy across a diverse set of ages, genders, apparent skin tones and ambient lighting conditions.
+* [CN-Celeb](http://openslr.org/82/) - CN-Celeb is a large-scale speaker recognition dataset collected "in the wild". This dataset contains more than 130,000 utterances from 1,000 Chinese celebrities, and covers 11 different genres in the real world.
+* [Clotho](https://zenodo.org/record/3490684) - Clotho is an audio captioning dataset consisting of 4,981 audio samples, each with five captions (a total of 24,905 captions). Audio samples are 15 to 30 s in duration and captions are eight to 20 words long.
+* [Common Voice](https://commonvoice.mozilla.org/en/datasets) - Common Voice is an audio dataset that consists of unique MP3 files, each with a corresponding text file. There are 9,283 recorded hours in the dataset. The dataset also includes demographic metadata like age, sex, and accent. The dataset consists of 7,335 validated hours in 60 languages.
+* [CoVoST](https://github.com/facebookresearch/covost) - CoVoST is a large-scale multilingual speech-to-text translation corpus. Its latest 2nd version covers translations from 21 languages into English and from English into 15 languages. It has a total of 2,880 hours of speech and is diversified with 78K speakers and 66 accents.
+* [CVSS](https://github.com/google-research-datasets/cvss) - CVSS is a massively multilingual-to-English speech-to-speech translation (S2ST) corpus, covering sentence-level parallel S2ST pairs from 21 languages into English. CVSS is derived from the Common Voice speech corpus and the CoVoST 2 speech-to-text translation (ST) corpus, by synthesizing the translation text from CoVoST 2 into speech using state-of-the-art TTS systems.
+* [EasyCom](https://github.com/facebookresearch/EasyComDataset) - The Easy Communications (EasyCom) dataset is a world-first dataset designed to help mitigate the cocktail party effect from an augmented-reality (AR)-motivated multi-sensor egocentric world view. The dataset contains AR glasses egocentric multi-channel microphone array audio, wide field-of-view RGB video, speech source pose, headset microphone audio, annotated voice activity, speech transcriptions, head and face bounding boxes and source identification labels. The dataset was created and released to facilitate research in multi-modal AR solutions to the cocktail party problem.
+* [ESD (Emotional Speech Database)](https://hltsingapore.github.io/ESD/) - **ESD** is an Emotional Speech Database for voice conversion research. The ESD database consists of 350 parallel utterances spoken by 10 native English and 10 native Chinese speakers and covers 5 emotion categories (neutral, happy, angry, sad and surprise). More than 29 hours of speech data were recorded in a controlled acoustic environment. The database is suitable for multi-speaker and cross-lingual emotional voice conversion studies.
+* [FPT Open Speech Dataset (FOSD)](https://data.mendeley.com/datasets/k9sxg2twv4/4) - This dataset consists of 25,921 recorded Vietnamese speeches (with their transcripts and the labelled start and end times of each speech), manually compiled from 3 sub-datasets (approximately 30 hours in total) released publicly in 2018 by FPT Corporation.
+* [Free Spoken Digit Dataset (FSDD)](https://github.com/Jakobovski/free-spoken-digit-dataset) - A free audio dataset of spoken digits. Think MNIST for audio. A simple audio/speech dataset consisting of recordings of spoken digits in wav files at 8 kHz. The recordings are trimmed so that they have near-minimal silence at the beginnings and ends.
+* [Fluent Speech Commands](https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/) - Fluent Speech Commands is an open source audio dataset for spoken language understanding (SLU) experiments. Each utterance is labeled with "action", "object", and "location" values; for example, "turn the lights on in the kitchen" has the label {"action": "activate", "object": "lights", "location": "kitchen"}. A model must predict each of these values, and a prediction for an utterance is deemed to be correct only if all values are correct (see the evaluation sketch after this list).
+* [Genshin Datasets](https://github.com/AI-Hobbyist/Genshin_Datasets) - Genshin Impact datasets for SVC/SVS/TTS.
+* [GenshinVoice](https://github.com/w4123/GenshinVoice) - Voice dataset of the game Genshin Impact.
+* [GigaSpeech](https://github.com/SpeechColab/GigaSpeech) - GigaSpeech is an evolving, multi-domain English speech recognition corpus with 10,000 hours of high-quality labeled audio suitable for supervised training, and 40,000 hours of total audio suitable for semi-supervised and unsupervised training.
+* [GigaSpeech 2](https://github.com/SpeechColab/GigaSpeech2) - An evolving, large-scale and multi-domain ASR corpus for low-resource languages, with automated crawling, transcription and refinement.
+* [How2](https://srvk.github.io/how2-dataset/) - The How2 dataset contains 13,500 videos, or 300 hours of speech, and is split into 185,187 training, 2,022 development (dev), and 2,361 test utterances. It has subtitles in English and crowdsourced Portuguese translations.
+* [inaGVAD](https://github.com/ina-foss/InaGVAD) - A challenging French TV and radio dataset annotated for voice activity detection (VAD) and Speaker Gender Segmentation (SGS), with evaluation scripts and a detailed annotation scheme covering non-speech event types, speaker traits and speech quality.
+* [KdConv](https://github.com/thu-coai/KdConv) - KdConv is a Chinese multi-domain knowledge-driven conversation dataset, grounding the topics in multi-turn conversations to knowledge graphs. KdConv contains 4.5K conversations from three domains (film, music, and travel), and 86K utterances with an average turn number of 19.0. These conversations contain in-depth discussions on related topics and natural transitions between multiple topics, and the corpus can also be used for exploration of transfer learning and domain adaptation.
+* [Libriheavy](https://github.com/k2-fsa/libriheavy) - Libriheavy: a 50,000-hour ASR corpus with punctuation, casing and context.
+* [LibriSpeech](http://www.openslr.org/12) - The **LibriSpeech** corpus is a collection of approximately 1,000 hours of audiobooks that are a part of the LibriVox project. Most of the audiobooks come from Project Gutenberg. The training data is split into 3 partitions of 100hr, 360hr, and 500hr, while the dev and test data are each split into 'clean' and 'other' categories, depending on how well or how poorly Automatic Speech Recognition systems perform on them. Each of the dev and test sets is around 5hr in audio length.
+* [LibriTTS](http://www.openslr.org/60) - **LibriTTS** is a multi-speaker English corpus of approximately 585 hours of read English speech at a 24kHz sampling rate, prepared by Heiga Zen with the assistance of Google Speech and Google Brain team members. The LibriTTS corpus is designed for TTS research. It is derived from the original materials (mp3 audio files from LibriVox and text files from Project Gutenberg) of the LibriSpeech corpus.
+* [LibriTTS-R](http://www.openslr.org/141/) - LibriTTS-R: A Restored Multi-Speaker Text-to-Speech Corpus. It is derived by applying speech restoration to the LibriTTS corpus, which consists of 585 hours of speech data at a 24 kHz sampling rate from 2,456 speakers and the corresponding texts. The constituent samples of LibriTTS-R are identical to those of LibriTTS, with only the sound quality improved.
+* [LJSpeech (The LJ Speech Dataset)](https://keithito.com/LJ-Speech-Dataset/) - This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours. The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded in 2016-17 by the LibriVox project and is also in the public domain.
+* [LRS2 (Lip Reading Sentences 2)](https://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs2.html) - The Oxford-BBC Lip Reading Sentences 2 (LRS2) dataset is one of the largest publicly available datasets for lip reading sentences in-the-wild. The database consists mainly of news and talk shows from BBC programs. Each sentence is up to 100 characters in length.
+* [LRW (Lip Reading in the Wild)](https://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrw1.html) - The Lip Reading in the Wild (LRW) dataset is a large-scale audio-visual database that contains 500 different words from over 1,000 speakers. Each utterance has 29 frames, whose boundary is centered around the target word. The database is divided into training, validation and test sets. The training set contains at least 800 utterances for each class, while the validation and test sets contain 50 utterances each.
+* [MuAViC](https://github.com/facebookresearch/muavic) - A Multilingual Audio-Visual Corpus for Robust Speech Recognition and Robust Speech-to-Text Translation.
+* [MuST-C](https://ict.fbk.eu/must-c/) - MuST-C currently represents the largest publicly available multilingual corpus (one-to-many) for speech translation. It covers eight language directions, from English to German, Spanish, French, Italian, Dutch, Portuguese, Romanian and Russian. The corpus consists of audio, transcriptions and translations of English TED talks, and it comes with a predefined training, validation and test split.
+* [MetaQA (MoviE Text Audio QA)](https://github.com/yuyuz/MetaQA) - The MetaQA dataset consists of a movie ontology derived from the WikiMovies Dataset and three sets of question-answer pairs written in natural language: 1-hop, 2-hop, and 3-hop queries.
+* [MELD (Multimodal EmotionLines Dataset)](https://affective-meld.github.io/) - The Multimodal EmotionLines Dataset (MELD) has been created by enhancing and extending the EmotionLines dataset. MELD contains the same dialogue instances available in EmotionLines, but it also encompasses audio and visual modalities along with text. MELD has more than 1,400 dialogues and 13,000 utterances from the Friends TV series. Multiple speakers participated in the dialogues. Each utterance in a dialogue has been labeled with one of seven emotions: Anger, Disgust, Sadness, Joy, Neutral, Surprise and Fear. MELD also has sentiment (positive, negative and neutral) annotations for each utterance.
+* [Microsoft Speech Corpus (Indian languages)](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e) - The Microsoft Speech Corpus (Indian languages) release contains conversational and phrasal speech training and test data for the Telugu, Tamil and Gujarati languages. The data package includes audio and corresponding transcripts. Data provided in this dataset shall not be used for commercial purposes. You may use the data solely for research purposes. If you publish your findings, you must provide the following attribution: "Data provided by Microsoft and SpeechOcean.com".
+* [PATS (Pose Audio Transcript Style)](https://chahuja.com/pats/) - The PATS dataset consists of a large and diverse amount of aligned pose, audio and transcripts. With this dataset, the authors aim to provide a benchmark that helps develop technologies for virtual agents that generate natural and relevant gestures.
+* [SAVEE (Surrey Audio-Visual Expressed Emotion)](http://kahlan.eps.surrey.ac.uk/savee/) - The Surrey Audio-Visual Expressed Emotion (SAVEE) dataset was recorded as a prerequisite for the development of an automatic emotion recognition system. The database consists of recordings from 4 male actors in 7 different emotions, 480 British English utterances in total. The sentences were chosen from the standard TIMIT corpus and phonetically balanced for each emotion.
+* [SoS_Dataset](https://github.com/Sosdatasets/SoS_Dataset) - Sound of Story: Multi-modal Storytelling with Audio. Storytelling is multi-modal in the real world: when one tells a story, one may use visualizations and sounds along with the story itself. However, prior studies on storytelling datasets and tasks have paid little attention to sound, even though sound also conveys meaningful semantics of the story. This dataset therefore extends the story understanding and telling areas by establishing a new component called "background sound", which is story-context-based audio without any linguistic information.
+* [Speech Datasets Collection](https://github.com/RevoSpeechTech/speech-datasets-collection) - This is a curated list of open speech datasets for speech-related research (mainly for Automatic Speech Recognition). Over 110 speech datasets are collected in this repository, and more than 70 datasets can be downloaded directly without further application or registration.
+* [Speech Dataset Generator](https://github.com/davidmartinrius/speech-dataset-generator) - Speech Dataset Generator is dedicated to creating datasets suitable for training text-to-speech or speech-to-text models. The primary functionality involves transcribing audio files, enhancing audio quality when necessary, and generating datasets.
+* [3D-Speaker-Datasets](https://3dspeaker.github.io/) - A large-scale multi-Device, multi-Distance, and multi-Dialect audio dataset of human speech.
+* [TED-LIUM](http://www.openslr.org/51/) - Audio transcriptions of TED talks: 1,495 TED talk audio recordings along with full-text transcriptions of those recordings, created by the Laboratoire d’Informatique de l’UniversitĂ© du Maine (LIUM).
+* [The Flickr Audio Caption Corpus](https://groups.csail.mit.edu/sls/downloads/flickraudio/) - The Flickr 8k Audio Caption Corpus contains 40,000 spoken captions of 8,000 natural images. It was collected in 2015 to investigate multimodal learning schemes for unsupervised speech pattern discovery.
+* [The People’s Speech](https://github.com/mlcommons/peoples-speech) - **The People's Speech** is a free-to-download, 30,000-hour and growing supervised conversational English speech recognition dataset licensed for academic and commercial usage under CC-BY-SA (with a CC-BY subset). The data is collected by searching the Internet for appropriately licensed audio data with existing transcriptions.
+* [The Spoken Wikipedia Corpora](https://nats.gitlab.io/swc/) - The Spoken Wikipedia project unites volunteer readers of Wikipedia articles. Hundreds of spoken articles in multiple languages are available to users who are – for one reason or another – unable or unwilling to consume the written version of an article.
+* [TIMIT](https://github.com/philipperemy/timit) - The DARPA TIMIT Acoustic-Phonetic Continuous Speech Corpus.
+* [tts-frontend-dataset](https://github.com/Jackiexiao/tts-frontend-dataset) - TTS FrontEnd DataSet: Polyphone / Prosody / TextNormalization.
+* [VoxCeleb2](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/) - VoxCeleb2 is a large-scale speaker recognition dataset obtained automatically from open-source media. VoxCeleb2 consists of over a million utterances from over 6k speakers. Since the dataset is collected 'in the wild', the speech segments are corrupted with real-world noise including laughter, cross-talk, channel effects, music and other sounds. The dataset is also multilingual, with speech from speakers of 145 different nationalities, covering a wide range of accents, ages, ethnicities and languages.
+* [VoxConverse](https://github.com/joonson/voxconverse) - VoxConverse is an audio-visual diarisation dataset consisting of multispeaker clips of human speech, extracted from YouTube videos.
+* [VoxLingua107](https://github.com/alumae/torch-xvectors-wav) - VoxLingua107 is a dataset for spoken language recognition comprising 6,628 hours of speech (62 hours per language on average), accompanied by an evaluation set of 1,609 verified utterances.
+* [VoxPopuli](https://github.com/facebookresearch/voxpopuli) - VoxPopuli is a large-scale multilingual corpus providing 100K hours of unlabelled speech data in 23 languages. It is the largest open dataset to date for unsupervised representation learning as well as semi-supervised learning. VoxPopuli also contains 1.8K hours of transcribed speeches in 16 languages and their aligned oral interpretations into 5 other languages, totaling 5.1K hours.
+* [VoxForge](http://www.voxforge.org/home) - VoxForge is an open speech dataset that was set up to collect transcribed speech for use with Free and Open Source Speech Recognition Engines (on Linux, Windows and Mac).
+* [VocalSound](https://groups.csail.mit.edu/sls/downloads/vocalsound/) - VocalSound is a free dataset consisting of 21,024 crowdsourced recordings of laughter, sighs, coughs, throat clearing, sneezes, and sniffs from 3,365 unique subjects. The VocalSound dataset also contains meta-information such as speaker age, gender, native language, country, and health condition.
+* [VoiceBank + DEMAND](https://datashare.ed.ac.uk/handle/10283/2791) - VoiceBank+DEMAND is a noisy speech database for training speech enhancement algorithms and TTS models. The database was designed to train and test speech enhancement methods that operate at 48kHz. A more detailed description can be found in the paper associated with the database.
+* [WaveFake](https://github.com/rub-syssec/wavefake) - WaveFake is a dataset for audio deepfake detection, consisting of over 100K generated audio clips.
+* [WenetSpeech](https://wenet.org.cn/WenetSpeech/) - WenetSpeech is a multi-domain Mandarin corpus consisting of 10,000+ hours of high-quality labeled speech, 2,400+ hours of weakly labelled speech, and about 10,000 hours of unlabeled speech, with 22,400+ hours in total. The authors collected the data from YouTube and podcasts, covering a variety of speaking styles, scenarios, domains, topics, and noisy conditions. An optical character recognition (OCR) based method is introduced to generate the audio/text segmentation candidates for the YouTube data based on its corresponding video captions.
+* [WSJ0-2mix](https://www.merl.com/demos/deep-clustering) - **WSJ0-2mix** is a speech recognition corpus of speech mixtures using utterances from the Wall Street Journal (WSJ0) corpus.
+* [WHAM! (WSJ0 Hipster Ambient Mixtures)](http://wham.whisper.ai/) - The **WSJ0 Hipster Ambient Mixtures** (**WHAM!**) dataset pairs each two-speaker mixture in the wsj0-2mix dataset with a unique noise background scene. The noise audio was collected at various urban locations throughout the San Francisco Bay Area in late 2018. The environments primarily consist of restaurants, cafes, bars, and parks. Audio was recorded using an Apogee Sennheiser binaural microphone on a tripod between 1.0 and 1.5 meters off the ground.
+* [YODAS](https://huggingface.co/datasets/espnet/yodas) - This is the YODAS manual/automatic subset of the YODAS dataset; it has 369,510 hours of speech. The dataset contains audio utterances and corresponding captions (manual or automatic) from YouTube. Note that "manual" only indicates that the caption was uploaded by users, not necessarily transcribed by a human (see the loading sketch after this list).
+* [YODAS2](https://huggingface.co/datasets/espnet/yodas2) - YODAS2 is the long-form version of the YODAS dataset. It provides the same data as espnet/yodas, but with the following new features: (1) it is formatted in long form (video level), where audio is not segmented; (2) audio is encoded at a higher sampling rate (24 kHz).
+* [YTTTS](https://github.com/ryanrudes/YTTTS) - The YouTube Text-To-Speech dataset comprises waveform audio extracted from YouTube videos alongside their English transcriptions.
+
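+A minimal sketch of the evaluation rule described in the Fluent Speech Commands entry above: an utterance counts as correct only if all three slots ("action", "object", "location") match. The example predictions and references below are illustrative, not drawn from the dataset.
+
+```python
+def slot_exact_match(predictions, references):
+    """Fraction of utterances whose action/object/location slots all match."""
+    slots = ("action", "object", "location")
+    correct = sum(
+        all(pred[s] == ref[s] for s in slots)
+        for pred, ref in zip(predictions, references)
+    )
+    return correct / len(references)
+
+refs = [{"action": "activate", "object": "lights", "location": "kitchen"}]
+preds = [{"action": "activate", "object": "lights", "location": "bedroom"}]
+print(slot_exact_match(preds, refs))  # 0.0: one wrong slot makes the whole utterance incorrect
+```
+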
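+Several of the speech corpora above (e.g., YODAS and YODAS2) are distributed on the Hugging Face Hub and can be streamed with the `datasets` library instead of downloading everything first. The sketch below is a hedged example only: the subset name `en000` and the field names are assumptions, so check the dataset card for the exact configurations and schema (some corpora additionally require authentication or `trust_remote_code=True`).
+
+```python
+from datasets import load_dataset
+
+# Stream a small language subset of YODAS; "en000" is an assumed config name.
+ds = load_dataset("espnet/yodas", "en000", split="train", streaming=True)
+
+for i, example in enumerate(ds):
+    print(sorted(example.keys()))  # inspect the schema instead of assuming field names
+    if "audio" in example:         # the Audio feature decodes to an array plus sampling rate
+        audio = example["audio"]
+        print(audio["sampling_rate"], len(audio["array"]))
+    if i >= 2:                     # peek at a few streamed examples only
+        break
+```
+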
+[^ Back to Contents ^](#table-of-contents)
+
+
+### Music
+
+* [AAM: Artificial Audio Multitracks Dataset](https://zenodo.org/record/5794629) - This dataset contains 3,000 artificial music audio tracks with rich annotations. It is based on real instrument samples and generated by algorithmic composition with respect to music theory. It provides full mixes of the songs as well as single-instrument tracks. The MIDI files used for generation are also available. The annotation files include: Onsets, Pitches, Instruments, Keys, Tempos, Segments, Melody instrument, Beats, and Chords.
+* [Acappella](https://ipcv.github.io/Acappella/acappella/) - Acappella comprises around 46 hours of a cappella solo singing videos sourced from YouTube, sampled across different singers and languages. Four language categories are considered: English, Spanish, Hindi and others.
+* [ADD: audio-dataset-downloader](https://github.com/moiseshorta/ADD-audio-dataset-downloader) - Simple Python CLI script for downloading N hours of audio from YouTube, based on a list of music genres.
+* [ADL Piano MIDI](https://github.com/lucasnfe/adl-piano-midi) - The ADL Piano MIDI is a dataset of 11,086 piano pieces from different genres. This dataset is based on the Lakh MIDI dataset, which is a collection of 45,129 unique MIDI files that have been matched to entries in the Million Song Dataset.
+* [Aligned Scores and Performances (ASAP)](https://github.com/fosfrancesco/asap-dataset) - ASAP is a dataset of aligned musical scores (both MIDI and MusicXML) and performances (audio and MIDI), all with downbeat, beat, time signature, and key signature annotations.
+* [Annotated Jingju Arias Dataset](https://zenodo.org/record/1285647#.WyeR5ByxXMU) - The Annotated Jingju Arias Dataset is a collection of 34 jingju arias manually segmented at various levels using the software Praat. The selected arias contain samples of the two main shengqiang in jingju, namely xipi and erhuang, and the five main role types in terms of singing, namely dan, jing, laodan, laosheng and xiaosheng. The dataset is formed by Praat TextGrid files for each aria, containing tiers for the following information: aria, MusicBrainz ID, artist, school, role type, shengqiang, banshi, line of lyrics, syllables, and percussion patterns.
+* [Bach Doodle](https://magenta.tensorflow.org/datasets/bach-doodle) - The Bach Doodle Dataset is composed of 21.6 million harmonizations submitted through the Bach Doodle. The dataset contains both metadata about the composition (such as the country of origin and feedback), as well as a MIDI of the user-entered melody and a MIDI of the generated harmonization. The dataset contains about 6 years of user-entered music.
+* [Bach Violin Dataset](https://github.com/salu133445/bach-violin-dataset) - A collection of high-quality public recordings of Bach's sonatas and partitas for solo violin (BWV 1001–1006).
+* [Batik-plays-Mozart dataset](https://github.com/huispaty/batik_plays_mozart) - The Batik-plays-Mozart dataset is a piano performance dataset containing 12 complete Mozart Piano Sonatas (36 distinct movements) performed on a computer-monitored Bösendorfer grand piano by Viennese concert pianist Roland Batik. The performances are provided in MIDI format (the corresponding audio files are commercially available) and are note-level aligned with scores in the New Mozart Edition in MusicXML as well as with the musicological harmony, cadence and phrase annotations previously published in The Annotated Mozart Sonatas.
+* [Beijing Opera Percussion Instrument Dataset](https://zenodo.org/record/1285212#.WyeQdhyxXMU) - The Beijing Opera percussion dataset is a collection of 236 examples of isolated strokes spanning the four percussion instrument classes used in Beijing Opera. It can be used to build stroke models for each percussion instrument.
+* [Beijing Opera Percussion Pattern Dataset](https://zenodo.org/record/1285593#.WyeQ-xyxXMU) - The Beijing Opera Percussion Pattern (BOPP) dataset is a collection of 133 audio percussion patterns covering five pattern classes. The dataset includes the audio and syllable-level transcriptions for the patterns (non-time-aligned). It is useful for percussion transcription and classification tasks. The patterns have been extracted from audio recordings of arias and labeled by a musicologist.
+* [CAL500 (Computer Audition Lab 500)](http://calab1.ucsd.edu/~datasets/) - CAL500 (Computer Audition Lab 500) is a dataset aimed at the evaluation of music information retrieval systems. It consists of 502 songs picked from western popular music. The audio is represented as a time series of the first 13 Mel-frequency cepstral coefficients (and their first and second derivatives) extracted by sliding a 12 ms half-overlapping short-time window over the waveform of each song.
+* [Carnatic Music Rhythm Dataset](https://zenodo.org/record/1264394#.WyeLDByxXMU) - The Carnatic Music Rhythm Dataset is a sub-collection of 176 excerpts (16.6 hours) in four taalas of Carnatic music with audio, associated tala-related metadata and time-aligned markers indicating the progression through the tala cycles. It is useful as a test corpus for many automatic rhythm analysis tasks in Carnatic music.
+* [CCMixter](https://members.loria.fr/ALiutkus/kam/) - CCMixter is a singing voice separation dataset consisting of 50 full-length stereo tracks from ccMixter featuring many different musical genres. For each song there are three WAV files available: the background music, the voice signal, and their sum.
+* [ChMusic](https://github.com/haoranweiutd/chmusic) - **ChMusic** is a traditional Chinese music dataset for model training and performance evaluation of musical instrument recognition. This dataset covers 11 musical instruments: Erhu, Pipa, Sanxian, Dizi, Suona, Zhuiqin, Zhongruan, Liuqin, Guzheng, Yangqin and Sheng.
+* [chongchong-free](https://github.com/Fiz-Vector/chongchong-free) - Chongchong Piano Downloader is a tool for freely downloading Chongchong piano scores; it can obtain the score link, parse the score content, and export the file.
+* [ComMU](https://pozalabs.github.io/ComMU/) - ComMU has 11,144 MIDI samples that consist of short note sequences created by professional composers, together with 12 corresponding metadata fields. This dataset is designed for a new task, combinatorial music generation, which generates diverse and high-quality music from metadata alone through an auto-regressive language model.
+* [CoSoD](https://github.com/duguay-michele/CoSoD) - CoSoD consists of metadata and analytical data of a 331-song corpus comprising all multi-artist collaborations on the Billboard “Hot 100” year-end charts published between 2010 and 2019. Each song in the dataset is associated with two CSV files: one for metadata and one for analytical data.
+* [DALI](https://github.com/gabolsgabs/DALI) - DALI: a large Dataset of synchronised Audio, LyrIcs and vocal notes.
+* [DadaGP](https://github.com/dada-bots/dadaGP) - DadaGP is a new symbolic music dataset comprising 26,181 song scores in the GuitarPro format covering 739 musical genres, along with an accompanying tokenized format well-suited for generative sequence models such as the Transformer. The tokenized format is inspired by event-based MIDI encodings, often used in symbolic music generation models. The dataset is released with an encoder/decoder which converts GuitarPro files to tokens and back.
+* [DeepScores](https://tuggeluk.github.io/deepscores/) - Synthetic dataset of 300,000 annotated images of written music for object classification, semantic segmentation and object detection. Based on a large set of MusicXML documents obtained from MuseScore, a sophisticated pipeline is used to convert the sources into LilyPond files, which LilyPond then uses to engrave and annotate the images.
+* [dMelodies](https://github.com/ashispati/dmelodies_dataset) - dMelodies is a dataset of simple 2-bar melodies generated using 9 independent latent factors of variation, where each data point represents a unique melody based on the following constraints: each melody corresponds to a unique scale (major, minor, blues, etc.); each melody plays arpeggios using the standard I-IV-V-I cadence chord pattern; bar 1 plays the first 2 chords (6 notes) and bar 2 plays the second 2 chords (6 notes); each played note is an 8th note.
+* [DISCO-10M](https://huggingface.co/datasets/DISCOX/DISCO-10M) - DISCO-10M is a music dataset created to democratize research on large-scale machine learning models for music.
+* [Dizi](https://github.com/hrsoup/Dizi_Dataset) - Dizi is a dataset of the music styles of the Northern and Southern schools. Characteristics of the two styles, including melody and playing techniques, are deconstructed.
+* [DreamSound](https://github.com/zelaki/DreamSound) - Recently, text-to-music generation models have achieved unprecedented results in synthesizing high-quality and diverse music samples from a given text prompt. Despite these advances, it remains unclear how one can generate personalized, user-specific musical concepts, manipulate them, and combine them with existing ones. Motivated by the computer vision literature, the authors investigate text-to-music personalization by exploring two established methods, namely Textual Inversion and DreamBooth. Using quantitative metrics and a user study, they evaluate the ability of these methods to reconstruct and modify new musical concepts given only a few samples, and provide a new dataset and an evaluation protocol for this new task.
+* [EMOPIA](https://annahung31.github.io/EMOPIA/) - A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation. The EMOPIA (pronounced 'yee-mĂČ-pi-uh') dataset is a shared multi-modal (audio and MIDI) database focusing on perceived emotion in pop piano music, to facilitate research on various tasks related to music emotion. The dataset contains 1,087 music clips from 387 songs and clip-level emotion labels annotated by four dedicated annotators.
+* [ErhuPT (Erhu Playing Technique Dataset)](https://zenodo.org/record/4320991) - This is an audio dataset containing about 1,500 audio clips recorded by multiple professional players.
+* [FiloBass](https://aim-qmul.github.io/FiloBass/) - A Dataset and Corpus Based Study of Jazz Basslines. FiloBass is a novel corpus of music scores and annotations which focuses on the important but often overlooked role of the double bass in jazz accompaniment. Inspired by recent work that sheds light on the role of the soloist, it offers a collection of 48 manually verified transcriptions of professional jazz bassists, comprising over 50,000 note events, which are based on the backing tracks used in the FiloSax dataset. For each recording, audio stems, scores, performance-aligned MIDI and associated metadata for beats, downbeats, chord symbols and markers for musical form are provided.
+* [Finding Tori](https://github.com/danbinaerinHan/finding-tori) - Finding Tori: Self-supervised Learning for Analyzing Korean Folk Song. The authors introduce a computational analysis of a field-recording dataset of approximately 700 hours of Korean folk songs, which were recorded around the 1980s-90s.
+* [FMA](https://github.com/mdeff/fma) - The Free Music Archive (FMA) is a large-scale dataset for evaluating several tasks in Music Information Retrieval. It consists of 343 days of audio from 106,574 tracks by 16,341 artists and 14,854 albums, arranged in a hierarchical taxonomy of 161 genres. It provides full-length and high-quality audio, pre-computed features, together with track- and user-level metadata, tags, and free-form text such as biographies.
+* [GiantMIDI-Piano](https://github.com/bytedance/GiantMIDI-Piano) - GiantMIDI-Piano is a classical piano MIDI dataset containing 10,855 MIDI files by 2,786 composers. The curated subset obtained by constraining composer surnames contains 7,236 MIDI files by 1,787 composers.
+* [Groove (Groove MIDI Dataset)](https://magenta.tensorflow.org/datasets/groove) - The **Groove MIDI Dataset (GMD)** is composed of 13.6 hours of aligned MIDI and (synthesized) audio of human-performed, tempo-aligned expressive drumming. The dataset contains 1,150 MIDI files and over 22,000 measures of drumming.
+* [GuitarSet](https://github.com/marl/GuitarSet) - GuitarSet: a dataset for guitar transcription.
+* [Hindustani Music Rhythm Dataset](https://zenodo.org/record/1264742#.WyeLDByxXMU) - The Hindustani Music Rhythm Dataset is a sub-collection of 151 excerpts (5 hours) in four taals of Hindustani music with audio, associated taal-related metadata and time-aligned markers indicating the progression through the taal cycles. The dataset is useful as a test corpus for many automatic rhythm analysis tasks in Hindustani music.
+* [HumTrans](https://github.com/shansongliu/HumTrans) - A humming dataset that can also serve as a foundation for downstream tasks such as humming-melody-based music generation. It consists of 500 musical compositions of different genres and languages, with each composition divided into multiple segments; in total, the dataset comprises 1,000 music segments. To collect this humming dataset, the authors employed 10 college students, all of whom are either music majors or proficient in playing at least one musical instrument. Each of them hummed every segment twice using the web recording interface provided by the project's website. The humming recordings were sampled at a frequency of 44,100 Hz.
+* [Indian Art Music Tonic Datasets](https://zenodo.org/record/1257114#.WyeIARyxXMU) - This dataset comprises 597 commercially available audio music recordings of Indian art music (Hindustani and Carnatic music), each manually annotated with the tonic of the lead artist. This dataset is used as the test corpus for the development of tonic identification approaches.
+* [Jazz Harmony Treebank](https://github.com/DCMLab/JazzHarmonyTreebank) - This repository contains the Jazz Harmony Treebank, a corpus of hierarchical harmonic analyses of jazz chord sequences selected from the iRealPro corpus published on Zenodo by Shanahan et al.
+* [jazznet](https://github.com/tosiron/jazznet) - jazznet: A Dataset of Fundamental Piano Patterns for Music Audio Machine Learning Research. The jazznet dataset is a dataset of fundamental jazz piano music patterns for developing machine learning (ML) algorithms in music information retrieval (MIR). The dataset contains 162,520 labeled piano patterns, including chords, arpeggios, scales, and chord progressions with their inversions, resulting in more than 26k hours of audio and a total size of 95GB.
+* [Jingju A Cappella Singing Pitch Contour Dataset](https://zenodo.org/record/832736) - The Jingju A Cappella Singing Pitch Contour Dataset is a collection of pitch contour segment ground truth for 39 jingju a cappella singing recordings. The dataset includes the ground truth for (1) melodic transcription and (2) pitch contour segmentation. It is useful for melodic transcription and pitch contour segmentation tasks. The pitch contours have been extracted from audio recordings and manually corrected and segmented by a musicologist.
+* [Jingju Music Scores Collection](https://zenodo.org/record/1285613#.WyeRMRyxXMU) - This is a collection of 92 jingju music scores gathered for the analysis of jingju singing in terms of its musical system. They were transcribed from their original printed sources into a machine-readable format using MuseScore and exported into MusicXML.
+* [JS Fake Chorales](https://github.com/omarperacha/js-fakes) - A MIDI dataset of 500 4-part chorales generated by the KS_Chorus algorithm, annotated with results from hundreds of listening test participants, with 300 further unannotated chorales.
+* [LAKH MuseNet MIDI Dataset](https://github.com/asigalov61/LAKH-MuseNet-MIDI-Dataset) - The full LAKH MIDI dataset converted to the MuseNet MIDI output format (9 instruments + drums).
+* [Los Angeles MIDI Dataset](https://github.com/asigalov61/Los-Angeles-MIDI-Dataset) - SOTA kilo-scale MIDI dataset for MIR and Music AI purposes.
+* [LP-MusicCaps](https://github.com/seungheondoh/lp-music-caps) - LP-MusicCaps: LLM-Based Pseudo Music Captioning.
+* [Lyra Dataset](https://github.com/pxaris/lyra-dataset) - Lyra is a dataset for Greek Traditional and Folk music that includes 1,570 pieces, summing to around 80 hours of data. The dataset incorporates YouTube timestamped links for retrieving audio and video, along with rich metadata information with regard to instrumentation, geography and genre, among others.
+* [MAESTRO](https://magenta.tensorflow.org/datasets/maestro) - The **MAESTRO** dataset contains over 200 hours of paired audio and MIDI recordings from ten years of the International Piano-e-Competition. The MIDI data includes key strike velocities and sustain/sostenuto/una corda pedal positions. Audio and MIDI files are aligned with ∌3 ms accuracy and sliced into individual musical pieces, which are annotated with composer, title, and year of performance. Uncompressed audio is of CD quality or higher (44.1–48 kHz 16-bit PCM stereo).
+* [MagnaTagATune](https://mirg.city.ac.uk/codeapps/the-magnatagatune-dataset) - The MagnaTagATune dataset contains 25,863 music clips. Each clip is a 29-second excerpt belonging to one of 5,223 songs, 445 albums and 230 artists. The clips span a broad range of genres like Classical, New Age, Electronica, Rock, Pop, World, Jazz, Blues, Metal, Punk, and more. Each audio clip is supplied with a vector of binary annotations of 188 tags.
+* [Main Dataset for "Evolution of Popular Music: USA 1960–2010"](https://figshare.com/articles/dataset/Main_Dataset_for_Evolution_of_Popular_Music_USA_1960_2010_/1309953) - This is a large file (~20MB) called EvolutionPopUSA_MainData.csv, in comma-separated data format with column headers. Each row corresponds to a recording. The file is viewable in any text editor, and can also be opened in Excel or imported into other data processing programs.
+* [MetaMIDI Dataset](https://github.com/jeffreyjohnens/MetaMIDIDataset) - The MetaMIDI Dataset (MMD) is a large-scale collection of 436,631 MIDI files and metadata. In addition to the MIDI files, artist, title and genre metadata collected during the scraping process is provided when available. MIDIs in MMD were matched against a collection of 32,000,000 30-second audio clips retrieved from Spotify, resulting in over 10,796,557 audio-MIDI matches.
+* [MIR-1K](http://mirlab.org/dataset/public/) - MIR-1K (Multimedia Information Retrieval lab, 1000 song clips) is a dataset designed for singing voice separation.
+* [Mridangam Stroke Dataset](https://zenodo.org/record/1265188#.WyeMJhyxXMU) - The Mridangam Stroke dataset is a collection of 7,162 audio examples of individual strokes of the Mridangam in various tonics. The dataset comprises 10 different strokes played on Mridangams with 6 different tonic values. The dataset can be used for training models for each Mridangam stroke.
+* [Mridangam Tani-avarthanam dataset](https://zenodo.org/record/1266207#.WyeMJhyxXMU) - The Mridangam Tani-avarthanam dataset is a transcribed collection of two tani-avarthanams played by the renowned Mridangam maestro Padmavibhushan Umayalpuram K. Sivaraman. The audio was recorded at IIT Madras, India and annotated by professional Carnatic percussionists. It consists of about 24 min of audio and 8,800 strokes.
+* [MIRMLPop](https://github.com/york135/MIRMLPop) - It contains (1) the annotation of the MIR-MLPop dataset, (2) the source code to obtain the audio of the dataset, (3) the source code used to fine-tune Whisper on MIR-MLPop (both lyrics alignment and lyrics transcription), and (4) the source code for evaluation.
+* [MSD (Million Song Dataset)](http://millionsongdataset.com/) - The Million Song Dataset is a freely available collection of audio features and metadata for a million contemporary popular music tracks. The core of the dataset is the feature analysis and metadata for one million songs, provided by The Echo Nest.
+* [MTG-Jamendo Dataset](https://github.com/MTG/mtg-jamendo-dataset) - The MTG-Jamendo Dataset is a new open dataset for music auto-tagging. It is built using music available at Jamendo under Creative Commons licenses and tags provided by content uploaders. The dataset contains over 55,000 full audio tracks with 195 tags from genre, instrument, and mood/theme categories. Elaborated data splits for researchers are provided, along with the performance of a simple baseline approach on five different sets of tags: genre, instrument, mood/theme, top-50, and overall.
+* [MTG-Jamendo](https://mtg.github.io/mtg-jamendo-dataset/) - The **MTG-Jamendo** dataset is an open dataset for music auto-tagging. The dataset contains over 55,000 full audio tracks with 195 tag categories (87 genre tags, 40 instrument tags, and 56 mood/theme tags). It is built using music available at Jamendo under Creative Commons licenses and tags provided by content uploaders. All audio is distributed in 320kbps MP3 format.
+* [Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)](https://zenodo.org/record/5676893#.Y7kuQXZBw7c) - This platform is a multi-functional music data sharing platform for computational musicology research. It contains many music data resources, such as sound information for traditional Chinese musical instruments and labeling information for Chinese pop music, which are available free of charge to computational musicology researchers.
+* [Music Emotion Recognition (MER)](https://github.com/juansgomez87/vis-mtg-mer) - A data set for the analysis of personalized Music Emotion Recognition (MER) systems. The authors developed the Music Enthusiasts platform aiming to improve the gathering and analysis of the so-called "ground truth" needed as input to such systems.
+* [MUSAN](http://www.openslr.org/17/) - **MUSAN** is a corpus of music, speech and noise. This dataset is suitable for training models for voice activity detection (VAD) and music/speech discrimination. The dataset consists of music from several genres, speech from twelve languages, and a wide assortment of technical and non-technical noises.
+* [Musdb-XL-train](https://zenodo.org/record/8191153) - The musdb-XL-train dataset consists of 300,000 limiter-applied 4-second audio segments and the 100 original songs. For each segment, an arbitrary segment was randomly chosen from the 4 stems (vocals, bass, drums, other) of the musdb-HQ training subset and randomly mixed, and a commercial limiter plug-in was then applied to each stem.
+* [MusicBench](https://huggingface.co/datasets/amaai-lab/MusicBench) - The _MusicBench_ dataset is a collection of music-text pairs designed for text-to-music generation and released with the [Mustango](https://github.com/AMAAI-Lab/mustango) text-to-music model. The [MusicCaps](https://www.kaggle.com/datasets/googleai/musiccaps) dataset is expanded from 5,521 samples to 52,768 training and 400 test samples to create _MusicBench_.
+* [MusicNet](https://zenodo.org/record/5120004#.Y56UPnZBw7d) - MusicNet is a collection of 330 freely licensed classical music recordings, together with over 1 million annotated labels indicating the precise time of each note in every recording, the instrument that plays each note, and the note's position in the metrical structure of the composition. The labels are acquired from musical scores aligned to recordings by dynamic time warping. The labels are verified by trained musicians, with an estimated labeling error rate of 4%. The MusicNet labels are offered to the machine learning and music communities as a resource for training models and a common benchmark for comparing results.
+* [MusicCaps](https://www.kaggle.com/datasets/googleai/musiccaps) - MusicCaps is a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts.
+* [MuseData](https://musedata.org/) - MuseData is an electronic library of orchestral and piano classical music from CCARH. It consists of around 3MB of data across 783 files.
+* [MUSDB18](https://sigsep.github.io/datasets/musdb.html) - MUSDB18 is a dataset of 150 full-length music tracks (~10h duration) of different genres along with their isolated drums, bass, vocals and other stems. The dataset is split into training and test sets with 100 and 50 songs, respectively. All signals are stereophonic and encoded at 44.1kHz.
+* [Music Topics and Metadata](https://data.mendeley.com/datasets/3t9vbwxgr5/1) - This dataset provides a list of lyrics from 1950 to 2019 together with music metadata such as sadness, danceability, loudness, acousticness, etc. The lyrics themselves are also provided and can be used for natural language processing.
+* [Music genres dataset](https://github.com/trebi/music-genres-dataset) - A dataset of 1,494 genres, each containing 200 songs.
+* [Multimodal Sheet Music Dataset](https://github.com/CPJKU/msmd) - MSMD is a synthetic dataset of 497 pieces of (classical) music that contains both audio and score representations of the pieces aligned at a fine-grained level (344,742 pairs of noteheads aligned to their audio/MIDI counterparts).
+* [MuVi-Sync](https://zenodo.org/records/10057093) - The **MuVi-Sync** dataset is a multi-modal dataset comprising both music features (chord, key, loudness, and note density) and video features (scene offset, emotion, motion, and semantic) extracted from a total of 748 music videos.
+* [Nlakh](https://github.com/minju0821/musical_instrument_retrieval) - **Nlakh** is a dataset for Musical Instrument Retrieval. It is a combination of the NSynth dataset, which provides a large number of instruments, and the Lakh dataset, which provides multi-track MIDI data.
+* [NSynth](https://magenta.tensorflow.org/datasets/nsynth) - **NSynth** is a dataset of one-shot instrumental notes, containing 305,979 musical notes with unique pitch, timbre and envelope. The sounds were collected from 1,006 instruments from commercial sample libraries and are annotated based on their source (acoustic, electronic or synthetic), instrument family and sonic qualities. The instrument families used in the annotation are bass, brass, flute, guitar, keyboard, mallet, organ, reed, string, synth lead and vocal. Four-second monophonic 16kHz audio snippets (notes) were generated for the instruments.
+* [NES-MDB (Nintendo Entertainment System Music Database)](https://github.com/chrisdonahue/nesmdb) - The Nintendo Entertainment System Music Database (NES-MDB) is a dataset intended for building automatic music composition systems for the NES audio synthesizer. It consists of 5,278 songs from the soundtracks of 397 NES games. The dataset represents 296 unique composers, and the songs contain more than two million notes combined. It has file format options for MIDI, score and NLM (NES Language Modeling).
+* [Niko Chord Progression Dataset](https://github.com/billyblu2000/accomontage2) - The Niko Chord Progression Dataset is used in AccoMontage2. It contains 5k+ chord progression pieces, labeled with styles. There are four styles in total: Pop Standard, Pop Complex, Dark and R&B.
+* [OnAir Music Dataset](https://github.com/sevagh/OnAir-Music-Dataset) - đŸŽ” A new stem dataset for Music Demixing research, from the OnAir royalty-free music project.
+* [Opencpop](https://wenet.org.cn/opencpop/) - **Opencpop**, a publicly available high-quality Mandarin singing corpus, is designed for singing voice synthesis (SVS) systems. This corpus consists of **100 unique Mandarin songs**, which were recorded by a **professional female singer**. All audio files were recorded with studio quality at a sampling rate of **44,100 Hz** in a **professional recording studio environment**.
+* [OpenGufeng](https://github.com/mrmrmrfinch/OpenGufeng) - A melody and chord progression dataset for Chinese Gufeng music.
+* [PBSCSR](https://github.com/HMC-MIR/PBSCSR) - The Piano Bootleg Score Composer Style Recognition Dataset. The overarching goal was to create a dataset for studying composer style recognition that is "as accessible as MNIST and as challenging as ImageNet." To achieve this goal, fixed-length bootleg score fragments are sampled from piano sheet music images on IMSLP. The dataset itself contains 40,000 62x64 bootleg score images for a 9-way classification task, 100,000 62x64 bootleg score images for a 100-way classification task, and 29,310 unlabeled variable-length bootleg score images for pretraining.
+* [POP909](https://github.com/music-x-lab/POP909-Dataset) - POP909 is a dataset which contains multiple versions of the piano arrangements of 909 popular songs created by professional musicians. The main body of the dataset contains the vocal melody, the lead instrument melody, and the piano accompaniment for each song in MIDI format, which are aligned to the original audio files. Furthermore, annotations are provided for tempo, beat, key, and chords, where the tempo curves are hand-labelled and the others are produced by MIR algorithms.
+* [ProgGP](https://github.com/otnemrasordep/ProgGP) - A dataset of 173 progressive metal songs, in both GuitarPro and token formats, as per the specifications in DadaGP.
+* [RWC (Real World Computing Music Database)](https://staff.aist.go.jp/m.goto/RWC-MDB/) - The RWC (Real World Computing) Music Database is a copyright-cleared music database (DB) that is available to researchers as a common foundation for research. It contains around 100 complete songs with manually labeled section boundaries. For the 50 instruments, individual sounds at half-tone intervals were captured with several variations of playing styles, dynamics, instrument manufacturers and musicians.
+* [Sangeet](https://github.com/cmisra/Sangeet) - An XML Dataset for Hindustani Classical Music. SANGEET preserves all the required information of any given composition, including metadata, structural, notational, rhythmic, and melodic information, in a standardized way for easy and efficient storage and extraction of musical information. The dataset is intended to provide ground-truth information for music information research tasks, thereby supporting several data-driven analyses from a machine learning perspective.
+* [Slakh2100](http://www.slakh.com/) - The Synthesized Lakh (Slakh) Dataset is a dataset for audio source separation that is synthesized from the Lakh MIDI Dataset v0.1 using professional-grade sample-based virtual instruments. This first release of Slakh, called Slakh2100, contains 2,100 automatically mixed tracks and accompanying MIDI files synthesized using a professional-grade sampling engine. The tracks in Slakh2100 are split into training (1,500 tracks), validation (375 tracks), and test (225 tracks) subsets, totaling 145 hours of mixtures.
+* [SymphonyNet](https://symphonynet.github.io/) - SymphonyNet is an open-source project aiming to generate complex multi-track and multi-instrument music such as symphonies. The method is fully compatible with other types of music, such as pop, piano, and solo music.
+* [Tabla Solo dataset](https://zenodo.org/record/1267024#.WyeM6xyxXMU) - The Tabla Solo Dataset is a transcribed collection of Tabla solo audio recordings spanning compositions from six different Gharanas of Tabla, played by Pt. Arvind Mulgaonkar. The dataset consists of audio and time-aligned bol transcriptions.
+* [Tegridy MIDI Dataset](https://github.com/asigalov61/Tegridy-MIDI-Dataset) - Tegridy MIDI Dataset for precise and effective Music AI model creation.
+* [The Lakh MIDI Dataset](https://colinraffel.com/projects/lmd/) - The Lakh MIDI dataset is a collection of 176,581 unique MIDI files, 45,129 of which have been matched and aligned to entries in the Million Song Dataset. Its goal is to facilitate large-scale music information retrieval, both symbolic (using the MIDI files alone) and audio content-based (using information extracted from the MIDI files as annotations for the matched audio files). See the MIDI parsing sketch after this list.
+* [The Italian Music Dataset](https://search.datacite.org/works/10.5281/zenodo.1298556) - The dataset is built by exploiting the Spotify and SoundCloud APIs. It is composed of over 14,500 different songs by both famous and less famous Italian musicians. Each song in the dataset is identified by its Spotify id and its title. Track metadata also includes lemmatized and POS-tagged lyrics and, in most cases, ten musical features gathered directly from Spotify. Musical features include acousticness (float), danceability (float), duration_ms (int), energy (float), instrumentalness (float), liveness (float), loudness (float), speechiness (float), tempo (float) and valence (float).
+* [The Persian Piano Corpus](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/YY7SVD) - The Persian Piano Corpus is a comprehensive collection of Persian piano music, spanning from early composers to contemporary figures. It has been meticulously compiled and made publicly accessible, aiming to enable researchers to explore specialized investigations and contribute to new discoveries. The instrument-based approach provides a complete corpus related to the Persian piano, including relevant labels and comprehensive metadata.
+* [The Song Describer Dataset](https://github.com/mulab-mir/song-describer-dataset) - The Song Describer Dataset: a Corpus of Audio Captions for Music-and-Language Evaluation. The Song Describer dataset is an evaluation dataset made of ~1.1k captions for 706 permissively licensed music recordings.
+* [Universal Music Symbol Classifier](https://github.com/apacha/MusicSymbolClassifier) - A Python project that trains a Deep Neural Network to distinguish between Music Symbols.
+* [URMP (University of Rochester Multi-Modal Musical Performance)](https://labsites.rochester.edu/air/) - URMP (University of Rochester Multi-Modal Musical Performance) is a dataset for facilitating audio-visual analysis of musical performances. The dataset comprises 44 simple multi-instrument musical pieces assembled from coordinated but separately recorded performances of individual tracks. For each piece the dataset provides the musical score in MIDI format, the high-quality individual-instrument audio recordings and the videos of the assembled pieces.
+* [VGMIDI Dataset](https://github.com/lucasnfe/vgmidi) - VGMIDI is a dataset of piano arrangements of video game soundtracks. It contains 200 MIDI pieces labelled according to emotion and 3,850 unlabeled pieces. Each labelled piece was annotated by 30 human subjects according to the Circumplex (valence-arousal) model of emotion.
+* [Virtuoso Strings](https://github.com/arme-project/virtuoso-strings) - Virtuoso Strings is a dataset for soft onset detection for string instruments. It consists of over 144 recordings of professional performances of an excerpt from Haydn's string quartet Op. 74 No. 1 Finale, each with corresponding individual instrumental onset annotations.
+* [WikiMuTe](https://zenodo.org/records/10223363) - WikiMuTe: A web-sourced dataset of semantic descriptions for music audio. WikiMuTe is a new and open dataset containing rich semantic descriptions of music. The data is sourced from Wikipedia's rich catalogue of articles covering musical works. Using a dedicated text-mining pipeline, both long and short-form descriptions are extracted, covering a wide range of topics related to music content such as genre, style, mood, instrumentation, and tempo.
In this study, we present WikiMuTe, a new and open dataset containing rich semantic descriptions of music. The data is sourced from Wikipedia's rich catalogue of articles covering musical works. Using a dedicated text-mining pipeline, we extract both long and short-form descriptions covering a wide range of topics related to music content such as genre, style, mood, instrumentation, and tempo. +* [YM2413-MDB](https://jech2.github.io/YM2413-MDB/) - **YM2413-MDB** is an 80s FM video game music dataset with multi-label emotion annotations. It includes 669 audio and MIDI files of music from Sega and MSX PC games in the 80s using YM2413, a programmable sound generator based on FM. The collected game music is arranged with a subset of 15 monophonic instruments and one drum instrument. + +
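The fixed-size 62x64 bootleg score fragments described above lend themselves to an MNIST-style classification setup. The following is only an illustrative sketch, assuming the fragments have already been exported as binary arrays with integer composer labels; everything except the 62x64 size and the 9 classes is a placeholder rather than part of the dataset.

```python
# Illustrative sketch only: a tiny PyTorch classifier for fixed-size 62x64
# bootleg score fragments with 9 composer classes. Data here is random;
# replace it with fragments exported from the dataset.
import numpy as np
import torch
from torch import nn

class BootlegClassifier(nn.Module):
    def __init__(self, num_classes: int = 9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 62x64 -> 31x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 31x32 -> 15x16
            nn.Flatten(),
            nn.Linear(32 * 15 * 16, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Hypothetical batch: X has shape (N, 1, 62, 64) with 0/1 pixels, y has shape (N,).
X = torch.from_numpy(np.random.randint(0, 2, (8, 1, 62, 64)).astype(np.float32))
y = torch.randint(0, 9, (8,))
logits = BootlegClassifier()(X)
print(logits.shape, nn.functional.cross_entropy(logits, y).item())
```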
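For the symbolic collections above (e.g. the Lakh MIDI Dataset, or the MIDI portions of POP909, Slakh2100, and VGMIDI), a common first step is to parse each file into notes and a piano roll. The sketch below is a minimal, hedged example using the third-party `pretty_midi` library; the file path is a placeholder, not a file shipped with any of these datasets.

```python
# Minimal sketch (not from the dataset authors) for inspecting one MIDI file
# from a symbolic collection such as the Lakh MIDI Dataset.
# Assumes `pip install pretty_midi`; "example.mid" is a placeholder path.
import pretty_midi

def summarize_midi(path: str) -> None:
    """Print basic statistics for a single MIDI file."""
    pm = pretty_midi.PrettyMIDI(path)  # parse the MIDI file
    print(f"duration: {pm.get_end_time():.1f} s")
    for inst in pm.instruments:  # one entry per MIDI track/program
        name = pretty_midi.program_to_instrument_name(inst.program)
        print(f"{name}: {len(inst.notes)} notes, drum={inst.is_drum}")
    roll = pm.get_piano_roll(fs=10)  # 128 x T piano roll at 10 frames/second
    print("piano-roll shape:", roll.shape)

if __name__ == "__main__":
    summarize_midi("example.mid")  # replace with a file from the dataset
```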

^ Back to Contents ^

+
+### Sound Effect
+
+* [Animal Sound Dataset](https://github.com/YashNita/Animal-Sound-Dataset) - A dataset of 875 animal sounds covering 10 animal types: 200 cat, 200 dog, 200 bird, 75 cow, 45 lion, 40 sheep, 35 frog, 30 chicken, 25 donkey, and 25 monkey sounds.
+* [AudioSet](https://research.google.com/audioset/index.html) - AudioSet is an audio event dataset consisting of over 2M human-annotated 10-second video clips. The clips are collected from YouTube, so many are of poor quality and contain multiple sound sources. A hierarchical ontology of 632 event classes is used to annotate the data, which means the same sound can carry several labels; for example, the sound of barking is annotated as Animal, Pets, and Dog. The videos are split into Evaluation, Balanced-Train, and Unbalanced-Train sets.
+* [AudioCaps](https://audiocaps.github.io/) - **AudioCaps** is a dataset of sounds with event descriptions, introduced for the task of audio captioning, with sounds sourced from the AudioSet dataset. Annotators were provided the audio tracks together with category hints (and with additional video hints if needed).
+* [Auto-ACD](https://auto-acd.github.io/) - Auto-ACD is a large-scale, high-quality audio-language dataset of over 1.9M audio-text pairs, constructed with an automatic audio caption generation pipeline. The text descriptions contain long texts (18 words) with a diverse vocabulary (23K), and provide information about the surrounding auditory environment in which the sounds take place.
+* [BBC sound effects](https://sound-effects.bbcrewind.co.uk/) - The BBC sound effects dataset contains 33,066 sound effects, mainly ambient sound, each with a natural-language text description.
+* [DCASE 2016](https://dcase.community/challenge2016/index) - DCASE 2016 is a dataset for sound event detection. It consists of 20 short mono sound files for each of 11 sound classes (from office environments, such as clearthroat, drawer, or keyboard), each file containing one sound event instance. Sound files are annotated with event onset and offset times; however, silences between actual physical sounds (as with a phone ringing) are not marked and are hence "included" in the event.
+* [Environmental Audio Datasets](https://homepages.tuni.fi/toni.heittola/datasets) - This page maintains a list of datasets suitable for environmental audio research. In addition to freely available datasets, proprietary and commercial datasets are listed for completeness, and some online sound services are listed at the end of the page.
+* [ESC-50](https://github.com/karolpiczak/ESC-50) - The **ESC-50** dataset is a labeled collection of 2000 environmental audio recordings suitable for benchmarking methods of environmental sound classification. It comprises 2000 5-second clips of 50 different classes across natural, human, and domestic sounds, drawn from Freesound.org.
+* [FAIR-Play](https://github.com/facebookresearch/FAIR-Play) - FAIR-Play is a video-audio dataset consisting of 1,871 video clips and their corresponding binaural audio clips recorded in a music room. The video clip and binaural clip of the same index are roughly aligned.
+* [FSD50K (Freesound Database 50K)](https://zenodo.org/record/4060432#.Y56abXZBw7c) - Freesound Dataset 50k (**FSD50K** for short) is an open dataset of human-labeled sound events containing 51,197 Freesound clips unequally distributed across 200 classes drawn from the AudioSet Ontology. FSD50K was created at the Music Technology Group of Universitat Pompeu Fabra. It consists mainly of sound events produced by physical sound sources and production mechanisms, including human sounds, sounds of things, animals, natural sounds, musical instruments, and more.
+* [FSDnoisy18k](http://www.eduardofonseca.net/FSDnoisy18k/) - The **FSDnoisy18k** dataset is an open dataset containing 42.5 hours of audio across 20 sound event classes, including a small amount of manually labeled data and a larger quantity of real-world noisy data. The audio content is taken from Freesound, and the dataset was curated using the Freesound Annotator. The noisy set consists of 15,813 audio clips (38.8h), and the test set consists of 947 audio clips (1.4h) with correct labels. The dataset features two main types of label noise: in-vocabulary (IV) and out-of-vocabulary (OOV). IV applies when, given an observed label that is incorrect or incomplete, the true or missing label is part of the target class set; analogously, OOV means that the true or missing label is not covered by those 20 classes.
+* [FUSS (Free Universal Sound Separation)](https://github.com/google-research/sound-separation/blob/master/datasets/fuss/FUSS_license_doc/README.md) - The Free Universal Sound Separation (FUSS) dataset is a database of arbitrary sound mixtures and source-level references, for use in experiments on arbitrary sound separation. FUSS is based on the FSD50K corpus.
+* [iNaturalist Sounds Dataset](https://github.com/gvanhorn38/iNatSounds) - The iNaturalist Sounds Dataset (iNatSounds) is a collection of 230,000 audio files capturing sounds from over 5,500 species, contributed by more than 27,000 recordists worldwide.
+* [Knocking Sound Effects With Emotional Intentions](https://zenodo.org/record/3668503#.Y7kv9HZBw7c) - The dataset was recorded by the professional foley artist Ulf Olausson at the FoleyWorks studios in Stockholm on 15 October 2019. Inspired by previous work on knocking sounds, five emotions were chosen to be portrayed in the dataset: anger, fear, happiness, neutral, and sadness.
+* [MIMII](https://zenodo.org/record/3384388) - The Sound Dataset for Malfunctioning Industrial Machine Investigation and Inspection (MIMII) is a sound dataset of industrial machine sounds.
+* [Mivia Audio Events Dataset](https://mivia.unisa.it/datasets/audio-analysis/mivia-audio-events/) - The MIVIA audio events dataset is composed of a total of 6,000 events for surveillance applications, namely glass breaking, gunshots, and screams. The 6,000 events are divided into a training set (4,200 events) and a test set (1,800 events).
+* [Pitch Audio Dataset (Surge synthesizer)](https://zenodo.org/record/4677097#.Y7kwVXZBw7c) - 3.4 hours of audio synthesized using the open-source [Surge synthesizer](https://surge-synthesizer.github.io/), based on the 2,084 presets included in the Surge package. These represent "natural" synthesis sounds, i.e. presets devised by humans. 4-second samples were generated at velocity 64 with a note-on duration of 3 seconds; for each preset, only the pitch was varied, from MIDI 21 to 108, the range of a grand piano. Every sound in the dataset was RMS-level normalized using the [normalize](https://github.com/kklobe/normalize) package. There was no elegant way to deduplicate this dataset; however, only a small percentage of presets (such as drums and sound effects) had no perceptual pitch variation or ordering. A short sketch of the pitch sweep and RMS normalization appears after this list.
+* [RemFX](https://zenodo.org/record/8187288) - RemFX evaluation datasets. These datasets are sourced from the VocalSet, GuitarSet, DSD100, and IDMT-SMT-Drums datasets and then processed by the RemFX dataset generation script. The datasets are named according to the number of effects applied (0-5); for example, 2-2.zip contains two effects applied to each input audio example, while the targets are left untouched. The audio effects are drawn from the set (Distortion, Delay, Dynamic Range Compressor, Phasor, Reverb) and randomly sampled without replacement for each example.
+* [SoundCam](https://masonlwang.com/soundcam/) - SoundCam is the largest dataset of unique room impulse responses (RIRs) from in-the-wild rooms publicly released to date. It includes 5,000 10-channel real-world measurements of room impulse responses and 2,000 10-channel recordings of music in three different rooms, including a controlled acoustic lab, an in-the-wild living room, and a conference room, with different humans in positions throughout each room.
+* [SoundingEarth](https://github.com/khdlr/SoundingEarth) - SoundingEarth consists of co-located aerial imagery and audio samples from all around the world.
+* [Spatial LibriSpeech](https://github.com/apple/ml-spatial-librispeech) - Spatial LibriSpeech is a spatial audio dataset with over 650 hours of first-order ambisonics and optional distractor noise (with raw 19-channel audio coming soon). It is designed for machine learning model training and includes labels for source position, speaking direction, room acoustics, and geometry. Spatial LibriSpeech was generated by augmenting LibriSpeech samples with 200k+ simulated acoustic conditions across 8k+ synthetic rooms.
+* [STARSS22 (Sony-TAu Realistic Spatial Soundscapes 2022)](https://zenodo.org/record/6387880#.Y1eqqezMJhE) - The Sony-TAu Realistic Spatial Soundscapes 2022 (STARSS22) dataset consists of recordings of real scenes captured with a high-channel-count spherical microphone array (SMA). The recordings were conducted by two different teams at two different sites, Tampere University in Tampere, Finland, and Sony facilities in Tokyo, Japan. Recordings at both sites share the same capturing and annotation process and a similar organization.
+* [ToyADMOS](https://github.com/YumaKoizumi/ToyADMOS-dataset) - The ToyADMOS dataset is a machine operating sounds dataset of approximately 540 hours of normal machine operating sounds and over 12,000 samples of anomalous sounds, collected with four microphones at a 48kHz sampling rate, prepared by Yuma Koizumi and members of NTT Media Intelligence Laboratories.
+* [TUT Sound Events 2017](https://zenodo.org/record/400516) - The TUT Sound Events 2017 dataset contains 24 audio recordings in a street environment, covering 6 classes: brakes squeaking, car, children, large vehicle, people speaking, and people walking.
+* [UrbanSound8K](https://urbansounddataset.weebly.com/urbansound8k.html) - UrbanSound8K is an audio dataset that contains 8,732 labeled sound excerpts (<=4s) of urban sounds from 10 classes: air_conditioner, car_horn, children_playing, dog_bark, drilling, engine_idling, gun_shot, jackhammer, siren, and street_music. The classes are drawn from the urban sound taxonomy. All excerpts are taken from field recordings uploaded to www.freesound.org. A minimal sketch for reading the dataset's metadata appears after this list.
+* [VGG-Sound](https://www.robots.ox.ac.uk/~vgg/data/vggsound/) - A large-scale audio-visual dataset. VGG-Sound is an audio-visual correspondence dataset consisting of short clips of audio sounds extracted from videos uploaded to YouTube.
+* [Visually Indicated Sounds](https://andrewowens.com/vis/) - Materials make distinctive sounds when they are hit or scratched: dirt makes a thud; ceramic makes a clink. These sounds reveal aspects of an object's material properties, as well as the force and motion of the physical interaction.
+
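The generation recipe described for the Pitch Audio Dataset above (sweep MIDI notes 21-108 for each preset, then RMS-level normalize each rendering) can be summarized in a few lines. The sketch below is only a hedged illustration of those two steps in NumPy; it does not invoke the Surge synthesizer, a sine wave stands in for a rendered preset, and `target_rms` is an arbitrary choice rather than a value documented by the dataset.

```python
# Hedged sketch of two steps described for the Surge pitch dataset:
# (1) sweep MIDI notes 21-108, (2) RMS-level normalize each rendering.
# A sine wave stands in for the synthesizer; Surge itself is not called.
import numpy as np

SR = 44100  # assumed sample rate for this sketch

def midi_to_hz(note: int) -> float:
    """Standard equal-temperament conversion (A4 = MIDI 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def rms_normalize(audio: np.ndarray, target_rms: float = 0.1) -> np.ndarray:
    """Scale a waveform so its RMS level matches target_rms."""
    rms = np.sqrt(np.mean(audio ** 2))
    return audio if rms == 0 else audio * (target_rms / rms)

t = np.linspace(0.0, 4.0, 4 * SR, endpoint=False)  # 4-second samples
for note in range(21, 109):  # the grand-piano range used in the dataset
    freq = midi_to_hz(note)
    audio = np.sin(2 * np.pi * freq * t)  # placeholder for a rendered preset
    audio = rms_normalize(audio)
```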
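UrbanSound8K ships with a metadata CSV that maps each clip to its class label and a predefined cross-validation fold, and the usual practice is to evaluate with those 10 folds rather than re-splitting randomly. Below is a minimal, hedged sketch of reading that file with pandas; the path and column names reflect the commonly documented archive layout (`UrbanSound8K/metadata/UrbanSound8K.csv` with `class`, `fold`, and `slice_file_name` columns), so verify them against your own download.

```python
# Hedged sketch: read the UrbanSound8K metadata and iterate over its 10 folds.
# Paths and column names assume the standard archive layout; verify locally.
import pandas as pd

META = "UrbanSound8K/metadata/UrbanSound8K.csv"  # adjust to your download

df = pd.read_csv(META)
print(df["class"].value_counts())  # clip counts for the 10 urban sound classes

# The dataset is distributed pre-split into 10 folds; hold out one fold at a
# time instead of re-splitting randomly.
for fold in sorted(df["fold"].unique()):
    test = df[df["fold"] == fold]
    train = df[df["fold"] != fold]
    # audio files live under UrbanSound8K/audio/fold<fold>/<slice_file_name>
    print(f"fold {fold}: {len(train)} train clips, {len(test)} test clips")
```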

^ Back to Contents ^

+
+