Home
a.k.a. (as an homage to TensorFlow for Poets)
In this workshop, we will teach you how to design audio classifiers using neural nets. We will guide you through the steps of collecting and organizing data, generating spectrograms, training a network, and then using that network to detect audio in real time. We will use Jupyter notebooks, Python 3, PyTorch, and librosa to play with neural nets that can distinguish different kinds of music and different audio sources.
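As a preview of the preprocessing step, here is a minimal sketch of turning an audio file into a mel spectrogram with librosa, the representation the labs feed into their networks. The file name `cat_1.wav` is a hypothetical example, not part of the workshop materials.

```python
import numpy as np
import librosa

# Load the audio file (librosa resamples to 22,050 Hz by default).
y, sr = librosa.load("cat_1.wav")

# Compute a mel-scaled spectrogram and convert power to decibels,
# a common input representation for audio neural nets.
S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
S_db = librosa.power_to_db(S, ref=np.max)

print(S_db.shape)  # (n_mels, n_frames)
```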
| Time | Monday | Tuesday | Wednesday | Thursday | Friday |
|---|---|---|---|---|---|
| 9am | Introductions | Review/Q&A | Review/Q&A | Review/Q&A | Review/Q&A |
| 10am-noon | Neural Nets | Collecting & Analyzing Sounds | Designing Interaction | Generative Models | Project Time |
| noon-1:30pm | Lunch | Lunch | Lunch | Lunch | Lunch |
| 1:30-3:30pm | Lab Setup | Stanford Sounds Dataset Activity | Urban Sounds Dataset Lab | Final Project | Project Time / Show and Tell |
| 3:30-5pm | Cats & Dogs Lab | Stanford Sounds Dataset Activity | Plotting Final Project | Final Project | Happy Hour |
Lab 1. Cats and Dogs: binary classification (see the sketch below)
Lab 2. Making the Stanford Sounds Dataset
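For a sense of what the binary classification lab works toward, here is a minimal sketch, assuming mel-spectrogram inputs of shape (1, 128, 128) and a single cat-vs.-dog logit. The architecture, shapes, and class name `AudioBinaryClassifier` are illustrative assumptions, not the lab's actual network.

```python
import torch
import torch.nn as nn

class AudioBinaryClassifier(nn.Module):
    """Small CNN over spectrograms producing one logit (cat vs. dog)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 32 * 32, 1)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = AudioBinaryClassifier()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for spectrograms and 0/1 labels.
x = torch.randn(8, 1, 128, 128)
y = torch.randint(0, 2, (8, 1)).float()

loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```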