We were waiting to move to Hugging Face's model hub before adding consistency tests for pretrained models across Asteroid versions.
Question: how should we do that?
For the tests, we should have a folder of wav file pairs, where each pair holds the output expected from a given input. We could have several such pairs or only one.
The input data could either be real data from the dataset we trained on, or random dummy data.
The dummy approach is probably simpler, but it would be nice to have a separation/enhancement example for each model. If we take the "real data" approach, we should always share the same example, because of license issues.
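A minimal sketch of what the dummy-data variant could look like, assuming the test fixes a random seed so every Asteroid version sees the same input. `separate_fn` and `expected` are placeholders: in practice `separate_fn` would wrap a pretrained model pulled from the hub, and `expected` would be loaded from the stored reference wav:

```python
import numpy as np

def make_dummy_input(num_samples=16000, seed=42):
    """Deterministic random 'audio' so every run sees the exact same input."""
    rng = np.random.RandomState(seed)
    return rng.randn(num_samples).astype(np.float32)

def check_consistency(separate_fn, expected, atol=1e-5):
    """Run the model on the fixed dummy input and compare to the stored reference.

    separate_fn : callable mapping an input array to an output array
                  (hypothetical stand-in for a pretrained model's forward pass).
    expected    : reference output saved from a previous Asteroid version.
    """
    output = separate_fn(make_dummy_input())
    return np.allclose(output, expected, atol=atol)
```

With real data the structure would be identical, only `make_dummy_input` would be replaced by loading the shared example wav from the test folder.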
Thoughts anyone?