Spectral Selection: Event Boxing #68
-
Hi @paulpeyret, thanks for the suggestion and comments. I know you can do spectral selection in Audacity and Raven; I presume you use this to highlight a segment of a call or song? Implementation would be difficult. I rely on the Wavesurfer library for the spectrograms and its regions plugin for selections, and that plugin doesn't support boxing. I would have to rip out the regions plugin and re-implement my own, or adapt it, and I think that would be weeks or months of work. Unless there's huge demand for such a feature, I'll be honest and say up front that I don't think this is going to happen. I'll leave the issue open to see if there is significant interest, but tag it as an enhancement/won't fix for now.
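
For context, here is a minimal sketch of how the Wavesurfer regions API works (assuming wavesurfer.js v7; the container ID and audio file are illustrative, and this is not Chirpity's actual code). A region is defined only by start and end times, so it always spans the full height of the view - there is nowhere to store low/high frequency bounds without adapting or replacing the plugin:

```js
// Sketch only: wavesurfer.js v7 with its Regions plugin (assumed setup, not Chirpity code).
import WaveSurfer from 'wavesurfer.js'
import RegionsPlugin from 'wavesurfer.js/dist/plugins/regions.esm.js'

const regions = RegionsPlugin.create()

const ws = WaveSurfer.create({
  container: '#waveform',   // hypothetical container element
  url: 'example.wav',       // hypothetical audio file
  plugins: [regions],
})

ws.on('decode', () => {
  // A region is purely a time interval: only start/end (seconds) exist,
  // so it covers the whole frequency range of the spectrogram view.
  regions.addRegion({
    start: 1.2,
    end: 3.4,
    color: 'rgba(255, 0, 0, 0.1)',
    drag: true,
    resize: true,
  })
})
```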
-
Thanks @Mattk70 for the feedback. "I know you can do spectral selection in Audacity and Raven; I presume you use this to highlight a segment of a call or song?" Yes, in bioacoustics it is useful to know the frequency bounds of a call when you want to extract acoustic features from songs/calls. That said, I understand it is not a quick and easy change.
-
Hi @Mattk70,
Thanks for all the updates; it looks wonderful!
I am now considering doing manual (AI-assisted) annotations inside Chirpity.
Doing it in Chirpity seems faster than doing it in Audacity.
The only thing I would miss is spectral selection and the ability to export the boxed sound events (time & frequency bounding boxes) - see the sketch at the end of this comment for the kind of record such an export might contain.
I know the Chirpity and Birdnet models don't do object detection and only perform time segmentation of sound events in the spectrogram; however, I think adding event boxing to Chirpity would be very useful and would allow for more precise annotations.
I would be pleased to see this new feature in Chirpity. 😊
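
To illustrate what such an export could contain, here is a rough sketch (the record fields and the `toSelectionTable` helper are hypothetical - nothing like this exists in Chirpity today - but the columns follow the common Raven-style selection-table layout of begin/end time plus low/high frequency):

```js
// Hypothetical annotation records: time AND frequency bounds per boxed sound event.
const boxes = [
  { id: 1, label: 'Erithacus rubecula', start: 1.20, end: 3.45, lowFreq: 2000, highFreq: 8500 },
  { id: 2, label: 'Turdus merula',      start: 7.80, end: 9.10, lowFreq: 1500, highFreq: 4000 },
]

// Serialise the boxes as a tab-separated, Raven-style selection table.
function toSelectionTable(boxes) {
  const header = ['Selection', 'Begin Time (s)', 'End Time (s)', 'Low Freq (Hz)', 'High Freq (Hz)', 'Annotation']
  const rows = boxes.map(b => [b.id, b.start, b.end, b.lowFreq, b.highFreq, b.label])
  return [header, ...rows].map(r => r.join('\t')).join('\n')
}

console.log(toSelectionTable(boxes))
```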