Motivation
I noticed that in polyphonic/pianoform music, music21 objects are organized into Voice streams. Because a Voice is part of a Measure, which is part of a PartStaff, voices cannot change staff, a case that sometimes occurs in piano music.
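To make the containment chain concrete, here is a minimal sketch (the identifiers `upper`, `m1`, `v1` and the pitch are only placeholders for illustration, not taken from any particular score):

```python
from music21 import stream, note

# Current containment chain: Note -> Voice -> Measure -> PartStaff.
upper = stream.PartStaff()
upper.id = 'upper'

m1 = stream.Measure(number=1)
v1 = stream.Voice()
v1.id = '1'

n = note.Note('C4', quarterLength=1.0)
v1.insert(0.0, n)      # the note lives in the voice...
m1.insert(0.0, v1)     # ...the voice lives in the measure...
upper.insert(0.0, m1)  # ...the measure lives in the upper PartStaff.

# The staff a note is drawn on is implied by this chain, so a voice that
# crosses to the lower staff mid-measure has no direct representation.
```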
Unfortunately, the way music21 is set up right now, with each object stored in only one site within the containment hierarchy, prevents expressing voice and staff independently, because that would require logging an object in two different sites (Voice and Staff). I think this library really needs to allow for this in order to be a truly general representation of music notation. This would also be useful in other cases: consider tuplets and beams, for example. Right now they are set up as objects that are stored on notes (correct me if I'm wrong). From a modelling perspective it would make a lot of sense to treat beams and tuplets as sites that live inside a Voice and that contain objects themselves. For that, see the figures in the musicdiff paper: https://inria.hal.science/hal-02267454v2/document
As those figures show, this makes it easy to express the nested nature of beams and tuplets, but it comes with the same problem as Voice/Staff: objects must be in multiple sites (both located inside the Voice). This is because beams can extend across tuplet borders, so you really need two independent structures.
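For comparison, here is a hedged sketch of how beams and tuplets are attached today, as per-note attributes (`Note.beams` and `Duration.appendTuplet`) rather than as containers; the concrete pitches and durations are only illustrative:

```python
from music21 import note, duration

# Today, beam and tuplet information is stored on each note individually,
# not in a container object that holds the notes.
n1 = note.Note('C4', quarterLength=0.5)
n2 = note.Note('D4', quarterLength=0.5)
n3 = note.Note('E4', quarterLength=0.5)

for n in (n1, n2, n3):
    # Each note carries its own Tuplet object (3 eighths in the time of 2).
    n.duration.appendTuplet(duration.Tuplet(3, 2))

# Each note also carries its own beam start/continue/stop marker.
n1.beams.fill(1, 'start')
n2.beams.fill(1, 'continue')
n3.beams.fill(1, 'stop')
```

Under the container-based design sketched in the musicdiff figures, the beam group and the tuplet would instead be sites holding these three notes, which is exactly where the multiple-sites question arises.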
I think if music21 overcame this specific limitation, it would become the library OMR researchers like myself desperately need right now. I would happily work on this if someone is open to discussing this issue. What makes me qualified to do so?
I have spent the last decade developing a music model of my own that can handle all these cases, albeit one that is less dynamic (objects have fixed parents and cannot be instantiated and used without them). I know this model can express all these cases because I also wrote a custom score renderer, playback, and import/export from MusicXML/MIDI. The model can also be edited and automatically tracks changes to tell the renderer what to update. The problem is that it is in Java, and I need a powerful model for my work in OMR, especially to help provide a common standard for comparing OMR results. You can look at the structure I came up with:
I don't want to translate my model, because it is designed for other use cases, but I would happily work on this library if that makes it more versatile and usable for my research. My dataset contains, and is explicitly tested for, voices changing staves, so I cannot use music21 for it right now (among other smaller issues; see #1633 and #1638).
Please tell me if you think having objects in multiple sites is feasible.
Intent
[x] I plan on implementing this myself.
[ ] I am willing to pay to have this feature added.
[ ] I am starting a discussion with the hope that community members will volunteer their time to create this. I understand that individuals work on features of interest to them and that this feature may never be implemented.