Expose discrete voices in API #80
Device names are already exposed through an existing function. For voice names I could add an additional function alongside it. Do you have any suggestions for how the API call for separate voice buffers should work / look?
Yeah, choosing one or the other is not ideal now that I think of it. Maybe you always provide stereo output in the first 2 channels, and the optional per-voice output in subsequent channels. I agree it wouldn't really work to call two separate methods; something along those lines seems more workable.
Regarding the buffer layout: I was trying to figure out how Modizer added multi-voice output to vgmplay: yoyofr/modizer@cf11760#diff-7390462695712dbe8460edcaab01bd6597b9d920f20a15211b0172acc8ceb4df
I disagree with this method, as it would require modifications to the emulation cores that would have to be applied inconsistently and would at the very least bypass the emulated chip's mixer. That could cause accuracy issues with sound chips that perform certain effects during mixing (the YM2612 and QSound come to mind off the top of my head, possibly others).
A few more reasons that I can think of:
As a worst case, consider the YMF278B (OPL4), which has up to 45 voices and 6 output channels. Personally, I'd rather keep this out of the library and have the application deal with creating outputs for each channel, either via multiple chip instances or via multiplexing (i.e. save state on the "main" instance and replay it on a "sub" instance for each voice you want to capture).
Hmm... you raise some good points. In principle, though, I think some enhancement to the cores is defensible if the purpose of the library is music playback and there is user demand. My hope was that libvgm could be a common foundation for players like Modizer or Rymcast; currently such players do their own hacks to obtain a voice visualization. But yes, it depends on the goals of the library. :)
The changes you're describing aren't related to playback, though; what you want is music visualization. I don't think that adding bloat deep into the cores will enhance playback on any level. To further clarify what I think: the individual voice output code would need to be added to each core and adapted to the mixing/channel update routines (which are written in many different ways), and of course, as mentioned, it will increase memory and CPU usage. It would be a burden to maintain: all existing cores would have to be adapted, and new cores and ports from other emulators would as well. Also consider the likelihood that it will be misused (i.e. not for visualization). If this function is absolutely necessary in your application, I think it would be best kept in a separate branch or fork.
While I'm not completely against adding functions for visualization or separate channel output, I won't put any effort into it anytime soon. I'm also not convinced about modifying the update/render function to take additional parameters. Right now I think functions that report the volume / frequency of the channels/voices would be more useful and feasible than additional per-voice output.
Thanks for the discussion here. I think we're in agreement that the best option would be to maintain a fork with voice output support.
My question is related to this, so I thought I would ask here instead of starting a new issue. I would like to access each voice separately too, but for a different purpose: rather than visualizing them, I'd like to render them to separate files. At this point even a manual process would do. For example, is there any way to configure some voices to be muted? I found https://github.com/weinerjm/vgm2wav and got it to work, but it lumps all the SN76489 voices together, and it appears to be a quick thing someone threw together, whereas this project looks much more mature.
If you want a more "programmable" solution, you can look at how player.cpp does it (line 703 at commit 5771347).
If you just want something that works out of the box, compile vgmplay-libvgm, which uses libvgm internally and includes it as a submodule, so you can make sure the versions are compatible.
I was not able to get
I have one big feature request for libvgm, and that is improved support for voices/channels in the API (I use the term "voice" to disambiguate from left/right stereo channels).
Device names/voice names
It would be nice for the API to expose (1) the active device/chip names, like YM2608, SegaPCM, etc., and (2) the voice names, like FM 1, FM 2, PSG 1, etc.
Friendly names are helpful for muting. Game Music Emu has basic support for this, but it is not implemented for all of the FM chips.
Voice buffers (for oscilloscopes, etc.)
In addition to the stereo buffer, I would love to be able to fill discrete voice audio buffers for external mixing or visualization.
The host would be responsible for any post-processing like autocorrelation, etc.