Prototype of providing feedback using sounds #1100
base: master
Conversation
Build failed! Build osara pr1100-1468,0ed71e55 failed (commit 0ed71e55c3 by @jcsteh) |
Build failed! Build osara pr1100-1469,872f78b5 failed (commit 872f78b57f by @jcsteh) |
Build succeeded! Build osara pr1100-1470,1dda98bc completed (commit 1dda98bc57 by @jcsteh) |
This is really cool. For some reason I thought you were against adding any kind of audio cues to OSARA, but I definitely see a lot of potential for this. I was able to test it on Mac; the sounds are missing from the .dmg, so the installer didn't copy them in, but I got it working by copying them into the right folder from the repo.
Here are a few ideas for what could also have a sound:
- A boundary sound if you use the track/item/marker/envelope point navigation keys and are already at the first/last one
- To expand on the marker/region sounds, perhaps having 2 distinct sounds for when the cursor crosses the start and end of an item or region. (I do a lot of tight item splitting to cut up sounds for games and think this might be helpful for making tighter splits.) I'd also like to see these decoupled from the report position while scrubbing setting.
- An option for aural instead of spoken peak watcher alerts, even if it's something like what Studio Recorder has where you get a beep when the level goes over the limit, though I think this could be taken further with multiple sounds for every few dB the level goes over. I realize this is subjective and at least Scott prefers spoken levels, but I'd personally love something like the accessible peak meter being integrated into OSARA directly, since that plugin hasn't been updated in a while and on Mac has to run under Rosetta, which adds an extra window to Command-Tab.
- A render complete sound - there's a script to do this, but the setup is a bit complicated and I think most people would benefit from this.
- Maybe indicating some track properties, e.g. folder states, with sound? This will probably open a whole can of worms about replacing more statuses with sounds, but IIRC multiple people want folder announcements in different places of the string, so this could be a decent alternative.
If you like any of these ideas and decide to move forward with further testing I'd be happy to put together some sounds for them.
|
It's not a super efficient idea, but it might make sense if the sounds didn't use the ASIO driver. |
Playing sounds outside of REAPER would require different code for each platform, or alternatively some cross-platform audio library. I'd really prefer not to do that and I'll probably drop this idea if that's a requirement. I guess we'll see if that is a requirement among people who particularly want this.
I'm not dead against it. I think it has to be done with a lot of careful consideration, though. It could end up being more annoying than useful and I think we should lean towards less rather than more, particularly in the early days, assuming we do it at all.
Curious. I explicitly wrote code to make sure they got included in the dmg. I don't know why that isn't working, so that'll be fun to debug.
I thought about this. I'm not sure how useful it is, though, since there are already plugins like JS: Tone gate that can do something like this and they'll probably be more accurate because they're integrated directly into the audio chain.
Pitched tones would probably be better for this, but adding code to generate and play tones will be quite a bit more work. I'm not entirely ruling it out though. |
Oh crap. I made a copy/paste typo in the dmg builder, so the sounds would have been copied into your osara locale folder. Sorry about that. You might want to remove those, heh. |
Build succeeded! Build osara pr1100-1471,5aae83f5 completed (commit 5aae83f599 by @jcsteh) |
Hi James, I found it really great. It gives a sense of things we otherwise couldn't have, and it avoids a lot of useless talk. I would make it independent from the other settings, so that it plays its sounds even when options like reporting the time while scrubbing aren't active. When we move, we can get a perfect sense of the edges just like sighted people do, perhaps with different sounds for the beginnings and ends of markers, regions and items; if you did that, I would really like it. I think playback via REAPER is excellent; it doesn't bother me that it plays through ASIO or whatever driver REAPER is using at that moment, since this isn't something used live but in the studio. Thank you, I really hope you want to start this new era; in the meantime, thank you just for letting me try it. |
Thanks. Just one suggestion: I would use left panning for the start of an item, right panning for the end of an item, left panning for the start of a region and right panning for the end of a region, plus different sounds for items, markers and regions, which would make everything even more immediate. |
This way we couldn't use REAPER live unless we had a device with at least four channels. |
From the topmost comment:
|
Just my 2 cents: I'm a fan of the idea of sounds as an additional layer of feedback for situations where we've already got a lot of feedback going on, just to speed up the workflow. An example would be the deletion or insertion of items, where you get feedback about how many items/tracks have been added/deleted, info about the current ripple state, and some additional feedback that is already in the works. Listening through this entire message, depending on your screen reader speech settings, can take well over a second, maybe multiple seconds. Sounds can be much quicker. What I wouldn't want to see is REAPER turn into a random sound generator of sorts, where you have to book lessons from someone teaching you what every sound is meant to tell you. To be exact, of all the ideas mentioned above, I don't think a single one should be represented by sound instead of speech feedback. As Jamie already said, tonal feedback in addition to peak watcher is something Tone gate can do, although it wouldn't be pitched depending on how much you crossed the threshold, and a render sound in particular is something I personally don't think is really necessary. It's a nice gimmick, but it doesn't help improve a workflow or speed up a process in any way that I can see. Crossing boundaries of items/regions could benefit from additional speech feedback though, as we already have similar things going on with markers anyway, but sounds... I don't know, especially as you're most likely already listening to something when this happens, so the sounds will just mix with the track/SFX you're currently working on. |
Hi, I'm also not usually in favor of all those programs for the blind that use sounds and jingles that do nothing but confuse; I don't even use sounds on mobile phones, or in NVDA or JAWS, and in fact I remove them all. But in this case, simple sounds to indicate the edge of an item, or the crossing of several items, markers or regions, seem really productive to me, and absolutely not annoying, as they would usually be used when the project is at a standstill and therefore can't be confused with music. I would never want a sound that announces a region or a marker during playback, only sounds that let me know when I touch the edge of a region or a marker with the arrows, thus avoiding having to always listen to the time, or to go to the beginning or end of the item and then move a little to edit. These sounds would only be there where the voice would otherwise speak, perhaps, as I said above, in stereo to make it immediately clear from the right or left position whether we are at the beginning or end of an element, so they would not interfere with the metronome or the song. Sounds like those James used are absolutely distinguishable and actually give the ear a little rest from the synthesis that already speaks to us all day, and even when working together with sighted people they would be less jarring for them too in many situations. |
Some sounds can be helpful in providing quick feedback to confirm that a particular action has really taken place.
The sound feedback I especially like and find helpful when using the JAWS scripts are the sounds associated with creating the start and end of a selection when hitting the left and right bracket keys. These confirm that those selection points have been made. There is a slightly different sound for placing the start and end markers. In fact, I didn’t even realize that Osara gave a long speech output when hitting these keys until I turned off the scripts. So, in a case like this, I think the sounds are much more efficient.
Similarly, I keep feedback sounds turned on in my Microsoft Office applications. Some of these are JAWS sounds to indicate that spell check made an automatic adjustment, some are to verify cut and paste has actually happened, etc. all without having to hear a ton of speech.
In my opinion, sounds can provide useful and efficient feedback but, as others have pointed out, must be used judiciously.
--Pete
|
In general, I'm the polar opposite of @ranetto in that I have my VoiceOver set to replace speech with sounds or pitch changes whenever possible, and although I'm an NVDA user, I have JAWS set up with a sound scheme whenever I use it. I feel this makes me way more productive, but I agree it's something that would take time to learn and would be too confusing to new users if every track state were just a sound. That being said, I don't think the render complete sound would be a gimmick, especially for longer renders - think saving a long audio file with a lot of effects like AI noise reduction, or video exports, which substantially increase the rendering time. I don't know about you, but when I render such a long project, I usually don't stay in the render progress window and observe it. I'll often go do something else on the computer or step away from it, and that's where I think having a sound would really help, especially on Windows, where REAPER doesn't send a notification when the render is finished.
As for the item boundary crossing, I was indeed thinking of the case where you're scrubbing through and not just playing, and in that case even if there were additional speech feedback added it would often just get eaten for breakfast by the speech interrupt if you're holding the arrow keys to scrub.
I love the idea of sound feedback for editing actions though; I had no idea the JAWS scripts did this for time selection. The one suggestion I have for this is that if you decide to use sounds to replace speech when items are added/deleted, the sound should be slightly different for one item versus multiple items. You could probably just have one very short sound that gets played once for a single item and twice for multiple.
|
There's already a script that does it; I think it was written by either Chessel or Meo. Holler if you can't find it and I'll try to dig it up. |
I don't think the Render complete sound would be a gimmick.
[PT] Agreed. A single sound indicating that a long process had completed could be useful.
--Pete
|
I love the idea of sound feedback for editing actions though, I had no idea the JAWS scripts did this for time selection.
[PT] Just thought I'd mention that the other cute thing Jim did here was to have the sound for setting the selection start come out of the left speaker and the sound for setting the selection end come out of the right speaker, to reinforce which side of the selection was being made. He also cut off the OSARA speech feedback for these actions so they don't get in the way. Of course, he also has an option for whether or not to use sound cues for other events he programmed in, like moving past the boundary of regions.
--Pete
|
I honestly don't see the advantage of using sounds for making a time selection. In that case, there's no other speech happening, so the sounds don't make things more efficient. In contrast, sounds for item boundaries or markers are more efficient because adding that to the additional speech that occurs would mean it takes a lot longer to hear all of the information. |
I honestly don't see the advantage of using sounds for making a time selection. In that case, there's no other speech happening
[PT] When I make a time selection using the left and right brackets without the JAWS scripts running (i.e., using the default JAWS configuration), here is what Osara reports:
set selection start
set selection end
Whereas with the JAWS scripts running I hear one sound when hitting the left bracket and a different (but related sound) when hitting the right bracket. There is no speech when using the JAWS scripts, just sounds.
--Pete
|
Yes, this is a useful way of using sounds intelligently. |
I don't think anyone will be developing a frontend for that on a per-sound basis because right now, there are GUI issues to address with higher priority. However, OSARA won't crash if you modify or rename individual sounds that you don't want to hear. |
That's telling us what happens. What we need to understand is why the approach Jim has taken is being described as advantageous. |
OK! I got it. |
It's a way that you could have control over individual sounds. That's what you wanted, right? |
I really don't want to impose anything; it was just a suggestion. Sorry.
For the record, I don't think any of this is ideal. I'm not even convinced that OSARA needs sounds yet. I can understand that telling you to tamper with the sounds on your own isn't the answer you expected; OSARA likes to make things easy for users whenever we can. What I probably didn't explain well enough is that at the moment, development of any of OSARA's GUIs is slow, difficult work. We have other issues open; for example, at the moment our GUIs don't scale, which could be very problematic for people with some usable vision. IMO, issues like that should be resolved before we add more options or more complexity to the GUI. |
Replying to Pitermach: I completely agree with you, you never play during rendering. When I render a project or a track with 64x oversampling in some Acustica plugins or others, I can't even read the window, and I leave for a cigarette or coffee break. I mean sounds only for things related to editing and that's it.
That's telling us what happens. What we need to understand is why the approach Jim has taken is being described as advantageous.
[PT] I don’t think there is an advantage to getting speech feedback versus sound feedback for some actions. Seems to be mostly a matter of preference. Some people like having speech tell them precisely what has happened, while more experienced users might prefer less speech feedback and just prefer to hear a quick sound to confirm that the action actually happened. Sort of like those little cut and paste sounds one hears when using an Office application. Some users, like myself, leave those office sounds on because the audio confirms to me what a sighted person sees. Others turn the Office sounds off because they are annoyed with all of the sounds.
|
I would just add that the sounds of OSARA, as far as I have tried them, are much more precise and lower latency than those of the JAWS scripts, which I immediately disable as they only work once or twice; if you hold down an arrow, for example, they never work well, and they are very laggy and annoying. Those of OSARA, despite James having explained to us that the code still needs polishing, I found much more precise, less annoying and lower latency. The JAWS scripting language is slow by nature, like everything in JAWS itself! |
Ah yeah, checking more closely I'm seeing you weren't the person who said advantage anyway. Preference, gotcha. |
I would just add that the sounds of OSARA, as far as I have tried them, are much more precise and lower latency than those of the JAWS scripts
[PT] That would be nice. As you say, sometimes the JAWS scripts are a bit slow to play the sound cue, especially if you are working/typing quickly. I haven’t tried the build with the proposed sound schemes but should. I’ll see if I can find that old e-mail.
--Pete
|
With regard to enabling/disabling every single sound, I'd point out that even though REAPER is indeed very configurable, it doesn't allow you to configure the text, icon, shape or animation of every single thing that is displayed. That's effectively the equivalent here. Configurability is important for flexibility, yes, but excessive configurability often just reflects a failure to thoroughly consider the UX. If individual needs are so varied that every little thing needs to be configured, we've most likely failed to think broadly enough. Aside from anything else, this becomes incredibly overwhelming for new users, who now have to decide which out of the 3 billion settings they need to enable or disable before they can be remotely productive. |
To get this discussion refined to a useful point, I'll say here that if we are going to adopt any sounds at all, we need to start small and tightly scoped. At a minimum, any sounds we implement should make the experience significantly more efficient. For example, I'm ruling out sounds for setting selection for now because even though some people prefer this, it doesn't really improve efficiency. Either way, you get immediate feedback and you don't have to wait for other information to know what occurred. On the other hand, markers or item edges while moving along the timeline are something we don't currently report at all, and if we did, it would make the current reports very inefficient. Similarly, the ripple mode report when moving items is possibly a candidate because that report happens right at the end, which means you have to wait some time before you realise you have ripple enabled. |
Having a sound at the end of rendering does not tax the CPU.
@Lo-lo78, on the contrary, I think you're being overly reactive here. Your opinion is welcome and respected, but we also have the right to our own opinions. Ultimately, we have to weigh input from many different people and then make the best decision we can for the project, knowing that some people won't like it but also knowing that we have to support thousands of users with many different levels of experience. No one was suggesting that your ideas weren't welcome or that you were trying to force the issue. We just have different opinions and that's okay. I don't think there's any problem with a rendering complete sound, though I also don't really know how the script implements this and it's not something I want to look into just yet. |
You are right! |
@jcsteh |
In issue #1063 I had the idea of optionally restricting navigation to the bounds of the time selection if one is made, but thinking about this PR, I think that audio signaling could be just as effective, perhaps even better. In other words, it could be useful to have 2 sounds: one when we enter and one when we leave the time selection, both when scrubbing and when navigating events in the MIDI editor. |
Sorry, but maybe I missed something: what are the sounds for during rendering? If we use NVDA, we have the progress bar beeps; if we use JAWS, we have the rendering percentage when it can be spoken, so why so much attention on rendering sounds? I think they would just be annoying. As for the output of the sounds themselves, I believe that if you use REAPER live or something similar, you must still have a multi-channel card to listen to the screen reader, so you can assign the second audio port externally via the master. If you use it in the studio, I think listening to these sounds in headphones is much better for those who work with us and shouldn't be bothered by them, so I don't understand the problem with using the same audio engine as REAPER. In fact, even in the studio, if you have multiple pairs of speakers or different headphones, I suppose you also have a converter with different outputs. So in my opinion, the problem does not exist, and this avoids having to implement further audio engines, which, if they were WASAPI or MME, would certainly not be synchronized with the point at which we find ourselves in ASIO; they would be imprecise and laggy like those of the JAWS scripts, and would lead to further problems with non-multi-client drivers or with the exclusive or non-exclusive mode of WASAPI drivers. |
This is a prototype/proof of concept. In submitting this PR, I'm not suggesting that we'll necessarily implement sounds in these particular cases, nor am I even promising that we'll implement sounds at all ever. I reserve the right to throw this code away and burn it. That said, we've never tried because we haven't had the core code for it, so this at least gets us beyond that hurdle so we can at least consider it or experiment.
As a starting point, this does the following:
I guess the big question now is: if we did want sounds, when would we really want them?