Don't drop source when finished #578

Closed
marcpabst opened this issue May 22, 2024 · 10 comments

@marcpabst

I've been exploring the new seeking feature and noticed a couple of issues. When a source is played, it gets removed from the queue and try_seek() stops working. On macOS, it even stalls the thread without returning an error, which might need its own bug report.

One idea I had to work around this is to implement a NeverStop source that never returns None on next(). However, I'm wondering if there's a better way to handle this, like automatically pausing when a source finishes.

My current workaround, for reference:

use std::time::Duration;

use rodio::{Sample, Source};

#[derive(Clone, Debug)]
pub struct NeverStop<I>
where
    I: Source,
    I::Item: Sample,
{
    source: I,
}

impl<I> NeverStop<I>
where
    I: Source,
    I::Item: Sample,
{
    pub fn new(source: I) -> Self {
        NeverStop { source }
    }
}

impl<I> Iterator for NeverStop<I>
where
    I: Source,
    I::Item: Sample,
{
    type Item = <I as Iterator>::Item;

    fn next(&mut self) -> Option<<I as Iterator>::Item> {
        // If the inner source has ended, keep yielding silence (the
        // zero-valued sample) instead of returning None, so the queue
        // never considers this source finished.
        Some(self.source.next().unwrap_or_else(|| I::Item::zero_value()))
    }
}

impl<I> Source for NeverStop<I>
where
    I: Iterator + Source,
    I::Item: Sample,
{
    fn current_frame_len(&self) -> Option<usize> {
        None
    }

    fn channels(&self) -> u16 {
        self.source.channels()
    }

    fn sample_rate(&self) -> u32 {
        self.source.sample_rate()
    }

    fn total_duration(&self) -> Option<std::time::Duration> {
        None
    }

    fn try_seek(&mut self, pos: Duration) -> Result<(), rodio::source::SeekError> {
        self.source.try_seek(pos)
    }
}
@dvdsk dvdsk self-assigned this May 23, 2024
@dvdsk
Collaborator

dvdsk commented May 23, 2024

First of all, thanks for the report! 👍

I've been exploring the new seeking feature and noticed a couple of issues. When a source is played, it gets removed from the queue and try_seek() stops working.

If I understand correctly to reproduce you:

  • create a Sink
  • add a single source
  • wait till that source is done and the queue in the sink advances
  • call try_seek on Sink

You would expect the seek call to work on the last played item and allow you to (re)start playback, correct? That seems reasonable to me. The issue is the current behavior of the queue: it throws away sources that are done. This would need some kind of "seek support" for the queue.

On macOS, it even stalls the thread without returning an error, which might need its own bug report.

That's pretty bad. I think this could be due to the blocking recv here:

match feedback.recv() {

If the source has been removed by the queue, that recv call will block forever. For now an easy fix would be replacing it with recv_timeout (with a timeout larger than the periodic_access interval).
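For reference, the difference is easy to demonstrate with a plain std channel. The snippet below is only a stand-in for rodio's internal feedback channel, not its actual code:

```rust
use std::sync::mpsc;
use std::time::Duration;

fn main() {
    // Hypothetical stand-in for rodio's internal feedback channel.
    let (tx, feedback) = mpsc::channel::<()>();

    // Keep the sender alive but never send, mimicking a reply that
    // never arrives because the source already left the queue.
    let _keep_alive = tx;

    // A plain `feedback.recv()` would block this thread forever here.
    // `recv_timeout` bounds the wait instead:
    match feedback.recv_timeout(Duration::from_millis(10)) {
        Ok(()) => println!("seek acknowledged"),
        Err(mpsc::RecvTimeoutError::Timeout) => println!("timed out"),
        Err(mpsc::RecvTimeoutError::Disconnected) => println!("sender gone"),
    }
}
```

With no message pending, the match takes the `Timeout` arm after 10 ms instead of hanging.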

One idea I had to work around this is to implement a NeverStop source that never returns None on next(). However, I'm wondering if there's a better way to handle this, like automatically pausing when a source finishes.

I like the idea of automatically pausing the queue if there is no more source past the current. I remember a PR or Issue where someone proposed expanding the queue to support referencing items in it. I gotta find that again and see if that could be useful.

@dvdsk dvdsk added the bug label May 23, 2024
@marcpabst
Author

marcpabst commented May 23, 2024

Hi, thanks for your reply! Your description sums it up nicely and confirms what I thought was the root of the issue. I think a pause-on-finish feature would be great and could even be the default. The only downside I see is that some data might be kept in memory for longer, but that seems manageable.

support referencing items in it.

As in accessing items in the queue through a sink?

@dvdsk
Collaborator

dvdsk commented May 23, 2024

As in accessing items in the queue through a sink?

That does not seem worked out but we do have this PR for accessing items in a new queue type: #506

dvdsk added a commit that referenced this issue May 23, 2024
@dvdsk
Collaborator

dvdsk commented May 23, 2024

#579 should fix the hanging. I had a look at the queue that underpins most of Sink's functionality. In my opinion we have outgrown the current design for the Sink and queue. I want to make rodio easier to use and maintain. If we want to keep adding features (such as getting the current playback position) the Sink and queue need to be re-designed.

That would also be a good point to address some current pitfalls in the API (for example Sink::new returning two objects you need to keep alive).

Right now I am working on other stuff and not really using rodio. I hope that will change in a few months at which point I'll take a look at such a re-design. Of course I'll do my best to fix more pressing issues as they pop up.

For now, pending that redesign, I would recommend using append_with_signal to get a signal when the queue is empty, you can then use that to re-add the last item if you need to seek back in it after it has left the queue. Let me know if you have trouble figuring that out!
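The flow would look roughly like this. `Track` and `enqueue` below are hypothetical stand-ins that mock the end-of-source signal with a plain channel; in rodio the receiver comes back from the queue input's append_with_signal:

```rust
use std::sync::mpsc::{channel, Receiver};

// Hypothetical stand-in for a source you can cheaply re-create.
#[derive(Clone)]
struct Track(&'static str);

// Mocks an `append_with_signal`-style API: returns a receiver that
// fires once the "source" has finished playing.
fn enqueue(track: &Track) -> Receiver<()> {
    let (done_tx, done_rx) = channel();
    println!("playing {}", track.0);
    // Pretend playback finishes immediately.
    done_tx.send(()).unwrap();
    done_rx
}

fn main() {
    let track = Track("song.ogg");
    let finished = enqueue(&track);

    // Block until the queue reports the source is done...
    finished.recv().unwrap();

    // ...then re-add the same item, so a later try_seek has a source
    // to operate on instead of an empty queue.
    let _finished_again = enqueue(&track);
}
```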

@marcpabst
Author

marcpabst commented May 23, 2024

Thank you! Am I correct in assuming that using append_with_signal and re-appending the last item has performance implications? I'm not entirely confident about the inner workings of the Decoders, but I would assume that re-initializing one could have a performance penalty compared to just continuously running the iterator and seeking back.

On a related note, do Decoder types stream data from disk? Would try_seek be faster if all the samples are preloaded into memory? I'm basically interested in keeping the latency during playback (and seeking) as low as possible, potentially at the expense of higher upfront cost and memory usage.

@dvdsk
Collaborator

dvdsk commented May 23, 2024

Thank you! Am I correct in assuming that using append_with_signal and re-appending the last item has performance implications? I'm not entirely confident about the inner workings of the Decoders, but I would assume that re-initializing one could have a performance penalty compared to just continuously running the iterator and seeking back.

Depending on the format there is some cost, yes; it's really unfortunate. If your NeverStop workaround is working I would keep that until we sort things out in rodio (which could take a while).

On a related note, do Decoder types stream data from disk? Would try_seek be faster if all the samples are preloaded into memory? I'm basically interested in keeping the latency during playback (and seeking) as low as possible, potentially at the expense of higher upfront cost and memory usage.

Yes, they stream data from disk if you are opening a file. The decoders wrap anything implementing Read+Seek. You could read the file into a Vec, then use std::io::Cursor to create a Read+Seek interface around it.
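A minimal sketch of that; the rodio-specific lines are left as comments since they need the crate, and the file name is made up:

```rust
use std::io::{Cursor, Read, Seek, SeekFrom};

fn main() -> std::io::Result<()> {
    // Read the whole file into memory up front (hypothetical path):
    // let bytes = std::fs::read("music.ogg")?;
    let bytes = vec![0u8; 16]; // stand-in so the example runs anywhere

    // Cursor gives a Read + Seek view over the Vec, which is the
    // interface a decoder expects:
    // let source = rodio::Decoder::new(Cursor::new(bytes))?;
    let mut cursor = Cursor::new(bytes);

    // Seeking is now a cheap in-memory pointer move, no disk I/O.
    cursor.seek(SeekFrom::Start(8))?;
    let mut buf = [0u8; 4];
    cursor.read_exact(&mut buf)?;
    println!("read {} bytes at offset 8", buf.len());
    Ok(())
}
```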

I do not know your precise application, but if it is a rhythm game or something with many sounds that need to be precisely timed you should take a look at kira. It's an audio lib purely focused on gaming and might fit your needs better, though it is a bit more complex to set up and less extensible.

@marcpabst
Author

Thanks for the tip! However, I believe rodio fits my needs better. I might even contribute to cpal for some specific requirements, like very low-latency audio.

I think it should be relatively straightforward to implement an InMemory wrapper that drains a source into a buffer (assuming it has a fixed duration) upon creation, then drops the original source and uses the buffer for playing and seeking. That should fix all my problems (I thought that Buffered would do something like that, but it doesn't support seeking, so I'm not sure if it really holds the samples in memory).

@dvdsk
Collaborator

dvdsk commented May 23, 2024

like very low-latency audio.

Atm rodio has an average 2.5 ms (max 5 ms) delay between a seek and it taking effect. Planning to get rid of that in the future, but you should know.

I thought that Buffered would do something like that, but it doesn't support seeking, so I'm not sure if it really holds the samples in memory.

It does hold the samples, decoded, in memory initially. It drops the part of a source already played/consumed from memory as soon as it can. That's pretty memory efficient, but it makes it impossible to support seeking without undoing that.

I think it should be relatively straightforward to implement an InMemory wrapper that drains a source into a buffer.

Yeah I think so too; you should be able to make it generic around anything implementing Source (which is basically just Iterator<Item=audiosample> with some added stuff). Then make the wrapper implement Source and you can use it.

Just wrap it around a sampleconverter before you stick it into the Sink. The sample converter will make sure you have the same sample rate for the entire source. That should make implementing seeking easier.

Combine that with your NeverStop Source and you're good 👍. I really love that extensibility.

Still I would have a look at just loading the file to memory. Then the decoder has fast random access and latency should be pretty low.
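A rough sketch of the InMemory idea, kept free of rodio types so it stands alone. The name `InMemory` and its methods are hypothetical; in practice you would also implement rodio's Source for it so it can go into a Sink:

```rust
use std::time::Duration;

// Hypothetical buffer-backed source: all samples are decoded up front,
// so seeking is just recomputing an index into the Vec.
struct InMemory {
    samples: Vec<f32>, // interleaved samples
    channels: u16,
    sample_rate: u32,
    pos: usize, // next interleaved sample index
}

impl InMemory {
    // Drain any sample iterator into memory. In rodio this would take
    // a `Source`; plain f32s keep the sketch self-contained.
    fn new(source: impl Iterator<Item = f32>, channels: u16, sample_rate: u32) -> Self {
        InMemory { samples: source.collect(), channels, sample_rate, pos: 0 }
    }

    // Seeking never fails and never touches the disk: map the target
    // time to a frame, then to an interleaved sample index.
    fn seek(&mut self, to: Duration) {
        let frame = (to.as_secs_f64() * self.sample_rate as f64) as usize;
        self.pos = (frame * self.channels as usize).min(self.samples.len());
    }
}

impl Iterator for InMemory {
    type Item = f32;
    fn next(&mut self) -> Option<f32> {
        let sample = self.samples.get(self.pos).copied();
        self.pos += 1;
        sample
    }
}

fn main() {
    // 1 channel at 4 Hz: one sample per quarter second.
    let mut src = InMemory::new((0..8).map(|i| i as f32), 1, 4);
    src.seek(Duration::from_millis(500)); // 0.5 s -> frame 2
    println!("sample after seek: {:?}", src.next());
}
```

Seeking a single source this way assumes a fixed sample rate, which is why wrapping it in a sample converter first (as suggested above) simplifies things.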

@marcpabst
Author

I really love that extensibility.

I noticed that as well! I think some pretty solid API choices were made here :)

Still I would have a look at just loading the file to memory. Then the decoder has fast random access and latency should be pretty low.

I'm after sub-millisecond delays (which is tricky to begin with) and I don't want to include more sources of potential performance bottlenecks than necessary - although I think that we're probably talking about latencies an order of magnitude lower than that.

@marcpabst
Author

I think this can be closed, as the original issue is currently intended behavior, and the part that was actually a bug seems fixed.
