Migrate from synchronous to async read #127

Draft · wants to merge 1 commit into base: Omega
Conversation

@Mynacol commented Jul 1, 2024

This improves performance. The current implementation severely underperforms, reaching at most 60 Mbit/s on a local 1 Gbit/s link (where the command-line tool `scp` achieves over 800 Mbit/s). With video bit rates reaching 60 Mbit/s, videos can no longer be streamed without occasional interruptions.
The problem worsens severely with higher network latency: one setup with about 40 ms RTT only reached about 10 Mbit/s, even though the network could handle more throughput. See #22 for more discussion.
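This ceiling follows directly from the synchronous request/response pattern: with only one read in flight per round trip, throughput is bounded by the request size divided by the latency, regardless of link capacity. A quick back-of-the-envelope check (the 32 KiB chunk size here is an assumed, illustrative value, not taken from Kodi's source):

```python
# With synchronous reads, one request completes per round trip, so
# throughput is capped at roughly chunk_size / RTT.
CHUNK = 32 * 1024   # bytes per read request (assumed for illustration)
RTT = 0.040         # 40 ms round-trip latency

cap_mbit_per_s = CHUNK * 8 / RTT / 1e6
print(f"cap ≈ {cap_mbit_per_s:.1f} Mbit/s")  # ≈ 6.6 Mbit/s
```

which is in the same ballpark as the roughly 10 Mbit/s observed above.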
This work increases the throughput on the same setup from 60 Mbit/s to almost 100 Mbit/s.

This work always initiates the next read at the end of each previous one. This, however, requires specifying a read length up front, which might not match the length Kodi requests next (e.g. at the end of the file). To avoid memory leaks, we have to call sftp_async_read with a valid buffer of the correct size, so a temporary buffer is needed. If more was actually read than Kodi asked for, this work simply rewinds the current file offset to the place Kodi expects.
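The mechanism described above (read ahead into a temporary buffer, then rewind if too much was fetched) can be sketched language-neutrally. The sketch below simulates it in Python over an in-memory file rather than libssh's `sftp_async_read`; class and attribute names are illustrative, not from the PR:

```python
import io

class ReadAheadFile:
    """Minimal sketch of the read-ahead pattern: always have one
    fixed-size read "in flight" into a temporary buffer, and rewind
    the file offset if the prefetch overshot the caller's request."""

    CHUNK = 8  # read-ahead size; the real code would use a much larger value

    def __init__(self, f):
        self.f = f
        # Kick off the first read-ahead immediately (stands in for the
        # asynchronous request the real code issues).
        self.pending = self.f.read(self.CHUNK)

    def read(self, length):
        # Start with the data the previous read-ahead already fetched.
        data = self.pending
        # If the caller wants more than was prefetched, read the rest now.
        while len(data) < length:
            more = self.f.read(length - len(data))
            if not more:  # end of file
                break
            data += more
        # If the prefetch overshot, rewind the offset to the place the
        # caller (Kodi) expects and discard the excess.
        if len(data) > length:
            self.f.seek(length - len(data), io.SEEK_CUR)
            data = data[:length]
        # Immediately initiate the next read-ahead.
        self.pending = self.f.read(self.CHUNK)
        return data
```

Note that after a rewind, the next read-ahead simply re-fetches the discarded bytes from the corrected offset, which keeps the bookkeeping simple at the cost of some redundant transfer near short reads.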

This might also interfere with other/threaded usage: the async work can run concurrently with normally mutex-protected sections, and is even expected to change the file position while a read is in flight, leading to inconsistent results when GetPosition is called at the same time.

This PR "works", but I don't feel confident about error handling, thread safety, and other edge cases.

@wapsi commented Aug 7, 2024

This PR sounds interesting, as I'm also facing these issues with the SFTP VFS addon when accessing a server with high RTT, even though the connection's throughput is sufficient.

May I ask: are you able to provide an ARM64 build with this patch applied? I would like to test it on my Android box.

@Mynacol (author) commented Aug 12, 2024

> Are you able to provide ARM64 build with this patch applied?

Sorry, no. I test this directly with NixOS on my machines.

Additionally, I should note that a file copy in Kodi's file manager seemed to work fine, but playback of a movie failed to even start. So this code is not really working right now.

@Mynacol marked this pull request as draft on August 12, 2024.