Switch to fmp4 instead of mpegts #542
base: master
Conversation
@@ -119,8 +119,8 @@ func toSegmentStr(segments []float64) string {
func (ts *Stream) run(start int32) error {
	// Start the transcode up to the 100th segment (or less)
	length, is_done := ts.file.Keyframes.Length()
	end := min(start+100, length)
Remember to revert this after testing
Maybe it would be wisest to switch to
I'm realising supporting this would allow av1 streaming. This would be huge to reduce bandwidth usage. Definitely want to work on this in the following days.
It seems presentation time and durations are not consistent with previous segments so the hls renderer can't keep up
For now, disabled all audio variants since their handling will be entirely different. Found out that audio and video segments don't need to line up (same number/duration). As long as the whole file stays long enough, it's fine. Video handling now fails when there are too many keyframes close together (like 0.01, 0.3, 0.4, 2, 4). It would only output 3 segments instead of the 5 we would want. We might get around this by using fragments containing more than 1 keyframe if we handle things right.
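The workaround described above (fragments spanning more than one keyframe) could be sketched like this. This is an illustrative sketch, not Kyoo's actual code; the function name and the minimum-duration threshold are assumptions:

```go
package main

import "fmt"

// mergeCloseKeyframes keeps a keyframe as a segment boundary only when
// enough time has passed since the previous boundary; keyframes that are
// too close together stay inside the same fragment instead of producing
// segments that ffmpeg would drop.
func mergeCloseKeyframes(keyframes []float64, minDuration float64) []float64 {
	if len(keyframes) == 0 {
		return nil
	}
	out := []float64{keyframes[0]}
	for _, kf := range keyframes[1:] {
		if kf-out[len(out)-1] >= minDuration {
			out = append(out, kf)
		}
	}
	return out
}

func main() {
	// The problematic example from the comment above:
	fmt.Println(mergeCloseKeyframes([]float64{0.01, 0.3, 0.4, 2, 4}, 1.0))
	// → [0.01 2 4]: three well-spaced boundaries instead of five cramped ones.
}
```

With a 1-second floor, the cluster at 0.01/0.3/0.4 collapses into a single fragment, which matches the "fragments containing more than 1 keyframe" idea.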
Idea seems to work; need to:
We would need to fix the assumption that keyframes are a file-level thing. This should move to a stream-level thing. It's legal to have audio/video with different segments. It's also legal to have two videos w/ different segments. The only constraint is that variants should have the same segments (aka video1/720p & video1/1080p need the same segments while video1/720p & video2/1080p can have different segments). Doing this would also unlock video selection when the file contains multiple video streams.
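The restructuring above could look roughly like this. A hypothetical sketch, assuming made-up type names (`StreamKey`, `MediaFile`) rather than Kyoo's real ones: keyframes are keyed per stream, and all quality variants of one stream share the same boundaries.

```go
package main

import "fmt"

type StreamKind int

const (
	Video StreamKind = iota
	Audio
)

// StreamKey identifies one stream inside the file (video1, video2, audio1...).
type StreamKey struct {
	Kind  StreamKind
	Index int
}

// MediaFile stores one keyframe list per stream instead of one per file.
// video1/720p and video1/1080p both read Keyframes[{Video, 1}], while
// video2 (or any audio stream) is free to use different segments.
type MediaFile struct {
	Keyframes map[StreamKey][]float64
}

func main() {
	f := MediaFile{Keyframes: map[StreamKey][]float64{
		{Video, 1}: {0, 2, 4, 6},
		{Video, 2}: {0, 3, 6},
	}}
	fmt.Println(f.Keyframes[StreamKey{Video, 1}]) // → [0 2 4 6]
}
```

All variants of a given video index look up the same entry, which enforces the "same stream, same segments" constraint by construction.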
I'm thinking of first fixing this & the data store in another PR before continuing this one.
Given their similar encoding complexity and performance, plus currently still wider support, shouldn't HEVC be preferred here?
hevc would be supported too; this would allow avc1 & hevc transmuxing. We could also add an avc1 or hevc transcode mode, but I don't know how this could be negotiated. Maybe an option on the server, making them available in the manifest for the client to pick.
From what I understand, currently Kyoo doesn't offer any API to tell it what codecs the client supports. Jellyfin has a whole system for this, see the DeviceProfile key in this API. Presumably, in the future something similar will be required as more devices with different capabilities are supported. This would probably entail some changes to the transcoding API too, though; a session concept would likely make sense in that case as well.
The server is not the one responsible for knowing if a media file can be played on the client. The client itself does the check and requests a format it can support. This is done either using the mimeCodec field of the
As long as all formats can be served under a single manifest, there won't be a need for a custom protocol to negotiate formats: the hls spec already handles this. I heard that some devices play badly with fmp4 (but work well with mpegts), so we will probably need to support at least those two versions in 2 different manifests just for those devices.
PS: There is already a session concept; it's used to know if a user is going to need a segment or to kill old ffmpeg processes.
Closes #534
Refs:
Currently, this PR is not working; I guess the timestamps or mp4 headers are invalid /shrug