GStreamer element timestamping issues #43
Comments
Hi Sam,

Hi,

Pushed the changes. Please let us know if you still see the issue.

Hi Jacob,

Thanks for looking into this. I've taken your changes and re-tested; anecdotally it seems to be an improvement, but I'm still seeing a few timestamping issues on the receiver. To answer your other questions:

I have re-run my testing using the scream_sender application and the sender pipeline you gave above.

I am using gigabit ethernet between the sender and receiver. I have two separate testing setups, one using VMs and the other using physical machines. The VMs communicate over a host-only network, and the physical machines are connected with an ethernet cable.

I might try to get an older version of GStreamer built and test against that, as I'm using a more modern version of GStreamer at present.

You should find attached here the output of both the sender (https://github.com/EricssonResearch/scream/files/8108746/samuelh_issue43_sender_gst.log) and receiver (https://github.com/EricssonResearch/scream/files/8108745/samuelh_issue43_receiver_gst.log), as well as the sender stats file (https://github.com/EricssonResearch/scream/files/8108747/samuelh_issue43_sender_scream_stats.csv).

-Sam
Hi Sam,

I don't think you need to try an older version of GStreamer.

Just to make sure: did you run the scream_sender and receiver applications without SCReAM enabled and with INIT_ENC_BITRATE 10000, and not see any timestamp issues?

We've seen such issues when the CPU load becomes too high when using a software encoder/decoder such as x264enc. SCReAM has a little overhead, so maybe that overhead pushes the CPU load too high.

You can try to decrease the rates a little (--maxrate and INIT_ENC_BITRATE), and also add -nosummary.

I'll test more with the software encoder to see if there is a SCReAM-specific problem.

* Jacob
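As a hedged sketch of that suggestion (assuming INIT_ENC_BITRATE is an environment variable read by the scripts in gstscream/scripts/ and is expressed in kbps, and that -maxrate and -nosummary go into the screamtx params string as in the pipelines later in this thread; the names and values here are illustrative, not the project's documented interface):

# Hypothetical: lower the initial encoder bitrate read by sender.sh (kbps, illustrative)
export INIT_ENC_BITRATE=5000
# Hypothetical variable: reduced -maxrate plus -nosummary in the screamtx params string
PARAMS=" -nosummary -forceidr -ect 1 -initrate 2500 -minrate 500 -maxrate 5000 "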
Hi Jacob,

Yes, when I tested without SCReAM (just rtpbin) I don't see any timestamping issues on the receiver.

As far as I know, the machines aren't running out of available resources: the load averages stay fairly sensible and well below the number of available CPU threads, and watching the processes in top doesn't show them ever using an exorbitant amount of CPU.

I have tried dropping the bitrates (x264 bitrate=2000, and screamtx settings of -initrate 2000 -minrate 500 -maxrate 4000) as well as the video size (800x480) to make the job a bit easier, but it doesn't seem to help; I still get timestamping errors. However, adding -nosummary to the screamtx params definitely seems to have helped when I turn down the bitrates.
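As a hedged illustration of that reduced-rate run, here is the sender pipeline from the issue description below with Sam's lower values substituted (x264 bitrate=2000, 800x480 video, -initrate 2000 -minrate 500 -maxrate 4000, plus -nosummary); the tee/fakesink debug branch is dropped for brevity, and the addresses are carried over from the original report, so this is a sketch rather than the exact command used:

gst-launch-1.0 -e videotestsrc is-live=true ! "video/x-raw,format=I420,width=800,height=480,framerate=25/1" ! x264enc name=video threads=4 speed-preset=ultrafast tune=fastdecode+zerolatency bitrate=2000 ! rtph264pay ssrc=1 config-interval=-1 ! queue max-size-buffers=2 max-size-bytes=0 max-size-time=0 ! screamtx name=screamtx params=" -nosummary -forceidr -ect 1 -initrate 2000 -minrate 500 -maxrate 4000 " ! udpsink host=192.168.56.103 port=5000 sync=false rtpbin name=r udpsrc port=5001 address=192.168.56.102 ! queue ! screamtx.rtcp_sink screamtx.rtcp_src ! r.recv_rtcp_sink_0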
Hi Jacob,

I ended up trying an older version of GStreamer (1.16.3, built from source), and the timestamping problem seems to have gone away completely. I've run several test runs this afternoon and not a peep out of the receiver about any timestamping issues. However, I'm instead getting occasional messages on the sender end about the queuing delay being too large, and that it's discarding a large number of packets:

Is there some specific tuning that needs to be performed with these? Or is this just an expected thing with the SCReAM algorithm? I can drop this onto another bug if you like.

Best regards,
Hi Sam,

BTW, "and the timestamping problem seems to have gone away completely."
Hi Jacob,

I've tried increasing the kernel buffers but that hasn't helped. I have a little utility I wrote a long time ago which analyses socket buffer residency, and the transmit buffer never really seems to grow beyond about 16 kBytes.

And yes, my testing was performed with GST_DEBUG=2, so that would include ERROR and WARN debug output from the decoder.

Best regards,
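For anyone reproducing this step, "increasing the kernel buffers" on Linux generally means raising the net.core socket-buffer ceilings and then asking the element for a bigger buffer; a generic sketch with illustrative limits:

# Raise the maximum socket buffer sizes the kernel will grant (values illustrative)
sudo sysctl -w net.core.rmem_max=8388608
sudo sysctl -w net.core.wmem_max=8388608
# GStreamer then has to request it, e.g. via udpsrc's buffer-size property:
#   ... udpsrc port=5000 buffer-size=8388608 ! screamrx name=screamrx ...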
Hi Sam,

With GST_DEBUG=2, don't you also get a bunch of:

WARN rtpsource rtpsource.c:1013:calculate_jitter: cannot get clock-rate for pt 96

* Jacob
Hi Sam,

I've run with higher rates:

Pipeline: videotestsrc is-live=true ! video/x-raw,format=I420,width=1280,height=720,framerate=50/1 ! x264enc name=video threads=4 speed-preset=ultrafast tune=fastdecode+zerolatency bitrate=30000 ! queue max-size-buffers=2 max-size-bytes=0 max-size-time=0 ! rtph264pay ssrc=1 config-interval=-1 ! queue max-size-buffers=2 max-size-bytes=0 max-size-time=0 ! screamtx name="screamtx" params=" -nosummary -forceidr -ect 1 -initrate 2500 -minrate 500 -maxrate 40000 " ! udpsink host=6.0.0.57 port=30112 sync=false rtpbin name=r udpsrc port=30112 address=6.0.0.59 ! queue ! screamtx.rtcp_sink screamtx.rtcp_src ! r.recv_rtcp_sink_0

and confirmed the bandwidth with bwm-ng. I don't see any loss.

Not sure how to debug. Try putting a fakesink after the decoder on the receiver. Are you running directly on the receiver and sender machines?

* Jacob
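bwm-ng here is the console bandwidth monitor; confirming the on-the-wire rate during a run looks like the following (eth0 is a placeholder interface name):

# Show live per-interface throughput in bits/s
bwm-ng -u bits -I eth0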
Hi Jacob,

I don't get any complaints about clock-rate, as my receiving pipeline includes a caps filter which explicitly defines the clock-rate.

After some additional testing I've come to the conclusion that the "rtpQueueDelay too large" messages are only prevalent in my VM testing environment, not when I run between two bare-metal machines, so I think this is environmental on my end and I'll try to figure that out when I can.

-Sam
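The caps filter being referred to can be seen in the receiver pipeline in the issue description below; the relevant fragment is:

... ! screamrx name=screamrx screamrx.src ! application/x-rtp,media=video,encoding-name=H264,clock-rate=90000,payload=96 ! rtpbin.recv_rtp_sink_0 ...

Declaring clock-rate=90000 in the caps lets rtpbin's jitter calculation resolve payload type 96, which avoids the warning Jacob quoted.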
Hi Sam,

Please post your current pipeline.

Also, the issue with the VMs possibly has to do with the display, so if you cut the pipeline after the decoder, you can verify just the UDP/RTP transport with SCReAM.

* Jacob
Hi Jacob,

My sending pipeline, as used with the scream_sender application, is as follows:

My receiving pipeline, with a video sink:

And with avdec_h264 going directly into fakesink:

This works fine between two actual computers, using GStreamer 1.16.3 and commit

-Sam
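The attached pipelines did not survive into this copy of the thread. As a hedged reconstruction (not Sam's exact pipeline), the fakesink variant would presumably be the receiver pipeline from the issue description below with everything after the decoder replaced:

gst-launch-1.0 -e rtpbin name=rtpbin udpsrc port=5000 ! screamrx name=screamrx screamrx.src ! application/x-rtp,media=video,encoding-name=H264,clock-rate=90000,payload=96 ! rtpbin.recv_rtp_sink_0 rtpbin. ! rtph264depay ! h264parse ! avdec_h264 name=videodecoder ! fakesink sync=false rtpbin.send_rtcp_src_0 ! funnel name=f ! udpsink host=192.168.56.102 port=5001 sync=false async=false screamrx.rtcp_src ! f.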
Hi Sam,

I don't know how important it is for you to run the receiver and sender on the same hardware, but you can try running them without VMs.

Also, what if the two VMs are on separate hardware?

Regards, - Jacob
Hello,
I've been trying to do some testing recently with SCReAM to understand how it performs under various levels of network quality and performance. I'm using GStreamer to encode and decode some video streams, and attempting to use your gstscream elements to do this all in-line, but I'm having some problems. I'm applying network conditions with the Linux tc and netem tools: capping network throughput, adding jitter and random packet loss, and increasing latency.

The primary issue I'm having is that the scream elements cause timestamping problems on my platform, which manifest as lots of dropped frames at the receiver. This happens even in my control tests, where I'm not applying any conditions to the test network. Using something like iperf3 in UDP mode, I can send and receive ~540 Mbit/s of traffic between the sender and receiver, but I'm configuring the screamtx maxrate as 10 Mbit/s, so I don't think I should be encountering any on-the-wire problems.
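For context, shaping of the kind described above is typically applied with a netem qdisc; a generic sketch follows, where the interface name and values are placeholders rather than the exact conditions used in these tests:

# Add 40 ms +/- 5 ms delay, 1% random loss, and a 10 Mbit/s rate cap (placeholders)
sudo tc qdisc add dev eth0 root netem delay 40ms 5ms loss 1% rate 10mbit
# Remove it again after the test
sudo tc qdisc del dev eth0 root
# Sanity-check raw UDP throughput, as in the ~540 Mbit/s figure above
iperf3 -s                               # on the receiver
iperf3 -c 192.168.56.103 -u -b 540M     # on the sender (address as in the pipelines below)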
The pipelines I’m using are quite similar to what you’re using in the sender.sh and receiver.sh scripts available in gstscream/scripts/, except I’m adding caps to force the videotestsrc to output 720p video:
Sender:
gst-launch-1.0 -e videotestsrc is-live=true ! "video/x-raw,format=I420,width=1280,height=720,framerate=25/1" ! x264enc name=video threads=4 speed-preset=ultrafast tune=fastdecode+zerolatency bitrate=8000 ! tee name=t t. ! queue ! rtph264pay ssrc=1 config-interval=-1 ! queue max-size-buffers=2 max-size-bytes=0 max-size-time=0 ! screamtx name=screamtx params=" -forceidr -ect 1 -initrate 2500 -minrate 500 -maxrate 10000 " ! udpsink host=192.168.56.103 port=5000 sync=false t. ! queue ! fakesink silent=false sync=false rtpbin name=r udpsrc port=5001 address=192.168.56.102 ! queue ! screamtx.rtcp_sink screamtx.rtcp_src ! r.recv_rtcp_sink_0
Receiver:
gst-launch-1.0 -e rtpbin name=rtpbin udpsrc port=5000 ! screamrx name=screamrx screamrx.src ! application/x-rtp,media=video,encoding-name=H264,clock-rate=90000,payload=96 ! rtpbin.recv_rtp_sink_0 rtpbin. ! rtph264depay ! h264parse ! avdec_h264 name=videodecoder ! queue max-size-buffers=1 max-size-bytes=0 max-size-time=0 ! glupload ! glcolorconvert ! fpsdisplaysink video-sink=\"glimagesinkelement\" rtpbin.send_rtcp_src_0 ! funnel name=f ! udpsink host=192.168.56.102 port=5001 sync=false async=false screamrx.rtcp_src ! f.
I get lots of the following output on the receiver:
0:00:44.559898508 2231 0x7fdaec01d800 WARN videodecoder gstvideodecoder.c:3298:gst_video_decoder_clip_and_push_buf:<videodecoder> Dropping frame due to QoS. start:0:00:44.194611002 deadline:0:00:44.194611002 earliest_time:0:00:44.205054014
0:00:47.037605415 2231 0x7fdaec01d800 WARN videodecoder gstvideodecoder.c:3298:gst_video_decoder_clip_and_push_buf:<videodecoder> Dropping frame due to QoS. start:0:00:46.674504689 deadline:0:00:46.674504689 earliest_time:0:00:46.689336031
0:00:47.066500703 2231 0x7fdaec01d800 WARN videodecoder gstvideodecoder.c:3298:gst_video_decoder_clip_and_push_buf:<videodecoder> Dropping frame due to QoS. start:0:00:46.714503414 deadline:0:00:46.714503414 earliest_time:0:00:46.759894609
If I remove the screamtx and screamrx elements, I don't see any of this behaviour. Using the Gst-Shark interlatency tool, I can see that the amount of latency introduced on buffers in the screamrx element is definitely not consistent, occasionally spiking up to 800ms over an average of 33ms. My current working assumption is that this is what's causing my decode issues.
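The interlatency measurement mentioned above comes from Gst-Shark's tracer; it can be reproduced by enabling the tracer when launching the receiver (this assumes Gst-Shark is installed and its tracers are visible to GStreamer):

GST_DEBUG="GST_TRACER:7" GST_TRACERS="interlatency" gst-launch-1.0 -e rtpbin name=rtpbin ... (receiver pipeline as above)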
I have taken your code and compiled it against both GStreamer 1.18.3 and 1.20, but I get similar results in both cases. I've also tried running it both in virtual machines and on bare metal. Do you have any idea what I'm doing wrong?

I'm using Ubuntu 21.10 and Rust 1.56.1 on the host system(s).