The bitrate stat comes from the original codebase (UDT), where it was created for the conditions of file transfer. File transfer, just as with TCP, uses a pacer that controls the sending speed so as not to exceed the current sending cap and avoid losses, which means that before sending the next packet the sender sleeps for some predefined time to keep the speed as configured. The trick with measuring the bandwidth relied on a rule that once per 16 packets a packet is sent after the previous one with no delay at all. The time distance between these two consecutive packets (the 15th and 16th in a row) was used on the receiver side to estimate the highest potential speed at which packets can be sent.
This also contained bugs and design flaws, and I have repaired some of them - e.g. rejecting the measurement when packets are received out of order, or when either packet of the pair was retransmitted or dropped. That is, make sure that both packets were received originally and without any delay; otherwise reject the probe.
But one problem remained (and I also don't think this measurement was reliable when using the message mode in UDT): if you want to send a pair of two consecutive packets with no delay between them, then you must have both of them already in the sender buffer at the moment you want to send the first one. Otherwise the second packet of the pair will still be sent after a delay - not intended, of course, but still resulting from a delay in scheduling the packets into the sender buffer.

In order to fix this, you'd have to intentionally hold back the 15th packet so that it is sent only once the 16th packet is in the buffer, and this way send the 15th and 16th back to back. But this is not always possible, and sometimes it may lead to dangerous delaying of the 15th packet (which might happen to be the last packet of an I-frame). There must then also be some mechanism to mark that a particular pair is not to be used for bandwidth measurement. This way it could be possible to apply some maximum delay while waiting for the 16th packet to be scheduled, and if that deadline is not met, the 15th packet would be marked as unsuitable for bandwidth measurement. Currently there is no way to pass this information to the receiver - the receiver always treats a correctly received pair of 15th and 16th packets as a bandwidth probe, even if the sender hasn't really sent the 16th one at the maximum possible speed.
Firstly, I'm looking at this stat on the receiver-side. The documentation is contradictory as to whether this is permissible:
Here https://github.com/Haivision/srt/blob/master/docs/API/statistics.md#mbpsbandwidth it states it's sender-only.
But further down in the chart it has a tick next to both receiver and sender for this stat.
My understanding is that the calculation is done on the receiver, and then sent to the sender, so on this basis I am going to assume that it must be valid on both sender and receiver.
Secondly, I was thinking of experimenting with using this to automatically adjust the encoded bitrate of my stream, but obviously that idea falls apart if the stat becomes unreliable depending on how much data is being transmitted.
Anecdotally, it seems that if my video bitrate drops below 1 Mbps, the estimated bandwidth drops dramatically, even to below the actual rate of the data being transmitted. But if the bitrate is higher, then the estimated bandwidth seems reasonable and, as expected in the tests I did, much higher than the bitrate being transmitted.
Here's what I get between London and Stockholm:
- Bitrate 2.16 Mbps: Bandwidth 13-20 Mbps
- Bitrate 4.5 Mbps: Bandwidth 20-30 Mbps
- Bitrate 800 kbps: Bandwidth 0.5-1.5 Mbps
- Static video image, bitrate 150 kbps: Bandwidth 72 kbps

Bitrates above also include an audio stream; this is the total SRT bitrate.