P2P rate limit #1596
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files:

@@           Coverage Diff           @@
##             main    #1596   +/-   ##
=======================================
  Coverage   96.92%   96.92%
=======================================
  Files         157      157
  Lines       10571    10582    +11
=======================================
+ Hits        10246    10257    +11
  Misses        325      325
Thank you very much for your contribution 😃
I really like the idea of a per-connection rate limit.
```python
task = asyncio.create_task(
    conn.request(
        payload=apci.DeviceDescriptorRead(descriptor=0),
        expected=apci.DeviceDescriptorResponse,
    )
)
await asyncio.sleep(0)
send_responses(1)

if rate_limit > 0:
    # this should raise a timeout, since the data was not yet sent
    with pytest.raises(ManagementConnectionTimeout):
        await task
```
I haven't used management for a while, so I'm not really following this.
What I see here:
L270 creates a task to request a response from a device. It isn't sent because we are still within the rate limit of the previous task - so it would theoretically wait 1 second before sending.
L276 send_responses(1) sends responses for a request it hasn't even received yet. Since we are not yet waiting for an ACK, it is discarded (process()). Same for the DeviceDescriptorResponse, since we are not yet waiting for that sequence_number. (I guess Management.process() sends an ACK for that.)
L281 awaits the response we previously discarded - we block for a second waiting for nothing.
...
If this is correct, I guess we should at least time_travel in the pytest.raises block. And probably don't send the response beforehand, but look for the sent request - before and after the time_travel.
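Very rough sketch of what I mean (untested; sent_frames, rate_limit_delay and response_timeout are placeholders for whatever the test module already provides to record outgoing frames and drive the clock):

```python
# Rough sketch only - "sent_frames" stands in for whatever mock the test
# module uses to capture outgoing frames; the delays are illustrative.
task = asyncio.create_task(
    conn.request(
        payload=apci.DeviceDescriptorRead(descriptor=0),
        expected=apci.DeviceDescriptorResponse,
    )
)
await asyncio.sleep(0)
# Still within the rate-limit window of the previous request: nothing sent yet.
assert not sent_frames

await time_travel(rate_limit_delay)
# The rate-limit window has passed, so the request should now be on the wire.
assert sent_frames

with pytest.raises(ManagementConnectionTimeout):
    # No response is ever sent back, so the request runs into its timeout.
    await time_travel(response_timeout)
    await task
```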
That's an alternative. The idea of the test was to do exactly the same sequence with and without the rate limit option. The new version checks for the sent messages.
Could you share some examples of devices that would need such a rate limit? I would have guessed that when they ACK + respond, they are ready to receive the next request 🙃 Also, from your experience, what are the typically needed rate limit values for those?
We are using these P2P connections to flash device parameters for e.g. the basalte Sentido. When flashing with specific KNX/IP bridges, we noticed that sometimes follow-up packets are sent within a few ms of the ACK and are never handled by the device. ETS and several other bridges seem to automatically add some time between a new frame and the previous ACK. If this is not done, nearly 100% of the frames time out once, making everything very slow. Adding this feature makes it much more reliable and faster. A typical value is 20 frames per second, which results in performance similar to ETS. But this is of course device-specific, so I'm not sure how general this would be. To maintain backwards compatibility, I set the rate limit by default to 0.
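For reference, the per-connection limit boils down to enforcing a minimum gap between outgoing frames; 20 frames per second corresponds to roughly 50 ms between a frame and the previous one. A minimal sketch of the idea (illustrative names only, not the actual xknx implementation):

```python
import asyncio
import time


class RateLimitedConnection:
    """Illustrative sketch: enforce a minimum gap between outgoing frames.

    rate_limit is given in frames per second; 0 disables the limit.
    """

    def __init__(self, rate_limit: int = 20) -> None:
        self._min_gap = 1 / rate_limit if rate_limit else 0.0
        self._last_send = float("-inf")

    async def send(self, frame) -> None:
        if self._min_gap:
            wait = self._min_gap - (time.monotonic() - self._last_send)
            if wait > 0:
                # 20 frames/s -> wait until at least 50 ms after the last frame
                await asyncio.sleep(wait)
        self._last_send = time.monotonic()
        await self._transmit(frame)

    async def _transmit(self, frame) -> None:
        ...  # placeholder for the actual network send
```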
Is this related to routing vs. tunnelling or a manufacturer / implementation specific difference? In routing we would already have a 20 ms timeout in place (line 72 in 0df0d34), whereas in tunnelling we solely rely on L_Data_Con for flow control.
To my knowledge, you are the only person using this for more than sending a reset, so feel free to apply good defaults as you feel. It's not really breaking anything anyway (except maybe some unittests 😬)
The tests I performed were always with tunnelling. Okay, I'll set it to 20 frames per second. All tests are still working, so that's good 😄
Thank you very much! 👍
Added a rate limit option to P2PConnections.
This is not related to the global rate_limit parameter passed when creating the xknx object, which only affects data sent by xknx device objects on the bus. Since different devices might need different limits, I added this parameter on a per-connection level instead of using the global xknx.rate_limit value.
Also fixes a typo in one of the management procedures: nm_invididual_address_write was renamed to nm_individual_address_write. This is a breaking change.
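A caller-side sketch of what this change could look like - the connect() call, the rate_limit keyword placement, and the procedure arguments are assumptions based on the description above, not copied from the diff:

```python
# Sketch only: signatures are assumed, see the PR diff for the real API.
from xknx import XKNX
from xknx.management import procedures
from xknx.telegram import IndividualAddress


async def example(xknx: XKNX) -> None:
    # Per-connection rate limit, independent of the global xknx.rate_limit
    # (20 frames per second is the default discussed above).
    conn = await xknx.management.connect(
        address=IndividualAddress("1.1.5"),
        rate_limit=20,  # assumed keyword placement
    )
    await conn.disconnect()

    # Renamed procedure (was nm_invididual_address_write):
    await procedures.nm_individual_address_write(xknx, IndividualAddress("1.1.6"))
```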