[uss_qualifier] Inspect all collected queries for the usage of https (NET0220) #188
Conversation
force-pushed from 8089da4 to f704892
monitoring/uss_qualifier/scenarios/astm/netrid/common/aggregate_checks.py (outdated thread, resolved)
if not found_cleartext_query:
    self.check(
        "All interactions happen over https",
        self._queries_by_participant.keys(),
When possible, we should attribute failed checks to the specific participant failing them -- multiple participants for a single check means any one (or multiple) of the specified participants may have failed the check. So, instead, we should iterate over participants and, for each participant, check whether all their queries use https. If yes, pass; if no, fail; if no queries available, don't perform check. (yes, in that case there will be multiple instances of the same check: one instance for each relevant participant)
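The per-participant approach suggested above could be sketched as follows. This is a minimal illustration only: the function name and the plain-dict input are assumptions, not the uss_qualifier API.

```python
# Hypothetical sketch: one check outcome per participant, as suggested in
# the review comment above. Not the actual aggregate_checks.py code.
from typing import Dict, List, Optional


def https_check_outcomes(
    queries_by_participant: Dict[str, List[str]]
) -> Dict[str, Optional[bool]]:
    """Return one check outcome per participant.

    True  -> all of the participant's queries use https (check passes)
    False -> at least one cleartext query (check fails)
    None  -> no queries recorded, so the check is not performed
    """
    outcomes: Dict[str, Optional[bool]] = {}
    for participant, urls in queries_by_participant.items():
        if not urls:
            outcomes[participant] = None  # nothing to evaluate: skip the check
        else:
            outcomes[participant] = all(u.startswith("https://") for u in urls)
    return outcomes
```

This yields multiple instances of the same check, one per relevant participant, so a failure is attributable to exactly one USS.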
Makes sense. This brings us back to the question of attributing queries to participants, and whether to do so by looking at the hostname or URL, or by tagging the requests where they are made: we can probably set the server_id field for all queries. Though, what should we do if we run into a query that is not associated with an SP? I guess we should fail hard so that we uncover these cases as scenarios are developed?
Yes, I think server_id should be the means we use to identify the participants. We would want to use all data available to us to make the check, but I think we can actually simply ignore queries where we don't have a server ID. For instance, if we added a test portion that interacted with a third entity other than a Service Provider or Display Provider and we happened not to populate server_id in our queries to them, I don't think that would justify failing hard.
That said, we may not be populating server_id in all the places that we should, so failing hard temporarily to make sure we've identified all those places seems fine.
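A minimal sketch of the attribution logic discussed here, assuming a simplified stand-in Query type rather than the real monitoring class:

```python
# Illustrative only: Query is a stand-in dataclass, not the actual
# uss_qualifier query type. Queries without a server_id are ignored
# rather than treated as a hard failure.
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Query:
    url: str
    server_id: Optional[str]  # may be unset, e.g. for third-party interactions


def group_by_participant(queries: List[Query]) -> Dict[str, List[Query]]:
    """Group queries by server_id, skipping untagged ones."""
    grouped: Dict[str, List[Query]] = {}
    for q in queries:
        if q.server_id is None:
            continue  # no participant attribution available: ignore
        grouped.setdefault(q.server_id, []).append(q)
    return grouped
```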
I guess we could add a dev flag somewhere that will have more stringent internal requirements?
Or is leaving the code as-is fine and we remove the check at a later stage?
As discussed with @BenjaminPelletier offline: we'll leave this "debug" flag hardcoded for now. Once things are stable we can simply remove it.
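The hardcoded debug exemption could look roughly like this. A minimal sketch under stated assumptions: the set name and participant IDs are invented for illustration, though the cleartext test mirrors the snippets in this thread.

```python
# Hypothetical sketch of the hardcoded "debug" exemption, not the actual
# aggregate_checks.py implementation: participants in a hardcoded debug
# set are exempt from the cleartext check so local setups keep working.
DEBUG_MODE_USSES = {"uss_local"}  # hardcoded for now; removable once stable


def should_flag_cleartext(participant_id: str, url: str) -> bool:
    """True when a cleartext query should count against the https check."""
    return url.startswith("http://") and participant_id not in DEBUG_MODE_USSES
```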
monitoring/uss_qualifier/scenarios/astm/netrid/common/aggregate_checks.py (outdated thread, resolved)
force-pushed from 437e27f to 717c3da
Modulo the
force-pushed from 717c3da to 7270fa9
force-pushed from 6dd81e9 to 8208cf4
monitoring/uss_qualifier/scenarios/astm/netrid/common/aggregate_checks.py (outdated thread, resolved)
monitoring/uss_qualifier/scenarios/astm/netrid/common/aggregate_checks.py (outdated thread, resolved)
if query.request.url.startswith("http://"):
    found_cleartext_query = True
    if participant_id not in self._debug_mode_usses:
        self.record_note(
It would be better to attach this information to the failed check itself as notes do not obviously attach to any particular check. record_failed has an optional parameter for relevant queries that we should use as much as possible -- it should be populated with the timestamps for this set of queries in this case.
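Collecting the timestamps to pass along could be sketched like this. Illustrative only: the function name and the (timestamp, url) tuple shape are assumptions, not record_failed's actual signature.

```python
# Hypothetical helper: gather the timestamps of cleartext queries so a
# failed check can reference exactly the queries that triggered it,
# instead of leaving that information in a free-floating note.
from typing import Iterable, List, Tuple


def cleartext_query_timestamps(queries: Iterable[Tuple[str, str]]) -> List[str]:
    """Return timestamps of all queries made over plain http."""
    return [ts for ts, url in queries if url.startswith("http://")]
```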
monitoring/uss_qualifier/scenarios/astm/netrid/common/aggregate_checks.py (resolved thread)
monitoring/uss_qualifier/scenarios/astm/netrid/v19/aggregate_checks.md (outdated thread, resolved)
force-pushed from ea12c80 to b12444d
Looks like the "strict dev mode" is working as intended: freshly merged tests cause this failure:
I'll tweak these tests ASAP
force-pushed from 9b39aff to a2c4757
PR ready for final review
Looks great, thanks! Just 2 copy/paste cleanup items, and one optional note
force-pushed from f71e8fe to 508a6f6
force-pushed from 508a6f6 to 89b8f8f
Deduped the test-case doc and rebased; we should be ready to merge this.
Checking for NET0220 by ensuring all queries to USSes rely on https.
In order to validate that queries are not sent in cleartext while not breaking local non-https setups, this:
- adds a local_debug flag to service providers, observers and dss'es
- exempts participants in local_debug mode from the cleartext check.
Open questions/notes: