✨ Propose a test runner for analyzer-lsp rules #157
Conversation
Force-pushed from 7d8170c to cf28a39. Signed-off-by: Pranav Gaikwad <[email protected]>
Some nits. I think my only major concern is around the UX of logs and finding the correct information.
> The runner will run each test file at once, setting up the provider by merging two things: provider settings passed at runtime via `--provider-settings`, and locations specified in the test file itself for each provider. Each test will run at once. This can be made concurrent in the future.
>
> Based on the different analysis params used, there will be different combinations of analyses. The overlapping analysis runs can be grouped together to make things faster. It might potentially affect the results if there are rules that somehow affect each other, but maybe there's profit in catching those things early on.
+1. I don't know how that would happen exactly, but it's probably something we want to be cautious about.
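(For illustration, here is a rough sketch of the merge described in the quoted text; the settings-file field names such as `initConfig` and `location`, and all paths, are assumptions rather than a confirmed format.)

```yaml
# Runtime settings passed via --provider-settings; location intentionally empty.
# Field names here are assumptions based on the quoted text.
- name: java
  initConfig:
    - location: ""
---
# Locations from the test file for the same provider:
providers:
  - name: java
    location: ./data/java-app
---
# Effective settings the runner would use for this test run (test file wins):
- name: java
  initConfig:
    - location: ./data/java-app
```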
This looks like a very promising start to me. Well detailed. It's hard for me to comment on some of the more technical pieces, like how a specific test for a rule should look, as I haven't spent a ton of time looking at rules. My biggest concerns are around UX and protecting ourselves from changes we may need to make in the future.
> Rule authors should should be able to "declare" a test for a rule in a YAML file:
>
> ```yaml
> providers:
> ```
I've been trying to get up to speed on all the different fields and their location in the test files. Do you think we need to version these? I'm thinking like GVK `konveyor.io/v1alpha1/ruletest`, in case we need to change the format in the future.
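(For illustration only, a GVK-style header on a test file could look like the sketch below; the `apiVersion`/`kind` field names and values are assumptions, not something the proposal defines.)

```yaml
# Hypothetical GVK-style versioned test file header; all fields are assumptions.
apiVersion: konveyor.io/v1alpha1
kind: RuleTest
providers:
  - name: java
    location: ./data/java-app
tests: []
```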
I am not sure we are at that point yet. Do you feel strongly that we should version these?
We need to consider a couple of things:

- The current version is v1 (folks could already be building rules for it; we can't break them implicitly).
- If 1 is true, then versionless is also v1.
- If we find the need to change the API in a backwards-incompatible way, then we should add a v2 version and make it so they can live side by side.

While I kind of wish (we/I) had versioned the thing from the start, we are in this situation now and have to deal with it. I don't think we can version the tests and not the rulesets that exist today.

IDK, thoughts?
> While I kind of wish (we/I) had versioned the thing from the start, we are in this situation now and have to deal with it. I don't think we can version the tests and not the rulesets that exist today.
>
> IDK, thoughts?

I agree that if we version one, then both rulesets and tests should be versioned. I also agree that rulesets w/o a version still carry an implicit version (although I don't know that it's necessarily `v1`). That indicates to me that this version conversation is outside the scope of this PR, but one that we should prioritize for the v0.4.0 release. Namely:

- What version are we at? `v1alpha1`, `v1beta1`, or `v1`? Any unversioned ruleset would implicitly carry this version.
- Add that version info where appropriate.
@shawn-hurley introducing `v1` right now doesn't sound too bad, especially if we could assume anything without a version is `v1`.
I agree with @djzager, let's have a discussion on versioning outside the scope of this PR, because I think it will affect quite a bit.
> The providers can also be defined at a ruleset level in case users just want to re-use them for all tests in a ruleset. A new field `testConfig` can be introduced in the `ruleset.yaml` file:
>
> ```yaml
> name: my-ruleset
> ```
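(The quoted snippet is truncated; assuming, per the proposal text, that `testConfig` carries the shared provider configs, a fuller sketch might look like this.)

```yaml
# Sketch of a ruleset-level testConfig; the nested field names mirror the
# per-test provider fields quoted elsewhere in this review and are assumptions.
name: my-ruleset
testConfig:
  providers:
    - name: java
      location: ./data/java-app
```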
Changes like this make me wonder if we need to be versioning these.
Might be a little late, BUT I think it is small enough that we can do something smart in the absence of a version.
If we did want to version this (and I'm not saying we need to, based on my comment on the other version topic), I think having a new golden file for tests like `ruleset-tests.yaml` would allow us to do that, with that file carrying a version.

I am personally fine either way.
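(A sketch of that suggestion; the file name comes from the comment above, while the version value and the remaining fields are assumptions.)

```yaml
# ruleset-tests.yaml — hypothetical golden file that carries its own version.
version: v1alpha1            # assumption; the actual scheme is TBD per the discussion above
providers:
  - name: java
    location: ./data/java-app
```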
Force-pushed from b1d9970 to cb03969. Signed-off-by: Pranav Gaikwad <[email protected]>
> #### Writing a test for a rule
>
> Rule authors should should be able to "declare" a test for a rule in a YAML file:
nit: "should should"
> Finally, a summary line will be printed showing overall statistics.
>
> Additionally, a YAML output file will be generated which will later be used by the CI:
Wondering if we should also place the entire `violations.yaml` for the test run somewhere.
@pranavgaikwad if this isn't what you meant in line 180 then I agree, we need to add this and/or make it more clear
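(For illustration, the CI-facing output file could be shaped roughly like the sketch below; every field name here is invented, since the quoted text does not pin the format down.)

```yaml
# Hypothetical shape of the YAML output consumed by CI; all fields invented.
summary:
  total: 42
  passed: 40
  failed: 2
results:
  - ruleID: my-rule-001
    passed: false
    # per the comment above, the full violations.yaml for the run
    # could be referenced (or embedded) here
    violationsFile: ./output/my-rule-001/violations.yaml
```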
Approving, assuming we agree on test case filtering by name and that we can get agreement on the versioning that @djzager has brought up.
> * providers: List of configs needed to set up providers. The runner will take a settings file at runtime and mutate it with values defined here for every run. This can also be defined at the ruleset level. Values here will take precedence:
>   * name: Name of the provider to match in _provider_settings.json_
>   * location: Path to sample data for this provider
> * tests: List of tests, each testing a distinct rule
Each test case should have a name so that I can filter on that name, say only run this one test case.
+1
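(Putting the quoted fields together with the naming request above, a test file might look roughly like this; the per-test `name` field and all values are assumptions.)

```yaml
# Sketch combining the quoted fields; the per-test "name" (for filtering,
# as requested above) and all values are assumptions.
providers:
  - name: java
    location: ./data/java-app
tests:
  - ruleID: my-rule-001
    name: reports-deprecated-api   # hypothetical; would allow running just this case
```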
> _If and / or are used for incident verification, the result of the top level condition will be evaluated based on results of all children accordingly_
>
> > Having a schema for the tests would be nice. It will allow users to auto-generate the boilerplate.
+100 to do this from the start
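(As a sketch of what publishing such a schema could look like, here is a minimal JSON Schema in YAML form; the `$id` URL and the exact constraints are assumptions grounded only in the fields quoted above.)

```yaml
# Minimal JSON Schema (YAML form) for the test file format; $id and the
# exact constraints are assumptions for illustration.
$schema: https://json-schema.org/draft/2020-12/schema
$id: https://example.com/schemas/ruletest.json   # hypothetical URL
type: object
required: [tests]
properties:
  providers:
    type: array
    items:
      type: object
      required: [name, location]
      properties:
        name: { type: string }
        location: { type: string }
  tests:
    type: array
    items:
      type: object
      required: [ruleID]
      properties:
        ruleID: { type: string }
```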
Fixes #156