
✨ Propose a test runner for analyzer-lsp rules #157

Merged · 2 commits · Feb 26, 2024

Conversation

pranavgaikwad
Contributor

@pranavgaikwad commented Jan 23, 2024

Fixes #156

Contributor

@shawn-hurley left a comment

Some nits; I think my only major concern is around the UX of logs and finding the correct information.


The runner will run test files one at a time, setting up the providers by merging two things: the provider settings passed at runtime via `--provider-settings` and the locations specified in the test file itself for each provider. Tests will also run one at a time; runs can be made concurrent in the future.

Depending on the analysis params used, there will be different combinations of analyses. Overlapping analysis runs can be grouped together to make things faster. This might affect the results if there are rules that somehow influence each other, but there may be value in catching those cases early on.
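As an illustration of the settings merge described above, a hedged sketch — the file layout and every field other than `name` and `location` are assumptions, not taken from the proposal:

```yaml
# Runtime settings passed via --provider-settings (shown as YAML for readability):
#   - name: java
#     binaryPath: /usr/local/bin/jdtls   # hypothetical field
#
# Test file fragment that overrides the location for the java provider:
providers:
  - name: java
    location: ./data/java-sample-app     # sample data for this test file
# Effective settings for the run: the runtime config for "java",
# with its location replaced by the value from the test file.
```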
Contributor

+1, I don't know how that would happen exactly, but it is probably something we want to be cautious about.

Member

@djzager left a comment

This looks like a very promising start to me. Well detailed. It's hard for me to comment on some of the more technical pieces, like how a specific test for a rule should look, as I haven't spent a ton of time looking at rules. My biggest concerns are around UX and protecting ourselves from changes we may need to make in the future.

Rule authors should be able to "declare" a test for a rule in a YAML file:

```yaml
providers:
```
Member

I've been trying to get up to speed on all the different fields and their locations in the test files. Do you think we need to version these? I'm thinking something like a GVK, konveyor.io/v1alpha1/ruletest, in case we need to change the format in the future.
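If the GVK-style versioning suggested here were adopted, a test file header might look like the following sketch — purely hypothetical; no such fields exist in the proposal:

```yaml
apiVersion: konveyor.io/v1alpha1   # hypothetical group/version
kind: RuleTest                     # hypothetical kind
providers:
  - name: java
    location: ./data/java-sample-app
```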

Contributor Author

I am not sure we are at that point yet. Do you feel strongly that we should version these?

Contributor

We need to consider a couple of things:

  1. The current version is effectively v1 (folks could already be building rules for it; we can't break them implicitly).
  2. If 1 is true, then versionless is also v1.
  3. If we find the need to change the API in a backwards-incompatible way, we should add a v2 and make it so the two can live side by side.

While I kind of wish we had versioned this from the start, we are in this situation now and have to deal with it. I don't think we can have a version in the tests and not in the rulesets that exist today.

IDK thoughts?

Member

> While I kind of wish we had versioned this from the start, we are in this situation now and have to deal with it. I don't think we can have a version in the tests and not in the rulesets that exist today.
>
> IDK thoughts?

I agree that, if we version one, we should version both rulesets and tests. I also agree that rulesets w/o a version still carry an implicit version (although I don't know that it's necessarily v1). That indicates to me that this version conversation is outside the scope of this PR, but one we should prioritize for the v0.4.0 release. Namely:

  1. What version are we at? v1alpha1, v1beta1, or v1. Any unversioned ruleset would implicitly carry this version.
  2. Add that version info where appropriate.

Contributor Author

@shawn-hurley introducing v1 right now doesn't sound too bad, especially if we could assume anything without a version is v1.

Contributor

I agree with @djzager, let's have a discussion on versioning outside the scope of this PR, because I think it will affect quite a bit.

The providers can also be defined at the ruleset level in case users just want to re-use them for all tests in a ruleset. A new field `testConfig` can be introduced in the `ruleset.yaml` file:

```yaml
name: my-ruleset
```
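The quoted snippet is truncated; a hedged sketch of how the proposed `testConfig` field might sit in `ruleset.yaml` — only `testConfig`, `name`, and `location` come from the proposal, the rest is assumed:

```yaml
name: my-ruleset
testConfig:                            # proposed field for ruleset-level test settings
  providers:
    - name: java                       # matched against provider_settings at runtime
      location: ./data/java-sample-app # sample data shared by all tests in the ruleset
```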
Member

Changes like this make me wonder if we need to be versioning these.

Contributor

might be a little late, BUT I think it is small enough that we can do something smart in the absence of a version

Contributor

if we did want to version this (and I'm not saying we need to, based on my comment on the other version topic), I think having a new golden file for tests, like ruleset-tests.yaml, would allow us to do that, with that file carrying a version.

I am personally fine either way.
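If the separate golden file suggested above were introduced, it could carry the version itself; a purely hypothetical sketch — none of these fields exist in the proposal:

```yaml
# ruleset-tests.yaml (hypothetical golden file)
version: v1               # the version discussed in this thread
rulesetName: my-ruleset   # hypothetical back-reference to ruleset.yaml
tests:
  - rule: my-rule-001     # hypothetical per-rule test entry
```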

Signed-off-by: Pranav Gaikwad <[email protected]>

#### Writing a test for a rule

Rule authors should should be able to "declare" a test for a rule in a YAML file:
Contributor

nit: "should should"


Finally, a summary line will be printed showing overall statistics.

Additionally, a YAML output file will be generated, which will later be used by the CI:
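The excerpt cuts off before showing the output format; a hypothetical sketch of what such a results file could contain (every field name here is assumed, not from the proposal):

```yaml
summary:                 # overall statistics, mirroring the printed summary line
  total: 10
  passed: 9
  failed: 1
results:
  - rule: my-rule-001    # hypothetical per-rule outcome for CI to consume
    passed: false
    reason: "expected 2 incidents, got 1"
```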
Contributor

Wondering if we should also place the entire violations.yaml for the test run somewhere

Contributor

@pranavgaikwad if this isn't what you meant on line 180, then I agree; we need to add this and/or make it clearer.

Contributor

@shawn-hurley left a comment

Approving, assuming we agree on filtering test cases by name and that we can reach agreement on the versioning @djzager has brought up.


* providers: List of configs needed to set up providers. The runner will take a settings file at runtime and mutate it with the values defined here for every run. This can also be defined at the ruleset level; values here will take precedence:
  * name: Name of the provider to match in _provider_settings.json_
  * location: Path to sample data for this provider
* tests: List of tests, each testing a distinct rule
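Putting the fields from the list above together, a sketch of a test file — only `providers`, `name`, `location`, and `tests` come from the proposal; the nested test fields are assumptions:

```yaml
providers:
  - name: java                     # matched against provider_settings.json
    location: ./data/java-sample-app
tests:
  - ruleID: my-rule-001            # assumed field; each test targets a distinct rule
    testCases:
      - name: test-basic-match     # assumed; named cases would allow filtering by name
```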
Contributor

Each test case should have a name so that I can filter on that name, say only run this one test case.

Contributor Author

+1


_If and / or are used for incident verification, the result of the top level condition will be evaluated based on results of all children accordingly_
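A sketch of how and/or incident verification might be expressed — the structure and matcher names are assumptions; only the and/or semantics come from the note above:

```yaml
hasIncidents:            # hypothetical verification block
  and:                   # top-level condition passes only if all children pass
    - fileURI: .*pom\.xml
      exactly: 1         # hypothetical matcher: exactly one incident in this file
    - fileURI: .*\.java
      atLeast: 2         # hypothetical matcher: two or more incidents
```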

> Having a schema for the tests would be nice. It will allow users to auto-generate the boilerplate.
Contributor

+100 to do this from the start


@pranavgaikwad merged commit 6533a71 into konveyor:master Feb 26, 2024
1 check passed

Successfully merging this pull request may close these issues.

[RFE] Allow rule authors to test / verify rules easily
4 participants