
feat(validation)!: validation testing framework #667

Open · wants to merge 27 commits into base: main
Conversation

@meganwolf0 (Collaborator) commented Sep 16, 2024

Description

Initial work to specify Lula-native validation testing

  • Renames the previously (imo poorly) named inject function to transform, with some refactoring to pull out functionality that supports more granular tests
  • Adds a RunTests method to LulaValidation, plus an updated schema to support it
  • Updates the docs -> hopefully the syntax/intended usage isn't too confusing. Added some sample tests for existing validations we have in compliance-artifacts (https://github.com/defenseunicorns/compliance-artifacts/pull/151). Note that the one piece of functionality not currently supported is deleting a specific list entry; I was thinking that could be a follow-on, since this PR is already quite lengthy
  • Modifies dev validate to handle a --run-tests flag and, because this was getting annoying to debug, adds a --print-test-resources flag that prints the transformed resources.json to the validation directory. This is basically just a print-to-screen implementation right now; further discussion is probably needed on dumping a test-result object, or however we want to handle that
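To make the "transform" idea above concrete, here is a minimal sketch of applying a test change to a resources map before re-running a validation. All names (applyChange, the path syntax) are hypothetical illustrations, not Lula's actual implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// applyChange sets value at a dot-separated path inside a resources map,
// creating intermediate maps as needed. This is an illustrative stand-in
// for the kind of per-test transformation the PR describes.
func applyChange(resources map[string]any, path string, value any) {
	keys := strings.Split(path, ".")
	cur := resources
	for _, k := range keys[:len(keys)-1] {
		next, ok := cur[k].(map[string]any)
		if !ok {
			next = map[string]any{}
			cur[k] = next
		}
		cur = next
	}
	cur[keys[len(keys)-1]] = value
}

func main() {
	resources := map[string]any{
		"pod": map[string]any{"spec": map[string]any{"hostNetwork": false}},
	}
	// Simulate a test change: flip a field, then the validation would be re-run.
	applyChange(resources, "pod.spec.hostNetwork", true)
	spec := resources["pod"].(map[string]any)["spec"].(map[string]any)
	fmt.Println(spec["hostNetwork"]) // prints true
}
```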

Follow-ups to this

  • probably a refactor of the dev validate cmd to support the e2e testing structure we have
  • addition of --run-tests to lula validate -> will need some exploration/decision on how the test report gets dumped
  • support for deleting a specified list item

Related Issue

Fixes #460

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Other (security config, docs update, etc)

Checklist before merging

@meganwolf0 meganwolf0 marked this pull request as ready for review November 4, 2024 18:47
@meganwolf0 meganwolf0 requested a review from a team as a code owner November 4, 2024 18:47
@brandtkeller (Member) left a comment


Minor comments as I look at the general structure; otherwise no major concerns.

On docs/reference/testing.md (outdated, resolved):

- `name`: The name of the test
- `changes`: An array of changes or transformations to be applied to the resources used in the test validation
- `expected-result`: The expected result of the test - satisfied or not-satisfied
A Member commented, quoting the docs:

> The expected result of the test - satisfied or not-satisfied

This is an interesting point as we look to clear a path for validations used outside of OSCAL. We map the result of a policy (pass/fail) to align to findings/observations using the OSCAL terminology (satisfied/not-satisfied).

We might look at what terminology OPA/Kyverno use and/or decide on what Lula's will be.

@meganwolf0 (Collaborator, Author) commented Nov 6, 2024:

That's fair - I initially was thinking pass/fail, but that sort of conflates with the overall test result (which is also pass/fail), so I walked back the terminology. It could be something more like a policy/provider-result of true/false? Or accept/reject?

A Member replied:

Walk me through how the expected result and a pass/fail test result might conflate?

It seems like pass/fail at the test-result layer would align well with the expected result also being pass/fail.

@meganwolf0 meganwolf0 marked this pull request as draft November 7, 2024 14:44
@meganwolf0 meganwolf0 marked this pull request as ready for review November 12, 2024 17:46
@meganwolf0 (Collaborator, Author) commented:
Made a couple of updates to incorporate the templating-for-dev-commands work:

  • Added e2e tests for dev validate --run-tests -> changed the logic to fail the command when --run-tests is passed and a test fails
  • Changed the ctx from context.Background() to cmd.Context() - that's the context we use in lula validate, so this is just being consistent

@brandtkeller (Member) left a comment


Nearly finished with review - no major modifications noticed as of yet.

Development

Successfully merging this pull request may close the issue "Validation Review Fields/Process".

2 participants