
Behavior of the status and result in testRunEnd #10

Open · gary083 opened this issue Jul 25, 2023 · 3 comments

gary083 commented Jul 25, 2023

I find it strange that we receive a "testRunEnd": {"status": "COMPLETE", "result": "PASS"} even when a FAIL diagnosis is emitted in a TestStep or when the TestRun.add_error() method is used.

For instance, in the provided example at this link: https://github.com/opencomputeproject/ocp-diag-core-python/blob/4671c0f4f1591311851c7437162dc804e3848fdb/examples/sample_output.txt#L187C1-L187C24

We still obtain the PASS result even though we use add_error().

I believe we can track the status/result ourselves and raise a TestRunError if any error or failed diagnosis occurs, as sketched below.
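
For example, something like this (a minimal sketch following the patterns in the repo's examples; the had_error flag is my own bookkeeping, and the exact signatures and enum values may differ in the version you're using):

```python
import ocptv.output as tv
from ocptv.output import TestResult, TestStatus

run = tv.TestRun(name="example test", version="1.0")
dut = tv.Dut(id="dut0")

with run.scope(dut=dut):
    had_error = False  # manually track errors / failed diagnoses

    run.add_error(symptom="example-symptom", message="something went wrong")
    had_error = True

    if had_error:
        # Raising TestRunError inside the scope controls the status/result
        # emitted in testRunEnd, instead of the default COMPLETE/PASS.
        raise tv.TestRunError(status=TestStatus.COMPLETE, result=TestResult.FAIL)
```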

Is this behavior intentional, or is it a bug? I would also like to know the best practice for handling such situations.

Thank you!

mimir-d (Collaborator) commented Sep 2, 2023

Sorry for the delay; I just saw this now. I'll add it to my to-do list to take a look.

mimir-d self-assigned this Sep 2, 2023
mimir-d (Collaborator) commented Nov 5, 2024

Looks like we may need to add some text to the README to explain this situation. The Python API will not decide whether a test step/run succeeded or failed, because there is no clear definition it could use: measurements may fail validation, errors may be emitted, and yet a diag developer could still reasonably consider the step/run to have succeeded.

To note, there was a prior feature request that failed measurement validation should trigger an automatic test step failure (the TODO file doesn't seem to have made it into the repo), and I left extensibility options in the current architecture of the codebase specifically for this kind of request.

However, a call to action here: if anyone is interested in implementing such functionality (which would be opt-in, doable with the lib setup API), feel free to publish a PR.
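
Until such an opt-in exists, a user-side wrapper can approximate the policy. This is a purely hypothetical sketch (the TrackedRun class below is not part of ocptv, and the signatures it delegates to are assumptions), showing the kind of behavior an opt-in hook could automate:

```python
import ocptv.output as tv
from ocptv.output import TestResult, TestStatus

class TrackedRun:
    """Hypothetical user-side wrapper (not part of ocptv): records whether
    add_error() was ever called so the run can be ended as FAIL."""

    def __init__(self, run: tv.TestRun):
        self._run = run
        self.errored = False

    def add_error(self, **kwargs):
        self.errored = True
        self._run.add_error(**kwargs)

    def __getattr__(self, name):
        # delegate everything else to the wrapped TestRun
        return getattr(self._run, name)

run = tv.TestRun(name="example test", version="1.0")
tracked = TrackedRun(run)
with run.scope(dut=tv.Dut(id="dut0")):
    tracked.add_error(symptom="example-symptom", message="oops")
    if tracked.errored:
        raise tv.TestRunError(status=TestStatus.COMPLETE, result=TestResult.FAIL)
```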

mimir-d (Collaborator) commented Nov 5, 2024

I'm going to keep this issue open until we have some text in the README explaining the current design decision.
