Add CLI unit tests for GenAI-PA CLI #479
Conversation
def test_arguments_output(self, arg, expected_output, capsys):
    combined_args = ["-m", "test_model"] + arg
    # Redirect stdout and stderr to capture output
    original_stdout, original_stderr = sys.stdout, sys.stderr
    sys.stdout, sys.stderr = io.StringIO(), io.StringIO()

    try:
        # Call the run function
        run(combined_args)
    finally:
        # Restore stdout and stderr
        captured_stdout, captured_stderr = (
            sys.stdout.getvalue(),
            sys.stderr.getvalue(),
        )
        sys.stdout, sys.stderr = original_stdout, original_stderr
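A side note on this snippet: the test already receives pytest's capsys fixture, which captures stdout and stderr on its own, so the manual redirection could likely be dropped. A minimal sketch, assuming run() raises SystemExit on invalid arguments the way argparse normally does:

def test_arguments_output(self, arg, expected_output, capsys):
    combined_args = ["-m", "test_model"] + arg
    try:
        run(combined_args)
    except SystemExit:
        pass  # argparse calls sys.exit() on --help or invalid arguments
    # capsys captures stdout/stderr without any manual redirection
    captured = capsys.readouterr()
    assert expected_output in captured.out + captured.err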
Nice use of parameterize!
I think this handling is fine for getting a "speed-of-light" solution out, but longer term, it may be more maintainable and easier to understand to use the parser directly for this kind of test (assuming I'm interpreting the test correctly).
I haven't tested this would actually work, but I'm thinking something like this:
from genai_pa import parser
combined_args = ["-m", "test_model"] + arg
args = parser.parse_args(combined_args)
# ... operate checks/assertions on the args object here
That would avoid redirecting stdout and parsing the captured output entirely.
Note, all the run() function is doing is calling:
args = parser.parse_args(argv)
args.func()
so if you're just looking for input/parser validation and don't care about running the PA/subcommand for this test, I think this would work.
If we are confident that the parser is correct, that should work. Just let the parser module create the ArgumentParser object for you. I think you just need to pass in a list of strings representing the sys args.
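For example (a tiny sketch, reusing the genai_pa parser import from the snippet above):

from genai_pa import parser

# Each CLI token is its own string in the list, exactly like sys.argv[1:]
args = parser.parse_args(["-m", "test_model", "--url", "test_url"])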
To summarize, I'd recommend operating on the parser's output directly as Python objects rather than parsing stdout, if it makes sense for this test -- but I wouldn't block PR approval if you don't think it's worth the time to try right now.
Great idea! Let me give that a try.
Updated! It looks much cleaner, thanks for the feedback.
args = parser.parse_args(combined_args)

# Check that the attributes are set correctly
for key, value in expected_attributes.items():
    assert getattr(args, key) == value
this is exactly what I was thinking of - nice!!
Note: I think some of the expected attributes may change if Elias uses dest, like parser.add_argument("--batch-size", ..., dest="b"), in a later PR, so these parameters may have to change to {"b": 2}, etc.
I tried to guard against that with metavar, but it's a good callout.
AFAIK, metavar only modifies the --help output, but won't change the attributes:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("-b", "--batch-size", default=1, dest="b", metavar="batch_size")
args = parser.parse_args()
for attr in ["b", "batch_size"]:
print(f'{attr=}: {hasattr(args, attr)=}')
$ python test.py
attr='b': hasattr(args, attr)=True
attr='batch_size': hasattr(args, attr)=False
Thanks for calling this out and expounding on how metavar works, Ryan! We'll have these tests now, so we should make sure they are still passing after any implementation changes. If not, we can update the tests to match changes in how we want the CLI designed.
I think it's moot since it looks like the current code isn't using metavar for --batch-size and is instead just mapping it later on before calling subprocess. Just mentioning it in case you get unexpected test failures like this later after a dest change.

dest="u" is being used for --url, but it looks like David accounted for that already:

(["--url", "test_url"], {"u": "test_url"}),
Built off of Ryan's test files, here are the first few unit tests for the GenAI-PA CLI. These cover all of the current CLI args.

To run them, call pip install . and then pytest tests/ in perf_analyzer/genai_pa.

This is the draft for CLI unit testing. If we want to write unit tests that actually check output, I'll need to mock the server (or copy over MA's mock) so that we can test that the subprocess call returns the expected output. Given the scope of the CLI at present, I think that may be overkill: we should test that the subprocess call is correct, then rely on end-to-end testing to show that it works as expected against a running Triton server.
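For the "test that the subprocess call is correct" approach, one option would be patching subprocess with unittest.mock so no real perf_analyzer binary or Triton server is needed. A minimal sketch, assuming the CLI builds its command and calls subprocess.run inside a genai_pa.main module (the module path and the expected command contents are illustrative assumptions, not the actual code):

from unittest.mock import patch

from genai_pa.main import run  # assumed entry point, as exercised in the tests above


def test_subprocess_command_is_correct():
    # Patch subprocess.run where the CLI module resolves it, so nothing
    # is actually executed during the test.
    with patch("genai_pa.main.subprocess.run") as mock_run:
        run(["-m", "test_model"])

    mock_run.assert_called_once()
    cmd = mock_run.call_args.args[0]
    # The exact command layout is an assumption; adjust to match the real CLI.
    assert "test_model" in cmd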