Internal: Determine/document a process to test use cases that can't run in automated test environment #2779
Comments
For now, I will start by testing the use cases that are not run in GHA on seneca. Some of these use cases have input data that is provided with the rest of the data for their model_applications category. Others have data in an additional tar file on the web.
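For the latter group, the extra tar file has to be pulled down separately before the use case can run. A minimal sketch of that step (the URL and destination directory are hypothetical placeholders for whatever the use case documentation specifies):

```bash
# Download and unpack the additional input data for a use case.
# The URL and destination directory are hypothetical placeholders.
wget https://example.com/path/to/extra_use_case_data.tgz
tar -xzf extra_use_case_data.tgz -C /path/to/METplus_input_data
```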
I ran each of these using today's MET nightly build directory on seneca. I automate this by setting an environment variable in my .bashrc:
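The snippet referenced above was not preserved in this copy of the issue. A plausible sketch of the idea, assuming the variable simply points at the day's nightly build directory (the variable name MET_BUILD_BASE and all paths are illustrative, not the author's exact setup):

```bash
# In ~/.bashrc: point at today's MET nightly build directory on seneca.
# The variable name and path are illustrative placeholders.
export MET_BUILD_BASE=/path/to/met/nightly_builds/$(date +%Y%m%d)

# Each use case can then be run manually with run_metplus.py, passing the
# use case config plus a user config that sets MET_INSTALL_DIR based on the
# variable above, e.g.:
run_metplus.py \
    /path/to/METplus/parm/use_cases/model_applications/<category>/<use_case>.conf \
    /path/to/user_system.conf
```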
* Per #2779, assigned a number to all use cases that are not run in GHA and added a reason to describe why the use cases have been disabled
* adding location information
* updating location information
* Revert "adding location information" (reverts commit cc6d185)
* Revert "updating location information" (reverts commit 5920779)
* added location of input data that is not found with rest of input data for use case category
* added information on how to run generate_release_notes.py script and updated script to clean up some formatting issues
* Per #2789, added release notes for rc1 release
* update version for rc1 release
* fixed broken commands in use case scripts

Co-authored-by: lisagoodrich <[email protected]>
Describe the Task
Currently there are a few use cases that are not run in GitHub Actions for various reasons, e.g. they exceed the memory or disk space limits of the automated testing environment. Some of these are noted in the Contributor's Guide. As part of the release process, we should make sure to run these use cases outside of the GitHub Actions automated testing environment to ensure that they run as expected.
The Contributor's Guide describes the process for adding use cases that can't be run in GitHub Actions: do not assign a number to the use case in the internal/tests/use_cases/all_use_cases.txt file, and exclude it from the .github/parm/use_case_groups.json file that determines the groups of use cases to run in the automated tests. We have since added support for a "disabled" key in the use_case_groups.json file to always skip those use cases.
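For illustration, a group entry using the "disabled" key might look something like the following. Only the "disabled" key itself is confirmed by the description above; the category names, index values, and other fields are assumptions rather than excerpts from the real file:

```json
[
    {
        "category": "model_applications-medium_range",
        "index_list": "0-4",
        "run": false
    },
    {
        "category": "model_applications-marine_and_cryosphere",
        "index_list": "5",
        "disabled": true
    }
]
```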
There are a few use cases that are not run in the automated tests for which the reason is unclear because they are not listed in that section of the Contributor's Guide. This could happen for various external reasons (unrelated to GitHub Actions limitations), and we may eventually plan to get these cases working again.
Time Estimate
This depends on how much of this work we want to complete for this release and which parts should move to another issue for future work. At the very least, we should test the use cases that need to be tested!
Sub-Issues
Consider breaking the task down into sub-issues.
Relevant Deadlines
List relevant project deadlines here or state NONE.
Funding Source
Define the source of funding and account keys here or state NONE.
Define the Metadata
Assignee
Labels
Milestone and Projects
Define Related Issue(s)
Consider the impact to the other METplus components.
Task Checklist
See the METplus Workflow for details.
Branch name:
feature_<Issue Number>_<Description>
Pull request:
feature <Issue Number> <Description>
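For this issue, these might look like the following (the short description is hypothetical):
Branch name: feature_2779_test_disabled_use_cases
Pull request: feature 2779 test disabled use cases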
Select: Reviewer(s) and Development issue
Select: Milestone as the next official version
Select: METplus-Wrappers-X.Y.Z Development project for development toward the next official release