Here's some brainstorming on a possible build and test architecture based on how things have mostly been tested up to now.
Put off using the R package test structure (see the later item in this list for what to do with it), which is oriented toward unit tests; most of the VE "units" don't do anything in isolation. That said, we'll let users have a "tests" directory and do with it as they will to support their own module development (as I have done in the current "visioneval" and "VEModel" packages). Think below/later about having some unit tests run automatically during the R package build. This portion has been generalized in the forthcoming test-architecture pull request.
The remainder is for a future "run the models" style of testing, still to be implemented.
In VE-Components.yml, drop the test directory specification and instead (for modules) add a "TestWith" directive that specifies a model that will run that module (naming the package, model name, and variant); a sketch follows. That can be a model specially defined for the module's package, or (for standard module packages) some standard model whose ModelScript refers to that package. See below for some ideas about how to avoid redundancy.
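For concreteness, a TestWith entry might look something like this (the key names and layout are purely illustrative, not an implemented format):

```yaml
# Hypothetical VE-Components.yml fragment; key names are illustrative only
Components:
  VETravelPerformance:
    Type: module
    TestWith:
      Package: VEModel   # package whose inst/models supplies the test model
      Model: VERSPM      # model whose ModelScript exercises this module
      Variant: base      # which variant of that model to run
```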
Improve the current installModel configuration to allow "private" models, which are not listed as model variants when an end user requests the list (or perhaps are only shown when the user sets a flag). Also add an optional parameter that specifies the package a particular model variant should be drawn from (if the variant is not unique across all the packages that define it). That would allow each package to create a private VERPAT-test, VERSPM-test, or VESTATE-test.
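A minimal sketch of what the extended call might look like, assuming hypothetical `package` and `hidden` arguments were added to `installModel` (neither exists today):

```r
# Sketch only: "package" and "hidden" are proposed arguments, not
# currently part of VEModel::installModel
testModel <- installModel(
  "VERSPM",
  variant = "test",                  # a private test variant
  package = "VETravelPerformance",   # disambiguate where the variant lives
  hidden  = TRUE                     # don't show it in the end-user list
)
testModel$run()
```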
Enlarge the inst/model/model-index.cnf to allow model components to be drawn from other packages. If, for example, the multimodal module implements a VERSPM variant, we could add a "modifies" directive (naming package/model/variant) from which any locally-unspecified elements (defs, inputs, queries, scripts) will be copied, so we can make a variant by changing just one or two things from another variant.
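Assuming the index files stay YAML-formatted, a variant entry with the proposed directive might read (all key names hypothetical):

```yaml
# Hypothetical model-index.cnf fragment; "Modifies" is the proposed directive
Models:
  VERSPM:
    Variants:
      multimodal:
        Modifies: VEModel/VERSPM/base    # package/model/variant to copy from
        Scripts: run_model_multimodal.R  # only the changed pieces live here;
                                         # defs, inputs and queries are copied
```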
Create a "tests" target in the Makefile/ve.build that will run after all the modules have been built and that will assemble all the "TestWith" parameters into some sensible order and then run all of them (and eliminating duplicates since a single run of stock VERSPM, VERPAT and (smaller) VE-State can cover lots of modules - each module would still specify the model it has to run with, but many modules could specify the same model, and we would only run one instance of the model to test all of them at once).
Those models will be installed and run in a "test" folder (in the "built" folder, next to runtime, ve-lib, ve-pkgs, etc.) that is just like the end-user runtime but holds all the test results. We would want each TestWith run to be flagged as complete (or errored), both for test reports and so the "tests" target can be parsimonious in what it runs (only tests associated with changed modules).
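That would give a built tree something like the following (the test subfolder names are only a guess at the eventual layout):

```
built/
  runtime/            # end-user runtime staging area
  ve-lib/             # built and installed VE packages
  ve-pkgs/            # package sources/binaries
  test/               # test models installed and run here
    VERSPM/base/      # one folder per package/model/variant run
      results/        # model outputs kept for test reports
      test-status     # complete/errored flag checked by the tests target
```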
That would simplify the pull request evaluation: the workflow is to pull and incorporate the module changes, then do `ve.build(c("modules","tests"))`, which will rebuild just the changed module(s) and run only the "out of date" tests (where the module changed or valid test results were not present). If the tests complete without errors, the pull request can then be merged, rebased onto development-next, and pushed to VisionEval-dev.
In principle, we could enlarge "TestWith" to run other types of tests besides a sample VisionEval model (e.g. specific scripts in the package test directory), so some amount of unit testing for weird inputs might exist.
This might be an argument for creating a low-level framework function for developers to sling non-Datastore data at their module to diagnose various pathological conditions; that would be more traditional unit testing (see the sketch below). Rather than run such things after the module build, however, having such module wrapper tests might suggest re-activating the R package build test facility and doing those tests as the module builds. The difficulty with that is making sure we have a complete environment if other packages are Call'ed, but the module build process does leave all the successfully completed modules in ve-lib, so it should work as long as VE-Components.yml has the packages queued up in the right dependency order.
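A minimal sketch of such a harness, assuming the usual convention that a module's main function takes a list `L` of inputs and returns a list of results (the function name and error handling here are invented):

```r
# Hypothetical low-level harness: feed an in-memory list to a module's
# main function, bypassing the Datastore entirely
testModuleFunction <- function(moduleName, packageName, L) {
  modFunc <- get(moduleName, envir = asNamespace(packageName))
  tryCatch(
    modFunc(L),                       # returns the module's result list
    error = function(e) {
      message("Module errored (possibly as expected): ", conditionMessage(e))
      NULL
    }
  )
}

# e.g. construct L with deliberately pathological values and inspect
# what the module does with them:
# L <- list(Year = "2010", Global = list(...), Marea = list(...))
# result <- testModuleFunction("AssignTransitService", "VETransportSupply", L)
```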
Let me know what you think of that proposal. It should be pretty straightforward to implement and would let us knock off a couple of development issues (#60 and #16 in particular).