Note: Unit test coverage amount #71
Comments
I can see this being useful, and I'm happy that it's a NOTE rather than something stricter. This gets really hard when you have packages with … Also, everyone starts at 0% coverage, so I guess that people developing a new package could ignore this note. By the way, my package https://github.com/LieberInstitute/spatialLIBD has 0% coverage at the moment :P (in case those writing the code for this new NOTE want to use it for testing their code).
Could this be done with a direct call to … ?
I think there are two challenges with this issue. The first is that "slightly increase the run time" is an understatement: the entire package's test suite has to be run under covr. A second challenge is that the results of covr need to be effectively conveyed to the user, in part because of the difficulty the user has in emulating the build system (even in 'simple' ways, like using the correct version of R and Bioconductor).
I was curious about the overhead, so I compared timings:

```r
> system.time(devtools::test())
   user  system elapsed
 71.050   7.273  78.642
> system.time(covr::package_coverage())
   user  system elapsed
 97.117   9.077 106.933
> system.time(devtools::check())
   user  system elapsed
336.414  24.696 365.059
```

I don't know if there is some other, more minimal way to estimate test coverage. Perhaps a compromise would be to add a message suggesting …
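As a sketch of what such a check could compute, here is one way to turn covr's output into a NOTE. `covr::package_coverage()` and `covr::percent_coverage()` are real covr functions; the wrapper name, `pkg_dir` argument, and 80% threshold are illustrative assumptions, not anything decided in this thread:

```r
# Minimal sketch of a coverage NOTE, assuming the package source lives
# at `pkg_dir` and covr is installed. The threshold is hypothetical.
check_coverage_note <- function(pkg_dir, threshold = 80) {
  # This runs the full test suite under instrumentation -- the
  # expensive step discussed above.
  cov <- covr::package_coverage(pkg_dir)
  pct <- covr::percent_coverage(cov)
  if (pct < threshold) {
    message(sprintf(
      "NOTE: unit test coverage is %.1f%% (below %d%%)", pct, threshold
    ))
  }
  invisible(pct)
}
```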
Well, covr has to run the tests in order to measure coverage, so the overhead is to be expected.
Yeah, it's not surprising when you think about it, but for some reason I had it in my head that it magically worked without running the tests 🎩. I found this script, which uses regexes to find function calls in tests: https://gist.github.com/cannin/819e73426b4ebd5752d5. It's not as detailed as `covr`.
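A rough sketch of that regex idea (everything here is a hypothetical approximation, not the linked gist: it only checks whether each function defined in `R/` is mentioned somewhere under `tests/`, so it reports function-level reachability, not line coverage):

```r
# Crude, fast coverage estimate: what fraction of functions defined in
# R/ appear at least once in the test files? A sketch only -- it will
# over-count relative to real line coverage.
estimate_tested_functions <- function(pkg_dir) {
  read_all <- function(dir) {
    files <- list.files(dir, pattern = "\\.[Rr]$",
                        recursive = TRUE, full.names = TRUE)
    unlist(lapply(files, readLines, warn = FALSE))
  }
  r_lines <- read_all(file.path(pkg_dir, "R"))
  test_text <- paste(read_all(file.path(pkg_dir, "tests")), collapse = "\n")

  # Definitions of the form `name <- function(...)` or `name = function(...)`.
  def_pat <- "^\\s*([.A-Za-z][.A-Za-z0-9_]*)\\s*(<-|=)\\s*function"
  hits <- regmatches(r_lines, regexpr(def_pat, r_lines))
  fun_names <- unique(sub("\\s*(<-|=)\\s*function", "", trimws(hits)))

  # Escape dots in names, then look for `name(` in the tests.
  tested <- vapply(fun_names, function(f) {
    pat <- paste0("\\b", gsub(".", "\\.", f, fixed = TRUE), "\\s*\\(")
    grepl(pat, test_text)
  }, logical(1))
  mean(tested)  # fraction of defined functions referenced in tests
}
```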
I haven't given this much thought, but is there potential to turn off the 'standard' running of the tests and rely on the run through covr instead?
I think the problem with that might be that if a test fails, you just get an error from covr rather than the usual test report.
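One possible mitigation, as a sketch: wrap the covr call in `tryCatch()` so that a failing test surfaces as its own NOTE instead of an opaque error. The wrapper name and `pkg_dir` argument are illustrative assumptions:

```r
# Sketch: report a covr failure (e.g. a failing test) as a NOTE and
# return NA instead of aborting the whole check.
coverage_or_failure <- function(pkg_dir) {
  tryCatch(
    covr::percent_coverage(covr::package_coverage(pkg_dir)),
    error = function(e) {
      message("NOTE: could not measure coverage; covr errored: ",
              conditionMessage(e))
      NA_real_
    }
  )
}
```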
Figure out unit test coverage amount and note if code is not sufficiently tested?