Code test coverage should be measured and reported in CI #288
Comments
Are you proposing to measure coverage on a per-target basis? I am not sure it will be possible to accumulate coverage data for supported targets which run in separate CI jobs. Also, we cannot run tests in CI for all supported targets in the first place. As for measuring code coverage, tarpaulin is quite a convenient tool for that.
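For reference, a minimal tarpaulin CI step might look like the following. This is a sketch: the output format and directory are assumptions, and tarpaulin only instruments the host target, so it would cover the Linux x86_64 job only.

```shell
# Install tarpaulin (Linux x86_64 host only; it does not cross-compile coverage).
cargo install cargo-tarpaulin

# Run the test suite under instrumentation and emit a Cobertura XML report
# that coverage services can ingest.
cargo tarpaulin --out Xml --output-dir target/tarpaulin
```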
I think that ideally we would collect coverage metrics from a bunch of different runs on a bunch of different targets. Then we would have a way to "merge" all this coverage data. Given the limitations with some of our targets, certainly some files wouldn't be covered, but we would be able to see which lines of code are being hit by some of our tests.
In the ring CI we do collect code test coverage for multiple targets. We send it to codecov.io and then codecov.io merges it all for us automatically.
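A sketch of that workflow, per CI job (the uploader flag names are an assumption; codecov.io merges reports uploaded from separate jobs for the same commit automatically):

```shell
# In each per-target CI job: generate an LCOV report for that target's test run...
cargo llvm-cov --lcov --output-path lcov.info

# ...then upload it to codecov.io, tagged with the target triple so the
# merged report can still be broken down per target. ($TARGET and the
# uploader invocation are illustrative.)
codecov -f lcov.info -F "$TARGET"
```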
It might be a good idea to also incorporate branch coverage (in addition to line coverage) to make sure we are hitting alternative code paths: taiki-e/cargo-llvm-cov#8 |
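As far as I know, cargo-llvm-cov's branch coverage requires a nightly toolchain; a hedged sketch of enabling it:

```shell
# Branch coverage relies on nightly rustc's coverage instrumentation.
rustup toolchain install nightly

# --branch records which side of each branch was taken, in addition to
# line coverage, so untested alternative code paths show up in the report.
cargo +nightly llvm-cov --branch --lcov --output-path lcov.info
```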
There is a lot of runtime feature detection and other conditional logic in this crate. AFAICT, when tests are run, it is arbitrary which implementation gets picked. For example, on Linux, AFAICT only the `getrandom` syscall implementation is tested, and the file I/O fallback is not tested. Publishing the code test coverage report would make it clear which code isn't being tested on which platforms.

There is a lot of code that is copy-modify-pasted. This is understandable because some targets have slightly different APIs. My hope is that when code test coverage measurement is published, we'll see clearly which duplicated coding patterns we should factor out to increase coverage and minimize the amount of uncovered code for difficult-to-test (lacking test runners) platforms.
Also, I expect having code test coverage will facilitate more exhaustive testing, such as writing tests that exercise both the `getrandom` syscall branch and the file I/O fallback, e.g. by using ptrace or equivalent, similar to what BoringSSL does.
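One low-tech way to observe which path a test run actually exercises (a sketch only, and not the BoringSSL approach, which uses ptrace to intercept and fail syscalls so the fallback is forced; the test binary path here is an assumption):

```shell
# Build the test binary without running it.
cargo test --no-run

# Trace only the relevant syscalls while the tests run: seeing getrandom(2)
# vs. openat(2) on /dev/urandom shows which branch was exercised.
strace -f -e trace=getrandom,openat ./target/debug/deps/getrandom-* 2>&1 \
  | grep -E 'getrandom|urandom'
```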