Add CI step to check indexed before/after commits #213
Instead of building kmod without the PR applied, shouldn't we just use kmod from the distro and expect the output to be similar?
Although then it would be hard to differentiate failures from "acceptable changes/improvements". While we don't have such a test, I think we should go back and make sure we didn't break anything after v33.
Assuming it's not too noisy, we can have both.
My gut feeling is that the noise should be once a year or every couple of years. If it proves more common, then we can look for solutions. Generally I also considered the junit.xml that meson produces, nicely rendered in GitHub. Although since there is no native support, we'd require third-party actions which ask for quite a lot of permissions... One day, but not now.
Definitely on my todo list. Thanks o/
testlog.json would be more palatable I think, with quick reporting via a loop + jq or a python script.
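For reference, a minimal sketch of that quick report, assuming meson's testlog.json is newline-delimited JSON with at least "name" and "result" keys per test; the path under the build directory and the set of non-failing result values are my assumptions:

```python
#!/usr/bin/env python3
"""Quick meson test report, assuming testlog.json is newline-delimited
JSON where each line describes one test with "name" and "result" keys."""
import json
import sys
from collections import Counter

# Path assumed; adjust to wherever the CI build directory lives.
path = sys.argv[1] if len(sys.argv) > 1 else "build/meson-logs/testlog.json"

OK_RESULTS = {"OK", "SKIP", "EXPECTEDFAIL"}  # assumed non-failing outcomes
counts = Counter()

with open(path) as log:
    for line in log:
        if not line.strip():
            continue
        test = json.loads(line)
        counts[test["result"]] += 1
        if test["result"] not in OK_RESULTS:
            print(f"FAILED: {test['name']} ({test['result']})")

print(", ".join(f"{r}: {n}" for r, n in sorted(counts.items())))
sys.exit(0 if set(counts) <= OK_RESULTS else 1)
```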
While the regression is fixed, the testing plan (outlined here) is not enough to catch similar problems in the future. Namely: it seems that upstream kernel build infra has moved from having the symbols in …
Repeatedly pulling from archive.org won't be nice to them, and we cannot stash it as a GitHub artefact since retention for public repos is only 90 days...
I think (b) is the direction, and we could do this with a container for an older (or enterprise) distro, no? We don't need to execute the test on that old container. We can have a "donor" container based on e.g. centos 7, in which we install the kernel and then extract /lib/modules/. We can run this pipeline every month or so and save the result, instead of doing it as part of each run. From the test's point of view, I think it wouldn't change anything. It's just the pipeline preparing the environment to manually test on a different kernel.
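Roughly something like this for the donor step — a sketch only, assuming docker is available on the runner; the image, the package name and the destination path are placeholders:

```python
#!/usr/bin/env python3
"""Rough sketch of the "donor" pipeline: install a distro kernel in an
old-distro container and extract /lib/modules/ as a reusable artefact.
Image, package and paths are placeholders, not a tested CI job."""
import subprocess

IMAGE = "centos:7"      # placeholder donor image
DEST = "donor-modules"  # where the extracted tree lands on the runner

def sh(*cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, text=True,
                          capture_output=True).stdout.strip()

# Keep the container alive so we can exec into it, install the distro
# kernel, then copy its module tree out for the real test environment.
cid = sh("docker", "run", "-d", IMAGE, "sleep", "infinity")
try:
    sh("docker", "exec", cid, "yum", "install", "-y", "kernel")
    sh("docker", "cp", f"{cid}:/lib/modules", DEST)
finally:
    sh("docker", "rm", "-f", cid)

print(f"module tree extracted to {DEST}/")
```

Refreshing the saved artefact monthly, as suggested above, keeps the load on external mirrors low.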
I briefly considered grabbing an older distro version/container, although from working on another project I know availability is still a concern. In some cases the artefacts (packages) get moved, in others the package signatures/keys expire... Let me see if I can find the retention details for various container registries - GH, Dockerhub, etc. As another alternative, we could build an older kernel as part of the CI job, run once every few weeks/month as you suggested. If needed it can be a stripped-down version, so it completes within reasonable time(tm) - say 10-20 mins or so?
Our test coverage is by no means extensive. One quick and easy way to catch issues like #207 is to extend the CI with a step that indexes the modules before and after the commits under test and compares the results, as sketched below.
This will not catch all potential problems, but it will greatly reduce the blast radius.
It will flag intentional changes like #188, but since those are bound to be very rare, we can ignore the CI failure in such cases. We can reconsider if they become common though.
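To make the idea concrete, here is a hedged sketch of the comparison step. The staged tree layout, the kernel version and the path to the PR's depmod binary are all hypothetical; the point is just "run both depmods over identical module trees and fail on any index diff":

```python
#!/usr/bin/env python3
"""Sketch of the proposed CI step: run a reference depmod (distro, or
built from the base commit) and the PR's depmod over identical module
trees, then diff the generated indexes. All paths and the kernel
version are hypothetical placeholders."""
import difflib
import pathlib
import subprocess
import sys

KVER = "6.8.0"            # placeholder kernel version under test
REF_STAGE = "stage-ref"   # two copies of the same module tree, each
PR_STAGE = "stage-pr"     # staged under <dir>/lib/modules/<KVER>
INDEXES = ["modules.dep", "modules.alias", "modules.symbols",
           "modules.dep.bin"]

def run_depmod(depmod, stage):
    # depmod -b <basedir> <version> reads modules from, and writes its
    # indexes into, <basedir>/lib/modules/<version>
    subprocess.run([depmod, "-b", stage, KVER], check=True)

run_depmod("depmod", REF_STAGE)               # reference binary
run_depmod("./build/tools/depmod", PR_STAGE)  # placeholder PR build path

failed = False
for name in INDEXES:
    ref = pathlib.Path(REF_STAGE, "lib/modules", KVER, name)
    new = pathlib.Path(PR_STAGE, "lib/modules", KVER, name)
    if ref.read_bytes() == new.read_bytes():
        continue
    failed = True
    print(f"index differs: {name}")
    if not name.endswith(".bin"):             # text indexes get a diff
        diff = difflib.unified_diff(ref.read_text().splitlines(),
                                    new.read_text().splitlines(),
                                    str(ref), str(new), lineterm="")
        print("\n".join(list(diff)[:40]))

sys.exit(1 if failed else 0)
```

An intentional change like #188 would then show up as a readable diff that a maintainer can wave through.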