Add CI step to check indexed before/after commits #213

Open
evelikov opened this issue Oct 25, 2024 · 8 comments

@evelikov
Collaborator

Our test coverage is by no means extensive. One quick and easy way to catch issues like #207 is to extend the CI to:

  • install kernel matching the existing kernel headers
  • build the indexes with and without the PR changes
  • compare the two and error on variance

This will not catch all potential problems, but it will greatly reduce the blast radius.

It will flag intentional changes like #188. Since those are bound to be very rare, we can ignore the CI failure in such cases. We can reconsider if they become common, though.
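
A rough sketch of the comparison step, assuming the two depmod builds write into separate output trees (the baseline/patched directory names and the index list are placeholders, not an agreed layout):

```python
#!/usr/bin/env python3
"""Hypothetical CI helper: compare depmod text indexes from two runs.

Assumes both depmod invocations wrote into <dir>/lib/modules/<kver>/ and
that a line-by-line comparison of the plain-text indexes is sufficient.
"""
import sys
from pathlib import Path

# Text indexes depmod generates; the binary *.bin variants are skipped here.
INDEXES = ["modules.dep", "modules.alias", "modules.symbols", "modules.softdep"]

def load(path: Path) -> set[str]:
    return set(path.read_text().splitlines()) if path.exists() else set()

def main(baseline: str, patched: str, kver: str) -> int:
    rc = 0
    for name in INDEXES:
        old = load(Path(baseline, "lib/modules", kver, name))
        new = load(Path(patched, "lib/modules", kver, name))
        for line in sorted(old - new):
            print(f"{name}: only in baseline: {line}")
            rc = 1
        for line in sorted(new - old):
            print(f"{name}: only in patched: {line}")
            rc = 1
    return rc

if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:4]))
```

The CI job could run each depmod binary with `-b baseline` / `-b patched` against the installed kernel and then invoke something like `python3 compare-indexes.py baseline patched "$KVER"`, failing the job on a non-zero exit.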

@lucasdemarchi
Contributor

Instead of building kmod without the PR applied, shouldn't we just use kmod from the distro and expect to be similar?

@lucasdemarchi
Contributor

Although it would then be hard to differentiate failures from "acceptable changes/improvements". While we don't have such a test, I think we should go back and make sure we didn't break anything after v33.

@evelikov
Collaborator Author

Instead of building kmod without the PR applied, shouldn't we just use kmod from the distro and expect to be similar?

Assuming it's not too noisy, we can have both.

although then it would be hard to differentiate failures from "acceptable changes/improvements".

My gut feeling is that noise should be once a year or every couple of years. If it proves more common, then we can look for solutions.

Generally, I also considered the junit.xml that meson produces, nicely rendered on GitHub. However, since there is no native support, we would require third-party actions, which ask for quite a lot of permissions... One day, but not now.

While we don't have such a test, I think we should go back and make sure we didn't break anything after v33.

Definitely on my todo list. Thanks o/

@lucasdemarchi
Contributor

Generally, I also considered the junit.xml that meson produces, nicely rendered on GitHub. However, since there is no native support, we would require third-party actions, which ask for quite a lot of permissions... One day, but not now.

testlog.json would be more palatable, I think, with quick reporting via a loop + jq or a Python script.
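
For illustration, a minimal sketch of such a report, assuming meson's testlog.json is newline-delimited JSON with "name" and "result" fields (worth double-checking against the meson version in use):

```python
#!/usr/bin/env python3
"""Sketch: summarize meson test results from builddir/meson-logs/testlog.json.

Assumes one JSON object per line with at least "name" and "result" keys.
"""
import json
import sys
from collections import Counter
from pathlib import Path

def main(logfile: str) -> int:
    counts = Counter()
    failures = []
    for line in Path(logfile).read_text().splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        result = entry.get("result", "UNKNOWN")
        counts[result] += 1
        if result not in ("OK", "SKIP", "EXPECTEDFAIL"):
            failures.append(entry.get("name", "<unnamed>"))
    print(", ".join(f"{k}: {v}" for k, v in sorted(counts.items())))
    for name in failures:
        print(f"FAILED: {name}")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "builddir/meson-logs/testlog.json"))
```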

@evelikov
Collaborator Author

evelikov commented Oct 29, 2024

I think we should go back and make sure we didn't break anything after v33.

Comparing v33 and 4891b4b, across different kernels:

  • 6.11 - identical
  • 5.15 - identical
  • 3.10 - missing some alias symbols and crc-foo dependencies et al

Kicked off a bisection to track things down - see #214

@evelikov
Collaborator Author

While the regression is fixed, the testing plan (outlined here) is not enough to catch similar problems in the future.

Namely: it seems that the upstream kernel build infra has moved the symbols from .symtab and .strtab to __ksymtab_strings, so we would need to either:

  • a) find a way to force recent kernels to populate .symtab/.strtab, or
  • b) manually grab older kernels within the CI jobs

Repeatedly pulling from archive.org won't be nice to them, and we cannot stash it as a GitHub artefact since retention for public repos is only 90 days...
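
For reference, which of the two layouts a given module uses can be checked from its section headers. A small sketch, assuming readelf is available and the .ko path is passed on the command line:

```python
#!/usr/bin/env python3
"""Sketch: report whether a .ko carries .symtab/.strtab and/or __ksymtab_strings.

Relies on `readelf -S` output and only greps for the section names in it.
"""
import subprocess
import sys

def sections(path: str) -> str:
    return subprocess.run(["readelf", "-S", "-W", path],
                          capture_output=True, text=True, check=True).stdout

def main(module: str) -> int:
    out = sections(module)
    for name in (".symtab", ".strtab", "__ksymtab_strings"):
        present = "yes" if name in out else "no"
        print(f"{module}: {name}: {present}")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```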

@lucasdemarchi
Contributor

I think (b) is the direction, and we could do this with a container for an older (or enterprise) distro, no? We don't need to execute the test on that old container. We can have a "donor" container based on e.g. centos 7, in which we install the kernel and then extract /lib/modules/. We can run this pipeline and save the result every month or so instead of doing it as part of each run.

From the test point of view, I think it wouldn't change anything. It's just the pipeline preparing the environment to manually test on a different kernel.
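
A sketch of what the "donor" step could look like, driving the docker CLI from Python; the centos:7 image and kernel package name are just examples, and an EOL release's yum repos may need pointing at a vault/archive mirror first:

```python
#!/usr/bin/env python3
"""Sketch: extract /lib/modules from a 'donor' container with an older kernel.

Uses the docker CLI via subprocess; image and package names are examples only.
"""
import subprocess

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main() -> None:
    name = "kmod-donor"
    # Install the distro kernel inside a throwaway container...
    run("docker", "run", "--name", name, "centos:7",
        "yum", "install", "-y", "kernel")
    try:
        # ...then copy the populated module tree out for the test job to use.
        run("docker", "cp", f"{name}:/lib/modules", "./donor-modules")
    finally:
        run("docker", "rm", name)

if __name__ == "__main__":
    main()
```

The extracted donor-modules/ tree could then be cached (or republished monthly, as suggested) and reused by the regular test runs.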

@evelikov
Collaborator Author

I briefly considered grabbing an older distro version/container, although from working on another project I know availability is still a concern. In some cases the artefacts (packages) get moved, in others the package signatures/keys expire...

Let me see if I can find the retention details for various container registries - GH, Dockerhub, etc.

As another alternative, we could build an older kernel as part of the CI job, run once every few weeks/month as you suggested. If needed it can be a stripped-down version, so it completes within reasonable time(tm) - say 10-20 mins or so?
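
A rough sketch of such a build job; the repo URL, tag and config target are placeholders (a stripped-down config would be faster, and an old tree may also need an older toolchain to build at all):

```python
#!/usr/bin/env python3
"""Sketch: build an older kernel's modules for index testing.

URL, tag and config target are placeholders for whatever the CI settles on.
"""
import os
import subprocess

def run(*cmd, cwd=None):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True, cwd=cwd)

def main(tag="v3.10", destdir="older-kernel"):
    run("git", "clone", "--depth", "1", "--branch", tag,
        "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git",
        "linux-old")
    # defconfig as a placeholder; a trimmed config would cut the build time further.
    run("make", "defconfig", cwd="linux-old")
    run("make", f"-j{os.cpu_count()}", "modules", cwd="linux-old")
    run("make", "modules_install",
        f"INSTALL_MOD_PATH={os.path.abspath(destdir)}", cwd="linux-old")

if __name__ == "__main__":
    main()
```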
