fix(no_std): use core::slice in bitmap_store.rs #305
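Under `#![no_std]` the `std::slice` module is unavailable, but every item it re-exports lives in `core::slice`, so the import can be switched directly. A minimal sketch of this kind of change (illustrative only, not the actual diff in bitmap_store.rs):

```rust
// Before (breaks no_std builds):
// use std::slice::Iter;

// After (compiles with and without std, since std::slice::Iter
// is just a re-export of core::slice::Iter):
use core::slice::Iter;

/// Returns an iterator over at most the first two elements of a slice.
/// `Iter<'_, u16>` is the type returned by `<[T]>::iter()`.
pub fn first_two(values: &[u16]) -> Iter<'_, u16> {
    values[..values.len().min(2)].iter()
}
```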
Conversation
Hey @GZTimeWalker 👋 Could you please add a small test and CI check to ensure we don't break it again? Thank you and have a nice week 🏝️
Hey, let me try.
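A common way to guard against this kind of regression is a tiny `no_std` consumer crate that CI builds for a target without `std` (for example `cargo build -p no_std_test --target thumbv7em-none-eabi`). A rough sketch, where the crate name `no_std_test`, the function, and the exact feature setup are assumptions rather than this repository's actual layout:

```rust
// Hypothetical no_std_test/src/lib.rs: exists only so CI can try to build it
// for a bare-metal target and fail if roaring ever pulls in `std` again.
// (roaring still needs an allocator, just not the full standard library.)
#![no_std]

use roaring::RoaringBitmap;

/// Touch a few APIs so the crate actually links against roaring.
pub fn smoke_test() -> u64 {
    let mut bitmap = RoaringBitmap::new();
    bitmap.insert(1);
    bitmap.insert(1_000_000);
    bitmap.len()
}
```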
btw, you can upgrade the deps: |
Here are the things we found:
Force-pushed from f386328 to 1a5fb43
Co-authored-by: YanWQ-monad <[email protected]>
Yup, but I didn't find the time to check for an alternative. It is working great for now, so I prefer to keep it like that rather than invest time in it right now.
Could you please run a
Can we propose a CI file for this, please? Also, please modify the README to explain how to run the benchmarks CI (it will be available to the repo's owners only, obviously).
We have provided a CI file to solve the first two problems; please check it first. For the action-rs managed cargo command, my suggestion is not to use it, as it isn't necessary. As for the `-p` arg, maybe you can test it first and tell me the result here, since I don't have much time today.
For optional benchmarks, I suggest you check how to launch them via PR tags; I think it can be implemented by adding a … but I don't know how to do it. Maybe post a "help wanted" or search other people's action files.
Before talking about improving the benchmark CI, I think we should make it clear what the benchmark is used for. For now, I can't see anything except that the benchmark does run in CI. I mean, is it actually used when deciding whether to merge a PR, or will it send a notification if a commit causes a regression? And even then, the benchmark results are not compared with previous or subsequent commits, which may make them hard to interpret.

In my opinion, if we want to monitor improvements or regressions, it would be fine to run the benchmark every time (the CI runs at no cost, though it takes time). But we can't run the benchmark only once, as we do now, because a single run can't tell whether an improvement or regression has happened. So we need a baseline result for comparison; it may come from running the benchmark on the base commit (but that takes twice the time), or from pre-saved results (e.g. saved to the Actions cache?), which may require some changes to the CI process.

Anyway, talking about improving the benchmark CI seems a little off-topic for this PR; would it be better to open an issue for it? I'm just not willing to see the benchmark results wasted 😭.
I agree to discuss the use of the benchmark in another issue, and this PR has done its job. It now correctly checks for compilation issues in all feature combinations and runs all test tasks.
Any further progress on this? The most important thing now is that I need this library to compile properly under no_std.
Hey @GZTimeWalker 👋
Thank you very much for the changes. It's very cool to fix this kind of issue.
I agree to discuss the use of the benchmark in another issue, and this PR has done its job.
Would it be possible to open an issue about this and the ideas of @YanWQ-monad, please?
In the meantime, I plan to bump the roaring crate version and release it today.
Have a nice day 🐢
fix #304
One more thing: why did the CI not find this error?