refactor: simple code cleanups in mst implementation #269
Conversation
@ntampakas is it possible to let the workflows from forks run and use the AWS credentials? Or is there a better way to deal with external contributions?
```diff
@@ -50,7 +50,8 @@ pub fn parse_csv_to_entries<P: AsRef<Path>, const N_CURRENCIES: usize, const N_B
         balances_big_int.push(balance);
     }

-    let entry = Entry::new(username, balances_big_int.try_into().unwrap())?;
+    let entry = Entry::new(username, balances_big_int.try_into().unwrap());
```
I've reviewed the potential for errors during the `try_into` operation. It appears that converting `BigUint` in this line does not produce any errors. However, for the record, this should be checked again if the balance type in the `Entry` struct changes.
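To make that reasoning concrete, here is a minimal sketch (not code from this repository; it assumes the `num-bigint` crate and uses a hypothetical helper) showing that converting a `Vec<BigUint>` into a fixed-size array with `try_into` can only fail when the number of elements differs from the array length:

```rust
use num_bigint::BigUint;

// Hypothetical helper, for illustration only: Vec<T> -> [T; N] via TryFrom
// can only fail when the vector's length differs from N. As long as the CSV
// parser pushes exactly N_CURRENCIES balances per row, unwrapping is safe.
fn to_fixed_array<const N: usize>(balances: Vec<BigUint>) -> [BigUint; N] {
    balances
        .try_into()
        // On failure the original Vec is handed back, so its length can be reported.
        .unwrap_or_else(|v: Vec<BigUint>| panic!("expected {} balances, got {}", N, v.len()))
}

fn main() {
    let arr: [BigUint; 2] = to_fixed_array(vec![BigUint::from(1u8), BigUint::from(2u8)]);
    assert_eq!(arr.len(), 2);
}
```

If the balance type in `Entry` ever changes from `BigUint`, this conversion path (and hence the `unwrap`) would need to be revisited, as noted above.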
Hey Alex!
@ntampakas I'm referring to the Rust workflow for tests that runs on push; it also uses the self-hosted runner because of the past performance issues with the GitHub runner.
Correct! That also initiates a self-hosted runner.
I just talked with @sifnoc, who explained to me why you moved to a self-hosted runner. It seems the change was made in PR #199, and the time went from about 15 minutes to about 4. I checked the runs before that PR and they were indeed around 15 minutes, but unfortunately I can't see the details because GitHub doesn't keep that information for that long, so I don't know which part was slow. Looking at more recent runs, it seems like running the actual tests is the slow part, while installing the different components is fast.

So my suggestion: why not just split the tests into different jobs, which will run in parallel? Currently they are all run in one big job called "build", and I see 4 tasks being run.
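As a rough sketch of that suggestion (the job names, working directories, and actions below are illustrative assumptions, not taken from the repository's actual workflow), the single build job could be split into independent jobs that GitHub runs in parallel:

```yaml
# Illustrative sketch only: job names, working directories, and toolchain
# steps are assumptions and would need to match the real repository layout.
name: Rust tests (parallel)

on: [push, pull_request]

jobs:
  prover-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      # Run only the prover crate's tests in this job.
      - run: cargo test
        working-directory: zk_prover

  backend-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      # Run only the backend crate's tests in this job.
      - run: cargo test
        working-directory: backend
```

Each job then gets its own runner, so the overall wall-clock time is roughly that of the slowest job rather than the sum of all of them, and caching dependencies (e.g. with Swatinem/rust-cache) could shrink the setup steps further.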
I just pushed the change and created a new workflow. I guess we can keep both running in parallel for now just to check the difference, and then decide which one to keep. We could also cache dependencies to make things even faster.
How about switching between the self-hosted runner and the GitHub runner based on where the changes are coming from? E.g., if it's a fork, run on GitHub, but if it's in the Summa org, use the current setup. Here's the way to detect the repo name: https://github.com/orgs/community/discussions/25217#discussioncomment-3246901
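A minimal sketch of that idea, assuming the upstream repository is `summa-dev/summa-solvency` and the self-hosted runner uses the `self-hosted` label (both assumptions), could look like this:

```yaml
# Illustrative sketch: pushes to the upstream repository use the self-hosted
# (AWS) runner, while runs triggered from forks fall back to a GitHub-hosted
# runner. The repository name and runner label are assumptions.
jobs:
  test:
    runs-on: ${{ github.repository == 'summa-dev/summa-solvency' && 'self-hosted' || 'ubuntu-latest' }}
    steps:
      - uses: actions/checkout@v4
      - run: cargo test --workspace
```

A side note on this design: workflow runs triggered by pull requests from forks do not receive repository secrets such as AWS credentials anyway, so falling back to the hosted runner for forks also sidesteps the credentials question raised earlier.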
Thank you for the initiative to optimize the GitHub Actions flow.
We had a specific reason for running the tests sequentially: the interface file for the verifier contract in the backend, specifically `backend/src/contracts/generated/inclusion_verifier.rs`, is generated from the ABI file, which in turn is generated from the contract. This process is similar to what is done in the script `update_verifier_contract.sh` in the backend.
However, now that the circuits and contract have matured for V1, your suggestion to test in a parallelized manner seems very appropriate.
Also, following the approach Alex found, we can add a condition to the workflow file that runs the tests on the self-hosted runner.
Force-pushed from cfc4c16 to 598db04
Thank you for fixing this. Now the tests finish in about 6 minutes without using the AWS instance.
Let's figure out which testing workflow we're using before we merge this PR. If the parallel tests work well, should they be merged into the old workflow?
Force-pushed from 598db04 to 6f05b33
Force-pushed from 6f05b33 to 4799f26
@teddav Do I get it right that the tests would now either run on both AWS and GitHub when pushed from the Summa org, or only on GitHub when pushed from a fork? In the case of the Summa org, I was expecting the tests to run only on AWS, not on GitHub.
@alxkzmn yes, you're right. It's useless to run on both. Although by now the AWS worker seems a bit overkill even for Summa's team; it looks like everything could run fine just on GitHub.
This PR has been thoroughly discussed and is ready for merging. I appreciate the effort put into this PR, @teddav. Based on the discussions, it seems that the next step for this PR would be to remove the requirement of an AWS instance in the workflow.
Just cleaned a few things while reviewing the MST. All tests are passing, but some changes (e.g., removing some conditions) might require adding more tests. Let me know if you think that's the case.